Viscosity methods giving uniqueness for martingale problems
Electronic Journal of Probability
Electron. J. Probab. (2015), no. 67, 1–27.
Cristina Costantini†   Thomas G. Kurtz‡

Abstract
Let $E$ be a complete, separable metric space and $A$ be an operator on $C_b(E)$. We give an abstract definition of viscosity sub/supersolution of the resolvent equation $\lambda u - Au = h$ and show that, if the comparison principle holds, then the martingale problem for $A$ has a unique solution. Our proofs also work under two alternative definitions of viscosity sub/supersolution which might be useful, in particular, in infinite-dimensional spaces, for instance to study measure-valued processes.

We prove the analogous result for stochastic processes that must satisfy boundary conditions, modeled as solutions of constrained martingale problems. In the case of reflecting diffusions in $D \subset \mathbb R^d$, our assumptions allow $D$ to be nonsmooth and the direction of reflection to be degenerate.

Two examples are presented: a diffusion with degenerate oblique direction of reflection and a class of jump diffusion processes with infinite-variation jump component and possibly degenerate diffusion matrix.

Keywords: martingale problem; uniqueness; metric space; viscosity solution; boundary conditions; constrained martingale problem.
AMS MSC 2010: Primary 60J25; 60J35. Secondary 60G46; 47D07.
Submitted to EJP on June 25, 2014, final version accepted on May 31, 2015.
Supersedes arXiv:1406.6650v1.

* Research supported in part by NSF grant DMS 11-06424.
† Dipartimento di Economia, Università di Chieti-Pescara, Italy. E-mail:
‡ Departments of Mathematics and Statistics, University of Wisconsin–Madison, USA. E-mail:

1 Introduction

There are many ways of specifying Markov processes, the most popular being as solutions of stochastic equations, as solutions of martingale problems, or in terms of solutions of the Kolmogorov forward equation (the Fokker–Planck equation or the master equation, depending on context). The solution of a stochastic equation explicitly gives a process, while a solution of a martingale problem gives the distribution of a process, and a solution of a forward equation gives the one-dimensional distributions of a process. Typically, these approaches are equivalent (assuming that there is a stochastic equation formulation) in the sense that existence of a solution specified by one method implies existence of corresponding solutions to the other two (weak existence for the stochastic equation), and hence uniqueness for one method implies uniqueness for the other two (distributional uniqueness for the stochastic equation).

One approach to proving uniqueness for a forward equation, and hence for the corresponding martingale problem, is to verify a condition on the generator similar to the range condition of the Hille–Yosida theorem. (See Corollary 2.14.) We show that the original generator $A$ of our martingale problem (or a restriction of the original generator $A$, in the case of martingale problems with boundary conditions) can be extended to a generator $\widehat A$ such that every solution of the martingale problem for $A$ is a solution for $\widehat A$ and $\widehat A$ satisfies the range condition of Corollary 2.14.
Our extension is constructed by including viscosity solutions of the resolvent equation
$$\lambda u - Au = h. \tag{1.1}$$
Viscosity solutions have been used to study value functions in stochastic optimal control and optimal stopping theory since the very beginning (see the classical references [6], [24], as well as [12]). It may be interesting to note that, in the context of Hamilton–Jacobi equations, the idea of studying a parabolic equation by solving a resolvent equation in the viscosity sense appears already in [7], Section VI.3, where it is applied to a model problem. The methodology is also important for related problems in finance (for example [26], [3], [18], [1], [5] and many others). Viscosity solutions have also been used to study the partial differential equations associated with forward–backward stochastic differential equations ([23], [9]) and in the theory of large deviations ([11]).

The basic data for our work is an operator $A \subset C_b(E) \times C_b(E)$ on a complete, separable metric space $E$. We offer an abstract definition of viscosity sub/supersolution for (1.1) (which for integro-differential operators in $\mathbb R^d$ is equivalent to the usual one) and prove, under very general conditions, that the martingale problem for $A$ has a unique solution if the comparison principle for (1.1) holds.

We believe the interest of this result is twofold: on the one hand, it clarifies the general connection between viscosity solutions and martingale problems; on the other, there are still many martingale problems, for instance in infinite dimensions, for which uniqueness is an open question.

We also discuss two alternative abstract definitions of viscosity sub/supersolution that might be especially useful in infinite-dimensional spaces. All our proofs work under these alternative definitions as well. The first alternative definition is a modification of a definition suggested to us by Nizar Touzi and used in [9].
Being a stronger definition (it allows for more test functions), it should be easier to prove comparison results under this definition. The second alternative definition appears in [11] and is a stronger definition too. Under this definition, a sort of converse of our main result holds: namely, if $h$ belongs to the closure of $\mathcal R(\lambda - A)$ (under uniform convergence), then the comparison principle for semisolutions of (1.1) holds (Theorem 4.8). When $E$ is compact, this definition is equivalent to our main definition, hence the comparison principle holds for semisolutions in that sense as well.

Next we consider stochastic processes that must satisfy some boundary conditions, for example reflecting diffusions. Boundary conditions are expressed in terms of an operator $B$ which enters into the formulation of a constrained martingale problem (see [19]). We restrict our attention to models in which the boundary term in the constrained martingale problem is expressed as an integral against a local time. Then it still holds that uniqueness of the solution of the constrained martingale problem follows from the comparison principle between viscosity sub- and supersolutions of (1.1) with the appropriate boundary conditions. Notice that, as for the standard martingale problem, uniqueness for the constrained martingale problem implies that the solution is Markovian (see [19], Proposition 2.6).

In the presence of boundary conditions, even for $\mathbb R^d$-valued diffusions, there are examples for which uniqueness of the martingale problem is not known. Processes in domains with boundaries that are only piecewise smooth, or with boundary operators that are second order, or with directions of reflection that are tangential on some part of the boundary, continue to be a challenge.
In this last case, as an example of an application of our results, we use the comparison principle proved in [25] to obtain uniqueness.

The strategy of our proofs was initially inspired by the proof of Krylov's selection theorem for martingale problems that appears in [10] and originally appeared in unpublished work of [14]. In that proof the generator is recursively extended in such a way that there are always solutions of the martingale problem for the extended generator, but eventually only one. If uniqueness fails for the original martingale problem, there is more than one way to do the extension. Conversely, if at each stage of the recursion there is only one way to do the extension, and all solutions of the martingale problem for the original generator remain solutions for the extended generator, then uniqueness must hold for the original generator.

Analogously, assuming that the comparison principle for (1.1) (or for (1.1) with the appropriate boundary conditions) holds for a large enough class of functions $h$, we construct an extension $\widehat A$ of the original operator $A$ (of a restriction of the original operator $A$, in the case of constrained martingale problems) such that all solutions of the martingale problem (the constrained martingale problem) for $A$ are solutions of the martingale problem for $\widehat A$, and such that uniqueness holds for $\widehat A$. Actually, in the case of ordinary martingale problems the extension, although possible, is not needed, because the comparison principle for (1.1) directly yields a condition ((2.9)) that, if valid for a large enough class of functions $h$, ensures uniqueness of the one-dimensional distributions of solutions to the martingale problem, and hence uniqueness of the solution. The extension is needed, instead, for constrained martingale problems.

A few works on viscosity solutions of partial differential equations and weak solutions of stochastic differential equations have appeared in recent years.
For diffusions in $\mathbb R^d$, [2], assuming that a comparison principle holds, show that the backward equation has a unique viscosity solution, and it follows that the corresponding stochastic differential equation has a unique weak solution. For Markovian forward–backward stochastic differential equations, [23] also derive uniqueness of the weak solution from a comparison principle for the corresponding partial differential equation. In the non-Markovian case, the associated partial differential equation becomes path dependent. [9] propose the notion of viscosity solution of (semilinear) path-dependent partial differential equations on the space of continuous paths, already mentioned above, and prove a comparison principle.

The rest of this paper is organized as follows: Section 2 contains some background material on martingale problems and on viscosity solutions; Section 3 deals with martingale problems; the alternative definitions of viscosity solution are discussed in Section 4; Section 5 deals with martingale problems with boundary conditions; finally, in Section 6, we present two examples, including the application to diffusions with degenerate direction of reflection.
2 Background
Throughout this paper we will assume that $(E,r)$ is a complete, separable metric space, $D_E[0,\infty)$ is the space of cadlag, $E$-valued functions endowed with the Skorohod topology, $\mathcal B(D_E[0,\infty))$ is the $\sigma$-algebra of Borel sets of $D_E[0,\infty)$, $C_b(E)$ denotes the space of bounded, continuous functions on $(E,r)$, $B(E)$ denotes the space of bounded, measurable functions on $(E,r)$, and $\mathcal P(E)$ denotes the space of probability measures on $(E,r)$. $\|\cdot\|$ will denote the supremum norm on $C_b(E)$ or $B(E)$.

Definition 2.1.
A measurable stochastic process $X$ defined on a probability space $(\Omega,\mathcal F,P)$ is a solution of the martingale problem for $A \subset B(E)\times B(E)$ provided there exists a filtration $\{\mathcal F_t\}$ such that $X$ and $\int_0^{\cdot} g(X(s))\,ds$ are $\{\mathcal F_t\}$-adapted for every $g\in B(E)$, and
$$M_f(t) = f(X(t)) - f(X(0)) - \int_0^t g(X(s))\,ds \tag{2.1}$$
is an $\{\mathcal F_t\}$-martingale for each $(f,g)\in A$. If $X(0)$ has distribution $\mu$, we say $X$ is a solution of the martingale problem for $(A,\mu)$.

Remark 2.2.
Because linear combinations of martingales are martingales, without loss of generality we can, but need not, assume that $A$ is linear and that $(1,0)\in A$. We do not, and cannot for our purposes, require $A$ to be single valued. In particular, the operator $\widehat A$ defined in the proof of Theorem 5.11 will typically not be single valued.

In the next sections we will restrict our attention to processes $X$ with sample paths in $D_E[0,\infty)$.

Definition 2.3.
A stochastic process $X$, defined on a probability space $(\Omega,\mathcal F,P)$, with sample paths in $D_E[0,\infty)$ is a solution of the martingale problem for $A\subset B(E)\times B(E)$ in $D_E[0,\infty)$ provided there exists a filtration $\{\mathcal F_t\}$ such that $X$ is $\{\mathcal F_t\}$-adapted and
$$M_f(t) = f(X(t)) - f(X(0)) - \int_0^t g(X(s))\,ds \tag{2.2}$$
is an $\{\mathcal F_t\}$-martingale for each $(f,g)\in A$. If $X(0)$ has distribution $\mu$, we say $X$ is a solution of the martingale problem for $(A,\mu)$ in $D_E[0,\infty)$.

Remark 2.4.
Since $X$ has sample paths in $D_E[0,\infty)$, the fact that it is $\{\mathcal F_t\}$-adapted implies that $\int_0^{\cdot} g(X(s))\,ds$ is $\{\mathcal F_t\}$-adapted for every $g\in B(E)$.

Remark 2.5.
The requirement that $X$ have sample paths in $D_E[0,\infty)$ is usually fulfilled provided the state space $E$ is selected appropriately. Moreover, if $A\subset C_b(E)\times C_b(E)$, $\mathcal D(A)$ is dense in $C_b(E)$ in the topology of uniform convergence on compact sets, and for each compact $K\subset E$, $\varepsilon>0$, and $T>0$ there exists a compact $K'\subset E$ such that
$$P\{X(t)\in K',\ t\le T,\ X(0)\in K\} \ge (1-\varepsilon)\,P\{X(0)\in K\}$$
for every progressive process such that (2.1) is a martingale, then every such process has a modification with sample paths in $D_E[0,\infty)$. (See Theorem 4.3.6 of [10].)

Remark 2.6.
The martingale property is equivalent to the requirement that
$$E\Big[\Big(f(X(t+r)) - f(X(t)) - \int_t^{t+r} g(X(s))\,ds\Big)\prod_i h_i(X(t_i))\Big] = 0$$
for all choices of $(f,g)\in A$, $h_i\in B(E)$, $t,r\ge 0$, and $0\le t_i\le t$. Consequently, the property of being a solution of a martingale problem is a property of the finite-dimensional distributions of $X$.

In particular, for the martingale problem in $D_E[0,\infty)$, the property of being a solution is a property of the distribution of $X$ on $D_E[0,\infty)$. Much of what follows in the next sections will be formulated in terms of the collection $\Pi\subset\mathcal P(D_E[0,\infty))$ of distributions of solutions of the martingale problem. For some purposes, it will be convenient to assume that $X$ is the canonical process defined on $(\Omega,\mathcal F,P) = (D_E[0,\infty),\mathcal B(D_E[0,\infty)),P)$, for some $P\in\Pi$.

In view of Remark 2.6 it is clear that uniqueness of the solution to the martingale problem for an operator $A$ is to be meant as uniqueness of the finite-dimensional distributions.

Definition 2.7.
We say that uniqueness holds for the martingale problem for $A$ if, for every $\mu$, any two solutions of the martingale problem for $(A,\mu)$ have the same finite-dimensional distributions. If we restrict our attention to solutions in $D_E[0,\infty)$, then uniqueness holds if any two solutions for $(A,\mu)$ have the same distribution on $D_E[0,\infty)$.

One of the most important consequences of the martingale approach to Markov processes is that uniqueness of one-dimensional distributions implies uniqueness of finite-dimensional distributions.
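On a finite state space the defining martingale property can be checked directly by simulation. The following sketch (a hypothetical two-state chain; the rates, the function $f$, and all names are our own illustrative choices, not taken from the paper) estimates $E[M_f(T)]$ with $g = Qf$ for the generator matrix $Q$; for a genuine solution of the martingale problem this expectation vanishes for every $T$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain: rate a for 0 -> 1, rate b for 1 -> 0.
a, b = 1.5, 0.7
Q = np.array([[-a, a], [b, -b]])  # generator matrix
f = np.array([1.0, 3.0])          # a function on the state space {0, 1}
Qf = Q @ f                        # plays the role of g = Af

def sample_M_f(T, x0=0):
    """Simulate X on [0, T] starting at x0 and return
    M_f(T) = f(X(T)) - f(X(0)) - int_0^T Qf(X(s)) ds."""
    t, x, integral = 0.0, x0, 0.0
    while True:
        rate = a if x == 0 else b
        dt = min(rng.exponential(1.0 / rate), T - t)
        integral += Qf[x] * dt    # accumulate the integral of Qf(X(s))
        t += dt
        if t >= T:
            return f[x] - f[x0] - integral
        x = 1 - x                 # jump to the other state

samples = np.array([sample_M_f(T=2.0) for _ in range(20000)])
print(np.mean(samples))           # should be close to 0
```

Since $M_f$ is a mean-zero martingale, the Monte Carlo average differs from zero only by statistical error, of the order of the standard error of the 20000 samples.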
Theorem 2.8.
Suppose that for each $\mu\in\mathcal P(E)$, any two solutions of the martingale problem for $(A,\mu)$ have the same one-dimensional distributions. Then any two solutions have the same finite-dimensional distributions. If any two solutions of the martingale problem for $(A,\mu)$ in $D_E[0,\infty)$ have the same one-dimensional distributions, then they have the same distribution on $D_E[0,\infty)$.

Proof. This is a classical result. See for instance Theorem 4.4.2 and Corollary 4.4.3 of [10].

For $\mu\in\mathcal P(E)$ and $f\in B(E)$ we will use the notation
$$\mu f = \int_E f(x)\,\mu(dx). \tag{2.3}$$

Lemma 2.9.
Let $X$ be an $\{\mathcal F_t\}$-adapted stochastic process with sample paths in $D_E[0,\infty)$, with initial distribution $\mu$, let $f,g\in B(E)$ and $\lambda>0$. Then (2.2) is an $\{\mathcal F_t\}$-martingale if and only if
$$M^{\lambda}_f(t) = e^{-\lambda t} f(X(t)) - f(X(0)) + \int_0^t e^{-\lambda s}\big(\lambda f(X(s)) - g(X(s))\big)\,ds \tag{2.4}$$
is an $\{\mathcal F_t\}$-martingale. In particular, if (2.2) is an $\{\mathcal F_t\}$-martingale,
$$\mu f = E\Big[\int_0^{\infty} e^{-\lambda s}\big(\lambda f(X(s)) - g(X(s))\big)\,ds\Big]. \tag{2.5}$$

Proof.
The general statement is a special case of Lemma 4.3.2 of [10]. If $f$ is continuous, as will typically be the case in the next sections, $M_f$ will be cadlag and we can apply Itô's formula to obtain
$$e^{-\lambda t} f(X(t)) - f(X(0)) = \int_0^t \big(-\lambda e^{-\lambda s} f(X(s)) + e^{-\lambda s} g(X(s))\big)\,ds + \int_0^t e^{-\lambda s}\,dM_f(s),$$
where the last term on the right is an $\{\mathcal F_t\}$-martingale. (Note that, since all the processes involved are cadlag, we do not need to require the filtration $\{\mathcal F_t\}$ to satisfy the 'usual conditions'.) Conversely, if (2.4) is an $\{\mathcal F_t\}$-martingale, the assertion follows by applying Itô's formula to $f(X(t)) = e^{\lambda t}\big(e^{-\lambda t} f(X(t))\big)$.

In particular, if (2.2) is an $\{\mathcal F_t\}$-martingale,
$$E[f(X(0))] - E[e^{-\lambda t} f(X(t))] = E\Big[\int_0^t e^{-\lambda s}\big(\lambda f(X(s)) - g(X(s))\big)\,ds\Big],$$
and the second statement follows by letting $t\to\infty$.

Lemma 2.10.
Let $X$ be a solution of the martingale problem for $A\subset C_b(E)\times C_b(E)$ in $D_E[0,\infty)$ with respect to a filtration $\{\mathcal F_t\}$. Let $\tau\ge 0$ be a finite $\{\mathcal F_t\}$-stopping time and $H\ge 0$ be an $\mathcal F_\tau$-measurable random variable such that $0 < E[H] < \infty$. Define $P^{\tau,H}$ by
$$P^{\tau,H}(C) = \frac{E[H\,\mathbf 1_C(X(\tau+\cdot))]}{E[H]}, \qquad C\in\mathcal B(D_E[0,\infty)). \tag{2.6}$$
Then $P^{\tau,H}$ is the distribution of a solution of the martingale problem for $A$ in $D_E[0,\infty)$.

Proof. Let $(\Omega,\mathcal F,P)$ be the probability space on which $X$ is defined, and define $P^H$ on $(\Omega,\mathcal F)$ by
$$P^H(C) = \frac{E^P[H\,\mathbf 1_C]}{E^P[H]}, \qquad C\in\mathcal F.$$
Define $X^\tau$ by $X^\tau(t) = X(\tau+t)$. $X^\tau$ is adapted to the filtration $\{\mathcal F_{\tau+t}\}$ and, for $0\le t_1<\dots<t_n<t_{n+1}$ and $f_1,\dots,f_n\in B(E)$,
$$E^{P^H}\Big[\Big\{f(X^\tau(t_{n+1})) - f(X^\tau(t_n)) - \int_{t_n}^{t_{n+1}} Af(X^\tau(s))\,ds\Big\}\prod_{i=1}^n f_i(X^\tau(t_i))\Big]$$
$$= \frac{1}{E^P[H]}\,E^P\Big[H\Big\{f(X(\tau+t_{n+1})) - f(X(\tau+t_n)) - \int_{\tau+t_n}^{\tau+t_{n+1}} Af(X(s))\,ds\Big\}\prod_{i=1}^n f_i(X(\tau+t_i))\Big] = 0$$
by the optional sampling theorem. Therefore, under $P^H$, $X^\tau$ is a solution of the martingale problem. $P^{\tau,H}$, given by (2.6), is the distribution of $X^\tau$ on $D_E[0,\infty)$.

Lemma 2.11.
Let $\lambda>0$. Suppose $u,h\in B(E)$ satisfy
$$\mu u = E\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big] \tag{2.7}$$
for every solution of the martingale problem for $A$ in $D_E[0,\infty)$ with initial distribution $\mu$ and for every $\mu\in\mathcal P(E)$. Then
$$u(X(t)) - \int_0^t \big(\lambda u(X(s)) - h(X(s))\big)\,ds \tag{2.8}$$
is an $\{\mathcal F^X_t\}$-martingale for every solution $X$ of the martingale problem for $A$ in $D_E[0,\infty)$.

Proof. The lemma is Lemma 4.5.18 of [10], but we repeat the proof here for the convenience of the reader.
Let $X$ be a solution of the martingale problem for $A$ on a probability space $(\Omega,\mathcal F,P)$. For $t\ge 0$ and $B\in\mathcal F_t$ with $P(B)>0$, define $Q$ on $(\Omega,\mathcal F)$ by
$$Q(C) = \frac{E^P[\mathbf 1_B\,\mathbf 1_C]}{P(B)}, \qquad C\in\mathcal F.$$
Then
$$E^P\Big[\mathbf 1_B\, e^{\lambda t}\int_t^{\infty} e^{-\lambda s} h(X(s))\,ds\Big] = E^P\Big[\mathbf 1_B\int_0^{\infty} e^{-\lambda s} h(X(t+s))\,ds\Big] = P(B)\,E^Q\Big[\int_0^{\infty} e^{-\lambda s} h(X(t+s))\,ds\Big].$$
By the same arguments as in the proof of Lemma 2.10, $X(t+\cdot)$ is a solution of the martingale problem for $A$ on the probability space $(\Omega,\mathcal F,Q)$ with respect to the filtration $\{\mathcal F_{t+\cdot}\}$, with initial distribution $\nu(C) = Q(X(t)\in C) = P(X(t)\in C\,|\,B)$. Hence, by the assumptions of the lemma,
$$E^P\Big[\mathbf 1_B\, e^{\lambda t}\int_t^{\infty} e^{-\lambda s} h(X(s))\,ds\Big] = P(B)\,\nu u = E^P[\mathbf 1_B\, u(X(t))].$$
Therefore
$$E\Big[\int_0^{\infty} e^{-\lambda s} h(X(s))\,ds \,\Big|\, \mathcal F_t\Big] = e^{-\lambda t} u(X(t)) + \int_0^t e^{-\lambda s} h(X(s))\,ds,$$
and the assertion follows from Lemma 2.9 with $f = u$ and $g = \lambda u - h$.

Our approach to proving uniqueness of the solution of the martingale problem for $(A,\mu)$ relies on the following theorem.

Definition 2.12.
A class of functions $M\subset C_b(E)$ is separating if, for $\mu,\nu\in\mathcal P(E)$, $\mu f = \nu f$ for all $f\in M$ implies $\mu=\nu$.

Theorem 2.13.
For each $\lambda>0$, suppose that
$$E\Big[\int_0^{\infty} e^{-\lambda s} h(X(s))\,ds\Big] = E\Big[\int_0^{\infty} e^{-\lambda s} h(Y(s))\,ds\Big] \tag{2.9}$$
for any two solutions $X$, $Y$ of the martingale problem for $A$ in $D_E[0,\infty)$ with the same initial distribution and for all $h$ in a separating class of functions $M_\lambda$. Then any two solutions of the martingale problem for $A$ in $D_E[0,\infty)$ with the same initial distribution have the same distribution on $D_E[0,\infty)$.

Proof. The proof is actually implicit in the proof of Corollary 4.4.4 of [10], but we give it here for clarity. For any two solutions of the martingale problem for $A$ in $D_E[0,\infty)$ with the same initial distribution, we have, for all $h\in M_\lambda$,
$$\int_0^{\infty} e^{-\lambda s} E[h(X(s))]\,ds = \int_0^{\infty} e^{-\lambda s} E[h(Y(s))]\,ds. \tag{2.10}$$
Consider the measures on $(E,\mathcal B(E))$ defined by
$$m_X(C) = \int_0^{\infty} e^{-\lambda s} E[\mathbf 1_C(X(s))]\,ds, \qquad m_Y(C) = \int_0^{\infty} e^{-\lambda s} E[\mathbf 1_C(Y(s))]\,ds.$$
Then (2.10) reads $m_X h = m_Y h$, and, as $M_\lambda$ is separating, (2.10) holds for all $h\in B(E)$. Since the identity holds for each $\lambda>0$, by uniqueness of the Laplace transform, $E[h(X(s))] = E[h(Y(s))]$ for almost every $s\ge 0$, and, by the right continuity of $X$ and $Y$, for all $s\ge 0$. Consequently, $X$ and $Y$ have the same one-dimensional distributions, and hence, by Theorem 2.8, the same finite-dimensional distributions and the same distribution on $D_E[0,\infty)$.

Corollary 2.14.
Suppose that for each $\lambda>0$, $\mathcal R(\lambda-A)$ is separating. Then for each initial distribution $\mu\in\mathcal P(E)$, any two cadlag solutions of the martingale problem for $(A,\mu)$ have the same distribution on $D_E[0,\infty)$.

Proof. The assertion follows immediately from Lemma 2.11 and Theorem 2.13.

Martingale problems and dissipative operators are closely related.
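On a finite state space, where $A$ is a generator matrix $Q$ and the resolvent equation $\lambda u - Qu = h$ is a linear system, both the range condition of Corollary 2.14 and the dissipativity inequality defined next can be verified directly. The following sketch does so numerically; the matrix $Q$, the value of $\lambda$, and $h$ are hypothetical choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generator of a 4-state Markov chain: nonnegative
# off-diagonal entries, rows summing to zero.
n = 4
Q = rng.uniform(0.0, 1.0, size=(n, n))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

lam = 0.8

# Range condition: u = (lam*I - Q)^{-1} h solves lam*u - Qu = h exactly,
# so in this finite-dimensional case every h lies in the range of (lam - Q).
h = rng.normal(size=n)
u = np.linalg.solve(lam * np.eye(n) - Q, h)
assert np.allclose(lam * u - Q @ u, h)

# Dissipativity: ||lam*f - Qf||_inf >= lam * ||f||_inf for every f.
for _ in range(1000):
    f = rng.normal(size=n)
    assert (np.linalg.norm(lam * f - Q @ f, np.inf)
            >= lam * np.linalg.norm(f, np.inf) - 1e-12)
print("range condition and dissipativity hold for this Q")
```

The dissipativity check reflects the maximum principle: at a state where $|f|$ is largest, $Qf$ pushes toward the mean, so $\lambda f - Qf$ cannot be smaller than $\lambda f$ there.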
Definition 2.15.
A linear operator $A\subset B(E)\times B(E)$ is dissipative provided
$$\|\lambda f - g\| \ge \lambda\|f\|$$
for each $(f,g)\in A$ and each $\lambda>0$.

Lemma 2.16.
Suppose that for each $x\in E$ there exists a solution of the martingale problem for $(A,\delta_x)$. Then $A$ is dissipative.

Proof. By Lemma 2.9, for each $x\in E$,
$$|f(x)| \le E\Big[\int_0^{\infty} e^{-\lambda s}\,\big|\lambda f(X(s)) - g(X(s))\big|\,ds\Big] \le \frac{1}{\lambda}\,\|\lambda f - g\|.$$

Let $A\subset C_b(E)\times C_b(E)$. Corollary 2.14 implies that if, for each $\lambda>0$, the equation
$$\lambda u - g = h \tag{2.11}$$
has a solution $(u,g)\in A$ for every $h$ in a class of functions $M_\lambda\subseteq C_b(E)$ that is separating, then for each initial distribution $\mu$ the martingale problem for $(A,\mu)$ has at most one solution. Unfortunately, in many situations it is hard to verify that (2.11) has a solution in $A$. Thus one is led to consider a weaker notion of solution, namely the notion of viscosity solution.

Definition 2.17 (viscosity (semi)solution).
Let $A$ be as above, $\lambda>0$, and $h\in C_b(E)$.

a) $u\in B(E)$ is a viscosity subsolution of (2.11) if and only if $u$ is upper semicontinuous and, whenever $(f,g)\in A$ and $x_0\in E$ satisfy
$$\sup_x (u-f)(x) = (u-f)(x_0), \tag{2.12}$$
then
$$\lambda u(x_0) - g(x_0) \le h(x_0). \tag{2.13}$$

b) $u\in B(E)$ is a viscosity supersolution of (2.11) if and only if $u$ is lower semicontinuous and, whenever $(f,g)\in A$ and $x_0\in E$ satisfy
$$\inf_x (u-f)(x) = (u-f)(x_0), \tag{2.14}$$
then
$$\lambda u(x_0) - g(x_0) \ge h(x_0). \tag{2.15}$$

A function $u\in C_b(E)$ is a viscosity solution of (2.11) if it is both a subsolution and a supersolution.

In the theory of viscosity solutions, existence of a viscosity solution usually follows from existence of a viscosity subsolution and a viscosity supersolution, together with the following comparison principle.

Definition 2.18.
The comparison principle holds for (2.11) when every subsolution is pointwise less than or equal to every supersolution.

Remark 2.19.
To better motivate the notion of viscosity solution in the context of martingale problems, assume that there exists a solution of the martingale problem for $(A,\delta_x)$ for each $x\in E$. Suppose that there exists $v\in C_b(E)$ such that
$$e^{-\lambda t} v(X(t)) + \int_0^t e^{-\lambda s} h(X(s))\,ds \tag{2.16}$$
is an $\{\mathcal F^X_t\}$-martingale for every solution $X$ of the martingale problem for $A$. Let $(f,g)\in A$ and $x_0$ satisfy $\sup_x (v-f)(x) = (v-f)(x_0)$. Let $X$ be a solution of the martingale problem for $(A,\delta_{x_0})$. Then
$$e^{-\lambda t}\big(v(X(t)) - f(X(t))\big) + \int_0^t e^{-\lambda s}\big(h(X(s)) - \lambda f(X(s)) + g(X(s))\big)\,ds$$
is an $\{\mathcal F^X_t\}$-martingale by Lemma 2.9, and
$$E\Big[\int_0^t e^{-\lambda s}\big(\lambda v(X(s)) - g(X(s)) - h(X(s))\big)\,ds\Big] = E\Big[\int_0^t e^{-\lambda s}\lambda\big(v(X(s)) - f(X(s))\big)\,ds\Big] + E\big[e^{-\lambda t}\big(v(X(t)) - f(X(t))\big) - \big(v(x_0) - f(x_0)\big)\big] \le 0.$$
Dividing by $t$ and letting $t\to 0$, we see that
$$\lambda v(x_0) - g(x_0) \le h(x_0),$$
so $v$ is a subsolution of (2.11). A similar argument shows that it is also a supersolution, and hence a viscosity solution. We will give conditions such that, if the comparison principle holds for some $h$, then a viscosity solution $v$ exists and (2.16) is a martingale for every solution of the martingale problem for $A$.

In the case of a domain with boundary, in order to uniquely determine the solution of the martingale problem for $A$ one usually must specify some boundary conditions, by means of boundary operators $B_1,\dots,B_m$.

Let $E_0\subseteq E$ be an open set and let $\partial E_0 = \cup_{k=1}^m E_k$, for disjoint, nonempty Borel sets $E_1,\dots,E_m$. Let $A\subseteq C_b(E)\times C_b(E)$, $B_k\subseteq C_b(E)\times C(E)$, $k=1,\dots,m$, be linear operators with a common domain $\mathcal D$. For simplicity we will assume that $E$ is compact (hence the subscript $b$ will be dropped) and that $A,B_1,\dots,B_m$ are single valued.

Definition 2.20.
Let $A,B_1,\dots,B_m$ be as above, and let $\lambda>0$. For $h\in C(E)$, consider the equation
$$\lambda u - Au = h \quad\text{on } E_0, \qquad -B_k u = 0 \quad\text{on } E_k,\ k=1,\dots,m. \tag{2.17}$$

a) $u\in B(E)$ is a viscosity subsolution of (2.17) if and only if $u$ is upper semicontinuous and, whenever $f\in\mathcal D$ and $x_0\in E$ satisfy
$$\sup_x (u-f)(x) = (u-f)(x_0), \tag{2.18}$$
then
$$\lambda u(x_0) - Af(x_0) \le h(x_0), \quad\text{if } x_0\in E_0, \tag{2.19}$$
$$\big(\lambda u(x_0) - Af(x_0) - h(x_0)\big) \wedge \min_{k:\,x_0\in E_k}\big(-B_k f(x_0)\big) \le 0, \quad\text{if } x_0\in\partial E_0. \tag{2.20}$$

b) $u\in B(E)$ is a viscosity supersolution of (2.17) if and only if $u$ is lower semicontinuous and, whenever $f\in\mathcal D$ and $x_0\in E$ satisfy
$$\inf_x (u-f)(x) = (u-f)(x_0), \tag{2.21}$$
then
$$\lambda u(x_0) - Af(x_0) \ge h(x_0), \quad\text{if } x_0\in E_0, \tag{2.22}$$
$$\big(\lambda u(x_0) - Af(x_0) - h(x_0)\big) \vee \max_{k:\,x_0\in E_k}\big(-B_k f(x_0)\big) \ge 0, \quad\text{if } x_0\in\partial E_0.$$

A function $u\in C(E)$ is a viscosity solution of (2.17) if it is both a subsolution and a supersolution.

Remark 2.21.
The above definition, with the 'relaxed' requirement that on the boundary either the interior inequality or the boundary inequality be satisfied by at least one among $-B_1 f,\dots,-B_m f$, is the standard one in the theory of viscosity solutions, where it is used in particular because it is stable under limit operations and because it can be localized. As will be clear in Section 5, it suits our approach to martingale problems with boundary conditions perfectly.

3 Martingale problems

In this section, we restrict our attention to $A\subset C_b(E)\times C_b(E)$ and consider the martingale problem for $A$ in $D_E[0,\infty)$. Let $\Pi\subset\mathcal P(D_E[0,\infty))$ denote the collection of distributions of solutions of the martingale problem for $A$ in $D_E[0,\infty)$, and, for $\mu\in\mathcal P(E)$, let $\Pi_\mu\subset\Pi$ denote the subcollection with initial distribution $\mu$. If $\mu=\delta_x$, we will write $\Pi_x$ for $\Pi_{\delta_x}$. In this section $X$ will be the canonical process on $D_E[0,\infty)$. Assume the following condition.

Condition 3.1.

a) $\mathcal D(A)$ is dense in $C_b(E)$ in the topology of uniform convergence on compact sets.

b) For each $\mu\in\mathcal P(E)$, $\Pi_\mu\ne\emptyset$.

c) If $\mathcal K\subset\mathcal P(E)$ is compact, then $\cup_{\mu\in\mathcal K}\Pi_\mu$ is compact. (See Proposition 3.3 below.)

Remark 3.2.
In working with these conditions, it is simplest to take the usual Skorohod topology on $D_E[0,\infty)$. (See, for example, Sections 3.5–3.9 of [10].) The results of this paper also hold if we take the Jakubowski topology ([17]). The $\sigma$-algebra of Borel sets $\mathcal B(D_E[0,\infty))$ is the same for both topologies and, in fact, is simply the smallest $\sigma$-algebra under which all mappings of the form $x\in D_E[0,\infty)\mapsto x(t)$, $t\ge 0$, are measurable. It is also relevant to note that mappings of the form
$$x\in D_E[0,\infty)\ \mapsto\ \int_0^{\infty} e^{-\lambda t} h(x(t))\,dt, \qquad h\in C_b(E),\ \lambda>0,$$
are continuous under both topologies. The Jakubowski topology could be particularly useful for extensions of the results of Section 5 to constrained martingale problems in which the boundary terms are not local-time integrals.

Proposition 3.3.
In addition to Condition 3.1(a), assume that for each compact $K\subset E$, $\varepsilon>0$, and $T>0$, there exists a compact $K'\subset E$ such that
$$P\{X(t)\in K',\ t\le T,\ X(0)\in K\} \ge (1-\varepsilon)\,P\{X(0)\in K\} \qquad \forall\,P\in\Pi.$$
Then Condition 3.1(c) holds.

Proof.
The assertion is part of the statement of Theorem 4.5.11(b) of [10].

Let $\lambda>0$, and for $h\in C_b(E)$, define
$$u^+(x) = u^+(x,h) = \sup_{P\in\Pi_x} E^P\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big], \tag{3.1}$$
$$u^-(x) = u^-(x,h) = \inf_{P\in\Pi_x} E^P\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big], \tag{3.2}$$
$$\pi^+(\Pi_\mu,h) = \sup_{P\in\Pi_\mu} E^P\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big], \tag{3.3}$$
$$\pi^-(\Pi_\mu,h) = \inf_{P\in\Pi_\mu} E^P\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big]. \tag{3.4}$$

Lemma 3.4.
Under Condition 3.1, for $h\in C_b(E)$, $u^+(x,h)$ is upper semicontinuous (hence measurable), and
$$\pi^+(\Pi_\mu,h) = \int_E u^+(x,h)\,\mu(dx) \qquad \forall\,\mu\in\mathcal P(E). \tag{3.5}$$
The analogous result holds for $u^-$ and $\pi^-$.

Proof. For $\pi^+$, the lemma is a combination of Theorem 4.5.11(a), Lemma 4.5.8, Lemma 4.5.9 and Lemma 4.5.10 of [10], but we recall the main steps of the proof here for the convenience of the reader. Throughout the proof, $h$ will be fixed and omitted from the notation; in addition, we will write $\pi^+(\Pi_\mu,h) = \pi^+(\mu)$.

First of all, let us show that, for $\mu_n\to\mu$,
$$\limsup_{n\to\infty}\pi^+(\mu_n) \le \pi^+(\mu).$$
In fact, by compactness, for each $n$ there is $P^n\in\Pi_{\mu_n}$ that achieves the supremum defining $\pi^+(\mu_n)$. Moreover, by the compactness of $\Pi_\mu\cup\bigcup_n\Pi_{\mu_n}$, from every convergent subsequence $\{\pi^+(\mu_{n_i})\} = \big\{E^{P^{n_i}}\big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\big]\big\}$ we can extract a further subsequence $\{P^{n_{i_j}}\}$ that converges to some $\bar P\in\Pi_\mu$. Since $\int_0^{\infty} e^{-\lambda t} h(x(t))\,dt$ is continuous on $D_E[0,\infty)$, we then have
$$\lim_{i\to\infty}\pi^+(\mu_{n_i}) = \lim_{j\to\infty} E^{P^{n_{i_j}}}\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big] = E^{\bar P}\Big[\int_0^{\infty} e^{-\lambda t} h(X(t))\,dt\Big] \le \pi^+(\mu).$$
This yields, in particular, the upper semicontinuity (and hence the measurability) of $u^+(x) = \pi^+(\delta_x)$.

Next, Condition 3.1(b) and (c) imply that, for $\mu_1,\mu_2\in\mathcal P(E)$ and $0\le\alpha\le 1$,
$$\pi^+(\alpha\mu_1 + (1-\alpha)\mu_2) = \alpha\,\pi^+(\mu_1) + (1-\alpha)\,\pi^+(\mu_2)$$
(Theorem 4.5.11(a), Lemma 4.5.8 and Lemma 4.5.10 of [10]; we will not recall this part of the proof). This yields, for $\{\mu_i\}\subset\mathcal P(E)$ and $\alpha_i\ge 0$ with $\sum_i\alpha_i = 1$,
$$\pi^+\Big(\sum_i\alpha_i\mu_i\Big) = \Big(\sum_{i=1}^N\alpha_i\Big)\,\pi^+\Big(\frac{1}{\sum_{i=1}^N\alpha_i}\sum_{i=1}^N\alpha_i\mu_i\Big) + \Big(\sum_{i=N+1}^{\infty}\alpha_i\Big)\,\pi^+\Big(\frac{1}{\sum_{i=N+1}^{\infty}\alpha_i}\sum_{i=N+1}^{\infty}\alpha_i\mu_i\Big),$$
and hence
$$\Big|\pi^+\Big(\sum_i\alpha_i\mu_i\Big) - \sum_i\alpha_i\,\pi^+(\mu_i)\Big| \le \frac{2\|h\|}{\lambda}\sum_{i=N+1}^{\infty}\alpha_i \qquad \forall\,N.$$
Finally, for each $n$, let $\{E^n_i\}$ be a countable collection of disjoint subsets of $E$ of diameter less than $1/n$ and such that $E = \bigcup_i E^n_i$. In addition, let $x^n_i\in E^n_i$ satisfy
$$u^+(x^n_i) \ge \sup_{x\in E^n_i} u^+(x) - \frac 1n.$$
Define $u_n(x) = \sum_i u^+(x^n_i)\,\mathbf 1_{E^n_i}(x)$ and $\mu_n = \sum_i \mu(E^n_i)\,\delta_{x^n_i}$. Then $\{u_n\}$ converges to $u^+$ pointwise and boundedly, and $\{\mu_n\}$ converges to $\mu$. Therefore
$$\int_E u^+(x)\,\mu(dx) = \lim_n\int_E u_n(x)\,\mu(dx) = \lim_n\sum_i \pi^+(\delta_{x^n_i})\,\mu(E^n_i) = \lim_n \pi^+(\mu_n) \le \pi^+(\mu).$$
To prove the opposite inequality, let $\mu^n_i(B) = \mu(B\cap E^n_i)/\mu(E^n_i)$, for $\mu(E^n_i)>0$, and $u_n(x) = \sum_i \pi^+(\mu^n_i)\,\mathbf 1_{E^n_i}(x)$. For each $x\in E$ and every $n$ there exists a (unique) $i(n)$ such that $x\in E^n_{i(n)}$. Then $u_n(x) = \pi^+(\mu^n_{i(n)})$ and $\mu^n_{i(n)}\to\delta_x$, hence $\limsup_n u_n(x)\le u^+(x)$. Therefore, for every $n$,
$$\pi^+(\mu) = \pi^+\Big(\sum_i \mu(E^n_i)\,\mu^n_i\Big) = \int_E u_n(x)\,\mu(dx),$$
and hence
$$\pi^+(\mu) \le \int_E \limsup_n u_n(x)\,\mu(dx) \le \int_E u^+(x)\,\mu(dx),$$
where the first inequality follows from Fatou's lemma (in its limsup form), using the fact that the $u_n$ are uniformly bounded.

To prove the assertion for $\pi^-$, use the fact that $\pi^-(\Pi_\mu,h) = -\pi^+(\Pi_\mu,-h)$.

Lemma 3.5.
Assume that Condition 3.1 holds. Then $u_+$ is a viscosity subsolution of (2.11) and $u_-$ is a viscosity supersolution of the same equation.

Proof. Since $u_-(x,h) = -u_+(x,-h)$, it is enough to consider $u_+$. Let $(f,g) \in A$. Suppose $x_0$ is a point such that $u_+(x_0) - f(x_0) = \sup_x (u_+(x) - f(x))$. Since we can always add a constant to $f$, we can assume $u_+(x_0) - f(x_0) = 0$. By compactness (Condition 3.1(c)), we have
$$u_+(x_0) = E^P\Big[\int_0^\infty e^{-\lambda t} h(X(t))\,dt\Big] \quad \text{for some } P \in \Pi_{x_0}.$$
For $\epsilon > 0$, define
$$\tau_\epsilon = \epsilon \wedge \inf\{t > 0 : r(X(t), x_0) \ge \epsilon \text{ or } r(X(t-), x_0) \ge \epsilon\} \qquad (3.6)$$
and let $H_\epsilon = e^{-\lambda\tau_\epsilon}$. Then, by Lemma 2.9,
$$\begin{aligned} u_+(x_0) - f(x_0) &= E^P\Big[\int_0^\infty e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] \\ &= E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] \\ &\quad + E^P\Big[e^{-\lambda\tau_\epsilon}\int_0^\infty e^{-\lambda t}\big(h(X(t+\tau_\epsilon)) - \lambda f(X(t+\tau_\epsilon)) + g(X(t+\tau_\epsilon))\big)\,dt\Big] \\ &= E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] \\ &\quad + E^P[H_\epsilon]\, E^{P_{\tau_\epsilon,H_\epsilon}}\Big[\int_0^\infty e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big]. \end{aligned}$$
Setting $\mu_\epsilon(\cdot) = P_{\tau_\epsilon,H_\epsilon}(X(0) \in \cdot)$, by Lemma 2.10 and Lemma 2.9, the above chain of equalities can be continued as (with the notation $\mu f = \int f\,d\mu$)
$$\le E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] + E^P[H_\epsilon]\,\big(\pi_+(\Pi_{\mu_\epsilon}, h) - \mu_\epsilon f\big),$$
and, by Lemma 3.4,
$$\begin{aligned} &= E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] + E^P[H_\epsilon]\,\big(\mu_\epsilon u_+ - \mu_\epsilon f\big) \\ &= E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] + E^P\big[e^{-\lambda\tau_\epsilon}\big(u_+(X(\tau_\epsilon)) - f(X(\tau_\epsilon))\big)\big] \\ &\le E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big], \end{aligned}$$
where the last inequality uses the fact that $u_+ - f \le 0$. Therefore
$$0 \le \lim_{\epsilon\to 0} \frac{E^P\big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\big]}{E^P[\tau_\epsilon]} = h(x_0) - \lambda f(x_0) + g(x_0) = h(x_0) - \lambda u_+(x_0) + g(x_0).$$

Corollary 3.6.
Let $h \in C_b(E)$. If, in addition to Condition 3.1, the comparison principle holds for equation (2.11), then $u = u_+ = u_-$ is the unique viscosity solution of equation (2.11).

Theorem 3.7.
Assume that Condition 3.1 holds. For $\lambda > 0$, let $M_\lambda$ be the set of $h \in C_b(E)$ such that the comparison principle holds for (2.11). If for each $\lambda > 0$, $M_\lambda$ is separating, then uniqueness holds for the martingale problem for $A$ in $D_E[0,\infty)$.

Proof. If the comparison principle for (2.11) holds for some $h \in C_b(E)$, then by Lemma 3.5, $u_+ = u_-$. Then, by the definition of $u_+$ and $u_-$ and Lemma 3.4, for any two solutions $P_1, P_2 \in \Pi_\mu$, we must have
$$E^{P_1}\Big[\int_0^\infty e^{-\lambda t} h(X(t))\,dt\Big] = E^{P_2}\Big[\int_0^\infty e^{-\lambda t} h(X(t))\,dt\Big].$$
Consequently, if $M_\lambda$ is separating, Theorem 2.13 implies $P_1 = P_2$.

Remark 3.8.
Another way of viewing the role of the comparison principle in the question of uniqueness for the martingale problem for $A$ is the following.

Suppose the comparison principle holds for some $h$ and let $u_+ = u_- = u$. By Lemmas 3.4 and 2.11,
$$u(X(t)) - \int_0^t \big(\lambda u(X(s)) - h(X(s))\big)\,ds$$
is a $\{\mathcal{F}_t^X\}$-martingale for every $P \in \Pi$. Then, even though $u_+$ and $u_-$ are defined in nonlinear ways, the linearity of the martingale property ensures that $A$ can be extended to the linear span $A_u$ of $A \cup \{(u, \lambda u - h)\}$, and every solution of the martingale problem for $A$ will be a solution of the martingale problem for $A_u$. By applying this procedure to all functions $h \in M_\lambda$, we obtain an extension $\widehat{A}$ of $A$ such that every solution of the martingale problem for $A$ will be a solution of the martingale problem for $\widehat{A}$ and such that $\mathcal{R}(\lambda - \widehat{A}) \supset M_\lambda$ and hence is separating. Therefore uniqueness follows from Corollary 2.14.

Notice that, even if the comparison principle does not hold, under Condition 3.1, by Lemma 4.5.18 of [10], for each $\mu \in \mathcal{P}(E)$ there exists $P \in \Pi_\mu$ such that under $P$
$$u_+(X(t)) - \int_0^t \big(\lambda u_+(X(s)) - h(X(s))\big)\,ds$$
is a $\{\mathcal{F}_t^X\}$-martingale.

Remark 3.9.
If, for some $h$, there exists $(u,g) \in \widehat{A}$ such that $\lambda u - g = h$ (essentially, $u$ is the analog of a stochastic solution as defined in [27]), then, by Lemma 2.9 and Remark 2.19, $u$ is a viscosity solution of (2.11).

Different definitions of viscosity solution may be useful, depending on the setting. Here we discuss two other possibilities. As mentioned in the Introduction, the first, which is stated in terms of solutions of the martingale problem, is a modification of a definition used in [9], while the second is a stronger version of definitions in [11]. We show that Lemma 3.5 still holds under these alternative definitions and hence all the results of Section 3 carry over. Both definitions are stronger in the sense that the inequalities (2.13) and (2.15) required by the previous definition are also required by these definitions. Consequently, in both cases, it should be easier to prove the comparison principle. $\mathcal{T}$ will denote the set of $\{\mathcal{F}_t^X\}$-stopping times.

Definition 4.1. (Stopped viscosity (semi)solution)
Let $A \subset C_b(E) \times C_b(E)$, $\lambda > 0$, and $h \in C_b(E)$.

a) $u \in B(E)$ is a stopped viscosity subsolution of (2.11) if and only if $u$ is upper semicontinuous and if $(f,g) \in A$, $x_0 \in E$, and there exists a strictly positive $\tau_0 \in \mathcal{T}$ such that
$$\sup_{P \in \Pi_{x_0},\, \tau \in \mathcal{T}} \frac{E^P\big[e^{-\lambda(\tau\wedge\tau_0)}(u-f)(X(\tau\wedge\tau_0))\big]}{E^P\big[e^{-\lambda(\tau\wedge\tau_0)}\big]} = (u-f)(x_0), \qquad (4.1)$$
then
$$\lambda u(x_0) - g(x_0) \le h(x_0). \qquad (4.2)$$

b) $u \in B(E)$ is a stopped viscosity supersolution of (2.11) if and only if $u$ is lower semicontinuous and if $(f,g) \in A$, $x_0 \in E$, and there exists a strictly positive $\tau_0 \in \mathcal{T}$ such that
$$\inf_{P \in \Pi_{x_0},\, \tau \in \mathcal{T}} \frac{E^P\big[e^{-\lambda(\tau\wedge\tau_0)}(u-f)(X(\tau\wedge\tau_0))\big]}{E^P\big[e^{-\lambda(\tau\wedge\tau_0)}\big]} = (u-f)(x_0), \qquad (4.3)$$
then
$$\lambda u(x_0) - g(x_0) \ge h(x_0). \qquad (4.4)$$

A function $u \in C_b(E)$ is a stopped viscosity solution of (2.11) if it is both a subsolution and a supersolution.

Remark 4.2. If $(u-f)(x_0) = \sup_x (u-f)(x)$, then (4.1) is satisfied. Consequently, every stopped sub/supersolution in the sense of Definition 4.1 is a sub/supersolution in the sense of Definition 2.17.

Remark 4.3.
Definition 4.1 requires (4.2) (respectively, (4.4)) to hold only at points $x_0$ for which (4.1) (respectively, (4.3)) is verified for some $\tau_0$. Note that, as in Definition 2.17, such an $x_0$ might not exist. Definition 4.1 essentially requires a local maximum principle and is related to the notion of characteristic operator as given in [8].

For Definition 4.1, we have the following analog of Lemma 3.5.

Lemma 4.4.
Assume that Condition 3.1 holds. Then $u_+$ given by (3.1) is a stopped viscosity subsolution of (2.11) and $u_-$ given by (3.2) is a stopped viscosity supersolution of the same equation.

Proof. Let $(f,g) \in A$. Suppose $x_0$ is a point such that (4.1) holds for $u_+$ for some $\tau_0 \in \mathcal{T}$, $\tau_0 > 0$. Since we can always add a constant to $f$, we can assume $u_+(x_0) - f(x_0) = 0$. By the same arguments used in the proof of Lemma 3.5, defining $\tau_\epsilon$ and $H_\epsilon$ in the same way, we obtain, for some $P \in \Pi_{x_0}$ (independent of $\epsilon$),
$$\begin{aligned} u_+(x_0) - f(x_0) &\le E^P\Big[\int_0^{\tau_\epsilon\wedge\tau_0} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big] \\ &\quad + E^P\big[e^{-\lambda(\tau_\epsilon\wedge\tau_0)}\big(u_+(X(\tau_\epsilon\wedge\tau_0)) - f(X(\tau_\epsilon\wedge\tau_0))\big)\big] \\ &\le E^P\Big[\int_0^{\tau_\epsilon\wedge\tau_0} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big], \end{aligned}$$
where the last inequality uses (4.1) and the fact that $u_+(x_0) - f(x_0) = 0$. Then the result follows as in Lemma 3.5.

The following is essentially Definition 7.1 of [11].

Definition 4.5. (Sequential viscosity (semi)solution)
Let $A \subset C_b(E) \times C_b(E)$, $\lambda > 0$, and $h \in C_b(E)$.

a) $u \in B(E)$ is a sequential viscosity subsolution of (2.11) if and only if $u$ is upper semicontinuous and for each $(f,g) \in A$ and each sequence $y_n \in E$ satisfying
$$\lim_{n\to\infty} (u-f)(y_n) = \sup_x (u-f)(x), \qquad (4.5)$$
we have
$$\limsup_{n\to\infty} \big(\lambda u(y_n) - g(y_n) - h(y_n)\big) \le 0. \qquad (4.6)$$

b) $u \in B(E)$ is a sequential viscosity supersolution of (2.11) if and only if $u$ is lower semicontinuous and for each $(f,g) \in A$ and each sequence $y_n \in E$ satisfying
$$\lim_{n\to\infty} (u-f)(y_n) = \inf_x (u-f)(x), \qquad (4.7)$$
we have
$$\liminf_{n\to\infty} \big(\lambda u(y_n) - g(y_n) - h(y_n)\big) \ge 0. \qquad (4.8)$$

A function $u \in C_b(E)$ is a sequential viscosity solution of (2.11) if it is both a subsolution and a supersolution.

Remark 4.6.
For $E$ compact, every viscosity sub/supersolution is a sequential viscosity sub/supersolution.

For sequential viscosity semisolutions, we have the following analog of Lemma 3.5. $C_{b,u}(E)$ denotes the space of bounded, uniformly continuous functions on $E$.

Lemma 4.7.
For $\epsilon > 0$, define
$$\tau_\epsilon = \epsilon \wedge \inf\{t > 0 : r(X(t), X(0)) \ge \epsilon \text{ or } r(X(t-), X(0)) \ge \epsilon\}.$$
Assume $A \subset C_{b,u}(E) \times C_{b,u}(E)$, that for each $\epsilon > 0$, $\inf_{P\in\Pi} E^P[\tau_\epsilon] > 0$, and that Condition 3.1 holds. Then, for $h \in C_{b,u}(E)$, $u_+$ given by (3.1) is a sequential viscosity subsolution of (2.11) and $u_-$ given by (3.2) is a sequential viscosity supersolution of the same equation.

Proof. Let $(f,g) \in A$. Suppose $\{y_n\}$ is a sequence such that (4.5) holds for $u_+$. Since we can always add a constant to $f$, we can assume $\sup_x (u_+ - f)(x) = 0$. Let $H_\epsilon = e^{-\lambda\tau_\epsilon}$. Then, by the same arguments as in Lemma 3.5, we have, for some $P_n \in \Pi_{y_n}$ (independent of $\epsilon$),
$$(u_+ - f)(y_n) \le E^{P_n}\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\Big],$$
where we have used the fact that $\sup_x (u_+ - f)(x) = 0$. Therefore
$$\frac{(u_+ - f)(y_n)}{E^{P_n}[\tau_\epsilon]} \le \frac{E^{P_n}\big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + g(X(t))\big)\,dt\big]}{E^{P_n}[\tau_\epsilon]}.$$
Replacing $\epsilon$ by $\epsilon_n$ going to zero sufficiently slowly so that the left-hand side converges to zero, the uniform continuity of $f$, $g$, and $h$ implies that the right-hand side is asymptotic to $h(y_n) - \lambda f(y_n) + g(y_n)$, giving
$$0 \le \liminf_{n\to\infty} \big(h(y_n) - \lambda f(y_n) + g(y_n)\big) = \liminf_{n\to\infty} \big(h(y_n) - \lambda u_+(y_n) + g(y_n)\big).$$

The following theorem is essentially Lemma 7.4 of [11]. It gives the intuitively natural result that if $h \in \overline{\mathcal{R}(\lambda - A)}$ (where the closure is taken under uniform convergence), then the comparison principle holds for sequential viscosity semisolutions of $\lambda u - Au = h$. If $E$ is compact, the same result holds for viscosity semisolutions, by Remark 4.6.

Theorem 4.8.
Suppose $h \in C_b(E)$ and there exist $(f_n, g_n) \in A$ satisfying $\sup_x |\lambda f_n(x) - g_n(x) - h(x)| \to 0$. Then the comparison principle holds for sequential viscosity semisolutions of (2.11).

Proof.
Suppose $u_1$ is a sequential viscosity subsolution and $u_2$ is a sequential viscosity supersolution. Set $h_n = \lambda f_n - g_n$. For $\epsilon_n > 0$, $\epsilon_n \to 0$, there exist $y_n \in E$ satisfying
$$u_1(y_n) - f_n(y_n) \ge \sup_x \big(u_1(x) - f_n(x)\big) - \epsilon_n/\lambda$$
and
$$\lambda u_1(y_n) - g_n(y_n) - h(y_n) \le \epsilon_n.$$
Then
$$\sup_x \big(\lambda u_1(x) - \lambda f_n(x)\big) \le \lambda u_1(y_n) - \lambda f_n(y_n) + \epsilon_n \le h(y_n) + g_n(y_n) - \lambda f_n(y_n) + 2\epsilon_n = h(y_n) - h_n(y_n) + 2\epsilon_n \to 0.$$
Similarly, if $u_2$ is a supersolution of $\lambda u - Au = h$,
$$\liminf_{n\to\infty} \inf_x \big(u_2(x) - f_n(x)\big) \ge 0,$$
and it follows that $u_1 \le u_2$.

The study of stochastic processes that are constrained to a set $E_0$ and must satisfy boundary conditions on $\partial E_0$, described by one or more boundary operators $B_1, \ldots, B_m$, is typically carried out by incorporating the boundary condition into the definition of the domain $\mathcal{D}(A)$ (see Remark 5.12 below). However, this approach restricts the problems that can be dealt with to fairly regular ones, so we follow the formulation of a constrained martingale problem given in [19] (see also [20, 22]).

Let $E_0 \subseteq E$ be an open set and let $\partial E_0 = \cup_{k=1}^m E_k$, for disjoint, nonempty Borel sets $E_1, \ldots, E_m$. Let $A \subseteq C_b(E) \times C_b(E)$ and $B_k \subseteq C_b(E) \times C_b(E)$, $k = 1, \ldots, m$, be linear operators with a common domain $\mathcal{D}$ such that $(1,0) \in A$ and $(1,0) \in B_k$, $k = 1, \ldots, m$. For simplicity we will assume that $E$ is compact (hence the subscript $b$ will be dropped) and that $A, B_1, \ldots, B_m$ are single-valued.

Definition 5.1.
A stochastic process $X$ with sample paths in $D_E[0,\infty)$ is a solution of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ provided there exist a filtration $\{\mathcal{F}_t\}$ and continuous, nondecreasing processes $\gamma_1, \ldots, \gamma_m$ such that $X$, $\gamma_1, \ldots, \gamma_m$ are $\{\mathcal{F}_t\}$-adapted,
$$\gamma_k(t) = \int_0^t 1_{E_k}(X(s-))\,d\gamma_k(s),$$
and for each $f \in \mathcal{D}$,
$$M_f(t) = f(X(t)) - f(X(0)) - \int_0^t Af(X(s))\,ds - \sum_{k=1}^m \int_0^t B_k f(X(s-))\,d\gamma_k(s) \qquad (5.1)$$
is a $\{\mathcal{F}_t\}$-martingale.

Remark 5.2. $\gamma_1, \ldots, \gamma_m$ will be called local times, since $\gamma_k$ increases only when $X$ is in $E_k$. Without loss of generality, we can assume that the $\gamma_k$ are $\{\mathcal{F}_t^X\}$-adapted (replace $\gamma_k$ by its dual predictable projection on $\{\mathcal{F}_t^X\}$). Definition 5.1 does not require that the $\gamma_k$ be uniquely determined by the distribution of $X$, but if $\gamma_k^1$ and $\gamma_k^2$, $k = 1, \ldots, m$, are continuous and satisfy the martingale requirement with the same filtration, we must have
$$\sum_{k=1}^m \int_0^t B_k f(X(s-))\,d\gamma_k^1(s) - \sum_{k=1}^m \int_0^t B_k f(X(s-))\,d\gamma_k^2(s) = 0,$$
since this expression will be a continuous martingale with finite variation paths.

Remark 5.3.
The main example of a constrained martingale problem in the sense of the above definition is the one describing a reflecting diffusion process. In this case, $A$ is a second order elliptic operator and the $B_k$ are first order differential operators. Although there is a vast literature on this topic, there are still relevant cases of reflecting diffusions that have not been uniquely characterized as solutions of martingale problems or stochastic differential equations. In Section 6.1, the results of this section are used in one of these cases. More general constrained diffusions where the $B_k$ are second order elliptic operators, for instance diffusions with sticky reflection, also satisfy Definition 5.1.

Definition 5.1 is a special case of a more general definition of constrained martingale problem given in [22]. This broader definition allows for more general boundary behavior, such as the models considered in [4].

Many results for solutions of martingale problems carry over to solutions of constrained martingale problems. In particular, Lemma 2.9 still holds. In addition, the following lemma holds.

Lemma 5.4.
Let $X$ be a stochastic process with sample paths in $D_E[0,\infty)$ and let $\gamma_1, \ldots, \gamma_m$ be continuous, nondecreasing processes such that $X$, $\gamma_1, \ldots, \gamma_m$ are $\{\mathcal{F}_t\}$-adapted. Then for $f \in \mathcal{D}$ such that (5.1) is a $\{\mathcal{F}_t\}$-martingale and $\lambda > 0$,
$$M_f^\lambda(t) = e^{-\lambda t} f(X(t)) - f(X(0)) + \int_0^t e^{-\lambda s}\big(\lambda f(X(s)) - Af(X(s))\big)\,ds - \sum_{k=1}^m \int_0^t e^{-\lambda s} B_k f(X(s-))\,d\gamma_k(s)$$
is a $\{\mathcal{F}_t\}$-martingale.

Proof. Since
$\mathcal{D} \subset C(E)$ and $M_f$ is cadlag, we can apply Itô's formula to $e^{-\lambda t} f(X(t))$ and obtain
$$e^{-\lambda t} f(X(t)) - f(X(0)) = \int_0^t \big(-\lambda e^{-\lambda s} f(X(s)) + e^{-\lambda s} Af(X(s))\big)\,ds + \sum_{k=1}^m \int_0^t e^{-\lambda s} B_k f(X(s-))\,d\gamma_k(s) + \int_0^t e^{-\lambda s}\,dM_f(s).$$
Rearranging, $M_f^\lambda(t) = \int_0^t e^{-\lambda s}\,dM_f(s)$, which is a $\{\mathcal{F}_t\}$-martingale because $M_f$ is.

Lemma 2.10 is replaced by Lemma 5.5 below.
Lemma 5.5. a) The set of distributions of solutions of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ is convex.

b) Let $X$, $\gamma_1, \ldots, \gamma_m$ satisfy Definition 5.1. Let $\tau \ge 0$ be a bounded $\{\mathcal{F}_t\}$-stopping time and $H \ge 0$ be a $\mathcal{F}_\tau$-measurable random variable such that $0 < E[H] < \infty$. Then the measure $P_{\tau,H} \in \mathcal{P}(D_E[0,\infty))$ defined by
$$P_{\tau,H}(C) = \frac{E\big[H\,1_C(X(\tau+\cdot))\big]}{E[H]}, \qquad C \in \mathcal{B}(D_E[0,\infty)), \qquad (5.2)$$
is the distribution of a solution of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$.

Proof. Part (a) is immediate. For Part (b), let $(\Omega, \mathcal{F}, P)$ be the probability space on which $X$, $\gamma_1, \ldots, \gamma_m$ are defined, and define $P_H$ by
$$P_H(C) = \frac{E^P[H\,1_C]}{E^P[H]}, \qquad C \in \mathcal{F}.$$
Define $X^\tau$ and $\gamma_k^\tau$ by $X^\tau(t) = X(\tau+t)$ and $\gamma_k^\tau(t) = \gamma_k(\tau+t) - \gamma_k(\tau)$. $X^\tau$ and the $\gamma_k^\tau$ are adapted to the filtration $\{\mathcal{F}_{\tau+t}\}$, and for $0 \le t_1 < \cdots < t_n < t_{n+1}$, $f \in \mathcal{D}$ and $f_1, \ldots, f_n \in B(E)$,
$$\begin{aligned} &E^{P_H}\Big[\Big\{f(X^\tau(t_{n+1})) - f(X^\tau(t_n)) - \int_{t_n}^{t_{n+1}} Af(X^\tau(s))\,ds - \sum_{k=1}^m \int_{t_n}^{t_{n+1}} B_k f(X^\tau(s-))\,d\gamma_k^\tau(s)\Big\} \prod_{i=1}^n f_i(X^\tau(t_i))\Big] \\ &= \frac{1}{E^P[H]}\, E^P\Big[H\Big\{f(X(\tau+t_{n+1})) - f(X(\tau+t_n)) - \int_{\tau+t_n}^{\tau+t_{n+1}} Af(X(s))\,ds - \sum_{k=1}^m \int_{\tau+t_n}^{\tau+t_{n+1}} B_k f(X(s-))\,d\gamma_k(s)\Big\} \prod_{i=1}^n f_i(X(\tau+t_i))\Big] \\ &= 0 \end{aligned}$$
by the optional sampling theorem. Therefore, under $P_H$, $X^\tau$ is a solution of the constrained martingale problem with local times $\gamma_1^\tau, \ldots, \gamma_m^\tau$, and $P_{\tau,H}$, given by (5.2), is the distribution of $X^\tau$ on $D_E[0,\infty)$.

As in Section 3, let $\Pi$ denote the set of distributions of solutions of the constrained martingale problem and $\Pi_\mu$ the set of distributions of solutions with initial condition $\mu$. In the rest of this section, $X$ is the canonical process on $D_E[0,\infty)$ and $\gamma_1, \ldots, \gamma_m$ are a set of $\{\mathcal{F}_t^X\}$-adapted local times (see Remark 5.2). We assume that the following conditions hold; see Section 5.1 below for settings in which these conditions are valid. Recall that we are assuming $E$ is compact.

Condition 5.6.
a) $\mathcal{D}$ is dense in $C(E)$ in the topology of uniform convergence.
b) For each $\mu \in \mathcal{P}(E)$, $\Pi_\mu \ne \emptyset$ (see Proposition 5.13).
c) $\Pi$ is compact (see Proposition 5.13).
d) For each $P \in \Pi$ and $\lambda > 0$, there exist $\gamma_1, \ldots, \gamma_m$ satisfying the requirements of Definition 5.1 such that $E^P\big[\int_0^\infty e^{-\lambda t}\,d\gamma_k(t)\big] < \infty$, $k = 1, \ldots, m$ (see Proposition 5.13).

Remark 5.7.
For $P \in \Pi$ with initial distribution $\mu$, Condition 5.6(d) and Lemma 5.4 give
$$\mu f = E^P\Big[\int_0^\infty e^{-\lambda s}\big(\lambda f(X(s)) - Af(X(s))\big)\,ds - \sum_{k=1}^m \int_0^\infty e^{-\lambda s} B_k f(X(s-))\,d\gamma_k(s)\Big]. \qquad (5.3)$$

Remark 5.8.
We can take the topology on $D_E[0,\infty)$ to be either the Skorohod topology or the Jakubowski topology (see Remark 3.2).

The definitions of $u_+$, $u_-$, $\pi_+$ and $\pi_-$ are still given by (3.1), (3.2), (3.3) and (3.4). With Condition 5.6 replacing Condition 3.1, the proof of Lemma 3.4 carries over (Lemma 5.5 above guarantees that Lemmas 4.5.8 and 4.5.10 in [10] can be applied).

Lemma 5.9.
Assume Condition 5.6 holds. Then $u_+$ is a viscosity subsolution of (2.17) and $u_-$ is a viscosity supersolution of the same equation.

Proof. The proof is similar to the proof of Lemma 3.5, so we only sketch the argument. For $f \in \mathcal{D}$, let $x_0$ satisfy $\sup_{x\in E} (u_+ - f)(x) = u_+(x_0) - f(x_0)$. By adding a constant to $f$ if necessary, we can assume that $u_+(x_0) - f(x_0) = 0$. With $\tau_\epsilon$ as in the proof of Lemma 3.5, by Lemmas 5.4, 5.5 and 3.4, and the compactness of $\Pi_{x_0}$ (Condition 5.6(c)), for some $P \in \Pi_{x_0}$ (independent of $\epsilon$) we have
$$0 \le E^P\Big[\int_0^{\tau_\epsilon} e^{-\lambda t}\big(h(X(t)) - \lambda f(X(t)) + Af(X(t))\big)\,dt + \sum_{k=1}^m \int_0^{\tau_\epsilon} e^{-\lambda t} B_k f(X(t-))\,d\gamma_k(t)\Big].$$
Dividing the expectation by $E^P\big[\lambda^{-1}\big(1 - e^{-\lambda\tau_\epsilon}\big) + \sum_{k=1}^m \int_0^{\tau_\epsilon} e^{-\lambda t}\,d\gamma_k(t)\big]$ and letting $\epsilon \to 0$, we must have
$$0 \le \big(h(x_0) - \lambda f(x_0) + Af(x_0)\big) \vee \max_{k:\, x_0 \in E_k} B_k f(x_0),$$
which, since $f(x_0) = u_+(x_0)$, implies (2.20) if $x_0 \in \partial E_0$, and (2.19) if $x_0 \in E_0$.

Corollary 5.10.
Let $h \in C(E)$. If, in addition to Condition 5.6, the comparison principle holds for equation (2.17), then $u = u_+ = u_-$ is the unique viscosity solution of equation (2.17).

The following theorem is the analog of Theorem 3.7.
Theorem 5.11.
Assume Condition 5.6 holds. For $\lambda > 0$, let $M_\lambda$ be the set of $h \in C(E)$ such that the comparison principle holds for (2.17). If for every $\lambda > 0$, $M_\lambda$ is separating, then the distribution of the solution $X$ of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ is uniquely determined.

Proof. The proof of this result is in the spirit of Remark 3.8. Let $\widehat{A}$ be the collection of $(f,g) \in B(E) \times B(E)$ such that $f(X(t)) - \int_0^t g(X(s))\,ds$ is a $\{\mathcal{F}_t^X\}$-martingale for all $P \in \Pi$. Denote by $\widehat{\Pi}$ the set of distributions of solutions of the martingale problem for $\widehat{A}$, and by $\widehat{\Pi}_\mu$ the set of solutions with initial distribution $\mu$. Then, by construction, for each $\mu \in \mathcal{P}(E)$, $\Pi_\mu \subseteq \widehat{\Pi}_\mu$. By the comparison principle, Lemmas 5.9, 3.4 and 2.11, for each $h \in M_\lambda$ and $u = u_+ = u_-$ given by (3.1) (or, equivalently, (3.2)), $(u, \lambda u - h)$ belongs to $\widehat{A}$, or equivalently the pair $(u,h)$ belongs to $\lambda - \widehat{A}$. Consequently $\mathcal{R}(\lambda - \widehat{A}) \supseteq M_\lambda$ is separating, and the claim follows from Corollary 2.14.

Remark 5.12.
Differently from Remark 3.8, the operator $\widehat{A}$ is not an extension of $A$ as an operator on the domain $\mathcal{D}$, but it is an extension of $A$ restricted to the domain
$$\mathcal{D}_0 = \{f \in \mathcal{D} : B_k f(x) = 0 \ \forall x \in E_k,\ k = 1, \ldots, m\}.$$
The distribution of the solution $X$ of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m, \mu)$ is uniquely determined even though the same might not hold for the solution of the martingale problem for $(A|_{\mathcal{D}_0}, \mu)$.

In what follows, we assume that $E - E_0 = \cup_{k=1}^m \widetilde{E}_k$, where the $\widetilde{E}_k$ are disjoint Borel sets satisfying $\widetilde{E}_k \supset E_k$, $k = 1, \ldots, m$.

Proposition 5.13.
Assume Condition 5.6(a) and that the following hold:

i) There exist linear operators $\widetilde{A}, \widetilde{B}_1, \ldots, \widetilde{B}_m : \widetilde{\mathcal{D}} \subseteq C(E) \to C(E)$, with $\widetilde{\mathcal{D}}$ dense in $C(E)$, $(1,0) \in \widetilde{A}$, $(1,0) \in \widetilde{B}_k$, $k = 1, \ldots, m$, that are extensions of $A, B_1, \ldots, B_m$ in the sense that for every $f \in \mathcal{D}$ there exists $\widetilde{f} \in \widetilde{\mathcal{D}}$ such that $f = \widetilde{f}|_{\bar{E}_0}$, $Af = \widetilde{A}\widetilde{f}|_{\bar{E}_0}$ and $B_k f = \widetilde{B}_k \widetilde{f}|_{\bar{E}_0}$, $k = 1, \ldots, m$, and such that the martingale problem for each of $\widetilde{A}, \widetilde{B}_1, \ldots, \widetilde{B}_m$ with initial condition $\delta_x$ has a solution for every $x \in E$.

ii) If $\bar{E}_0 \ne E$, there exists $\varphi \in \widetilde{\mathcal{D}}$ such that $\varphi = 0$ on $\bar{E}_0$, $\varphi > 0$ on $E - \bar{E}_0$, $\widetilde{A}\varphi = 0$ on $\bar{E}_0$, and $\widetilde{B}_k \varphi \le 0$ on $\widetilde{E}_k$, $k = 1, \ldots, m$.

iii) There exists $\{\varphi_n\}$, $\varphi_n \in \mathcal{D}$, such that $\sup_{n,x} |\varphi_n(x)| < \infty$ and $B_k \varphi_n(x) \ge n$ on $E_k$ for all $k = 1, \ldots, m$.

Then b), c) and d) in Condition 5.6 are verified.

Proof. Condition 5.6(b). We will obtain a solution of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ by constructing a solution of the constrained martingale problem for $(\widetilde{A}, E_0; \widetilde{B}_1, \widetilde{E}_1; \ldots; \widetilde{B}_m, \widetilde{E}_m)$ and showing that any such solution that starts in $\bar{E}_0$ stays in $\bar{E}_0$ for all times. Following [19], we will construct a solution of the constrained martingale problem for $(\widetilde{A}, E_0; \widetilde{B}_1, \widetilde{E}_1; \ldots; \widetilde{B}_m, \widetilde{E}_m)$ from a solution of the corresponding patchwork martingale problem.

$\widetilde{A}, \widetilde{B}_1, \ldots, \widetilde{B}_m$ are dissipative operators by i) and Lemma 2.16. Then, by Lemma 1.1 in [19], for each initial distribution on $E$, there exists a solution of the patchwork martingale problem for $(\widetilde{A}, E_0; \widetilde{B}_1, \widetilde{E}_1; \ldots; \widetilde{B}_m, \widetilde{E}_m)$. In addition, if $\bar{E}_0 \ne E$, by ii) and the same argument used in the proof of Lemma 1.4 in [19], for every solution $Y$ of the patchwork martingale problem for $(\widetilde{A}, E_0; \widetilde{B}_1, \widetilde{E}_1; \ldots; \widetilde{B}_m, \widetilde{E}_m)$ with initial distribution concentrated on $\bar{E}_0$, $Y(t) \in \bar{E}_0$ for all $t \ge 0$. Therefore $Y$ is also a solution of the patchwork martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$. By iii) and Lemma 1.8, Lemma 1.9, Proposition 2.2, and Proposition 2.3 in [19], from $Y$ a solution $X$ of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ can be constructed.

Condition 5.6(c). If $X$ is a solution of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ and $\gamma_1, \ldots, \gamma_m$ are associated local times, then $\eta(t) = \inf\{s : s + \gamma_1(s) + \ldots + \gamma_m(s) > t\}$ is strictly increasing and diverges to infinity as $t$ goes to infinity, with probability one, and $Y = X \circ \eta$ is a solution of the patchwork martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$; $\eta_0 = \eta$, $\eta_1 = \gamma_1 \circ \eta, \ldots, \eta_m = \gamma_m \circ \eta$ are associated increasing processes (see the proof of Corollary 2.5 of [19]). Let $\{(X^n, \gamma_1^n, \ldots, \gamma_m^n)\}$ be a sequence of solutions of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ with initial conditions $\{\mu_n\}$, $\mu_n \in \mathcal{P}(E)$, and associated local times. Since $\mathcal{P}(E)$ is compact, we may assume, without loss of generality, that $\{\mu_n\}$ converges to some $\mu$. Let $\{(Y^n, \eta_0^n, \eta_1^n, \ldots, \eta_m^n)\}$ be the sequence of the corresponding solutions of the patchwork martingale problem and associated increasing processes. Then, by the density of $\mathcal{D}$ and Theorems 3.9.1 and 3.9.4 of [10], $\{(Y^n, \eta_0^n, \eta_1^n, \ldots, \eta_m^n)\}$ is relatively compact under the Skorohod topology on $D_{E\times\mathbb{R}^{m+1}}[0,\infty)$.

Let $\{(Y^{n_k}, \eta_0^{n_k}, \eta_1^{n_k}, \ldots, \eta_m^{n_k})\}$ be a subsequence converging to a limit $(Y, \eta_0, \eta_1, \ldots, \eta_m)$. Then $Y$ is a solution of the patchwork martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ with initial condition $\mu$, and $\eta_0, \eta_1, \ldots, \eta_m$ are associated increasing processes. By iii) and Lemma 1.8 and Lemma 1.9 in [19], $\eta_0$ is strictly increasing and diverges to infinity as $t$ goes to infinity, with probability one. It follows that $\{(\eta_0^{n_k})^{-1}\}$ converges to $(\eta_0)^{-1}$, and hence $\{(X^{n_k}, \gamma_1^{n_k}, \ldots, \gamma_m^{n_k})\} = \{(Y^{n_k}\circ(\eta_0^{n_k})^{-1}, \eta_1^{n_k}\circ(\eta_0^{n_k})^{-1}, \ldots, \eta_m^{n_k}\circ(\eta_0^{n_k})^{-1})\}$ converges to $(Y\circ(\eta_0)^{-1}, \eta_1\circ(\eta_0)^{-1}, \ldots, \eta_m\circ(\eta_0)^{-1})$, and $Y\circ(\eta_0)^{-1}$, with associated local times $\eta_1\circ(\eta_0)^{-1}, \ldots, \eta_m\circ(\eta_0)^{-1}$, is a solution of the constrained martingale problem for $(A, E_0; B_1, E_1; \ldots; B_m, E_m)$ with initial condition $\mu$.

Condition 5.6(d). Let $\varphi_1$ be the function of iii) for $n = 1$. By Lemma 5.4 and iii) we have
$$E\Big[\sum_{k=1}^m \int_0^t e^{-\lambda s}\,d\gamma_k(s)\Big] \le e^{-\lambda t} E[\varphi_1(X(t))] - E[\varphi_1(X(0))] + \int_0^t e^{-\lambda s}\, E\big[\lambda\varphi_1(X(s)) - A\varphi_1(X(s))\big]\,ds.$$
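The right-hand side of this last inequality is bounded uniformly in $t$; spelling this out gives the finiteness required in Condition 5.6(d):

```latex
% Each term on the right-hand side is controlled by sup norms:
E\Big[\sum_{k=1}^m \int_0^t e^{-\lambda s}\, d\gamma_k(s)\Big]
  \;\le\; 2\,\|\varphi_1\| \;+\; \frac{\|\lambda\varphi_1 - A\varphi_1\|}{\lambda}
  \qquad \text{for all } t \ge 0,
% and monotone convergence as t → ∞ then yields
% E[ Σ_{k=1}^m ∫_0^∞ e^{-λs} dγ_k(s) ] < ∞,  which is Condition 5.6(d).
```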
Examples
Several examples of application of the results of the previous sections can be given by exploiting comparison principles proved in the literature. Here we discuss two examples in detail.

The first example is a class of diffusion processes reflecting in a domain $D \subseteq \mathbb{R}^d$ according to an oblique direction of reflection which may become tangential. This case is not covered by the existing literature on reflecting diffusions, which assumes that the direction of reflection is uniformly bounded away from the tangent hyperplane.

The second example is a large class of jump diffusion processes with jump component of unbounded variation and possibly degenerate diffusion matrix. In this case uniqueness results are already available in the literature (see e.g. [15], [13], [21]), but we believe it is still a good benchmark to show how our method works.

Let $D \subseteq \mathbb{R}^d$, $d \ge 2$, be a bounded domain with $C^2$ boundary, i.e.
$$D = \{x \in \mathbb{R}^d : \psi(x) > 0\}, \qquad \partial D = \{x \in \mathbb{R}^d : \psi(x) = 0\}, \qquad |\nabla\psi(x)| > 0 \ \text{for } x \in \partial D,$$
for some function $\psi \in C^2(\mathbb{R}^d)$, where $\nabla$ denotes the gradient, viewed as a row vector. Let $l : \partial D \to \mathbb{R}^d$ be a vector field of class $C^1$ such that
$$|l(x)| > 0 \ \text{ and } \ \langle l(x), \nu(x)\rangle \ge 0, \qquad \forall x \in \partial D, \qquad (6.1)$$
$\nu$ being the unit inward normal vector field, and let
$$\partial_0 D = \{x \in \partial D : \langle l(x), \nu(x)\rangle = 0\}. \qquad (6.2)$$
We assume that $\partial_0 D$ has dimension $d-2$. More precisely, for $d \ge 3$, we assume that $\partial_0 D$ has a finite number of connected components, each of the form
$$\{x \in \partial D : \psi(x) = 0,\ \widetilde{\psi}(x) = 0\}, \qquad (6.3)$$
where $\psi$ is the function above and $\widetilde{\psi}$ is another function in $C^2(\mathbb{R}^d)$ such that the level set $\{x : \widetilde{\psi}(x) = 0\}$ is bounded and $|\nabla\widetilde{\psi}(x)| > 0$ on it. For $d = 2$, we assume that $\partial_0 D$ consists of a finite number of points.
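To make the geometric setup concrete, here is a minimal illustration (ours, not taken from the text): the unit disk with a single boundary point at which the direction of reflection becomes tangential.

```latex
% Illustration (not from the text): d = 2, the unit disk.
%   ψ(x) = 1 − |x|²,   D = {x : |x| < 1},   ν(x) = −x on ∂D (inward normal).
% Parametrize ∂D by x(θ) = (cos θ, sin θ), with unit tangent
% t(θ) = (−sin θ, cos θ), and take as direction of reflection
l(x(\theta)) \;=\; \sin^2(\theta/2)\,\nu(x(\theta)) \;+\; t(\theta).
% Then |l|² = sin⁴(θ/2) + 1 > 0 and ⟨l, ν⟩ = sin²(θ/2) ≥ 0, with equality
% exactly at θ = 0, so ∂₀D = {(1,0)} is a single point, as allowed for d = 2.
```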
In addition, we assume that $l(x)$ is never tangential to $\partial_0 D$.

Our goal is to prove uniqueness of the reflecting diffusion process with generator of the form
$$Af(x) = \frac{1}{2}\,\mathrm{tr}\big(D^2 f(x)\,\sigma(x)\sigma(x)^T\big) + \nabla f(x)\, b(x), \qquad (6.4)$$
where $\sigma$ and $b$ are Lipschitz continuous functions on $\bar{D}$, and direction of reflection $l$. We will characterize this reflecting diffusion process as the unique solution of the constrained martingale problem for $(A, D; B, \partial D)$, where $A$ is given by (6.4),
$$Bf(x) = \nabla f(x)\, l(x), \qquad (6.5)$$
and the common domain of $A$ and $B$ is $\mathcal{D} = C^2(\bar{D})$. Our tools will be the results of Section 5 and the comparison principle proved in [25].

Proposition 6.1.
Condition 5.6 is verified.

Proof. Condition 5.6(a) is obviously verified, so we only need to prove that the assumptions of Proposition 5.13 are satisfied. Let $0 < r_0 < 1$ be small enough that, for $d(x,\partial D) < r_0$, the normal projection of $x$ on $\partial D$, $\pi_\nu(x)$, is well defined and $|\nabla\psi(x)| > 0$. Set $U(\bar{D}) = \{x : d(x,\bar{D}) < r_0\}$. Let $\chi$ be a nondecreasing function in $C^\infty(\mathbb{R})$ such that $0 \le \chi \le 1$, $\chi(c) = 1$ for $c \ge r_0$, $\chi(c) = 0$ for $c \le r_0/2$. We can extend $l$ to a Lipschitz continuous vector field on $\overline{U(\bar{D})}$ by setting, for $x \in U(\bar{D}) - \bar{D}$,
$$l(x) = \big(1 - \chi(d(x,\partial D))\big)\, l(\pi_\nu(x)).$$
We can also extend $\sigma$ and $b$ to Lipschitz continuous functions on $\overline{U(\bar{D})}$ by setting, for $x \in U(\bar{D}) - \bar{D}$,
$$\sigma(x) = \big(1 - \chi(d(x,\partial D))\big)\, \sigma(\pi_\nu(x)), \qquad b(x) = \big(1 - \chi(d(x,\partial D))\big)\, b(\pi_\nu(x)).$$
Clearly, both the martingale problem for $A$, with domain $C^2(\overline{U(\bar{D})})$, and the martingale problem for $B$, with the same domain, have a solution for every initial condition $\delta_x$, $x \in \overline{U(\bar{D})}$. Since every $f \in C^2(\bar{D})$ can be extended to a function $\widetilde{f} \in C^2(\overline{U(\bar{D})})$ and $Af = \big(A\widetilde{f}\big)|_{\bar{D}}$, $Bf = \big(B\widetilde{f}\big)|_{\bar{D}}$, Condition (i) in Proposition 5.13 is verified.

Next, consider the function $\varphi$ defined as
$$\varphi(x) = \begin{cases} 0, & \text{for } x \in \bar{D}, \\ \exp\{-1/d(x,\partial D)\}, & \text{for } x \in U(\bar{D}) - \bar{D}, \end{cases}$$
where $U(\bar{D})$ is as above. Since $\partial D$ is of class $C^2$, $\varphi \in C^2(U(\bar{D}))$. Moreover
$$\nabla\varphi(x) = -|\nabla\varphi(x)|\,\nu(\pi_\nu(x)), \qquad \text{for } x \in U(\bar{D}) - \bar{D}.$$
Therefore $\varphi$ satisfies Condition (ii) in Proposition 5.13.

Finally, in order to verify (iii) of Proposition 5.13, we just need to modify slightly the proof of Lemma 3.1 in [25]. Suppose first that $\partial_0 D$ is connected. Let $\widetilde{\psi}$ be the function in (6.3).
Since l ( x ) is never tangent to ∂ D , it must hold ∇ (cid:101) ψ ( x ) l ( x ) (cid:54) = 0 for each x ∈ ∂ D ,and hence, possibly replacing ψ by − ψ , we can assume that (cid:101) ψ ( x ) = 0 , ∇ (cid:101) ψ ( x ) l ( x ) > , ∀ x ∈ ∂ D. (6.6)Let U ( ∂ D ) be a neighborhood of ∂ D such that inf U ( ∂ D ) ∇ (cid:101) ψ ( x ) l ( x ) > , and for each n ∈ N , set ∂ n D = (cid:110) x ∈ ∂D ∩ U ( ∂ D ) : | (cid:101) ψ ( x ) | < n (cid:111) , (cid:101) C n = ∂n D ∇ (cid:101) ψ ( x ) l ( x ) ,C n = (cid:101) C n sup ∂D |∇ (cid:101) ψ ( x ) l ( x ) | +1inf ∂D − ∂n D ∇ ψ ( x ) l ( x ) . Let χ n be a function in C ∞ ( R ) such that χ n ( c ) = nc for | c | ≤ n , χ n ( c ) = − for c ≤ − n , χ n ( c ) = 1 for c ≥ n , ≤ χ (cid:48) n ( c ) ≤ n for every c ∈ R , and define ϕ n ( x ) = χ n ( C n ψ ( x )) + (cid:101) C n χ n ( (cid:101) ψ ( x )) . Then | ϕ n ( x ) | is bounded by ∂ D ∇ (cid:101) ψ ( x ) l ( x ) and we have, for x ∈ ∂ n D , ∇ ϕ n ( x ) l ( x ) = n (cid:104) C n ∇ ψ ( x ) l ( x ) + (cid:101) C n ∇ (cid:101) ψ ( x ) l ( x ) (cid:105) ≥ n, EJP (2015), paper 67. Page 23/27 ejp.ejpecp.org nd for x ∈ ∂D − ∂ n D , ∇ ϕ n ( x ) l ( x ) = nC n ∇ ψ ( x ) l ( x ) + (cid:101) C n χ (cid:48) n ( (cid:101) ψ ( x )) ∇ (cid:101) ψ ( x ) l ( x ) ≥ n. If ∂ D is not connected, there is a function (cid:101) ψ k satisfying (6 . for each connectedcomponent ∂ k D . Let U k ( ∂ k D ) be neighborhoods such that inf U k ( ∂ k D ) ∇ (cid:101) ψ k ( x ) l ( x ) > .We can assume, without loss of generality, that U k ( ∂ k D ) ⊆ V k ( ∂ k D ) , where V k ( ∂ k D ) are pairwise disjoint and (cid:101) ψ k vanishes outside V k ( ∂ k D ) . 
Then, defining $\partial^{k,n} D$ and $\widetilde C_n^k$ as above,
$$C_n^k = \frac{\widetilde C_n^k \sup_{\partial D} |\nabla\widetilde\psi_k(x)\, l(x)| + 1}{\inf_{\partial D - \cup_k \partial^{k,n} D} \nabla\psi(x)\, l(x)},$$
and $\varphi_n^k$ as above, $\varphi_n(x) = \sum_k \varphi_n^k(x)$ verifies iii) of Proposition 5.13.

Theorem 2.6 of [25] gives the comparison principle for a class of linear and nonlinear equations that includes, in particular, the partial differential equation with boundary conditions
$$\lambda u(x) - Au(x) = h(x), \quad \text{in } D, \qquad -Bu(x) = 0, \quad \text{on } \partial D, \tag{6.7}$$
where $h$ is a Lipschitz continuous function, and $A$, $B$ are given by (6.4), (6.5) and verify, in addition to the assumptions formulated at the beginning of this section, the following local condition on $\partial D$.

Condition 6.2.
For every $x \in \partial D$, let $\phi$ be a $C^2$ diffeomorphism from the closure of a suitable neighborhood $V$ of the origin onto the closure of a suitable neighborhood $U(x)$ of $x$, such that $\phi(0) = x$ and the $d$-th column of $J\phi(z)$, $J_d\phi(z)$, satisfies
$$J_d\phi(z) = -l(\phi(z)), \qquad \forall z \in \phi^{-1}\big(\partial D \cap U(x)\big). \tag{6.8}$$
Let $\widetilde A$,
$$\widetilde A f(z) = \frac 12 \operatorname{tr}\big( D^2 f(z)\, \widetilde\sigma(z) \widetilde\sigma^T(z) \big) + \nabla f(z)\, \widetilde b(z),$$
be the operator such that
$$\widetilde A (f \circ \phi)(z) = A f(\phi(z)), \qquad \forall z \in \phi^{-1}\big(\overline D \cap U(x)\big).$$
Assume:

a) $\widetilde b_i$, $i = 1, \dots, d-1$, is a function of the first $d-1$ coordinates $(z_1, \dots, z_{d-1})$ only, and $\widetilde b_d$ is a function of $z_d$ only.

b) $\widetilde\sigma_{ij}$, $i = 1, \dots, d-1$, $j = 1, \dots, d$, is a function of the first $d-1$ coordinates $(z_1, \dots, z_{d-1})$ only.

Remark 6.3.
For every $x \in \partial D$, some coordinate of $l(x)$, say the $d$-th coordinate, must be nonzero. Then in (6.8) we can choose $U(x)$ such that $l_d \ne 0$ in $U(x)$, and we can replace $l(x)$ by $l(x)/|l_d(x)|$, since this normalization does not change the boundary condition of (6.7) in $\overline D \cap U(x)$ (i.e. any viscosity sub/supersolution of (6.7) in $\overline D \cap U(x)$ is a viscosity sub/supersolution of (6.7) in $\overline D \cap U(x)$ with the normalized vector field, and conversely). Moreover, since (6.8) must be verified only on $\phi^{-1}\big(\partial D \cap U(x)\big)$, in the construction of $\phi$ we can use any $C^2$ vector field that agrees with $l$, or with the above normalization of $l$, on $\partial D \cap U(x)$.

Therefore, whenever Condition 6.2 is satisfied, Theorem 5.11 applies and there exists one and only one diffusion process reflecting in $\overline D$ according to the degenerate oblique direction of reflection $l$.

The following is a concrete example where Condition 6.2 is satisfied.

Example 6.4.
Let $D = B_1(0) \subseteq \mathbb R^2$, and suppose the direction of reflection $l$ satisfies (6.1) with the strict inequality at every $x \in \partial D$ except at $x = (1, 0)$, where
$$l(1, 0) = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.$$
Of course, in a neighborhood of $x = (1, 0)$ we can always assume that $l$ depends only on the second coordinate $x_2$. In addition, by Remark 6.3, we can suppose $l_2(x) = -1$. Consider
$$\sigma(x) = \sigma = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$
and a drift $b$ that, in a neighborhood of $x = (1, 0)$, depends only on the second coordinate, i.e. $b(x) = b(x_2)$. Assume that, in a neighborhood of $x = (1, 0)$, the direction of reflection $l$ is parallel to $b$. Then we can find a nonlinear change of coordinates $\phi$ such that Condition 6.2 is verified, namely
$$\phi_1(z) = -\int_0^{z_2} l_1(\zeta)\, d\zeta + z_1 + 1, \qquad \phi_2(z) = z_2,$$
which yields
$$\widetilde\sigma(z) = \sigma, \qquad \widetilde b_1(z) = 0, \qquad \widetilde b_2(z) = b_2(z_2).$$

Consider the operator $Af = Lf + Jf$,
$$Lf(x) = \frac 12 \operatorname{tr}\big( a(x)\, D^2 f(x) \big) + \nabla f(x)\, b(x), \tag{6.9}$$
$$Jf(x) = \int_{\mathbb R^{d'} - \{0\}} \big[ f(x + \eta(x, z)) - f(x) - \nabla f(x)\, \eta(x, z)\, I_{|z| < 1} \big]\, m(dz),$$
where $\nabla$ is viewed as a row vector. Assume:

Condition 6.5.

a) $a = \sigma\sigma^T$, $\sigma$ and $b$ are continuous.

b) $\eta(\cdot, z)$ is continuous for every $z$, $\eta(x, \cdot)$ is Borel measurable for every $x$, $\sup_{|z| < 1} |\eta(x, z)| < +\infty$ for every $x$, and
$$|\eta(x, z)|\, I_{|z| < 1} \le \rho(z)(1 + |x|)\, I_{|z| < 1},$$
for some positive, measurable function $\rho$ such that $\lim_{|z| \to 0} \rho(z) = 0$.

c) $m$ is a Borel measure such that
$$\int_{\mathbb R^{d'} - \{0\}} \big[ \rho(z)^2\, I_{|z| < 1} + I_{|z| \ge 1} \big]\, m(dz) < +\infty.$$

Then, with $\mathcal D(A) = \{ f + c : f \in C^2_c(\mathbb R^d),\ c \in \mathbb R \}$, $A \subset C_b(E) \times C_b(E)$ and $A$ satisfies Condition 3.1.

A comparison principle for bounded subsolutions and supersolutions of the resolvent equation $\lambda u - Au = h$, when $A$ is given by (6.9), is proven in [16], as a special case of a more general result, under the following assumptions:

Condition 6.6.

a) $\sigma$ and $b$ are Lipschitz continuous.

b) $|\eta(x, z) - \eta(y, z)|\, I_{|z| < 1} \le \rho(z)\, |x - y|\, I_{|z| < 1}$.

c) $h$ is uniformly continuous.
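The role of the integrability condition in Condition 6.5 c) can be seen from the standard second-order Taylor estimate on the compensated small-jump part of $J$. The following is a sketch, assuming (as the infinite-variation setting suggests) that the small-jump term in c) involves $\rho(z)^2$:

```latex
% For f \in C^2_c(\mathbb R^d) and |z| < 1, Taylor's theorem and
% Condition 6.5 b) give
\[
\big| f(x + \eta(x, z)) - f(x) - \nabla f(x)\, \eta(x, z) \big|
\le \tfrac 12 \|D^2 f\|_\infty\, |\eta(x, z)|^2
\le \tfrac 12 \|D^2 f\|_\infty\, (1 + |x|)^2\, \rho(z)^2 ,
\]
% which, for fixed x, is m-integrable over \{|z| < 1\} by c). For
% |z| \ge 1 the integrand is simply bounded by 2\|f\|_\infty, and
% m(\{|z| \ge 1\}) < \infty, again by c). Hence Jf(x) is well defined
% for every x, which is the first step in checking A \subset
% C_b(E) \times C_b(E).
```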
Then, under the above assumptions, Theorem 3.7 applies and uniqueness of the solution of the martingale problem for $A$ follows.

References

[1] Bardi, M., Cesaroni, A. and Manca, L.:
Convergence by viscosity methods in multiscale financial models with stochastic volatility, SIAM J. Financial Math. (2010), no. 1, 230–265. MR-2658580

[2] Bayraktar, E. and Sîrbu, M.: Stochastic Perron's method and verification without smoothness using viscosity comparison: the linear case, Proc. Amer. Math. Soc. (2012), no. 10, 3645–3654. MR-2929032

[3] Benth, F. E., Karlsen, K. H. and Reikvam, K.: Optimal portfolio selection with consumption and nonlinear integro-differential equations with gradient constraint: a viscosity solution approach, Finance Stoch. (2001), no. 3, 275–303. MR-1849422

[4] Costantini, C. and Kurtz, T. G.: Diffusion approximation for transport processes with general reflection boundary conditions, Math. Models Methods Appl. Sci. (2006), no. 5, 717–762. MR-2226124

[5] Costantini, C., Papi, M. and D'Ippoliti, F.: Singular risk-neutral valuation equations, Finance Stoch. (2012), no. 2, 249–274. MR-2903624

[6] Crandall, M. G., Ishii, H. and Lions, P. L.: User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) (1992), no. 1, 1–67. MR-1118699

[7] Crandall, M. G. and Lions, P. L.: Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. (1983), no. 1, 1–42.

[8] Dynkin, E. B.: Markov Processes. Vols. I, II, translated with the authorization and assistance of the author by J. Fabius, V. Greenberg, A. Maitra, G. Majone, Die Grundlehren der Mathematischen Wissenschaften, Bände 121, 122, Academic Press Inc., Publishers, New York, 1965. MR-0193671

[9] Ekren, I., Keller, C., Touzi, N. and Zhang, J.: On viscosity solutions of path dependent PDEs, Ann. Probab. (2014), no. 1, 204–236. MR-3161485

[10] Ethier, S. N. and Kurtz, T. G.: Markov Processes: Characterization and Convergence, Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, John Wiley & Sons Inc., New York, 1986. MR-838085

[11] Feng, J. and Kurtz, T. G.: Large Deviations for Stochastic Processes, Mathematical Surveys and Monographs, vol. 131, American Mathematical Society, Providence, RI, 2006. MR-2260560
[12] Fleming, W. H. and Soner, H. M.: Controlled Markov Processes and Viscosity Solutions, second ed., Stochastic Modelling and Applied Probability, vol. 25, Springer, New York, 2006. MR-2179357

[13] Graham, C.: McKean-Vlasov Itô-Skorohod equations, and nonlinear diffusions with discrete jump sets, Stochastic Process. Appl. (1992), no. 1, 69–82. MR-1145460

[14] Gray, L. and Griffeath, D.: Unpublished manuscript, 1977.

[15] Ikeda, N. and Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, second ed., North-Holland Mathematical Library, vol. 24, North-Holland Publishing Co., Amsterdam, 1989. MR-1011252

[16] Jakobsen, E. R. and Karlsen, K. H.: A "maximum principle for semicontinuous functions" applicable to integro-partial differential equations, NoDEA Nonlinear Differential Equations Appl. (2006), no. 2, 137–165. MR-2243708

[17] Jakubowski, A.: A non-Skorohod topology on the Skorohod space, Electron. J. Probab. (1997), no. 4, 21 pp. (electronic). MR-1475862

[18] Kabanov, Y. and Klüppelberg, C.: A geometric approach to portfolio optimization in models with transaction costs, Finance Stoch. (2004), no. 2, 207–227. MR-2048828

[19] Kurtz, T. G.: Martingale problems for constrained Markov problems, Recent advances in stochastic calculus (College Park, MD, 1987), Progr. Automat. Info. Systems, Springer, New York, 1990, pp. 151–168. MR-1255166

[20] Kurtz, T. G.: A control formulation for constrained Markov processes, Mathematics of random media (Blacksburg, VA, 1989), Lectures in Appl. Math., vol. 27, Amer. Math. Soc., Providence, RI, 1991, pp. 139–150. MR-1117242

[21] Kurtz, T. G. and Protter, P. E.: Weak convergence of stochastic integrals and differential equations. II. Infinite-dimensional case, Probabilistic models for nonlinear partial differential equations (Montecatini Terme, 1995), Lecture Notes in Math., vol. 1627, Springer, Berlin, 1996, pp. 197–285. MR-1431303

[22] Kurtz, T. G. and Stockbridge, R. H.: Stationary solutions and forward equations for controlled and singular martingale problems, Electron. J. Probab. (2001), no. 17, 52 pp. (electronic). MR-1873294

[23] Ma, J. and Zhang, J.: On weak solutions of forward-backward SDEs, Probab. Theory Related Fields (2011), no. 3-4, 475–507. MR-2851690

[24] Pham, H.: Optimal stopping of controlled jump diffusion processes: a viscosity solution approach, J. Math. Systems Estim. Control (1998), no. 1, 27 pp. (electronic). MR-1650147

[25] Popivanov, P. and Kutev, N.: Viscosity solutions to the degenerate oblique derivative problem for fully nonlinear elliptic equations, Math. Nachr. (2005), no. 7-8, 888–903. MR-2141965

[26] Soner, H. M. and Touzi, N.