Translation Invariant Diffusions and Stochastic Partial Differential Equations in ${\cal S}^{\prime}$

B. Rajeev
Email: [email protected]
May 7, 2019
Abstract
In this article we show that the ordinary stochastic differential equations of K. Itô may be considered as part of a larger class of second order stochastic PDE's that are quasi linear and have the property of translation invariance. We show, using the 'monotonicity inequality' and the Lipschitz continuity of the coefficients $\sigma_{ij}$ and $b_i$, existence and uniqueness of strong solutions for these stochastic PDE's. Using pathwise uniqueness, we prove the strong Markov property.

Keywords: ${\cal S}'$ valued process, diffusion processes, Hermite-Sobolev space, strong solution, quasi linear SPDE, monotonicity inequality, translation invariance

Subject classification: [2010] 60G51, 60H10, 60H15

1 Introduction

The notion of an ordinary stochastic differential equation (SDE) was introduced by K. Itô in [23] and has since become the main tool for modelling diffusion phenomena as a random process (see for example [34]). The approach to diffusions as random processes goes back to the work of A. N. Kolmogorov [26], and was studied by N. Wiener [48], W. Feller [16], J. L. Doob [13] and P. Lévy [31]. The theory was extended further by D. W. Stroock and S. R. S. Varadhan in their well known 'weak formulation' or 'martingale formulation' [43]. The subject of stochastic partial differential equations (SPDE), on the other hand, is of more recent vintage ([47],[30]). It extends the logic of perturbing an ordinary differential equation by noise, inherent in the Itô approach, to partial differential equations. Although the underlying probabilistic logic is the same, the mathematics of these two models can be vastly different, the latter more often than not involving the tools and techniques of function space analysis (see for example [21], [12]).
On the other hand, one of the fundamental features that continues to sustain interest in the Itô approach, both in applications and in theory, is the connection with other areas of mathematics like partial differential equations and potential theory ([29],[3],[42]); more recent examples are the notion of 'viscosity solutions' related to the Hamilton-Jacobi-Bellman equation ([32]), and backward SDE's ([35]). In this paper, we show that the two approaches, viz. the SDE and the SPDE approaches, can be unified into a single framework, in which the SDE approach (with an extra parameter) is equivalent to the SPDE approach; mathematically speaking, both may be viewed as parts of a single structure. Our method may be considered a variant of the well known 'method of characteristics' in PDE, which constructs solutions of PDE's from the ordinary differential equations satisfied by the 'characteristic curves' associated with the PDE (see [15], Chapter 3, and [30], Chapter 6, for the stochastic case). The difference in our approach lies in the treatment of non-linearities, i.e. in the manner in which the coefficients in the PDE or SPDE are allowed to depend on the solutions. In this paper, we first construct the solutions of the SPDE and then deduce the solutions of the corresponding SDE. The reverse construction of solutions of SPDE's from those of the associated SDE's was already done, via the Itô formula, in [36],[39]. It turns out that the solutions of Itô's SDE's correspond to rather singular solutions of the associated (quasi linear) SPDE, in a manner analogous to the way in which 'fundamental solutions' are associated to certain second order partial differential equations.
The solutions of the SPDE so constructed arise, in a unique fashion, as translations of the initial condition of the SPDE by the solution of the 'characteristic' SDE starting at the origin.

In more detail, we construct in this paper a general method of solving the stochastic partial differential equation (SPDE) driven by an $n$-dimensional Brownian motion $(B_t)$ in the form
$$dY_t = L(Y_t)\,dt + A(Y_t)\cdot dB_t;\qquad Y_0 = y.$$
Here $L$ and $A = (A_1,\cdots,A_n)$ are non-linear partial differential operators of the second and first order respectively on the space of tempered distributions ${\cal S}'$ on $\mathbb{R}^d$, given by equations (2) and (3) below. The initial condition $y$ is an arbitrary tempered distribution whose regularity may be measured on a decreasing scale of Hilbert spaces ${\cal S}_p,\ p \in \mathbb{R}$. In particular $y \in {\cal S}_p$ for some $p \in \mathbb{R}$. The operators $L$ and $A_i$ are quasi-linear, i.e. they are constant coefficient differential operators once the values of the coefficients $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$ are fixed; $L$ is of order two and the $A_i$'s of order one. Consequently, $L, A_i : {\cal S}_p \to {\cal S}_q$, $q \le p-1$, $i = 1,\cdots,n$ (see Section 2). Thus the domain and range of the operators $L$ and $A_i$ differ, leading to what K. Itô in [24] refers to as a 'Type 2' equation. This also introduces the principal difficulty in solving our SPDE, since there is no obvious way of using techniques such as 'Picard iteration'. However, by assuming a Lipschitz condition on $\sigma_{ij}$ and $b_i$ with respect to the norm $\|\cdot\|_q$, $q \le p-1$, we implement a modified form of Picard iteration to solve the above SPDE. The solutions of the above equation have the property that they are translation invariant, i.e. they can be written as $Y_t(y) = \tau_{Z_t(y)}\,y$, where $\tau_x : {\cal S}' \to {\cal S}'$ are the translation operators and $(Z_t(y))$ is a finite dimensional process that depends on the initial value $y$. This has the consequence that the solution corresponding to the translate $\tau_x y$ is the translate of $y$ by the process $(x + Z_t(\tau_x y))$.
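The translation invariant structure described above can be illustrated with a small numerical sketch. The following Python fragment (in $d = n = 1$, with hypothetical locally Lipschitz coefficients $\bar\sigma, \bar b$ and an arbitrary smooth initial condition $y$; it illustrates the formula $Y_t = \tau_{Z_t} y$, and is not part of the paper's construction) simulates the finite dimensional 'characteristic' process $(Z_t)$ by an Euler-Maruyama scheme and realises the SPDE solution as the translate $Y_t(x) = y(x - Z_t)$:

```python
import numpy as np

# Sketch in d = n = 1: simulate the 'characteristic' SDE
#   dZ_t = sigma_bar(Z_t) dB_t + b_bar(Z_t) dt,  Z_0 = 0,
# by Euler-Maruyama, and realise the SPDE solution as the translate
# Y_t = tau_{Z_t} y, i.e. Y_t(x) = y(x - Z_t).  The coefficients and the
# initial condition below are arbitrary (hypothetical) choices.

rng = np.random.default_rng(0)

sigma_bar = lambda z: 0.5 + 0.1 * np.cos(z)   # locally Lipschitz diffusion coefficient
b_bar = lambda z: -0.2 * z                    # locally Lipschitz drift
y = lambda x: np.exp(-x**2)                   # smooth, rapidly decreasing initial condition

T, n_steps = 1.0, 1000
dt = T / n_steps
Z = 0.0
for _ in range(n_steps):                      # Euler-Maruyama for (Z_t)
    dB = rng.normal(scale=np.sqrt(dt))
    Z += sigma_bar(Z) * dB + b_bar(Z) * dt

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
Y_T = y(x - Z)                                # Y_T = tau_{Z_T} y

# Translation invariance: Y_T is just y shifted by Z_T, so for instance its
# total mass is preserved.
mass_0 = y(x).sum() * dx
mass_T = Y_T.sum() * dx
print(abs(mass_0 - mass_T) < 1e-6)            # True
```

In the paper the finite dimensional coefficients are not free choices as here but arise from the functionals on ${\cal S}_p$ via $\bar\sigma(z) = \sigma(\tau_z y)$, $\bar b(z) = b(\tau_z y)$, as described below.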
Note that the ${\cal S}_p$ themselves are invariant under translations, i.e. $\tau_x : {\cal S}_p \to {\cal S}_p$ ([37]). The action of the translation operators on $y$ gives rise to finite dimensional coefficients $\bar\sigma_{ij}, \bar b_i$, $i = 1,\cdots,d$, $j = 1,\cdots,n$, defined by $\bar\sigma_{ij}(z) := \sigma_{ij}(\tau_z y)$, $\bar b_i(z) := b_i(\tau_z y)$, $z \in \mathbb{R}^d$. It turns out that $X^x_t := x + Z_t(\tau_x y)$ solves the ordinary stochastic differential equation driven by $(B_t)$ with coefficients $\bar\sigma_{ij}, \bar b_i$ and initial value $X^x_0 = x$. In recent times distribution dependent SDE's have become an active area of research (see for example [2],[11],[27],[1],[10] and references therein). We refer to Example 6 in Section 6 below for some connections between distribution dependent SDE's and our results. Our work also relates to the problem of identifying 'invariant submanifolds' of solutions of SPDEs that arise in finance (see [8],[9],[14],[44]). In effect, the set of translates $\{\tau_x y : x \in \mathbb{R}^d\}$ of $y \in {\cal S}_p$ plays this role, under some smoothness assumptions on $y$.

Our method relies on three ingredients: one, a quasi-linear extension of linear differential operators, obtained by identifying the coefficients $\sigma_{ij}(x)$ as the restriction of the functional $\langle \sigma_{ij}, \phi \rangle$, $\phi \in {\cal S}_{-p} = {\cal S}_p'$, $\sigma_{ij} \in {\cal S}_p$, $p > d/4$, to the distribution $\phi = \delta_x$; two, an Itô formula for translations of tempered distributions by semimartingales (see [36],[4],[46]); and finally, the monotonicity inequality (see [5],[20]). Indeed, this last inequality, whose abstract version has been known for some time (see [28],[25],[18]), has proved to be an indispensable tool for proving uniqueness results for SPDE's in the framework of a scale of Hilbert spaces of the type discussed above (see [20],[39]). Our results below show that it can also be used for proving existence results.

The paper is organised as follows. After the preliminaries in Section 2, we prove in Section 3, using the monotonicity inequality (Theorem 3.1), some extensions of the same in Theorems 3.2 and 3.3 respectively, viz.
in the case where the pair of operators $(A, L)$ has variable coefficients. These inequalities are crucial for the convergence results in Section 4, which contains the main existence and uniqueness results in Theorem 4.3. Our proof of existence is tailored for the infinite dimensional situation and applies to more general situations. A simpler proof is indicated in Remark 4.4. In Section 5, we construct the 'maximal' solutions up to an explosion time and prove the strong Markov property (Theorem 5.6) using the pathwise uniqueness established in Section 4. In Section 6, we look at several examples. Examples 1, 2 & 3 relate to finite dimensional diffusions. Example 4 relates solutions of our SPDE with solutions of the associated martingale problem for $L$. Example 5 deals with the stochastic representation of the solutions of the non-linear evolution equation canonically associated with the operator $L$. In Example 6, we consider the situation where the coefficients in the finite dimensional equation depend on the marginal law of the process. Finally, Example 7 deals with extensions of the operator $L$ that have a zeroth order term, and is related to the Feynman-Kac formula. In Section 7 we make some remarks on 'duality' and invariant measures in the context of our SPDE. Some technical results are in the Appendix. We use well known results on stochastic calculus for processes with values in a Hilbert space, for the proofs of which we refer to [12],[18],[33].

2 Preliminaries
Let $(\Omega, {\cal F}, \{{\cal F}_t\}_{t\ge 0}, P)$ be a filtered probability space satisfying the usual conditions, viz. 1) $(\Omega, {\cal F}, P)$ is a complete probability space, 2) ${\cal F}_0$ contains all $A \in {\cal F}$ such that $P(A) = 0$, and 3) ${\cal F}_t = \bigcap_{s > t} {\cal F}_s$, $t \ge 0$. On this probability space is given a standard $n$-dimensional ${\cal F}_t$-Brownian motion $(B_t) \equiv (B^1_t, \ldots, B^n_t)$. We will denote the filtration generated by $(B_t)$ by $({\cal F}^B_t)$. Let $\bar\sigma_{ij}, \bar b_i$ be locally Lipschitz functions on $\mathbb{R}^d$ for $i = 1,\cdots,d$, $j = 1,\cdots,n$. Let $\bar\sigma := (\bar\sigma_{ij})$ (so that $(\bar\sigma_{ij}(x))$, $x \in \mathbb{R}^d$, is a $d \times n$ matrix) and let $\bar b := (\bar b_1,\cdots,\bar b_d)$ be a vector field on $\mathbb{R}^d$. We use the notation $\hat{\mathbb{R}}^d = \mathbb{R}^d \cup \{\infty\}$ for the one point compactification of $\mathbb{R}^d$.

Theorem 2.1
Let $\bar\sigma, \bar b, (B_t)$ be as above and $x \in \mathbb{R}^d$. Then there exists $\eta : \Omega \to (0, \infty]$, $\eta$ an $({\cal F}^B_t)$ stopping time, and an $\hat{\mathbb{R}}^d$-valued, $({\cal F}^B_t)$ adapted process $(X_t)_{t \ge 0}$ such that

1. For all $\omega \in \Omega$, $X_\cdot(\omega) : [0, \eta(\omega)) \to \mathbb{R}^d$ is continuous and $X_t(\omega) = \infty$ for $t \ge \eta(\omega)$.
2. a.s. $(P)$, $\eta(\omega) < \infty$ implies $\lim_{t \uparrow \eta(\omega)} X_t(\omega) = \infty$.
3. a.s. $(P)$,
$$X_t = x + \int_0^t \bar\sigma(X_s)\cdot dB_s + \int_0^t \bar b(X_s)\,ds \qquad\qquad (1)$$
for $0 \le t < \eta(\omega)$.

The solution $(X_t, \eta)$ is (pathwise) unique, i.e. if $(X'_t, \eta')$ is another solution, then $P\{X_t = X'_t,\ 0 \le t < \eta \wedge \eta'\} = 1$.

Proof: We refer to [22], Chapter IV, Theorem 2.3 and Theorem 3.1 for the proofs (with appropriate modifications for the case $d \neq r$) of existence and uniqueness respectively. $\Box$

Let $\alpha, \beta \in \mathbb{Z}^d_+ := \{(x_1,\cdots,x_d) : x_i \ge 0,\ x_i\ \text{integer}\}$. Let $x^\alpha$ be the product $x^\alpha := x_1^{\alpha_1}\cdots x_d^{\alpha_d} \in \mathbb{R}$ and $\partial^\beta := \partial_1^{\beta_1}\cdots\partial_d^{\beta_d}$ the differential operator of order $\beta_1 + \cdots + \beta_d$ corresponding to the monomial $x^\beta$. For a multi index $\alpha$ we use the notation $|\alpha| := \sum_{i=1}^d \alpha_i$. Let ${\cal S}$ denote the space of rapidly decreasing smooth real functions on $\mathbb{R}^d$ with the topology given by the family of semi-norms $\{\wedge_{\alpha,\beta}\}$, defined for $f \in {\cal S}$ and multi indices $\alpha, \beta$ by $\wedge_{\alpha,\beta}(f) := \sup_x |x^\alpha \partial^\beta f(x)|$. Then $\{{\cal S}, \wedge_{\alpha,\beta} : \alpha,\beta \in \mathbb{Z}^d_+\}$ is a locally convex, complete, metrisable topological vector space, i.e. a Fréchet space. ${\cal S}'$ will denote its continuous dual. The duality between ${\cal S}$ and ${\cal S}'$ will be denoted by $\langle \psi, \phi \rangle$ for $\phi \in {\cal S}$ and $\psi \in {\cal S}'$. For $x \in \mathbb{R}^d$ the translation operators $\tau_x : {\cal S} \to {\cal S}$ are defined by $\tau_x f(y) := f(y - x)$ for $f \in {\cal S}$, and then for $\phi \in {\cal S}'$ by duality: $\langle \tau_x \phi, f \rangle := \langle \phi, \tau_{-x} f \rangle$.

Let $\{h_k : k \in \mathbb{Z}^d_+\}$ be the orthonormal basis of the real Hilbert space $L^2(\mathbb{R}^d, dx) \supset {\cal S}$ consisting of the Hermite functions (see for e.g. [45]); here $dx$ denotes Lebesgue measure, and the dependence on the dimension is suppressed whenever there is no risk of confusion. Let $\langle\cdot,\cdot\rangle$ be the inner product in $L^2(\mathbb{R}^d, dx)$. For $f, g \in {\cal S}$ and $p \in \mathbb{R}$ define the inner product $\langle f, g \rangle_p$ on ${\cal S}$ as follows:
$$\langle f, g \rangle_p := \sum_{k = (k_1,\cdots,k_d) \in \mathbb{Z}^d_+} (2|k| + d)^{2p}\, \langle f, h_k \rangle \langle g, h_k \rangle.$$
The corresponding norm will be denoted by $\|\cdot\|_p$. We define the Hilbert space ${\cal S}_p$ as the completion of ${\cal S}$ with respect to the norm $\|\cdot\|_p$, over the field of real numbers.
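The Hermite-Sobolev norms above are computable directly from Hermite coefficients. The following sketch (in $d = 1$, where $2|k| + d = 2k + 1$; the quadrature grid and the test function are arbitrary illustrative choices) evaluates $\|f\|_p^2 = \sum_k (2k+1)^{2p}\langle f, h_k\rangle^2$ numerically:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# Sketch (d = 1): the Hermite-Sobolev norm ||f||_p from the coefficients
# <f, h_k>, where h_k(x) = (2^k k! sqrt(pi))^{-1/2} H_k(x) e^{-x^2/2} and
# ||f||_p^2 = sum_k (2k + 1)^{2p} <f, h_k>^2.

def hermite_fn(k, x):
    """Normalised Hermite function h_k (physicists' convention)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                       # selects H_k in the Hermite series
    norm = 1.0 / sqrt(2.0**k * factorial(k) * sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

def sobolev_norm(f, p, n_modes=40):
    """||f||_p, truncating the Hermite expansion at n_modes coefficients."""
    x = np.linspace(-15.0, 15.0, 20001)
    dx = x[1] - x[0]
    total = 0.0
    for k in range(n_modes):
        c_k = np.sum(f(x) * hermite_fn(k, x)) * dx   # <f, h_k> by quadrature
        total += (2*k + 1)**(2*p) * c_k**2
    return sqrt(total)

f = lambda x: hermite_fn(0, x) + hermite_fn(2, x)     # ||f||_p^2 = 1 + 5^{2p}

# The norms decrease as p decreases, reflecting the inclusions S_p c S_q
# for q < p:
print(sobolev_norm(f, 1.0) >= sobolev_norm(f, 0.5) >= sobolev_norm(f, 0.0))
```

For this $f$ one has $\|f\|_0 = \sqrt{2}$, $\|f\|_{1/2} = \sqrt{6}$, $\|f\|_1 = \sqrt{26}$, so the chain of inequalities holds strictly.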
The following basic relations hold between the ${\cal S}_p$ spaces (see for e.g. [24],[25]): for $0 < q < p$,
$${\cal S} \subset {\cal S}_p \subset {\cal S}_q \subset L^2 = {\cal S}_0 \subset {\cal S}_{-q} \subset {\cal S}_{-p} \subset {\cal S}'.$$
Further, ${\cal S}' = \bigcup_{p \in \mathbb{R}} {\cal S}_p$ and $\bigcap_{p \in \mathbb{R}} {\cal S}_p = {\cal S}$. If $\{h^p_k : k \in \mathbb{Z}^d_+\}$ denotes the orthonormal basis of ${\cal S}_p$ consisting of the (normalised) Hermite functions $h^p_k := (2|k|+d)^{-p} h_k$, then the dual space ${\cal S}_p'$ may be identified with ${\cal S}_{-p}$, via the basis $\{h^{-p}_k : k \in \mathbb{Z}^d_+\}$ of ${\cal S}_{-p}$. For $\phi \in {\cal S}$ and $\psi \in {\cal S}'$ the bilinear form $(\psi, \phi) \to \langle \psi, \phi \rangle$ also gives the duality between ${\cal S}_p\ (\supset {\cal S})$ and ${\cal S}_{-p}\ (\subset {\cal S}')$. It is also well known that $\partial_i : {\cal S}_p \to {\cal S}_{p-\frac{1}{2}}$ are bounded linear operators for every $p \in \mathbb{R}$ and $i = 1,\cdots,d$. For Banach spaces $X$ and $Y$, $L(X,Y)$ will denote the Banach space of bounded linear operators from $X$ into $Y$.

Let $p \in \mathbb{R}$ and let $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$, $i = 1,\cdots,d$, $j = 1,\cdots,n$. We consider the (non-linear) operators $A := (A_1,\cdots,A_n) : {\cal S}_p \to L(\mathbb{R}^n, {\cal S}_{p-\frac{1}{2}})$, from ${\cal S}_p$ to the space of linear operators from $\mathbb{R}^n$ to ${\cal S}_{p-\frac{1}{2}}$, defined by
$$A_i(\phi) = -\sum_{k=1}^d \sigma_{ki}(\phi)\,\partial_k\phi,\qquad i = 1,\cdots,n, \qquad\qquad (2)$$
and the non-linear operator $L : {\cal S}_p \to {\cal S}_{p-1}$ defined as follows:
$$L(\phi) = \frac{1}{2}\sum_{i,j=1}^d a_{ij}(\phi)\,\partial^2_{ij}\phi - \sum_{i=1}^d b_i(\phi)\,\partial_i\phi, \qquad\qquad (3)$$
where $a_{ij}(\phi) := (\sigma(\phi)\sigma(\phi)^t)_{ij}$ and the superscript '$t$' denotes matrix transpose. Clearly, if $\sigma_{ij}(\phi)$ and $b_i(\phi)$ are bounded on the set $\{\phi \in {\cal S}_p : \|\phi\|_p \le \lambda\}$ for some $\lambda > 0$, then there exists $C = C(\lambda) > 0$ such that for $q \le p-1$, with $\{e_i : i = 1,\cdots,n\}$ the standard orthonormal basis in $\mathbb{R}^n$,
$$\|A(\phi)\|^2_{HS(q)} := \sum_{i=1}^n \|A(\phi)e_i\|^2_q =: \sum_{i=1}^n \|A_i(\phi)\|^2_q \le C^2\,\|\phi\|^2_p, \qquad \|L(\phi)\|_q \le C\,\|\phi\|_p,$$
for $\phi \in {\cal S}_p$ with $\|\phi\|_p \le \lambda$. In the above equalities and in what follows we use the notation $A(\phi)\cdot h := \sum_{i=1}^n A_i(\phi)\,h^i$, $h \in \mathbb{R}^n$. The subscript 'HS' refers to the Hilbert-Schmidt norm. The above inequalities follow from the boundedness of the operators $\partial_i : {\cal S}_p \to {\cal S}_{p-\frac{1}{2}}$ and the assumed (local) bounds on the coefficients $\sigma_{ij}$ and $b_i$.

3 The Monotonicity Inequality

In this section we will prove the 'monotonicity inequality' involving the pair (
$L, A$) defined in equations (2) and (3), which we will use in the proof of existence and uniqueness of the SPDE (18). The constant coefficient case was proved in [20]. Using techniques developed in [5], we prove the corresponding inequality and a variant of the same in the variable coefficient case, in Theorems 3.2 and 3.3 below.

Let $\sigma = (\sigma_{ij}) \in \mathbb{R}^{dn}$, $h = (h^1,\ldots,h^n) \in \mathbb{R}^n$ and $\phi \in {\cal S}_p$. Then we define $A : \mathbb{R}^{dn} \times {\cal S}_p \to L(\mathbb{R}^n, {\cal S}_q)$, a bilinear map, as follows:
$$A(\sigma, \phi)\cdot h := -\sum_{i=1}^n \sum_{k=1}^d \sigma_{ki}\,\partial_k\phi\; h^i.$$
Note that the symbols $\sigma_{ij}$ and $\sigma_{ij}(\cdot)$ have different meanings, the latter being a function on ${\cal S}_p$ and the former an element of $\mathbb{R}$. A similar remark holds for $b_i$ and $b_i(\cdot)$. For $\psi, \phi \in {\cal S}_p$ and $q \le p-1$,
$$A(\phi)\cdot h - A(\psi)\cdot h = A(\sigma(\phi), \phi-\psi)\cdot h + A(\sigma(\phi)-\sigma(\psi), \psi)\cdot h.$$
Similarly we can write
$$L(\phi) - L(\psi) = L_1(b(\phi), \phi-\psi) + L_1(b(\phi)-b(\psi), \psi) + L_2(a(\phi), \phi-\psi) + L_2(a(\phi)-a(\psi), \psi),$$
where $L_1, L_2$ are ${\cal S}_q$ valued bilinear maps on $\mathbb{R}^d \times {\cal S}_p$ and $\mathbb{R}^{d\times d} \times {\cal S}_p$ respectively, given as follows: Let $(b, \phi) \in \mathbb{R}^d \times {\cal S}_p$, $b := (b_1,\cdots,b_d)$. Then
$$L_1(b, \phi) := -\sum_{i=1}^d b_i\,\partial_i\phi,$$
and to define $L_2$, let $(\sigma, \phi) \in \mathbb{R}^{d\times d} \times {\cal S}_p$, $\sigma := (\sigma_{ij})$. Then
$$L_2(\sigma, \phi) := \frac{1}{2}\sum_{i,j=1}^d \sigma_{ij}\,\partial^2_{ij}\phi.$$
Note that for $\phi \in {\cal S}_p$ we have the $d \times d$ matrix $a(\phi) \equiv \sigma\sigma^t(\phi) := ((\sigma(\phi)\sigma^t(\phi))_{ij})$ and the $d$-dimensional vector $b(\phi) := (b_1(\phi),\cdots,b_d(\phi))$. For $\lambda > 0$, define the constant $K(\lambda)$ as follows:
$$K(\lambda) := \max_{i,j}\ \sup_{\|\phi\|_p \le \lambda} \{|\sigma_{ij}(\phi)|, |b_i(\phi)|\}.$$
We then have the following restatement of the monotonicity inequality for constant coefficient operators [20].

Theorem 3.1
Let $p \in \mathbb{R}$, $q \le p-1$. Suppose that $\sigma_{ij}(\cdot), b_i(\cdot)$, $i = 1,\ldots,d$, $j = 1,\ldots,n$, are bounded on the set $B_p(0,\lambda) := \{\phi \in {\cal S}_p : \|\phi\|_p \le \lambda\}$ for every $\lambda > 0$. Then for every $\lambda > 0$ there exists a constant $C = C(n, d, p, K(\lambda)) > 0$ such that
$$2\langle \psi, L_2(a(\phi), \psi)\rangle_q + \|A(\sigma(\phi), \psi)\|^2_{HS(q)} \le C\,\|\psi\|^2_q,$$
$$2\langle \psi, L_1(b(\phi), \psi)\rangle_q \le C\,\|\psi\|^2_q,$$
for all $\psi \in {\cal S}_p$ and for all $\phi \in B_p(0,\lambda)$.

Proof:
It follows from the monotonicity inequality (see [20],[5]) that the inequalities in the statement of the theorem hold for fixed $\psi, \phi \in {\cal S}_p$, with a constant $C'$ that depends quadratically on the numbers $\max\{|\sigma_{ij}(\phi)| : i = 1,\ldots,d,\ j = 1,\ldots,n\}$ and linearly on $\max\{|b_i(\phi)| : i = 1,\ldots,d\}$. Taking the supremum over $\|\phi\|_p \le \lambda$, we get the required constants. $\Box$

We now prove the monotonicity inequality in the form required to obtain uniqueness of solutions to our stochastic partial differential equation (18) below.
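Before doing so, it may help to see the cancellation that drives the constant coefficient inequality. The following sketch (in $d = 1$ with $q = 0$ and constant coefficients; an illustration only, not part of any proof here) checks numerically that $2\langle\psi, \tfrac12\sigma^2\psi''\rangle + \|\sigma\psi'\|^2 = 0$ and $2\langle\psi, -b\,\psi'\rangle = 0$ in $L^2$, which is exactly the integration by parts identity behind Theorem 3.1 when $q = 0$:

```python
import numpy as np

# Illustration (d = 1, q = 0, constant coefficients): the key cancellation
# behind the monotonicity inequality.  With L2(a, psi) = (1/2) a psi'' and
# A(sigma, psi) = -sigma psi', integration by parts in L^2 gives
#   2 <psi, (1/2) sigma^2 psi''> + ||sigma psi'||^2 = 0
# and 2 <psi, -b psi'> = 0, for psi in the Schwartz class.

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
sigma, b = 1.3, 0.7                       # arbitrary constants

psi = np.exp(-x**2 / 2) * np.sin(3 * x)   # a rapidly decreasing test function
dpsi = np.gradient(psi, dx)               # psi' (central differences)
d2psi = np.gradient(dpsi, dx)             # psi''

ip = lambda f, g: np.sum(f * g) * dx      # L^2 inner product on the grid

second_order = 2 * ip(psi, 0.5 * sigma**2 * d2psi) + ip(sigma * dpsi, sigma * dpsi)
first_order = 2 * ip(psi, -b * dpsi)

print(abs(second_order) < 1e-3, abs(first_order) < 1e-8)
```

For general $q$ the adjoint of $\partial_j$ in ${\cal S}_q$ is $-\partial_j + T_j$ with $T_j$ bounded (see the proof of Theorem 3.2 below), so instead of exact cancellation one obtains the bound $C\|\psi\|^2_q$.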
Theorem 3.2
Let $p \in \mathbb{R}$, $q \le p-1$. Let $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$, $i = 1,\ldots,d$, $j = 1,\ldots,n$. Suppose that for every $\lambda > 0$ there exists $K_1(\lambda) > 0$ such that
$$|\sigma_{ij}(\phi) - \sigma_{ij}(\psi)| \le K_1(\lambda)\,\|\phi - \psi\|_q, \qquad\qquad (4)$$
$$|b_i(\phi) - b_i(\psi)| \le K_1(\lambda)\,\|\phi - \psi\|_q,$$
for $\phi, \psi \in B_p(0,\lambda)$. Then there exists a constant $C = C(n,d,p,q,\lambda,K(\lambda),K_1(\lambda))$ such that
$$2\langle \phi - \psi,\ L(\phi) - L(\psi)\rangle_q + \|A(\phi) - A(\psi)\|^2_{HS(q)} \le C\,\|\phi - \psi\|^2_q \qquad\qquad (5)$$
for all $\phi, \psi \in B_p(0,\lambda)$.

Proof:
Using the notation established in the discussion preceding the statement of Theorem 3.1,
$$2\langle\phi-\psi, L(\phi)-L(\psi)\rangle_q + \|A(\phi)-A(\psi)\|^2_{HS(q)}$$
$$= 2\langle\phi-\psi, L_1(b(\phi), \phi-\psi)\rangle_q + 2\langle\phi-\psi, L_1(b(\phi)-b(\psi), \psi)\rangle_q + 2\langle\phi-\psi, L_2(\sigma\sigma^t(\phi), \phi-\psi)\rangle_q$$
$$+\, 2\langle\phi-\psi, L_2(\sigma\sigma^t(\phi)-\sigma\sigma^t(\psi), \psi)\rangle_q + \|A(\sigma(\phi), \phi-\psi)\|^2_{HS(q)} + \|A(\sigma(\phi)-\sigma(\psi), \psi)\|^2_{HS(q)}$$
$$+\, 2\sum_{i=1}^n \langle A(\sigma(\phi), \phi-\psi)e_i,\ A(\sigma(\phi)-\sigma(\psi), \psi)e_i\rangle_q. \qquad\qquad (6)$$
From Theorem 3.1, there exists $C_1 = C_1(n,d,p,q,K(\lambda)) > 0$ such that for $\phi \in B_p(0,\lambda)$,
$$2\langle\phi-\psi, L_1(b(\phi), \phi-\psi)\rangle_q \le C_1\,\|\phi-\psi\|^2_q, \qquad\qquad (7)$$
$$2\langle\phi-\psi, L_2(\sigma\sigma^t(\phi), \phi-\psi)\rangle_q + \|A(\sigma(\phi), \phi-\psi)\|^2_{HS(q)} \le C_1\,\|\phi-\psi\|^2_q, \qquad\qquad (8)$$
for all $\psi \in {\cal S}_p$. Using the Lipschitz continuity of $\sigma_{ij}$ and $b_i$, the fact that products of locally Lipschitz continuous functions are again locally Lipschitz continuous, and the boundedness of $\partial_i : {\cal S}_p \to {\cal S}_q$, $q \le p - \frac{1}{2}$, we have
$$2\langle\phi-\psi, L_1(b(\phi)-b(\psi), \psi)\rangle_q = 2\sum_{i=1}^d (b_i(\phi)-b_i(\psi))\,\langle\phi-\psi, \partial_i\psi\rangle_q \le C_2\,\|\phi-\psi\|^2_q \qquad\qquad (9)$$
and, for $q \le p-1$,
$$2\langle\phi-\psi, L_2(\sigma\sigma^t(\phi)-\sigma\sigma^t(\psi), \psi)\rangle_q = \sum_{i,j}\big[(\sigma\sigma^t)_{ij}(\phi)-(\sigma\sigma^t)_{ij}(\psi)\big]\,\langle\phi-\psi, \partial^2_{ij}\psi\rangle_q \le C_3\,\|\phi-\psi\|^2_q \qquad\qquad (10)$$
for some constants $C_2 = C_2(n,d,p,q,\lambda,K_1(\lambda)) > 0$, $C_3 = C_3(n,d,p,q,\lambda,K_1(\lambda)) > 0$ and for $\phi, \psi \in B_p(0,\lambda)$. Similarly, there exists $C_4 = C_4(n,d,p,q,\lambda,K_1(\lambda)) > 0$ such that
$$\|A(\sigma(\phi)-\sigma(\psi), \psi)\|^2_{HS(q)} \le C_4\,\|\phi-\psi\|^2_q. \qquad\qquad (11)$$
We now show that there exists $C_5 = C_5(n,d,p,q,\lambda,K(\lambda),K_1(\lambda)) > 0$ such that
$$2\sum_{i=1}^n \langle A(\sigma(\phi), \phi-\psi)e_i,\ A(\sigma(\phi)-\sigma(\psi), \psi)e_i\rangle_q \le C_5\,\|\phi-\psi\|^2_q \qquad\qquad (12)$$
for $\phi, \psi \in B_p(0,\lambda)$. Consequently, the inequality (5) in the statement follows from the equality (6) and the inequalities (7)-(12), with the constant $C$ in (5) given by $C = 2C_1 + C_2 + C_3 + C_4 + C_5$.
To prove (12), we note from the definition of $A$ that
$$2\sum_{i=1}^n \langle A(\sigma(\phi), \phi-\psi)e_i,\ A(\sigma(\phi)-\sigma(\psi), \psi)e_i\rangle_q = 2\sum_{i=1}^n \Big\langle \sum_{j=1}^d \sigma_{ji}(\phi)\,\partial_j(\phi-\psi),\ \sum_{j=1}^d (\sigma_{ji}(\phi)-\sigma_{ji}(\psi))\,\partial_j\psi \Big\rangle_q$$
$$= 2\sum_{i=1}^n \sum_{j,k=1}^d \sigma_{ji}(\phi)\,(\sigma_{ki}(\phi)-\sigma_{ki}(\psi))\,\langle\partial_j(\phi-\psi), \partial_k\psi\rangle_q.$$
Clearly it suffices to show that there exists $C' := C'(n,d,p,q,\lambda) > 0$ such that for $j, k = 1,\ldots,d$ and $\phi, \psi \in B_p(0,\lambda) \cap {\cal S}$,
$$|\langle \partial_j\phi, \partial_k\psi\rangle_q| \le C'\,\|\phi\|_q\,\|\psi\|_{q+1} \le C\,\|\phi\|_q, \qquad\qquad (13)$$
where $C := \sup_{\psi \in B_p(0,\lambda)\cap{\cal S}} \|\psi\|_{q+1}\; C'$. But this is an immediate consequence of the representation of the adjoint $\partial^*_j$ of $\partial_j : {\cal S} \subset {\cal S}_q \to {\cal S}_q$. Indeed, it was shown in [5] that for $f \in {\cal S}$ we have $\partial^*_j f = -\partial_j f + T_j f$, where $T_j : {\cal S}_q \to {\cal S}_q$ is a bounded operator. In particular, for $\phi, \psi \in {\cal S}$,
$$\langle\partial_j\phi, \partial_k\psi\rangle_q = \langle\phi, \partial^*_j\partial_k\psi\rangle_q = -\langle\phi, \partial_j\partial_k\psi\rangle_q + \langle\phi, T_j\partial_k\psi\rangle_q,$$
and (13) follows. This completes the proof of Theorem 3.2. $\Box$

The following variant of the monotonicity inequality will be needed in the proof of existence of translation invariant diffusions. Before stating the result, we introduce some notation. Let $a_{ij}(\phi), \sigma_{ij}(\phi), b_i(\phi)$ be as in Section 2. For $i = 1,\cdots,n$ we define $A_i : {\cal S}_p \times {\cal S}_p \to {\cal S}_q$ and $L : {\cal S}_p \times {\cal S}_p \to {\cal S}_q$ as follows:
$$A_i(\phi, \psi) := -\sum_{k=1}^d \sigma_{ki}(\phi)\,\partial_k\psi, \qquad\qquad (14)$$
$$L(\phi, \psi) := \frac{1}{2}\sum_{i,j=1}^d a_{ij}(\phi)\,\partial^2_{ij}\psi - \sum_{i=1}^d b_i(\phi)\,\partial_i\psi. \qquad\qquad (15)$$
Note that $A_i(\phi, \phi) = A_i(\phi)$, where the non-linear operator $A_i(\phi)$ (of a single argument) is given by equation (2). Note also that $A_i(\phi, \psi)$ is linear in the second variable and is given in terms of the operator $A(\cdot,\cdot)$ defined at the beginning of Section 3 by $A_i(\phi, \psi) = A(\sigma(\phi), \psi)\cdot e_i$. Similar remarks hold for the operator $L(\phi, \psi)$.

Theorem 3.3
Let $p \in \mathbb{R}$, $q \le p-1$. Let $\sigma_{ij}(\cdot), b_i(\cdot)$ be as in Theorem 3.2. Then there exists a positive constant $C = C(n,d,p,q,\lambda,K(\lambda),K_1(\lambda))$ such that
$$2\langle\phi_1-\phi_2,\ L(\phi_2,\phi_1) - L(\phi_3,\phi_2)\rangle_q + \sum_{i=1}^n \|A_i(\phi_2,\phi_1) - A_i(\phi_3,\phi_2)\|^2_q \le C\,\big(\|\phi_1-\phi_2\|^2_q + \|\phi_2-\phi_3\|^2_q\big) \qquad\qquad (16)$$
for all $\phi_1, \phi_2, \phi_3 \in B_p(0,\lambda)$.

Proof:
Let $\phi_i \in B_p(0,\lambda)$, $i = 1, 2, 3$. The left hand side of (16) equals
$$2\langle\phi_1-\phi_2,\ L(\phi_2,\phi_1) - L(\phi_2,\phi_2) + L(\phi_2,\phi_2) - L(\phi_3,\phi_2)\rangle_q + \sum_{i=1}^n \|A_i(\phi_2,\phi_1) - A_i(\phi_2,\phi_2) + A_i(\phi_2,\phi_2) - A_i(\phi_3,\phi_2)\|^2_q$$
$$= 2\langle\phi_1-\phi_2, L(\phi_2,\phi_1) - L(\phi_2,\phi_2)\rangle_q + 2\langle\phi_1-\phi_2, L(\phi_2,\phi_2) - L(\phi_3,\phi_2)\rangle_q$$
$$+\, \sum_{i=1}^n \|A_i(\phi_2,\phi_1) - A_i(\phi_2,\phi_2)\|^2_q + \sum_{i=1}^n \|A_i(\phi_2,\phi_2) - A_i(\phi_3,\phi_2)\|^2_q \qquad\qquad (17)$$
$$+\, 2\sum_{i=1}^n \langle A_i(\phi_2,\phi_1) - A_i(\phi_2,\phi_2),\ A_i(\phi_2,\phi_2) - A_i(\phi_3,\phi_2)\rangle_q.$$
The 1st term plus the 3rd term on the right hand side of (17) equal
$$2\langle\phi_1-\phi_2, L_2(a(\phi_2), \phi_1-\phi_2)\rangle_q + \sum_{i=1}^n \|A(\sigma(\phi_2), \phi_1-\phi_2)e_i\|^2_q + 2\langle\phi_1-\phi_2, L_1(b(\phi_2), \phi_1-\phi_2)\rangle_q \le C'\,\|\phi_1-\phi_2\|^2_q,$$
where $C' = C'(n,d,p,q,K(\lambda))$, by Theorem 3.1. Using the Lipschitz continuity of the coefficients $\sigma_{ij}, b_i$ (see (4)), the 2nd term on the right hand side of (17) equals
$$2\langle\phi_1-\phi_2, L_1(b(\phi_2)-b(\phi_3), \phi_2)\rangle_q + 2\langle\phi_1-\phi_2, L_2(a(\phi_2)-a(\phi_3), \phi_2)\rangle_q$$
$$\le C_1''\,\|\phi_1-\phi_2\|_q\,\|\phi_2-\phi_3\|_q + C_2''\,\|\phi_1-\phi_2\|_q\,\|\phi_2-\phi_3\|_q \le C_1'\,\|\phi_1-\phi_2\|^2_q + C_2'\,\|\phi_2-\phi_3\|^2_q,$$
where $C_1'', C_2'', C_1', C_2'$ are positive constants depending only on $n,d,p,q,\lambda,K(\lambda)$ and $K_1(\lambda)$. Similarly, the 4th term on the right hand side of (17) equals
$$\sum_{i=1}^n \|A(\sigma(\phi_2)-\sigma(\phi_3), \phi_2)e_i\|^2_q \le C_2\,\|\phi_2-\phi_3\|^2_q,$$
where $C_2 = C_2(n,d,p,q,K(\lambda),K_1(\lambda))$. Finally, in the same manner as in the proof of Theorem 3.2, the 5th term on the right hand side of (17) equals
$$2\sum_{i=1}^n \langle A(\sigma(\phi_2), \phi_1-\phi_2)e_i,\ A(\sigma(\phi_2)-\sigma(\phi_3), \phi_2)e_i\rangle_q \le C_3'\,\|\phi_1-\phi_2\|_q\,\|\phi_2-\phi_3\|_q \le C_3\,\big(\|\phi_1-\phi_2\|^2_q + \|\phi_2-\phi_3\|^2_q\big),$$
where $C_3 = C_3(n,d,p,q,\lambda,K(\lambda),K_1(\lambda))$. The proof of the theorem follows by summing up the terms on the right hand side of (17), using the above inequalities. $\Box$

4 Existence and Uniqueness

Let $p \in \mathbb{R}$. Let $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$, $i = 1,\cdots,d$, $j = 1,\cdots,n$, be locally bounded functions on ${\cal S}_p$. Let $(B_t)$ be a given $n$-dimensional ${\cal F}_t$-Brownian motion on $(\Omega, {\cal F}, P)$ as in Section 2. Let $A_i$, $i = 1,\cdots,n$, and $L$ be the partial differential operators defined in equations (2) and (3).
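To make the quasi-linear structure of these operators concrete, the following sketch (in $d = n = 1$, on a grid, with hypothetical functionals $\sigma(\phi), b(\phi)$ given by integrals of $\phi$ against a fixed window; all of these are illustrative choices, not the paper's) first evaluates the real-valued coefficients at $\phi$ and then applies the resulting constant coefficient operators, mirroring equations (2) and (3):

```python
import numpy as np

# Sketch of the quasi-linear operators of equations (2) and (3) in d = n = 1.
# sigma(phi) and b(phi) are real-valued *functionals* of phi; once they are
# evaluated, A(phi) and L(phi) act on phi as constant coefficient
# differential operators.  The functionals below (integrals against a
# Gaussian window g) and the grid discretisation are hypothetical choices.

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
g = np.exp(-x**2)                                        # fixed window

sigma_of = lambda phi: 1.0 + 0.5 * np.sum(phi * g) * dx  # sigma(phi) in R
b_of = lambda phi: np.sum(phi * x * g) * dx              # b(phi) in R

def A(phi):
    """A(phi) = -sigma(phi) phi'   (first order, cf. eq. (2))."""
    return -sigma_of(phi) * np.gradient(phi, dx)

def L(phi):
    """L(phi) = (1/2) sigma(phi)^2 phi'' - b(phi) phi'   (cf. eq. (3))."""
    s, b = sigma_of(phi), b_of(phi)
    dphi = np.gradient(phi, dx)
    return 0.5 * s**2 * np.gradient(dphi, dx) - b * dphi

phi = np.exp(-(x - 1.0)**2 / 2)

# Once the scalar sigma(phi) is frozen, A acts linearly on phi:
print(np.allclose(A(phi), -sigma_of(phi) * np.gradient(phi, dx)))   # True
```

This is precisely the sense in which $L$ and $A_i$ are 'constant coefficient once the values of $\sigma_{ij}(\phi), b_i(\phi)$ are fixed', which is exploited repeatedly below via the frozen-coefficient operators $\bar L(s,\omega), \bar A_i(s,\omega)$.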
We now consider a stochastic partial differential equation in ${\cal S}'$, driven by the Brownian motion $(B_t)$, with 'coefficients' given by the differential operators $A_i$, $i = 1,\cdots,n$, and $L$ defined above, and with initial condition $Y_0$, viz.
$$dY_t = A(Y_t)\cdot dB_t + L(Y_t)\,dt;\qquad Y_0 = Y, \qquad\qquad (18)$$
where $Y : \Omega \to {\cal S}_p$. Note that if $(Y_t)$ is an ${\cal S}_p$ valued, locally bounded, $({\cal F}_t)$ adapted process, then $A_i(Y_s)$, $i = 1,\cdots,n$, and $L(Y_s)$ are ${\cal S}_{p-1}$ valued, adapted, locally bounded processes, and hence for $i = 1,\cdots,n$ the stochastic integrals $\int_0^t A_i(Y_s)\,dB^i_s$ and $\int_0^t L(Y_s)\,ds$ are well defined ${\cal S}_{p-1}$ valued, continuous, ${\cal F}_t$-adapted processes; in addition, the former processes are ${\cal F}_t$ local martingales. We then have the following definition of a 'local' strong solution of equation (18).

Definition 4.1
Let $p \in \mathbb{R}$. Let $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$, $i = 1,\cdots,d$, $j = 1,\cdots,n$, be locally bounded functions, $\{B_t, {\cal F}_t\}$ a given standard $n$-dimensional $({\cal F}_t)$ Brownian motion, and $Y : \Omega \to {\cal S}_p$ an ${\cal F}_0$ measurable random variable independent of the filtration $({\cal F}^B_t)$. Let $\delta$ be an arbitrary state, viewed as an isolated point of $\hat{\cal S}_p := {\cal S}_p \cup \{\delta\}$. By an $\hat{\cal S}_p$ valued, strong (local) solution of equation (18) we mean a pair $(Y_t, \eta)$, where $\eta : \Omega \to (0,\infty]$ is an ${\cal F}^B_t$-stopping time and $(Y_t)$ an $\hat{\cal S}_p$ valued, $({\cal F}^B_t)$ adapted process, such that

1. For all $\omega \in \Omega$, $Y_\cdot(\omega) : [0, \eta(\omega)) \to {\cal S}_p$ is a continuous map and $Y_t(\omega) = \delta$ for $t \ge \eta(\omega)$.

2. a.s. $(P)$, the following equation holds in ${\cal S}_{p-1}$ for $0 \le t < \eta(\omega)$:
$$Y_t = Y + \sum_{j=1}^n \int_0^t A_j(Y_s)\,dB^j_s + \int_0^t L(Y_s)\,ds. \qquad\qquad (19)$$

We note that equation (19) also holds in ${\cal S}_q$ for any $q \le p-1$. To prove the existence of solutions to equation (18) we need a few well known facts. Let $(\bar\sigma_{ij}(s,\omega)), (\bar b_i(s,\omega))$, $j = 1,\cdots,n$, $i = 1,\cdots,d$, be locally bounded, ${\cal F}_t$-adapted processes and let $Z_t := (Z^1_t,\cdots,Z^d_t)$ be the $d$-dimensional $({\cal F}_t)$ semimartingale defined as follows:
$$Z_t := \int_0^t \bar\sigma(s,\omega)\cdot dB_s + \int_0^t \bar b(s,\omega)\,ds.$$
For $p \in \mathbb{R}$, define the operator valued adapted processes $\bar L(s,\omega), \bar A_i(s,\omega) : [0,\infty)\times\Omega \to L({\cal S}_p, {\cal S}_{p-1})$, $i = 1,\cdots,n$, as follows: for $\phi \in {\cal S}_p$,
$$\bar L(s,\omega)\phi := \frac{1}{2}\sum_{i,j=1}^d \bar a_{ij}(s,\omega)\,\partial^2_{ij}\phi - \sum_{i=1}^d \bar b_i(s,\omega)\,\partial_i\phi,$$
and for $i = 1,\cdots,n$,
$$\bar A_i(s,\omega)\phi := -\sum_{j=1}^d \bar\sigma_{ji}(s,\omega)\,\partial_j\phi,$$
where $\bar a_{ij}(s,\omega) := (\bar\sigma(s,\omega)\bar\sigma^t(s,\omega))_{ij}$, $i,j = 1,\cdots,d$. Let $Y : \Omega \to {\cal S}_p$. Note that the ${\cal S}_p$ valued process $\tau_{Z_t}(Y)$ has the ${\cal S}_p$ valued trajectories $t \to \tau_{Z_t(\omega)}(Y(\omega))$. We then have the following Lemma.

Lemma 4.2
Let $p \in \mathbb{R}$. Let $\bar Y_t := \tau_{Z_t}(Y)$, where $Y : \Omega \to {\cal S}_p$ is an ${\cal F}_0$ measurable random variable independent of the filtration $({\cal F}^B_t)$, and $(Z_t)$ is as above.

(a) Suppose $(\bar\sigma_{ij}(s,\omega)), (\bar b_i(s,\omega))$, $j = 1,\cdots,n$, $i = 1,\cdots,d$, are ${\cal F}^B_t$-adapted, locally bounded processes. Then $(\bar Y_t)$ is an ${\cal S}_p$-valued, continuous, ${\cal F}^B_t$-adapted process which is the unique solution of the following linear equation in ${\cal S}_q$, $q \le p-1$: almost surely,
$$\bar Y_t = Y + \int_0^t \bar L(s,\omega)\bar Y_s\,ds + \int_0^t \bar A(s,\omega)\bar Y_s\cdot dB_s$$
for every $t \ge 0$.

(b) Let $(X_t)$ be an ${\cal S}_p$-valued progressively measurable process which is uniformly bounded, i.e. there exists $K > 0$ such that $\|X_t(\omega)\|_p \le K$ for all $(t,\omega)$. Let $\bar\sigma_{ij}(s,\omega) := \sigma_{ij}(X_s(\omega))$, $\bar b_i(s,\omega) := b_i(X_s(\omega))$, where $\sigma_{ij}, b_i$ are as in Definition 4.1. Let $(Z_t), (\bar Y_t)$ be as defined above. Then
$$E\Big(\sup_{s \le t}\|\bar Y_s\|^2_p\Big) \le C\,E\|Y\|^2_p,$$
where $C = C(d,n,K,t)$ is a constant.

Proof: (a) The proof of the existence part of (a), for $Y = y \in {\cal S}_p$ fixed, is an immediate consequence of Itô's formula and we refer to [36] for the details. For $Y$ arbitrary but independent of the Brownian motion $B$, the result follows by a conditioning argument. The proof of uniqueness follows from the results in [20].

(b) It is sufficient to consider the case $E\|Y\|^2_p < \infty$. From the results of [37], we have
$$\|\bar Y_t\|_p = \|\tau_{Z_t}(Y)\|_p \le \|Y\|_p\, P_m(|Z_t|),$$
where $P_m(x)$ is a polynomial in $x \in \mathbb{R}$ with nonnegative coefficients and degree $m$ depending on $|p|$. Now the result follows by an application of the Burkholder-Davis-Gundy inequality to each term in $P_m(|Z_t|)$, using the boundedness assumption on $(X_t)$ and the local boundedness of the coefficients $\sigma_{ij}$ and $b_i$, $i = 1,\cdots,d$, $j = 1,\cdots,n$. $\Box$

We now come to the existence and uniqueness of solutions to equation (18). Recall that $B_p(y,r)$ is the ball in ${\cal S}_p$ with centre $y$ and radius $r > 0$.

Theorem 4.3
Let $p \in \mathbb{R}$, $q \le p-1$. Let $\sigma_{ij}, b_i : {\cal S}_p \to \mathbb{R}$, $i = 1,\ldots,d$, $j = 1,\ldots,n$. Suppose that for every $\lambda > 0$ there exists $K_1(\lambda) > 0$ such that
$$|\sigma_{ij}(\phi) - \sigma_{ij}(\psi)| \le K_1(\lambda)\,\|\phi - \psi\|_q,$$
$$|b_i(\phi) - b_i(\psi)| \le K_1(\lambda)\,\|\phi - \psi\|_q,$$
for $\phi, \psi \in B_p(0,\lambda) = \{\eta \in {\cal S}_p : \|\eta\|_p \le \lambda\}$. Then for every $Y : \Omega \to {\cal S}_p$ which is ${\cal F}_0$ measurable and independent of $B$, and for every $r > 0$, there exists a strictly positive $({\cal F}^B_t)$ stopping time $\eta_r$ and an ${\cal S}_p \cap B_q(Y,r)$ valued, continuous, ${\cal F}^B_t$-adapted process $(Y^r_t)$ satisfying equation (19) on $[0, \eta_r)$, almost surely. If $Y^1, Y^2$ are two ${\cal F}_0$ measurable, ${\cal S}_p$-valued random variables with $P\{Y^1 = Y^2\} > 0$, then the corresponding solutions $(Y^{1,r}_t, \eta^{1,r}), (Y^{2,r}_t, \eta^{2,r})$ satisfy: $\eta^{1,r} = \eta^{2,r}$ and $Y^{1,r}_t = Y^{2,r}_t$, $0 \le t < \eta^{1,r}$, on the set $\{Y^1 = Y^2\}$. In particular, (19) has an $\hat{\cal S}_p$ valued local, strong solution. The solution is unique in the sense that if $(Y^1_t, \eta^1)$ and $(Y^2_t, \eta^2)$ are any two solutions of equation (18) with initial condition $Y$, then $P[Y^1_t = Y^2_t,\ 0 \le t < \eta^1 \wedge \eta^2] = 1$.

Proof:
We will first prove uniqueness.

Uniqueness: Let $(Y^1_t, \eta^1)$ and $(Y^2_t, \eta^2)$ be any two local solutions of equation (18) with initial condition $Y \in {\cal S}_p$. Let $\lambda > 0$. Let $Y_t := Y^1_t - Y^2_t$. Let $\eta_1(\lambda) := \inf\{t : Y^1_t\ \text{or}\ Y^2_t \notin B_p(0,\lambda)\}$ and $\eta \equiv \eta(\lambda) := \eta_1(\lambda)\wedge\eta^1\wedge\eta^2$. Then by using the identity
$$\|Y_t\|^2_q = \sum_{k=1}^\infty \langle Y_t, h_{k,q}\rangle^2_q,$$
where $\{h_{k,q}\}$ is an orthonormal basis for ${\cal S}_q$, and expanding $\langle Y_t, h_{k,q}\rangle^2_q$ using Itô's formula, we see that the following equation holds a.s., for all $0 \le t < \eta$:
$$\|Y_t\|^2_q = \int_0^t \big\{2\langle Y_s,\ L(Y^1_s) - L(Y^2_s)\rangle_q + \|A(Y^1_s) - A(Y^2_s)\|^2_{HS(q)}\big\}\,ds + M_t,$$
where $(M_t)$ is a continuous local martingale. Now using inequality (5) of Theorem 3.2, the Gronwall inequality and a localisation argument (see for example [19]), we get for each $\lambda >$
0, almost surely, $Y^1_t = Y^2_t$ for $0 \le t < \eta(\lambda)$. Letting $\lambda \uparrow \infty$, the result follows.

Existence: To prove existence, we first consider the case $\sup_{\omega\in\Omega}\|Y(\omega)\|_p < \infty$. Recall the operator maps $L(\cdot,\cdot)$ and $A_i(\cdot,\cdot)$, $i = 1,\cdots,n$, defined in the paragraph prior to Lemma 4.2. For each $k \ge 1$, let $(X^k_s)$ be an ${\cal S}_p$-valued process. We define the operator valued processes $L^k, A^k_i : [0,\infty)\times\Omega \to L({\cal S}_p, {\cal S}_{p-1})$, $i = 1,\cdots,n$, whose action on $\phi \in {\cal S}_p$ is given by
$$L^k(s,\omega)\phi := \frac{1}{2}\sum_{i,j=1}^d a_{ij}(X^k_s(\omega))\,\partial^2_{ij}\phi - \sum_{i=1}^d b_i(X^k_s(\omega))\,\partial_i\phi$$
and, for $i = 1,\cdots,n$,
$$A^k_i(s,\omega)\phi := -\sum_{j=1}^d \sigma_{ji}(X^k_s(\omega))\,\partial_j\phi.$$
We define a sequence of ${\cal F}^B_t$-adapted, ${\cal S}_p$-valued processes $(Y^k_t)$ inductively, using the operator valued processes $L^k(s,\omega)$ and $A^k_i(s,\omega)$, as follows: $Y^0_t \equiv Y$, $t \ge 0$, where $Y$ is the given initial value of equation (18). If $(Y^{k-1}_t)$ is defined, then $(Y^k_t)$ is defined as the unique $({\cal F}^B_t)$-adapted solution of the linear equation
$$Y^k_t = Y + \int_0^t L^k(s,\omega)Y^k_s\,ds + \int_0^t A^k(s,\omega)Y^k_s\cdot dB_s, \qquad\qquad (20)$$
where $L^k(s,\omega)$ and $A^k_i(s,\omega)$ are defined as above, with $X^k_s(\omega) := Y^{k-1}_{s\wedge\eta_{k-1}}(\omega)$. Here $\eta_{k-1}$ is an ${\cal F}^B_t$-stopping time, defined inductively as follows: let $r > 0$,
$$\eta_j := \sigma_1\wedge\cdots\wedge\sigma_j \quad\text{and}\quad \sigma_j := \inf\{s > 0 : \|Y^j_s - Y\|_q > r\},\qquad j = 1,\ldots,k-1.$$
For notational convenience, in what follows we often suppress the dependence on $r$ when there is no ambiguity. For notational clarity, we note that $\sigma_i$ denotes stopping times, whereas $\sigma_{ij}$ denotes the coefficients in the operators $L, A$.

The existence and uniqueness of solutions of equation (20) is a consequence of Lemma 4.2, with $(Z_t), (\bar Y_t)$ there taken to be the processes $(Z^{k-1}_t), (Y^k_t)$ respectively, where $(Z^{k-1}_t)$ is defined as follows:
$$Z^{k-1}_t := \int_0^t \sigma(Y^{k-1}_{s\wedge\eta_{k-1}})\cdot dB_s + \int_0^t b(Y^{k-1}_{s\wedge\eta_{k-1}})\,ds$$
and $Y^k_t := \tau_{Z^{k-1}_t}(Y)$. Define $\eta := \lim_{k\to\infty}\eta_k$. We note that $\eta \equiv \eta_r$ depends on $r$. We will show below that $\eta > 0$ almost surely. We first show that for each $t \ge 0$, $\{Y^k_{t\wedge\eta}\}$ converges in $L^2(\Omega \to {\cal S}_q)$ for $q \le p-1$. We have, as in the proof of uniqueness,
$$\|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|^2_q = \int_0^{t\wedge\eta}\big\{2\langle Y^k_s - Y^{k-1}_s,\ L^k(s,\omega)Y^k_s - L^{k-1}(s,\omega)Y^{k-1}_s\rangle_q + \|A^k(s,\omega)Y^k_s - A^{k-1}(s,\omega)Y^{k-1}_s\|^2_{HS(q)}\big\}\,ds + M^k_t,$$
where $(M^k_t)$ is a local martingale. Using (14) and (15) we have $L^k(s,\omega)Y^k_s = L(Y^{k-1}_s, Y^k_s)$ and $A^k_i(s,\omega)Y^k_s = A_i(Y^{k-1}_s, Y^k_s)$, with similar expressions for $L^{k-1}(s,\omega)Y^{k-1}_s$ and $A^{k-1}_i(s,\omega)Y^{k-1}_s$ involving the processes $(Y^{k-2}_s), (Y^{k-1}_s)$ respectively. From Theorem 3.3, and using a localisation argument, we can take expectations in the above expression to get, for some constant $C > 0$ and all $k \ge 2$, $t > 0$,
$$E\|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|^2_q \le C\int_0^t \big\{E\|Y^{k-1}_{s\wedge\eta} - Y^{k-2}_{s\wedge\eta}\|^2_q + E\|Y^k_{s\wedge\eta} - Y^{k-1}_{s\wedge\eta}\|^2_q\big\}\,ds. \qquad\qquad (21)$$
By the Gronwall inequality, (21) now implies
$$E\|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|^2_q \le C\Big\{\int_0^t E\|Y^{k-1}_{s\wedge\eta} - Y^{k-2}_{s\wedge\eta}\|^2_q\,ds + \int_0^t\Big(\int_0^s E\|Y^{k-1}_{u\wedge\eta} - Y^{k-2}_{u\wedge\eta}\|^2_q\,du\Big)e^{C(t-s)}\,ds\Big\} \le K\int_0^t E\|Y^{k-1}_{s\wedge\eta} - Y^{k-2}_{s\wedge\eta}\|^2_q\,ds,$$
where $K := C(1 + te^{Ct})$ and $C$ is some positive constant. Iterating the above inequality yields, for each $t > 0$,
$$E\|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|^2_q \le K^{k-1}\int_0^t\int_0^{t_{k-1}}\cdots\int_0^{t_2} E\|Y^1_{t_1\wedge\eta} - Y^0_{t_1\wedge\eta}\|^2_q\,dt_1\cdots dt_{k-1} \le \alpha\,\frac{K^{k-1}\,t^{k-1}}{(k-1)!},$$
where $\alpha := \sup_{t_1 \le t} E\|Y^1_{t_1\wedge\eta} - Y\|^2_q < \infty$. It follows by the Cauchy-Schwarz inequality that for each $T > 0$ and $0 \le t \le T$,
$$\sum_{k=1}^\infty E\|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|_q < \infty \quad\text{and}\quad \sum_{k=1}^\infty E\int_0^T \|Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\|_q\,dt < \infty.$$
Define, for each $t$,
$$Y_t := Y + \sum_{k=1}^\infty \big(Y^k_{t\wedge\eta} - Y^{k-1}_{t\wedge\eta}\big),$$
where the series on the right hand side converges in $L^1([0,T]\times\Omega \to {\cal S}_q,\ dt\,dP)$, $q \le p-1$, for every $T > 0$, and defines an $({\cal F}_t)$-progressively measurable ${\cal S}_q$-valued process $(Y_t)$. We also note that, for each $t$, $Y_t$ is an ${\cal S}_q$ valued random variable such that $E\|Y_t - Y^k_{t\wedge\eta}\|_q \to$
0. Note that for each t ≥ 0 and k ≥ 1, ‖Y^k_{t∧η} − Y_0‖_q ≤ r almost surely, and, by passing to an almost surely convergent subsequence, we also have ‖Y_t − Y_0‖_q ≤ r almost surely. Denoting this subsequence again by (Y^k_t), it follows by the bounded convergence theorem that E‖Y_t − Y^k_{t∧η}‖²_q → 0 for each t, and moreover that E ∫_0^t ‖Y_s − Y^k_{s∧η}‖²_q ds → 0. By the Lipschitz continuity of σ_ij, b_i, i = 1,…,d, j = 1,…,n, and the continuity of ∂_i : S_q → S_{q−1/2}, i = 1,…,d, the convergence E‖Y_t − Y^k_{t∧η}‖²_q → 0 for each t implies
  L(Y^{k−1}_s, Y^k_s) → L(Y_s) and A_i(Y^{k−1}_s, Y^k_s) → A_i(Y_s)
for every s ≤ t∧η and i = 1,…,n, almost surely, where the convergence takes place in S_{q−1}. Note also that there exists a constant K > 0 such that, for 0 ≤ s ≤ t, almost surely,
  ‖L(Y^{k−1}_{s∧η}, Y^k_{s∧η})‖²_{q−1} + Σ_{i=1}^n ‖A_i(Y^{k−1}_{s∧η}, Y^k_{s∧η})‖²_{q−1} ≤ K,
and a fortiori
  ‖L(Y_{s∧η})‖²_{q−1} + Σ_{i=1}^n ‖A_i(Y_{s∧η})‖²_{q−1} ≤ K
for 0 ≤ s ≤ t, almost surely. It follows from the above observations that
  ∫_0^{t∧η} L^k(s,ω) Y^k_s ds → ∫_0^{t∧η} L(Y_s) ds for each t ≥
0, almost surely in S_{q−1}, and that
  ∫_0^{t∧η} A^k(s,ω) Y^k_s · dB_s → ∫_0^{t∧η} A(Y_s) · dB_s
for each t ≥ 0 in L²(Ω → S_{q−1}). Hence we can pass to the limit in S_{q−1} in equation (20), with t replaced by t∧η, to get
  Y_t = Y_0 + ∫_0^{t∧η} L(Y_s) ds + ∫_0^{t∧η} A(Y_s) · dB_s   (22)
in S_{q−1}, for every t >
0, almost surely. It then follows, as a consequence of Lemma (4.2), that if we define
  Z_t := ∫_0^{t∧η} σ(Y_s) · dB_s + ∫_0^{t∧η} b(Y_s) ds,   (23)
then (τ_{Z_t}(Y_0)) is a continuous S_p-valued process that satisfies equation (19) in S_q for any q ≤ p −
1. We denote this process again by (Y_t), i.e. Y_t := τ_{Z_t}(Y_0). By its very construction the paths of (Y_t) are constant for t > η, and we can redefine this value to be δ so as to satisfy Definition (4.1) of a 'local solution'. We now show that η > 0 almost surely, with the σ_n, η_n defined above. Suppose there exists a set A with P(A) > 0 such that η(ω) = 0 for ω ∈ A. Then we claim that there exists a subset A_0 ⊂ A with P(A_0) > 0 and a subsequence {n_k} such that for ω ∈ A_0,
  σ_{n_k}(ω) = η_{n_k}(ω) < η_{n_k − 1}(ω), k ≥ 1.
It is intuitively clear that such subsequences as described above must exist. We give a proof for completeness. Fix t >
0. We define a sequence of integer valued random variables K_i, i ≥ 0, by K_0 := min{ j ≥ 1 : η_j < t } and, for i ≥ 1, K_i := min{ j > K_{i−1} : η_j < η_{j−1} }. Note that if ω ∈ A, then K_i(ω) < ∞ for all i ≥ 0 and σ_{K_i}(ω) = η_{K_i}(ω) < η_{K_i − 1}(ω). Further, for every i ≥ 0,
  A ⊂ {K_i < ∞} = ⋃_{m=1}^∞ {K_i = m}.
Hence for every i ≥
1, we can choose an integer m_i satisfying
  P({K_i ≤ m_i} ∩ A) ≥ ( P(A) − 2^{−i} )^+.
Without loss of generality we can take m_i > m_{i−1}, i ≥
2. It follows that for i sufficiently large,
  P(A ∩ {K_i > m_i}) = P(A) − P(A ∩ {K_i ≤ m_i}) ≤ 2^{−i}.
It follows by the Borel–Cantelli lemma that
  P{ ω ∈ A : K_i(ω) > m_i infinitely often } = 0.
In particular we have the almost sure equality
  A = {K_i > m_i infinitely often}^C ∩ A = ⋃_{n=1}^∞ ⋂_{ℓ≥n} {K_ℓ ≤ m_ℓ} ∩ A = ⋃_{n=1}^∞ B_n,
where we define the set B_n, for n ≥ 1, by
  B_n = ⋂_{ℓ≥n} {K_ℓ ≤ m_ℓ} ∩ A = ⋃ {K_0 = j_0, …, K_ℓ = j_ℓ, K_{ℓ+1} = j_{ℓ+1}, …} ∩ A,
where the (countable) union in the last equality is over j_ℓ ≤ m_ℓ for ℓ ≥ n, and j_0 < j_1 < … < j_{n−1} < j_n ≤ m_n. Since B_n ↑ A and P(A) >
0, we choose n so that P(B_n) >
0. Since each B_n is a countable union of sets, each of which corresponds to a sequence {j_ℓ}, and P(B_n) > 0, there exists a sequence {j_ℓ}, j_0 < j_1 < … < j_k < …, such that
  P({K_0 = j_0, …, K_ℓ = j_ℓ, …} ∩ A) > 0.
Define A_0 := {K_0 = j_0, …, K_ℓ = j_ℓ, …} ∩ A. For ω ∈ A_0,
  σ_{j_ℓ}(ω) = η_{j_ℓ}(ω) < η_{j_ℓ − 1}(ω), ℓ ≥
1. Note that P(A_0) >
0. This proves our claim. We will now work with the subsequence (Y^{n_k}_t)_{k≥1}, which we rename (Y^n_t), and which then has the following property on A_0: if ω ∈ A_0, then σ_n(ω) = η_n(ω) < η_{n−1}(ω), n ≥ 1. Fix t > 0. We claim that, almost surely along a subsequence, Y^k_{t∧η_k} → Y_{t∧η} in S_p as k → ∞. This can be seen as follows. First note that the η_k defined above decrease to η almost surely. Next, observe that Y^k_t = τ_{Z^{k−1}_t}(Y_0) where, for each k ≥
1, the process (Z^k_t) is defined by
  Z^k_t := ∫_0^t σ(Y^k_s) · dB_s + ∫_0^t b(Y^k_s) ds.
Since the map x → τ_x(Y_0) : R^d → S_p is continuous, to prove the claim it suffices to show that (Z^k_{t∧η_k}) converges to the process (Z_{t∧η}) in probability, and hence almost surely along a subsequence. To see this, note that we can write
  Z^k_{t∧η_k} = ∫_0^{t∧η} b(Y^k_s) ds + ∫_0^{t∧η} σ(Y^k_s) · dB_s + ∫_0^t 1_{(η,η_k]}(s) b(Y^k_s) ds + ∫_0^t 1_{(η,η_k]}(s) σ(Y^k_s) · dB_s.
By stopping (Y^k_s) at its exit from a suitably large ball centered at Y_0 ∈ S_p, we can show that the third and fourth terms go to zero in probability. Hence, using the (Lipschitz) continuity of the maps σ, b and the convergence of Y^k_{s∧η} to Y_s in L²([0,t] × Ω → S_q), it is easy to see that (Z^k_{t∧η_k}) converges to the process (Z_{t∧η}) in probability, and hence almost surely along a subsequence. Thus our claim is proved and we have Y^k_{t∧η_k} → Y_{t∧η} in S_p, almost surely, along a subsequence. In particular it converges to Y_0 in S_q on the set A_0. For the rest of the proof we will work with this subsequence which, abusing notation, we continue to denote by Y^k_{t∧η_k}. In particular we will now assume that the sum of integrals on the right hand side of equation (20), evaluated at t∧η_k, goes to zero (this being the value of the sum of integrals in the RHS of (22)), almost surely on the set A_0. On the other hand, for ω ∈ A_0, σ_k(ω) = η_k(ω) < η_{k−1}(ω), k ≥
2. Hence for k with η_k ≤ t we have Y^k_{η_k∧t}(ω) = Y^k_{η_k}(ω) = Y^k_{σ_k}(ω), and consequently, by the continuity of the process Y^k_{t∧η_k} in S_q,
  ‖Y^k_{η_k∧t}(ω) − Y_0‖_q = ‖Y^k_{σ_k}(ω) − Y_0‖_q = r
for ω ∈ A_0 and k ≥ k_0(ω), for some k_0(ω) ≥
1. But this contradicts the fact, proved above, that the RHS of (20) goes to zero in S_q almost surely on A, and in particular on the set A_0. To complete the proof of the first part of the theorem, let r > 0 and let Y^1_0, Y^2_0 be as in the statement of the theorem, with P{Y^1_0 = Y^2_0} >
0. First we assume Y^1_0, Y^2_0 are bounded. We will denote with a superscript i, i = 1,
2, the various objects defined in the construction of the solutions corresponding to the bounded initial values Y^1_0, Y^2_0 respectively, and suppress the dependence on r, which is fixed. Firstly, we note that since η^{i,j} = η^{i,j−1} ∧ σ^{i,j}, i = 1, 2, j ≥ 1, we have η^{1,j} = η^{2,j} and, almost surely, Y^{1,j}_t = Y^{2,j}_t, 0 ≤ t < ∞, on {Y^1_0 = Y^2_0}. It follows that η := lim_{j→∞} η^{1,j} = lim_{j→∞} η^{2,j} on the set {Y^1_0 = Y^2_0}. To show that, almost surely, Y^1_t = Y^2_t, 0 ≤ t < η, on {Y^1_0 = Y^2_0}, we argue as follows. Define Ŷ^i_t := I_{{Y^1_0 = Y^2_0}} Y^i_t, i = 1, 2, and η′ := I_{{Y^1_0 = Y^2_0}} η + I_{{Y^1_0 ≠ Y^2_0}} ∞. Then, using the quasi-linearity of (19), (Ŷ^1_t, η′), (Ŷ^2_t, η′) are two solutions of (19) with initial value I_{{Y^1_0 = Y^2_0}} Y^1_0. By uniqueness, our claim follows. The existence for the case of a general initial random variable Y_0 and a fixed r > 0 is reduced to the bounded case by considering the initial conditions Y^n_0 := Y_0 I_{{‖Y_0‖_p ≤ n}}, as follows. Denote the solution corresponding to Y^n_0, constructed above, by (Y^n_t, η^n), where we have omitted the dependence on r. Another solution with the same initial condition Y^n_0 is given by (Ȳ^n_t, η̄^n), where we define
  η̄^n := ∞ on {‖Y_0‖_p > n}; := η^{n+1} on {‖Y_0‖_p ≤ n},
and Ȳ^n_t := I_{{‖Y_0‖_p ≤ n}} Y^{n+1}_t, 0 ≤ t < η̄^n. Then by uniqueness we get that, almost surely on the set {‖Y_0‖_p ≤ n}, Y^n_t = Ȳ^n_t = Y^{n+1}_t, 0 ≤ t < η^n, and η^n = η^{n+1}. We can now construct the solution (Y^r_t, η^r) corresponding to the initial random variable Y_0 by piecing together the solutions on the sets {‖Y_0‖_p ≤ n} as follows: η^r(ω) := η^n(ω) and Y^r_t(ω) := Y^n_t(ω), 0 ≤ t < η^n, if ω ∈ {‖Y_0‖_p ≤ n} and lies outside a suitable null set.
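The localisation of pathwise uniqueness to the event where the initial values agree can be illustrated numerically. The sketch below is illustrative only: an Euler–Maruyama discretisation of a finite dimensional Lipschitz SDE stands in for equation (19), and the coefficients are hypothetical. Two families of solutions driven by the same Brownian increments coincide exactly on the samples where the initial values coincide.

```python
import numpy as np

def euler_paths(x0, sigma, b, dW, dt):
    # One Euler-Maruyama path per initial value, all driven by the SAME noise dW.
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for k in range(dW.shape[0]):
        x = x + b(x) * dt + sigma(x) * dW[k]
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(0)
dt, n_steps, n_samples = 1e-3, 1000, 8
dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_samples))

sigma = lambda x: np.sin(x)   # Lipschitz diffusion coefficient (hypothetical)
b = lambda x: -0.5 * x        # Lipschitz drift coefficient (hypothetical)

y1 = rng.normal(size=n_samples)
y2 = y1.copy()
y2[::2] += 1.0                # initial values differ on the even samples only

p1 = euler_paths(y1, sigma, b, dW, dt)
p2 = euler_paths(y2, sigma, b, dW, dt)

agree = (y1 == y2)            # the event {Y^1_0 = Y^2_0}
# Pathwise uniqueness: where the initial values agree, the paths agree for all t.
assert np.array_equal(p1[:, agree], p2[:, agree])
```

Since the discrete scheme is a deterministic function of the initial value and the noise, equal inputs produce identical paths; this is the discrete analogue of the indicator argument used here.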
That (Y^r_t, η^r) is a solution of equation (19) follows from the fact that (Y^n_t, η^n) solves (19) with initial condition Y_0 on the set {‖Y_0‖_p ≤ n}. That it takes values in B_p(Y_0, r) on [0, η) is also clear from the corresponding property for (Y^n_t, η^n) on the set {‖Y_0‖_p ≤ n}. Let now Y^1_0, Y^2_0 be two F_0-measurable, S_p-valued random variables, and let (Y^1_t, η^1), (Y^2_t, η^2) be the corresponding solutions constructed above for some fixed r >
0. To show the claimed uniqueness on the set {Y^1_0 = Y^2_0}, we define, for n ≥
1, the processes
  Y^{n,i}_t := I_{{Y^1_0 = Y^2_0, ‖Y^1_0‖_p ≤ n}} Y^i_t, 0 ≤ t < η′_i, i = 1, 2,
where η′_i := η^i on {Y^1_0 = Y^2_0, ‖Y^1_0‖_p ≤ n}, and := ∞ otherwise. Then Y^{n,i}_t solves (19) on the interval [0, η′_i) with initial value I_{{Y^1_0 = Y^2_0, ‖Y^1_0‖_p ≤ n}} Y^i_0, i = 1, 2. By uniqueness, η^1 = η^2 and Y^1_t = Y^{n,1}_t = Y^{n,2}_t = Y^2_t, 0 ≤ t < η, on the set {Y^1_0 = Y^2_0, ‖Y^1_0‖_p ≤ n}. Letting n → ∞, the uniqueness claim follows. This completes the proof of the theorem. □

Remark 4.4
A simpler proof of existence can be given using the finite dimensional existence results, as in Theorem 2.1. In effect, we fix the initial value y and define σ̄_ij(x) := σ_ij(τ_x y), b̄_i(x) := b_i(τ_x y). If σ_ij, b_i are Lipschitz in S_q, q ≤ p − 1, then one can show (using duality and the mean value theorem applied to ⟨τ_x y − y, ψ⟩, ψ ∈ S) that σ̄_ij, b̄_i are locally Lipschitz on R^d. If (Z_t, η) is the solution of equation (1) with x = 0, then, using Itô's formula for translations, it is easy to see that Y_t := τ_{Z_t} y, t < η, solves equation (19). However, this proof does not work when, for example, we replace the operator L in equations (3) and (19) with a perturbation of L, viz. L + cI, where I is the identity and c ∈ R (see also Example 6 of Section 6).

Remark 4.5
Translation invariance also applies to solutions of an evolution equation given by a first order quasi-linear PDE; i.e. these solutions are translates of the initial condition by the solution of an appropriate 'characteristic' ODE. This follows on setting the diffusion coefficients in the above calculations equal to zero. These first order systems may also be viewed as the 'zero noise' limit of stochastic second order systems, a topic of considerable interest over the last three or four decades (see [17]); see also volumes 3 & 4 of [7] for the connection with large deviation theory.
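The zero-noise case of this remark can be checked directly on a model problem. The sketch below is a hypothetical constant coefficient example (forward Euler for the characteristic ODE): it solves the transport equation u_t + c u_x = 0 by translating the initial condition along the characteristic Z_t, i.e. u(t, x) = y_0(x − Z_t).

```python
import numpy as np

def solve_by_characteristics(y0, b_bar, t_final, dt=1e-4):
    # Integrate the characteristic ODE dZ/dt = b_bar(Z), Z_0 = 0, by forward Euler;
    # the PDE solution is then the translate u(t, x) = y0(x - Z_t).
    z, t = 0.0, 0.0
    while t < t_final:
        z += b_bar(z) * dt
        t += dt
    return z, (lambda x: y0(x - z))

# Constant drift b_bar == c gives the transport equation u_t + c u_x = 0,
# whose exact solution is u(t, x) = y0(x - c t).
c = 2.0
y0 = lambda x: np.exp(-x**2)
z, u = solve_by_characteristics(y0, lambda _: c, t_final=1.0)

xs = np.linspace(-3.0, 3.0, 101)
exact = y0(xs - c * 1.0)
assert abs(z - c * 1.0) < 1e-2
assert np.max(np.abs(u(xs) - exact)) < 1e-2
```

For a drift depending on the translated initial condition, b_bar(z) = b(τ_z y_0), the same loop applies; only the constant case admits the closed-form comparison used above.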
In this section we show that the local solution of equation (19) obtained in Theorem (4.3) can be extended to a maximal interval [0, η) for a given F_0-measurable Y_0 (Theorem (5.3)). We then show that the solutions (Y_t(y), η) obtained when Y_0 ≡ y ∈ S_p have a jointly measurable version (Y(t, ω, y), η(ω, y)) in (t, ω, y), and that the solutions for arbitrary Y_0 can be represented as Y(t, ω, Y_0(ω)), 0 ≤ t < η(ω, Y_0(ω)) (Proposition (5.4) and Theorem (5.5)). The strong Markov property (Theorem (5.7)) is then proved as a consequence of uniqueness in law, which in turn follows from pathwise uniqueness by a Yamada–Watanabe type argument (Theorem (5.6)).
Consider now the solution (Y^r_t, η^r) constructed in Theorem (4.3) for r > 0 and a given F_0-measurable random variable Y_0 : Ω → S_p. From the definition of η^r in the proof of Theorem (4.3), it follows that r_1 < r_2 implies η^{r_1}(ω) ≤ η^{r_2}(ω). Let η(ω) := lim_{r↑∞} η^r(ω). Then by pathwise uniqueness of solutions we have Y^{r_1}_t = Y^{r_2}_t, 0 ≤ t < η^{r_1}, almost surely. Let r_n ↑ ∞. Then η^{r_n}(ω) ↑ η(ω) for all ω. Let Ω_n satisfy P(Ω_n) = 0 and, for ω ∉ Ω_n, Y^{r_n}_t(ω) = Y^{r_{n+1}}_t(ω), 0 ≤ t < η^{r_n}. Let Ω_0 := ⋃_{n=1}^∞ Ω_n. Define, for ω ∉ Ω_0 and 0 ≤ t < η(ω),
  Y_t(ω) := Y^{r_n}_t(ω) if 0 ≤ t < η^{r_n}(ω) ≤ η(ω),
and let Y_t(ω) := δ for t ≥ η(ω). For ω ∈ Ω_0, redefine η(ω) := 0 and define Y_t(ω) := Y_0(ω) for all t ≥ 0. We note the following 'maximality' property of the solution (Y_t, η).

Proposition 5.1
Let Y_0 ∈ S_p and q ≤ p − 1. Then, a.s., lim_{t↑η(ω)} ‖Y_t(ω)‖_q = ∞ on {η < ∞}. In particular, a.s., lim_{t↑η(ω)} ‖Y_t(ω)‖_p = ∞ on {η < ∞}.

Proof:
Note that {η < ∞} = ⋃_n {η < n}. It suffices to show that lim_{r→∞} ‖Y_{η^r}‖_q = ∞ a.s. on {η < ∞}. Recall the approximations (Y^k_t) ≡ (Y^{k,r}_t) to the solutions (Y^r_t, η^r) constructed in the proof of Theorem (4.3). For fixed r > 0, Y^k_{t∧η^r} → Y^r_{t∧η^r} in L²(Ω, S_q), and in particular for every ε > 0,
  P( ‖Y_{t∧η^r} − Y^k_{t∧η^r}‖_q > ε ) → 0 as k → ∞.
Further, if η^{k,r} are the approximations to η^r constructed in the proof of Theorem (4.3), we have for each k ≥ 1,
  ‖Y^k_{t∧η^r} − Y^k_{t∧η^{k,r}}‖²_q = ∫_{t∧η^r}^{t∧η^{k,r}} { 2⟨Y^k_u − Y^k_{t∧η^r}, L(Y^k_u)⟩_q + Σ_{i=1}^n ‖A_i(Y^k_u)‖²_q } du + 2 ∫_{t∧η^r}^{t∧η^{k,r}} ⟨Y^k_u − Y^k_{t∧η^r}, A(Y^k_u)⟩_q · dB_u.
Since η^{k,r} ↓ η^r and ‖Y^k_u‖_q ≤ r for t∧η^r ≤ u ≤ t∧η^{k,r}, the first term goes to zero almost surely by the bounded convergence theorem, and the second term goes to zero in probability as k → ∞. Thus for every ε > 0,
  P( ‖Y^k_{t∧η^r} − Y^k_{t∧η^{k,r}}‖_q > ε ) → 0, k → ∞.
It follows from the above that for every ε > 0,
  P( ‖Y_{t∧η^r} − Y^k_{t∧η^{k,r}}‖_q > ε ) → 0, k → ∞.
Fix t > 0. Along a subsequence {k_i} we have, a.s. on {η^r < t}, Y^{k_i}_{η^{k_i,r}} → Y_{η^r}. We can argue (as in the proof that η^r > 0) that η^{k_i,r} = σ^{k_i,r}, and in particular that on {η^r < t},
  r = ‖Y^{k_i}_{σ^{k_i,r}} − Y_0‖_q = ‖Y^{k_i}_{η^{k_i,r}} − Y_0‖_q → ‖Y_{η^r} − Y_0‖_q.
It follows that a.s. on {η^r < t}, ‖Y_{η^r} − Y_0‖_q = r. Now we take r_k ↑ ∞. Then {η < ∞} = ⋃_{n=1}^∞ ⋂_{k=1}^∞ {η^{r_k} < n}. In particular, a.s. on {η < ∞}, lim_{r↑∞} ‖Y_{η^r}‖_q = ∞. Since q < p, the result follows. □

Proposition 5.2
Let E‖Y_0‖²_p < ∞ and let σ_ij, b_i : S_p → R be bounded, i.e. there exists K > 0 such that |σ_ij(φ)| + |b_i(φ)| ≤ K for all φ ∈ S_p, i = 1,…,d, j = 1,…,n. Then η = ∞ a.s.

Proof:
It suffices to show that for every t > 0, P(η^r ≤ t) → 0 as r ↑ ∞. As in the proof of the previous Proposition,
  {η^r < t} ⊂ {η^r < t, ‖Y_{η^r} − Y_0‖_q = r} ⊂ {‖Y_{η^r∧t} − Y_0‖_q ≥ r}.
Hence
  P(η^r ≤ t) ≤ (1/r²) E‖Y_{t∧η^r} − Y_0‖²_q.
Further, by the boundedness assumptions on σ_ij, b_i and the monotonicity inequality given in Theorem (3.1) (applied with φ = ψ = Y_s), we have
  2⟨Y_s, L(Y_s)⟩_q + Σ_{i=1}^n ‖A_i(Y_s)‖²_q ≤ C ‖Y_s‖²_q,
where C > 0 is a constant independent of r, depending on d, p and K. Using Gronwall's inequality we get
  E‖Y_{t∧η^r} − Y_0‖²_q ≤ 2{ e^{Ct} E‖Y_0‖²_q + E‖Y_0‖²_q }.
Hence, dividing by r² and letting r ↑ ∞ in the above inequality, we conclude that P(η^r ≤ t) → 0 as r ↑ ∞ for every t > 0. □

Theorem 5.3  Let p ∈ R, q ≤ p − 1. Let σ_ij, b_i : S_p → R, i = 1,…,d, j = 1,…,n, be as in Theorem (4.3). Then for every Y_0 : Ω → S_p which is F_0-measurable and independent of B, equation (18) has a unique Ŝ_p-valued strong (local) solution (Y_t, η). Further, η > 0 a.s. and is maximal in the sense that, almost surely, lim_{t↑η(ω)} ‖Y_t(ω)‖_p = ∞ on {η < ∞}. Finally, if we define (Z_t) as
  Z_t := ∫_0^{t∧η} σ(Y_s) · dB_s + ∫_0^{t∧η} b(Y_s) ds,   (24)
then (Z_t) is a continuous, (F^Y_t)-adapted, R^d-valued process such that, almost surely,
  Y_t = τ_{Z_t}(Y_0), 0 ≤ t < η.   (25)

Proof:
The proof follows by 'patching up' the solutions obtained in Theorem (4.3), as described at the beginning of Section 5. This gives us the pair (Y_t, η). That it solves equation (19) follows from Theorem (4.3) and the fact that, by construction, for any r > 0, Y_t = Y^r_t, 0 ≤ t < η^r. The maximality of the solution follows from Proposition (5.1). We note that Y_t = τ_{Z_t}(Y_0) follows from Lemma (4.2). □

To formulate the strong Markov property, we consider the solution (Y_t, η) when the initial value Y_0 is a constant, Y_0 ≡ y ∈ S_p, and denote the corresponding solution by (Y_t(y), η_y) or (Y_t(ω, y), η_y). Recall that Ŝ_p = S_p ∪ {δ}. We define the σ-field B(Ŝ_p) on Ŝ_p by: Â ∈ B(Ŝ_p) iff Â = A ∪ {δ} for some A ∈ B(S_p). A measurable function f : S_p → R is extended to Ŝ_p by defining f(δ) = 0. The resulting extension will also be denoted by f. For y
∉ S_p, we define Y_t(ω, y) = δ, t ≥ 0, ω ∈ Ω. In other words, η(ω) = 0 for such y. We now construct versions of the solution (Y_t(y), η_y) constructed in Theorem (5.3), with initial value Y_0 ≡ y ∈ S_p, which are jointly measurable in (t, ω, y). In the two Propositions below we need the approximations (Y^k_t), k ≥ 1, with Y_0 ≡ y, constructed in the proof of Theorem (4.3), which we now denote by (Y^k_t(y)), or by (Y^{r,k}_t(y)) whenever the dependence on the domain B_p(y, r) needs to be made explicit. The proofs of the following two results (Proposition (5.4) and Theorem (5.5)) are given in the Appendix.

Proposition 5.4  a) There exists a map Ỹ : [0,∞) × Ω × S_p → Ŝ_p which is B[0,∞) ⊗ F^B_∞ ⊗ B(S_p)/B(Ŝ_p) measurable and satisfies, for all t ≥ 0, y ∈ Ŝ_p, Ỹ(t, ω, y) = Y_t(ω, y) a.s. b) There exists a map η̃ : Ω × S_p → [0,∞] which is F^B_∞ ⊗ B(S_p)/B[0,∞] measurable and satisfies, for all y ∈ Ŝ_p, η̃(ω, y) = η_y(ω) a.s.

The next result represents the solution of the SPDE (19) with an initial random variable Y_0 as the composition of Y_0 with the solutions starting at y ∈ S_p. So let Y_0 : Ω → S_p be F_0-measurable. Define
  Ỹ(t, ω) := Ỹ(t, ω, Y_0(ω)), η̃(ω) := η̃(ω, Y_0(ω)).
Then we have the following result.
Theorem 5.5
Let (Y_t, η) be the solution of equation (19) with initial r.v. Y_0 independent of (B_t). Then η = η̃ a.s. and, for each t ≥ 0, Ỹ(t, ω) = Y_t(ω) a.s. on {t < η}.

We now prove uniqueness in law for equation (19), which is required to prove the strong Markov property. This follows from the Yamada–Watanabe result for SPDE's of the type (19), which we now state as the next theorem. We need some preliminaries to deal with the law of explosive solutions. We first construct an appropriate path space, i.e. a measurable space (C, C) such that if (Y_t, η) is a maximal solution on some probability space, then almost surely the paths Y_{·∧η}(ω) belong to C and the map ω → Y_{·∧η}(ω) is measurable. It is clear that the law of (Y_{·∧η}) is essentially determined on [0, η), where the paths are continuous. However, we need to distinguish between the cases η < ∞ and η = ∞. Further, although our initial conditions and the paths of the corresponding solutions lie in S_p, we will consider them as paths in S_q, where the equation holds. Thus let p ∈ R, q ≤ p −
1. Let y : [0,∞) → Ŝ_q. All such maps that we consider will be B([0,∞))/B(Ŝ_q) measurable. Define η_q(y) := inf{ s > 0 : y(s) ∉ S_q }. Let
  C_0 := { y : [0,∞) → Ŝ_q : y(0) ∈ S_p, η_q(y) > 0, and y : [0, η_q(y)) → S_q is continuous },
  C_1 := C_0 ∩ { η_q(y) < ∞ and lim_{t→η_q(y)} ‖y(t)‖_q = ∞ },  C_2 := C_0 ∩ { η_q(y) = ∞ }.
Define C := C_1 ∪ C_2. For y ∈ C, define for r > 0,
  τ_r(y) := inf{ t > 0 : ‖y(t) − y(0)‖_q > r }.
Then y ∈ C_1 implies, by continuity of y on [0, η_q(y)), that τ_r(y) < ∞ and η_q(y) = lim_{r↑∞} τ_r(y). Note that C([0,∞), S_q) = { y ∈ C : η_q(y) = ∞ }. We now define a sigma field C on C via the maps
  K_r : C → C([0,∞), S_q), r ≥ 0, K_r(y) := y(·∧τ_r),
as follows:
  C := σ{ K_r : r ≥ 0 } = σ{ K_r^{−1}(A) : A ∈ B(C([0,∞), S_q)), r ≥ 0 }.
Let (Y_t, η) be a maximal solution of equation (19), with initial value Y_0 ∈ S_p and Brownian motion (B_t), obtained in Theorem (5.3) on some probability space (Ω, F, P). Recall that this solution is obtained by pasting together the solutions (Y^r_t, η^r), r > 0, obtained in Theorem (4.3). Then we have a map Ŷ from Ω into the paths with values in Ŝ_q,
  Ŷ(ω) := ( t → Y_t(ω), t < η; t → δ, t ≥ η ).
By Proposition (5.1) it follows that almost surely η_q(Ŷ(ω)) = η(ω), and hence Ŷ(ω) ∈ C almost surely. We can redefine Ŷ on a null set so that Ŷ : Ω → C. Since η^r(ω) ≤ τ_r(Ŷ(ω)) =: τ_r(ω) < η_q(Ŷ(ω)) = η(ω), and since η^r(ω) ↑ η(ω), we have for a fixed r > 0,
  K_r(Ŷ(ω))_t = Y_{t∧τ_r(ω)}(ω) = lim_{r′↑∞} Y_{t∧τ_r∧η^{r′}}(ω).
For each r ≥
0, the map ω → K_r(Ŷ(ω)) is F^B-measurable, and hence ω → Ŷ(ω) is F^B/C measurable. Let (Y^i_t, B^i_t, η^i), i = 1, 2, be two solutions of
  dY^i_t = L(Y^i_t) dt + A(Y^i_t) · dB^i_t, 0 ≤ t < η^i,
with given initial values Y^i_0, adapted to the n-dimensional (F^i_t)-Brownian motions (B^i_t), i = 1, 2, with F^i_0-measurable initial values Y^i_0 : Ω^i → S_p independent of (B^i_t), on possibly different probability spaces (Ω^i, F^i, F^i_t, P^i). Equality in law between two random variables X^1, X^2 will be denoted by X^1 ~ X^2. For each r > 0, let η^{i,r}, r >
0, denote the stopping times up to which the solution Y^{i,r}_t lies in a ball of radius r around Y^i_0 in S_q, q ≤ p − 1, so that η^i = lim_{r→∞} η^{i,r}, the explosion time. Let P^i, i = 1, 2, denote the laws of (Y^i_t) on (C, C). Let W^n := C([0,∞), R^n).

Theorem 5.6  If Y^1_0 ~ Y^2_0, then P^1 = P^2.

Proof:
Let P^{i,y}, i = 1, 2, denote the laws of the solutions (Y^{i,y}_t, η^{i,y}) of equation (19) corresponding to Y^i_0 ≡ y ∈ S_p on (C, C). Using Theorem (5.5), the independence of Y^i_0 and (B^i_t), and the definition of C, we have
  P^i(A) = ∫_{S_p} P^{i,y}(A) P_{Y^i_0}(dy),
so it suffices to show that for all y ∈ S_p, P^{1,y} = P^{2,y} on C. From the definition of C, it suffices to show that for every r >
0, the laws of (Y^{i,y}_t, η^{i,y}) composed with the map K_r : C → C([0,∞), S_q), i.e. the laws of K_r(Ŷ^{i,y}), agree on B(C([0,∞), S_q)). We fix y ∈ S_p. In what follows, we drop the explicit dependence on y in our notation. Let P^i_r(A) := P(Y^{i,r}_{·∧η^{i,r}} ∈ A), where A ∈ B(C([0,∞), S_q)). Since, as was observed above, K_r(Ŷ^i) is the almost sure limit of (Y^{i,r′}_{t∧τ_r∧η^{i,r′}}) as r′ ↑ ∞, it suffices to show that for every r > 0, P^1_r = P^2_r on B(C([0,∞), S_q)). The proof is basically the same as the proof of the finite dimensional Yamada–Watanabe result. In our case the finite dimensional diffusions are replaced by the infinite dimensional processes (Y^{i,r}_t, η^{i,r}), i = 1, 2, starting at y. We follow the proof in [22] (Chapter IV, Theorem (1.1) and its corollary), the only difference in our case being that the space C([0,∞), R^d) is replaced by the space C([0,∞), S_q), which is again a Polish space. It suffices then to show that P^1_r(A) = P^2_r(A), A ∈ B(C([0,∞), S_q)). Let Q^i_r(A × B) := P{ Y^i_{·∧η^{i,r}} ∈ A, W_· ∈ B }, A ∈ B(C([0,∞), S_q)), B ∈ B(W^n). Let P_W be the Wiener measure on W^n. Let Q^i_r(ω, A) be a disintegration of Q^i_r w.r.t. P_W, i.e. for i = 1, 2,
  Q^i_r(A × B) = ∫_B Q^i_r(ω, A) P_W(dω),
A ∈ B(C([0,∞), S_q)) and B ∈ B(W^n). Then, as in the proof of Theorem (1.1) in [22], using pathwise uniqueness, there exists a measurable map F_r : W^n → C([0,∞), S_q) such that Q^{i,y}_r(ω, A) = δ_{F_r(ω)}(A) a.e. ω (P_W), for i = 1,
2. In particular it follows that
  P^1_r(A) = Q^1_r(A × W^n) = Q^2_r(A × W^n) = P^2_r(A). □

Having defined measurable versions of (Y_t(y), η_y), we can now define the transition probability function P(t, y, A), for t ≥ 0, y ∈ Ŝ_p and A ∈ B(Ŝ_p), in the usual way:
  P(t, y, A) := P(Y_t(y) ∈ A) = I_{S_p}(y){ P(Y_t(y) ∈ A, t < η_y) + P(Y_t(y) ∈ A, t ≥ η_y) } + I_{{δ}}(y) P(Y_t(y) ∈ A, t ≥ η_y).
From Proposition (5.4) it follows that, for fixed A ∈ B(Ŝ_p), the map (t, y) → P(t, y, A) is jointly measurable, and that P(t, y, ·) is a probability measure for fixed t ≥ 0 and y ∈ Ŝ_p. For y ∈ S_p we will write
  Y_t(y) = y + Ȳ_t(y), 0 ≤ t < η_y(ω),
where
  Ȳ_t(y) = ∫_0^t A(Ȳ_s + y) · dB_s + ∫_0^t L(Ȳ_s + y) ds   (26)
for 0 ≤ t < η_y. We can then formulate the strong Markov property of the process (Y_t(y)) as follows.

Theorem 5.7
Let T : Ω → [0,∞] be an (F^Y_t)-stopping time. Then for each y ∈ S_p, a.s. on {T < ∞},
  P(Y_{T+t}(y) ∈ A | F^Y_T) = P(t, Y_T, A) for t ≥ 0, A ∈ B(Ŝ_p).

Proof:
We first consider the case y ∈ S_p and A ⊆ S_p. Let f : Ŝ_p → R be a bounded measurable function with f(δ) = 0. Then, with y ∈ S_p and η := η_y,
  E[f(Y_{T+t}(y)) | F^Y_T] = E[f(Y_{T+t}(y))(I_{(T<η)} + I_{(T≥η)}) | F^Y_T]
   = E[f(Y_{T+t}(y)) I_{(T<η)} | F^Y_T]
   = E[f(Y_{T+t}(y)) I_{(T<η)} I_{(T+t<η)} | F^Y_T]
   = I_{(T<η)} E[f(Y_T + Ŷ_t(y)) I_{(t<η̄)} | F^Y_T],
where Ŷ_t(y) := Y_{t+T}(y) − Y_T(y), 0 ≤ t < η̄; := δ, t ≥ η̄, and η̄ := η − T on {η > T} and := ∞ otherwise. We note that on {T < ∞},
  Ŷ_t(y) = Ŷ_t(z)|_{z=Y_T(y)}, 0 ≤ t < η̂_z, z = Y_T(y),
where (Ŷ_t(z), η̂_z) satisfies equation (26) (with respect to P(·|T < ∞)), with y replaced by z and with (B_t) replaced by the Brownian motion (B̂_t) := (I_{(T<∞)}(B_{T+t} − B_T)); here η̂_z is the explosion time for the process z + Ŷ_t(z), which, by maximality, satisfies η ≡ η_y = T + η̂_{Y_T(y)} on {η_y > T}. Since the latter Brownian motion is independent of F_T, we have (using Theorem (5.5))
  LHS above = I_{(T<η)} E[f(z + Ŷ_t(z)) I_{(t<η̂_z)}]|_{z=Y_T(y)} = I_{(T<η)} E[f(Y_t(z)) I_{(t<η_z)}]|_{z=Y_T(y)} = P_t f(Y_T(y))
on the set {T < η}, where we have used the uniqueness in law for equation (26), which follows from the previous theorem. We now consider the case y = δ. Then both sides of the equation in the statement of the theorem reduce to I_A(δ). Next let y ∈ S_p, A = {δ}. We have
  P(Y_{T+t}(y) ∈ {δ} | F^Y_T) = P{ (Y_{T+t}(y) = δ) ∩ ((T < η) ∪ (T ≥ η)) | F^Y_T } = I_{{T≥η, Y_T(y)=δ}} + I_{{T<η}} P(t > η̂_z)|_{z=Y_T(y)} = P(t, Y_T, {δ}),
where we have used the independence of the Brownian motion (B̂_t) and F^Y_T in the second equality. This completes the proof.
□

It is clear from the relation Y_t = τ_{Z_t}(y), 0 ≤ t < η_y, with (Z_t) as in equation (23), that the path properties of the processes (Y_t) and (Z_t) are closely related, although they live in different spaces. In particular, as already observed in Proposition (3.12) of [39], corresponding to the case where σ, b are given by linear functionals on S_p, the explosions of (Z_t) are related to the convergence of Y_t to zero in the weak topology of S′, and this correspondence is pathwise. It is easy to see that the result of Proposition (3.12) of [39] extends to the more general framework of Theorem (5.3) above. We then have the following result.

Proposition 5.8
Let σ_ij, b_i be as in Theorem 4.3, with Y_0 ≡ y ∈ S_p, y ≠ 0. Let (Y_t(y), η_y) be the unique maximal solution of equation (19), and let (Z_t(y)) be given by equation (23), with Y_t(y) = τ_{Z_t(y)}(y), 0 ≤ t < η_y. Fix ω ∈ Ω. Then |Z_t(ω, y)| → ∞ as t → η_y(ω) whenever Y_t(ω, y) → 0 weakly in S′ as t → η_y(ω). Conversely, suppose one of the following two conditions is satisfied:
1. y ∈ L^p(R^d), p ≥ 1;
2. y has compact support.
Then Y_t(ω, y) → 0 weakly in S′ whenever |Z_t(ω, y)| → ∞ as t → η_y(ω).

Proof:
The proof is the same as that of Proposition (3.12) of [39]. The proof for the case y ∈ L^p, p ≥ 1, p ≠ 2, is also the same as the case p = 2, with some obvious changes. □

Remark 5.9
Note that when η_y < ∞ and |Z_t(ω, y)| → ∞, then by the above Proposition Y_t(ω, y) → 0 weakly in S′ as t → η_y, while by Proposition (5.1), ‖Y_t(y)‖_p → ∞.

Our main existence and uniqueness result, viz. Theorem (5.3), applies to a number of different situations. In this section we give some examples of these applications. In what follows we use the fact that if p > d/4 then δ_x ∈ S_{−p} (see [38], Theorem (4.1)); for such p we also note that every φ ∈ S_p is a continuous function.

Example 1
Let p > d/4 + 1. Then note that S_p ⊂ C²(R^d), the space of two times continuously differentiable functions on R^d (see Theorem 4.1, [38]). Let (Y_t)_{0≤t<η} be the unique S_p-valued strong solution of equation (18) with initial condition y ∈ S_p, given by Theorem (5.3). Then almost surely, for t < η, it is given by a C²(R^d) function, say x → Y_t(ω, x), and we also have
  ∂^α Y_t(ω, x) = ⟨δ_x, ∂^α Y_t(ω)⟩ = (−1)^{|α|} ⟨∂^α δ_x, Y_t(ω)⟩ for |α| ≤
2. In particular, acting on both sides of (19) by δ_x ∈ S_{−p}, we get, for each t < η and x ∈ R^d, almost surely,
  Y_t(ω, x) := ⟨δ_x, Y_t(ω)⟩ = y(x) + ∫_0^t L(Y_s)(ω, x) ds + ∫_0^t A(Y_s)(ω, x) · dB_s,   (27)
where the integrands in the RHS of the above equation are well defined processes for each x, and the stochastic integrals are well defined. Since Y_t = τ_{Z_t}(y), t < η, with (Z_t) as in (23), in particular Y_t(x) = y(x − Z_t), x ∈ R^d. Since y ∈ S_p, p > d/4 + 1, it is in C²(R^d), and the Itô formula applied to y(x − Z_t) also yields the RHS of (27). Thus in this case (Y_t)_{t≥0} ≡ { Y_t(x) : t ≥ 0, x ∈ R^d } gives the unique classical solution of the SPDE (18) when p > d/4 + 1. We also note that the Fourier transform f ∈ S_p → f̂ ∈ S_p is a unitary map on the complexified Hermite–Sobolev spaces S_p(C) (see [45]). Hence we get from the above that the Fourier transform Ŷ_t of Y_t is given as Ŷ_t = ŷ e^{i⟨·, Z_t⟩}, where ⟨·,·⟩ is the inner product in R^d and the RHS represents the product of the tempered distribution ŷ, the Fourier transform of y, with the bounded C^∞ function x → e^{i⟨x, Z_t⟩}. Note that for each x ∈ R^d, Ŷ_t(x) is a process, and it is easily seen that it satisfies a linear SDE obtained by taking the Fourier transform of equation (19).

Example 2
The connection between solutions of equation (19) and the solutions of the finite dimensional SDE (1) was shown in [39]. Let (Z_t) be as in equation (23). Then it follows, as in [39], that the process (X^x_t) defined by X^x_t := x + Z_t(τ_x y), 0 ≤ t < η, solves the equation
  dX_t = σ̄(X_t) · dB_t + b̄(X_t) dt, X_0 = x,   (28)
where, for z ∈ R^d, σ̄_ij(z) := σ_ij(τ_z(y)), b̄_i(z) := b_i(τ_z(y)), j = 1,…,n, i = 1,…,d, and y ∈ S_p acts as a fixed parameter. Special cases arise when σ_ij, b_i : S_p → R are continuous linear functionals on S_p, i.e. they are given by elements of S_{−p}, and consequently
  σ̄_ij(z) := ⟨σ_ij, τ_z y⟩, b̄_i(z) := ⟨b_i, τ_z y⟩,
where ⟨·,·⟩ denotes the duality between S_{−p} and S_p. Note that when σ_ij, b_i, y are functions in L²(R^d), then
  σ̄_ij(z) = ∫ σ_ij(w + z) y(w) dw = σ_ij ∗ ỹ(z),
where ∗ denotes convolution and ỹ(z) := y(−z); similarly b̄_i(z) = b_i ∗ ỹ(z). When p < −d/4, we can take y = δ ∈ S_p and σ_ij, b_i ∈ S_{−p} ⊂ C(R^d), the space of real valued continuous functions on R^d, and then σ̄_ij(z) = σ_ij(z), b̄_i(z) = b_i(z).

Remark 6.1  The weak existence of solutions to the Itô SDE (28) can be combined with the pathwise uniqueness of solutions to equation (18), when σ, b are in S_{−p}, to yield pathwise unique solutions of (28). The weak existence is obtained whenever the coefficients σ̄_ij, b̄_i in (28) are bounded and continuous. On the other hand, any two solutions of (28) with the same Brownian motion give rise, via Lemma (4.2), to corresponding solutions of (18), forcing the former solutions to be the same (see Theorem (3.3) of [40]). We can vary the construction in the above example to get strong solutions in the case of Lipschitz continuous functions. We do this in the following one dimensional example. The general finite dimensional case can be handled by considering finitely many equations like (19).

Example 3
Let d = 1, p > 1/4, and let σ_{1j} ≡ σ_j, b_1 ≡ b : R → R, j = 1,…,n, be Lipschitz functions. For x ∈ R define b̂(x, ·), σ̂_j(x, ·) : S_p → R by σ̂_j(x, y) := σ_j(y(x)) and b̂(x, y) := b(y(x)). Note that, under the assumptions on p, the elements of S_p are continuous functions. Then we note that for y_1, y_2 ∈ S_p,
  |σ̂_j(x, y_1) − σ̂_j(x, y_2)| ≤ K ‖y_1 − y_2‖_p,
with a similar inequality for b̂, where the constant K depends on x. Let L and A be the operators as in equations (3) and (2), with σ_j, b replaced by σ̂_j(x, ·) and b̂(x, ·). Then for any fixed initial value y ∈ S_p, equation (18) has a unique S_p-valued strong solution, which we denote by (Y_t(x, y)). We then have Y_t(x, y) := τ_{Z_t(x,y)}(y), where (Z_t(x, y)) is given by (23) with σ and b there replaced by σ̂_j(x, ·), b̂(x, ·) respectively and y ∈ S_p as defined above. Then it follows, as in Example 2, that (Z_t(x, y)) solves the ordinary SDE
  dZ_t = σ(y(x − Z_t)) · dB_t + b(y(x − Z_t)) dt, Z_0 = 0.   (29)
Let σ̄_j(z) := σ̂_j(x, τ_z(y)) = σ_j((τ_z y)(x)) = σ_j(y(x − z)), with a similar expression for b̄(z). Then X_t(x) := x − Z_t(x, y) solves
  dX_t = σ̄(X_t) · dB_t + b̄(X_t) dt, X_0 = x.   (30)
Any two (B_t)-adapted solutions of equation (30) will give rise to two solutions of equation (29), which in turn (via Lemma (4.2)) give rise to two solutions of (18). The same arguments also imply that there is local uniqueness in equation (30), up to a stopping time; i.e. local uniqueness up to a stopping time in equation (18) implies local uniqueness up to a stopping time in equation (30). If now we consider a sequence of elements y_k ∈ S_p, k ≥
1, satisfying y k ( z ) = z, | z − x | ≤ k then a localisation argument implies that the corresponding solutions ( X kt ( x ))satisfies X kt ( x ) = X k +1 t ( x ) , t ≤ τ k where τ k is the exit time of ( X kt ( x )) fromthe ball { z : | z − x | ≤ k } . One can then patch up the solutions ( X kt ( x )) , k ≥ dX t = σ ( X t ) · dW t + b ( X t ) dt (31) X = x, when the coefficients σ j , b, j = 1 , · · · , n , are given Lipschitz continuous func-tions. Example 4
In this example we consider martingale problems in the sense of Stroock and Varadhan, associated with a second order differential operator $\bar L$ with coefficients $\bar\sigma_{ij}$ and $\bar b_i$, $i,j = 1,\dots,d$, which are bounded and continuous functions on $\mathbb{R}^d$. If in addition they belong to $\mathcal{S}_p$, $p > d + 1$, we can solve the SDE (18) with $\sigma_{ij}$ and $b_i$ given by the linear functionals on $\mathcal{S}_{-p}$ corresponding to $\bar\sigma_{ij}$ and $\bar b_i$, $i,j = 1,\dots,d$, and initial condition $\delta_x \in \mathcal{S}_{-p}$. In this situation we have indeed a unique strong solution to the Itô SDE (1). In case we know only that $\bar\sigma_{ij}$ and $\bar b_i$, $i,j = 1,\dots,d$, are bounded and continuous, then, since they are tempered distributions, there exists $p > 0$ such that they belong to $\mathcal{S}_{-p}$. In this case we still have strong solutions of (18) with $\eta = \infty$ (Proposition (5.2)) for initial conditions $y \in \mathcal{S}_p$, $p > 0$, but $\delta_x \notin \mathcal{S}_p$. Below we show that when $y_n \to \delta_x$ weakly in $\mathcal{S}'$ and $Y_t(y_n) = \tau_{Z^n(t)}(y_n)$ are the solutions of (18), then the laws $\{P_x^n\}$ of the processes $(x + Z^n(t))$ converge weakly to $P_x$, the solution of the martingale problem for $\bar L$ starting at $x$, provided the latter is well posed.

We have, for $z \in \mathbb{R}^d$,

$$\bar L\varphi(z) = \frac{1}{2}\sum_{i,j=1}^d (\bar\sigma\bar\sigma^t)_{ij}(z)\,\partial^2_{ij}\varphi(z) + \sum_{i=1}^d \bar b_i(z)\,\partial_i\varphi(z). \qquad (32)$$

On the other hand, consider the SPDE (18) with coefficients $\sigma_{ij}, b_i : \mathcal{S}_p \to \mathbb{R}$ given by $\sigma_{ij}(\phi) = \langle\bar\sigma_{ij}, \phi\rangle$ and a similar expression for $b_i(\phi)$, $\phi \in \mathcal{S}_p$. Let $y_n \in \mathcal{S}_p \cap C(\mathbb{R}^d)$, $y_n \to \delta_x$ weakly for a fixed $x \in \mathbb{R}^d$. Let $(Y_t^n)$, $Y_t^n := Y_t(y_n)$, denote the unique $\mathcal{S}_p$ valued solution to (18) with initial condition $Y_0^n = y_n$. Then $Y_t^n = \tau_{Z_t^n}(y_n)$, where $(Z_t^n)$ comes from equation (23) with $(Y_t)$ replaced by $(Y_t^n)$. Let $\bar\sigma_{ij}^n(z) := \langle\bar\sigma_{ij}, \tau_z(y_n)\rangle$, $\bar b_i^n(z) := \langle\bar b_i, \tau_z(y_n)\rangle$.
Then

$$Z_t^n = \int_0^t \bar\sigma^n(Z_s^n)\cdot dB_s + \int_0^t \bar b^n(Z_s^n)\,ds, \qquad (33)$$

where we have used the notation $\bar\sigma^n = (\bar\sigma_{ij}^n)$ and $\bar b^n = (\bar b_i^n)$ for the diffusion and drift coefficients respectively. Let $P^n$ be the law of $(Z_t^n)$ on $C([0,\infty), \mathbb{R}^d)$ and $Z_t(\omega) := \omega(t)$ the coordinate process. For $\varphi \in \mathcal{S}$ let

$$\bar L^n\varphi(z) := \frac{1}{2}\sum_{i,j}\big(\bar\sigma^n(\bar\sigma^n)^t\big)_{ij}(z)\,\partial^2_{ij}\varphi(z) + \sum_i \bar b_i^n(z)\,\partial_i\varphi(z).$$

Let $s < t$ and let $G$ be a bounded, continuous, $\mathcal{F}_s$-measurable function of the path $\omega \in C([0,\infty), \mathbb{R}^d)$, depending on finitely many time coordinates. For $f \in \mathcal{S}$, we have by Itô's formula

$$E^{P^n}\Big[\Big(f(Z_t) - f(Z_s) - \int_s^t \bar L^n f(Z_u)\,du\Big)\,G\Big] = 0.$$

Suppose now that $P^n \to P$ weakly on $C([0,\infty), \mathbb{R}^d)$. Let $\bar L^x$ be the operator in (32) wherein $\bar\sigma_{ij}(z), \bar b_i(z)$ are replaced with $\bar\sigma_{ij}(x+z), \bar b_i(x+z)$, $x \in \mathbb{R}^d$ fixed. We then have:

$$E^{P}\Big[\Big(f(Z_t) - f(Z_s) - \int_s^t \bar L^x f(Z_u)\,du\Big)\,G\Big] = 0. \qquad (34)$$

To see this, first note that the integrand is a bounded continuous function on $C([0,\infty), \mathbb{R}^d)$. Further, as $n \to \infty$, we have $\bar\sigma_{ij}^n(z) \to \bar\sigma_{ij}(x+z)$, $\bar b_i^n(z) \to \bar b_i(x+z)$. Moreover,

$$\sup_{1\le i,j\le d}\,\sup_n\,\sup_z\,\big(|\bar\sigma_{ij}^n(z)| + |\bar b_i^n(z)|\big) < \infty. \qquad (35)$$

Our claim now follows by using the Skorokhod mapping theorem and the bounded convergence theorem. In particular, it follows that any weak limit $P$ of the sequence $\{P^n\}$ solves the martingale problem for $\bar L^x$ starting at zero. We then have the following theorem.

Theorem 6.2
Suppose the martingale problem for $\bar L$ starting at $x$ has a unique solution $P_x$. Let $y_n \in \mathcal{S}_p \cap C(\mathbb{R}^d)$, $y_n \to \delta_x$ weakly. Let $(Z_t^n)$ be as above and let $P_x^n$ be the law of $(x + Z_t^n)$. Then $P_x^n \to P_x$ weakly.

Proof:
Replacing $f$ by $\tau_{-x}f$, we see that if $P$ is any weak limit of the family $\{P^n, n \ge 1\}$, where $P^n$ is the law of $(Z_t^n)$, then under $P$, $X_t^x := x + Z_t$ solves the martingale problem for $\bar L$ starting at $x$, and hence the law of $(X_t^x)$ must be $P_x$. The tightness of the laws $\{P^n, n \ge 1\}$, viz. for every $\epsilon > 0$, $T > 0$,

$$\lim_{\delta\downarrow 0}\,\sup_n\, P^n\Big(\sup_{0\le s,t\le T,\, |s-t|\le\delta} |Z_t - Z_s| > \epsilon\Big) = 0,$$

and hence the tightness of $\{P_x^n\}$, follows easily from Doob's maximal inequality, the Burkholder-Davis-Gundy inequalities and the uniform bounds in (35): these yield a constant $C$, depending only on $T$ and the bound in (35), such that $E^{P^n}|Z_t - Z_s|^4 \le C|t-s|^2$ for $0 \le s \le t \le T$, uniformly in $n$. $\Box$

Example 5
In this example we consider the non-linear evolution equation

$$\partial_t Y_t = L(Y_t), \qquad Y_0 = y. \qquad (36)$$

Here $y \in \mathcal{S}_p$ for some $p \in \mathbb{R}$ and $L : \mathcal{S}_p \to \mathcal{S}_{p-1}$ is given by equation (3). By a solution we mean a pair $(Y_t, \eta)$, where $\eta > 0$ and $(Y_t)$ is a continuous function $t \to Y_t : [0,\eta) \to \mathcal{S}_p$ satisfying the following equation in $\mathcal{S}_{p-1}$:

$$Y_t = y + \int_0^t L(Y_s)\,ds \qquad (37)$$

for $0 \le t < \eta$. Suppose $(Y_t)_{0\le t<\eta}$ is an $\mathcal{S}_p$ valued solution. Define the time dependent linear operators $\bar A_t, \bar L_t : \mathcal{S}_p \to \mathcal{S}_{p-1}$ as follows:

$$\bar L_t(\varphi) = \frac{1}{2}\sum_{i,j=1}^d (\sigma\sigma^t)_{ij}(Y_t)\,\partial^2_{ij}\varphi - \sum_{i=1}^d b_i(Y_t)\,\partial_i\varphi,$$

$$\bar A_t(\varphi)(h) = -\sum_{j=1}^n h_j \sum_{i=1}^d \sigma_{ij}(Y_t)\,\partial_i\varphi,$$

where $h = (h_1,\dots,h_n)$. Note that the coefficients are now deterministic but time dependent. Define the $\mathbb{R}^d$-valued process $(Z_t)_{0\le t<\eta}$ by

$$Z_t = \int_0^t \sigma(Y_s)\cdot dB_s + \int_0^t b(Y_s)\,ds$$

for $0 \le t < \eta$. Since the integrands are deterministic, $(Z_t)$ is a Gaussian process. Let $\bar Y_t := \tau_{Z_t}(y)$, $0 \le t < \eta$. Then $(\bar Y_t)_{0\le t<\eta}$ is the unique $\mathcal{S}_p$-valued solution of the equation

$$d\bar Y_t = \bar L_t(\bar Y_t)\,dt + \bar A_t(\bar Y_t)\cdot dB_t, \qquad \bar Y_0 = y.$$

Let $\varphi(t) := E\bar Y_t$, $0 \le t < \eta$, where we note that $E\|\bar Y_t\|_p < \infty$. Then $\varphi(t)$ satisfies the linear evolution equation

$$\partial_t\varphi(t) = \bar L_t\varphi(t), \qquad \varphi(0) = y, \qquad (38)$$

in the interval $0 \le t < \eta$. Since $\bar L_t$ has constant (in space) coefficients, it satisfies the monotonicity inequality, and hence equation (38) has a unique $\mathcal{S}_p$-valued solution. Hence we have the following stochastic representation of solutions of equation (36).

Theorem 6.3
Let $p \in \mathbb{R}$, $y \in \mathcal{S}_p$, and let $\sigma_{ij}, b_i : \mathcal{S}_p \to \mathbb{R}$ be bounded and measurable. Let $(Y_t)_{0\le t<\eta}$ be an $\mathcal{S}_p$-valued solution of equation (36). Then we have

$$Y_t = E\tau_{Z_t}(y) = y * p_{Z_t}, \qquad (39)$$

where $p_{Z_t}$ is the density of $Z_t$ and $*$ denotes convolution.

Example 6

The previous example may be generalised. Consider the following equation, viz.

$$Y_t = y + \int_0^t L(y, Y_s)\,ds, \qquad (40)$$

of which (37) becomes a special case when there is no dependence on $y$ in the operator $L$. However, we will make a departure from the $L$ in (37) by requiring $L$ to act on $y$ as a partial differential operator, with the coefficients $\sigma_{ij}, b_i$ depending on $Y_s$ in the right hand side above. In other words,

$$L(y_1, y_2) := \frac{1}{2}\sum_{i,j=1}^d (\sigma\sigma^t)_{ij}(y_1, y_2)\,\partial^2_{ij}y_1 - \sum_{i=1}^d b_i(y_1, y_2)\,\partial_i y_1,$$

where $y_1, y_2 \in \mathcal{S}_{-p}$, $p \in \mathbb{R}$. If $\mu(dz)$ is a probability measure, then $\mu \in \mathcal{S}_{-p}$, $p > d$, and we can define the non-linear convolution $(L(\cdot, y_2)\circ\mu)(y_1)$ ([39], Section 5, where the notation in definition (5.1) is slightly different and the coefficients $\sigma_{ij}$ and $b_i$ do not depend on $y_2$) as

$$(L(\cdot, y_2)\circ\mu)(y_1) := \int_{\mathbb{R}^d} L(\tau_z y_1, y_2)\,\mu(dz),$$

whenever the integral exists as a Bochner integral in $\mathcal{S}_{-p}$. An interesting situation arises when the measure $\mu$ arises as the marginals of a stochastic process $(Z_t)$. Let $\{\mu_s(dz), s \ge 0\}$ be the corresponding family of probability measures. Consider the case when $\sigma_{ij}(\cdot,\cdot), b_i(\cdot,\cdot)$ are uniformly bounded and $(Z_t)$ satisfies the equation

$$Z_t := \int_0^t \sigma(\tau_{Z_s}y,\, y\circ\mu_s)\cdot dB_s + \int_0^t b(\tau_{Z_s}y,\, y\circ\mu_s)\,ds, \qquad (41)$$

where $\mu_t(dz) := P(Z_t \in dz)$ is the law of $Z_t$ and $y \in \mathcal{S}_{-p}$. Then, applying Itô's formula and taking expectations, we get that $Y_t := E\tau_{Z_t}y = y\circ\mu_t =: \psi(t,y)$ satisfies the non-linear evolution equation

$$\partial_t\psi(t,y) = \psi(t, L(y, \psi(t,y))), \qquad t \ge 0, \qquad \psi(0,y) = y, \qquad (42)$$

with $\psi(t, L(y,y_2)) := (L(\cdot, y_2)\circ\mu_t)(y)$ and $y_2 = \psi(t,y)$.
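As an illustration outside the formal development, the representation (39) of Theorem 6.3 can be checked numerically in the constant coefficient case $d = 1$: there $Z_t = bt + \sigma B_t$ is Gaussian with density $p_{Z_t}$, and $E\,\tau_{Z_t}(y)(x) = E\,y(x - Z_t)$ admits a closed form for a Gaussian profile $y$. All numerical choices below (the profile $y(u) = e^{-u^2}$ and the parameter values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Constant coefficients: Z_t = b*t + sigma*B_t is N(b*t, sigma^2 * t).
sigma, b, t, x = 1.0, 0.5, 1.0, 0.3
y = lambda u: np.exp(-u ** 2)  # hypothetical initial condition y

# Monte Carlo estimate of (E tau_{Z_t} y)(x) = E y(x - Z_t).
Z = b * t + sigma * np.sqrt(t) * rng.standard_normal(10 ** 6)
mc = float(np.mean(y(x - Z)))

# Closed form of the convolution (y * p_{Z_t})(x): for Gaussian y and
# Gaussian density p_{Z_t} with mean m = b*t and variance v = sigma^2 * t,
# the integral evaluates to (1 + 2v)^{-1/2} exp(-(x - m)^2 / (1 + 2v)).
m, v = b * t, sigma ** 2 * t
exact = float(np.exp(-(x - m) ** 2 / (1 + 2 * v)) / np.sqrt(1 + 2 * v))
```

The Monte Carlo mean and the convolution $(y * p_{Z_t})(x)$ agree to sampling accuracy, illustrating (39) in this special case.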
Equation (37) becomes a special case of (42) when $\sigma_{ij}(y_1,y_2), b_i(y_1,y_2)$ are independent of $y_1$. When we consider $y = \delta_x$, where $x \in \mathbb{R}^d$ is fixed, then $X_t := x + Z_t$ and we get the McKean-Vlasov equation from (41).

Example 7
Let $p > d$. We now consider the Feynman-Kac formula for the solution of the equation

$$\partial_t u(t,x) = \bar L u(t,x) + V(x)u(t,x), \qquad u(0,x) = f(x),$$

where

$$\bar L\phi(x) := \frac{1}{2}\sum_{i,j=1}^d (\bar\sigma\bar\sigma^t)_{ij}(x)\,\partial^2_{ij}\phi(x) + \sum_{i=1}^d \bar b_i(x)\,\partial_i\phi(x), \qquad \phi \in \mathcal{S}.$$

Here we assume $f, V, \bar\sigma_{ij}, \bar b_i$ are given functions in $\mathcal{S}_p$. Then we define $L$ as in equation (3), with coefficients $\sigma_{ij}(\cdot)$ and $b_i(\cdot)$ given via the duality between $\mathcal{S}_p$ and $\mathcal{S}_{-p}$ as $\sigma_{ij}(y) = \langle y, \bar\sigma_{ij}\rangle$, $b_i(y) = \langle y, \bar b_i\rangle$, where $\bar\sigma_{ij}$ and $\bar b_i$ are as above and $y \in \mathcal{S}_{-p}$.

Denoting by $(X_t^x)$ the diffusion corresponding to $\bar L$ and by $(Y_t(y))$, $y = \delta_x$, the corresponding lift on $\mathcal{S}_{-p}$ satisfying equation (18), it is easy to see that the solution $u(t,x)$ arises from a transformation on the path space $C([0,\infty), \mathcal{S}_{-p})$, viz.

$$Y_\cdot(y) \to \hat Y_\cdot(y) := Y_\cdot(y)\,e^{-\int_0^\cdot c(s,Y)\,ds},$$

where $c(s,y) := -\langle y_s, V\rangle$, $y \in C([0,\infty), \mathcal{S}_{-p})$. In particular, since $Y_t(\delta_x) = \delta_{X_t^x}$, we have $c(s,Y) = -V(X_s^x)$. For ease of calculations, we assume $\eta = \infty$, a.s. Next, with $u(t,x) := P_t^V f(x)$, where $(P_t^V)$ is the Feynman-Kac semigroup, we have

$$u(t,x) = E\big(e^{\int_0^t V(X_s^x)\,ds} f(X_t^x)\big) = E\big\langle e^{\int_0^t V(X_s^x)\,ds}\,\delta_{X_t^x}, f\big\rangle = E\big\langle e^{-\int_0^t c(s,Y)\,ds}\,Y_t(y), f\big\rangle = E\langle \hat Y_t(y), f\rangle.$$

We can show that the process $(\hat Y_t)$ satisfies an SPDE with time dependent coefficients $\hat L(s,y), \hat A_i(s,y)$, $i = 1,\dots,n$, $s \ge 0$, $y \in C([0,\infty), \mathcal{S}_{-p})$, given in the form

$$\hat L(s,y) := \frac{1}{2}\sum_{i,j=1}^d (\hat\sigma\hat\sigma^t)_{ij}(s,y)\,\partial^2_{ij}y_s - \sum_{i=1}^d \hat b_i(s,y)\,\partial_i y_s - \hat c(s,y)\,y_s,$$

$$\hat A_i(s,y) := -\sum_{j=1}^d \hat\sigma_{ji}(s,y)\,\partial_j y_s.$$
Here the coefficients $\hat\sigma_{ij}, \hat b_i, \hat c$ are induced on $[0,\infty)\times C([0,\infty), \mathcal{S}_{-p})$ by the coefficients $\sigma_{ij}, b_i$ of the operators $L, A_i$ appearing in equation (18) and by the transformation $y \to y^c$, given by the unique solution of the equation

$$y_t^c = y_t\,e^{-\int_0^t c(s,\, y^c)\,ds}$$

on the path space $C([0,\infty), \mathcal{S}_{-p})$, satisfying $\hat\sigma_{ij}(t,y) = \sigma_{ij}(y_t^c)$, $\hat b_i(t,y) = b_i(y_t^c)$, $\hat c(t,y) = c(t, y^c)$. It is then easy to see, using integration by parts and the fact that $(Y_t)$ solves (18), that $\hat Y_t$ satisfies the SPDE

$$d\hat Y_t = \hat L(t, \hat Y)\,dt + \hat A(t, \hat Y)\cdot dB_t, \qquad \hat Y_0 = y. \qquad (43)$$

The uniqueness of solutions of the above SPDE can be proved using the uniqueness of the solutions of the equation $y_t^c = y_t\,e^{-\int_0^t c(s, y^c)\,ds}$ and the 'invertibility' of the map $y \to y^c$. The details can be seen in [41].

Translation invariance also appears to be a reflection of a possibly more basic 'duality' relation between the finite dimensional SDE and the corresponding SPDE. Let $p > 0$, $f \in \mathcal{S}_p$, $y \in \mathcal{S}_{-p}$. Let $\sigma_{ij}, b_i, (Y_t(y), \eta)$ be as in Theorem (5.3), with $Y_0 \equiv y$. We consider the case $\eta = \infty$. Let $(Z_t(y))$ be as in equation (23). We observe the following duality relation between $Y_t(y)$ and $Z_t(y)$, viz.

$$E\langle y, \tau_{-Z_t(y)} f\rangle = E\langle \tau_{Z_t(y)}y, f\rangle = E\langle Y_t(y), f\rangle,$$

whenever the relevant expectations are finite; the first equality holds since $\langle\tau_z y, f\rangle = \langle y, \tau_{-z}f\rangle$ for every fixed $z \in \mathbb{R}^d$.

Finally, we note that in the model we have introduced in this paper, it becomes meaningful to talk about diffusions with coefficients $\sigma$ and $b$, in the state $y$, for any tempered distribution $y$. The 'state $y$' becomes an initial state for the SPDE but, in the context of the SDE, allows for the representation of more complex initial states than just $y = \delta_x$. The distribution $y$ is, more intuitively, thought of as an initial distribution of the mass of the solvent particles in the diffusion model.
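The Feynman-Kac representation $u(t,x) = E\big(e^{\int_0^t V(X_s^x)\,ds} f(X_t^x)\big)$ used in Example 7 also admits a quick numerical sketch. This is an illustration only; the choices $\bar\sigma \equiv 1$, $\bar b \equiv 0$ (so $X^x$ is a Brownian motion started at $x$), constant $V$, and $f(u) = u^2$ are hypothetical, made so that $u(t,x) = e^{Vt}(x^2 + t)$ is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: X^x a standard Brownian motion started at x,
# constant potential V, f(u) = u^2, so u(t, x) = exp(V*t) * (x^2 + t).
x, t, V = 0.5, 1.0, 0.3
n_paths, n_steps = 200_000, 100
dt = t / n_steps

# Simulate paths and accumulate the Feynman-Kac weight exp(int_0^t V(X_s) ds).
X = np.full(n_paths, x)
log_w = np.zeros(n_paths)
for _ in range(n_steps):
    log_w += V * dt                       # exact here, since V is constant
    X += np.sqrt(dt) * rng.standard_normal(n_paths)

u_mc = float(np.mean(np.exp(log_w) * X ** 2))   # Monte Carlo u(t, x)
u_exact = float(np.exp(V * t) * (x ** 2 + t))
```

For a non-constant potential, the line updating `log_w` would use $V(X_s)\,dt$ along the path; the constant case is chosen only to make the comparison exact.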
An interpretation of 'translation invariance' in the case of non-interacting particles could be that it is linked by 'symmetry principles' to the conservation of the mass of the particles.

Thus we may interpret the parameter $y \in \mathcal{S}'$ in the process $(X_t^{x,y})$ by saying that the diffusion with parameters $\bar\sigma, \bar b$ and starting at $x$ is in the state $y$, or that the diffusion with parameters $\bar\sigma, \bar b$ is in the state $(x,y)$. This, of course, corresponds to the process $(Y_t(\tau_x y))$ being in the initial state $\tau_x y$. When we consider questions such as ergodicity and the existence of an invariant measure, we replace the (initial) deterministic state $x$ by a random state with a distribution $\mu$. In the context of our results, this raises the question of whether the existence of an invariant measure and questions of ergodicity can be answered by randomising both $x$ and $y$. We refer to [4], Chapter (5), for some results in this direction.

We present the proofs of Proposition (5.4) and Theorem (5.5).
Proof of Proposition (5.4):

Given $r > 0$, we construct maps $(\tilde Y^r(t,\omega,y), \tilde\eta^r(\omega,y))$, jointly measurable in $(t,\omega,y)$ and $(\omega,y)$ respectively, such that for each $(t,y)$, $\tilde Y^r(t,\omega,y) = Y_t^r(\omega,y)$ a.s. on the set $\{t < \eta^{r,y}\}$. Here, for $y \in \mathcal{S}_p$, $(Y_t^r(y), \eta^{r,y})$ is the solution of equation (19) constructed in Theorem (4.3) with $Y_0 \equiv y$. In the construction below we drop the superscript $r$ until further notice. Recall from Section 2 that $\{h_{n,p};\, n \in \mathbb{Z}_+^d\}$ is the ONB in the Hilbert space $\mathcal{S}_p$. Since for each $y \in \mathcal{S}_p$,

$$Y_t(\omega,y) = y + \sum_{|n|=0}^\infty \langle Y_t(\omega,y), h_{n,p}\rangle_p\, h_{n,p},$$

where $\sum_{|n|=0}^\infty \langle Y(t,\omega,y), h_{n,p}\rangle_p^2 < \infty$ for all $t \ge 0$, $\omega \in \Omega$, it suffices to show the existence of the map $\tilde\eta(\omega,y)$ and, for each $n$, a $\mathcal{B}[0,\infty)\otimes\mathcal{F}_\infty^B\otimes\mathcal{B}(\mathcal{S}_p)/\mathcal{B}(\mathbb{R})$ measurable map $\tilde Y^n(t,\omega,y)$ satisfying

$$\sum_{|n|=0}^\infty (\tilde Y^n(t,\omega,y))^2 < \infty$$

for all $(t,\omega,y)$, and satisfying, for each $t \ge 0$, $y \in \mathcal{S}_p$, $\tilde\eta(\omega,y) = \eta^y(\omega)$ and $\tilde Y^n(t,\omega,y) = \langle Y_t(y), h_{n,p}\rangle_p$ almost surely on the set $\{t < \eta^y\}$. One can then define $\tilde Y(t,\omega,y)$ by

$$\tilde Y(t,\omega,y) := y + \sum_{|n|=0}^\infty \tilde Y^n(t,\omega,y)\, h_{n,p}, \quad t < \tilde\eta(\omega,y); \qquad := \delta, \quad t \ge \tilde\eta(\omega,y).$$

Recall the process $(Y_t^k)$ satisfying equation (20) (which we now denote by $(Y_t^k(y))$ to make the dependence on $y$ explicit), constructed for each $k \ge 1$, $y \in \mathcal{S}_p$, in the proof of Theorem (4.3), satisfying for each $t \ge 0$, $y \in \mathcal{S}_p$,

$$E\|Y_t(y) - Y_{t\wedge\eta^y}^k(y)\|_p \to 0, \qquad k \to \infty,$$

where $\eta^y := \lim_{k\to\infty}\eta^{k,y}$, as in the proof of Theorem (4.3). It is easy to see that there exist jointly measurable maps $(t,\omega,y) \to \tilde Y^k(t,\omega,y)$ and $(\omega,y) \to \tilde\eta(\omega,y)$ satisfying, for each $t \ge 0$, $\tilde Y^k(t,\omega,y) = Y_t^k(y)$ and $\tilde\eta(\omega,y) = \eta^y(\omega)$ almost surely. For the first map we define $\tilde Y^k(t,\omega,y) := \tau_{Z^k(t,\omega,y)}(y)$, where the $\mathbb{R}^d$ valued process $(Z^k(t,\omega,y))$ is a jointly measurable version which is indistinguishable, for each $y$, from the process $(Z_t^k(y))$ defined in terms of $(Y^{k-1}(t,\omega,y))$ in the proof of Theorem (4.3). Note that $Y^0(t,\omega,y) \equiv y$. Thus the joint measurability of $Z^k(t,\omega,y)$ follows from that of the stochastic integrals defining $Z_t^k$ and an induction argument. Consequently, the map $\tilde Y^k(t,\omega,y)$ is, for each $y$, indistinguishable from the process $Y_t^k(y) = \tau_{Z_t^k(y)}(y)$. To define the map $\tilde\eta(\omega,y)$, we first define

$$\tilde\sigma_j(\omega,y) := \inf\{s > 0 : \|\tilde Y^j(s,\omega,y) - y\|_q > r\}.$$

It is easy to check that the map $(\omega,y) \to \tilde\sigma_j(\omega,y)$ is jointly measurable and satisfies, for each $y$, $\tilde\sigma_j(\omega,y) = \sigma_j^y(\omega)$ almost surely, where we have explicitly denoted the dependence on $y$ of the stopping time $\sigma_j$ constructed in the proof of Theorem (4.3).
The map $\tilde\eta(\omega,y)$ is now constructed from the maps $\tilde\sigma_j(\omega,y)$ in the same way as $\eta^y$ was constructed from the $\sigma_j$'s and $\eta_k$'s in the proof of Theorem (4.3), viz. $\tilde\eta_j(\omega,y) := \tilde\sigma_1(\omega,y)\wedge\cdots\wedge\tilde\sigma_j(\omega,y)$ and $\tilde\eta(\omega,y) := \lim_{j\to\infty}\tilde\eta_j(\omega,y)$.

Fix $t \ge 0$, $y \in \mathcal{S}_p$. Since $E\|Y_t(y) - Y_{t\wedge\eta^y}^k(y)\|_q \to 0$, $q \le p - 1$, there exists a subsequence $\{n_k\}$ such that $Y_{t\wedge\eta^y}^{n_k}(y) \to Y_t(y)$ almost surely in $\mathcal{S}_q$. In particular, for all $n = (n_1,\dots,n_d)$ and for almost all $\omega$,

$$\langle Y_{t\wedge\eta^y}^{n_k}(y), h_{n,q}\rangle_q \to \langle Y_t(y), h_{n,q}\rangle_q.$$

We now construct a set $G$ in the product $(t,\omega,y)$-space, using the subsequence $\{n_k\}$ above, as follows. Let $G := \bigcap_n G_n \cap G_0$, where the intersection is over all $n = (n_1,\dots,n_d)$, $n_i \in \mathbb{Z}_+$, and where the sets $G_0, G_n$ are defined as

$$G_0 := \{(t,\omega,y) : \lim_{k\to\infty}\|\tilde Y^{n_k}(t\wedge\tilde\eta,\omega,y)\|_q < \infty\}$$

and

$$G_n := \{(t,\omega,y) : \lim_{k\to\infty}\langle \tilde Y^{n_k}(t\wedge\tilde\eta,\omega,y), h_{n,q}\rangle_q \text{ exists}\}.$$

Fix $n = (n_1,\dots,n_d)$. Define

$$\bar Y^n(t,\omega,y) := \lim_{k\to\infty}\langle \tilde Y^{n_k}(t\wedge\tilde\eta(\omega,y),\omega,y), h_{n,q}\rangle_q, \quad (t,\omega,y) \in G; \qquad := 0 \text{ otherwise}.$$

Then, from the joint measurability of $G$, $\tilde\eta$ and $\tilde Y^{n_k}(t,\omega,y)$, we get that the map $(t,\omega,y) \to \bar Y^n(t,\omega,y)$ is jointly measurable. If $(t,\omega,y) \in G$, then

$$\sum_{|n|=0}^\infty (\bar Y^n(t,\omega,y))^2 \le \lim_{k\to\infty}\sum_{|n|=0}^\infty \langle\tilde Y^{n_k}(t\wedge\tilde\eta,\omega,y), h_{n,q}\rangle_q^2 \le \lim_{k\to\infty}\|\tilde Y^{n_k}(t\wedge\tilde\eta,\omega,y)\|_q^2 < \infty.$$

Since for each $t \ge 0$, $y \in \mathcal{S}_p$, we have $\tilde Y^{n_k}(t\wedge\tilde\eta,\omega,y) = Y_{t\wedge\eta^y}^{n_k}(y)$ almost surely, it follows from the preceding definitions that $\bar Y^n(t,\omega,y) = \langle Y_t(\omega,y), h_{n,q}\rangle_q$ almost surely on $\{t < \eta^y\}$. We can now define

$$\tilde Y^n(t,\omega,y) := (2|n| + d)^{p-q}\,\bar Y^n(t,\omega,y), \qquad n \in \mathbb{Z}_+^d.$$

Note that this is not the same as $\tilde Y^k(t,\omega,y)$, defined earlier in this proof, which were approximations to $Y_t(y)$.
Then $\tilde Y^n(t,\omega,y) = \langle Y_t(y), h_{n,p}\rangle_p$ on $\{t < \eta^y\}$ almost surely and, since $q \le p - 1$,

$$\sum_{|n|=0}^\infty (\tilde Y^n(t,\omega,y))^2 \le \sum_{|n|=0}^\infty (\bar Y^n(t,\omega,y))^2 < \infty$$

for every $(t,\omega,y)$. Then, as mentioned above, we construct the map $(t,\omega,y) \to \tilde Y(t,\omega,y)$ using $\tilde Y^n(t,\omega,y)$ as its $n$-th Fourier-Hermite coefficient, $n \in \mathbb{Z}_+^d$.

Since the maps $\tilde Y, \tilde\eta$ constructed above depend on $r > 0$, we now make the dependence explicit and patch up the maps $\tilde Y^r \equiv \tilde Y(t,\omega,y)$, $\tilde\eta^r \equiv \tilde\eta(\omega,y)$ for different $r > 0$. Let $r_k \uparrow \infty$. We denote $\tilde Y^k(t,\omega,y) := \tilde Y^{r_k}(t,\omega,y)$, $\tilde\eta^k(\omega,y) := \tilde\eta^{r_k}(\omega,y)$. Let $H_0 := \{(\omega,y) : \tilde\eta^k(\omega,y) \le \tilde\eta^{k+1}(\omega,y),\ k = 1,\dots\}$, and define

$$\tilde\eta(\omega,y) := \lim_{k\to\infty}\tilde\eta^k(\omega,y), \quad (\omega,y) \in H_0; \qquad := \infty \text{ otherwise}.$$

Then, for fixed $y$, $\tilde\eta(\omega,y) = \eta^y(\omega)$ almost surely follows from the corresponding equality $\tilde\eta^k(\omega,y) = \eta^{r_k,y}(\omega)$, almost surely. Thus part b) in the statement of the theorem holds. For $k = 1,\dots$, define

$$H_k := \{(t,\omega,y) : t < \tilde\eta^k(\omega,y),\ \tilde Y^{k+1}(t,\omega,y) = \tilde Y^k(t,\omega,y)\},$$

and $H := \bigcup_{n\ge 1}\bigcap_{k\ge n} H_k$. We define

$$\tilde Y(t,\omega,y) := \tilde Y^k(t,\omega,y) \text{ if } (t,\omega,y) \in H; \qquad := 0 \text{ otherwise}.$$

That, for fixed $(t,y)$, $\tilde Y(t,\omega,y) = Y_t(y)$ almost surely on $\{t < \eta^y(\omega)\}$ follows from the fact that, almost surely, $\tilde Y^k(t,\omega,y) = Y_t^{r_k}(y)$ on $\{t < \eta^{r_k,y}(\omega)\}$. Clearly, $\tilde Y(t,\omega,y)$ can be extended as an $\hat{\mathcal{S}}_p := \mathcal{S}_p \cup \{\delta\}$ valued map in an obvious manner for $t \ge \tilde\eta$, to satisfy part a) of the theorem. $\Box$

Proof of Theorem (5.5):
The proof consists in checking, at each stage of the construction of the measurable maps $(t,\omega,y) \to \tilde Y(t,\omega,y)$ carried out in the previous theorem, that composition with $Y_0(\omega)$ at time $t$ yields the corresponding (approximate) solution with initial value $Y_0$ at time $t$.

Recall that for $r > 0$, $\tilde Y^r(t,\omega,y), \tilde\eta^r(\omega,y)$ are the measurable versions of $(Y_t^r(y), \eta^{r,y})$ constructed in the previous proposition. It is sufficient to show that if $\tilde Y^r(t,\omega) := \tilde Y^r(t,\omega,Y_0(\omega))$, $\tilde\eta^r(\omega) := \tilde\eta^r(\omega,Y_0(\omega))$, then almost surely $\tilde Y^r(t,\omega) = Y_t^r(\omega)$ on $\{t < \eta^r\}$ and $\tilde\eta^r(\omega) = \eta^r(\omega)$. Once this is done, for each $r > 0$ and $r_k \uparrow \infty$, define $\tilde\eta^k(\omega) := \tilde\eta^{r_k}(\omega)$, $\tilde Y^k(t,\omega) := \tilde Y^{r_k}(t,\omega)$ and observe that, by pathwise uniqueness of (19), for each $t$, almost surely, $(t,\omega,Y_0(\omega)) \in H$, $(\omega,Y_0(\omega)) \in H_0$, where the sets $H, H_0$ are as in the previous proposition. Then $\tilde Y(t,\omega) = Y_t(\omega)$ on $\{t < \eta(\omega)\}$, almost surely, follows by pathwise uniqueness.

Recall the approximations $(Y_t^{r,k}(y), \eta^{r,k,y})$, $k \ge 1$, for fixed $r > 0$, of the solutions $(Y_t^r(y), \eta^{r,y})$ of equation (19) with initial value $Y_0 = y \in \mathcal{S}_p$, in a ball of radius $r$ around $y$. It is clear, by induction and the uniqueness of the linear equation (20) satisfied by $Y_t^{r,k}(\omega,y)$, and the independence of $Y_0$ and $(B_t)$, that for fixed $t$, $(\tilde Y^{r,k}(t,\omega,Y_0(\omega)), \tilde\eta^{r,k}(\omega,Y_0(\omega)))$ is the $k$-th approximant to $(Y_t^r, \eta^r)$, the solutions of equation (19) on $[0,\eta^r)$, with initial value $Y_0$. Note that $\eta^r(\omega) = \lim_{k\uparrow\infty}\eta^{r,k,Y_0(\omega)}(\omega) = \lim_{k\uparrow\infty}\tilde\eta^{r,k}(\omega,Y_0(\omega)) = \tilde\eta^r(\omega,Y_0(\omega)) =: \tilde\eta^r$, almost surely, where the second equality follows from the preceding observation. Thus, from the above observations, we have for each $t$,

$$E\|Y^r(t\wedge\eta^r) - \tilde Y^{r,k}(t\wedge\eta^r, Y_0)\|_q^2 \to 0, \qquad k \to \infty.$$

It remains to identify the limit as $k \to \infty$ of $\tilde Y^{r,k}(t\wedge\eta^r, Y_0)$ with $\tilde Y^r(t,\omega,Y_0(\omega))$. From the above $L^2$ convergence we get the subsequential convergence

$$\tilde Y^{r,n_k}(t\wedge\eta^r, Y_0) \to Y^r(t\wedge\eta^r)$$

almost surely. Let $G$ be the set constructed in the proof of Proposition (5.4), with the above subsequence. Let $\bar Y^{r,n}(t,\omega,y)$ and $\tilde Y^{r,n}(t,\omega,y)$ be as in the previous proposition, where we have now made the dependence on $r$ explicit. Then, for fixed $t$ and almost every $\omega$, $(t,\omega,Y_0(\omega)) \in G$, and hence on $\{t < \eta^r\}$,

$$\tilde Y^{r,n}(t,\omega,Y_0(\omega)) = (2|n|+d)^{p-q}\,\bar Y^{r,n}(t,\omega,Y_0(\omega)) = (2|n|+d)^{p-q}\lim_{k\to\infty}\langle\tilde Y^{r,n_k}(t,\omega,Y_0(\omega)), h_{n,q}\rangle_q = \langle Y^r(t,\omega), h_{n,p}\rangle_p,$$

where the last equality follows from the almost sure subsequential convergence in $\mathcal{S}_p$. Since this is true for all $n = (n_1,\dots,n_d)$, we have

$$\tilde Y^r(t,\omega,Y_0(\omega)) = \sum_n \tilde Y^{r,n}(t,\omega,Y_0(\omega))\,h_{n,p} = \sum_n \langle Y^r(t,\omega), h_{n,p}\rangle_p\,h_{n,p} = Y^r(t,\omega)$$

almost surely on $\{t < \eta^r(\omega)\}$. $\Box$

References

[1] Agram, Nacira and Oksendal, Bernt (2016): Model uncertainty stochastic mean field control; arXiv:1611.01385v1.

[2] Barbu, Viorel and Rockner, Michael
(2018): From nonlinear Fokker-Planck equations to solutions of distribution dependent SDE; arXiv:1808.10706v2.

[3] Bass, R. (1997): Diffusions and Elliptic Operators, Probability and its Applications, Springer.

[4] Bhar, S. (2015): Semimartingales and Stochastic Partial Differential Equations in the Space of Tempered Distributions, Ph.D. Thesis, Indian Statistical Institute, Kolkata.

[5] Bhar, S. and Rajeev, B. (2015): Differential operators on Hermite-Sobolev spaces, Proc. Indian Acad. Sci. Math. Sci., 125(1):113-125.

[6] Bhar, S., Rajeev, B. and Sarkar, B. (2017, preprint): Solutions of SPDE's associated with a stochastic flow.

[7] Bhatia, Rajendra, Bhat, Abhay G. and Parthasarathy, K.R. (Eds.) (2012): Collected Papers of S.R.S. Varadhan, Hindustan Book Agency.

[8] Björk, Tomas and Christensen, Bent Jesper (1999): Interest rate dynamics and consistent forward rate curves. Math. Finance, 9(4):323-348.

[9] Björk, Tomas and Svensson, Lars (2001): On the existence of finite-dimensional realizations for nonlinear forward rate models. Math. Finance, 11(2):205-243.

[10] Borkar, V.S. (1984): Evolution of Brownian particles in an interacting medium, Stochastics, 14, 33-79.

[11] Chiang, T.S., Kallianpur, G. and Sundar, P. (1991): Propagation of chaos and the McKean-Vlasov equation in duals of nuclear spaces. Appl. Math. Optim., 24, 55-83.

[12] Da Prato, G. and Zabczyk, J. (1992): Stochastic Equations in Infinite Dimensions, Cambridge University Press.

[13] Doob, J.L. (1953): Stochastic Processes. Wiley, New York.

[14] Filipović, Damir, Tappe, Stefan and Teichmann, Josef (2014): Invariant manifolds with boundary for jump-diffusions. Electron. J. Probab., 19, no. 51, 28 pp.

[15] Evans, Lawrence C. (2010): Partial Differential Equations, Graduate Studies in Mathematics, Vol. 19, AMS.

[16] Feller, W. (1954): Diffusion processes in one dimension. Trans. Amer. Math. Soc., 1-31.

[17] Freidlin, Mark I. and Wentzell, Alexander D. (2012): Random Perturbations of Dynamical Systems, Springer.

[18] Gawarecki, Leszek and Mandrekar, Vidyadhar (2011): Stochastic Differential Equations with Applications to Stochastic Partial Differential Equations, Springer.

[19] Gawarecki, L., Mandrekar, V. and Rajeev, B. (2008): Linear stochastic differential equations in the dual of a multi-Hilbertian space. Theory Stoch. Process., 14(2):28-34.

[20] Gawarecki, L., Mandrekar, V. and Rajeev, B. (2009): The monotonicity inequality for linear stochastic partial differential equations, Infinite Dimensional Analysis, Quantum Probability and Related Topics, 12(4):575-591.

[21] Hairer, M. (2014): A theory of regularity structures. Invent. Math., 198, 269-504.

[22] Ikeda, N. and Watanabe, S. (1981): Stochastic Differential Equations and Diffusion Processes, North Holland.

[23] Itô, K. (1946): On a stochastic integral equation, Proc. Japan Acad., 32-35.

[24] Itô, K. (1984): Foundations of Stochastic Differential Equations in Infinite Dimensional Spaces, CBMS 47, SIAM.

[25] Kallianpur, G. and Xiong, J. (1995): Stochastic Differential Equations in Infinite Dimensional Spaces. Lecture Notes-Monograph Series, Vol. 26, Institute of Mathematical Statistics.

[26] Kolmogorov, A.N. (1931): Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung, Math. Ann., 104, 415-458.

[27] Kotelenez, Peter M. and Kurtz, Thomas G. (2010): Macroscopic limits for stochastic partial differential equations of McKean-Vlasov type, Probab. Theory Relat. Fields, 146, 189-222.

[28] Krylov, N.V. and Rozovskii, B.L. (1979): Stochastic evolution equations, Itogi Nauki i Tekhniki, Seriya Sovremennye Problemy Matematiki, Vol. 14, 71-146.

[29] Krylov, N.V. (1995): Introduction to the Theory of Diffusion Processes, Translations of Mathematical Monographs 142, American Mathematical Society.

[30] Kunita, H. (1990): Stochastic Flows and Stochastic Differential Equations. Cambridge University Press.

[31] Lévy, P. (1948): Processus stochastiques et mouvement brownien. Gauthier-Villars, Paris.

[32] Lions, P.L. (1982): Generalized Solutions of Hamilton-Jacobi Equations, Pitman.

[33] Metivier, M. (1982): Semimartingales: a Course on Stochastic Processes, de Gruyter.

[34] Oksendal, B. (2010): Stochastic Differential Equations: An Introduction with Applications (Universitext), Springer.

[35] Pardoux, É. and Peng, S. (1992): Backward SDEs and quasilinear PDEs, in Stochastic Partial Differential Equations and their Applications, B.L. Rozovskii and R. Sowers (eds.), LNCIS 176, 200-217, Springer.

[36] Rajeev, B. (2001): From Tanaka formula to Ito formula: distributions, tensor products and local times. Séminaire de Probabilités XXXV, LNM 1755, 371-389.

[37] Rajeev, B. and Thangavelu, S. (2003): Probabilistic representation of solutions to the heat equation. Proceedings of the Indian Academy of Sciences (Math. Sci.).

[38] Rajeev, B. and Thangavelu, S. (2008): Probabilistic representations of solutions of the forward equations. Potential Analysis, 28, 139-162.

[39] Rajeev, B. (2013): Translation invariant diffusions in the space of tempered distributions, Indian Journal of Pure and Applied Mathematics, 44(2), 231-258.

[40] Rajeev, B. and Suresh Kumar, K. (2016): A class of stochastic differential equations with pathwise unique solutions, Indian Journal of Pure and Applied Mathematics, 47(2), 343-355.

[41] Rajeev, B. (2019): On the Feynman-Kac formula. http://arxiv.org/abs/1904.12160.

[42] Stroock, D.W. (2008): Partial Differential Equations for Probabilists. Cambridge University Press.

[43] Stroock, D.W. and Varadhan, S.R.S. (1969): Diffusion processes with continuous coefficients, I and II, Comm. Pure Appl. Math., XXII, 345-400 and 479-530.

[44] Tappe, Stefan (2017): Invariance of closed convex cones for stochastic partial differential equations, Journal of Mathematical Analysis and Applications.

[45] Thangavelu, S. (1993): Lectures on Hermite and Laguerre Expansions, Math. Notes 42, Princeton University Press, Princeton.

[46] Üstünel, A.S. (1982): A generalization of Itô's formula. J. Funct. Anal., 47(2):143-152.

[47] Walsh, J.B. (1986): An Introduction to Stochastic Partial Differential Equations. Lecture Notes in Mathematics 1180, Springer.

[48] Wiener, N. (1923): Differential space.