Path independence of the additive functionals for McKean-Vlasov stochastic differential equations with jumps
HUIJIE QIAO AND JIANG-LUN WU
1. School of Mathematics, Southeast University, Nanjing, Jiangsu 211189, China
2. Department of Mathematics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, [email protected]
3. Department of Mathematics, Computational Foundry, Swansea University, Bay Campus, Swansea SA1 8EN, [email protected]
Abstract.
In this article, the path independence of additive functionals of McKean-Vlasov stochastic differential equations with jumps is characterised by nonlinear partial integro-differential equations involving $L$-derivatives with respect to probability measures, introduced by P.-L. Lions. Our result extends the recent work [16] by Ren and Wang, where the McKean-Vlasov stochastic differential equations concerned are driven by Brownian motions.

1. Introduction
Since the seminal work [11, 12], there has been substantial interest in McKean-Vlasov stochastic differential equations, i.e. stochastic differential equations whose coefficients depend on the law of the solution; they are also referred to as mean-field stochastic differential equations, see, e.g., [2] and most recently [1, 9] (and references therein). Very recently, Ren and Wang [16] obtained an interesting result characterising the path independent additive functionals of McKean-Vlasov stochastic differential equations driven by Brownian motion in terms of space-distribution partial differential equations, which extends the earlier work [18, 19] in this direction.

The object of this paper is to extend [16] to the same type of equations driven by compensated Poisson martingale measures (and Brownian motion). We characterise the path independence of additive functionals of McKean-Vlasov stochastic differential equations with jumps by certain partial integro-differential equations involving $L$-derivatives with respect to probability measures, following our previous work [14, 15], where stochastic differential equations with jumps in finite and infinite dimensions were studied, respectively. Let us also mention the further interesting works [17, 10], where characterisation theorems are established for the path independence of additive functionals of stochastic differential equations driven by $G$-Brownian motion, as well as of stochastic differential equations driven by Brownian motion with non-Markovian (i.e. random) coefficients. It is of course very interesting to extend these two cases to the corresponding equations with jumps, which we will consider in forthcoming work.

Our results merit a further comment. We prove an Itô formula for McKean-Vlasov stochastic differential equations with jumps whose proof is simpler than those in [5] and [9] and does not involve any auxiliary abstract probability space, which makes it more readily applicable. Moreover, we compare our main results with those in [16] and [14] and show that ours are indeed more general.

The rest of the paper is organised as follows. In the next section, we set up the framework for McKean-Vlasov stochastic differential equations. In Section 3, we first derive the Itô formula for the solutions of the McKean-Vlasov stochastic differential equations concerned (its proof is given in the Appendix at the end of the paper), then prove our characterisation theorem, analyse some special cases and finally compare our results with some known results.

AMS Subject Classification (2010):

Keywords: McKean-Vlasov stochastic differential equations with jumps; the Itô formula; additive functionals; partial integro-differential equations.

*This work was partly supported by NSF of China (No. 11001051, 11371352, 11671083) and China Scholarship Council under Grant No. 201906095034.

2. Preliminaries
2.1. Notation.
In this subsection, we introduce the notation used in the sequel.

For convenience, we shall use $|\cdot|$ and $\|\cdot\|$ for norms of vectors and matrices, respectively. Furthermore, let $\langle\cdot,\cdot\rangle$ denote the scalar product in $\mathbb{R}^d$ and $A^*$ the transpose of a matrix $A$.

Let $\mathscr{B}(\mathbb{R}^d)$ be the Borel $\sigma$-algebra on $\mathbb{R}^d$ and $\mathcal{M}(\mathbb{R}^d)$ the space of all probability measures on $\mathscr{B}(\mathbb{R}^d)$, carrying the usual topology of weak convergence. Let $\mathcal{M}_2(\mathbb{R}^d)$ be the collection of all probability measures $\mu$ on $\mathscr{B}(\mathbb{R}^d)$ satisfying
$$\mu(|\cdot|^2) := \int_{\mathbb{R}^d} |x|^2\,\mu(\mathrm{d}x) < \infty.$$
We endow $\mathcal{M}_2(\mathbb{R}^d)$ with the topology induced by the following (Wasserstein) metric:
$$\rho(\mu_1,\mu_2) := \inf_{\pi\in\mathcal{C}(\mu_1,\mu_2)}\left(\int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^2\,\pi(\mathrm{d}x,\mathrm{d}y)\right)^{1/2},\qquad \mu_1,\mu_2\in\mathcal{M}_2(\mathbb{R}^d),$$
where $\mathcal{C}(\mu_1,\mu_2)$ denotes the set of all probability measures on $\mathbb{R}^d\times\mathbb{R}^d$ whose marginal distributions are $\mu_1$ and $\mu_2$, respectively. Then $(\mathcal{M}_2(\mathbb{R}^d),\rho)$ is a Polish space.

2.2. McKean-Vlasov stochastic differential equations with jumps.
In this subsection, we introduce McKean-Vlasov stochastic differential equations with jumps and the path independence of a type of additive functionals.

Let $(\Omega,\mathscr{F},\mathbb{P})$ be a complete filtered probability space. Let $(B_t)_{t\geq 0}$ be an $m$-dimensional Brownian motion. Let $(\mathbb{U},\|\cdot\|_{\mathbb{U}})$ be a finite dimensional normed space with its Borel $\sigma$-algebra $\mathscr{U}$, and let $\nu$ be a $\sigma$-finite measure on $(\mathbb{U},\mathscr{U})$. We fix $\mathbb{U}_1 := \{u\in\mathbb{U}:\|u\|_{\mathbb{U}}\leq\alpha\}$, where $\alpha>0$ is such that $\nu(\mathbb{U}\setminus\mathbb{U}_1)<\infty$ and $\int_{\mathbb{U}_1}\|u\|_{\mathbb{U}}^2\,\nu(\mathrm{d}u)<\infty$. Let $N(\mathrm{d}t,\mathrm{d}u)$ be an integer-valued Poisson random measure on $(\Omega,\mathscr{F},\mathbb{P})$ with intensity $\mathbb{E}N(\mathrm{d}t,\mathrm{d}u)=\mathrm{d}t\,\nu(\mathrm{d}u)$. Denote
$$\tilde N(\mathrm{d}t,\mathrm{d}u) := N(\mathrm{d}t,\mathrm{d}u)-\mathrm{d}t\,\nu(\mathrm{d}u),$$
that is, $\tilde N(\mathrm{d}t,\mathrm{d}u)$ is the compensated martingale measure of $N(\mathrm{d}t,\mathrm{d}u)$. Moreover, $B_\cdot$ and $N(\mathrm{d}t,\mathrm{d}u)$ are assumed to be mutually independent. Fix $T>0$ and let $\{\mathscr{F}_t\}_{t\in[0,T]}$ be the filtration generated by $(B_t)_{t\geq 0}$ and $N(\mathrm{d}t,\mathrm{d}u)$, augmented by a $\sigma$-field $\mathscr{F}_0$, i.e.
$$\mathscr{F}_t^0 := \sigma\{B_s,\,N((0,s],A):0\leq s\leq t,\,A\in\mathscr{U}\},\qquad \mathscr{F}_t := \Big(\bigcap_{s>t}\mathscr{F}_s^0\Big)\vee\mathscr{F}_0,\quad t\in[0,T],$$
where $\mathscr{F}_0\subset\mathscr{F}$ has the following properties:
(i) $(B_t)_{t\geq 0}$ and $N(\mathrm{d}t,\mathrm{d}u)$ are independent of $\mathscr{F}_0$;
(ii) $\mathcal{M}_2(\mathbb{R}^d)=\{\mathbb{P}\circ\xi^{-1}:\xi\in L^2(\mathscr{F}_0;\mathbb{R}^d)\}$;
(iii) $\mathscr{F}_0\supset\mathscr{N}$, where $\mathscr{N}$ is the collection of all $\mathbb{P}$-null sets.

Now, consider the following McKean-Vlasov stochastic differential equation with jumps on $\mathbb{R}^d$:
$$\mathrm{d}X_t = b(t,X_t,\mathscr{L}_{X_t})\,\mathrm{d}t + \sigma(t,X_t,\mathscr{L}_{X_t})\,\mathrm{d}B_t + \int_{\mathbb{U}_1} f(t,X_{t-},\mathscr{L}_{X_t},u)\,\tilde N(\mathrm{d}t,\mathrm{d}u),\quad t\in[0,T],\qquad(1)$$
where $\mathscr{L}_{X_t}$ denotes the distribution of $X_t$ under $\mathbb{P}$. Here the coefficients $b:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^d$, $\sigma:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^{d\times m}$ and $f:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1\to\mathbb{R}^d$ are all Borel measurable.
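Eq.(1) can be explored numerically by replacing the unknown law $\mathscr{L}_{X_t}$ with the empirical measure of an interacting particle system. The sketch below is ours, not the paper's: it uses one spatial dimension, an assumed mean-reverting drift $b(t,x,\mu)=-(x-\int y\,\mu(\mathrm{d}y))$, a constant $\sigma$, and compensated compound-Poisson jumps; the function name `mckean_vlasov_euler` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mckean_vlasov_euler(x0, T=1.0, n_steps=200, n_particles=500,
                        sigma=0.3, jump_rate=1.0, jump_size=0.1):
    # Interacting-particle Euler scheme for a 1-d analogue of Eq.(1):
    # the law L_{X_t} is replaced by the empirical measure of the particle
    # system, which here enters only through its mean.
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for _ in range(n_steps):
        m = x.mean()                                  # empirical stand-in for L_{X_t}
        dB = rng.normal(0.0, np.sqrt(dt), n_particles)
        n_jumps = rng.poisson(jump_rate * dt, n_particles)
        dJ = jump_size * (n_jumps - jump_rate * dt)   # compensated jump increment
        x = x + (-(x - m)) * dt + sigma * dB + dJ     # drift b = -(x - mean)
    return x

x_T = mckean_vlasov_euler(1.0)
# With this drift the mean of the law is preserved, so x_T.mean() stays near x0.
```

The compensation of the jump increment mirrors the role of $\tilde N$ in Eq.(1): subtracting the mean number of jumps keeps the jump part a martingale.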
We assume:

($\mathbf{H}_{b,\sigma}$) There exists an increasing function $L_1:[0,\infty)\to(0,\infty)$ such that for $t\in[0,T]$, $x_1,x_2\in\mathbb{R}^d$, $\mu_1,\mu_2\in\mathcal{M}_2(\mathbb{R}^d)$,
$$|b(t,x_1,\mu_1)-b(t,x_2,\mu_2)| + \|\sigma(t,x_1,\mu_1)-\sigma(t,x_2,\mu_2)\| \leq L_1(t)\big(|x_1-x_2|+\rho(\mu_1,\mu_2)\big),$$
and for $t\in[0,T]$,
$$|b(t,0,\delta_0)| + \|\sigma(t,0,\delta_0)\| \leq L_1(t),$$
where $\delta_0$ is the Dirac measure at $0$.

($\mathbf{H}_f$) There exists an increasing function $L_2:[0,\infty)\to(0,\infty)$ such that for $t\in[0,T]$, $x_1,x_2\in\mathbb{R}^d$, $\mu_1,\mu_2\in\mathcal{M}_2(\mathbb{R}^d)$, $u\in\mathbb{U}_1$,
$$|f(t,x_1,\mu_1,u)-f(t,x_2,\mu_2,u)| \leq L_2(t)\|u\|_{\mathbb{U}}\big(|x_1-x_2|+\rho(\mu_1,\mu_2)\big),$$
and for $t\in[0,T]$, $u\in\mathbb{U}_1$,
$$|f(t,0,\delta_0,u)| \leq L_2(t)\|u\|_{\mathbb{U}}.$$

Remark 2.1.
By ($\mathbf{H}_{b,\sigma}$) and ($\mathbf{H}_f$), it holds that for $t\in[0,T]$, $x\in\mathbb{R}^d$, $\mu\in\mathcal{M}_2(\mathbb{R}^d)$,
$$|b(t,x,\mu)| + \|\sigma(t,x,\mu)\| \leq C\big(1+|x|+\mu(|\cdot|^2)^{1/2}\big),\qquad |f(t,x,\mu,u)| \leq C\|u\|_{\mathbb{U}}\big(1+|x|+\mu(|\cdot|^2)^{1/2}\big).$$

Under ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$), following the proof of [5, Theorem 3.1, Page 7], we can show that for any $s\in[0,T]$ and $X_s\in L^2(\Omega,\mathscr{F}_s,\mathbb{P};\mathbb{R}^d)$, Eq.(1) has a unique solution $(X_t)_{t\in[s,T]}$ with
$$\mathbb{E}\Big(\sup_{t\in[s,T]}|X_t|^2\Big)<\infty.\qquad(2)$$

We then introduce the following additive functional:
$$F_{s,t} := \int_s^t g_0(r,X_r,\mathscr{L}_{X_r})\,\mathrm{d}r + \int_s^t\langle g_1(r,X_r,\mathscr{L}_{X_r}),\mathrm{d}B_r\rangle + \int_s^t\!\int_{\mathbb{U}_1} g_2(r,X_{r-},\mathscr{L}_{X_r},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u) + \int_s^t\!\int_{\mathbb{U}_1} g_3(r,X_r,\mathscr{L}_{X_r},u)\,\nu(\mathrm{d}u)\,\mathrm{d}r,\quad 0\leq s<t\leq T,\qquad(3)$$
where
$$g_0:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R},\qquad g_1:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^m,$$
$$g_2:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1\to\mathbb{R},\qquad g_3:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1\to\mathbb{R},$$
are Borel measurable, $g_0(t,x,\mu)$, $g_1(t,x,\mu)$, $g_2(t,x,\mu,u)$ are continuous in $(t,x,\mu)$ and $\int_{\mathbb{U}_1}g_3(t,x,\mu,u)\,\nu(\mathrm{d}u)$ is continuous in $(t,x,\mu)$, so that $F_{s,t}$ is a well-defined semimartingale.

Definition 2.2.
The additive functional $F_{s,t}$ is called path independent if there exists a function
$$V:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$$
such that for any $s\in[0,T]$ and $X_s\in L^2(\Omega,\mathscr{F}_s,\mathbb{P};\mathbb{R}^d)$, the solution $(X_t)_{t\in[s,T]}$ of Eq.(1) satisfies
$$F_{s,t} = V(t,X_t,\mathscr{L}_{X_t}) - V(s,X_s,\mathscr{L}_{X_s}).\qquad(4)$$

2.3. L-derivative for functions on $\mathcal{M}_2(\mathbb{R}^d)$. In this subsection we recall the definition of the L-derivative for functions on $\mathcal{M}_2(\mathbb{R}^d)$. The definition was first introduced by Lions [2], who used abstract probability spaces to describe L-derivatives; here, for ease of understanding, we state it in a direct way, following [16].

Let $I$ be the identity map on $\mathbb{R}^d$. For $\mu\in\mathcal{M}_2(\mathbb{R}^d)$ and $\phi\in L^2(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d),\mu;\mathbb{R}^d)$, set $\mu(\phi):=\int_{\mathbb{R}^d}\phi(x)\,\mu(\mathrm{d}x)$. Moreover, a simple calculation shows that $\mu\circ(I+\phi)^{-1}\in\mathcal{M}_2(\mathbb{R}^d)$.

Definition 2.3. (i) A function $h:\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$ is called L-differentiable at $\mu\in\mathcal{M}_2(\mathbb{R}^d)$ if the functional
$$L^2(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d),\mu;\mathbb{R}^d)\ni\phi\mapsto h(\mu\circ(I+\phi)^{-1})$$
is Fréchet differentiable at $0\in L^2(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d),\mu;\mathbb{R}^d)$; that is, there exists a unique $\xi\in L^2(\mathbb{R}^d,\mathscr{B}(\mathbb{R}^d),\mu;\mathbb{R}^d)$ such that
$$\lim_{\mu(|\phi|^2)\to 0}\frac{h(\mu\circ(I+\phi)^{-1})-h(\mu)-\mu(\langle\xi,\phi\rangle)}{\sqrt{\mu(|\phi|^2)}}=0.$$
In this case, we denote $\partial_\mu h(\mu)=\xi$ and call it the L-derivative of $h$ at $\mu$.

(ii) A function $h:\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$ is called L-differentiable on $\mathcal{M}_2(\mathbb{R}^d)$ if the L-derivative $\partial_\mu h(\mu)$ exists for all $\mu\in\mathcal{M}_2(\mathbb{R}^d)$.

(iii) In the same way, $\partial^2_\mu h(\mu)(y,y')$ for $y,y'\in\mathbb{R}^d$ can be defined.

Next, we introduce some related spaces.
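Before the related derivative spaces are introduced, Definition 2.2 can be illustrated numerically in a classical special case (no jumps, no measure dependence); the toy equation and potential below are our choices, not the paper's. For $\mathrm{d}X_t=\mathrm{d}B_t$ and $V(t,x)=x^2$, Itô's formula gives $X_T^2-X_0^2=\int_0^T 2X_r\,\mathrm{d}B_r+\int_0^T 1\,\mathrm{d}r$, so the functional with $g_0=1$ and $g_1(x)=2x$ is path independent with potential $V$:

```python
import numpy as np

rng = np.random.default_rng(1)

def check_path_independence(n_steps=20000, T=1.0, x0=0.5):
    # Simulate dX = dB and accumulate F = int g0 dt + int <g1, dB>
    # with g0 = 1 and g1(x) = 2x, using left-endpoint (Ito) sums.
    dt = T / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), n_steps)
    x = x0 + np.concatenate(([0.0], np.cumsum(dB)))   # Euler path of X
    F = np.sum(1.0 * dt + 2.0 * x[:-1] * dB)
    return F, x[-1] ** 2 - x0 ** 2                    # F_{0,T} vs V(T,X_T) - V(0,X_0)

F, dV = check_path_independence()
# F and dV agree up to the Euler discretisation error.
```

Only equality of the two pathwise quantities is checked here; the identity (4) holds for every realisation of the driving noise, which is exactly what "path independent" means.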
Definition 2.4.
The function $h$ is said to be in $C^{(1,1)}(\mathcal{M}_2(\mathbb{R}^d))$ if $\partial_\mu h$ is continuous, for any $\mu\in\mathcal{M}_2(\mathbb{R}^d)$, $\partial_\mu h(\mu)(\cdot)$ is differentiable with continuous derivative $\partial_y\partial_\mu h:\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{R}^d\to\mathbb{R}^d\otimes\mathbb{R}^d$, and for any $y\in\mathbb{R}^d$, $\partial_\mu h(\cdot)(y)$ is differentiable with continuous derivative $\partial^2_\mu h:\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d\otimes\mathbb{R}^d$.

Definition 2.5. (i) The function $h:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$ is said to be in $C^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ if $h(t,x,\mu)$ is $C^1$ in $t\in[0,T]$ and $C^2$ in $x\in\mathbb{R}^d$ and $\mu\in\mathcal{M}_2(\mathbb{R}^d)$, respectively, and its derivatives
$$\partial_t h(t,x,\mu),\ \partial_x h(t,x,\mu),\ \partial^2_x h(t,x,\mu),\ \partial_\mu h(t,x,\mu)(y),\ \partial_y\partial_\mu h(t,x,\mu)(y),\ \partial^2_\mu h(t,x,\mu)(y,y')$$
are jointly continuous in the corresponding variable families $(t,x,\mu)$, $(t,x,\mu,y)$ or $(t,x,\mu,y,y')$.

(ii) The function $h:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$ is said to be in $C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ if $h\in C^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and all its derivatives are uniformly bounded on $[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)$. If $h\in C^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ or $h\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and $h$ is independent of $t$, we write $h\in C^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ or $h\in C_b^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$, respectively.

(iii) The function $h:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}$ is said to be in $C_{b,L}^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ if $h\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and all its derivatives are Lipschitz continuous. In addition, if $h$ is independent of $t$, we write $h\in C_{b,L}^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$.

3. Main results and related analysis
In this section, we state and prove the main results, analyse some special cases and compare our results with some known results.

3.1.
Main results and their proofs.
In this subsection, we state and prove the main results.

First of all, we prove the Itô formula, which is an important tool in the proofs below.
Proposition 3.1. (The Itô formula)
Suppose that ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$) hold. Then, if $h$ belongs to $C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and all the derivatives of $h$ in $(t,x,\mu)$ are uniformly continuous, it holds that for $t\geq 0$,
$$\mathrm{d}h(t,X_t,\mathscr{L}_{X_t}) = (\partial_t+\mathbb{L}_{b,\sigma,f})h(t,X_t,\mathscr{L}_{X_t})\,\mathrm{d}t + \langle(\sigma^*\partial_x h)(t,X_t,\mathscr{L}_{X_t}),\mathrm{d}B_t\rangle + \int_{\mathbb{U}_1}\big[h(t,X_{t-}+f(t,X_{t-},\mathscr{L}_{X_t},u),\mathscr{L}_{X_t})-h(t,X_{t-},\mathscr{L}_{X_t})\big]\,\tilde N(\mathrm{d}t,\mathrm{d}u),\qquad(5)$$
where
$$\begin{aligned}
\mathbb{L}_{b,\sigma,f}h(t,x,\mu) :=\ & \langle b,\partial_x h\rangle(t,x,\mu) + \frac12\,\mathrm{tr}\big((\sigma\sigma^*)\partial_x^2 h\big)(t,x,\mu)\\
&+ \int_{\mathbb{R}^d}\langle b(t,y,\mu),(\partial_\mu h)(t,x,\mu)(y)\rangle\,\mu(\mathrm{d}y)\\
&+ \frac12\int_{\mathbb{R}^d}\mathrm{tr}\big((\sigma\sigma^*)(t,y,\mu)\,\partial_y\partial_\mu h(t,x,\mu)(y)\big)\,\mu(\mathrm{d}y)\\
&+ \int_{\mathbb{U}_1}\Big[h\big(t,x+f(t,x,\mu,u),\mu\big)-h(t,x,\mu)-\langle f(t,x,\mu,u),\partial_x h(t,x,\mu)\rangle\Big]\,\nu(\mathrm{d}u)\\
&+ \int_{\mathbb{U}_1}\int_0^1\int_{\mathbb{R}^d}\Big\langle\partial_\mu h(t,x,\mu)\big(y+\eta f(t,y,\mu,u)\big)-\partial_\mu h(t,x,\mu)(y),\,f(t,y,\mu,u)\Big\rangle\,\mu(\mathrm{d}y)\,\mathrm{d}\eta\,\nu(\mathrm{d}u).
\end{aligned}\qquad(6)$$

Because the proof of Proposition 3.1 is rather long, we postpone it to the Appendix so as to keep the presentation compact. We are now in a position to state and prove our main result.

Theorem 3.2.
Assume that $b,\sigma,f$ satisfy ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$). Suppose that $V$ belongs to $C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and all the derivatives of $V$ in $(t,x,\mu)$ are uniformly continuous, $g_0\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R})$, $g_1\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R}^m)$, $g_2(\cdot,\cdot,\cdot,u)\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R})$ and $\int_{\mathbb{U}_1}g_3(\cdot,\cdot,\cdot,u)\,\nu(\mathrm{d}u)\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R})$. Then $F_{s,t}$ is path independent in the sense of (4) if and only if $(V,g_0,g_1,g_2,g_3)$ satisfies the partial integro-differential equation
$$\begin{cases}
(\partial_t+\mathbb{L}_{b,\sigma,f})V(t,x,\mu) = g_0(t,x,\mu) + \int_{\mathbb{U}_1} g_3(t,x,\mu,u)\,\nu(\mathrm{d}u),\\
(\sigma^*\partial_x V)(t,x,\mu) = g_1(t,x,\mu),\\
V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu) = g_2(t,x,\mu,u),
\end{cases}\qquad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1.\qquad(7)$$

Proof.
First, we prove sufficiency. For $V\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$, Proposition 3.1 yields
$$\mathrm{d}V(t,X_t,\mathscr{L}_{X_t}) = (\partial_t+\mathbb{L}_{b,\sigma,f})V(t,X_t,\mathscr{L}_{X_t})\,\mathrm{d}t + \langle(\sigma^*\partial_x V)(t,X_t,\mathscr{L}_{X_t}),\mathrm{d}B_t\rangle + \int_{\mathbb{U}_1}\big[V(t,X_{t-}+f(t,X_{t-},\mathscr{L}_{X_t},u),\mathscr{L}_{X_t})-V(t,X_{t-},\mathscr{L}_{X_t})\big]\,\tilde N(\mathrm{d}t,\mathrm{d}u).\qquad(8)$$
Inserting (7) into (8), we have
$$\mathrm{d}V(t,X_t,\mathscr{L}_{X_t}) = g_0(t,X_t,\mathscr{L}_{X_t})\,\mathrm{d}t + \int_{\mathbb{U}_1} g_3(t,X_t,\mathscr{L}_{X_t},u)\,\nu(\mathrm{d}u)\,\mathrm{d}t + \langle g_1(t,X_t,\mathscr{L}_{X_t}),\mathrm{d}B_t\rangle + \int_{\mathbb{U}_1} g_2(t,X_{t-},\mathscr{L}_{X_t},u)\,\tilde N(\mathrm{d}t,\mathrm{d}u).$$
Integrating the above equality from $s$ to $t$, one obtains (4); that is, $F_{s,t}$ is path independent.

Next, let us show necessity. On the one hand, since $F_{s,t}$ is path independent, it follows from Definition 2.2 that
$$V(t,X_t,\mathscr{L}_{X_t}) - V(0,X_0,\mathscr{L}_{X_0}) = \int_0^t g_0(r,X_r,\mathscr{L}_{X_r})\,\mathrm{d}r + \int_0^t\langle g_1(r,X_r,\mathscr{L}_{X_r}),\mathrm{d}B_r\rangle + \int_0^t\!\int_{\mathbb{U}_1} g_2(r,X_{r-},\mathscr{L}_{X_r},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u) + \int_0^t\!\int_{\mathbb{U}_1} g_3(r,X_r,\mathscr{L}_{X_r},u)\,\nu(\mathrm{d}u)\,\mathrm{d}r,\quad t\geq 0.\qquad(9)$$
On the other hand, integrating (8) from $0$ to $t$, we get
$$V(t,X_t,\mathscr{L}_{X_t}) - V(0,X_0,\mathscr{L}_{X_0}) = \int_0^t(\partial_r+\mathbb{L}_{b,\sigma,f})V(r,X_r,\mathscr{L}_{X_r})\,\mathrm{d}r + \int_0^t\langle(\sigma^*\partial_x V)(r,X_r,\mathscr{L}_{X_r}),\mathrm{d}B_r\rangle + \int_0^t\!\int_{\mathbb{U}_1}\Big[V(r,X_{r-}+f(r,X_{r-},\mathscr{L}_{X_r},u),\mathscr{L}_{X_r})-V(r,X_{r-},\mathscr{L}_{X_r})\Big]\,\tilde N(\mathrm{d}r,\mathrm{d}u).\qquad(10)$$
Thus $V(t,X_t,\mathscr{L}_{X_t})-V(0,X_0,\mathscr{L}_{X_0})$ has two expressions. Since $V(t,X_t,\mathscr{L}_{X_t})-V(0,X_0,\mathscr{L}_{X_0})$ is a semimartingale, by the uniqueness of the semimartingale decomposition it holds that
$$\begin{cases}
g_0(r,X_r,\mathscr{L}_{X_r}) + \int_{\mathbb{U}_1} g_3(r,X_r,\mathscr{L}_{X_r},u)\,\nu(\mathrm{d}u) = (\partial_r+\mathbb{L}_{b,\sigma,f})V(r,X_r,\mathscr{L}_{X_r}),\\
g_1(r,X_r,\mathscr{L}_{X_r}) = (\sigma^*\partial_x V)(r,X_r,\mathscr{L}_{X_r}),\\
g_2(r,X_r,\mathscr{L}_{X_r},u) = V(r,X_r+f(r,X_r,\mathscr{L}_{X_r},u),\mathscr{L}_{X_r}) - V(r,X_r,\mathscr{L}_{X_r}),
\end{cases}\quad r\in[0,T].$$
Then for any $s\in[0,T]$ and $\mu=\mathscr{L}_{X_s}\in\mathcal{M}_2(\mathbb{R}^d)$, we obtain
$$\begin{cases}
g_0(s,X_s,\mu) + \int_{\mathbb{U}_1} g_3(s,X_s,\mu,u)\,\nu(\mathrm{d}u) = (\partial_s+\mathbb{L}_{b,\sigma,f})V(s,X_s,\mu),\\
g_1(s,X_s,\mu) = (\sigma^*\partial_x V)(s,X_s,\mu),\\
g_2(s,X_s,\mu,u) = V(s,X_s+f(s,X_s,\mu,u),\mu) - V(s,X_s,\mu),
\end{cases}$$
and hence
$$\begin{cases}
g_0(s,x,\mu) + \int_{\mathbb{U}_1} g_3(s,x,\mu,u)\,\nu(\mathrm{d}u) = (\partial_s+\mathbb{L}_{b,\sigma,f})V(s,x,\mu),\\
g_1(s,x,\mu) = (\sigma^*\partial_x V)(s,x,\mu),\\
g_2(s,x,\mu,u) = V(s,x+f(s,x,\mu,u),\mu) - V(s,x,\mu),
\end{cases}\quad x\in\mathrm{supp}(\mu).$$
To show (7), we replace $\mu$ by $\mu_n=\mu*\mathcal{N}_d(0,\frac1n I_d)$ in the above equality, where $\mathcal{N}_d(0,\frac1n I_d)$ denotes the $d$-dimensional Gaussian distribution with mean $0$ and covariance matrix $\frac1n I_d$. Note that $\mathrm{supp}(\mathcal{N}_d(0,\frac1n I_d))=\mathbb{R}^d$, $\mathrm{supp}(\mu_n)=\mathbb{R}^d$ and $\mu_n\to\mu$ as $n\to\infty$, which together with the continuity of all the related functions in $\mu$ yields (7). The proof is complete. $\Box$

3.2. Some special cases.
In this subsection, we analyse some special cases.

First, we exhibit a solution of the partial integro-differential equation in (7). To this end, we introduce two McKean-Vlasov stochastic differential equations with jumps on $\mathbb{R}^d$: for any $\xi\in L^2(\Omega,\mathscr{F}_s,\mathbb{P};\mathbb{R}^d)$ and $x\in\mathbb{R}^d$,
$$X_t^{s,\xi} = \xi + \int_s^t b(r,X_r^{s,\xi},\mathscr{L}_{X_r^{s,\xi}})\,\mathrm{d}r + \int_s^t \sigma(r,X_r^{s,\xi},\mathscr{L}_{X_r^{s,\xi}})\,\mathrm{d}B_r + \int_s^t\!\int_{\mathbb{U}_1} f(r,X_{r-}^{s,\xi},\mathscr{L}_{X_r^{s,\xi}},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u),\quad 0\leq s<t\leq T,\qquad(11)$$
$$X_t^{s,x,\xi} = x + \int_s^t b(r,X_r^{s,x,\xi},\mathscr{L}_{X_r^{s,\xi}})\,\mathrm{d}r + \int_s^t \sigma(r,X_r^{s,x,\xi},\mathscr{L}_{X_r^{s,\xi}})\,\mathrm{d}B_r + \int_s^t\!\int_{\mathbb{U}_1} f(r,X_{r-}^{s,x,\xi},\mathscr{L}_{X_r^{s,\xi}},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u),\quad 0\leq s<t\leq T,\qquad(12)$$
and a backward McKean-Vlasov stochastic differential equation
$$\begin{cases}
\mathrm{d}Y_t^{s,x,\xi} = g_0(t,X_t^{s,x,\xi},\mathscr{L}_{X_t^{s,\xi}})\,\mathrm{d}t + \int_{\mathbb{U}_1} g_3(t,X_t^{s,x,\xi},\mathscr{L}_{X_t^{s,\xi}},u)\,\nu(\mathrm{d}u)\,\mathrm{d}t,\\
Y_T^{s,x,\xi} = \Phi(X_T^{s,x,\xi},\mathscr{L}_{X_T^{s,\xi}}).
\end{cases}\qquad(13)$$
If $b,\sigma$ are bounded, $|f(t,x,\mu,u)|\leq C\|u\|_{\mathbb{U}}$ for $(t,x,\mu,u)\in[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1$ and some constant $C>0$, and $L_1(\cdot),L_2(\cdot)$ are two constants, then under ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$), based on [5, Theorem 3.1, Page 7], the above equations (11)-(12) have unique solutions $X_t^{s,\xi}$, $X_t^{s,x,\xi}$, respectively. If we further assume that $g_0$, $\int_{\mathbb{U}_1} g_3(\cdot,\cdot,\cdot,u)\,\nu(\mathrm{d}u)$ and $\Phi$ are bounded, the above equation (13) also has a unique solution. For $\Phi\in C_b^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$, $g_0(t,\cdot,\cdot)\in C_b^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$, $\int_{\mathbb{U}_1} g_3(t,\cdot,\cdot,u)\,\nu(\mathrm{d}u)\in C_b^{2,2}(\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ and $g_0(\cdot,x,\mu)\in C([0,T])$, $\int_{\mathbb{U}_1} g_3(\cdot,x,\mu,u)\,\nu(\mathrm{d}u)\in C([0,T])$, set
$$V(t,x,\mu) := \mathbb{E}\bigg[\Phi(X_T^{t,x,\xi},\mathscr{L}_{X_T^{t,\xi}}) - \int_t^T g_0(r,X_r^{t,x,\xi},\mathscr{L}_{X_r^{t,\xi}})\,\mathrm{d}r - \int_t^T\!\int_{\mathbb{U}_1} g_3(r,X_r^{t,x,\xi},\mathscr{L}_{X_r^{t,\xi}},u)\,\nu(\mathrm{d}u)\,\mathrm{d}r\bigg],\qquad \mu=\mathscr{L}_\xi,\qquad(14)$$
and then by [9, Theorem 9.2, Page 3159], $V(t,x,\mu)\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ is the unique solution of the following nonlocal partial integro-differential equation:
$$\begin{cases}
(\partial_t+\mathbb{L}_{b,\sigma,f})V(t,x,\mu) = g_0(t,x,\mu) + \int_{\mathbb{U}_1} g_3(t,x,\mu,u)\,\nu(\mathrm{d}u),\\
V(T,x,\mu) = \Phi(x,\mu),
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d).$$
Thus, combining Theorem 3.2 with [9, Theorem 9.2, Page 3159], one obtains the following result.
Corollary 3.3.
Assume that ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$) hold, $(b(t,x,\mu),\sigma(t,x,\mu))\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R}^d\times\mathbb{R}^{d\times m})$, and all the derivatives of $f(t,x,\mu,u)$ in $t$ up to order $1$ and in $x,\mu$ up to order $2$ are bounded by $L\|u\|_{\mathbb{U}}$ and Lipschitz continuous with Lipschitz constant $L\|u\|_{\mathbb{U}}$. Then, for $V(t,x,\mu)$ defined in (14), $g_1\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R}^m)$ and $g_2(\cdot,\cdot,\cdot,u)\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R})$, $F_{s,t}$ is path independent in the sense of (4) if and only if $V,g_1,g_2$ satisfy
$$\begin{cases}
(\sigma^*\partial_x V)(t,x,\mu) = g_1(t,x,\mu),\\
V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu) = g_2(t,x,\mu,u),
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1.$$

In the following we analyse some special cases of $F_{s,t}$.

If $g_0=0$, $g_3=0$, $F_{s,t}$ reduces to
$$F_{s,t}^{g_1,g_2} := \int_s^t\langle g_1(r,X_r,\mathscr{L}_{X_r}),\mathrm{d}B_r\rangle + \int_s^t\!\int_{\mathbb{U}_1} g_2(r,X_{r-},\mathscr{L}_{X_r},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u).$$
Following the above deduction, we get that for $V(t,x,\mu):=\mathbb{E}[\Phi(X_T^{t,x,\xi},\mathscr{L}_{X_T^{t,\xi}})]$, $g_1\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R}^m)$ and $g_2(\cdot,\cdot,\cdot,u)\in C([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d);\mathbb{R})$, $F_{s,t}^{g_1,g_2}$ is path independent in the sense of (4) if and only if $V,g_1,g_2$ satisfy
$$\begin{cases}
(\sigma^*\partial_x V)(t,x,\mu) = g_1(t,x,\mu),\\
V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu) = g_2(t,x,\mu,u),
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1.$$
This result can also be obtained from Theorem 3.2 and [5, Theorem 7.3, Page 47].

If $g_0=\frac{\beta}{2}|g_1|^2$, $\beta\neq 0$, $F_{s,t}$ reduces to
$$F_{s,t}^{g_1,g_2,g_3} := \int_s^t\frac{\beta}{2}|g_1|^2(r,X_r,\mathscr{L}_{X_r})\,\mathrm{d}r + \int_s^t\langle g_1(r,X_r,\mathscr{L}_{X_r}),\mathrm{d}B_r\rangle + \int_s^t\!\int_{\mathbb{U}_1} g_2(r,X_{r-},\mathscr{L}_{X_r},u)\,\tilde N(\mathrm{d}r,\mathrm{d}u) + \int_s^t\!\int_{\mathbb{U}_1} g_3(r,X_r,\mathscr{L}_{X_r},u)\,\nu(\mathrm{d}u)\,\mathrm{d}r,\quad 0\leq s<t\leq T.$$
Thus, by Theorem 3.2, $F_{s,t}^{g_1,g_2,g_3}$ is path independent in the sense of (4) if and only if
$$\begin{cases}
(\partial_t+\mathbb{L}_{b,\sigma,f})V(t,x,\mu) = \frac{\beta}{2}|\sigma^*\partial_x V|^2(t,x,\mu) + \int_{\mathbb{U}_1} g_3(t,x,\mu,u)\,\nu(\mathrm{d}u),\\
(\sigma^*\partial_x V)(t,x,\mu) = g_1(t,x,\mu),\\
V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu) = g_2(t,x,\mu,u),
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1.\qquad(15)$$

3.3. The relationship between Theorem 3.2 and some known results.
In this subsection, we discuss the relationship between Theorem 3.2 and [14, Theorem 2.6], [16, Theorem 2.2].

Let $\lambda:[0,\infty)\times\mathbb{U}_1\to(0,1]$ be a measurable function. Here we require that $f(t,x,\mu,u)=0$ whenever $\lambda(t,u)=1$. Then there exists an integer-valued $(\mathscr{F}_t)_{t\geq 0}$-Poisson random measure $N_\lambda(\mathrm{d}t,\mathrm{d}u)$ on $(\Omega,\mathscr{F},\mathbb{P};(\mathscr{F}_t)_{t\geq 0})$ with intensity $\mathbb{E}(N_\lambda(\mathrm{d}t,\mathrm{d}u))=\lambda(t,u)\,\mathrm{d}t\,\nu(\mathrm{d}u)$. Denote
$$\tilde N_\lambda(\mathrm{d}t,\mathrm{d}u) := N_\lambda(\mathrm{d}t,\mathrm{d}u)-\lambda(t,u)\,\mathrm{d}t\,\nu(\mathrm{d}u),$$
that is, $\tilde N_\lambda(\mathrm{d}t,\mathrm{d}u)$ is the compensated $(\mathscr{F}_t)_{t\geq 0}$-predictable martingale measure of $N_\lambda(\mathrm{d}t,\mathrm{d}u)$. Moreover, $\tilde N_\lambda(\mathrm{d}t,\mathrm{d}u)$ is independent of $B_\cdot$. We replace $\tilde N(\mathrm{d}t,\mathrm{d}u)$ by $\tilde N_\lambda(\mathrm{d}t,\mathrm{d}u)$ in Eq.(1) and denote the solution of the new equation by $X_t^\lambda$. Besides, we assume that there exists a measurable function $\tilde b:[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\to\mathbb{R}^m$ such that $b=\sigma\tilde b$. For the convenience of the following deduction, we also assume:
$$\mathbb{E}\Big[\exp\Big\{\frac12\int_0^T\big|\tilde b(s,X_s^\lambda,\mathscr{L}_{X_s^\lambda})\big|^2\,\mathrm{d}s\Big\}\Big]<\infty,\qquad \int_0^T\!\int_{\mathbb{U}_1}\Big(\frac{1-\lambda(s,u)}{\lambda(s,u)}\Big)^2\lambda(s,u)\,\nu(\mathrm{d}u)\,\mathrm{d}s<\infty.$$
So, set
$$\Gamma_t := \exp\bigg\{-\int_0^t\langle\tilde b(s,X_s^\lambda,\mathscr{L}_{X_s^\lambda}),\mathrm{d}B_s\rangle - \frac12\int_0^t\big|\tilde b(s,X_s^\lambda,\mathscr{L}_{X_s^\lambda})\big|^2\,\mathrm{d}s - \int_0^t\!\int_{\mathbb{U}_1}\log\lambda(s,u)\,\tilde N_\lambda(\mathrm{d}s,\mathrm{d}u) - \int_0^t\!\int_{\mathbb{U}_1}\Big(\big(\log\lambda(s,u)\big)\lambda(s,u)+\big(1-\lambda(s,u)\big)\Big)\,\nu(\mathrm{d}u)\,\mathrm{d}s\bigg\},$$
and then, by the same deduction as in [14], $\Gamma_t$ is an exponential martingale. Define a new probability measure $\mathbb{Q}$ by
$$\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}} = \Gamma_T.$$
Thus, under $\mathbb{Q}$,
$$\tilde B_t := B_t + \int_0^t\tilde b(s,X_s^\lambda,\mathscr{L}_{X_s^\lambda})\,\mathrm{d}s$$
is an $m$-dimensional Brownian motion and
$$\bar N(\mathrm{d}t,\mathrm{d}u) = N_\lambda(\mathrm{d}t,\mathrm{d}u)-\mathrm{d}t\,\nu(\mathrm{d}u)$$
is the compensated $(\mathscr{F}_t)_{t\geq 0}$-predictable martingale measure of $N_\lambda(\mathrm{d}t,\mathrm{d}u)$. Moreover, Eq.(1) becomes
$$X_t^\lambda = X_s^\lambda + \int_s^t\sigma(r,X_r^\lambda,\mathscr{L}_{X_r^\lambda})\,\mathrm{d}\tilde B_r + \int_s^t\!\int_{\mathbb{U}_1} f(r,X_{r-}^\lambda,\mathscr{L}_{X_r^\lambda},u)\,\bar N(\mathrm{d}r,\mathrm{d}u).$$
That is, $X_t^\lambda$ is a local martingale. Now, take
$$g_0(t,x,\mu) = \frac12\big|\tilde b(t,x,\mu)\big|^2,\qquad g_1(t,x,\mu) = \tilde b(t,x,\mu),$$
$$g_2(t,x,\mu,u) = \log\lambda(t,u),\qquad g_3(t,x,\mu,u) = \big(\log\lambda(t,u)\big)\lambda(t,u)+\big(1-\lambda(t,u)\big).$$
Then, by Theorem 3.2, we know that $F_{0,t}^\lambda := -\log\Gamma_t$ is path independent in the sense of (4) if and only if
$$\begin{cases}
(\partial_t+\mathbb{L}_{b,\sigma,f})V(t,x,\mu) = \frac12|(\sigma^*\partial_x V)(t,x,\mu)|^2 + \int_{\mathbb{U}_1}\Big(\big(\log\lambda(t,u)\big)\lambda(t,u)+\big(1-\lambda(t,u)\big)\Big)\,\nu(\mathrm{d}u),\\
(\sigma^*\partial_x V)(t,x,\mu) = \tilde b(t,x,\mu),\\
V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu) = \log\lambda(t,u),
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1.\qquad(16)$$
This equation is exactly Eq.(15) with $\beta=1$. We then rewrite Eq.(16) as
$$\begin{cases}
\partial_t V(t,x,\mu) = \mathbb{L}_{\sigma,f}V(t,x,\mu),\\
b(t,x,\mu) = (\sigma\sigma^*\partial_x V)(t,x,\mu),\\
\lambda(t,u) = \exp\big\{V\big(t,x+f(t,x,\mu,u),\mu\big)-V(t,x,\mu)\big\},
\end{cases}\quad t\in[0,T],\ x\in\mathbb{R}^d,\ \mu\in\mathcal{M}_2(\mathbb{R}^d),\ u\in\mathbb{U}_1,$$
where
$$\begin{aligned}
\mathbb{L}_{\sigma,f}V(t,x,\mu) :=\ & -\frac12\,\mathrm{tr}\big((\sigma\sigma^*)\partial_x^2 V\big)(t,x,\mu) - \frac12\big|(\sigma^*\partial_x V)(t,x,\mu)\big|^2\\
& - \int_{\mathbb{R}^d}\langle(\sigma\sigma^*\partial_y V)(t,y,\mu),(\partial_\mu V)(t,x,\mu)(y)\rangle\,\mu(\mathrm{d}y)\\
& - \frac12\int_{\mathbb{R}^d}\mathrm{tr}\big((\sigma\sigma^*)(t,y,\mu)\,\partial_y\partial_\mu V(t,x,\mu)(y)\big)\,\mu(\mathrm{d}y)\\
& - \int_{\mathbb{U}_1}\Big[e^{V(t,x+f(t,x,\mu,u),\mu)-V(t,x,\mu)} - 1 - \langle f(t,x,\mu,u),\partial_x V(t,x,\mu)\rangle\,e^{V(t,x+f(t,x,\mu,u),\mu)-V(t,x,\mu)}\Big]\,\nu(\mathrm{d}u)\\
& - \int_{\mathbb{U}_1}\int_0^1\int_{\mathbb{R}^d}\Big\langle\partial_\mu V(t,x,\mu)\big(y+\eta f(t,y,\mu,u)\big)-\partial_\mu V(t,x,\mu)(y),\,f(t,y,\mu,u)\Big\rangle\,\mu(\mathrm{d}y)\,\mathrm{d}\eta\,\nu(\mathrm{d}u).
\end{aligned}$$
If $b,\sigma,f,V$ are independent of $\mu$, this is exactly [14, Theorem 2.6]. Moreover, if $f=0$, this is exactly [16, Theorem 2.2]. Therefore, our result is indeed more general.

4. Appendix
The proof of Proposition 3.1.
Set $\mu_t := \mathscr{L}_{X_t}$, so that $h(t,X_t,\mathscr{L}_{X_t})=h(t,X_t,\mu_t)$. Define $\bar h(t,x):=h(t,x,\mu_t)$; then $\bar h(t,X_t)=h(t,X_t,\mu_t)$. Moreover, since $h\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$, $\bar h$ is $C^2$ in $x$. However, the differentiability of $\bar h$ in $t$ is not obvious: it comes from two parts, namely $h(t,x,\mu)$ in $t$ for fixed $x,\mu$, and $h(s,x,\mu_t)$ in $t$ for fixed $s,x$. Therefore, to apply the classical Itô formula to $\bar h(t,X_t)$, in view of $h\in C_b^{1,2,2}([0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d))$ we only need to study the second part.

Step 1.
Assume that $b,\sigma$ are bounded and $|f(t,x,\mu,u)|\leq C\|u\|_{\mathbb{U}}$ for $(t,x,\mu,u)\in[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1$ and some constant $C>0$. We study the differentiability of $h(s,x,\mu_t)$ in $t$.

Here, we follow the method in [3]. For convenience of exposition, we set $H(\mu_t):=h(s,x,\mu_t)$. For any positive integer $K$ and $x_1,x_2,\cdots,x_K\in\mathbb{R}^d$, set
$$H^K(x_1,x_2,\cdots,x_K) := H\Big(\frac1K\sum_{l=1}^K\delta_{x_l}\Big),\qquad(17)$$
so that $H^K$ is a function on $\mathbb{R}^{d\times K}$. Moreover, by [3, Proposition 3.1, Page 15], $H^K$ is $C^2$ on $\mathbb{R}^{d\times K}$ with
$$\partial_{x_i}H^K(x_1,x_2,\cdots,x_K) = \frac1K\,\partial_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{x_l}\Big)(x_i),$$
$$\partial^2_{x_i x_j}H^K(x_1,x_2,\cdots,x_K) = \frac1K\,\partial_y\partial_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{x_l}\Big)(x_i)\,\delta_{i,j} + \frac1{K^2}\,\partial^2_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{x_l}\Big)(x_i,x_j),\quad i,j=1,2,\cdots,K,\qquad(18)$$
where $\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ if $i\neq j$. Besides, we take $K$ independent copies $X_t^l$, $l=1,2,\cdots,K$, of $X_t$; that is,
$$\mathrm{d}X_t^l = b(t,X_t^l,\mathscr{L}_{X_t^l})\,\mathrm{d}t + \sigma(t,X_t^l,\mathscr{L}_{X_t^l})\,\mathrm{d}B_t^l + \int_{\mathbb{U}_1} f(t,X_{t-}^l,\mathscr{L}_{X_t^l},u)\,\tilde N^l(\mathrm{d}t,\mathrm{d}u),\quad l=1,2,\cdots,K,$$
where $B^l,N^l$, $l=1,2,\cdots,K$, are mutually independent and have the same distributions as $B$ and $N$, respectively.
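The projection identities (17)-(18) are easy to verify numerically for a concrete functional. Below we take $H(\mu)=\mu(\phi)$ with $\phi(x)=x^2$ on $\mathbb{R}$ (our example, not the paper's), for which $\partial_\mu H(\mu)(y)=\phi'(y)=2y$, and check the first identity in (18) by central differences:

```python
import numpy as np

def H_K(xs):
    # Lift of H(mu) = mu(phi) with phi(x) = x**2 to the empirical measure
    # (1/K) sum_l delta_{x_l}: H^K(x_1, ..., x_K) = (1/K) sum_l x_l**2.
    return float(np.mean(xs ** 2))

xs = np.array([0.3, -1.2, 2.0, 0.7])
K, eps = len(xs), 1e-6
numeric = []
for i in range(K):
    e = np.zeros(K)
    e[i] = eps
    numeric.append((H_K(xs + e) - H_K(xs - e)) / (2 * eps))  # d H^K / d x_i
exact = 2.0 * xs / K   # (1/K) * d_mu H(empirical)(x_i) = (1/K) * 2 x_i
# numeric and exact agree; for a quadratic phi the central difference is exact.
```

The factor $1/K$ in (18) is the point: perturbing a single atom of the empirical measure changes $H$ only at order $1/K$, which is why the second-order term $\partial^2_\mu H$ enters at order $1/K^2$ and vanishes in the limit $K\to\infty$.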
We then apply the Itô formula to $H^K(X_t^1,X_t^2,\cdots,X_t^K)$ and take the expectation on both sides to obtain that for $0\leq t<t+v\leq T$,
$$\begin{aligned}
&\mathbb{E}H^K(X_{t+v}^1,\cdots,X_{t+v}^K)\\
&= \mathbb{E}H^K(X_t^1,\cdots,X_t^K) + \sum_{i=1}^K\int_t^{t+v}\mathbb{E}\,\partial_{x_i}H^K(X_s^1,\cdots,X_s^K)\,b(s,X_s^i,\mathscr{L}_{X_s^i})\,\mathrm{d}s\\
&\quad + \frac12\sum_{i=1}^K\int_t^{t+v}\mathbb{E}\,\partial^2_{x_i x_i}H^K(X_s^1,\cdots,X_s^K)\,\sigma\sigma^*(s,X_s^i,\mathscr{L}_{X_s^i})\,\mathrm{d}s\\
&\quad + \int_t^{t+v}\!\int_{\mathbb{U}_1}\mathbb{E}\Big[H^K\big(X_s^1+f(s,X_s^1,\mathscr{L}_{X_s^1},u),X_s^2,\cdots,X_s^K\big) - H^K(X_s^1,\cdots,X_s^K) - \partial_{x_1}H^K(X_s^1,\cdots,X_s^K)\,f(s,X_s^1,\mathscr{L}_{X_s^1},u)\Big]\,\nu(\mathrm{d}u)\,\mathrm{d}s\\
&\quad + \cdots + \int_t^{t+v}\!\int_{\mathbb{U}_1}\mathbb{E}\Big[H^K\big(X_s^1,\cdots,X_s^K+f(s,X_s^K,\mathscr{L}_{X_s^K},u)\big) - H^K(X_s^1,\cdots,X_s^K) - \partial_{x_K}H^K(X_s^1,\cdots,X_s^K)\,f(s,X_s^K,\mathscr{L}_{X_s^K},u)\Big]\,\nu(\mathrm{d}u)\,\mathrm{d}s\\
&= \mathbb{E}H^K(X_t^1,\cdots,X_t^K) + K\int_t^{t+v}\mathbb{E}\,\partial_{x_1}H^K(X_s^1,\cdots,X_s^K)\,b(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + \frac{K}{2}\int_t^{t+v}\mathbb{E}\,\partial^2_{x_1 x_1}H^K(X_s^1,\cdots,X_s^K)\,\sigma\sigma^*(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + K\int_t^{t+v}\!\int_{\mathbb{U}_1}\int_0^1\mathbb{E}\Big[\Big(\partial_{x_1}H^K\big(X_s^1+\eta f(s,X_s^1,\mathscr{L}_{X_s^1},u),X_s^2,\cdots,X_s^K\big) - \partial_{x_1}H^K(X_s^1,\cdots,X_s^K)\Big)f(s,X_s^1,\mathscr{L}_{X_s^1},u)\Big]\,\mathrm{d}\eta\,\nu(\mathrm{d}u)\,\mathrm{d}s,
\end{aligned}$$
where the fact that $X_t^l$, $l=1,2,\cdots,K$, have the same distribution is used in the second equality. Inserting (17)-(18) into the above equality, we get
$$\begin{aligned}
\mathbb{E}H\Big(\frac1K\sum_{l=1}^K\delta_{X_{t+v}^l}\Big) &= \mathbb{E}H\Big(\frac1K\sum_{l=1}^K\delta_{X_t^l}\Big) + \int_t^{t+v}\mathbb{E}\,\partial_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{X_s^l}\Big)(X_s^1)\,b(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + \frac12\int_t^{t+v}\mathbb{E}\,\partial_y\partial_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{X_s^l}\Big)(X_s^1)\,\sigma\sigma^*(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + \frac1{2K}\int_t^{t+v}\mathbb{E}\,\partial^2_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{X_s^l}\Big)(X_s^1,X_s^1)\,\sigma\sigma^*(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + \int_t^{t+v}\!\int_{\mathbb{U}_1}\int_0^1\mathbb{E}\Big[\Big(\partial_\mu H\Big(\frac1K\delta_{X_s^1+\eta f(s,X_s^1,\mathscr{L}_{X_s^1},u)}+\frac1K\sum_{l=2}^K\delta_{X_s^l}\Big)\big(X_s^1+\eta f(s,X_s^1,\mathscr{L}_{X_s^1},u)\big) - \partial_\mu H\Big(\frac1K\sum_{l=1}^K\delta_{X_s^l}\Big)(X_s^1)\Big)f(s,X_s^1,\mathscr{L}_{X_s^1},u)\Big]\,\mathrm{d}\eta\,\nu(\mathrm{d}u)\,\mathrm{d}s.
\end{aligned}$$
Next, we take the limit on both sides of the above equality. Note that
$$\lim_{K\to\infty}\mathbb{E}\Big[\sup_{0\leq t\leq T}\rho^2\Big(\frac1K\sum_{l=1}^K\delta_{X_t^l},\mu_t\Big)\Big]=0,$$
which follows from [6, Section 5]. Then, as $K\to\infty$, by the continuity and boundedness of $H$, $\partial_\mu H$, $\partial_y\partial_\mu H$, and the boundedness of $\partial^2_\mu H$, $b$, $\sigma$, it follows from the dominated convergence theorem that
$$\begin{aligned}
H(\mu_{t+v}) &= H(\mu_t) + \int_t^{t+v}\mathbb{E}\,\partial_\mu H(\mu_s)(X_s^1)\,b(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s + \frac12\int_t^{t+v}\mathbb{E}\,\partial_y\partial_\mu H(\mu_s)(X_s^1)\,\sigma\sigma^*(s,X_s^1,\mathscr{L}_{X_s^1})\,\mathrm{d}s\\
&\quad + \int_t^{t+v}\!\int_{\mathbb{U}_1}\int_0^1\mathbb{E}\Big[\Big(\partial_\mu H(\mu_s)\big(X_s^1+\eta f(s,X_s^1,\mathscr{L}_{X_s^1},u)\big) - \partial_\mu H(\mu_s)(X_s^1)\Big)f(s,X_s^1,\mathscr{L}_{X_s^1},u)\Big]\,\mathrm{d}\eta\,\nu(\mathrm{d}u)\,\mathrm{d}s.
\end{aligned}$$
Thus, by simple calculus we obtain
$$\begin{aligned}
\partial_t H(\mu_t) &= \int_{\mathbb{R}^d}\langle b(t,y,\mu_t),\partial_\mu H(\mu_t)(y)\rangle\,\mu_t(\mathrm{d}y) + \frac12\int_{\mathbb{R}^d}\mathrm{tr}\big((\sigma\sigma^*)(t,y,\mu_t)\,\partial_y\partial_\mu H(\mu_t)(y)\big)\,\mu_t(\mathrm{d}y)\\
&\quad + \int_{\mathbb{U}_1}\int_0^1\int_{\mathbb{R}^d}\Big\langle\partial_\mu H(\mu_t)\big(y+\eta f(t,y,\mu_t,u)\big)-\partial_\mu H(\mu_t)(y),\,f(t,y,\mu_t,u)\Big\rangle\,\mu_t(\mathrm{d}y)\,\mathrm{d}\eta\,\nu(\mathrm{d}u).
\end{aligned}\qquad(19)$$

Step 2.
Assume that ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$) hold. We prove the differentiability of $h(s,x,\mu_t)$ in $t$.

First of all, we choose smooth functions $\chi_n:\mathbb{R}^d\to\mathbb{R}^d$ satisfying $\chi_n(x)=x$ for $|x|\leq n$ and $\chi_n(x)=0$ for $|x|>2n$, such that for all $x\in\mathbb{R}^d$,
$$|\chi_n(x)|\leq|x|,\qquad \|\partial\chi_n(x)\|\leq C,\qquad(20)$$
where the positive constant $C$ is independent of $n$. Set
$$b^{(n)}(t,x,\mu) := b(t,\chi_n(x),\mu),\qquad \sigma^{(n)}(t,x,\mu) := \sigma(t,\chi_n(x),\mu),\qquad f^{(n)}(t,x,\mu,u) := f(t,\chi_n(x),\mu,u),$$
so that $b^{(n)}(t,x,\mu)\to b(t,x,\mu)$, $\sigma^{(n)}(t,x,\mu)\to\sigma(t,x,\mu)$, $f^{(n)}(t,x,\mu,u)\to f(t,x,\mu,u)$ as $n\to\infty$. Moreover, by Remark 2.1 we know that $b^{(n)},\sigma^{(n)}$ are bounded, $|f^{(n)}(t,x,\mu,u)|\leq C\|u\|_{\mathbb{U}}$ for $(t,x,\mu,u)\in[0,T]\times\mathbb{R}^d\times\mathcal{M}_2(\mathbb{R}^d)\times\mathbb{U}_1$, and $b^{(n)},\sigma^{(n)},f^{(n)}$ satisfy ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$). Thus, the equation
$$\mathrm{d}X_t^{(n)} = b^{(n)}(t,X_t^{(n)},\mathscr{L}_{X_t^{(n)}})\,\mathrm{d}t + \sigma^{(n)}(t,X_t^{(n)},\mathscr{L}_{X_t^{(n)}})\,\mathrm{d}B_t + \int_{\mathbb{U}_1} f^{(n)}(t,X_{t-}^{(n)},\mathscr{L}_{X_t^{(n)}},u)\,\tilde N(\mathrm{d}t,\mathrm{d}u)$$
has a unique solution, denoted by $X_\cdot^{(n)}$, and we set $\mu_t^{(n)}:=\mathscr{L}_{X_t^{(n)}}$. Then, by Step 1, it holds that for $0\leq t<t+v\leq T$,
$$\begin{aligned}
H(\mu_{t+v}^{(n)}) - H(\mu_t^{(n)}) &= \int_t^{t+v}\!\int_{\mathbb{R}^d}\langle b^{(n)}(r,y,\mu_r^{(n)}),\partial_\mu H(\mu_r^{(n)})(y)\rangle\,\mu_r^{(n)}(\mathrm{d}y)\,\mathrm{d}r\\
&\quad + \frac12\int_t^{t+v}\!\int_{\mathbb{R}^d}\mathrm{tr}\big((\sigma^{(n)}\sigma^{(n)*})(r,y,\mu_r^{(n)})\,\partial_y\partial_\mu H(\mu_r^{(n)})(y)\big)\,\mu_r^{(n)}(\mathrm{d}y)\,\mathrm{d}r\\
&\quad + \int_t^{t+v}\!\int_{\mathbb{U}_1}\int_0^1\int_{\mathbb{R}^d}\Big\langle\partial_\mu H(\mu_r^{(n)})\big(y+\eta f^{(n)}(r,y,\mu_r^{(n)},u)\big)-\partial_\mu H(\mu_r^{(n)})(y),\,f^{(n)}(r,y,\mu_r^{(n)},u)\Big\rangle\,\mu_r^{(n)}(\mathrm{d}y)\,\mathrm{d}\eta\,\nu(\mathrm{d}u)\,\mathrm{d}r.
\end{aligned}\qquad(21)$$
Next, we study the limit of $\mu_t^{(n)}$ as $n\to\infty$ for any $t\in[0,T]$.
Applying the Itô formula to $|X_t^{(n)}-X_t|^2$ and taking the expectation on both sides, one obtains
$$\begin{aligned}
\mathbb{E}|X_t^{(n)}-X_t|^2 &= 2\mathbb{E}\int_0^t\langle b^{(n)}(r,X_r^{(n)},\mu_r^{(n)})-b(r,X_r,\mu_r),\,X_r^{(n)}-X_r\rangle\,\mathrm{d}r\\
&\quad + \mathbb{E}\int_0^t\mathrm{tr}\Big(\big(\sigma^{(n)}(r,X_r^{(n)},\mu_r^{(n)})-\sigma(r,X_r,\mu_r)\big)\big(\sigma^{(n)*}(r,X_r^{(n)},\mu_r^{(n)})-\sigma^*(r,X_r,\mu_r)\big)\Big)\,\mathrm{d}r\\
&\quad + \mathbb{E}\int_0^t\!\int_{\mathbb{U}_1}|f^{(n)}(r,X_r^{(n)},\mu_r^{(n)},u)-f(r,X_r,\mu_r,u)|^2\,\nu(\mathrm{d}u)\,\mathrm{d}r\\
&\leq C\,\mathbb{E}\int_0^t|X_r^{(n)}-X_r|^2\,\mathrm{d}r + C\,\mathbb{E}\int_0^t\rho^2(\mu_r^{(n)},\mu_r)\,\mathrm{d}r + C\,\mathbb{E}\int_0^t|\chi_n(X_r)-X_r|^2\,\mathrm{d}r\\
&\leq C\int_0^t\mathbb{E}|X_r^{(n)}-X_r|^2\,\mathrm{d}r + C\,\mathbb{E}\int_0^t|\chi_n(X_r)-X_r|^2\,\mathrm{d}r,
\end{aligned}$$
where we used ($\mathbf{H}_{b,\sigma}$)-($\mathbf{H}_f$), (20) and $\rho^2(\mu_r^{(n)},\mu_r)\leq\mathbb{E}|X_r^{(n)}-X_r|^2$ in the first and second inequalities, respectively. Thus, the Gronwall inequality gives
$$\mathbb{E}|X_t^{(n)}-X_t|^2 \leq C\,\mathbb{E}\int_0^T|\chi_n(X_r)-X_r|^2\,\mathrm{d}r.$$
The fact that $|\chi_n(x)|\leq|x|$ for $x\in\mathbb{R}^d$, together with (2) and the dominated convergence theorem, yields
$$\lim_{n\to\infty}\mathbb{E}\int_0^T|\chi_n(X_r)-X_r|^2\,\mathrm{d}r=0.$$
So we get
$$\lim_{n\to\infty}\sup_{t\in[0,T]}\rho^2(\mu_t^{(n)},\mu_t) \leq \lim_{n\to\infty}\sup_{t\in[0,T]}\mathbb{E}|X_t^{(n)}-X_t|^2 = 0.$$
Taking the limit on both sides of (21), by Remark 2.1 and the dominated convergence theorem, one still obtains (19).
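The truncation $\chi_n$ of Step 2 can be pictured with a piecewise-linear stand-in; the proof needs a smooth $\chi_n$, and the cut-off levels $n$ and $2n$ below are our illustrative choice, but smoothing the corners does not affect the two bounds in (20):

```python
import numpy as np

def chi_n(x, n):
    # Identity on |x| <= n, zero for |x| >= 2n, linear interpolation in
    # between, so that |chi_n(x)| <= |x| and the slope stays bounded
    # uniformly in n (here by 2).
    r = np.abs(x)
    return np.where(r <= n, x, np.where(r >= 2 * n, 0.0, x * (2.0 - r / n)))

grid = np.linspace(-5.0, 5.0, 1001)
assert np.all(np.abs(chi_n(grid, 2)) <= np.abs(grid) + 1e-12)
vals = chi_n(np.array([1.0, 3.0, 5.0]), 2)   # values 1.0, 1.5, 0.0
```

The bound $|\chi_n(x)|\le|x|$ is what lets the dominated convergence argument above go through, since $|\chi_n(X_r)-X_r|^2\le 4|X_r|^2$ is integrable by (2).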
Step 3.
We prove (5). By Step 2, we know that $\bar h(t,x)$ is $C^1$ in $t$ and $C^2$ in $x$. Therefore the classical Itô formula yields
\begin{align*}
\mathrm{d}h(t,X_t,\mathscr{L}_{X_t})&=\mathrm{d}\bar h(t,X_t)\\
&=\partial_t\bar h(t,X_t)\mathrm{d}t+\big\langle\partial_x\bar h(t,X_t),b(t,X_t,\mu_t)\big\rangle\mathrm{d}t+\big\langle\partial_x\bar h(t,X_t),\sigma(t,X_t,\mu_t)\mathrm{d}B_t\big\rangle\\
&\quad+\int_U\Big[\bar h\big(t,X_t+f(t,X_t,\mu_t,u)\big)-\bar h(t,X_t)-\big\langle f(t,X_t,\mu_t,u),\partial_x\bar h(t,X_t)\big\rangle\Big]\nu(\mathrm{d}u)\mathrm{d}t\\
&\quad+\int_U\Big[\bar h\big(t,X_{t-}+f(t,X_{t-},\mu_t,u)\big)-\bar h(t,X_{t-})\Big]\tilde N(\mathrm{d}t,\mathrm{d}u)\\
&\quad+\frac12\mathrm{tr}\Big(\sigma\sigma^*(t,X_t,\mu_t)\,\partial_x^2\bar h(t,X_t)\Big)\mathrm{d}t\\
&=\partial_t h(t,X_t,\mu_t)\mathrm{d}t+\partial_t h(s,x,\mu_t)\big|_{s=t,x=X_t}\mathrm{d}t+\big\langle\partial_x h(t,X_t,\mu_t),b(t,X_t,\mu_t)\big\rangle\mathrm{d}t\\
&\quad+\big\langle\partial_x h(t,X_t,\mu_t),\sigma(t,X_t,\mu_t)\mathrm{d}B_t\big\rangle+\frac12\mathrm{tr}\Big(\sigma\sigma^*(t,X_t,\mu_t)\,\partial_x^2 h(t,X_t,\mu_t)\Big)\mathrm{d}t\\
&\quad+\int_U\Big[h\big(t,X_{t-}+f(t,X_{t-},\mu_t,u),\mu_t\big)-h(t,X_{t-},\mu_t)\Big]\tilde N(\mathrm{d}t,\mathrm{d}u)\\
&\quad+\int_U\Big[h\big(t,X_t+f(t,X_t,\mu_t,u),\mu_t\big)-h(t,X_t,\mu_t)-\big\langle f(t,X_t,\mu_t,u),\partial_x h(t,X_t,\mu_t)\big\rangle\Big]\nu(\mathrm{d}u)\mathrm{d}t\\
&=\partial_t h(t,X_t,\mu_t)\mathrm{d}t+\big\langle\partial_x h(t,X_t,\mu_t),b(t,X_t,\mu_t)\big\rangle\mathrm{d}t+\big\langle\partial_x h(t,X_t,\mu_t),\sigma(t,X_t,\mu_t)\mathrm{d}B_t\big\rangle\\
&\quad+\int_U\Big[h\big(t,X_{t-}+f(t,X_{t-},\mu_t,u),\mu_t\big)-h(t,X_{t-},\mu_t)\Big]\tilde N(\mathrm{d}t,\mathrm{d}u)\\
&\quad+\frac12\mathrm{tr}\Big(\sigma\sigma^*(t,X_t,\mu_t)\,\partial_x^2 h(t,X_t,\mu_t)\Big)\mathrm{d}t\\
&\quad+\int_{\mathbb{R}^d}\big\langle b(t,y,\mu_t),\partial_\mu h(t,X_t,\mu_t)(y)\big\rangle\,\mu_t(\mathrm{d}y)\mathrm{d}t\\
&\quad+\frac12\int_{\mathbb{R}^d}\mathrm{tr}\Big((\sigma\sigma^*)(t,y,\mu_t)\,\partial_y\partial_\mu h(t,X_t,\mu_t)(y)\Big)\mu_t(\mathrm{d}y)\mathrm{d}t\\
&\quad+\int_U\int_0^1\int_{\mathbb{R}^d}\Big\langle\partial_\mu h(t,X_t,\mu_t)\big(y+\eta f(t,y,\mu_t,u)\big)-\partial_\mu h(t,X_t,\mu_t)(y),f(t,y,\mu_t,u)\Big\rangle\,\mu_t(\mathrm{d}y)\mathrm{d}\eta\,\nu(\mathrm{d}u)\mathrm{d}t\\
&\quad+\int_U\Big[h\big(t,X_t+f(t,X_t,\mu_t,u),\mu_t\big)-h(t,X_t,\mu_t)-\big\langle f(t,X_t,\mu_t,u),\partial_x h(t,X_t,\mu_t)\big\rangle\Big]\nu(\mathrm{d}u)\mathrm{d}t.
\end{align*}
This is exactly (5). The proof is complete.
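A structural remark on the $\eta$-integral appearing in (21) and (5): by the fundamental theorem of calculus, $\int_0^1\langle\partial g(y+\eta f)-\partial g(y),f\rangle\,\mathrm{d}\eta=g(y+f)-g(y)-\langle f,\partial g(y)\rangle$, so for linear functionals $H(\mu)=\int g\,\mathrm{d}\mu$ (for which $\partial_\mu H(\mu)(y)=\partial g(y)$) the jump term is exactly the usual nonlocal compensator. The Python sketch below checks this identity numerically for a hypothetical test function $g$ of our own choosing (all names here are illustrative).

```python
import numpy as np

def g(y):
    # smooth test function g(y) = sin(y1) + y1*y2 + y2^2 (hypothetical choice)
    return np.sin(y[0]) + y[0] * y[1] + y[1] ** 2

def grad_g(y):
    # gradient of g computed by hand
    return np.array([np.cos(y[0]) + y[1], y[0] + 2.0 * y[1]])

def eta_integral(y, f, m=20000):
    """Left side: midpoint-rule quadrature of
    int_0^1 <grad g(y + eta*f) - grad g(y), f> d(eta)."""
    etas = (np.arange(m) + 0.5) / m
    vals = [float(np.dot(grad_g(y + e * f) - grad_g(y), f)) for e in etas]
    return float(np.mean(vals))

def taylor_remainder(y, f):
    """Right side: g(y+f) - g(y) - <f, grad g(y)>."""
    return float(g(y + f) - g(y) - np.dot(f, grad_g(y)))
```

The agreement of the two sides explains why the linearised form in (21) and the compensator form in (5) carry the same information.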
Acknowledgements:
The authors are very grateful to Professor Xicheng Zhang for valuable discussions. The first author also thanks Professor Renming Song for providing her with an excellent working environment at the University of Illinois at Urbana-Champaign.
References

[1] R. Buckdahn, J. Li, S. Peng, C. Rainer: Mean-field stochastic differential equations and associated PDEs, Ann. Probab.
[2] P. Cardaliaguet: Notes on mean field games (from P.-L. Lions' lectures at the Collège de France).
[4] C. Dellacherie and P.-A. Meyer: Probabilities and Potential B: Theory of Martingales. North-Holland, Amsterdam/New York/Oxford, 1982.
[5] T. Hao and J. Li: Mean-field SDEs with jumps and nonlocal integral-PDEs, Nonlinear Differential Equations Appl., 23(2)(2016)1-51.
[6] J. Horowitz and R. L. Karandikar: Mean rates of convergence of empirical measures in the Wasserstein metric, Journal of Computational and Applied Mathematics.
[7] N. Ikeda and S. Watanabe: Stochastic Differential Equations and Diffusion Processes, 2nd ed., North-Holland/Kodansha, Amsterdam/Tokyo, 1989.
[8] J. Jacod and A. N. Shiryaev: Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin, 1987.
[9] J. Li: Mean-field forward and backward SDEs with jumps and associated nonlocal quasilinear integral-PDEs, Stoch. Proc. Appl., 128(2018)3118-3180.
[10] L. Lin, F. Xu, Q. Zhang: The link between stochastic differential equations with non-Markovian coefficients and backward stochastic partial differential equations. Preprint, 2019.
[11] H. P. McKean, Jr.: A class of Markov processes associated with nonlinear parabolic equations, Proc. Natl. Acad. Sci. USA, 56(1966)1907-1911.
[12] H. P. McKean, Jr.: Propagation of chaos for a class of non-linear parabolic equations. Stochastic Differential Equations (Lecture Series in Differential Equations, Session 7, Catholic Univ., 1967).
[13] K. R. Parthasarathy: Probability Measures on Metric Spaces. AMS Chelsea Publishing, 2005.
[14] H. Qiao and J.-L. Wu: Characterising the path-independence of the Girsanov transformation for non-Lipschitz SDEs with jumps, Statistics and Probability Letters, 119(2016)326-333.
[15] H. Qiao and J.-L. Wu: On the path-independence of the Girsanov transformation for stochastic evolution equations with jumps in Hilbert spaces, Discrete and Continuous Dynamical Systems-B, 24(2019)1449-1467.
[16] P. Ren and F.-Y. Wang: Space-distribution PDEs for path independent additive functionals of McKean-Vlasov SDEs, to appear in Infin. Dimens. Anal. Quantum Probab. Relat. Top.
[17] P. Ren and F. Yang: Path independence of additive functionals for stochastic differential equations under G-framework, Front. Math. China, 14(2019)135-148.
[18] A. Truman, F.-Y. Wang, J.-L. Wu, W. Yang: A link of stochastic differential equations to nonlinear parabolic equations, Science China Mathematics, 55(2012)1971-1976.
[19] J.-L. Wu and W. Yang: On stochastic differential equations and a generalised Burgers equation, pp. 425-435 in Stochastic Analysis and Its Applications to Finance: Essays in Honor of Prof. Jia-An Yan (eds. T. S. Zhang, X. Y. Zhou), Interdisciplinary Mathematical Sciences, Vol. 13, World Scientific, Singapore, 2012.