Reflected BSDEs and optimal control and stopping for infinite-dimensional systems
Marco Fuhrman
Politecnico di Milano, Dipartimento di Matematica
piazza Leonardo da Vinci 32, 20133 Milano, Italy
e-mail: [email protected]

Federica Masiero, Gianmario Tessitore
Dipartimento di Matematica e Applicazioni, Università di Milano Bicocca
via Cozzi 55, 20125 Milano, Italy
e-mail: [email protected], [email protected]
Abstract
We introduce the notion of mild supersolution for an obstacle problem in an infinite-dimensional Hilbert space. The minimal supersolution of this problem is given in terms of a reflected BSDE in an infinite-dimensional Markovian framework. The results are applied to an optimal control and stopping problem.
1 Introduction

The connection between backward stochastic differential equations (BSDEs) in $\mathbb{R}^n$ and semilinear parabolic PDEs has been known since the seminal paper of Pardoux and Peng [18]. This result was extended to reflected BSDEs, and correspondingly to obstacle problems for PDEs, in [6]. Moreover, it is well known that the above equations are related to optimal stochastic control problems (in the first case) and to optimal stopping or mixed optimal control/stopping problems (in the second); see [19]. We notice that in the finite-dimensional framework the above-mentioned partial differential equations are understood either in the classical sense (see [18]) or, more frequently, in the viscosity sense.

On the other hand, the relation between backward stochastic differential equations in infinite-dimensional spaces, optimal control of Hilbert-space-valued stochastic evolution equations and parabolic equations on infinite-dimensional spaces was investigated in [8] and in several subsequent papers. In this literature it appears that the concept of solution of the PDE has to be modified in the infinite-dimensional case: classical solutions require too much regularity, while the theory of viscosity solutions can be applied only in special cases, with trace-class noise and a very regular value function (see [14]). The type of definition that turns out to fit the infinite-dimensional framework and the BSDE approach is the notion of mild solution. Namely, consider a semilinear parabolic PDE such as
$$
\begin{cases}
-\dfrac{\partial u}{\partial t}(t,x) = L_t u(t,x) + \psi\big(t,x,u(t,x),\nabla u(t,x)\big), & t \in [0,T],\ x \in H,\\[1ex]
u(T,x) = \phi(x),
\end{cases}
$$
and let $(P_{s,t})_{0 \le s \le t \le T}$ be the transition semigroup related to the second-order differential operators $(L_t)_{t \in [0,T]}$. Then a function $u \colon [0,T] \times H \to \mathbb{R}$ is called a mild solution of the above PDE whenever it admits a gradient (in a suitable sense) and it holds:
$$
u(s,x) = P_{s,t}[u(t,\cdot)](x) + \int_s^t P_{s,\tau}\big[\psi(\tau,\cdot,u(\tau,\cdot),\nabla u(\tau,\cdot))\big](x)\,d\tau.
$$
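For illustration only (a one-dimensional toy example of ours, with $H$ replaced by $\mathbb{R}$, $L_t = \tfrac12\,d^2/dx^2$, and a source term $\psi$ depending only on $\tau$, so that $P_{s,\tau}[\psi(\tau)] = \psi(\tau)$), the mild-solution identity can be checked numerically by representing the heat semigroup through Gauss–Hermite quadrature:

```python
import numpy as np

# Gauss-Hermite quadrature: E[f(x + sigma*Z)] = (1/sqrt(pi)) * sum_i w_i f(x + sigma*sqrt(2)*xi_i)
nodes, weights = np.polynomial.hermite.hermgauss(60)

def P(s, t, f, x):
    """Heat transition semigroup P_{s,t}[f](x) = E[f(x + W_t - W_s)], W a Brownian motion."""
    sigma = np.sqrt(t - s)
    pts = x + sigma * np.sqrt(2.0) * nodes
    return float(np.dot(weights, np.array([f(p) for p in pts])) / np.sqrt(np.pi))

T = 1.0
phi = lambda y: np.cos(y)                       # terminal datum
int_psi = lambda a, b: np.cos(a) - np.cos(b)    # closed form of ∫_a^b psi(tau) dtau for psi(tau) = sin(tau)

def u(s, x):
    """Mild solution u(s,x) = P_{s,T}[phi](x) + ∫_s^T psi(tau) dtau (psi constant in x)."""
    return P(s, T, phi, x) + int_psi(s, T)

# Verify the mild-solution identity at an intermediate time t in (s, T):
s, t, x = 0.2, 0.6, 0.3
lhs = u(s, x)
rhs = P(s, t, lambda y: u(t, y), x) + int_psi(s, t)
assert abs(lhs - rhs) < 1e-8
```

The identity holds here because of the Chapman–Kolmogorov relation $P_{s,t}P_{t,T} = P_{s,T}$; the quadrature makes this exact up to floating-point precision.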
A large body of literature has since extended the BSDE approach to control problems in several different situations, both in the finite- and in the infinite-dimensional framework, but, to the best of our knowledge, the problem of relating reflected BSDEs in infinite-dimensional spaces to obstacle problems for PDEs with infinitely many variables has never been investigated. The point is that it is not obvious how one should include the reflection term (which is not absolutely continuous with respect to Lebesgue measure on $[0,T]$) in the definition of mild solution.

In this paper, inspired by the work of A. Bensoussan (see [2]), we overcome this difficulty by proposing the notion of mild supersolution (see Definition 3.2). To be more specific, our main result is the following: if $(X^{s,x}, Y^{s,x}, Z^{s,x}, K^{s,x})$ is the solution of the forward-backward system with reflected BSDE
$$
\begin{cases}
dX^{s,x}_t = \big(AX^{s,x}_t + F(t,X^{s,x}_t)\big)\,dt + G(t,X^{s,x}_t)\,dW_t, & t \in [s,T],\\
X^{s,x}_s = x,\\
-dY^{s,x}_t = \psi(t,X^{s,x}_t,Y^{s,x}_t,Z^{s,x}_t)\,dt + dK^{s,x}_t - Z^{s,x}_t\,dW_t, & t \in [0,T],\\
Y^{s,x}_T = \phi(X^{s,x}_T),\\
Y^{s,x}_t \ge h(X^{s,x}_t), \qquad \displaystyle\int_0^T \big(Y^{s,x}_t - h(X^{s,x}_t)\big)\,dK^{s,x}_t = 0,
\end{cases}
$$
then, setting $u(t,x) := Y^{t,x}_t$, the function $u$ is the minimal mild supersolution of the obstacle problem
$$
\begin{cases}
\min\Big(u(t,x) - h(x),\ -\dfrac{\partial u}{\partial t}(t,x) - L_t u(t,x) - \psi\big(t,x,u(t,x),\nabla u(t,x)G(t,x)\big)\Big) \ge 0, & t \in [0,T],\ x \in H,\\[1ex]
u(T,x) = \phi(x).
\end{cases} \tag{1.1}
$$
Another feature of this paper is that we do not assume any nondegeneracy condition on the coefficient $G$ (and consequently no strong ellipticity of the second-order differential operator in the PDE). Therefore we cannot expect to have regular solutions of the obstacle problem, and we have to specify how the directional gradient $\nabla u\, G$ is to be understood. We choose here to employ the definition of generalized gradient (in a probabilistic sense) introduced in [11]. It was proved in [11] that such a generalized gradient exists for all locally Lipschitz functions.
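Before proceeding, the reflection mechanism can be visualized in a deterministic toy case (our own illustration, not from the paper): take $\psi \equiv 0$, no noise, one space dimension, obstacle $S_t = 1 - t$ on $[0,1]$ and terminal datum $\xi = 0$, so that the reflected solution is exactly $Y_t = S_t$. The penalized approximation $-\dot y^n_t = n\,(y^n_t - S_t)^-$, $y^n_T = \xi$, then tracks the obstacle from below with a lag of order $1/n$:

```python
import numpy as np

def penalized_backward(n, S, xi, T=1.0, steps=20000):
    """Explicit backward Euler for -dy/dt = n * (y - S(t))^- with y(T) = xi.
    Returns the largest deviation |y(t) - S(t)| along the path."""
    dt = T / steps
    t, y = T, xi
    worst = 0.0
    for _ in range(steps):
        # step backward in time: y(t - dt) = y(t) + dt * n * max(S(t) - y(t), 0)
        y = y + dt * n * max(S(t) - y, 0.0)
        t -= dt
        worst = max(worst, abs(y - S(t)))
    return worst

S = lambda t: 1.0 - t            # obstacle, chosen with S(T) = xi so the data are compatible
deviation = penalized_backward(n=500, S=S, xi=0.0)
print(deviation)                 # lag behind the obstacle, of order 1/n
assert deviation < 0.01
```

Increasing the penalization parameter `n` shrinks the gap, which is the one-dimensional shadow of the convergence of the penalized BSDEs used throughout Section 2.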
In Theorem 2.9 we prove that our candidate solution $u(t,x) := Y^{t,x}_t$ is indeed locally Lipschitz. Moreover, we notice that we work under general growth assumptions, with respect to $x$, on the nonlinear term $\psi$ and on the final datum $\phi$. This forces us to obtain $L^p$ estimates on the solution of the reflected BSDE that extend the ones proved in [6].

The structure of the paper is the following. In Section 2 we study reflected BSDEs, obtaining the desired $L^p$ estimates and the local Lipschitz continuity with respect to the initial datum in the Markovian framework. In Section 3 we introduce the notion of minimal mild supersolution of the obstacle problem in the sense of the generalized gradient, and we show how it is related to reflected BSDEs. Finally, in Section 4 we apply the above results to an optimal control and stopping problem.

2 Reflected BSDEs

In a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ we consider a cylindrical Wiener process $\{W_\tau,\ \tau \ge 0\}$ in a Hilbert space $\Xi$, and $(\mathcal{F}_\tau)_{\tau \ge 0}$ is its natural filtration, augmented in the usual way. We consider the following reflected backward stochastic differential equation (RBSDE in the following):
$$
\begin{cases}
dY_t = -f(t,Y_t,Z_t)\,dt - dK_t + Z_t\,dW_t, & t \in [0,T],\\
Y_T = \xi,\\
Y_t \ge S_t, \qquad \displaystyle\int_0^T (Y_t - S_t)\,dK_t = 0,
\end{cases} \tag{2.1}
$$
for the unknown adapted processes $Y$, $Z$ and $K$. Here $Y$ and $K$ are real processes and $Z$ is a $\Xi^*$-valued process; $Y$ and $Z$ are square integrable processes, $Y$ admits a continuous modification, and $K$ is a continuous non-decreasing process with $K_0 = 0$. The equation is understood in the usual integral way, namely:
$$
Y_t + \int_t^T Z_r\,dW_r = \xi + \int_t^T f(r,Y_r,Z_r)\,dr + K_T - K_t, \qquad t \in [0,T],\ \mathbb{P}\text{-a.s.} \tag{2.2}
$$
We also consider equation (2.1) with $f$ not depending on $(y,z)$:
$$
\begin{cases}
dY_t = -f(t)\,dt - dK_t + Z_t\,dW_t, & t \in [0,T],\\
Y_T = \xi,\\
Y_t \ge S_t, \qquad \displaystyle\int_0^T (Y_t - S_t)\,dK_t = 0.
\end{cases} \tag{2.3}
$$
In the following, if $E$ is a separable Hilbert space, $0 < a < b$ and $p \ge 1$, we denote by $L^p_{\mathcal{P}}(\Omega \times [a,b], E)$ the space of $E$-valued $\mathcal{F}_t$-predictable processes $\ell$ such that
$$
\mathbb{E}\int_a^b |\ell(t)|^p\,dt < \infty.
$$
If $E = \mathbb{R}$ we write $L^p_{\mathcal{P}}(\Omega \times [a,b])$ instead of $L^p_{\mathcal{P}}(\Omega \times [a,b], \mathbb{R})$. Moreover, by $L^p_{\mathcal{P}}(\Omega, C([a,b], E))$ we denote the subspace of $L^p_{\mathcal{P}}(\Omega \times [a,b], E)$ given by processes admitting a continuous version and verifying $\mathbb{E}\sup_{t \in [a,b]} |\ell(t)|^p < \infty$. An analogous definition is given for $L^p_{\mathcal{P}}(\Omega, C([a,b]))$.

It is proved in [6, Proposition 5.1] that if $f \in L^2_{\mathcal{P}}(\Omega \times [0,T])$, $\xi \in L^2(\Omega)$ and $\sup_{t \in [0,T]} S^+_t \in L^2(\Omega)$, then equation (2.3) admits a unique solution $(Y, Z, K)$ with $(Y, Z) \in L^2_{\mathcal{P}}(\Omega \times [0,T]) \times L^2_{\mathcal{P}}(\Omega \times [0,T], \Xi)$; moreover $Y$ admits a continuous version with $\mathbb{E}\sup_{t \in [0,T]} |Y_t|^2 < \infty$, and finally $K_T \in L^2(\Omega)$.

In the following we need to prove regular dependence of the solution of the above equation on parameters, namely on the initial data of a related (forward) stochastic differential equation. Due to the assumptions that we impose on the nonlinearity $\psi$, we will need $L^p$ estimates (both on the solution and on its approximations, corresponding to suitable penalized approximating equations).

We make the following assumptions on the generator, the final datum and the obstacle of the RBSDE (2.1):

Hypothesis 2.1 $f \colon (\Omega \times [0,T]) \times \mathbb{R} \times \Xi^* \to \mathbb{R}$ is measurable with respect to $\mathcal{P} \otimes \mathcal{B}(\mathbb{R} \times \Xi^*)$ (where by $\mathcal{P}$ we mean the predictable $\sigma$-algebra on $\Omega \times [0,T]$, and by $\mathcal{B}(\Lambda)$ the Borel $\sigma$-algebra on any topological space $\Lambda$). Moreover $f$ is Lipschitz with respect to $y$ and $z$, uniformly in $t$ and $\omega$, and, for some $p \ge 2$,
$$
\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt < \infty.
$$
The final datum $\xi$ is $\mathcal{F}_T$-measurable and $p$-integrable. Finally, the obstacle $S$ is a continuous, $\mathcal{P}$-measurable, real-valued process satisfying
$$
\mathbb{E}\sup_{t \in [0,T]} |S_t|^{2(p-1)} < \infty.
$$
We notice that the integrability requirements are not optimal (for instance, we assume $p$-integrability jointly in $\Omega \times [0,T]$ for the generator $f$, and $2(p-1)$-integrability for the obstacle $S$). Nevertheless, such assumptions are verified in the Markovian framework (see Section 2.1) and will allow us to treat general obstacle problems under general assumptions (see Section 3).

By a penalization procedure, we can prove the following:

Theorem 2.2
If Hypothesis 2.1 holds true, equation (2.1) admits a unique adapted solution $(Y, Z, K)$ such that $Y$ admits a continuous version and $K$ is non-decreasing with $K_0 = 0$. Moreover $(Y, Z, K)$ satisfies
$$
\mathbb{E}\sup_{t \in [0,T]} |Y_t|^p + \mathbb{E}\left(\int_0^T |Z_t|^2\,dt\right)^{p/2} + \mathbb{E}|K_T|^p \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t \in [0,T]} |S_t|^{2(p-1)}\right)^{p/(2(p-1))}, \tag{2.4}
$$
where $C$ only depends on $T$ and on the Lipschitz constant of $f$.

We first need an analogous result on the corresponding penalized equation, which we now introduce. Let us consider the following BSDE:
$$
\begin{cases}
-dY^n_t = f(t,Y^n_t,Z^n_t)\,dt + n(Y^n_t - S_t)^-\,dt - Z^n_t\,dW_t, & t \in [0,T],\\
Y^n_T = \xi.
\end{cases} \tag{2.5}
$$
It is shown in [6] that the penalized BSDE (2.5) admits a unique solution $(Y^n, Z^n)$ in $L^2_{\mathcal{P}}(\Omega, C([0,T])) \times L^2_{\mathcal{P}}(\Omega \times [0,T], \Xi)$, whose norm (in the above spaces) is uniformly bounded with respect to $n$. Moreover such a solution $(Y^n, Z^n)$ converges in $L^2_{\mathcal{P}}(\Omega, C([0,T])) \times L^2_{\mathcal{P}}(\Omega \times [0,T], \Xi)$ to $(Y, Z)$, the solution of the RBSDE. Next we want to prove an $L^p$-estimate, uniform with respect to $n$.

Proposition 2.3
If Hypothesis 2.1 holds true, then equation (2.5) admits a unique adapted solution $(Y^n, Z^n)$ such that $Y^n$ admits a continuous version. Moreover $(Y^n, Z^n)$ satisfies
$$
\mathbb{E}\sup_{t \in [0,T]} |Y^n_t|^p + \mathbb{E}\left(\int_0^T |Z^n_t|^2\,dt\right)^{p/2} \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t \in [0,T]} |S_t|^{2(p-1)}\right)^{p/(2(p-1))}. \tag{2.6}
$$
Finally, if $K^n_t = n\int_0^t (Y^n_s - S_s)^-\,ds$, then $K^n$ is an adapted, continuous, non-decreasing process satisfying
$$
\mathbb{E}|K^n_T|^p \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t \in [0,T]} |S_t|^{2(p-1)}\right)^{p/(2(p-1))}, \tag{2.7}
$$
where $C$ only depends on $p$, $T$ and on the Lipschitz constant of $f$.

Proof. First of all we notice that we can always reduce ourselves to the case in which
$$
\frac{y}{|y|}\,f(t,y,z) \le |f(t,0,0)| + \mu|y| + \lambda|z| \qquad \text{with } \mu + \lambda \le -1. \tag{2.8}
$$
Indeed, setting $\tilde Y^n_t = e^{at} Y^n_t$, $\tilde Z^n_t = e^{at} Z^n_t$, we get that $(\tilde Y^n, \tilde Z^n)$ satisfies
$$
\begin{cases}
-d\tilde Y^n_t = e^{at} f(t, e^{-at}\tilde Y^n_t, e^{-at}\tilde Z^n_t)\,dt - a\tilde Y^n_t\,dt + n(\tilde Y^n_t - \tilde S_t)^-\,dt - \tilde Z^n_t\,dW_t, & t \in [0,T],\\
\tilde Y^n_T = e^{aT}\xi.
\end{cases}
$$
So the generator is given by $\tilde f(t,y,z) := e^{at} f(t, e^{-at}y, e^{-at}z) - ay$, and by choosing $a$ sufficiently large (depending only on the Lipschitz constant of $f$) we can assume $\mu + \lambda \le -1$. From now on we assume that (2.8) holds true and, for simplicity, we omit the superscript $\sim$ where necessary. Moreover, by $c$ we shall denote a constant that depends only on the Lipschitz constant of $f$, on $T$ and on $p$, and by $c(\delta)$ a constant that depends, besides the above parameters, on an auxiliary constant $\delta > 0$. Their values can change from line to line.

We apply the Itô formula to $|Y^n_t|^p$, $s \le t \le T$, and we get
$$
-d|Y^n_t|^p = p|Y^n_t|^{p-1}\hat Y^n_t f(t,Y^n_t,Z^n_t)\,dt + pn|Y^n_t|^{p-1}\hat Y^n_t (Y^n_t - S_t)^-\,dt - p|Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t - \frac{p(p-1)}{2}|Y^n_t|^{p-2}|Z^n_t|^2\,dt,
$$
where $\hat Y^n_t := Y^n_t |Y^n_t|^{-1}$. Integrating between $s$ and $T$, $0 \le s \le t \le T$, we get
$$
|Y^n_s|^p + \frac{p(p-1)}{2}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt = |\xi|^p + p\int_s^T |Y^n_t|^{p-1}\hat Y^n_t f(t,Y^n_t,Z^n_t)\,dt + np\int_s^T |Y^n_t|^{p-1}\hat Y^n_t (Y^n_t - S_t)^-\,dt - p\int_s^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t.
$$
Using (2.8), the Young inequality and the elementary bound $|Y^n_t|^{p-1}\hat Y^n_t (Y^n_t - S_t)^- \le |S_t|^{p-1}(Y^n_t - S_t)^-$, and recalling that $\mu + \lambda \le -1$ and $p \ge 2$, we get
$$
|Y^n_s|^p + \frac{p(p-1)}{4}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt \le |\xi|^p + c\int_s^T |f(t,0,0)|^p\,dt + \sup_{t \in [s,T]}|S_t|^{p-1}\, n\int_s^T (Y^n_t - S_t)^-\,dt - p\int_s^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t.
$$
By the penalized BSDE (2.5) in integral form we deduce that
$$
n\int_s^T (Y^n_t - S_t)^-\,dt = -\xi + Y^n_s - \int_s^T f(t,Y^n_t,Z^n_t)\,dt + \int_s^T Z^n_t\,dW_t, \tag{2.9}
$$
and so
$$
\begin{aligned}
|Y^n_s|^p + \frac{p(p-1)}{4}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt
&\le |\xi|^p + c\int_s^T |f(t,0,0)|^p\,dt - p\int_s^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t \\
&\quad + c\sup_{t\in[s,T]}|S_t|^p + \frac12|Y^n_s|^p + \sup_{t\in[s,T]}|S_t|^{p-1}\left[|\xi| + \Big|\int_s^T f(t,Y^n_t,Z^n_t)\,dt\Big| + \Big|\int_s^T Z^n_t\,dW_t\Big|\right]. \tag{2.10}
\end{aligned}
$$
Now we recall that, by the $L^p$-estimates on BSDEs (see e.g. [8]), $(Y^n, Z^n) \in L^p_{\mathcal{P}}(\Omega, C([0,T])) \times L^p_{\mathcal{P}}(\Omega, L^2([0,T],\Xi))$, so that the Itô integral $\int_s^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t$ has null expectation. Taking expectation in the above inequality,
$$
\begin{aligned}
\frac12\,\mathbb{E}|Y^n_s|^p + \frac{p(p-1)}{4}\,\mathbb{E}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt
&\le \mathbb{E}|\xi|^p + c\,\mathbb{E}\int_s^T |f(t,0,0)|^p\,dt + \mathbb{E}\sup_{t\in[s,T]}|S_t|^{p-1}\left[|\xi| + \Big|\int_s^T f(t,Y^n_t,Z^n_t)\,dt\Big| + \Big|\int_s^T Z^n_t\,dW_t\Big|\right] \\
&\le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_s^T |f(t,0,0)|^p\,dt \\
&\quad + \left(\mathbb{E}\sup_{t\in[s,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}\left[|\xi| + \int_s^T \big(|f(t,0,0)| + c|Y^n_t| + c|Z^n_t|\big)\,dt + \Big|\int_s^T Z^n_t\,dW_t\Big|\right]^2\right)^{1/2}. \tag{2.11}
\end{aligned}
$$
As already mentioned, the penalized BSDE admits a unique solution whose norm is uniformly bounded in $L^2_{\mathcal{P}}(\Omega, C([0,T])) \times L^2_{\mathcal{P}}(\Omega \times [0,T], \Xi)$; namely, the estimates in Section 6 of [6] read:
$$
\mathbb{E}\sup_{s\in[0,T]}|Y^n_s|^2 + \mathbb{E}\int_0^T |Z^n_t|^2\,dt \le c\,\mathbb{E}|\xi|^2 + c\,\mathbb{E}\int_0^T |f(t,0,0)|^2\,dt.
$$
So, plugging the above into (2.11), we get, also by the BDG inequality,
$$
\begin{aligned}
\frac12\,\mathbb{E}|Y^n_s|^p + \frac{p(p-1)}{4}\,\mathbb{E}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt
&\le \mathbb{E}|\xi|^p + c\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt \\
&\quad + c\left(\mathbb{E}\sup_{t\in[s,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}|\xi|^2 + \mathbb{E}\int_0^T |f(t,0,0)|^2\,dt + \mathbb{E}\int_s^T |Y^n_t|^2\,dt + \mathbb{E}\int_s^T |Z^n_t|^2\,dt\right)^{1/2} \\
&\le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + c\left(\mathbb{E}\sup_{t\in[0,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}|\xi|^2 + C\,\mathbb{E}\int_0^T |f(t,0,0)|^2\,dt\right)^{1/2}. \tag{2.12}
\end{aligned}
$$
So we can deduce that
$$
\frac{p(p-1)}{4}\,\mathbb{E}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t\in[0,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}|\xi|^2 + C\,\mathbb{E}\int_0^T |f(t,0,0)|^2\,dt\right)^{1/2}. \tag{2.13}
$$
By (2.10), with $r$ in place of $s$, $0 \le s \le r \le T$, we get
$$
\begin{aligned}
|Y^n_r|^p &\le |\xi|^p + c\int_0^T |f(t,0,0)|^p\,dt - p\int_r^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t + c\sup_{t\in[s,T]}|S_t|^p \\
&\quad + 2\sup_{t\in[s,T]}|S_t|^{p-1}\left[|\xi| + \Big|\int_r^T f(t,Y^n_t,Z^n_t)\,dt\Big| + \Big|\int_r^T Z^n_t\,dW_t\Big|\right].
\end{aligned}
$$
Taking the supremum over $r$, then taking expectation and performing computations in part similar to those in (2.12), we arrive at
$$
\begin{aligned}
\mathbb{E}\sup_{r\in[s,T]}|Y^n_r|^p &\le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\int_s^T |f(t,0,0)|^p\,dt + c\,\mathbb{E}\sup_{r\in[s,T]}\Big|\int_r^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t\Big| \\
&\quad + c\,\mathbb{E}\sup_{t\in[s,T]}|S_t|^p + c\left(\mathbb{E}\sup_{t\in[s,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}\left[|\xi| + \int_s^T |f(t,0,0)|\,dt\right]^2\right)^{1/2}. \tag{2.14}
\end{aligned}
$$
By the BDG and Young inequalities,
$$
c\,\mathbb{E}\sup_{r\in[s,T]}\Big|\int_r^T |Y^n_t|^{p-1}\hat Y^n_t Z^n_t\,dW_t\Big| \le \frac12\,\mathbb{E}\sup_{r\in[s,T]}|Y^n_r|^p + c\,\mathbb{E}\int_s^T |Y^n_t|^{p-2}|Z^n_t|^2\,dt,
$$
so that, also by applying estimate (2.13),
$$
\mathbb{E}\sup_{r\in[s,T]}|Y^n_r|^p \le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\int_s^T |f(t,0,0)|^p\,dt + c\left(\mathbb{E}\sup_{t\in[s,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}\left[|\xi| + \int_s^T |f(t,0,0)|\,dt\right]^2\right)^{1/2}. \tag{2.15}
$$
Next we estimate $\mathbb{E}\big(\int_s^T |Z^n_t|^2\,dt\big)^{p/2}$. We apply the Itô formula to $|Y^n_t|^2$, $s \le t \le T$, obtaining
$$
d|Y^n_t|^2 = -2Y^n_t f(t,Y^n_t,Z^n_t)\,dt - 2nY^n_t (Y^n_t - S_t)^-\,dt + 2Y^n_t Z^n_t\,dW_t + |Z^n_t|^2\,dt.
$$
We integrate on $[s,T]$ and raise to the power $p/2$:
$$
\begin{aligned}
|Y^n_s|^p + \left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2}
&\le c|\xi|^p + c\Big|\int_s^T Y^n_t f(t,Y^n_t,Z^n_t)\,dt\Big|^{p/2} + c\left(n\int_s^T Y^n_t (Y^n_t - S_t)^-\,dt\right)^{p/2} + c\Big|\int_s^T Y^n_t Z^n_t\,dW_t\Big|^{p/2} \\
&\le c|\xi|^p + c\Big|\int_s^T Y^n_t f(t,Y^n_t,Z^n_t)\,dt\Big|^{p/2} + c\left(\sup_{t\in[s,T]}|S_t|\; n\int_s^T (Y^n_t - S_t)^-\,dt\right)^{p/2} + c\Big|\int_s^T Y^n_t Z^n_t\,dW_t\Big|^{p/2},
\end{aligned}
$$
where we used that $Y^n_t (Y^n_t - S_t)^- \le S_t (Y^n_t - S_t)^-$. Using the expression (2.9) for $n\int_s^T (Y^n_t - S_t)^-\,dt$ coming from the penalized BSDE (2.5), we get
$$
\begin{aligned}
\left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2}
&\le c|\xi|^p + c\left(\int_s^T \big(|Y^n_t|\,|f(t,0,0)| + \mu|Y^n_t|^2 + \lambda|Y^n_t|\,|Z^n_t|\big)\,dt\right)^{p/2} + c\Big|\int_s^T Y^n_t Z^n_t\,dW_t\Big|^{p/2} \\
&\quad + c\sup_{t\in[s,T]}|S_t|^{p/2}\left(|\xi| + |Y^n_s| + \int_s^T \big(|f(t,0,0)| + \mu|Y^n_t| + \lambda|Z^n_t|\big)\,dt + \Big|\int_s^T Z^n_t\,dW_t\Big|\right)^{p/2}.
\end{aligned}
$$
Taking expectation, by the BDG and Young inequalities and by using estimate (2.15), we get
$$
\begin{aligned}
\mathbb{E}\left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2}
&\le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\sup_{r\in[s,T]}|Y^n_r|^p + c\,\mathbb{E}\left(\int_s^T |f(t,0,0)|\,dt\right)^p + \frac14\,\mathbb{E}\left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2} \\
&\quad + c\,\mathbb{E}\left(\int_s^T |Y^n_t Z^n_t|^2\,dt\right)^{p/4} + c\,\mathbb{E}\sup_{t\in[s,T]}|S_t|^p \\
&\le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + \frac12\,\mathbb{E}\left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2} + c\,\mathbb{E}\sup_{r\in[s,T]}|Y^n_r|^p + c\,\mathbb{E}\sup_{t\in[s,T]}|S_t|^p,
\end{aligned}
$$
and hence, by (2.15),
$$
\mathbb{E}\left(\int_s^T |Z^n_t|^2\,dt\right)^{p/2} \le c\,\mathbb{E}|\xi|^p + c\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + c\left(\mathbb{E}\sup_{t\in[s,T]}|S_t|^{2(p-1)}\right)^{1/2}\left(\mathbb{E}\left[|\xi| + \int_s^T |f(t,0,0)|\,dt\right]^2\right)^{1/2},
$$
which concludes the estimate of $\mathbb{E}\big(\int_s^T |Z^n_t|^2\,dt\big)^{p/2}$. The estimate of $\mathbb{E}|K^n_T|^p$ is then an easy consequence of the previous ones and of relation (2.9). $\square$

We are now ready to prove Theorem 2.2.

Proof of Theorem 2.2.
By [6], Section 6, we know that $Y^n_t \uparrow Y_t$ and $\mathbb{E}\big(\sup_{t\in[0,T]}(Y_t - Y^n_t)^2\big) \to 0$. Thus, choosing a suitable subsequence, we can assume the $\mathbb{P}$-a.s. convergence of $\sup_{t\in[0,T]}(Y_t - Y^n_t)$ to $0$. Consequently, by the Fatou Lemma and (2.6), we get
$$
\mathbb{E}\sup_{t\in[0,T]}|Y_t|^p \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t\in[0,T]}|S_t|^{2(p-1)}\right)^{p/(2(p-1))}.
$$
Concerning the convergence of $Z^n$: again by [6], Section 6, we already know that $Z^n \to Z$ in $L^2_{\mathcal{P}}(\Omega\times[0,T])$, and by Proposition 2.3 we know that $Z^n$ is bounded in $L^p_{\mathcal{P}}(\Omega\times[0,T])$; so, extracting if needed a subsequence, we can assume that $(Z^n)$ converges weakly in $L^p_{\mathcal{P}}(\Omega\times[0,T])$, and consequently also weakly in $L^2_{\mathcal{P}}(\Omega\times[0,T])$. Therefore the weak limit of $(Z^n)$ in $L^p_{\mathcal{P}}(\Omega\times[0,T])$ must coincide with the strong limit $Z$ in the $L^2_{\mathcal{P}}(\Omega\times[0,T])$ topology. Consequently, again by (2.6), $Z$ satisfies
$$
\mathbb{E}\left(\int_0^T |Z_t|^2\,dt\right)^{p/2} \le C\,\mathbb{E}|\xi|^p + C\,\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt + C\left(\mathbb{E}\sup_{t\in[0,T]}|S_t|^{2(p-1)}\right)^{p/(2(p-1))}.
$$
Concerning $K$, by [6] (see again [6], Section 6) we already know that $\mathbb{E}|K^n_T - K_T|^2 \to 0$. The claim follows as before by the Fatou lemma, extracting a subsequence that converges $\mathbb{P}$-a.s. and exploiting estimate (2.6). $\square$

2.1 The Markovian framework

Now we consider an RBSDE depending on a forward equation with values in another real separable Hilbert space $H$. Namely, we consider the forward-backward system
$$
\begin{cases}
dX^{s,x}_t = \big(AX^{s,x}_t + F(t,X^{s,x}_t)\big)\,dt + G(t,X^{s,x}_t)\,dW_t, & t \in [s,T],\\
X^{s,x}_s = x,\\
-dY^{s,x}_t = \psi(t,X^{s,x}_t,Y^{s,x}_t,Z^{s,x}_t)\,dt + dK^{s,x}_t - Z^{s,x}_t\,dW_t, & t \in [0,T],\\
Y^{s,x}_T = \phi(X^{s,x}_T),\\
Y^{s,x}_t \ge h(X^{s,x}_t), \qquad \displaystyle\int_0^T \big(Y^{s,x}_t - h(X^{s,x}_t)\big)\,dK^{s,x}_t = 0.
\end{cases} \tag{2.16}
$$
We denote the solution of the RBSDE in the above system by $(Y^{s,x}, Z^{s,x}, K^{s,x})$, to stress the dependence on the initial conditions, or by $(Y, Z, K)$ if no confusion is possible. On the coefficients of the forward equation we make the following assumptions:

Hypothesis 2.4
1. $A$ is the generator of a strongly continuous semigroup of linear operators $(e^{tA})_{t \ge 0}$;
2. the mapping $F \colon [0,T] \times H \to H$ is measurable and satisfies, for some constant $C > 0$ and $0 \le \gamma < 1$,
$$
\big|e^{sA} F(\tau,x)\big| \le C s^{-\gamma}(1 + |x|), \qquad \big|e^{sA} F(\tau,x) - e^{sA} F(\tau,y)\big| \le C s^{-\gamma}|x - y|, \qquad s > 0,\ \tau \in [0,T],\ x, y \in H; \tag{2.17}
$$
3. $G$ is a mapping $[0,T] \times H \to L(\Xi,H)$ such that for every $v \in \Xi$ the map $Gv \colon [0,T] \times H \to H$ is measurable, and for every $s > 0$, $\tau \in [0,T]$ and $x \in H$ we have $e^{sA} G(\tau,x) \in L_2(\Xi,H)$. Moreover there exist $L > 0$ and $0 < \theta < 1/2$ such that
$$
\big|e^{sA} G(\tau,x)\big|_{L_2(\Xi,H)} \le L s^{-\theta}(1 + |x|), \qquad \big|e^{sA} G(\tau,x) - e^{sA} G(\tau,y)\big|_{L_2(\Xi,H)} \le L s^{-\theta}|x - y|, \qquad s > 0,\ \tau \in [0,T],\ x, y \in H. \tag{2.18}
$$
The next existence and uniqueness proposition is proved in [8].

Proposition 2.5
Under Hypothesis 2.4, the forward equation in (2.16) admits a unique continuous mild solution. Moreover $\mathbb{E}\sup_{t\in[s,T]}|X^{s,x}_t|^p \le C_p(1+|x|)^p$, for every $p \in (0,\infty)$ and some constant $C_p > 0$.

We will work under the following assumptions on $\psi$:

Hypothesis 2.6
The function $\psi \colon [0,T] \times H \times \mathbb{R} \times \Xi^* \to \mathbb{R}$ is Borel measurable and satisfies the following:
1. there exists a constant $L > 0$ such that
$$
|\psi(t,x,y_1,z_1) - \psi(t,x,y_2,z_2)| \le L\big(|y_1 - y_2| + |z_1 - z_2|_{\Xi^*}\big),
$$
for every $t \in [0,T]$, $x \in H$, $y_1, y_2 \in \mathbb{R}$, $z_1, z_2 \in \Xi^*$;
2. for every $t \in [0,T]$, $\psi(t,\cdot,\cdot,\cdot)$ is continuous from $H \times \mathbb{R} \times \Xi^*$ to $\mathbb{R}$;
3. there exist $L' > 0$ and $m \ge 0$ such that
$$
|\psi(t,x_1,y,z) - \psi(t,x_2,y,z)| \le L'|x_1 - x_2|\big(1 + |x_1|^m + |x_2|^m + |y|^m\big)\big(1 + |z|_{\Xi^*}\big),
$$
for every $t \in [0,T]$, $x_1, x_2 \in H$, $y \in \mathbb{R}$, $z \in \Xi^*$;
4. as far as the final datum $\phi$ and the obstacle $h$ are concerned, there exists $L > 0$ such that
$$
|\phi(x_1) - \phi(x_2)| \le L|x_1 - x_2|\big(1 + |x_1|^m + |x_2|^m\big), \qquad |h(x_1) - h(x_2)| \le L|x_1 - x_2|\big(1 + |x_1|^m + |x_2|^m\big),
$$
for all $x_1, x_2 \in H$.

We notice that Hypothesis 2.6 implies that
$$
|\psi(t,x,y,z)| \le L\big(1 + |x|^{m+1} + |y| + |z|_{\Xi^*}\big), \qquad |\phi(x)| \le L(1 + |x|^{m+1}), \qquad |h(x)| \le L(1 + |x|^{m+1}), \tag{2.19}
$$
for all $t \in [0,T]$, $x \in H$, $y \in \mathbb{R}$, $z \in \Xi^*$.

Proposition 2.7 Let Hypotheses 2.4 and 2.6 hold true and fix $s \in [0,T]$, $x \in H$. Then the RBSDE in (2.16) admits a unique adapted solution $(Y^{s,x}, Z^{s,x}, K^{s,x})$. Moreover $Y^{s,x}$ admits a continuous version, $K^{s,x}$ is continuous and non-decreasing ($K^{s,x}_0 = 0$) and, for all $p \ge 2$, there exists $C_p > 0$ such that
$$
\mathbb{E}\sup_{t\in[0,T]}|Y^{s,x}_t|^p + \mathbb{E}\left(\int_0^T |Z^{s,x}_t|^2\,dt\right)^{p/2} + \mathbb{E}|K^{s,x}_T|^p \le C_p\big(1 + |x|^{p(m+1)}\big). \tag{2.20}
$$
We consider also the penalized version of the RBSDE in (2.16):
$$
\begin{cases}
-dY^{n,s,x}_t = \psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\,dt + n\big(Y^{n,s,x}_t - h(X^{s,x}_t)\big)^-\,dt - Z^{n,s,x}_t\,dW_t, & t \in [0,T],\\
Y^{n,s,x}_T = \phi(X^{s,x}_T).
\end{cases} \tag{2.21}
$$
The same estimate holds for the penalized equation, with constant $C$ independent of $n$.

Proof.
It suffices to notice that, setting
$$
f(t,y,z) := \psi(t,X^{s,x}_t,y,z), \qquad S_t := h(X^{s,x}_t), \qquad \xi := \phi(X^{s,x}_T)
$$
for all $t \in [0,T]$, $y \in \mathbb{R}$, $z \in \Xi^*$, with $X^{s,x}$ the solution of the forward equation in the FBSDE (2.16), by (2.19) the data $f$, $S$, $\xi$ satisfy Hypothesis 2.1, and in particular:
$$
\mathbb{E}\int_0^T |f(t,0,0)|^p\,dt = \mathbb{E}\int_0^T |\psi(t,X^{s,x}_t,0,0)|^p\,dt \le c\big(1 + |x|^{p(m+1)}\big), \tag{2.22}
$$
$$
\mathbb{E}\sup_{t\in[0,T]}|S_t|^{2(p-1)} = \mathbb{E}\sup_{t\in[0,T]}|h(X^{s,x}_t)|^{2(p-1)} \le c\big(1 + |x|^{2(p-1)(m+1)}\big), \tag{2.23}
$$
$$
\mathbb{E}|\xi|^p = \mathbb{E}|\phi(X^{s,x}_T)|^p \le c\big(1 + |x|^{p(m+1)}\big).
$$
So we can apply Proposition 2.3 and Theorem 2.2 to obtain the claim. $\square$
Remark 2.8
Notice that $(Y^{s,x}_t, Z^{s,x}_t)$ is independent of $\mathcal{F}_s$, so, for fixed $0 \le \tau \le s \le T$, the compositions
$$
Y^{s,X^{\tau,x}_s}_t, \qquad Z^{s,X^{\tau,x}_s}_t, \qquad t \in [s,T],
$$
are well defined. Moreover, by uniqueness of the solution of the forward equation in (2.16), we have $X^{\tau,x}_t = X^{s,X^{\tau,x}_s}_t$ and consequently
$$
Y^{s,X^{\tau,x}_s}_t = Y^{\tau,x}_t \quad \mathbb{P}\text{-a.s. for all } t \in [s,T], \qquad Z^{s,X^{\tau,x}_s}_t = Z^{\tau,x}_t \quad \mathbb{P}\text{-a.s. for a.e. } t \in [s,T].
$$
The next theorem is devoted to the local Lipschitz continuity of $Y^{s,x}$ with respect to $x$.

Theorem 2.9
Let Hypotheses 2.4 and 2.6 hold true and let $(Y^{s,x}, Z^{s,x}, K^{s,x})$ be the unique solution of the RBSDE in (2.16). Then there exists a constant $L > 0$ such that, for all $x_1, x_2 \in H$,
$$
|Y^{s,x_1}_s - Y^{s,x_2}_s| \le L\big(1 + |x_1|^{m(m+1)} + |x_2|^{m(m+1)}\big)\,|x_1 - x_2|. \tag{2.24}
$$
Proof. We start by considering a differentiable generator $\psi$, namely for every $t \in [0,T]$ we assume that $\psi(t,\cdot,\cdot,\cdot) \in \mathcal{G}^1(H \times \mathbb{R} \times \Xi^*, \mathbb{R})$. The idea is to prove that, in the case of smooth (differentiable) coefficients, the solution of the penalized equation (2.21) is differentiable with respect to $x$, with derivative bounded uniformly with respect to $n$; in particular we then obtain local Lipschitz continuity of $Y^{n,s,x}_s$ with respect to $x$, which is preserved as $n \to \infty$.

In order to work in a "smooth" framework, in the penalized BSDE (2.21), instead of the penalizing term $n(y-h)^-$, we have to consider a smooth penalizing term. Namely, we consider a function $\gamma \colon \mathbb{R} \to \mathbb{R}$, with $\gamma \in C^\infty_b(\mathbb{R})$, such that
$$
\gamma(y) = 0 \ \text{ for } y \ge 0, \qquad \gamma(y) > 0 \ \text{ for } y < 0, \qquad \gamma(y) = -y \ \text{ for } y \le -1, \qquad \dot\gamma(y) < 0 \ \text{ for } y < 0.
$$
Notice that to construct $\gamma$ it is enough to set $\gamma(y) = \int_0^{-y} \ell(r)\,dr$ with
$$
\ell(r) = 0 \ \text{ for } r \le 0, \qquad \ell(r) > 0 \ \text{ for } r > 0, \qquad \ell(r) = 1 \ \text{ for } r \ge 1, \qquad \int_0^1 \ell(r)\,dr = 1.
$$
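A concrete numerical construction of one admissible pair $(\ell, \gamma)$ (our own sketch; the paper only requires the listed properties) can be obtained by adding, to a smooth transition from $0$ to $1$, an interior hump rescaled so that the integral of $\ell$ over $[0,1]$ equals $1$, which makes $\gamma(y) = -y$ exact for $y \le -1$:

```python
import numpy as np

def smoothstep(r):
    """C^infinity transition: 0 for r <= 0, 1 for r >= 1."""
    r = np.asarray(r, dtype=float)
    a = np.where(r > 0, np.exp(-1.0 / np.maximum(r, 1e-12)), 0.0)
    b = np.where(1 - r > 0, np.exp(-1.0 / np.maximum(1 - r, 1e-12)), 0.0)
    return a / (a + b)

def integ(vals, dx):
    """Trapezoidal integral of uniformly sampled values."""
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

grid = np.linspace(0.0, 1.0, 200001)
dr = grid[1] - grid[0]
base = smoothstep(grid)                                   # 0 at r=0, 1 at r=1, integral 1/2
bump = np.exp(-1.0 / np.maximum(grid * (1 - grid), 1e-12))
bump[0] = bump[-1] = 0.0                                  # compactly supported hump in (0, 1)
bump *= (1.0 - integ(base, dr)) / integ(bump, dr)         # rescale so that ∫_0^1 ell = 1
ell_grid = base + bump                                    # ell on [0,1]; ell ≡ 1 for r >= 1

def gamma(y):
    """gamma(y) = ∫_0^{-y} ell(r) dr: zero for y >= 0, equal to -y for y <= -1."""
    if y >= 0:
        return 0.0
    u = -y
    if u <= 1.0:
        k = int(u / dr)
        return float(integ(ell_grid[:k + 1], dr))
    return float(integ(ell_grid, dr)) + (u - 1.0)         # ell ≡ 1 beyond r = 1

assert gamma(0.5) == 0.0
assert abs(gamma(-2.0) - 2.0) < 1e-6
assert gamma(-0.5) > 0.0
```

The normalization step is the only delicate point: since `smoothstep` alone integrates to $1/2$ on $[0,1]$, the hump must carry the remaining mass, and $\ell$ then exceeds $1$ somewhere inside $(0,1)$, which the required properties do not forbid.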
So we consider the following "smooth" penalized BSDE:
$$
\begin{cases}
-dY^{n,s,x}_t = \psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\,dt + n\gamma\big(Y^{n,s,x}_t - h(X^{s,x}_t)\big)\,dt - Z^{n,s,x}_t\,dW_t, & t \in [0,T],\\
Y^{n,s,x}_T = \phi(X^{s,x}_T),
\end{cases} \tag{2.25}
$$
and we notice that the estimates obtained in Proposition 2.7 are still true for the pair of processes $(Y^{n,s,x}, Z^{n,s,x})$ solving equation (2.25). Notice also that it is still true that $|y|^{p-1}\hat y\,\gamma(y-s) \le |s|^{p-1}\gamma(y-s)$ for all $y, s \in \mathbb{R}$.

By [8] we know that we can differentiate $(Y^{n,s,x}, Z^{n,s,x})$ with respect to $x$, and that $(\nabla_x Y^{n,s,x}, \nabla_x Z^{n,s,x})$ is the solution of the BSDE (to be understood in mild form):
$$
\begin{cases}
-d\nabla_x Y^{n,s,x}_t = \nabla_x\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\nabla_x X^{s,x}_t\,dt + \nabla_y\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\nabla_x Y^{n,s,x}_t\,dt \\
\qquad\qquad + n\dot\gamma\big(Y^{n,s,x}_t - h(X^{s,x}_t)\big)\big(\nabla_x Y^{n,s,x}_t - \nabla h(X^{s,x}_t)\nabla_x X^{s,x}_t\big)\,dt \\
\qquad\qquad + \nabla_z\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\nabla_x Z^{n,s,x}_t\,dt - \nabla_x Z^{n,s,x}_t\,dW_t, & t \in [s,T],\\
\nabla_x Y^{n,s,x}_T = \nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T,
\end{cases}
$$
where (see again [8]) $\nabla_x X^{s,x}$ is the mild solution of the forward equation
$$
\begin{cases}
d\nabla_x X^{s,x}_t = A\nabla_x X^{s,x}_t\,dt + \nabla_x F(t,X^{s,x}_t)\nabla_x X^{s,x}_t\,dt + \nabla_x G(t,X^{s,x}_t)\nabla_x X^{s,x}_t\,dW_t, & t \in [s,T],\\
\nabla_x X^{s,x}_s = I,
\end{cases}
$$
$I \colon H \to H$ being the identity operator on $H$.

We set $\tilde{\mathbb{P}} := \mathcal{E}_T\,\mathbb{P}$, with
$$
\mathcal{E}_T = \exp\left(\int_s^T \nabla_z\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\,dW_t - \frac12\int_s^T |\nabla_z\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)|^2\,dt\right). \tag{2.26}
$$
By the Girsanov theorem, $\tilde{\mathbb{P}}$ is a probability measure equivalent to the original one $\mathbb{P}$ (recall that by Hypothesis 2.6 $\nabla_z\psi$ is bounded), and
$$
\tilde W_\tau = -\int_s^\tau \nabla_z\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\,dt + W_\tau, \qquad s \le \tau \le T,
$$
is a $\tilde{\mathbb{P}}$-cylindrical Wiener process.
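In a scalar toy case with a constant integrand $\theta$ (our own illustration, far from the paper's infinite-dimensional setting), the Girsanov density reduces to $\mathcal{E}_T = \exp(\theta W_T - \tfrac12\theta^2 T)$, and one can check by Monte Carlo that it has unit expectation (as required for $\tilde{\mathbb{P}}$ to be a probability measure) and that it shifts the mean of $W_T$ by $\theta T$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, T, N = 0.5, 1.0, 400_000                  # constant integrand, horizon, sample size

W_T = rng.standard_normal(N) * np.sqrt(T)        # W_T ~ N(0, T)
E_T = np.exp(theta * W_T - 0.5 * theta**2 * T)   # Doleans-Dade exponential (Girsanov density)

assert abs(E_T.mean() - 1.0) < 0.01              # E[E_T] = 1: P~ is a probability measure
assert np.all(E_T > 0)                           # equivalence of P~ and P
# Under P~, W~_t = W_t - theta*t is a Brownian motion, so E~[W_T] = E[E_T * W_T] = theta*T:
assert abs(np.mean(E_T * W_T) - theta * T) < 0.02
```

The boundedness of $\nabla_z\psi$ required by Hypothesis 2.6 is what guarantees, via the Novikov condition, that the analogue of `E_T` is a true density in the general case.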
On $(\Omega, \mathcal{F}, \tilde{\mathbb{P}})$ the pair $(\nabla_x Y^{n,s,x}, \nabla_x Z^{n,s,x})$ solves the following BSDE, for $t \in [s,T]$:
$$
\begin{cases}
-d\nabla_x Y^{n,s,x}_t = \nabla_x\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\nabla_x X^{s,x}_t\,dt + \nabla_y\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\nabla_x Y^{n,s,x}_t\,dt \\
\qquad\qquad + n\dot\gamma\big(Y^{n,s,x}_t - h(X^{s,x}_t)\big)\big(\nabla_x Y^{n,s,x}_t - \nabla h(X^{s,x}_t)\nabla_x X^{s,x}_t\big)\,dt - \nabla_x Z^{n,s,x}_t\,d\tilde W_t,\\
\nabla_x Y^{n,s,x}_T = \nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T.
\end{cases} \tag{2.27}
$$
Multiplying $\nabla_x Y^{n,s,x}_t$ by $\exp\Big\{\int_s^t \big(\nabla_y\psi(\sigma,X^{s,x}_\sigma,Y^{n,s,x}_\sigma,Z^{n,s,x}_\sigma) + n\dot\gamma(Y^{n,s,x}_\sigma - h(X^{s,x}_\sigma))\big)\,d\sigma\Big\}$ and writing the resulting equation at $t = s$, we get:
$$
\begin{aligned}
\nabla_x Y^{n,s,x}_s &= \mathbb{E}\bigg[\mathcal{E}_T\int_s^T \exp\Big\{\int_s^\tau \nabla_y\psi(\sigma,X^{s,x}_\sigma,Y^{n,s,x}_\sigma,Z^{n,s,x}_\sigma) + n\dot\gamma(Y^{n,s,x}_\sigma - h(X^{s,x}_\sigma))\,d\sigma\Big\} \\
&\qquad\qquad \Big(\nabla_x\psi(\tau,X^{s,x}_\tau,Y^{n,s,x}_\tau,Z^{n,s,x}_\tau)\nabla_x X^{s,x}_\tau - n\dot\gamma(Y^{n,s,x}_\tau - h(X^{s,x}_\tau))\nabla h(X^{s,x}_\tau)\nabla_x X^{s,x}_\tau\Big)\,d\tau\bigg] \\
&\quad + \mathbb{E}\Big[\mathcal{E}_T\exp\Big\{\int_s^T \nabla_y\psi(\sigma,X^{s,x}_\sigma,Y^{n,s,x}_\sigma,Z^{n,s,x}_\sigma) + n\dot\gamma(Y^{n,s,x}_\sigma - h(X^{s,x}_\sigma))\,d\sigma\Big\}\nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T\Big], \tag{2.28}
\end{aligned}
$$
so that, since $\dot\gamma \le 0$ and $\nabla_y\psi$ is bounded by Hypothesis 2.6, point 1,
$$
\begin{aligned}
|\nabla_x Y^{n,s,x}_s| &\le c\,\mathbb{E}\Big[\mathcal{E}_T\Big|\nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T + \int_s^T \nabla_x\psi(\tau,X^{s,x}_\tau,Y^{n,s,x}_\tau,Z^{n,s,x}_\tau)\nabla_x X^{s,x}_\tau\,d\tau\Big|\Big] \\
&\quad + c\,\mathbb{E}\Big[\mathcal{E}_T\Big|\int_s^T \exp\Big\{\int_s^\tau n\dot\gamma(Y^{n,s,x}_\sigma - h(X^{s,x}_\sigma))\,d\sigma\Big\}\big(-n\dot\gamma(Y^{n,s,x}_\tau - h(X^{s,x}_\tau))\big)\nabla h(X^{s,x}_\tau)\nabla_x X^{s,x}_\tau\,d\tau\Big|\Big] = I + II.
\end{aligned}
$$
We start by estimating $I$. Here and in the following we again denote by $c$ a constant whose value can vary from line to line and that may depend on $T$, on the coefficients $A, F, G, \psi, h, \phi$ and on $p$, but not on $n$ and $x$. We have
$$
I \le c\,\mathbb{E}\big[\mathcal{E}_T|\nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T|\big] + c\,\mathbb{E}\Big[\mathcal{E}_T\int_s^T |\nabla_x\psi(\tau,X^{s,x}_\tau,Y^{n,s,x}_\tau,Z^{n,s,x}_\tau)\nabla_x X^{s,x}_\tau|\,d\tau\Big].
$$
Taking into account that $\mathbb{E}\,\mathcal{E}_T^p \le c$, by the Hölder inequality, with $p, q, r$ conjugate exponents, $p > 1$, $1 < q < 2$, $qm > 2$ (where $m$ is the same as in Hypothesis 2.6), we get:
$$
\mathbb{E}\big[\mathcal{E}_T|\nabla\phi(X^{s,x}_T)\nabla_x X^{s,x}_T|\big] \le c\big(\mathbb{E}|\nabla\phi(X^{s,x}_T)|^q\big)^{1/q}\big(\mathbb{E}|\nabla_x X^{s,x}_T|^r\big)^{1/r} \le c(1 + |x|^m),
$$
where we have used the estimate on $\nabla_x X^{s,x}_T$ stated in [8], Proposition 3.3. In a similar way we can estimate (for $q > 1$)
$$
\begin{aligned}
\mathbb{E}\Big[\mathcal{E}_T\int_s^T |\nabla_x\psi(\tau,X^{s,x}_\tau,&Y^{n,s,x}_\tau,Z^{n,s,x}_\tau)\nabla_x X^{s,x}_\tau|\,d\tau\Big]
\le c\,\mathbb{E}\Big[\mathcal{E}_T\int_s^T \big(1 + |X^{s,x}_\tau|^m + |Y^{n,s,x}_\tau|^m\big)|Z^{n,s,x}_\tau|\big(1 + |\nabla_x X^{s,x}_\tau|\big)\,d\tau\Big] \\
&\le c\left(\mathbb{E}\Big[\Big(1 + \sup_{\tau\in[s,T]}|X^{s,x}_\tau|^{mq} + \sup_{\tau\in[s,T]}|Y^{n,s,x}_\tau|^{mq}\Big)\Big(1 + \sup_{\tau\in[s,T]}|\nabla_x X^{s,x}_\tau|^q\Big)\Big(\int_s^T |Z^{n,s,x}_\tau|^2\,d\tau\Big)^{q/2}\Big]\right)^{1/q} \\
&\le c\left(\mathbb{E}\Big[1 + \sup_{\tau\in[s,T]}|X^{s,x}_\tau|^{2mq} + \sup_{\tau\in[s,T]}|Y^{n,s,x}_\tau|^{2mq}\Big]\right)^{1/(2q)}\left(\mathbb{E}\Big(\int_s^T |Z^{n,s,x}_\tau|^2\,d\tau\Big)^{q}\right)^{1/(2q)} \le c\big(1 + |x|^{m(m+1)}\big),
\end{aligned}
$$
where we have used estimate (2.20) and Proposition 2.5.

For what concerns $II$, let $p, q$ and $\bar p, \bar q$ be two pairs of conjugate exponents, and set
$$
l(\tau) := -n\dot\gamma\big(Y^{n,s,x}_\tau - h(X^{s,x}_\tau)\big) \ge 0, \qquad \tau \in [s,T].
$$
Then
$$
\begin{aligned}
\mathbb{E}\Big[\mathcal{E}_T\Big|\int_s^T \exp\Big(-\int_s^\tau l_\sigma\,d\sigma\Big)&\nabla h(X^{s,x}_\tau)\nabla_x X^{s,x}_\tau\,l_\tau\,d\tau\Big|\Big]
\le c\left(\mathbb{E}\Big[\Big(1 + \sup_{\tau\in[s,T]}|X^{s,x}_\tau|^m\Big)^{\bar p q}\sup_{\tau\in[s,T]}|\nabla_x X^{s,x}_\tau|^{\bar p q}\Big]\right)^{1/(\bar p q)}\left(\mathbb{E}\Big[\int_s^T \exp\Big(-\int_s^\tau l_\sigma\,d\sigma\Big)l_\tau\,d\tau\Big]^{q\bar q}\right)^{1/(q\bar q)} \\
&\le c(1 + |x|^m)\left(\mathbb{E}\Big[1 - \exp\Big(-\int_s^T l_\sigma\,d\sigma\Big)\Big]^{q\bar q}\right)^{1/(q\bar q)} \le c(1 + |x|^m),
\end{aligned}
$$
where in the last passage we have used that
$$
\int_s^T \exp\Big\{-\int_s^\tau l(\sigma)\,d\sigma\Big\}\,l(\tau)\,d\tau = 1 - \exp\Big\{-\int_s^T l(\sigma)\,d\sigma\Big\} \le 1.
$$
So
$$
|\nabla_x Y^{n,s,x}_s| \le c\big(1 + |x|^{m(m+1)}\big), \tag{2.29}
$$
where $c$ may depend on $T$ and on the coefficients $A, F, G, \psi, h, \phi$, but not on $n$. By (2.29) we get that for all $x, y \in H$
$$
|Y^{n,s,x}_s - Y^{n,s,y}_s| \le c\,|x - y|\big(1 + |x|^{m(m+1)} + |y|^{m(m+1)}\big).
$$
By letting $n\to\infty$ and arguing as in Section 6 of [6], we finally get the desired Lipschitz continuity of $Y^{s,x}_s$:
\[
|Y^{s,x}_s-Y^{s,y}_s|\le c\,|x-y|\,\big(1+|x|^{m(m+1)}+|y|^{m(m+1)}\big),\qquad \forall\, x,y\in H.\tag{2.30}
\]
Finally, we have to remove the differentiability assumption on the coefficients $\psi,h,\phi$ of the reflected BSDE. Since $\psi$ is Lipschitz continuous with respect to $y$ and $z$, and for all $(t,y,z)\in[0,T]\times\mathbb R\times\Xi$ the functions $\psi(t,\cdot,y,z)$, $h$, $\phi$ are locally Lipschitz continuous with respect to $x$, by taking their inf-sup convolutions $(\psi_k,\phi_k,h_k)_{k\ge 1}$ we obtain differentiable functions whose derivative is bounded by the Lipschitz constant in the Lipschitz case, and has the polynomial growth imposed by the locally Lipschitz growth; see e.g. [4] for the notion of inf-sup convolution, and [15], [16] for the use of inf-sup convolutions in the Lipschitz and in the locally Lipschitz case. In particular the growth of the derivatives of the inf-sup convolutions is uniform with respect to $k$: it follows that the Lipschitz and locally Lipschitz constants are uniform with respect to $k$, and this allows us to pass to the limit as $k\to\infty$ preserving the Lipschitz and locally Lipschitz properties. Coming into more detail, we denote by $(Y^{n,k,s,x},Z^{n,k,s,x})$ the solution of the penalized BSDE with regularized coefficients:
\[
\left\{
\begin{array}{l}
-dY^{n,k,s,x}_t=\psi_k\big(t,X^{s,x}_t,Y^{n,k,s,x}_t,Z^{n,k,s,x}_t\big)\,dt+n\gamma\big(Y^{n,k,s,x}_t-h_k(X^{s,x}_t)\big)\,dt-Z^{n,k,s,x}_t\,dW_t,\quad t\in[0,T],\\
Y^{n,k,s,x}_T=\phi_k(X^{s,x}_T).
\end{array}
\right.\tag{2.31}
\]
By the previous calculations we get that for all $x,y\in H$
\[
|Y^{n,k,s,x}_s-Y^{n,k,s,y}_s|\le c\,|x-y|\,\big(1+|x|^{m(m+1)}+|y|^{m(m+1)}\big),
\]
where $c$ depends neither on $n$ nor on $k$. By standard results on BSDEs (see [9]) we know that, as $k\to\infty$,
\[
(Y^{n,k,s,x},Z^{n,k,s,x})\to(Y^{n,s,x},Z^{n,s,x})\quad\text{in } L^p_{\mathcal P}(\Omega;C([0,T]))\times L^p_{\mathcal P}(\Omega\times[0,T];\Xi),
\]
where $(Y^{n,s,x},Z^{n,s,x})$ is the solution of the penalized BSDE (2.25) with smooth coefficients. In particular $Y^{n,k,s,x}_s\to Y^{n,s,x}_s$.
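For the reader's convenience, here is one standard form of such a regularization, the Lasry–Lions sup–inf convolution (a sketch only: the precise definition and normalization used in the argument are those of [4], which may differ from the display below):

```latex
f_{\varepsilon,\delta}(x)\;=\;\sup_{y\in H}\ \inf_{z\in H}
\Big\{\, f(z)\;+\;\tfrac{1}{2\delta}\,|z-y|^{2}\;-\;\tfrac{1}{2\varepsilon}\,|y-x|^{2}\,\Big\},
\qquad 0<\delta<\varepsilon .
```

For $f$ Lipschitz, $f_{\varepsilon,\delta}$ has the same Lipschitz constant, is differentiable with Lipschitz gradient for a suitable choice $\delta=\delta(\varepsilon)$, and converges to $f$ uniformly as $\varepsilon\downarrow 0$; taking, say, $\varepsilon=1/k$ produces a family indexed by $k$ whose derivatives obey bounds uniform in $k$, which is exactly the property used above.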
Finally, proceeding as in [6], we let $n\to\infty$. We have already recalled that $Y^{n,s,x}\uparrow Y^{s,x}$ (for this monotone convergence what matters is the monotonicity of the penalization term corresponding to $K$), and we get the desired Lipschitz continuity of $Y^{s,x}_s$:
\[
|Y^{s,x}_s-Y^{s,y}_s|\le c\,|x-y|\,\big(1+|x|^{m(m+1)}+|y|^{m(m+1)}\big),\qquad\forall\,x,y\in H.\tag{2.32}
\]

Remark 2.10
Notice that if $h$ and $\phi$ are bounded and Lipschitz continuous, if for every $s\in[0,T]$ we have $\sup_{x\in H}|\psi(s,x,0,0)|<\infty$, and if $\psi$, as a function of $x$, is Lipschitz continuous uniformly with respect to the other variables (that is, Hypothesis 2.6, point 3, holds true with $m=0$), then, by repeating the same argument as in Proposition 2.7, we can prove that the processes $Y^{s,x}$, $Z^{s,x}$ are bounded uniformly with respect to $x$, namely
\[
\mathbb E\sup_{t\in[0,T]}|Y^{s,x}_t|^p+\mathbb E\Big(\int_0^T|Z^{s,x}_t|^2\,dt\Big)^{p/2}<C.\tag{2.33}
\]

In this section we consider an obstacle problem for a semilinear PDE in an infinite dimensional Hilbert space $H$ and we solve it, in a suitable sense, by means of reflected BSDEs. An informal description is as follows: we study an obstacle problem of the form
\[
\left\{
\begin{array}{l}
\min\Big(u(t,x)-h(x),\,-\dfrac{\partial u}{\partial t}(t,x)-L_t u(t,x)-\psi\big(t,x,u(t,x),\nabla u(t,x)G(t,x)\big)\Big)=0,\quad t\in[0,T],\ x\in H,\\
u(T,x)=\phi(x),
\end{array}
\right.\tag{3.1}
\]
where $G:[0,T]\times H\to L(\Xi,H)$ and $\nabla u(t,x)G(t,x)$ is the generalized directional gradient of $u$ with respect to $x$; see [11], Section 3, and the following for the definition of generalized gradient. For a function $f:H\to\mathbb R$, the operator $L_t$ is formally defined by
\[
L_t f(x)=\frac12\,\mathrm{Trace}\big(G(t,x)G^*(t,x)\nabla^2 f(x)\big)+\langle Ax,\nabla f(x)\rangle_H+\langle F(t,x),\nabla f(x)\rangle_H,
\]
and it arises as the generator of an appropriate Markov process $X$ in $H$. More precisely, if $X$ is the mild solution of the stochastic differential equation in $H$
\[
\left\{
\begin{array}{l}
dX^{s,x}_t=\big[AX^{s,x}_t+F(t,X^{s,x}_t)\big]\,dt+G(t,X^{s,x}_t)\,dW_t,\quad t\in[s,T],\\
X^{s,x}_s=x,\quad x\in H,
\end{array}
\right.\tag{3.2}
\]
then for $0\le s\le t\le T$ we denote by $P_{s,t}$ the transition semigroup
\[
P_{s,t}[\phi](x)=\mathbb E\,\phi(X^{s,x}_t),
\]
where $\phi:H\to\mathbb R$ is bounded and measurable. Note that $L_t$ is formally the generator of the transition semigroup $(P_{s,t})_{t\in[s,T]}$. This leads us to consider solutions of the obstacle problem (3.1) in the mild sense, as we are going to state.
We observe that, under our assumptions, it is reasonable to expect the function $u$ to be locally Lipschitz, but not to be differentiable. To illustrate this point, we briefly show an example where the value function of a deterministic optimal stopping problem is not differentiable. Consider, as state equation without control,
\[
\left\{
\begin{array}{l}
dX^{s,x}_t=0,\\
X^{s,x}_s=x\in\mathbb R,
\end{array}
\right.
\]
so that $X^{s,x}_t=x$ for every $t\in[s,T]$, and consider the cost functional
\[
J(s,x,\tau)=\phi\big(X^{s,x}_T\big)\chi_{\{\tau=T\}}+h\big(\tau,X^{s,x}_\tau\big)\chi_{\{\tau<T\}}.
\]
The value function is then $v(s,x)=\max\big(\phi(x),\sup_{t\in[s,T)}h(t,x)\big)$, a maximum of functions of $x$, which in general fails to be differentiable even when $\phi$ and $h$ are smooth.

Theorem 3.1 Assume that Hypothesis 2.4 holds and that $u:[0,T]\times H\to\mathbb R$ is a Borel measurable function satisfying, for some $r>0$,
\[
|u(t,x)-u(t,x')|\le c\,(1+|x|+|x'|)^r\,|x-x'|.\tag{3.3}
\]
Then there exists a Borel measurable function $\zeta:[0,T]\times H\to\Xi^*$ with the following properties.

(i) For every $s\in[0,T]$, $x\in H$ and $p\in[2,\infty)$,
\[
\mathbb E\int_s^T|\zeta(\tau,X^{s,x}_\tau)|^p\,d\tau<+\infty.\tag{3.4}
\]

(ii) For $\xi\in\Xi$, $x\in H$ and $0\le s\le T'<T$, the processes $\{u(t,X^{s,x}_t),\ t\in[s,T]\}$ and $W^\xi$ admit a joint quadratic variation on the interval $[s,T']$ and
\[
\big\langle u(\cdot,X^{s,x}_\cdot),W^\xi\big\rangle_{[s,T']}=\int_s^{T'}\zeta(t,X^{s,x}_t)\,\xi\,dt,\qquad \mathbb P\text{-a.s.}
\]

(iii) Moreover there exists a Borel measurable function $\rho:[0,T]\times H\to H^*$ such that for all $x\in H$
\[
\zeta(t,X^{s,x}_t)=\rho(t,X^{s,x}_t)\,G(t,X^{s,x}_t),\qquad \mathbb P\text{-a.s. for a.a. }t\in[s,T].
\]

Proof. The proof is given in [11], Section 4. In that paper it is also noticed (see Remark 3.1 there) that uniqueness holds in the following sense: if $\hat\zeta$ is another function with the stated properties then, for $0\le s\le t\le T$ and $x\in H$, we have $\zeta(t,X^{s,x}_t)=\hat\zeta(t,X^{s,x}_t)$, $\mathbb P$-a.s. for a.a. $t\in[s,T]$.

Definition 3.1 Let $u:[0,T]\times H\to\mathbb R$ be a Borel measurable function satisfying (3.3). The family of all measurable functions $\zeta:[0,T]\times H\to\Xi^*$ satisfying properties (i) and (ii) in Theorem 3.1 will be called the generalized directional gradient of $u$ and denoted by $\widetilde\nabla^G u$.
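As a heuristic consistency check (not part of the original statement), note that when $u$ is smooth in $x$ the generalized directional gradient reduces to the classical object: the Itô formula gives

```latex
d\,u(t,X^{s,x}_t)=(\dots)\,dt+\nabla u(t,X^{s,x}_t)\,G(t,X^{s,x}_t)\,dW_t,
\qquad\text{hence}\qquad
\big\langle u(\cdot,X^{s,x}_\cdot),W^{\xi}\big\rangle_{[s,T']}
=\int_s^{T'}\nabla u(t,X^{s,x}_t)\,G(t,X^{s,x}_t)\,\xi\,dt ,
```

so that $\zeta(t,x)=\nabla u(t,x)G(t,x)$ satisfies property (ii) and belongs to $\widetilde\nabla^G u$; Definition 3.1 extends precisely this object to merely locally Lipschitz functions $u$.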
Having defined the generalized directional gradient, we are in a position to give the precise definition of supersolution for problem (3.1).

Definition 3.2 We say that a Borel measurable function $\bar u:[0,T]\times H\to\mathbb R$ is a mild supersolution of the obstacle problem (3.1) in the sense of the generalized directional gradient if the following holds:

1. for some $C>0$, $r\ge 0$ and for every $s\in[0,T]$, $x,y\in H$,
\[
|\bar u(s,x)-\bar u(s,y)|\le C\,|x-y|\,(1+|x|+|y|)^r,\qquad |\bar u(s,0)|\le C;
\]

2. for every $s\in[0,T]$, $x\in H$, $\bar u(s,x)\ge h(x)$;

3. for all $0\le s\le t\le T$ and $x\in H$,
\[
\bar u(s,x)\ge P_{s,t}[\bar u(t,\cdot)](x)+\int_s^t P_{s,\tau}\big[\psi\big(\tau,\cdot,\bar u(\tau,\cdot),\zeta(\tau,\cdot)\big)\big](x)\,d\tau,\tag{3.5}
\]
where $\zeta$ is an arbitrary element of the generalized gradient $\widetilde\nabla^G\bar u$;

4. $\bar u(T,\cdot)=\phi$.

We are now ready to state the main result of this paper.

Theorem 3.2 Assume that Hypotheses 2.4 and 2.6 hold true. Let us define
\[
u(s,x)=Y^{s,x}_s,\tag{3.6}
\]
where $(Y^{s,x},Z^{s,x})$ is the solution of the reflected BSDE (2.16). Then $u$ is a mild supersolution, in the sense of the generalized directional gradient, of the obstacle problem (3.1). Moreover $u$ is minimal, in the sense that for any supersolution $\bar u$ of (3.1) in the sense of Definition 3.2 it holds that
\[
u(s,x)\le\bar u(s,x),\qquad s\in[0,T],\ x\in H.
\]
Finally, if in addition $\sup_{s\in[0,T],\,x\in H}|\psi(s,x,0,0)|<\infty$ and $\phi$ and $h$ are bounded, then $u$ is also bounded.

Proof.
By Theorem 2.9, defining $u(s,x):=Y^{s,x}_s$, the function $u$ has the regularity required in Definition 3.2, point 1; moreover points 2 and 4 immediately follow since $Y$ is the solution of the RBSDE (2.16). For what concerns point 3 of Definition 3.2, since $Y$ solves the reflected BSDE we get
\[
u(s,x)=Y^{s,x}_t+\int_s^t\psi(\tau,X^{s,x}_\tau,Y^{s,x}_\tau,Z^{s,x}_\tau)\,d\tau+K^{s,x}_t-K^{s,x}_s-\int_s^t Z^{s,x}_\tau\,dW_\tau.\tag{3.7}
\]
Fixed $\xi\in\Xi$, let us consider the joint quadratic variation of both sides of (3.7) with $W^\xi$, where
\[
W^\xi_t:=\int_s^t\langle\xi,dW_\sigma\rangle,\qquad 0\le s\le t\le T.
\]
Proposition 2.1 in [11] and Theorem 3.1 yield that $\widetilde\nabla^G u$ exists and, letting $\zeta\in\widetilde\nabla^G u$, we have
\[
\big\langle u(\cdot,X^{s,x}_\cdot),W^\xi_\cdot\big\rangle_{[s,t]}=\int_s^t\zeta(\sigma,X^{s,x}_\sigma)\,\xi\,d\sigma.
\]
On the other hand, by the Markov property stated in Remark 2.8,
\[
u(t,X^{s,x}_t)=Y^{t,X^{s,x}_t}_t=Y^{s,x}_t,
\]
and since $Y$ solves the RBSDE (2.16) we deduce
\[
\big\langle Y^{s,x}_\cdot,W^\xi_\cdot\big\rangle_{[s,t]}=\int_s^t Z^{s,x}_\sigma\,\xi\,d\sigma.
\]
So, comparing these two expressions for the joint quadratic variation of $u(\cdot,X^{s,x}_\cdot)$ and $W^\xi$, we get
\[
\int_s^t\zeta(\sigma,X^{s,x}_\sigma)\,\xi\,d\sigma=\int_s^t Z^{s,x}_\sigma\,\xi\,d\sigma,\tag{3.8}
\]
$\mathbb P$-a.s. Since both sides of (3.8) are continuous with respect to $t$, it follows that, $\mathbb P$-a.s., they coincide for all $t\in[s,T]$. This implies that $\zeta(t,X^{s,x}_t)=Z^{s,x}_t$, $\mathbb P$-a.s. for a.a. $t\in[s,T]$. Therefore equation (3.7) can be rewritten as
\[
u(s,x)=Y^{s,x}_t+\int_s^t\psi\big(\tau,X^{s,x}_\tau,u(\tau,X^{s,x}_\tau),\zeta(\tau,X^{s,x}_\tau)\big)\,d\tau+K^{s,x}_t-K^{s,x}_s-\int_s^t Z^{s,x}_\tau\,dW_\tau.\tag{3.9}
\]
Taking the conditional expectation $\mathbb E^{\mathcal F_s}$ and using that $K$ is a nondecreasing process, we get
\[
u(s,x)\ge P_{s,t}[u(t,\cdot)](x)+\int_s^t P_{s,\tau}\big[\psi\big(\tau,\cdot,u(\tau,\cdot),\zeta(\tau,\cdot)\big)\big](x)\,d\tau,\tag{3.10}
\]
and we have proved that $u$ is a mild supersolution according to Definition 3.2.

We now have to prove that $u$ is the minimal supersolution. Let $\bar u$ be any supersolution and define $\bar Y^{s,x}_t=\bar u(t,X^{s,x}_t)$.
Then for every $\sigma\in[s,t]$, with $0\le s\le t\le T$, by point 3 of Definition 3.2, with $x$ replaced by the $\mathcal F_\sigma$-measurable random variable $X^{s,x}_\sigma$,
\[
\bar u(\sigma,X^{s,x}_\sigma)\ge\mathbb E^{\mathcal F_\sigma}\bar u\big(t,X^{\sigma,X^{s,x}_\sigma}_t\big)+\mathbb E^{\mathcal F_\sigma}\int_\sigma^t\psi\big(\tau,X^{\sigma,X^{s,x}_\sigma}_\tau,\bar Y^{\sigma,X^{s,x}_\sigma}_\tau,\bar\zeta(\tau,X^{\sigma,X^{s,x}_\sigma}_\tau)\big)\,d\tau,\tag{3.11}
\]
where $\bar\zeta$ is an arbitrary element of $\widetilde\nabla^G\bar u$. So it turns out that
\[
\big(L^{s,x}_\sigma\big)_{\sigma\in[s,T]}:=\Big(-\bar u(\sigma,X^{s,x}_\sigma)-\int_s^\sigma\psi\big(\tau,X^{s,x}_\tau,\bar Y^{s,x}_\tau,\bar\zeta(\tau,X^{s,x}_\tau)\big)\,d\tau\Big)_{\sigma\in[s,T]}
\]
is a submartingale. By Hypothesis 2.6 on $\psi$, by the growth property of $\bar u$ required in Definition 3.2, point 1, by relation (3.4) and finally by Proposition 2.5, we get that $L^{s,x}$ is a uniformly integrable submartingale, so it is of class (D) and the Doob–Meyer decomposition can be applied; see e.g. Definition 4.8 and Theorem 4.10 in Chapter 1 of [13]. So $L^{s,x}$ can be decomposed as
\[
L^{s,x}_\sigma=\bar M^{s,x}_\sigma+\bar K^{s,x}_\sigma,
\]
where $\bar K^{s,x}$ is an integrable nondecreasing process with $\bar K^{s,x}_s=0$ and $\bar M^{s,x}$ is a uniformly integrable martingale. Moreover, see [5], Chapter VII, relation (15.1), since $\mathbb E\sup_{\sigma\in[s,T]}|L^{s,x}_\sigma|^2<\infty$ we have $\bar K^{s,x}_T\in L^2(\Omega)$.

Notice that we are working on a complete probability space endowed with the filtration generated by the Wiener process, so by the martingale representation theorem (see again [13], and [3] for its infinite dimensional version) there exists a process $\bar Z^{s,x}\in L^2_{\mathcal P}(\Omega\times[s,T];L(\Xi,\mathbb R))$ such that
\[
\bar M^{s,x}_\sigma=-\Big(\bar u(s,x)+\int_s^\sigma\bar Z^{s,x}_\tau\,dW_\tau\Big).
\]
We finally get, for all $\sigma\in[s,T]$,
\[
\bar u(s,x)=\bar u(\sigma,X^{s,x}_\sigma)+\int_s^\sigma\psi\big(\tau,X^{s,x}_\tau,\bar Y^{s,x}_\tau,\bar\zeta(\tau,X^{s,x}_\tau)\big)\,d\tau+\bar K^{s,x}_\sigma-\bar K^{s,x}_s-\int_s^\sigma\bar Z^{s,x}_\tau\,dW_\tau,\tag{3.12}
\]
that is, for all $0\le s\le t\le T$,
\[
\bar Y^{s,x}_t=\bar Y^{s,x}_T+\int_t^T\psi\big(\tau,X^{s,x}_\tau,\bar Y^{s,x}_\tau,\bar\zeta(\tau,X^{s,x}_\tau)\big)\,d\tau+\bar K^{s,x}_T-\bar K^{s,x}_t-\int_t^T\bar Z^{s,x}_\tau\,dW_\tau.\tag{3.13}
\]
Finally, we have to identify $\bar\zeta(\tau,X^{s,x}_\tau)$ with $\bar Z^{s,x}_\tau$, $\mathbb P$-a.s. for a.a. $\tau\in[s,T]$. To this aim, for $\xi\in\Xi$, let us consider the joint quadratic variation of both sides of (3.12) with $W^\xi$.
Notice that the finite variation term $\bar K$ does not give any contribution to the joint quadratic variation with $W^\xi$; so Proposition 2.1 in [11] and Theorem 3.1 yield, for $s\le\sigma<T$ and $\bar\zeta\in\widetilde\nabla^G\bar u$,
\[
\int_s^\sigma\bar\zeta(\tau,X^{s,x}_\tau)\,\xi\,d\tau=\int_s^\sigma\bar Z^{s,x}_\tau\,\xi\,d\tau,\tag{3.14}
\]
$\mathbb P$-a.s. Since both sides of (3.14) are continuous with respect to $\sigma$, it follows that, $\mathbb P$-a.s., they coincide for all $\sigma\in[s,T]$. This implies that $\bar\zeta(\sigma,X^{s,x}_\sigma)=\bar Z^{s,x}_\sigma$, $\mathbb P$-a.s. for a.a. $\sigma\in[s,T]$. So, defining $\bar Y^{s,x}_s:=\bar u(s,x)$ and $\bar Y^{s,x}:=\bar u(\cdot,X^{s,x}_\cdot)$, the couple of processes $(\bar Y^{s,x},\bar Z^{s,x})$ solves the following problem:
\[
\left\{
\begin{array}{l}
-d\bar Y^{s,x}_t=\psi(t,X^{s,x}_t,\bar Y^{s,x}_t,\bar Z^{s,x}_t)\,dt+d\bar K^{s,x}_t-\bar Z^{s,x}_t\,dW_t,\quad t\in[s,T],\\
\bar Y^{s,x}_T=\phi(X^{s,x}_T),\qquad \bar Y^{s,x}_t\ge h(X^{s,x}_t),
\end{array}
\right.\tag{3.15}
\]
which is "almost" a reflected BSDE: what is lacking is the requirement that $\bar K^{s,x}$ be the minimal increasing process, namely the condition
\[
\int_s^T\big(\bar Y^{s,x}_t-h(X^{s,x}_t)\big)\,d\bar K^{s,x}_t=0
\]
is not required. Now we have to compare $\bar Y^{s,x}$ with $Y^{s,x}$. To this aim, extending a procedure used in [2], we compare $\bar Y^{s,x}$ with the penalized solution $Y^{n,s,x}$ of equation (2.21), which we rewrite in integral form, for $t\in[s,T]$:
\[
Y^{n,s,x}_t=\phi(X^{s,x}_T)+\int_t^T\psi(\tau,X^{s,x}_\tau,Y^{n,s,x}_\tau,Z^{n,s,x}_\tau)\,d\tau+\int_t^T n\,\big(Y^{n,s,x}_\tau-h(X^{s,x}_\tau)\big)^-\,d\tau-\int_t^T Z^{n,s,x}_\tau\,dW_\tau.
\]
(3.16)

Applying the Itô formula to the process $e^{n(T-t)}Y^{n,s,x}_t$ we get
\[
\left\{
\begin{array}{l}
-d\big(e^{n(T-t)}Y^{n,s,x}_t\big)=e^{n(T-t)}\psi(t,X^{s,x}_t,Y^{n,s,x}_t,Z^{n,s,x}_t)\,dt+n\,e^{n(T-t)}\big(Y^{n,s,x}_t\vee h(X^{s,x}_t)\big)\,dt-e^{n(T-t)}Z^{n,s,x}_t\,dW_t,\quad t\in[s,T],\\
Y^{n,s,x}_T=\phi(X^{s,x}_T).
\end{array}
\right.\tag{3.17}
\]
Applying the Itô formula to the process $e^{n(T-t)}\bar Y^{s,x}_t$ we get
\[
\left\{
\begin{array}{l}
-d\big(e^{n(T-t)}\bar Y^{s,x}_t\big)=n\,e^{n(T-t)}\bar Y^{s,x}_t\,dt+e^{n(T-t)}\psi(t,X^{s,x}_t,\bar Y^{s,x}_t,\bar Z^{s,x}_t)\,dt+e^{n(T-t)}\,d\bar K^{s,x}_t-e^{n(T-t)}\bar Z^{s,x}_t\,dW_t,\quad t\in[s,T],\\
\bar Y^{s,x}_T=\phi(X^{s,x}_T).
\end{array}
\right.\tag{3.18}
\]
Notice that in (3.18) we can replace $\bar Y^{s,x}_t$ by $\bar Y^{s,x}_t\vee h(X^{s,x}_t)$ (recall that, since $\bar u$ is a supersolution of the obstacle problem (3.1), it holds that $\bar u\ge h$). Assume for a moment the following lemma.

Lemma 3.3 Let $f_i:\Omega\times[0,T]\times\mathbb R\times\Xi\to\mathbb R$, $i=1,2$, satisfy Hypothesis 2.1 with $p=2$, fix $\xi\in L^2_{\mathcal F_T}(\Omega)$ and let $K$ be a progressively measurable nondecreasing process with $\mathbb E\,K_T^2<\infty$. If $(Y^1,Z^1)$ and $(Y^2,Z^2)$, with $Y^i\in L^2_{\mathcal P}(\Omega;C([0,T]))$ and $Z^i\in L^2_{\mathcal P}(\Omega\times[0,T];\Xi)$, $i=1,2$, are the solutions of the following equations of backward type:
\[
\left\{\begin{array}{l}-dY^1_t=f_1(t,Y^1_t,Z^1_t)\,dt+dK_t-Z^1_t\,dW_t,\quad t\in[0,T],\\ Y^1_T=\xi,\end{array}\right.\tag{3.19}
\]
\[
\left\{\begin{array}{l}-dY^2_t=f_2(t,Y^2_t,Z^2_t)\,dt-Z^2_t\,dW_t,\quad t\in[0,T],\\ Y^2_T=\xi,\end{array}\right.\tag{3.20}
\]
and
\[
\delta f_t:=f_1(t,Y^2_t,Z^2_t)-f_2(t,Y^2_t,Z^2_t)\ge 0,\qquad d\mathbb P\times dt\text{-a.s.},\tag{3.21}
\]
then $Y^1_t\ge Y^2_t$, $\mathbb P$-almost surely, for every $t\in[0,T]$.

By applying Lemma 3.3 to the BSDEs (3.17) and (3.18) we get a comparison between the processes $\big(e^{n(T-t)}Y^{n,s,x}_t\big)_{t\in[s,T]}$ and $\big(e^{n(T-t)}\bar Y^{s,x}_t\big)_{t\in[s,T]}$, namely
\[
e^{n(T-t)}\bar Y^{s,x}_t\ge e^{n(T-t)}Y^{n,s,x}_t\tag{3.22}
\]
almost surely and for every $t$, and consequently
\[
\bar Y^{s,x}_t\ge Y^{n,s,x}_t.\tag{3.23}
\]
Now we let $n\to\infty$: by [6], Section 6, $Y^{n,s,x}_t\uparrow Y^{s,x}_t$ for every $s\le t\le T$, $\mathbb P$-a.s. So, taking $t=s$ in (3.23), we finally get
\[
\bar u(s,x)\ge u(s,x)\tag{3.24}
\]
for every supersolution $\bar u$ of the obstacle problem (3.1).
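The monotone penalized approximation $Y^n\uparrow Y$ used in this argument can be illustrated numerically in a drastically simplified situation (a sketch only, not the setting of the paper: here $H=\mathbb R$, $G\equiv 0$ and $\psi\equiv 0$, so the reflected equation reduces to the Snell envelope of the obstacle along a deterministic path, and all numerical choices below are illustrative):

```python
import numpy as np

def penalized_value(phi_T, h_path, dt, n):
    """Backward (implicit) Euler scheme for the penalized equation
    -dY = n (Y - h)^- dt, Y(T) = phi_T, returning Y at the initial time."""
    y = phi_T
    for h in reversed(h_path[:-1]):
        # implicit penalization step: when y < h the update solves
        # y_k = y_{k+1} + dt * n * (h - y_k), a convex combination of
        # y_{k+1} and h; when y >= h nothing happens.
        cand = (y + dt * n * h) / (1.0 + dt * n)
        y = max(y, cand)
    return y

T, steps = 1.0, 1000
dt = T / steps
t = np.linspace(0.0, T, steps + 1)
h_path = np.sin(4.0 * t)            # obstacle h along the (frozen) path
phi_T = 0.2                         # terminal value phi(X_T)

# with zero driver and zero noise the limit is the Snell envelope at time 0
exact = max(phi_T, h_path.max())
for n in [1.0, 10.0, 100.0, 10000.0]:
    print(n, penalized_value(phi_T, h_path, dt, n))
```

The printed values increase monotonically in $n$ towards the Snell-envelope value, mirroring the monotone convergence $Y^{n,s,x}_t\uparrow Y^{s,x}_t$ invoked above.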
So the minimality of $u$ is proved: the minimal mild supersolution of the obstacle problem (3.1) is given by formula (3.6), and the remaining properties follow from estimate (2.20), which passes to the limit as $n\to\infty$ as stated in Proposition 2.7, bounding the solution of the RBSDE in terms of the growth of $\psi$, $h$ and $\phi$. In order to complete the proof of Theorem 3.2, it remains to prove Lemma 3.3.

Proof of Lemma 3.3. We adapt the proof of the classical comparison theorem for BSDEs given in [7], Theorem 2.2, to equations (3.19) and (3.20). Denoting
\[
\Delta_y f_t=\frac{f_1(t,Y^1_t,Z^1_t)-f_1(t,Y^2_t,Z^1_t)}{Y^1_t-Y^2_t}\ \text{ if }Y^1_t-Y^2_t\ne 0,\qquad \Delta_y f_t=0\ \text{ otherwise},
\]
\[
\Delta_z f_t=\frac{f_1(t,Y^2_t,Z^1_t)-f_1(t,Y^2_t,Z^2_t)}{|Z^1_t-Z^2_t|^2}\,\big(Z^1_t-Z^2_t\big)\ \text{ if }Z^1_t-Z^2_t\ne 0,\qquad \Delta_z f_t=0\ \text{ otherwise},
\]
$\delta f$ as defined in (3.21), $\delta Y_t=Y^1_t-Y^2_t$ and $\delta Z_t=Z^1_t-Z^2_t$, we get
\[
\left\{\begin{array}{l}-d\,\delta Y_t=\Delta_y f_t\,\delta Y_t\,dt+(\Delta_z f_t)^*\,\delta Z_t\,dt+\delta f_t\,dt+dK_t-\delta Z_t\,dW_t,\quad t\in[0,T],\\ \delta Y_T=0.\end{array}\right.\tag{3.25}
\]
We notice that $\Delta_y f_t$ and $\Delta_z f_t$ are bounded and that $\delta f\in L^2_{\mathcal P}(\Omega\times[0,T];\mathbb R)$. Multiplying $\delta Y_t$ by $\exp\big(\int_0^t\Delta_y f_\tau\,d\tau\big)$ and then applying the Girsanov theorem, we obtain
\[
\delta Y_t=\mathbb E^{\mathcal F_t}\left(\rho_{t,T}\left[\int_t^T\exp\Big(\int_t^s\Delta_y f_\tau\,d\tau\Big)\,dK_s+\int_t^T\exp\Big(\int_t^s\Delta_y f_\tau\,d\tau\Big)\,\delta f_s\,ds\right]\right),\tag{3.26}
\]
where $\rho_{t,T}$ is the Girsanov density
\[
\rho_{t,T}=\exp\Big(\int_t^T(\Delta_z f_s)^*\,dW_s-\frac12\int_t^T|\Delta_z f_s|^2\,ds\Big).
\]
The claim follows immediately from (3.26), $K$ being nondecreasing and $\delta f$ nonnegative.

4 The Optimal Control-Stopping problem

An admissible control system is a set
\[
\mathcal S=\big(\Omega^{\mathcal S},\mathcal F^{\mathcal S},(\mathcal F^{\mathcal S}_t)_{t\ge 0},\mathbb P^{\mathcal S},(W^{\mathcal S}_t)_{t\ge 0}\big),
\]
where $(\Omega^{\mathcal S},\mathcal F^{\mathcal S},(\mathcal F^{\mathcal S}_t)_{t\ge 0},\mathbb P^{\mathcal S})$ is a complete probability space endowed with a filtration satisfying the usual assumptions and $(W^{\mathcal S}_t)_{t\ge 0}$ is a cylindrical Wiener process in $\Xi$. Fixed a closed subset $U$ of a normed space, an admissible control in the setting $\mathcal S$ is any $(\mathcal F^{\mathcal S}_t)$-predictable process $\alpha:\Omega^{\mathcal S}\times[0,T]\to U$.
The set of all admissible controls will be denoted by $\mathcal U_{\mathcal S}$. We fix a bounded and continuous function $R:H\times U\to\Xi$ such that
\[
|R(x,\alpha)-R(x',\alpha)|\le |x-x'|,\qquad\forall\,\alpha\in U,\ x,x'\in H.\tag{4.1}
\]
Given an admissible setting $\mathcal S$, an admissible control $\alpha\in\mathcal U_{\mathcal S}$ and fixed $x\in H$, $s\in[0,T]$, we denote by $X^{\alpha,s,x}$ the solution of the following stochastic differential equation in the Hilbert space $H$:
\[
\left\{\begin{array}{l}
dX^{\alpha,s,x}_t=AX^{\alpha,s,x}_t\,dt+F(t,X^{\alpha,s,x}_t)\,dt+G(t,X^{\alpha,s,x}_t)\big(R(X^{\alpha,s,x}_t,\alpha_t)\,dt+dW^{\mathcal S}_t\big),\quad t\in[s,T],\\
X^{\alpha,s,x}_s=x\in H.
\end{array}\right.\tag{4.2}
\]
Moreover, given $l:[0,T]\times H\times U\to\mathbb R$, we introduce the cost functional
\[
J(s,x,\tau,\alpha)=\mathbb E\int_s^\tau l(r,X^{\alpha,s,x}_r,\alpha_r)\,dr+\mathbb E\big[\phi\big(X^{\alpha,s,x}_T\big)\chi_{\{\tau=T\}}\big]+\mathbb E\big[h\big(\tau,X^{\alpha,s,x}_\tau\big)\chi_{\{\tau<T\}}\big],\tag{4.3}
\]
and we define the Hamiltonian function
\[
\psi(s,x,z)=\sup_{\alpha\in U}\big\{zR(x,\alpha)+l(s,x,\alpha)\big\},\qquad s\in[0,T],\ x\in H,\ z\in\Xi^*.\tag{4.4}
\]
Under our standing assumptions $l$ has polynomial growth in $x$, uniformly with respect to the control, see [8]. Consequently $J(s,x,\tau,\alpha)$ is a well defined real number for all $\alpha\in\mathcal U_{\mathcal S}$ and all $\{\mathcal F^{\mathcal S}_t\}_t$-stopping times $\tau\le T$. We also notice that $X^{\alpha,s,x}$ is adapted to the filtration generated by $(W^{\mathcal S}_t)$.

By the Girsanov theorem, there exists a probability measure $\mathbb P^{\mathcal S,\alpha}$ such that the process
\[
W^{\mathcal S,\alpha}_t:=W^{\mathcal S}_t+\int_s^t R(X^{\alpha,s,x}_r,\alpha_r)\,dr,\qquad t\ge s,
\]
is a cylindrical $\mathbb P^{\mathcal S,\alpha}$-Wiener process in $\Xi$. We denote by $(\mathcal F^{\mathcal S,\alpha}_t)_{t\ge s}$ its natural filtration, augmented in the usual way. $X^{\alpha,s,x}$ satisfies the following equation:
\[
\left\{\begin{array}{l}
dX^{\alpha,s,x}_t=AX^{\alpha,s,x}_t\,dt+F(t,X^{\alpha,s,x}_t)\,dt+G(t,X^{\alpha,s,x}_t)\,dW^{\mathcal S,\alpha}_t,\quad t\in[s,T],\\
X^{\alpha,s,x}_s=x.
\end{array}\right.\tag{4.5}
\]
Consequently (notice that the above equation enjoys strong existence, in the probabilistic sense, and pathwise uniqueness) $X^{\alpha,s,x}$ turns out to be adapted to $(\mathcal F^{\mathcal S,\alpha}_t)_{t\ge s}$.
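In a one-dimensional toy version of (4.2)-(4.3) the cost of a fixed control and stopping time can be estimated by plain Euler-Maruyama simulation and Monte Carlo averaging (a sketch only: the coefficients below are illustrative stand-ins, not those of the paper, and the stopping time is a fixed deterministic one):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional stand-ins for the data of (4.2)-(4.3); all of these are
# illustrative assumptions, not the coefficients of the paper.
A, s, T, x0 = -1.0, 0.0, 1.0, 0.5
F = lambda t, x: 0.1 * np.sin(x)           # drift perturbation
G = lambda t, x: 0.3                       # (constant) diffusion coefficient
R = lambda x, a: np.tanh(x) * a            # bounded, Lipschitz in x
l = lambda t, x, a: x ** 2 + 0.5 * a ** 2  # running cost
phi = lambda x: np.abs(x)                  # terminal cost
h = lambda t, x: x ** 2 + 0.1              # stopping cost / obstacle

def cost_J(alpha, tau, n_paths=20000, n_steps=200):
    """Monte Carlo estimate of J(s, x0, tau, alpha) for a constant control
    alpha and a deterministic stopping time tau in [s, T], using
    Euler-Maruyama for the controlled state equation."""
    dt = (T - s) / n_steps
    k_tau = int(round((tau - s) / dt))
    X = np.full(n_paths, x0)
    X_tau = X.copy()                       # state at the stopping time
    running = np.zeros(n_paths)
    for k in range(n_steps):
        t = s + k * dt
        if k < k_tau:
            running += l(t, X, alpha) * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + (A * X + F(t, X) + G(t, X) * R(X, alpha)) * dt + G(t, X) * dW
        if k + 1 == k_tau:
            X_tau = X.copy()
    if k_tau == n_steps:                   # tau = T: pay the terminal cost
        return running.mean() + phi(X).mean()
    return running.mean() + h(tau, X_tau).mean()

print(cost_J(alpha=0.0, tau=T))
print(cost_J(alpha=0.0, tau=0.5))
```

Comparing such estimates over a grid of controls and stopping times gives a crude lower bound for the optimal value characterized in the next results.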
In the probability space $\big(\Omega^{\mathcal S},\mathcal F^{\mathcal S},(\mathcal F^{\mathcal S,\alpha}_t)_{t\ge 0},\mathbb P^{\mathcal S,\alpha}\big)$ we consider the solution $(\widetilde Y^{s,x},\widetilde Z^{s,x},\widetilde K^{s,x})$ of the following reflected backward stochastic differential equation:
\[
\left\{\begin{array}{l}
-d\widetilde Y^{s,x}_t=\psi(t,X^{\alpha,s,x}_t,\widetilde Z^{s,x}_t)\,dt+d\widetilde K^{s,x}_t-\widetilde Z^{s,x}_t\,dW^{\mathcal S,\alpha}_t,\quad t\in[s,T],\\
\widetilde Y^{s,x}_T=\phi(X^{\alpha,s,x}_T),\\
\widetilde Y^{s,x}_t\ge h(t,X^{\alpha,s,x}_t),\qquad \displaystyle\int_s^T\big(\widetilde Y^{s,x}_t-h(t,X^{\alpha,s,x}_t)\big)\,d\widetilde K^{s,x}_t=0.
\end{array}\right.\tag{4.6}
\]
We omit the dependence on the admissible setting $\mathcal S$ and on the admissible control $\alpha$ since the law of $(\widetilde Y^{s,x},\widetilde Z^{s,x},\widetilde K^{s,x})$ is uniquely determined by $A$, $F$, $G$, $x$, $\psi$, $h$ and $\phi$, and does not depend on the probability space nor on the Wiener process; in particular $\widetilde Y^{s,x}_s$ is a real number that does not depend on $\mathcal S$ and on $\alpha$. We argue as in [6], Proposition 2.3. Rewriting (4.6) in terms of the original noise $W^{\mathcal S}$ and integrating between $s$ and any $(\mathcal F^{\mathcal S}_t)$-stopping time $\tau$, we get that, $\mathbb P^{\mathcal S,\alpha}$-a.s. and consequently $\mathbb P^{\mathcal S}$-a.s.,
\[
\widetilde Y^{s,x}_s=\widetilde Y^{s,x}_\tau+\int_s^\tau\psi(r,X^{\alpha,s,x}_r,\widetilde Z^{s,x}_r)\,dr+\widetilde K^{s,x}_\tau-\widetilde K^{s,x}_s-\int_s^\tau\widetilde Z^{s,x}_r\,dW^{\mathcal S}_r-\int_s^\tau\widetilde Z^{s,x}_r\,R(X^{\alpha,s,x}_r,\alpha_r)\,dr.
\]
Noticing that $\big(\int_s^t\widetilde Z^{s,x}_r\,dW^{\mathcal S}_r\big)_{t\ge s}$ is a $\mathbb P^{\mathcal S}$-martingale and that $\widetilde Y^{s,x}_r\ge h(r,X^{\alpha,s,x}_r)$, by computing the expectation with respect to $\mathbb P^{\mathcal S}$ we get
\[
\widetilde Y^{s,x}_s\ge \mathbb E\int_s^\tau\Big[\psi(r,X^{\alpha,s,x}_r,\widetilde Z^{s,x}_r)-\widetilde Z^{s,x}_r\,R(X^{\alpha,s,x}_r,\alpha_r)\Big]\,dr+\mathbb E\big[\widetilde K^{s,x}_\tau-\widetilde K^{s,x}_s\big]+\mathbb E\big[h(\tau,X^{\alpha,s,x}_\tau)\chi_{\{\tau<T\}}\big]+\mathbb E\big[\phi(X^{\alpha,s,x}_T)\chi_{\{\tau=T\}}\big].\tag{4.7}
\]

Proposition 4.2 For every admissible setting $\mathcal S$ and every admissible control $\alpha\in\mathcal U_{\mathcal S}$ we have
\[
J(s,x,\tau,\alpha)\le\widetilde Y^{s,x}_s;
\]
moreover equality holds if and only if
\[
\psi(r,X^{\alpha,s,x}_r,\widetilde Z^{s,x}_r)-l(r,X^{\alpha,s,x}_r,\alpha_r)-\widetilde Z^{s,x}_r\,R(X^{\alpha,s,x}_r,\alpha_r)=0,\quad \mathbb P\text{-a.s. for a.e. }r\in[s,\tau],\tag{4.8}
\]
\[
\widetilde K^{s,x}_\tau-\widetilde K^{s,x}_s=0,\quad\mathbb P\text{-a.s.},\tag{4.9}
\]
\[
\widetilde Y^{s,x}_\tau\,\chi_{\{\tau<T\}}=h(\tau,X^{\alpha,s,x}_\tau)\,\chi_{\{\tau<T\}},\quad\mathbb P\text{-a.s.}\tag{4.10}
\]
Fixed an admissible setting $\mathcal S$ and an admissible control $\alpha\in\mathcal U_{\mathcal S}$, let $\bar\tau$ be defined as
\[
\bar\tau=\inf\big\{r\in[s,T]:\ \widetilde Y^{s,x}_r=h(r,X^{\alpha,s,x}_r)\big\}\wedge T.
\]
(4.11)

The condition $\int_s^T\big(\widetilde Y^{s,x}_t-h(t,X^{\alpha,s,x}_t)\big)\,d\widetilde K^{s,x}_t=0$, together with the continuity and monotonicity of $\widetilde K$, implies that $\widetilde K^{s,x}_{\bar\tau}-\widetilde K^{s,x}_s=0$. Moreover (4.10) follows by definition. Consequently we have
\[
\widetilde Y^{s,x}_s=J(s,x,\bar\tau,\alpha)+\mathbb E\int_s^{\bar\tau}\Big[\psi(r,X^{\alpha,s,x}_r,\widetilde Z^{s,x}_r)-l(r,X^{\alpha,s,x}_r,\alpha_r)-\widetilde Z^{s,x}_r\,R(X^{\alpha,s,x}_r,\alpha_r)\Big]\,dr.\tag{4.12}
\]
Taking into account equations (4.5), (4.6) and Theorem 3.2, the above results can be reformulated as follows.

Corollary 4.3 Let $u$ be the minimal mild supersolution of the obstacle problem and let $\zeta$ be any element of its generalized gradient. Given any admissible setting $\mathcal S$ and any admissible control $\alpha\in\mathcal U_{\mathcal S}$ we have
\[
J(s,x,\tau,\alpha)\le u(s,x);
\]
moreover equality holds if and only if
\[
\psi\big(r,X^{\alpha,s,x}_r,\zeta(r,X^{\alpha,s,x}_r)\big)-l(r,X^{\alpha,s,x}_r,\alpha_r)-\zeta(r,X^{\alpha,s,x}_r)\,R(X^{\alpha,s,x}_r,\alpha_r)=0,\quad\mathbb P\text{-a.s. for a.e. }r\in[s,\tau],
\]
\[
\widetilde K^{s,x}_\tau-\widetilde K^{s,x}_s=0,\quad\mathbb P\text{-a.s.},\qquad
u(\tau,X^{\alpha,s,x}_\tau)\,\chi_{\{\tau<T\}}=h(\tau,X^{\alpha,s,x}_\tau)\,\chi_{\{\tau<T\}},\quad\mathbb P\text{-a.s.}\tag{4.13}
\]

Hypothesis 4.4 The supremum in the definition (4.4) of the Hamiltonian is attained for all $s\in[0,T]$, $x\in H$ and $z\in\Xi^*$; that is, if we define
\[
\Gamma(s,x,z)=\big\{\alpha\in U:\ zR(x,\alpha)+l(s,x,\alpha)=\psi(s,x,z)\big\},\tag{4.14}
\]
then $\Gamma(s,x,z)\ne\emptyset$ for every $s\in[0,T]$, every $x\in H$ and every $z\in\Xi^*$.

Remark 4.5 By [1] (see Theorems 8.2.10 and 8.2.11), under the above assumption $\Gamma$ always admits a measurable selection, i.e. there exists a measurable function $\gamma:[0,T]\times H\times\Xi^*\to U$ with $\gamma(s,x,z)\in\Gamma(s,x,z)$ for every $s\in[0,T]$, every $x\in H$ and every $z\in\Xi^*$. Moreover we notice that if $U$ is compact then Hypothesis 4.4 always holds.

Theorem 4.6 Assume Hypothesis 4.4 and fix a measurable selection $\gamma$ of $\Gamma$, $s\in[0,T]$, $x\in H$ and an element $\zeta$ of the generalized gradient of the minimal supersolution $u$ of the obstacle problem (3.1). Then there exists at least one admissible setting $\bar{\mathcal S}$ in which the closed loop equation
\[
\left\{\begin{array}{l}
d\bar X_t=A\bar X_t\,dt+F(t,\bar X_t)\,dt+G(t,\bar X_t)\big[R\big(\bar X_t,\gamma(t,\bar X_t,\zeta(t,\bar X_t))\big)\,dt+dW^{\bar{\mathcal S}}_t\big],\quad t\in[s,T],\\
\bar X_s=x,
\end{array}\right.\tag{4.15}
\]
admits a mild solution.

Proof.
We fix any admissible setting $\mathcal S=\big(\Omega^{\mathcal S},\mathcal F^{\mathcal S},(\mathcal F^{\mathcal S}_t)_{t\ge 0},\mathbb P^{\mathcal S},(W^{\mathcal S}_t)_{t\ge 0}\big)$ and consider the uncontrolled forward SDE
\[
\left\{\begin{array}{l}
dX_t=AX_t\,dt+F(t,X_t)\,dt+G(t,X_t)\,dW^{\mathcal S}_t,\quad t\in[s,T],\\
X_s=x.
\end{array}\right.\tag{4.16}
\]
By the Girsanov theorem there exists a probability measure $\hat{\mathbb P}$ such that the process
\[
\hat W_t:=W^{\mathcal S}_t-\int_s^t R\big(X^{s,x}_r,\gamma(r,X^{s,x}_r,\zeta(r,X^{s,x}_r))\big)\,dr,\qquad t\ge s,
\]
is a cylindrical $\hat{\mathbb P}$-Wiener process in $\Xi$. We denote by $(\hat{\mathcal F}_t)_{t\ge s}$ its natural filtration, augmented in the usual way. Clearly $X$ solves
\[
\left\{\begin{array}{l}
dX_t=AX_t\,dt+F(t,X_t)\,dt+G(t,X_t)\big[R\big(X_t,\gamma(t,X_t,\zeta(t,X_t))\big)\,dt+d\hat W_t\big],\quad t\in[s,T],\\
X_s=x,
\end{array}\right.\tag{4.17}
\]
and $\big(\Omega^{\mathcal S},\mathcal F^{\mathcal S},(\hat{\mathcal F}_t)_{t\ge 0},\hat{\mathbb P},(\hat W_t)_{t\ge 0}\big)$ is the desired admissible system.

We finally get the following.

Theorem 4.7 Assume Hypothesis 4.4 and fix a measurable selection $\gamma$ of $\Gamma$, $s\in[0,T]$, $x\in H$ and an element $\zeta$ of the generalized gradient of the minimal supersolution $u$ of the obstacle problem (3.1). Moreover let $\bar{\mathcal S}$ be an admissible setting in which the closed loop equation (4.15) admits a mild solution. Then there exist $\bar\alpha\in\mathcal U_{\bar{\mathcal S}}$ and an $(\mathcal F^{\bar{\mathcal S}}_t)$-stopping time $\bar\tau$ for which
\[
\widetilde Y^{s,x}_s=u(s,x)=J(s,x,\bar\tau,\bar\alpha).
\]
Proof. Just let $\bar X$ be the mild solution of equation (4.15) and define $\bar\alpha_t=\gamma(t,\bar X_t,\zeta(t,\bar X_t))$; clearly $\bar X=X^{\bar\alpha,s,x}$ and relation (4.13) holds. Thus, by Corollary 4.3, it is enough to choose
\[
\bar\tau=\inf\big\{r\in[s,T]:\ u(r,\bar X_r)=h(r,\bar X_r)\big\}\wedge T.
\]

References

[1] J. P. Aubin, H. Frankowska, Set-valued analysis, Systems & Control: Foundations & Applications, Vol. 2, Birkhäuser Boston Inc., Boston, MA, 1990.

[2] A. Bensoussan, Stochastic control by functional analysis methods, Studies in Mathematics and its Applications, 11, North-Holland Publishing Co., Amsterdam-New York, 1982.

[3] G. Da Prato, J. Zabczyk, Stochastic equations in infinite dimensions, Encyclopedia of Mathematics and its Applications, 44, Cambridge University Press, 1992.

[4] G. Da Prato, J.
Zabczyk, Second order partial differential equations in Hilbert spaces, London Mathematical Society Lecture Note Series, 293, Cambridge University Press, Cambridge, 2002.

[5] C. Dellacherie, P. A. Meyer, Probabilities and Potential B: Theory of Martingales, North-Holland, Amsterdam, 1982.

[6] N. El Karoui, C. Kapoudjian, E. Pardoux, S. Peng, M. C. Quenez, Reflected solutions of backward SDE's, and related obstacle problems for PDE's, Ann. Probab. 25 (1997), no. 2, 702-737.

[7] N. El Karoui, S. Peng, M. C. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1997), 1-71.

[8] M. Fuhrman, G. Tessitore, Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control, Ann. Probab. 30 (2002), 1397-1465.

[9] M. Fuhrman, G. Tessitore, The Bismut-Elworthy formula for backward SDEs and applications to nonlinear Kolmogorov equations and control in infinite dimensional spaces, Stoch. Stoch. Rep. (2002), no. 1-2, 429-464.

[10] M. Fuhrman, G. Tessitore, Infinite horizon backward stochastic differential equations and elliptic equations in Hilbert spaces, Ann. Probab. 32 (2004), 607-660.

[11] M. Fuhrman, G. Tessitore, Generalized directional gradients, backward stochastic differential equations and mild solutions of semilinear parabolic equations, Appl. Math. Optim. 51 (2005), no. 3, 279-332.

[12] Y. Hu, G. Tessitore, BSDE on an infinite horizon and elliptic PDEs in infinite dimension, NoDEA Nonlinear Differential Equations Appl. 14 (2007), no. 5-6, 825-846.

[13] I. Karatzas, S. E. Shreve, Brownian motion and stochastic calculus, Second edition, Graduate Texts in Mathematics, 113, Springer-Verlag, New York, 1991.

[14] D. Kelome, A. Swiech, Viscosity solutions of an infinite-dimensional Black-Scholes-Barenblatt equation, Appl. Math. Optim. (2003), 253-278.

[15] F. Masiero, Semilinear Kolmogorov equations and applications to stochastic optimal control, Appl. Math. Optim. 51 (2005), pp.
201-250.

[16] F. Masiero, Infinite horizon stochastic optimal control problems with degenerate noise and elliptic equations in Hilbert spaces, Appl. Math. Optim. 55 (2007), no. 3, 285-326.

[17] E. Pardoux, S. Peng, Adapted solution of a backward stochastic differential equation, Systems Control Lett. 14 (1990), 55-61.

[18] E. Pardoux, S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differential equations, in: Stochastic partial differential equations and their applications (eds. B. L. Rozovskii, R. B. Sowers), 200-217, Lecture Notes in Control and Information Sciences, 176, Springer, 1992.

[19] J. Yong, X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Applications of Mathematics, 43, Springer-Verlag, New York, 1999.