Noether theorem in stochastic optimal control problems via contact symmetries
Francesco C. De Vecchi, Elisa Mastrogiacomo, Mattia Turra, Stefania Ugolini
Institute for Applied Mathematics and Hausdorff Center for Mathematics, University of Bonn, Germany
Dipartimento di Economia, Università degli Studi dell'Insubria, Italy
Dipartimento di Matematica, Università degli Studi di Milano, Italy
Abstract
We establish a generalization of Noether theorem for stochastic optimal control problems. Exploiting the tools of jet bundles and contact geometry, we prove that from any (contact) symmetry of the Hamilton–Jacobi–Bellman equation associated to an optimal control problem it is possible to build a related local martingale. Moreover, we provide an application of the theoretical results to Merton's optimal portfolio problem, showing that this model admits infinitely many conserved quantities in the form of local martingales.
Keywords:
Noether theorem, stochastic optimal control, contact symmetries, Merton's optimal portfolio problem.
The concept of symmetry of ordinary or partial differential equations (ODEs and PDEs) was introduced by Sophus Lie at the end of the 19th century with the aim of extending Galois theory from polynomial to differential equations. Actually, the whole theory of Lie groups and algebras was developed by Lie himself, as well as the principal tools for facing the problem of symmetries of differential equations (see [30] for a historical introduction to the subject and [47, 56] for some modern presentations).

One of the most important applications of the study of symmetries in physical systems was provided by Emmy Noether. She understood that when an equation comes from a variational problem, such as in Lagrangian mechanics, general relativity or, more generally, field theory, it is possible to relate each symmetry of the equation to a conserved quantity, i.e., a function of the state of the system that does not change during the evolution of the dynamics, and, conversely, to each conserved quantity it is possible to associate a symmetry of the motion. The simplest examples are, in Newtonian and Lagrangian mechanics, the conservation of energy and momentum associated with invariance under time and space translations.

We prove that, given the generator $\Omega(t, x, u, u_x)$ of a contact symmetry (which is a regular function defined on the jet space $J^1(\mathbb{R}^n, \mathbb{R})$, i.e., a map depending on a function $u$ and on its first derivatives $u_x$), a regular solution $U(t, x)$ to the HJB equation and the solution $X_t$ to the optimal control problem, the process $O_t = \Omega(t, X_t, U(t, X_t), \nabla U(t, X_t))$, obtained by composing the generator $\Omega$ with the function $U$ and the process $X$, is a local martingale.

Furthermore, we generalize Noether theorem also to the case where the coefficients and the Lagrangian of the control problem are random.
Indeed, we establish that Noether theorem holds also in the case of the stochastic HJB equation, introduced by Peng in [48] to study optimal control problems with stochastic final condition or stochastic Lagrangian, provided that we restrict ourselves to a subset of Lie point symmetries (Theorem 4.8 and Corollary 4.9).

Finally, the present paper provides an application of our theory to a non-trivial and interesting problem arising in mathematical finance, that is, Merton's optimal portfolio problem. First proposed by Merton in [41], this model nowadays finds many different applications and generalizations (see [53] for a review of the original problem and various generalizations, and [8, 9, 23, 46] for some more recent works on the subject). A particular form of Noether theorem for this problem can be found in [6]. We show here that the HJB equation of this optimal control system admits infinitely many contact symmetries. It is important to notice that the generalization to contact symmetries is essential in this specific problem, since, when we restrict to Lie point symmetries, as is done in the aforementioned literature, the equation admits only a finite number of infinitesimal invariants. The presence of infinitely many contact symmetries yields the possibility of constructing infinitely many martingales whose means are preserved by the evolution of the system. Moreover, we also point out that, when the final condition is random or the coefficients of the evolution of the stock are general adapted processes, our stochastic generalization of Noether theorem (Corollary 4.9) allows us to construct some non-trivial martingales for this classical mathematical model. We think that the presence of these martingales could be related to the existence of many explicit solutions for Merton's problem, and therefore we expect that the methods presented here can be used to build other explicit solutions for it.
We plan to study in a future work the financial consequences of the conservation laws identified in this paper. Since the stochastic and geometrical frameworks are not so commonly put together, we also provide a concise introduction to both these subjects.

Plan of the paper
The paper is organized as follows. Section 2 introduces optimal control problems in both the deterministic and the stochastic case, presenting also the HJB equation, and it also serves to fix the notation that we adopt throughout the paper. Contact symmetries and their properties in the PDE setting are discussed in Section 3. Section 4 contains the main theoretical results of the paper, namely Noether theorems for deterministic and stochastic HJB equations. The application of such results to Merton's optimal portfolio problem is given in Section 5.
We give here an overview of some results about stochastic control problems, referring the interested reader to [22, 49, 58, 59] for further investigations of such results, though more precise references will be given throughout the section. The main aim of this section is to introduce the topics we will deal with and to give the tools from stochastic optimal control theory that we will use later on in the paper.
We start by recalling some notions about deterministic optimal control and, in particular, we focus on Lagrangian-type optimal control problems, i.e., problems arising from the Lagrangian formulation of classical mechanics. More precisely, we consider a system of controlled ODEs of the form

$$dX^i_t = \alpha^i_t \, dt, \qquad (2.1)$$

where $X_\cdot = (X^1_\cdot, \dots, X^n_\cdot) \colon [t_0, T] \to \mathbb{R}^n$ is in $C^1([t_0, T], \mathbb{R}^n)$, $t_0, T \in \mathbb{R}$, with $t_0 < T$, are the initial time and the final time horizon, respectively, and $\alpha_\cdot = (\alpha^1_\cdot, \dots, \alpha^n_\cdot) \in C^0([t_0, T], \mathbb{R}^n)$ is the control function. We want to maximize the following objective functional

$$J(t_0, x, \alpha) = \int_{t_0}^T L(X^{t_0,x}_s, \alpha_s) \, ds + g(X^{t_0,x}_T), \qquad (2.2)$$

where $X^{t_0,x}_t$ is the solution to the ODE (2.1) such that $X^{t_0,x}_{t_0} = x \in \mathbb{R}^n$.

We suppose that there exists only one smooth function $\mathcal{A} \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ such that

$$\sum_{i=1}^n \mathcal{A}^i(x, p) p_i + L(x, \mathcal{A}(x, p)) = \sup_{a \in \mathbb{R}^n} \Big( \sum_{i=1}^n a^i p_i + L(x, a) \Big), \qquad (x, p) \in \mathbb{R}^n \times \mathbb{R}^n,$$

and also that, for any $x \in \mathbb{R}^n$, the map $\mathcal{A}(x, \cdot) := (\mathcal{A}^1(x, \cdot), \dots, \mathcal{A}^n(x, \cdot))$ is smoothly invertible in all its variables as a function from $\mathbb{R}^n$ into itself. Define then the PDE

$$u_t - H(x, u_x) = u_t - \Big( \sum_{i=1}^n \mathcal{A}^i(x, u_x) u_{x_i} + L(x, \mathcal{A}(x, u_x)) \Big) = 0, \qquad (2.3)$$

where $u_x = (u_{x_1}, \dots, u_{x_n})$. Equation (2.3) is usually referred to as the Hamilton–Jacobi equation in the context of Lagrangian mechanics, or the Hamilton–Jacobi–Bellman equation in that of optimal control theory.

We state now the deterministic version of the so-called verification theorem.

Theorem 2.1. Let $U(t, x) \in C^1([t_0, T] \times \mathbb{R}^n, \mathbb{R})$ be a solution to the Hamilton–Jacobi equation (2.3). Then the optimal control problem (2.1) with objective functional (2.2) admits a unique solution, for any $x \in \mathbb{R}^n$, given, for every $i = 1, \dots, n$, by

$$\alpha^i_t = \mathcal{A}^i(X_t, \nabla U(t, X_t)), \qquad \text{for every } t \in [t_0, T].$$

Proof. See, e.g., Theorem 4.4 in [22]. □
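To make the verification theorem concrete, here is a minimal numerical sketch for an illustrative one-dimensional problem of our own choosing (not taken from the paper): $L(x, a) = -a^2/2$, $g(x) = x$, $dX = \alpha \, dt$. The Euler–Lagrange equation (2.4) forces $\alpha$ to be constant, and the terminal reward fixes $\alpha^* = g'(X_T) = 1$, with optimal value $J = x_0 + (T - t_0)/2$.

```python
# Minimal sketch of the deterministic verification theorem on a toy problem
# (our illustrative assumption, not the paper's model):
#   L(x, a) = -a^2/2,  g(x) = x,  dX = alpha dt,
# whose optimal control is the constant alpha* = 1.
T, t0, x0, dt = 1.0, 0.0, 0.0, 1e-3

def J(alpha_fn):
    """Euler discretization of the objective functional (2.2)."""
    x, t, total = x0, t0, 0.0
    while t < T - 1e-12:
        a = alpha_fn(t, x)
        total += (-0.5 * a ** 2) * dt   # running reward L(x, a)
        x += a * dt                     # controlled ODE (2.1)
        t += dt
    return total + x                    # terminal reward g(X_T) = X_T

J_opt = J(lambda t, x: 1.0)             # the optimal (constant) control
J_sub = J(lambda t, x: 0.5)             # an arbitrary suboptimal control
assert J_opt > J_sub
assert abs(J_opt - (x0 + (T - t0) / 2)) < 1e-2
```

The second assertion checks that the simulated optimal value agrees with the value $x_0 + (T - t_0)/2$ predicted by dynamic programming for this toy model.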
Remark 2.2. It is important to note that, in the deterministic case and when $U \in C^{1,2}([t_0, T] \times \mathbb{R}^n, \mathbb{R})$, i.e., $U$ is differentiable one time with respect to time $t$ and two times with respect to space $x \in \mathbb{R}^n$, the function $t \mapsto \alpha_t$ is $C^1([t_0, T], \mathbb{R}^n)$ and it satisfies the Euler–Lagrange equations

$$\frac{d}{dt} \big( \partial_{a_i} L(X_t, \alpha_t) \big) - \partial_{x_i} L(X_t, \alpha_t) = 0, \qquad (2.4)$$

where $i = 1, \dots, n$.

An optimal control problem consists in maximizing an objective functional, depending on the state of a dynamical system, on which we can act through a control process. Let $K$ be a (convex) subset of $\mathbb{R}^d$ and fix a final time $T > 0$. Denote by $W$ an $m$-dimensional Brownian motion on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$, where $(\mathcal{F}_t)_{t \ge 0}$ is the natural filtration generated by $W$. We assume that the state of the system is modeled by the following stochastic differential equation (SDE)

$$dX_t = \mu(t, X_t, \alpha_t) \, dt + \sigma(t, X_t, \alpha_t) \, dW_t, \quad t_0 < t \le T, \qquad X_{t_0} = x, \qquad (2.5)$$

where $\mu \colon \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^n$ and $\sigma \colon \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^{n \times m}$ are measurable functions that are also Lipschitz-continuous on the set $K$, i.e., there exists a constant $C > 0$ such that, for every $t \in \mathbb{R}_+$, $x, y \in \mathbb{R}^n$, $a \in K$,

$$|\mu(t, x, a) - \mu(t, y, a)| + \|\sigma(t, x, a) - \sigma(t, y, a)\| \le C |x - y|, \qquad (2.6)$$

where $\|\sigma\|^2 = \mathrm{tr}(\sigma^* \sigma)$. We will also use the notation $\mu = (\mu^i)_{i=1,\dots,n}$ and $\sigma = (\sigma^{i\ell})_{i=1,\dots,n,\, \ell=1,\dots,m}$. The control process $\alpha = (\alpha_s)$, appearing in (2.5), is a $K$-valued progressively measurable process with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$. We denote by $\mathcal{K}$ the set of control processes $\alpha$ such that

$$\mathbb{E}\Big[ \int_0^T \big( |\mu(t, 0, \alpha_t)|^2 + \|\sigma(t, 0, \alpha_t)\|^2 \big) \, dt \Big] < +\infty. \qquad (2.7)$$

We call $X^{t_0,x}_t$, $t \in [t_0, T]$, the solution to the SDE (2.5).

Remark 2.3. Conditions (2.6)–(2.7) imply that, for any initial condition $(t_0, x) \in [0, T) \times \mathbb{R}^n$ and for all $\alpha \in \mathcal{K}$, there exists a unique strong solution $X^{t_0,x}_t$ to the SDE (2.5) (see, e.g., Theorem 2.2 in Chapter 4 of [32]).

Let now $L \colon \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}$ and $g \colon \mathbb{R}^n \to \mathbb{R}$ be two measurable functions such that

(i) $g$ is bounded from below,

(ii) $g$ satisfies the quadratic growth condition $|g(x)| \le C(1 + |x|^2)$, for every $x \in \mathbb{R}^n$, for some constant $C$ independent of $x$.

For $(t_0, x) \in [0, T) \times \mathbb{R}^n$, we denote by $\mathcal{K}_L(t_0, x)$ the subset of controls in $\mathcal{K}$ such that

$$\mathbb{E}\Big[ \int_{t_0}^T |L(t, X^{t_0,x}_t, \alpha_t)| \, dt \Big] < +\infty.$$

We consider an objective function of the following form

$$J(t_0, x, \alpha) = \mathbb{E}\Big[ \int_{t_0}^T L(s, X^{t_0,x}_s, \alpha_s) \, ds + g(X^{t_0,x}_T) \Big].$$

We are now in a position to introduce the stochastic optimal control problem.

Definition 2.4.
Fixed $(t_0, x) \in [0, T) \times \mathbb{R}^n$, the stochastic optimal control problem consists in maximizing the objective function $J(t_0, x, \alpha)$ over all $\alpha \in \mathcal{K}_L(t_0, x)$ subject to the SDE (2.5). The associated value function is then defined as

$$U(t_0, x) = \max_{\alpha \in \mathcal{K}_L(t_0, x)} \mathbb{E}\Big[ \int_{t_0}^T L(t, X^{t_0,x}_t, \alpha_t) \, dt + g(X^{t_0,x}_T) \Big].$$

Given an initial condition $(t_0, x) \in [0, T) \times \mathbb{R}^n$, we call $\alpha^* \in \mathcal{K}_L(t_0, x)$ an optimal control if $J(t_0, x, \alpha^*) = U(t_0, x)$.

We call Hamilton–Jacobi–Bellman (HJB) equation the PDE

$$\partial_t \varphi(t, x) + \sup_{a \in K} \{ \mathcal{L}^a_t \varphi(t, x) + L(t, x, a) \} = 0, \quad (t, x) \in [t_0, T) \times \mathbb{R}^n, \qquad \varphi(T, x) = g(x), \quad x \in \mathbb{R}^n, \qquad (2.8)$$

where $\mathcal{L}^a_t$ is the Kolmogorov operator associated with equation (2.5), namely, for $\psi \in C^2(\mathbb{R}^n)$,

$$\mathcal{L}^a_t \psi(x) = \frac{1}{2} \sum_{i,j=1}^n \eta^{ij}(t, x, a) \, \partial_{x_i x_j} \psi(x) + \sum_{i=1}^n \mu^i(t, x, a) \, \partial_{x_i} \psi(x), \qquad (t, x, a) \in \mathbb{R}_+ \times \mathbb{R}^n \times K,$$

with $\eta^{ij}$ defined, for every $i, j \in \{1, \dots, n\}$, as

$$\eta^{ij}(t, x, a) = (\sigma \sigma^\top)^{ij}(t, x, a) = \sum_{\ell=1}^m \sigma^{i\ell}(t, x, a) \, \sigma^{j\ell}(t, x, a), \qquad (t, x, a) \in \mathbb{R}_+ \times \mathbb{R}^n \times K.$$

We also write, for $x \in \mathbb{R}^n$, $p \in \mathbb{R}^n$ and $q \in \mathbb{R}^{n \times n}$,

$$H(t, x, p, q) = \sup_{a \in K} \Big\{ \frac{1}{2} \sum_{i,j=1}^n \eta^{ij}(t, x, a) q_{ij} + \sum_{i=1}^n \mu^i(t, x, a) p_i + L(t, x, a) \Big\},$$

so that the HJB equation (2.8) can also be written in the following way

$$\partial_t \varphi(t, x) + H(t, x, \nabla \varphi, D^2 \varphi) = 0, \quad (t, x) \in [t_0, T) \times \mathbb{R}^n, \qquad \varphi(T, x) = g(x), \quad x \in \mathbb{R}^n. \qquad (2.9)$$

We state here the classical verification theorem.

Theorem 2.5.
Let $\varphi \in C^{1,2}([0, T) \times \mathbb{R}^n) \cap C^0([0, T] \times \mathbb{R}^n)$ be a solution to the HJB equation (2.9) for $t_0 = 0$, satisfying the following quadratic growth condition, for some constant $C$,

$$|\varphi(t, x)| \le C(1 + |x|^2), \qquad \text{for all } (t, x) \in [0, T] \times \mathbb{R}^n. \qquad (2.10)$$

Suppose that there exists a measurable function $A^*(t, x)$, $(t, x) \in [0, T) \times \mathbb{R}^n$, taking values in $K$, such that

(i) we have
$$0 = \partial_t \varphi(t, x) + H(t, x, \nabla \varphi, D^2 \varphi) = \partial_t \varphi(t, x) + \mathcal{L}^{A^*(t,x)}_t \varphi(t, x) + L(t, x, A^*(t, x)),$$

(ii) the SDE
$$dX_s = \mu(s, X_s, A^*(s, X_s)) \, ds + \sigma(s, X_s, A^*(s, X_s)) \, dW_s,$$
with initial condition $X_t = x$, admits a unique solution $X^*_s$,

(iii) the process $A^*(s, X^*_s)$, $s \in [t, T]$, lies in $\mathcal{K}_L(t, x)$.

Then $\varphi(t, x) = U(t, x)$, $(t, x) \in [0, T] \times \mathbb{R}^n$, and $A^*(\cdot, X^*_\cdot)$ is an optimal control for the stochastic optimal control problem in Definition 2.4.

Proof. See, e.g., Theorem 3.5.2 in [49]. Some other references for the verification theorem are Theorem 4.1 in [22], Theorem 5.7 in [58], and Theorem 4.1 in [59]. □
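A Monte Carlo sketch can illustrate the verification theorem on a solvable toy model (all model choices are ours, not the paper's): $dX = \alpha \, dt + dW$, $K = \mathbb{R}$, $L(t, x, a) = -a^2/2$, $g(x) = x$. Then $\varphi(t, x) = x + (T - t)/2$ solves the HJB equation (2.9), since $\partial_t \varphi + \sup_a \{ a \, \partial_x \varphi - a^2/2 + \tfrac{1}{2} \partial_{xx} \varphi \} = -\tfrac{1}{2} + \tfrac{1}{2} = 0$, with optimal feedback $A^*(t, x) \equiv 1$.

```python
import numpy as np

# Monte Carlo check of the verification theorem on an illustrative model
# (our assumption): dX = alpha dt + dW, L = -alpha^2/2, g(x) = x, for which
# phi(t, x) = x + (T - t)/2 solves (2.9) and A*(t, x) = 1.
rng = np.random.default_rng(1)
T, x0, n_steps, n_paths = 1.0, 0.0, 200, 4000
dt = T / n_steps

def objective(a, dW):
    """Monte Carlo estimate of J(0, x0, alpha) for the constant control a."""
    X = np.full(n_paths, x0)
    running = 0.0
    for k in range(n_steps):
        running += (-0.5 * a ** 2) * dt      # running reward L
        X = X + a * dt + dW[:, k]            # Euler-Maruyama step of (2.5)
    return running + X.mean()                # E[ integral of L + g(X_T) ]

# Common random numbers so the two controls are compared on the same noise.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
J_opt, J_sub = objective(1.0, dW), objective(0.5, dW)
assert J_opt > J_sub                          # A* = 1 beats alpha = 0.5
assert abs(J_opt - (x0 + T / 2)) < 0.1        # J_opt matches phi(0, x0)
```

Using common random numbers for both controls removes most of the Monte Carlo noise from the comparison, a standard variance-reduction choice.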
Remark 2.6. The quadratic growth condition (2.10) is used in Theorem 2.5 only to guarantee that the local martingale part of the semimartingale decomposition of $\varphi(t, X_t)$, namely, by the Itô formula,

$$\sum_{i=1}^n \sum_{\ell=1}^m \int_0^t \sigma^{i\ell}(s, X_s, A^*(s, X_s)) \, \partial_{x_i} \varphi(s, X_s) \, dW^\ell_s, \qquad (2.11)$$

is in $L^2$ and a martingale (and not only a local martingale). This means that the statement of Theorem 2.5 holds assuming only that (2.11) is an $L^2$ martingale, i.e., without condition (2.10).

2.3 Stochastic Hamilton–Jacobi–Bellman equation

The present section generalizes the aforementioned Hamilton–Jacobi–Bellman equation to its stochastic counterpart. Let us first recall the Itô–Kunita formula.

Theorem 2.7 (Itô–Kunita formula). Let $F(t, x)$, $(t, x) \in [0, T] \times \mathbb{R}^n$, be a random field which is continuous in $(t, x)$ almost surely, such that

(i) for every $t \in [0, T]$, $F(t, \cdot)$ is a $C^2$-map from $\mathbb{R}^n$ into $\mathbb{R}$, $\mathbb{P}$-a.s.,

(ii) for each $x \in \mathbb{R}^n$, $F(\cdot, x)$ is a continuous semimartingale $\mathbb{P}$-a.s., and it satisfies

$$F(t, x) = F(0, x) + \sum_{j=1}^m \int_0^t f_j(s, x) \, dY^j_s, \qquad \text{for every } (t, x) \in [0, T] \times \mathbb{R}^n, \text{ a.s.},$$

where $Y^j_s$, $j = 1, \dots, m$, are $m$ continuous semimartingales and $f_j(s, x)$, $x \in \mathbb{R}^n$, $s \in [0, T]$, are random fields that are continuous in $(s, x)$ and satisfy the following properties:

(a) for every $s \in [0, T]$, $f_j(s, \cdot)$ is a $C^1$-map from $\mathbb{R}^n$ to $\mathbb{R}$, $\mathbb{P}$-a.s.,

(b) for every $x \in \mathbb{R}^n$, $f_j(\cdot, x)$ is an adapted process.

Let $X_t = (X^1_t, \dots, X^n_t)$ be continuous semimartingales. Then we have, for $t \in [0, T]$,

$$F(t, X_t) = F(0, X_0) + \sum_{j=1}^m \int_0^t f_j(s, X_s) \, dY^j_s + \sum_{i=1}^n \int_0^t \partial_{x_i} F(s, X_s) \, dX^i_s + \sum_{j=1}^m \sum_{i=1}^n \int_0^t \partial_{x_i} f_j(s, X_s) \, d[Y^j, X^i]_s + \frac{1}{2} \sum_{i,k=1}^n \int_0^t \partial_{x_i x_k} F(s, X_s) \, d[X^i, X^k]_s,$$

where $[\cdot, \cdot]_s$ stands for the quadratic covariation of semimartingales. Furthermore, if $F \in C^3$ and $f_j \in C^2$, $\mathbb{P}$-a.s., then we have, for $i = 1, \dots, n$,

$$\partial_{x_i} F(t, x) = \partial_{x_i} F(0, x) + \sum_{j=1}^m \int_0^t \partial_{x_i} f_j(s, x) \, dY^j_s, \qquad \text{for every } (t, x) \in [0, T] \times \mathbb{R}^n, \ \mathbb{P}\text{-a.s.}$$

Proof. See, e.g., the article [34] or the book [35], both by H. Kunita. □
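The local martingale in Remark 2.6 is exactly the kind of conserved quantity this paper's Noether theorem produces. A short simulation makes this visible on a toy model (our illustrative assumption): with $dX = \alpha \, dt + dW$, $L = -\alpha^2/2$, $g(x) = x$, optimal control $\alpha^* \equiv 1$ and $\varphi(t, x) = x + (T - t)/2$, the process $M_t = \varphi(t, X_t) + \int_0^t L \, ds$ reduces to $x_0 + T/2 + W_t$, a true martingale, so its expectation is constant in time.

```python
import numpy as np

# Empirical check that M_t = phi(t, X_t) + integral of L is a martingale along
# the optimal dynamics of an illustrative model (our assumption):
# dX = dt + dW (alpha* = 1), L = -1/2 on the optimum, phi(t, x) = x + (T - t)/2.
rng = np.random.default_rng(2)
T, x0, n_steps, n_paths = 1.0, 0.0, 200, 5000
dt = T / n_steps

X = np.full(n_paths, x0)
running_L = 0.0
means = []
for k in range(n_steps):
    X = X + dt + rng.normal(0.0, np.sqrt(dt), n_paths)   # optimal dynamics
    running_L += -0.5 * dt                               # accumulated L ds
    t = (k + 1) * dt
    M = X + (T - t) / 2 + running_L                      # phi(t, X_t) + int L
    means.append(M.mean())

# E[M_t] stays at phi(0, x0) = x0 + T/2 for every t:
assert max(abs(m - (x0 + T / 2)) for m in means) < 0.1
```

The constancy of the empirical mean across all time steps is the "conserved quantity in the form of a martingale" announced in the introduction, here for the trivial symmetry attached to the value function itself.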
Sticking, where possible, to the notation introduced in Section 2.2, we consider a stochastic optimal control problem where also the functions $L$, $g$, $\mu$ and $\sigma$ are random. More precisely, they depend also on $\omega \in \Omega$ in a predictable way, namely, $L(t, x, a, \cdot)$, $g(x, \cdot)$, $\mu(t, x, \cdot)$, $\sigma(t, x, \cdot)$ are $\mathcal{F}_t$-measurable, for any $(t, x, a) \in \mathbb{R}_+ \times \mathbb{R}^n \times K$. In order to distinguish them from the functions in the previous section and to recall that the following are stochastic terms, we also write

$$L_S(t, x, a) = L(t, x, a, \cdot), \qquad g_S(x) = g(x, \cdot).$$

We then want to maximize the objective functional

$$\mathbb{E}\Big[ \int_{t_0}^T L_S(t, X_t, \alpha_t) \, dt + g_S(X_T) \Big], \qquad (2.12)$$

where $X$ solves the SDE

$$dX_t = \mu(t, X_t, \alpha_t, \omega) \, dt + \sigma(t, X_t, \alpha_t, \omega) \, dW_t, \quad t_0 < t \le T, \qquad X_{t_0} = x, \qquad (2.13)$$

and $\alpha \in \mathcal{K}_L$.

Let us introduce, in a completely analogous way as in the previous section, the value function

$$U(t, x, \omega) = \max_{\alpha \in \mathcal{K}_L} \mathbb{E}\Big[ \int_t^T L_S(s, X_s, \alpha_s) \, ds + g_S(X_T) \,\Big|\, \mathcal{F}_t \Big].$$

From now on, we may omit the explicit dependence of the functions on $\omega \in \Omega$. Then, for any fixed $x$, $U(t, x)$ is an $\mathcal{F}_t$-adapted process but, a priori, it is not of bounded variation. We can anyway expect that it is a continuous semimartingale and therefore, by the representation theorem for semimartingales and martingales (see, e.g., Section IV.31 and Section IV.36 in [54]), that it can be written as follows,

$$U(t, x) = \Gamma_T(x) - \Gamma_t(x) - \int_t^T Y_s(x) \, dW_s, \qquad x \in \mathbb{R}^n, \ t \le T,$$

where, for every $x \in \mathbb{R}^n$, $\Gamma_t(x)$ and $Y_t(x)$ are $\mathcal{F}_t$-adapted processes and $\Gamma_t(x)$ is of bounded variation. In this case, if $\Gamma_t(x)$ and $Y_t(x)$ are almost surely continuous in $(t, x)$, $\Gamma_t(x)$ is differentiable with respect to $t$, and both of them are sufficiently smooth with respect to $x$, then the pair $(U, Y)$ should satisfy a stochastic Hamilton–Jacobi–Bellman equation (SHJB).

More precisely, we say that $(\varphi, \Psi)$ solves the SHJB equation related with the optimal control problem (2.12)–(2.13) if $(\varphi, \Psi)$ satisfies the following backward stochastic partial differential equation

$$d\varphi_t(x) + H_S(t, x, \nabla \varphi, D^2 \varphi, \nabla \Psi) \, dt = \sum_{\ell=1}^m \Psi^\ell_t(x) \, dW^\ell_t, \quad (t, x) \in [t_0, T) \times \mathbb{R}^n, \qquad \varphi_T(x) = g(x), \quad x \in \mathbb{R}^n, \qquad (2.14)$$

where

$$H_S(t, x, u_x, u_{xx}, \psi_x) \equiv H(t, x, u_x, u_{xx}, \psi_x, \omega) = \sup_{a \in K} \mathcal{H}_S(t, x, u_x, u_{xx}, a, \psi_x),$$

and

$$\mathcal{H}_S(t, x, u_x, u_{xx}, a, \psi_x) = \sum_{i=1}^n \mu^i(t, x, a, \omega) u_{x_i} + \frac{1}{2} \sum_{i,j=1}^n \sum_{\ell=1}^m \sigma^{i\ell}(t, x, a, \omega) \sigma^{j\ell}(t, x, a, \omega) u_{x_i x_j} + \sum_{i=1}^n \sum_{\ell=1}^m \sigma^{i\ell}(t, x, a, \omega) \psi^\ell_{x_i} + L_S(t, x, a),$$

with $\psi_x = (\psi^\ell_{x_i})_{i=1,\dots,n,\, \ell=1,\dots,m} \in \mathbb{R}^{n \times m}$. See, e.g., Section 3.1 in [48] for more details about the derivation of equation (2.14) and Section 4 in the same reference for results concerning the well-posedness of such an equation.

We state here the verification theorem, which tells us that a sufficiently smooth solution of the SHJB equation coincides with the value function $U$.

Theorem 2.8.
Let $(\varphi, \Psi)$ be a smooth solution of the SHJB equation (2.14) with $t_0 = 0$ and assume that the following conditions hold:

(i) for each $t \in [0, T]$, $x \mapsto (\varphi_t(x), \Psi_t(x))$ is a $C^2$-map from $\mathbb{R}^n$ into $\mathbb{R} \times \mathbb{R}^m$, $\mathbb{P}$-a.s.,

(ii) for each $x \in \mathbb{R}^n$, $t \mapsto (\varphi_t(x), \Psi_t(x))$ and $t \mapsto (\nabla \varphi_t(x), D^2 \varphi_t(x), \nabla \Psi_t(x))$ are continuous $\mathcal{F}_t$-adapted processes.

Suppose further that there exists a predictable admissible control $A^*(t, x, \omega)$ such that

$$H_S(t, x, \nabla \varphi, D^2 \varphi, \nabla \Psi) = \mathcal{H}_S(t, x, \nabla \varphi, D^2 \varphi, A^*(t, x, \omega), \nabla \Psi),$$

and that it is regular enough so that the SDE (2.13) is well-posed with solution $X$. Then $(\varphi, \Psi) = (U, Y)$ and, moreover, for any initial datum $(0, x)$ with $x \in \mathbb{R}^n$, the control $A^*(t, X_t, \omega)$ maximizes the objective functional (2.12).

Proof. See Section 3.2 in [48]. □
Remark 2.9. Under suitable regularity conditions on $\mu$, $\sigma$, $L_S$ and $g_S$, it is possible to prove that the SHJB equation (2.14) admits a unique solution satisfying the hypotheses of Theorem 2.8. A rigorous proof of this fact can be found in Section 4 of [48]. For further developments on SHJB equations and the related stochastic optimal control problems we refer the reader to, e.g., [11, 12, 21, 52], as well as the already mentioned paper by Peng [48].

In this section, we recall some basic facts concerning the theory of symmetries on which our results are based, referring to [16, 24, 31, 47, 56] for a complete treatment of these topics. We start with a formal introduction to jet spaces (for an extended introduction to the subject see, e.g., [10, 55]), and then proceed with contact symmetries and their applications in solving PDEs. Although these results are well-known, we include here a small survey for the ease of the reader, and we introduce the notation that will be adopted in the rest of the paper.
The jet space is a generalization of the notion of the tangent bundle of a manifold. Let $M$ and $N$ be two open subsets of $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, and consider a smooth function $f \colon M \to N$. Take a standard coordinate system $x = (x_1, \dots, x_m)$ in $M$ and let $u = (u_1, \dots, u_n) = f(x) \in N$. We can then consider the $k$-th prolongation $u^{(k)} = \mathrm{pr}^{(k)} f(x)$, which is defined by the relations $u^j_{x_i} = \partial_{x_i} f^j(x)$, $u^j_{x_i x_l} = \partial_{x_i x_l} f^j(x)$, and so on, up to order $k$. For example, if $m = 2$ and $n = 1$, then $\mathrm{pr}^{(2)} f(x_1, x_2)$ is given by

$$(u;\, u_{x_1}, u_{x_2};\, u_{x_1 x_1}, u_{x_1 x_2}, u_{x_2 x_2}) = (f;\, \partial_{x_1} f, \partial_{x_2} f;\, \partial_{x_1 x_1} f, \partial_{x_1 x_2} f, \partial_{x_2 x_2} f)(x_1, x_2).$$

The $k$-th prolongation can also be viewed as the Taylor polynomial of degree $k$ of $f$ at the point $x$. The space whose coordinates represent the independent variables, the dependent variables and the derivatives of the dependent variables up to order $k$ is called the $k$-th order jet space of the underlying space $N \times M$, and we denote it by $J^k(M, N)$. It is a smooth vector bundle over $M$ with projection $\pi_{k,-1} \colon J^k(M, N) \to M$ given by

$$\pi_{k,-1}(x, u, u_x, u_{xx}, \dots) = x.$$

More explicitly, $J^k(M, N) = M \times N \times N^{(1)} \times \cdots \times N^{(k)}$, where $N^{(i)}$ is the space of $i$-th order derivatives of $u$ with respect to $x$. It is clear that $N^{(i)} \subseteq \mathbb{R}^{n_i}$ with $n_i = n \binom{m+i-1}{i}$. To any function $f \in C^k(M, N)$, where $C^k(M, N)$ is the infinite-dimensional Fréchet space of $k$ times differentiable functions on $M$ taking values in $N$, we associate a continuous section of the bundle $(J^k(M, N), M, \pi_{k,-1})$ in the following way

$$f \mapsto \boldsymbol{D}^k(f)(x) = (x, u = f(x), u_x = \nabla f(x), u_{xx} = D^2 f(x), \dots, D^k f(x)),$$

where $D^i f(x)$ is the vector collecting all the $i$-th order derivatives of $f$ with respect to $x$. In this setting, a differential equation is a sub-manifold $\Delta_{\mathcal{E}} \subset J^k(M, N)$. For example, in the scalar case $N = \mathbb{R}$, we usually consider $\Delta_{\mathcal{E}}$ as the null set of some regular functions, i.e., $\Delta_{\mathcal{E}} = \{ E_i(x, u, u_x, u_{xx}, \dots) = 0, \ i \in \{1, \dots, p\} \}$.

Definition 3.1. Consider a (finite) set of smooth functions $E_i \colon J^k(M, N) \to \mathbb{R}$, for $i = 1, \dots, p$, where $p \in \mathbb{N}$ and $p > 0$, defining a sub-manifold $\Delta_{\mathcal{E}} = \{ E_i(x, u, u_x, u_{xx}, \dots) = 0, \ i \in \{1, \dots, p\} \}$ of $J^k(M, N)$. We say that a smooth function $f \colon M \to N$ is a solution to the equation $\mathcal{E}$ (represented by the sub-manifold $\Delta_{\mathcal{E}}$) if, for any $x \in M$, we have $\boldsymbol{D}^k f(x) \in \Delta_{\mathcal{E}}$.
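The prolongation above is simply a table of partial derivatives, so it can be computed symbolically. A quick sketch (using sympy, with an arbitrary illustrative choice of $f$) makes the coordinates of the 2-jet of a function of two variables explicit:

```python
import sympy as sp

# Second-order prolongation pr^(2) f of a scalar function of two variables,
# i.e. the jet coordinates (u; u_x1, u_x2; u_x1x1, u_x1x2, u_x2x2).
# The function f below is an arbitrary illustrative choice.
x1, x2 = sp.symbols('x1 x2')
f = sp.sin(x1) * x2 ** 2

jet2 = {
    'u': f,
    'u_x1': sp.diff(f, x1), 'u_x2': sp.diff(f, x2),
    'u_x1x1': sp.diff(f, x1, 2),
    'u_x1x2': sp.diff(f, x1, x2),
    'u_x2x2': sp.diff(f, x2, 2),
}
# Mixed second derivative of sin(x1)*x2^2 is 2*x2*cos(x1):
assert sp.simplify(jet2['u_x1x2'] - 2 * x2 * sp.cos(x1)) == 0
```

Evaluating this dictionary at a point $x$ gives exactly the section $\boldsymbol{D}^2(f)(x)$ of the bundle $J^2(\mathbb{R}^2, \mathbb{R})$.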
The set of all solutions to the equation $\mathcal{E}$ will be denoted by $\mathcal{S}_{\mathcal{E}}$. For instance, in the previous case where $N = \mathbb{R}$ and $\Delta_{\mathcal{E}} = \{ E_i(x, u, u_x, \dots) = 0, \ i \in \{1, \dots, p\} \}$, $f$ is a solution to the equation $\mathcal{E}$ if $E_i(x, f(x), \nabla f(x), \dots) = 0$, for every $i = 1, \dots, p$ and $x \in M$.

Remark 3.2. For technical reasons, it is usually not possible to consider generic equations $\mathcal{E}$ (corresponding to generic sub-manifolds $\Delta_{\mathcal{E}} \subset J^k(M, N)$). In the following, we always consider non-degenerate systems of differential equations in the sense of Definition 2.70 in [47]. This condition assures that, for each fixed $x_0 \in M$ and each set of derivatives $(u_0, u_{x,0}, u_{xx,0}, \dots)$, there exists a solution to the equation defined in a neighborhood of $x_0$ with the prescribed derivatives $(u_0, u_{x,0}, u_{xx,0}, \dots)$ at the point $x_0$. Since the precise formulation of this condition is quite technical and the evolution equations considered in Section 4 always satisfy such an assumption, we refer to Section 2.6 of [47] for complete details.

We want to introduce a class of transformations induced by diffeomorphisms of $J^k(M, N)$. For simplicity, we consider the case $k = 2$, $M \subset \mathbb{R}^n$ and $N = \mathbb{R}$. Consider a diffeomorphism $\Phi \colon J^2(M, N) \to J^2(M, N)$ given by the following relations

$$\tilde{x}_i = \Phi^{x_i}(x, u, u_x, u_{xx}), \qquad \tilde{u} = \Phi^u(x, u, u_x, u_{xx}), \qquad \tilde{u}_{x_i} = \Phi^{u_{x_i}}(x, u, u_x, u_{xx}), \qquad \tilde{u}_{x_i x_j} = \Phi^{u_{x_i x_j}}(x, u, u_x, u_{xx}).$$

Hereafter, we use the notation $\Phi^x = (\Phi^{x_1}, \dots, \Phi^{x_n})$, $\Phi^{u_x} = (\Phi^{u_{x_1}}, \dots, \Phi^{u_{x_n}})$ and $\Phi^{u_{xx}} = (\Phi^{u_{x_i x_j}})_{i,j=1,\dots,n}$. We now aim to define a transformation $F_\Phi$ on the space of smooth functions induced by the map $\Phi$ on the jet space. Let $U \in C^\infty(M, N)$ and consider the map $C_{U,\Phi} \colon M \to M$ given by

$$C_{U,\Phi}(x) = \Phi^x(x, U(x), \nabla U(x), D^2 U(x)).$$

Let also $\mathcal{F}_\Phi \subset C^\infty(M, N)$ be the subset of smooth functions $U \in C^\infty(M, N)$ such that $C_{U,\Phi}$ is a diffeomorphism from $M$ into itself.

Definition 3.3.
We say that the diffeomorphism $\Phi$ generates the (nonlinear) operator $F_\Phi$ on the space of functions $\mathcal{F}_\Phi$ if there is a map $F_\Phi \colon \mathcal{F}_\Phi \to C^\infty(M, N)$ such that

$$F_\Phi(U)(x) = \Phi^u\big(C^{-1}_{U,\Phi}(x), U(C^{-1}_{U,\Phi}(x)), \nabla U(C^{-1}_{U,\Phi}(x)), D^2 U(C^{-1}_{U,\Phi}(x))\big),$$
$$\partial_{x_i} F_\Phi(U)(x) = \Phi^{u_{x_i}}\big(C^{-1}_{U,\Phi}(x), U(C^{-1}_{U,\Phi}(x)), \nabla U(C^{-1}_{U,\Phi}(x)), D^2 U(C^{-1}_{U,\Phi}(x))\big),$$
$$\partial_{x_i x_j} F_\Phi(U)(x) = \Phi^{u_{x_i x_j}}\big(C^{-1}_{U,\Phi}(x), U(C^{-1}_{U,\Phi}(x)), \nabla U(C^{-1}_{U,\Phi}(x)), D^2 U(C^{-1}_{U,\Phi}(x))\big).$$

Not every diffeomorphism $\Phi \colon J^2(M, N) \to J^2(M, N)$ generates an operator $F_\Phi$ on the space of functions $\mathcal{F}_\Phi$. For example, consider $M = \mathbb{R}$ and the map $\Phi^x(x, u, u_x, u_{xx}) = \lambda x$, $\Phi^u(x, u, u_x, u_{xx}) = u$, $\Phi^{u_x}(x, u, u_x, u_{xx}) = u_x$, and $\Phi^{u_{xx}}(x, u, u_x, u_{xx}) = u_{xx}$, where $\lambda > 0$, $\lambda \neq 1$. In this case, for any $U \in C^\infty(M, N)$, the map $C_{U,\Phi}$ is given by $C_{U,\Phi}(x) = \lambda x$ and, thus, it does not depend on $U$ and it is always a diffeomorphism from $\mathbb{R}$ into itself, since $\lambda \neq 0$. This implies that $\mathcal{F}_\Phi = C^\infty(M, N)$ and also that, if the map $F_\Phi$ exists, then it must satisfy

$$F_\Phi(U) = U(\lambda^{-1} x), \qquad U \in C^\infty(M, N).$$

On the other hand, we have

$$\partial_x F_\Phi(U) = \lambda^{-1} U'(\lambda^{-1} x) \neq U'(\lambda^{-1} x) = \Phi^{u_x}\big(C^{-1}_{U,\Phi}(x), U(C^{-1}_{U,\Phi}(x)), \nabla U(C^{-1}_{U,\Phi}(x)), D^2 U(C^{-1}_{U,\Phi}(x))\big).$$

This simple counterexample shows that a diffeomorphism $\Phi \colon J^2(M, N) \to J^2(M, N)$ must satisfy some additional conditions in order to generate an operator $F_\Phi$. For this reason, we introduce the following definition.

Definition 3.4.
A diffeomorphism $\Phi \colon J^2(M, N) \to J^2(M, N)$ is said to be a contact transformation if it generates a (nonlinear) operator $F_\Phi$ in the sense of Definition 3.3.

It is possible to give a nice geometric characterization of the set of contact transformations. From now on, we write $\Lambda^1 J^n(M, N)$ for the vector space of 1-forms on $J^n(M, N)$. In particular, consider the following 1-forms,

$$\kappa = du - \sum_{i=1}^n u_{x_i} \, dx_i, \qquad (3.1)$$

$$\kappa_{x_i} = du_{x_i} - \sum_{j=1}^n u_{x_i x_j} \, dx_j.$$

We denote by $\mathfrak{C} \subset \Lambda^1 J^2(M, N)$ the contact structure, also called the Cartan distribution in [10], which is generated by

$$\mathfrak{C} = \mathrm{span}\{ \kappa, \kappa_{x_i}, \ i = 1, \dots, n \}.$$

Theorem 3.5. A diffeomorphism $\Phi \colon J^2(M, N) \to J^2(M, N)$ is a contact transformation in the sense of Definition 3.4 if and only if it preserves the contact structure $\mathfrak{C}$, that is, $\Phi^*(\mathfrak{C}) = \mathfrak{C}$, where $\Phi^*$ is the pull-back of differential forms on $J^2(M, N)$ induced by $\Phi$.

Proof. See, e.g., Chapter 2 in [10], Section 4 in [16], Chapter 21 in [56], and the references therein. □
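Theorem 3.5 can be checked by hand on simple examples. The sketch below (sympy; the dilation map is our illustrative choice) verifies, on the first-order coordinates $(x, u, u_x)$, that the dilation $\Phi(x, u, u_x) = (e^\lambda x, u, e^{-\lambda} u_x)$ pulls the contact form $\kappa = du - u_x \, dx$ back to itself:

```python
import sympy as sp

# Check that the dilation (x, u, u_x) -> (e^lam x, u, e^{-lam} u_x) preserves
# the contact form kappa = du - u_x dx, by computing pull-back coefficients
# in the coordinates (x, u, u_x).  The example transformation is our choice.
x, u, ux, lam = sp.symbols('x u u_x lambda')
Phi = {'x': sp.exp(lam) * x, 'u': u, 'ux': sp.exp(-lam) * ux}

def d(expr):
    """Differential of expr as the coefficient vector of (dx, du, du_x)."""
    return sp.Matrix([sp.diff(expr, v) for v in (x, u, ux)])

# Phi* kappa = d(Phi^u) - Phi^{u_x} * d(Phi^x)
pullback = d(Phi['u']) - Phi['ux'] * d(Phi['x'])
kappa = d(u) - ux * d(x)
assert sp.simplify(pullback - kappa) == sp.zeros(3, 1)
```

Here the factors $e^\lambda$ from $dx$ and $e^{-\lambda}$ from $u_x$ cancel exactly, which is why this particular prolonged dilation lies in the contact group.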
Remark 3.6. The contact transformation $\Phi$ is uniquely determined by its action on $J^1(M, N)$. In particular, $\Phi^x$, $\Phi^u$ and $\Phi^{u_x}$ depend only on $(x, u, u_x)$ and they do not depend on $u_{xx}$ (see, e.g., Chapter 2 in [10]).

Remark 3.7. In contact geometry, a contact structure on a $(2n+1)$-dimensional manifold $\mathcal{M}$ is a 1-form $\zeta$ which is maximally non-integrable, namely, $\zeta \wedge (d\zeta)^n \neq 0$, and the contact transformations are the diffeomorphisms $\Psi$ of $\mathcal{M}$ such that $\Psi^*(\zeta) = f \cdot \zeta$, for some $f \in C^\infty(\mathcal{M}, \mathbb{R})$ (see, e.g., [4, 29] for an introduction to the subject and [28] for a historical overview). This definition is satisfied by $J^1(M, \mathbb{R})$ with the 1-form $\zeta = \kappa$ defined in (3.1).

In the study of the geometry of jet spaces (see, e.g., Chapter 6 of [55]), the term "contact structure" is often used to denote the set of forms $\mathfrak{C}$. This custom is due to the fact that, as explained in Remark 3.6, the contact transformations are extensions of diffeomorphisms of $J^1(M, N)$, i.e., the set of transformations considered here is in one-to-one correspondence with the one usually considered in contact geometry.

In the following, we will consider not just a single contact transformation but one-parameter groups of contact transformations $\Phi_\lambda$, which means that $\Phi_\cdot \colon \mathbb{R} \times J^2(M, N) \to J^2(M, N)$ is $C^\infty$, $\Phi_\lambda$ is a contact transformation for each $\lambda \in \mathbb{R}$, $\Phi_0(x, u, u_x, u_{xx}) = (x, u, u_x, u_{xx})$, and, for each $\lambda_1, \lambda_2 \in \mathbb{R}$, $\Phi_{\lambda_1} \circ \Phi_{\lambda_2} = \Phi_{\lambda_1 + \lambda_2}$.
In general, a one-parameter group of diffeomorphisms $\Phi_{Y,\lambda}$, $\lambda \in \mathbb{R}$, is generated by a vector field $Y \in TJ^2(M, N)$, i.e., belonging to the tangent bundle of $J^2(M, N)$, which in local coordinates has the expression

$$Y = \sum_{i=1}^n Y^{x_i}(x, u, u_x, u_{xx}) \partial_{x_i} + Y^u(x, u, u_x, u_{xx}) \partial_u + \sum_{i=1}^n Y^{u_{x_i}}(x, u, u_x, u_{xx}) \partial_{u_{x_i}} + \sum_{i,j=1}^n Y^{u_{x_i x_j}}(x, u, u_x, u_{xx}) \partial_{u_{x_i x_j}}, \qquad (3.2)$$

through the following relations

$$\partial_\lambda \Phi^{x_i}_{Y,\lambda}(x, u, u_x, u_{xx}) = Y^{x_i} \circ \Phi_\lambda(x, u, u_x, u_{xx}),$$
$$\partial_\lambda \Phi^u_{Y,\lambda}(x, u, u_x, u_{xx}) = Y^u \circ \Phi_\lambda(x, u, u_x, u_{xx}),$$
$$\partial_\lambda \Phi^{u_{x_i}}_{Y,\lambda}(x, u, u_x, u_{xx}) = Y^{u_{x_i}} \circ \Phi_\lambda(x, u, u_x, u_{xx}),$$
$$\partial_\lambda \Phi^{u_{x_i x_j}}_{Y,\lambda}(x, u, u_x, u_{xx}) = Y^{u_{x_i x_j}} \circ \Phi_\lambda(x, u, u_x, u_{xx}), \qquad (3.3)$$

for any $\lambda \in \mathbb{R}$ and $(x, u, u_x, u_{xx}) \in J^2(M, N)$. It is useful to introduce the following natural notion.

Definition 3.8.
A vector field $Y$ (of the form (3.2)) on $J^2(M, N)$ is called an infinitesimal contact transformation if it generates (through equation (3.3)) a one-parameter group $\Phi_\lambda$ of contact transformations.

The following theorem characterizes all the infinitesimal contact transformations on $J^2(M, N)$.

Theorem 3.9.
A vector field $Y$ on $J^2(M, N)$ is an infinitesimal contact transformation (in the sense of Definition 3.8) if and only if there exists a unique smooth map $\Omega \colon J^1(M, N) \to \mathbb{R}$ such that $Y = Y_\Omega$, where $Y_\Omega$ is the vector field on $J^2(M, N)$ defined as

$$Y_\Omega = -\sum_{i=1}^n \partial_{u_{x_i}} \Omega \, \partial_{x_i} + \Big( \Omega - \sum_{i=1}^n u_{x_i} \partial_{u_{x_i}} \Omega \Big) \partial_u + \sum_{i=1}^n \big( \partial_{x_i} \Omega + u_{x_i} \partial_u \Omega \big) \partial_{u_{x_i}} + \sum_{i,j,k,\ell=1}^n \Big( \partial_{x_i x_j} \Omega + u_{x_j} \partial_{x_i u} \Omega + u_{x_j x_k} \partial_{x_i u_{x_k}} \Omega + u_{x_i} \partial_{x_j u} \Omega + u_{x_i} u_{x_j} \partial_{uu} \Omega + u_{x_i} u_{x_j x_k} \partial_{u u_{x_k}} \Omega + u_{x_i x_j} \partial_u \Omega + u_{x_i x_k} \partial_{x_j u_{x_k}} \Omega + u_{x_i x_k} u_{x_j} \partial_{u_{x_k} u} \Omega + u_{x_i x_k} u_{x_j x_\ell} \partial_{u_{x_k} u_{x_\ell}} \Omega \Big) \partial_{u_{x_i x_j}}. \qquad (3.4)$$
Proof. The proof can be found in Chapter 21 of [56] and the references therein. □
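For $n = 1$, the first-order components of $Y_\Omega$ in (3.4) read $Y^x = -\partial_{u_x} \Omega$, $Y^u = \Omega - u_x \partial_{u_x} \Omega$, $Y^{u_x} = \partial_x \Omega + u_x \partial_u \Omega$. The following sympy sketch verifies, on this first-order level, that the generating function $\Omega = g(x, u) - f(x) u_x$ (the Lie point form of Theorem 3.14 below) indeed produces a vector field with $Y^x = f$ and $Y^u = g$:

```python
import sympy as sp

# First-order components of the contact vector field Y_Omega of Theorem 3.9
# for n = 1, evaluated on the Lie point generating function
# Omega = g(x, u) - f(x) u_x (cf. Theorem 3.14).
x, u, ux = sp.symbols('x u u_x')
f = sp.Function('f')(x)
g = sp.Function('g')(x, u)
Omega = g - f * ux

Yx = -sp.diff(Omega, ux)                       # Y^x   = -dOmega/du_x
Yu = sp.simplify(Omega - ux * sp.diff(Omega, ux))  # Y^u
Yux = sp.diff(Omega, x) + ux * sp.diff(Omega, u)   # Y^{u_x}

assert sp.simplify(Yx - f) == 0 and sp.simplify(Yu - g) == 0
# Y^{u_x} agrees with the total-derivative form -D_x(f) u_x + D_x(g):
assert sp.simplify(Yux - (-sp.diff(f, x) * ux + sp.diff(g, x) + ux * sp.diff(g, u))) == 0
```

The cancellation in $Y^u = \Omega - u_x \partial_{u_x} \Omega = g - f u_x + f u_x = g$ is exactly the mechanism by which contact generating functions linear in $u_x$ reduce to point transformations.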
Remark 3.10. We say that a vector field of the form $Y_\Omega$ satisfying the hypotheses of Theorem 3.9 is the infinitesimal contact transformation generated by the (contact generating) function $\Omega$. With this terminology, Theorem 3.9 guarantees that any infinitesimal contact transformation is generated in a unique way by some smooth function $\Omega \colon J^1(M, N) \to \mathbb{R}$.

There is a special subset of vector fields of the type (3.4), arising from coordinate transformations involving only the dependent and independent variables $(x, u)$.

Definition 3.11.
We say that $Y_{\Omega_{\mathrm{Lie},f,g}}$ is a (projected) Lie point transformation if it is a contact transformation of the form

$$Y_{\Omega_{\mathrm{Lie},f,g}} = \sum_{i=1}^n f_i(x) \partial_{x_i} + g(x, u) \partial_u + \sum_{i=1}^n Y^{u_{x_i}}(x, u, u_x) \partial_{u_{x_i}} + \sum_{i,j=1}^n Y^{u_{x_i x_j}}(x, u, u_x, u_{xx}) \partial_{u_{x_i x_j}}, \qquad (3.5)$$

where $f_i \in C^\infty(M, \mathbb{R})$, $g \in C^\infty(J^0(M, N), \mathbb{R})$, $Y^{u_{x_i}} \in C^\infty(J^1(M, N), \mathbb{R})$ and $Y^{u_{x_i x_j}} \in C^\infty(J^2(M, N), \mathbb{R})$.

Remark 3.12. It is simple to see that a Lie point transformation $Y_{\Omega_{\mathrm{Lie},f,g}}$ can be reduced to a standard vector field $\tilde{Y} = \sum_i f_i(x) \partial_{x_i} + g(x, u) \partial_u$ on $J^0(\mathbb{R}^n, \mathbb{R})$, i.e., $\tilde{Y}$ is the generator of a one-parameter group of transformations involving only the dependent and independent variables $(x, u)$.

Remark 3.13. Another important property of Lie point transformations is the following. Denoting by $\Phi_{\mathrm{Lie},f,g,\lambda}$, $\lambda \in \mathbb{R}$, the one-parameter group generated by the Lie point transformation $Y_{\Omega_{\mathrm{Lie},f,g}}$, we have that, for any $\lambda \in \mathbb{R}$, the domain $\mathcal{F}_{\Phi_{\mathrm{Lie},f,g,\lambda}}$ of the nonlinear operator $F_{\Phi_{\mathrm{Lie},f,g,\lambda}}$ generated by $\Phi_{\mathrm{Lie},f,g,\lambda}$ is the whole of $C^\infty(M, N)$, that is, $\mathcal{F}_{\Phi_{\mathrm{Lie},f,g,\lambda}} = C^\infty(M, N)$.

For what follows, we introduce the (formal) total derivative operators $\mathcal{D}_{x_i} \colon C^\infty(J^k(M, N)) \to C^\infty(J^{k+1}(M, N))$ given by

$$\mathcal{D}_{x_i} = \partial_{x_i} + u_{x_i} \partial_u + \sum_{j=1}^n u_{x_i x_j} \partial_{u_{x_j}} + \dots + \sum_{j_1 \ge \dots \ge j_\ell = 1}^n u_{x_{j_1} \cdots x_{j_\ell} x_i} \partial_{u_{x_{j_1} \cdots x_{j_\ell}}} + \dots \qquad (3.6)$$

In a similar way, we write $\mathcal{D}_{x_i x_j}(\cdot) = \mathcal{D}_{x_i}(\mathcal{D}_{x_j}(\cdot))$, $\mathcal{D}_{x_i x_j x_\ell}(\cdot) = \mathcal{D}_{x_i}(\mathcal{D}_{x_j}(\mathcal{D}_{x_\ell}(\cdot)))$, and so on. We can now characterize more precisely the general form of Lie point transformations.

Theorem 3.14.
The vector field $Y_{\Omega_{\mathrm{Lie},f,g}}$ is a (projected) Lie point transformation if and only if it is generated by a function of the form
$$\Omega_{\mathrm{Lie},f,g}(x,u,u_x) = g(x,u) - \sum_{i=1}^{n} f_i(x)\,u_{x_i}, \qquad (3.7)$$
namely, $Y_{\Omega_{\mathrm{Lie},f,g}}$ has the following expression
$$Y_{\Omega_{\mathrm{Lie},f,g}} = \sum_{i=1}^{n} f_i(x)\,\partial_{x_i} + g(x,u)\,\partial_u + \sum_{i,j=1}^{n} \big( {-}\mathcal{D}_{x_i}(f_j)\,u_{x_j} + \mathcal{D}_{x_i}(g) \big)\,\partial_{u_{x_i}} + \sum_{i,j,k=1}^{n} \big( {-}\mathcal{D}_{x_i x_j}(f_k)\,u_{x_k} - \mathcal{D}_{x_i}(f_k)\,u_{x_k x_j} + \mathcal{D}_{x_i x_j}(g) \big)\,\partial_{u_{x_i x_j}}.$$

Proof.
The theorem is a direct application of Theorem 3.9 to vector fields of the form (3.5). □

If $n = 1$, and the coordinate system of $J^2(\mathbb{R},\mathbb{R})$ is given by $(x,u,u_x,u_{xx})$, some examples of Lie point transformations are:

• The dilation of the independent variable $x$, i.e., $\tilde{Y} = x\,\partial_x$ (see the notation in Remark 3.12), related to the generator function $\Omega = -x u_x$ and generating the one-parameter group defined by
$$\Phi^x_\lambda(x,u) = e^{\lambda} x, \qquad \Phi^u_\lambda(x,u) = u, \qquad \Phi^{u_x}_\lambda(x,u,u_x) = e^{-\lambda} u_x, \qquad \Phi^{u_{xx}}_\lambda(x,u,u_x,u_{xx}) = e^{-2\lambda} u_{xx}.$$

• The dilation of the dependent variable $u$, namely, $\tilde{Y} = u\,\partial_u$, related to the generator function $\Omega = u$ and generating the one-parameter group defined by
$$\Phi^x_\lambda(x,u) = x, \qquad \Phi^u_\lambda(x,u) = e^{\lambda} u, \qquad \Phi^{u_x}_\lambda(x,u,u_x) = e^{\lambda} u_x, \qquad \Phi^{u_{xx}}_\lambda(x,u,u_x,u_{xx}) = e^{\lambda} u_{xx}.$$

We conclude this section providing the definition of symmetry of a differential equation.
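Returning to the first dilation example above, the transformation rules for $u_x$ and $u_{xx}$ are just the chain rule applied to the pushed-forward function. A quick sympy sanity check (the sample function $U$ is an arbitrary illustrative choice):

```python
import sympy as sp

x, lam, x0 = sp.symbols('x lambda x0')
U = sp.exp(x) + x**3          # an arbitrary smooth function u = U(x)

# Pushforward of U under the dilation x -> exp(lambda)*x (u unchanged):
# the transformed function is Ut(x) = U(exp(-lambda)*x).
Ut = U.subs(x, sp.exp(-lam)*x)

# At the transformed point exp(lambda)*x0, the first and second derivatives
# of Ut must be exp(-lambda)*U'(x0) and exp(-2*lambda)*U''(x0), matching
# the one-parameter group components for u_x and u_xx.
lhs1 = sp.diff(Ut, x).subs(x, sp.exp(lam)*x0)
lhs2 = sp.diff(Ut, x, 2).subs(x, sp.exp(lam)*x0)
assert sp.simplify(lhs1 - sp.exp(-lam)*sp.diff(U, x).subs(x, x0)) == 0
assert sp.simplify(lhs2 - sp.exp(-2*lam)*sp.diff(U, x, 2).subs(x, x0)) == 0
```

The same computation with $u \mapsto e^\lambda u$ and $x$ fixed reproduces the second example.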
Definition 3.15.
A contact transformation $\Phi \colon J^1(M,N) \to J^1(M,N)$ is a (contact) symmetry of the differential equation $\mathcal{E}$ if, for any solution $U \in C^\infty(M,N) \cap \mathcal{F}_\Phi$ to the equation $\mathcal{E}$, also $F_\Phi(U)$ is a solution to $\mathcal{E}$, where $F_\Phi$ and $\mathcal{F}_\Phi$ are the operator generated by the contact transformation $\Phi$ and the domain of $F_\Phi$, respectively (see Definition 3.3).

We say that an (infinitesimal) contact transformation $Y_\Omega$ is an (infinitesimal contact) symmetry of the differential equation $\mathcal{E}$ if the one-parameter group $\Phi_{Y_\Omega,\lambda}$ generated by $Y_\Omega$ is a set of symmetries of the equation $\mathcal{E}$.

Remark 3.16. With an abuse of language, we say that the function $\Omega \in C^\infty(J^1(M,N),\mathbb{R})$ is a contact symmetry of the equation $\mathcal{E}$ if the corresponding contact vector field $Y_\Omega$ is a symmetry of $\mathcal{E}$.

Remark 3.17. If $Y$ is a Lie point transformation and it is a contact symmetry of the equation $\mathcal{E}$, then we say that $Y$ is a Lie point symmetry of the equation $\mathcal{E}$.

It is possible to give a completely geometric characterization of the contact symmetries of a differential equation $\mathcal{E}$.

Theorem 3.18 (Determining equations). A contact transformation $\Phi$ is a symmetry of the equation $\mathcal{E}$, represented by the sub-manifold $\Delta_{\mathcal{E}} \subset J^2(M,N)$ of the form $\Delta_{\mathcal{E}} = \{ E_i(x,u,u_x,u_{xx}) = 0,\ i \in \{1,\ldots,p\} \}$, where $p \in \mathbb{N}$, $p > 0$ and $E_i \in C^\infty(J^2(M,N),\mathbb{R})$, if and only if $\Phi(\Delta_{\mathcal{E}}) = \Delta_{\mathcal{E}}$. The infinitesimal contact transformation $Y_\Omega$ is a symmetry of the non-degenerate differential equation $\mathcal{E}$ (see Remark 3.2 for the definition of non-degenerate differential equation) if and only if
$$Y\big( E_i(x,u,u_x,u_{xx}) \big)\big|_{\Delta_{\mathcal{E}}} = 0, \qquad (3.8)$$
where $i = 1,\ldots,p$.

Proof. The proof is given in Theorem 2.27 and Theorem 2.71 in [47] for the case of Lie point symmetries, which are diffeomorphisms of $J^k(M,N)$ for $k \geq 0$. Since contact transformations are diffeomorphisms of $J^h(M,N)$ for any $h \geq 1$, the same proof applies. □

Let us discuss here the classical Noether theorem in the Lagrangian mechanics setting described in Section 2.1. Heuristically, Noether theorem says that to any infinitesimal transformation leaving invariant the optimal control problem, namely equation (2.1) and the Lagrangian $L$, a constant of motion is associated. More precisely, let $Y^{x,a}$ be a vector field in $\mathbb{R}^n \times \mathbb{R}^n$ transforming the variables $x_i$ and $a_i$ of equation (2.1) and the Lagrangian $L$. We suppose that $Y^{x,a}$ is "projected" with respect to the variables $x_i$, that is,
$$Y^{x,a} = \sum_{i=1}^{n} \big( f_i(x)\,\partial_{x_i} + g_i(x,a)\,\partial_{a_i} \big). \qquad (3.9)$$
If we want the projected vector field (3.9) to be a symmetry of equation (2.1), then we need that
$$g_i(x,a) = \sum_{j=1}^{n} \partial_{x_j} f_i(x)\, a_j. \qquad (3.10)$$
If we also require that $L$ is invariant with respect to the flow of $Y^{x,a}$, then we must have
$$Y^{x,a}(L)(x,a) = \sum_{i=1}^{n} f_i(x)\,\partial_{x_i} L(x,a) + \sum_{i,j=1}^{n} \partial_{x_j} f_i(x)\, a_j\, \partial_{a_i} L(x,a) = 0. \qquad (3.11)$$
So we say that $Y^{x,a}$ is a symmetry of the optimal control problem of Section 2.1 if and only if conditions (3.10) and (3.11) hold.

Theorem 3.19 (Noether theorem). Let $Y^{x,a}$ be a symmetry of the Lagrangian $L$ according with equation (3.11). Then, supposing the existence of a $C^1$ optimal control $\alpha_t$, we have that
$$\sum_{i=1}^{n} f_i(X_t)\,\partial_{a_i} L(X_t,\alpha_t) \qquad (3.12)$$
is constant with respect to time $t \in [t_0,T]$.

Proof. Let us compute the derivative with respect to time of (3.12); then, by the Euler-Lagrange equations (2.4), we have
$$\frac{\mathrm{d}}{\mathrm{d}t}\Big( \sum_{i=1}^{n} f_i(X_t)\,\partial_{a_i} L(X_t,\alpha_t) \Big) = \sum_{i,j=1}^{n} \partial_{x_j} f_i(X_t)\,\partial_{a_i} L(X_t,\alpha_t)\,\frac{\mathrm{d}X^j_t}{\mathrm{d}t} + \sum_{i=1}^{n} f_i(X_t)\,\frac{\mathrm{d}}{\mathrm{d}t}\big( \partial_{a_i} L(X_t,\alpha_t) \big) = \sum_{i,j=1}^{n} \partial_{x_j} f_i(X_t)\,\alpha^j_t\,\partial_{a_i} L(X_t,\alpha_t) + \sum_{i=1}^{n} f_i(X_t)\,\partial_{x_i} L(X_t,\alpha_t),$$
which is zero as a consequence of equation (3.11).
□

It is possible to give an equivalent formulation of Theorem 3.19 using the Lie point symmetries of the Hamilton-Jacobi equation.
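Before moving to the Hamilton-Jacobi formulation, Theorem 3.19 can be checked on a toy model. For the free particle in the plane, with $L(x,a) = |a|^2/2$, the rotation generator $f(x) = (-x_2, x_1)$ satisfies (3.10)-(3.11), and the conserved quantity (3.12) is the angular momentum $-X^2_t \alpha^1_t + X^1_t \alpha^2_t$. A minimal numerical sketch (the initial data are arbitrary illustrative values):

```python
import numpy as np

# Free particle: Euler-Lagrange gives straight lines X_t = x0 + v*t,
# with constant optimal control alpha_t = v.
x0 = np.array([1.0, 2.0])
v = np.array([0.3, -0.7])

# Quantity (3.12) for the rotation generator f(x) = (-x2, x1):
# sum_i f_i(X_t) * d_{a_i} L = -X2 * v1 + X1 * v2.
ts = np.linspace(0.0, 5.0, 11)
Q = [-(x0[1] + v[1]*t)*v[0] + (x0[0] + v[0]*t)*v[1] for t in ts]

# Constant along the whole trajectory.
assert max(Q) - min(Q) < 1e-12
```

The $t$-dependent terms cancel exactly, which is the content of the invariance condition (3.11) for this Lagrangian.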
Theorem 3.20 (Noether theorem, Hamilton-Jacobi version). Let $\Omega(x,u_x) = \sum_{i=1}^{n} f_i(x)\,u_{x_i}$ be a contact symmetry of the Hamilton-Jacobi equation (2.3). Then, if $U \in C^{1,2}([t_0,T] \times \mathbb{R}^n, \mathbb{R})$ is a solution to equation (2.3), we have that
$$\Omega(X_t, \nabla U(X_t)) = \sum_{i=1}^{n} f_i(X_t)\,\partial_{x_i} U(X_t), \qquad (3.13)$$
where $X_t$ is the solution to equation (2.1) with $\alpha^i_t = \mathcal{A}_i(X_t, \nabla U(X_t))$ (see Section 2 for the definition of the map $\mathcal{A}$), is constant with respect to time $t \in [t_0,T]$.

Lemma 3.21. $Y_\Omega$ is a contact symmetry of the Hamilton-Jacobi equation (2.3) if and only if
$$\sum_{i=1}^{n} \big( \partial_{x_i}\Omega\,\partial_{u_{x_i}} H - \partial_{u_{x_i}}\Omega\,\partial_{x_i} H \big) = 0. \qquad (3.14)$$

Proof.
It is a consequence of equation (3.4) and Definition 3.15. See, e.g., Section 21.2 in [56]. □
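Condition (3.14) is a vanishing Poisson bracket of $\Omega$ with $H$, so it can be verified symbolically for concrete pairs. A sympy sketch for a rotationally invariant Hamiltonian and the angular-momentum generator (both are illustrative choices, not taken from the text):

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')
V = sp.Function('V')

# Rotationally invariant Hamiltonian H and angular momentum Omega
H = (p1**2 + p2**2)/2 + V(x1**2 + x2**2)
Omega = x1*p2 - x2*p1

# Left-hand side of (3.14) with u_{x_i} written as p_i
bracket = (sp.diff(Omega, x1)*sp.diff(H, p1) - sp.diff(Omega, p1)*sp.diff(H, x1)
         + sp.diff(Omega, x2)*sp.diff(H, p2) - sp.diff(Omega, p2)*sp.diff(H, x2))
assert sp.simplify(bracket) == 0
```

Replacing $V$ by a non-radial potential makes the bracket nonzero, so the check is not vacuous.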
Proof of Theorem 3.20.
See the proof of Theorem 4.3 below, where the statement is proven in the general stochastic case. □
Remark 3.22. The two formulations of Noether theorem given by Theorem 3.19 and Theorem 3.20 are equivalent in the sense that $Y^{x,a} = \sum_{i,j=1}^{n} \big( f_i(x)\,\partial_{x_i} + \partial_{x_j} f_i(x)\,a_j\,\partial_{a_i} \big)$ is a symmetry of the optimal control problem if and only if $\Omega$ is a contact symmetry of the related Hamilton-Jacobi equation, namely, equation (3.14) holds. Furthermore, if we choose the optimal control $\alpha^i_t$ to be equal to $\mathcal{A}_i(X_t, \nabla U(X_t))$, then the two conserved quantities (3.12) and (3.13) are equal.

Considering $M = \mathbb{R}_+ \times \mathbb{R}^n$ and denoting the first variable by $t$ and the other independent variables by $x_i$, for $i = 1,\ldots,n$, for the Hamilton-Jacobi-Bellman equation we have that $\Delta_{\mathcal{E}}$ is described by the equation
$$u_t + \max_{a \in K} \Big\{ \frac{1}{2}\sum_{i,j=1}^{n} \eta_{ij}(t,x,a)\,u_{x_i x_j} + \sum_{i=1}^{n} \mu_i(t,x,a)\,u_{x_i} + L(t,x,a) \Big\} = 0. \qquad (4.1)$$
Equation (4.1) is a special kind of evolution equation, since it has the form
$$u_t + H(t,x,u,u_x,u_{xx}) = 0, \qquad (4.2)$$
for some smooth function $H \in C^\infty(\mathbb{R}_+ \times J^2(\mathbb{R}^n,\mathbb{R}))$, where $u_x = (u_{x_1},\ldots,u_{x_n})$ and $u_{xx} = (u_{x_i x_j})_{i,j=1,\ldots,n}$. In this case, it is convenient to choose a generating function of the form
$$\Omega(t,x,u,u_x). \qquad (4.3)$$

Remark 4.1. It is important to notice that, for a generic contact symmetry on $J^1(M,\mathbb{R}) = J^1(\mathbb{R}_+ \times \mathbb{R}^n, \mathbb{R})$, the generating function has the form
$$\tilde{\Omega}(t,x,u,u_t,u_x), \qquad (4.4)$$
depending also on the variable $u_t$, which represents the time derivative. Choosing a generating function of the form (4.3) instead of the form (4.4) amounts to considering contact transformations that do not change the time variable $t$. The main reason is that the time variable in stochastic equations plays a peculiar role and cannot be changed in the same way as the spatial variable $x$.
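The reduction of (4.1) to the evolution form (4.2) can be made concrete in one dimension. A minimal sympy sketch, where the scalar drift $\mu(t,x,a) = a$, constant diffusion $\sigma$, and running cost $L(t,x,a) = -a^2/2$ are illustrative choices, not the paper's data:

```python
import sympy as sp

ux, uxx, a, sig = sp.symbols('u_x u_xx a sigma')

# Inner expression of (4.1) for mu = a, constant sigma, L = -a**2/2
expr = a*ux + sp.Rational(1, 2)*sig**2*uxx - a**2/2

# Maximize over a: first-order condition gives a* = u_x
a_star = sp.solve(sp.diff(expr, a), a)[0]
H = sp.simplify(expr.subs(a, a_star))

# The HJB equation becomes u_t + H(u_x, u_xx) = 0, an equation of type (4.2)
assert sp.simplify(H - (ux**2/2 + sig**2*uxx/2)) == 0
```

Here the maximization is smooth and unconstrained; with a compact control set $K$ the supremum may instead be attained on the boundary, which is why Assumption 1 below only asks for a measurable selection.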
Nevertheless, in [36, 37, 57] also a special kind of time change has been considered, corresponding to the generating function
$$\tilde{\Omega} = f(t)\,u_t + \Omega_{\mathrm{Lie},f,g}(t,x,u,u_x), \qquad (4.5)$$
where $f \in C^\infty(\mathbb{R}_+,\mathbb{R})$ and $\Omega_{\mathrm{Lie},f,g}(t,x,u,u_x)$ is the generator of a Lie point transformation, see equation (3.7) (see also Remark 4.4 for a further discussion of this point).

Theorem 4.2.
Consider an evolution PDE of the form (4.2). An infinitesimal contact transformation generated by a function $\Omega$ of the form (4.3) is a contact symmetry for equation (4.2) if and only if
$$\partial_t\Omega - H\,\partial_u\Omega + \sum_{i=1}^{n} \big( \mathcal{D}_{x_i}\Omega\,\partial_{u_{x_i}} H - \mathcal{D}_{x_i} H\,\partial_{u_{x_i}}\Omega \big) + \sum_{i,j=1}^{n} \mathcal{D}_{x_i x_j}\Omega\,\partial_{u_{x_i x_j}} H = 0, \qquad (4.6)$$
where the operators $\mathcal{D}_{x_i}$ are defined in equation (3.6) and $\mathcal{D}_{x_i x_j}(\cdot) = \mathcal{D}_{x_i}(\mathcal{D}_{x_j}(\cdot))$.

Proof. The statement follows directly from Theorem 3.9 and Theorem 3.18 (in particular equations (3.4) and (3.8)). □
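The determining equation (4.6) can be checked mechanically on a toy evolution equation. A sympy sketch for $n = 1$ with $H = u_{xx}/2$ (a heat-type equation; the choice of $H$ and the truncation of $\mathcal{D}_x$ at fourth-order jet variables are sufficient for these test functions and are not taken from the paper):

```python
import sympy as sp

t, x, u, ux, uxx, uxxx, uxxxx = sp.symbols('t x u u_x u_xx u_xxx u_xxxx')

def D_x(F):
    # Total derivative (3.6), truncated at the jet variables used here
    return (sp.diff(F, x) + ux*sp.diff(F, u) + uxx*sp.diff(F, ux)
            + uxxx*sp.diff(F, uxx) + uxxxx*sp.diff(F, uxxx))

H = uxx/2   # evolution equation u_t + u_xx/2 = 0

def determining(Omega):
    # Left-hand side of (4.6) specialized to n = 1
    return sp.simplify(sp.diff(Omega, t) - H*sp.diff(Omega, u)
                       + D_x(Omega)*sp.diff(H, ux)
                       + D_x(D_x(Omega))*sp.diff(H, uxx)
                       - D_x(H)*sp.diff(Omega, ux))

assert determining(u) == 0       # dilation of u is a symmetry
assert determining(ux) == 0      # generator of translation in x
assert determining(u**2) != 0    # u**2 is not a symmetry generator
```

The first two generating functions satisfy (4.6) identically, while the third leaves a residual $u_x^2$, confirming that the equation actually discriminates.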
Let us introduce
$$O_t = \Omega(t, X_t, U(t,X_t), \nabla U(t,X_t)),$$
where $X_t$ is a solution to equation (2.5) with respect to an optimal control $A^*_t$.

Assumption 1.
There exists at least one measurable function $\mathcal{A}(t,x,u_x,u_{xx})$ such that
$$\mathcal{A}(t,x,u_x,u_{xx}) \in \arg\max_{a \in K} \mathcal{H}(t,x,u_x,u_{xx},a),$$
where
$$\mathcal{H}(t,x,u_x,u_{xx},a) = \sum_{i=1}^{n} \mu_i(t,x,a)\,u_{x_i} + \frac{1}{2}\sum_{i,j=1}^{n}\sum_{\ell=1}^{m} \sigma_{i\ell}(t,x,a)\,\sigma_{j\ell}(t,x,a)\,u_{x_i x_j} + L(t,x,a).$$

As a consequence of Assumption 1, we can choose the process $\alpha_t = \mathcal{A}(t, X_t, \nabla U(t,X_t), D^2 U(t,X_t))$ to be the optimal control, provided that the solution $U$ to equation (2.9) is at least $C^2$. The next result is our first stochastic generalization of Noether theorem.

Theorem 4.3. Let Assumption 1 hold true. Suppose that the solution $U$ to equation (2.9) is continuously differentiable with respect to time and $C^3$ with respect to $x$. If $\Omega$ is a contact symmetry of equation (2.9), then $O_t$ is a local martingale.

Remark 4.4. The works [36, 37, 57] present a Noether theorem involving a time change and a Lie point transformation with a generator of the form (4.5), for an optimal control system with affine-type control and an objective function depending quadratically on the control. More precisely, they proved that, if $\tilde{\Omega}$ of the form (4.5) is a symmetry of the HJB equation, then the process
$$\hat{O}_t = \tilde{\Omega}(t, X_t, U(t,X_t), \partial_t U(t,X_t), \nabla U(t,X_t)) = -f(t)\,H(t, \nabla U(t,X_t), D^2 U(t,X_t)) + \Omega_{\mathrm{Lie},f,g}(t, X_t, U(t,X_t), \nabla U(t,X_t)) \qquad (4.7)$$
is a local martingale. The presence of some time invariance was essential in the papers [3, 51] for extending the concept of integrable systems to the stochastic framework. We expect that the martingale property of the process (4.7) holds also in the general setting presented here. Since it is not completely clear what the role of the time change is in our setting, and whether the conservation of (4.7) holds for more general time changes, we prefer to postpone this analysis to later works.

From now on we take $H$ as in Section 2.2, namely,
$$H(t,x,u_x,u_{xx}) = \sup_{a \in K} \mathcal{H}(t,x,u_x,u_{xx},a).$$
In order to prove Theorem 4.3, we first state the following result.
Lemma 4.5.
We have that
$$\partial_{u_{x_i}} H = \mu_i\big(t,x,\mathcal{A}(t,x,u_x,u_{xx})\big), \qquad \partial_{u_{x_i x_j}} H = \frac{1}{2}\sum_{\ell=1}^{m} \sigma_{i\ell}\big(t,x,\mathcal{A}(t,x,u_x,u_{xx})\big)\,\sigma_{j\ell}\big(t,x,\mathcal{A}(t,x,u_x,u_{xx})\big).$$
In the case where $\mu$, $\sigma$, and $\mathcal{A}$ are $C^1$ in all their variables, the result follows from the fact that
$$\partial_{a_i} \mathcal{H}\big(t,x,u_x,u_{xx},\mathcal{A}(t,x,u_x,u_{xx})\big) = 0.$$
The general case is a consequence of Assumption 1 and the Envelope Theorem. For the latter, we refer the reader to, e.g., [13, 42]. □
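A one-dimensional toy version of this envelope argument, with drift $\mu(a) = a$ and running cost $L(a) = -a^2/2$ as illustrative choices (second-order terms omitted, since they do not depend on $a$ here):

```python
import sympy as sp

p, a = sp.symbols('p a')

# Toy pre-Hamiltonian: Hcal(p, a) = a*p - a**2/2, maximizer A(p) = p
Hcal = a*p - a**2/2
A = sp.solve(sp.diff(Hcal, a), a)[0]        # first-order condition in a
H = Hcal.subs(a, A)                          # H(p) = p**2/2

# Envelope theorem: dH/dp equals the partial p-derivative of Hcal at a = A(p),
# i.e. the drift mu(A(p)) = A(p) in this toy model -- the content of Lemma 4.5.
assert sp.simplify(sp.diff(H, p) - sp.diff(Hcal, p).subs(a, A)) == 0
assert sp.simplify(sp.diff(H, p) - A) == 0
```

The point is that the term coming from differentiating through the maximizer $A(p)$ drops out because $\partial_a \mathcal{H} = 0$ at the optimum.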
Proof of Theorem 4.3.
We compute the differential of 𝑂 𝑡 using Itô formula, to getd 𝑂 𝑡 = d Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) = 𝜕 𝑡 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝜕 𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑈 ( 𝑡, 𝑋 𝑡 )+ 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑋 𝑖𝑡 + 𝑛 X 𝑖 = 𝜕 𝑢 𝑥𝑖 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 )+ 𝑛 X 𝑖, 𝑗 = 𝜕 𝑥 𝑖 𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝑋 𝑖 , 𝑋 𝑗 ] 𝑡 + 𝜕 𝑢𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝑈 (· , 𝑋 · ) , 𝑈 (· , 𝑋 · )] 𝑡 + 𝑛 X 𝑗 = 𝜕 𝑢𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝑈 (· , 𝑋 · ) , 𝑋 𝑗 ] 𝑡 𝑛 X 𝑗 = 𝜕 𝑢𝑢 𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝜕 𝑥 𝑗 𝑈 (· , 𝑋 · ) , 𝑈 (· , 𝑋 · )] 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝜕 𝑢 𝑥𝑖 𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝜕 𝑥 𝑖 𝑈 (· , 𝑋 · ) , 𝑋 𝑗 ] 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d [ 𝜕 𝑥 𝑖 𝑈 (· , 𝑋 · ) , 𝜕 𝑥 𝑗 𝑈 (· , 𝑋 · )] 𝑡 . Since 𝑈 ∈ 𝐶 , ( [ , 𝑇 ] × R 𝑛 , R ) , we also haved 𝑈 ( 𝑡, 𝑋 𝑡 ) = 𝜕 𝑡 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑡 + 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑋 𝑖𝑡 + 𝑛 X 𝑖, 𝑗 = 𝜕 𝑥 𝑖 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) d [ 𝑋 𝑖 , 𝑋 𝑗 ] 𝑡 , (4.8)d 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 ) = 𝜕 𝑥 𝑖 ,𝑡 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑡 + 𝑛 X 𝑗 = 𝜕 𝑥 𝑖 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑋 𝑗𝑡 + 𝑛 X 𝑗,𝑘 = 𝜕 𝑥 𝑖 𝑥 𝑗 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) d [ 𝑋 𝑗 , 𝑋 𝑘 ] 𝑡 . 
(4.9)Exploiting equations (4.8) and (4.9), the fact that 𝑋 𝑡 is solution to (2.5), and the relationsd [ 𝑋 𝑖 , 𝑋 𝑗 ] 𝑡 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡, d [ 𝑈 (· , 𝑋 · ) , 𝑋 𝑖 ] 𝑡 = 𝑛 X 𝑗 = 𝑚 X ℓ = 𝜕 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡, d [ 𝑈 (· , 𝑋 · ) , 𝑈 (· , 𝑋 · )] 𝑡 = 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = 𝜕 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡, d [ 𝑈 (· , 𝑋 · ) , 𝜕 𝑥 𝑖 𝑈 (· , 𝑋 · )] 𝑡 = 𝑛 X 𝑘, 𝑗 = 𝑚 X ℓ = 𝜕 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜕 𝑥 𝑖 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑘ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡, d [ 𝜕 𝑥 𝑙 𝑈 (· , 𝑋 · ) , 𝜕 𝑥 𝑖 𝑈 (· , 𝑋 · )] 𝑡 = 𝑛 X 𝑘, 𝑗 = 𝑚 X ℓ = 𝜕 𝑥 𝑙 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜕 𝑥 𝑖 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡, we obtaind 𝑂 𝑡 = 𝑛 X 𝑖,𝑘 = 𝜇 𝑖 ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) (cid:16) 𝜕 𝑥 𝑖 Ω + 𝑢 𝑥 𝑖 𝜕 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖, 𝑗,𝑘,𝑙 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) (cid:18) 𝜕 𝑥 𝑖 ,𝑥 𝑗 Ω + 𝑢 𝑥 𝑗 𝜕 𝑥 𝑖 ,𝑢 Ω + 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑥 𝑖 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 ,𝑢 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝜕 𝑢𝑢 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝜕 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑥 𝑗 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝜕 𝑢 𝑥𝑘 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝑥 𝑙 𝜕 𝑢 𝑥𝑘 𝑢 𝑥𝑙 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω (cid:19) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝜕 𝑡 𝑈 𝜕 𝑢 Ω + 𝑛 X 𝑖 = 𝜕 𝑡 𝑥 𝑖 𝑈 𝜕 𝑢 𝑥𝑖 Ω + 𝜕 𝑡 Ω ! ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + d 𝑀 𝑡 , where 𝑀 𝑡 is a local martingale. 
Using the explicit definition of D 𝑥 𝑖 , it is simple to note that D 𝑥 𝑖 Ω = 𝜕 𝑥 𝑖 Ω + 𝑢 𝑥 𝑖 𝜕 𝑢 Ω + 𝑛 X 𝑘 = 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω , D 𝑥 𝑖 𝑥 𝑗 Ω = 𝜕 𝑥 𝑖 𝑥 𝑗 Ω + 𝑢 𝑥 𝑗 𝜕 𝑥 𝑖 𝑢 Ω + 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 𝑢 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝜕 𝑢𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝜕 𝑢 Ω + 𝑛 X 𝑘,𝑙 = (cid:16) 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑥 𝑖 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑥 𝑗 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝜕 𝑢 𝑥𝑘 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝑥 𝑙 𝜕 𝑢 𝑥𝑘 𝑢 𝑥𝑙 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω (cid:17) , and we have 𝜕 𝑡 𝑈 = − 𝐻 ( 𝑡, 𝑥, ∇ 𝑈, 𝐷 𝑈 ) and 𝜕 𝑡,𝑥 𝑖 𝑈 = −( D 𝑥 𝑖 𝐻 ) ( 𝑡, 𝑥, ∇ 𝑈, 𝐷 𝑈, 𝐷 𝑈 ) . 𝛼 𝑡 = A ( 𝑡, 𝑋 𝑡 , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) , and the determiningequations (4.6), we obtaind 𝑂 𝑡 == 𝑛 X 𝑖 = 𝜇 𝑖 ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) ( D 𝑥 𝑖 Ω ) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑚 X ℓ = 𝑛 X 𝑖, 𝑗 = 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) · 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) ( D 𝑥 𝑖 𝑥 𝑗 Ω ) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖 = (cid:16) − 𝐻𝜕 𝑢 Ω − D 𝑥 𝑖 𝐻𝜕 𝑢 𝑥𝑖 Ω + 𝜕 𝑡 Ω (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + d 𝑀 𝑡 = 𝑛 X 𝑖, 𝑗 = (cid:16) D 𝑥 𝑖 Ω 𝜕 𝑢 𝑥𝑖 𝐻 + D 𝑥 𝑖 𝑥 𝑗 Ω 𝜕 𝑢 𝑥𝑖𝑥 𝑗 𝐻 − 𝐻𝜕 𝑢 Ω − D 𝑥 𝑖 𝐻𝜕 𝑢 𝑥𝑖 Ω + 𝜕 𝑡 Ω (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + d 𝑀 𝑡 = d 𝑀 𝑡 , which concludes the proof. (cid:3) We face the problem of stochastic HJB equation, that is, we consider, as in Section 2.3, H 𝑆 ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 , 𝑎, 𝜓 𝑥 ) = 𝑛 X 𝑖 = 𝜇 𝑖 ( 𝑡, 𝑥, 𝑎, 𝜔 ) 𝑢 𝑥 𝑖 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑥, 𝑎, 𝜔 ) 𝜎 𝑗ℓ ( 𝑡, 𝑥, 𝑎, 𝜔 ) 𝑢 𝑥 𝑖 𝑥 𝑗 + 𝑛 X 𝑖 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑥, 𝑎, 𝜔 ) 𝜓 ℓ𝑥 𝑖 + 𝐿 𝑆 ( 𝑡, 𝑥, 𝑎 ) . (4.10)and 𝐻 𝑆 ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 , 𝜓 𝑥 ) = sup 𝑎 ∈ 𝐾 H 𝑆 ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 , 𝑎, 𝜓 𝑥 ) . In this case, d 𝑈 𝑡 ( 𝑥 ) = − 𝐻 𝑆 ( 𝑡, 𝑥, ∇ 𝑈, 𝐷 𝑈, ∇ Ψ ) d 𝑡 + 𝑚 X ℓ = Ψ ℓ𝑡 ( 𝑥 ) d 𝑊 ℓ𝑡 . (4.11)Though some ideas concerning symmetries for SPDEs are discussed, e.g., in [14] and [15], a general theoryhas not been developed yet. 
For this reason, we extend the notion of infinitesimal symmetry introduced in Definition 3.15 in the following way. Hereafter, we consider the probability space $(\mathbb{W}, \mathcal{F}_t, \mathbb{P})$, where $\mathbb{W} = C(\mathbb{R}_+, \mathbb{R}^m)$ is the canonical space for the Brownian motion $W$, $\mathcal{F}_t$ is the natural filtration generated by $W_t$, and $\mathbb{P}$ is the Wiener measure on $\mathbb{W}$.

Definition 4.6.
Let $\Omega \colon \mathbb{R}_+ \times J^1(\mathbb{R}^n,\mathbb{R}) \times \mathbb{W} \to \mathbb{R}$ be a predictable regular random field on $\mathbb{R}_+ \times J^1(\mathbb{R}^n,\mathbb{R})$ which is $C^1$ with respect to the time $t$ and $C^2$ in all other variables. We say that $Y_\Omega$ is a contact symmetry for equation (4.11) when we have
$$\partial_t\Omega - H_S\,\partial_u\Omega + \sum_{i=1}^{n} \big( \mathcal{D}_{x_i}\Omega\,\partial_{u_{x_i}} H_S - \mathcal{D}_{x_i} H_S\,\partial_{u_{x_i}}\Omega \big) + \sum_{i,j=1}^{n} \mathcal{D}_{x_i x_j}\Omega\,\partial_{u_{x_i x_j}} H_S = 0.$$
There exists at least one measurable function $\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x)$ such that
$$\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x) \in \arg\max_{a \in K} \mathcal{H}_S(t,x,u_x,u_{xx},a,\psi_x),$$
where $\mathcal{H}_S$ is defined by equation (4.10).

Lemma 4.7. We have that
$$\partial_{u_{x_i}} H_S = \mu_i\big(t,x,\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x),\omega\big),$$
$$\partial_{u_{x_i x_j}} H_S = \frac{1}{2}\sum_{\ell=1}^{m} \sigma_{i\ell}\big(t,x,\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x),\omega\big)\,\sigma_{j\ell}\big(t,x,\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x),\omega\big),$$
$$\partial_{\psi^\ell_{x_i}} H_S = \sigma_{i\ell}\big(t,x,\mathcal{A}_S(t,x,u_x,u_{xx},\psi_x),\omega\big).$$
The proof is similar to the one of Lemma 4.5. □
The following result represents our second stochastic generalization of Noether theorem.
Theorem 4.8.
Let Assumption 2 hold true. Suppose that the solution ( 𝑈, Ψ ) to equation (4.11) is continu-ously differentiable with respect to time and 𝐶 with respect to 𝑥 almost surely. If Ω is a contact symmetryof equation (2.9) , then ˜ 𝑂 𝑡 = Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ))− ∫ 𝑡 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢𝑢 Ω (cid:16) ( Ψ ℓ𝑠 ) + 𝑢 𝑥 𝑖 𝜎 𝑖ℓ Ψ ℓ𝑠 (cid:17) − 𝜕 𝑥 𝑖 𝑢 Ω 𝜎 𝑖ℓ Ψ ℓ − 𝜕 𝑢 𝑥𝑖 Ω 𝜕 𝑥 𝑖 𝜎 𝑗ℓ Ψ ℓ𝑥 𝑗 + 𝜕 𝑥 𝑖 𝑢 𝑥 𝑗 Ω 𝜎 𝑖ℓ Ψ ℓ𝑥 𝑗 + 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 Ω ( Ψ ℓ𝑥 𝑖 Ψ ℓ𝑥 𝑗 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 Ψ ℓ𝑥 𝑗 + 𝜎 𝑗ℓ 𝑢 𝑥 𝑗 Ψ ℓ𝑥 𝑖 )+ 𝜕 𝑢𝑢 𝑥 𝑗 Ω (cid:0) Ψ ℓ Ψ ℓ𝑥 𝑗 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 Ψ ℓ𝑥 𝑗 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝑥 𝑗 Ψ ℓ (cid:1) (cid:17) ( 𝑠, 𝑋 𝑠 , 𝑈 ( 𝑠, 𝑋 𝑠 ) , ∇ 𝑈 ( 𝑠, 𝑋 𝑠 )) d 𝑠 (4.12) is a local martingale.Proof. Since the proof is similar to the one of Theorem 4.3, we report here only some steps of the proof.By Theorem 2.7, we haved 𝑈 ( 𝑡, 𝑋 𝑡 ) = − 𝐻 𝑆 ( 𝑡, 𝑋 𝑡 , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ Ψ 𝑡 ( 𝑋 𝑡 )) d 𝑡 + 𝑚 X ℓ = Ψ ℓ𝑡 ( 𝑋 𝑡 ) d 𝑊 ℓ𝑡 + 𝑚 X ℓ = 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 Ψ ℓ ( 𝑋 𝑡 ) d [ 𝑊 ℓ , 𝑋 𝑖 ] 𝑡 + 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑋 𝑖𝑡 + 𝑛 X 𝑖, 𝑗 = 𝜕 𝑥 𝑖 𝑥 𝑗 𝑈 ( 𝑡, 𝑋 𝑡 ) d [ 𝑋 𝑖 , 𝑋 𝑗 ] 𝑡 , (4.13)and d 𝜕 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) = − D 𝑥 𝑘 𝐻 𝑆 ( 𝑡, 𝑋 𝑡 , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑋 𝑡 , 𝑡 ) , ∇ Ψ 𝑡 ( 𝑋 𝑡 )) d 𝑡 + 𝑚 X ℓ = 𝜕 𝑥 𝑘 Ψ ℓ𝑡 ( 𝑋 𝑡 ) d 𝑊 ℓ𝑡 + 𝑛 X 𝑖 = 𝑚 X ℓ = 𝜕 𝑥 𝑖 𝑥 𝑘 Ψ ℓ𝑡 ( 𝑋 𝑡 ) d [ 𝑊 ℓ , 𝑋 𝑖 ] 𝑡 + 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) d 𝑋 𝑖𝑡 + 𝑛 X 𝑖, 𝑗 = 𝜕 𝑥 𝑖 𝑥 𝑗 𝑥 𝑘 𝑈 ( 𝑡, 𝑋 𝑡 ) d [ 𝑋 𝑖 , 𝑋 𝑗 ] 𝑡 . 
(4.14)Writing, as usual, 𝑂 𝑡 = Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) and 𝛼 𝑡 = A 𝑆 ( 𝑡, 𝑋 𝑡 , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ Ψ 𝑡 ( 𝑋 𝑡 )) ,we haved 𝑂 𝑡 = 𝑛 X 𝑖 = 𝜇 𝑖 ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜕 𝑥 𝑖 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜕 𝑥 𝑖 𝑥 𝑗 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝜕 𝑢 Ω d 𝑈 ( 𝑡, 𝑋 𝑡 ) + 𝜕 𝑢𝑢 Ω d [ 𝑈, 𝑈 ] + 𝑛 X 𝑖 = 𝜕 𝑥 𝑖 𝑢 Ω d [ 𝑋 𝑖 , 𝑈 ] + 𝑛 X 𝑖 = 𝜕 𝑢 𝑥𝑖 Ω d 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑋 𝑡 )+ 𝑛 X 𝑖, 𝑗 = 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 Ω d [ 𝜕 𝑥 𝑖 𝑈, 𝜕 𝑥 𝑗 𝑈 ] + 𝑛 X 𝑗 = 𝜕 𝑢𝑢 𝑥 𝑗 Ω d [ 𝑈, 𝜕 𝑥 𝑗 𝑈 ] + 𝑛 X 𝑖,𝑖 = 𝜕 𝑥 𝑖 𝑢 𝑥 𝑗 Ω d [ 𝑋 𝑖 , 𝜕 𝑥 𝑗 𝑈 ]+ 𝜕 𝑡 Ω d 𝑡 + d ˜ 𝑀 𝑡 . 𝑂 𝑡 = 𝑛 X 𝑖 = 𝜇 𝑖 ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜕 𝑥 𝑖 Ω + 𝑢 𝑥 𝑖 𝜕 𝑢 Ω + 𝑛 X 𝑘 = 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω ! ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = 𝜎 𝑖ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝜎 𝑗ℓ ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) 𝑛 X 𝑘,𝑙 = 𝜕 𝑥 𝑖 𝑥 𝑗 Ω + 𝑢 𝑥 𝑗 𝜕 𝑥 𝑖 𝑢 Ω + 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑥 𝑖 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 𝑢 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝜕 𝑢,𝑢 Ω + 𝑢 𝑥 𝑖 𝑢 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝜕 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝜕 𝑥 𝑗 𝑢 𝑥𝑘 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝜕 𝑢 𝑥𝑘 𝑢 Ω + 𝑢 𝑥 𝑖 𝑥 𝑘 𝑢 𝑥 𝑗 𝑥 𝑙 𝜕 𝑢 𝑥𝑘 𝑢 𝑥𝑙 Ω + 𝑢 𝑥 𝑖 𝑥 𝑗 𝑥 𝑘 𝜕 𝑢 𝑥𝑘 Ω ! 
( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 − 𝜕 𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) 𝐻 𝑆 ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ Ψ 𝑡 ( 𝑋 𝑡 )) d 𝑡 + 𝜕 𝑢𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) 𝑛 X 𝑖 = 𝑚 X ℓ = (cid:16) Ψ ℓ𝑡 ( 𝑥 ) + 𝜕 𝑥 𝑖 𝑈 ( 𝑡, 𝑥 ) 𝜎 𝑖ℓ ( 𝑥, 𝑎 ) Ψ ℓ𝑡 ( 𝑥 ) (cid:17) ( 𝑡, 𝑋 𝑡 , 𝛼 𝑡 ) d 𝑡 + 𝑛 X 𝑖 = 𝑚 X ℓ = 𝜕 𝑥 𝑖 𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) 𝜎 𝑖ℓ ( 𝑋 𝑡 , 𝛼 𝑡 ) Ψ ℓ𝑡 ( 𝑋 𝑡 ) d 𝑡 − 𝑛 X 𝑖 = (cid:16) 𝜕 𝑢 𝑥𝑖 Ω 𝐷 𝑥 𝑖 𝐻 𝑆 (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝐷 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ Ψ 𝑡 ( 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖,𝑘 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢 𝑥𝑖 Ω 𝜎 𝑘ℓ 𝜕 𝑥 𝑖 𝑥 𝑘 Ψ ℓ𝑡 + 𝜕 𝑢 Ω 𝜎 𝑖ℓ Ψ ℓ𝑥 𝑖 (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 Ω ( 𝜕 𝑥 𝑖 Ψ ℓ𝑡 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 Ψ ℓ + 𝜎 𝑗ℓ 𝑢 𝑥 𝑗 𝜕 𝑥 𝑖 Ψ ℓ ) (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑥 𝑖 𝑢 𝑥 𝑗 Ω 𝜎 𝑖ℓ Ψ ℓ𝑥 𝑗 (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + 𝑛 X 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢𝑢 𝑥 𝑗 Ω ( Ψ ℓ𝑡 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝑥 𝑗 Ψ ℓ𝑡 ) (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + d ˜ 𝑀 𝑡 Notice that, by Definition 4.6, we have0 = 𝜕 𝑢 Ω 𝐻 𝑆 + 𝑛 X 𝑖, 𝑗 = (cid:16) 𝜕 𝑢 𝑥𝑖 Ω D 𝑥 𝑖 𝐻 𝑆 − 𝜕 𝑢 𝑥𝑖 𝐻 𝑆 D 𝑥 𝑖 Ω − 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 𝐻 𝑆 D 𝑥 𝑖 𝑥 𝑗 Ω (cid:17) , which, by Lemma 4.7, is equivalent to 𝜕 𝑢 Ω 𝐻 𝑆 + 𝑛 X 𝑖, 𝑗 = (cid:16) 𝜕 𝑢 𝑥𝑖 Ω D 𝑥 𝑖 𝐻 𝑆 − 𝜕 𝑢 𝑥𝑖 𝐻 𝑆 D 𝑥 𝑖 Ω − 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 𝐻 𝑆 D 𝑥 𝑖 𝑥 𝑗 Ω (cid:17) == 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢 𝑥𝑖 Ω ( 𝜕 𝑥 𝑖 𝜎 𝑗ℓ Ψ ℓ𝑥 𝑗 + 𝜎 𝑗ℓ Ψ ℓ𝑥 𝑗 𝑥 𝑖 ) + 𝜕 𝑢 Ω ( 𝜎 𝑖ℓ Ψ ℓ𝑥 𝑖 ) (cid:17) . 
Then we obtaind 𝑂 𝑡 = 𝜕 𝑢,𝑢 Ω ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) 𝑛 X 𝑖 = 𝑚 X ℓ = (cid:16) ( Ψ ℓ𝑡 ) + 𝑢 𝑥 𝑖 𝜎 𝑖ℓ Ψ ℓ𝑡 (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖 = 𝑚 X ℓ = (cid:16) 𝜕 𝑥 𝑖 𝑢 Ω 𝜎 𝑖ℓ Ψ ℓ (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 )) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢 𝑥𝑖 𝑢 𝑥 𝑗 Ω ×× ( 𝜕 𝑥 𝑖 Ψ ℓ𝑡 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑗ℓ 𝑢 𝑥 𝑗 𝜕 𝑥 𝑖 Ψ ℓ𝑡 ) (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) − 𝜕 𝑢 𝑥𝑖 Ω 𝜕 𝑥 𝑖 𝜎 𝑗ℓ 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜕 𝑥 𝑖 𝑢 𝑥 𝑗 Ω 𝜎 𝑖ℓ 𝜕 𝑥 𝑗 Ψ ℓ𝑡 (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + 𝑛 X 𝑖, 𝑗 = 𝑚 X ℓ = (cid:16) 𝜕 𝑢𝑢 𝑥 𝑗 Ω ( Ψ ℓ 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝜕 𝑥 𝑗 Ψ ℓ𝑡 + 𝜎 𝑖ℓ 𝑢 𝑥 𝑖 𝑥 𝑗 Ψ ℓ𝑡 ) (cid:17) ( 𝑡, 𝑋 𝑡 , 𝑈 ( 𝑡, 𝑋 𝑡 ) , ∇ 𝑈 ( 𝑡, 𝑋 𝑡 ) , 𝛼 𝑡 ) d 𝑡 + d ˜ 𝑀 𝑡 . Following then the same steps as in the proof of Theorem 4.3 we get the result. (cid:3)
Corollary 4.9.
Suppose that $\Omega$ is a Lie point symmetry of the form
$$\Omega(t,x,u,u_x) = c\,u + g(t,x) - \sum_{k=1}^{n} f_k(t,x)\,u_{x_k},$$
where $c \in \mathbb{R}$ and $f_k, g \colon \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}$ are smooth functions such that, for $j = 1,\ldots,n$ and $\ell = 1,\ldots,m$,
$$\sum_{k=1}^{n} \big( f_k\,\partial_{x_k}\sigma_{j\ell} - \sigma_{k\ell}\,\partial_{x_k} f_j \big) = 0.$$
Then $O_t = \Omega(t, X_t, U(t,X_t), \nabla U(t,X_t))$ is a local martingale.

Proof. Under the previous conditions, the correction terms in the integral in (4.12) vanish: since $\Omega$ is affine in $u$ with constant coefficient $c$ and linear in $u_x$, we have $\partial_{uu}\Omega = \partial_{x_i u}\Omega = \partial_{u u_{x_j}}\Omega = \partial_{u_{x_i} u_{x_j}}\Omega = 0$, while the remaining terms combine into
$$\sum_{i=1}^{n} \big( f_i\,\partial_{x_i}\sigma_{j\ell} - \sigma_{i\ell}\,\partial_{x_i} f_j \big)\,\Psi^\ell_{x_j} = 0,$$
by the assumption on $f$ and $\sigma$. The claim then follows from Theorem 4.8. □
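The commutation condition of Corollary 4.9 is easy to test symbolically. For $n = m = 1$, a geometric-Brownian-type diffusion coefficient $\sigma(x) = \sigma_0 x$ is compatible with the linear field $f(x) = cx$ but not with $f(x) = cx^2$ (both choices are illustrative):

```python
import sympy as sp

x, c, s0 = sp.symbols('x c sigma0')

sigma = s0*x           # diffusion coefficient sigma(x) = sigma0 * x
f = c*x                # linear field: condition of Corollary 4.9 holds
cond = sp.simplify(f*sp.diff(sigma, x) - sigma*sp.diff(f, x))
assert cond == 0

f2 = c*x**2            # quadratic field: condition fails
cond2 = sp.simplify(f2*sp.diff(sigma, x) - sigma*sp.diff(f2, x))
assert cond2 != 0
```

Geometrically, the condition says that $f\,\partial_x$ commutes with the diffusion field $\sigma\,\partial_x$, which is why the stochastic correction terms in (4.12) cancel.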
In this section, we propose a symmetry analysis of Merton’s problem of optimal portfolio selection (see theoriginal paper [41] and [53] for a review on the subject). Let us consider a set of controls 𝛼 𝑡 = ( 𝑐 ( 𝑡 ) , 𝛾 ( 𝑡 )) and a controlled diffusion dynamics described by the SDEd 𝑋 𝑡 = (cid:0) ( 𝛾 ( 𝑡 ) ( 𝜇 ( 𝑡 ) − 𝑟 ) + 𝑟 ) 𝑋 𝑡 − 𝑐 ( 𝑡 ) (cid:1) d 𝑡 + 𝑋 𝑡 𝛾 ( 𝑡 ) 𝜎 ( 𝑡 ) d 𝑊 𝑡 , (5.1)where 𝑋 is the wealth process controlled by the proportion 𝛾 ( 𝑡 ) ∈ [ , ] invested in the risky asset at time 𝑡 and by the consumption 𝑐 ( 𝑡 ) ∈ [ , +∞) per unit time at time 𝑡 . Moreover, 𝑟 is the constant interest rate,and 𝜇 ( 𝑡 ) , 𝜎 ( 𝑡 ) > 𝜎 ( 𝑡 ) > 𝜖 > 𝑇 >
0, the problemof choosing optimal portfolio selection consists in maximizing the objective functional E (cid:20)∫ 𝑇𝑡 𝐿 ( 𝑡, 𝛼 𝑡 ) d 𝑠 + 𝑔 ( 𝑋 𝑇 ) (cid:21) , where 𝐿 ( 𝑡, 𝛼 𝑡 ) = 𝑒 − 𝜌𝑡 𝑉 ( 𝑐 ( 𝑡 )) . Here, 𝜌 ∈ ( , +∞) is the discount rate, 𝑉 is a strictly concave utility function and 𝑔 is a given function.Let us remark that the set 𝐾 introduced in Section 2.2 here has the form 𝐾 = [ , +∞) × [ , ] . .1 Markovian case The maximization problem introduced above is a particular case of the general one studied in Section 2.2.The associated value function is 𝑈 ( 𝑡, 𝑥 ) = max 𝛼 ∈ K 𝐿 E (cid:20)∫ 𝑇𝑡 𝐿 ( 𝑠, 𝛼 𝑠 ) d 𝑠 + 𝑔 ( 𝑋 𝑇 ) (cid:12)(cid:12)(cid:12)(cid:12) 𝑋 𝑡 = 𝑥 (cid:21) , while the HJB equation becomes 𝜕 𝑡 𝑈 + max ( 𝑐,𝛾 ) ∈ 𝐾 (cid:8) H ( 𝑡, 𝑥, ∇ 𝑈, 𝐷 𝑈, ( 𝑐, 𝛾 )) (cid:9) = , with H ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 , ( 𝑐, 𝛾 )) = exp (− 𝜌𝑡 ) 𝑉 ( 𝑐 ) + 𝑢 𝑥 ( 𝛾 ( 𝜇 ( 𝑡 ) − 𝑟 ) + 𝑟 ) 𝑥 − 𝑢 𝑥 𝑐 + 𝑢 𝑥𝑥 𝜎 ( 𝑡 ) 𝛾 𝑥 . The optimal value ( 𝑐 ★ , 𝛾 ★ ) of ( 𝑐, 𝛾 ) is given by the solutions to the system 𝜕 𝑐 H = exp (− 𝜌𝑡 ) 𝑉 ′ ( 𝑐 ) − 𝑢 𝑥 = ,𝜕 𝛾 H = ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑥𝑢 𝑥 + 𝑢 𝑥𝑥 𝜎 ( 𝑡 ) 𝑥 𝛾 = , that is 𝑐 ★ ( 𝑡 ) = ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) , (5.2) 𝛾 ★ ( 𝑡 ) = − ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑢 𝑥 𝑥𝑢 𝑥𝑥 𝜎 ( 𝑡 ) . (5.3)The corresponding functional H takes the form 𝐻 ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 ) = H ( 𝑡, 𝑥, 𝑢 𝑥 , 𝑢 𝑥𝑥 , ( 𝑐 ★ ( 𝑡 ) , 𝛾 ★ ( 𝑡 ))) = exp (− 𝜌𝑡 ) 𝑉 ( 𝑐 ★ ( 𝑡 )) + 𝑢 𝑥 ( 𝛾 ★ ( 𝑡 ) ( 𝜇 ( 𝑡 ) − 𝑟 ) + 𝑟 ) 𝑥 − 𝑢 𝑥 𝑐 ★ ( 𝑡 ) + 𝑢 𝑥𝑥 𝜎 ( 𝑡 ) 𝛾 ★ ( 𝑡 ) 𝑥 = exp (− 𝜌𝑡 ) 𝑉 (cid:16) ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) (cid:17) − (cid:18) ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑢 𝑥 𝑥𝑢 𝑥𝑥 𝜎 ( 𝑡 ) ( 𝜇 ( 𝑡 ) − 𝑟 ) + 𝑟 (cid:19) 𝑥𝑢 𝑥 − 𝑢 𝑥 ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) + 𝑢 𝑥𝑥 𝜎 ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑢 𝑥 𝑥 𝑢 𝑥𝑥 𝜎 ( 𝑡 ) 𝑥 = exp (− 𝜌𝑡 ) 𝑉 (cid:16) ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) (cid:17) − (cid:18) ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑢 𝑥 𝑥𝑢 𝑥𝑥 𝜎 ( 𝑡 ) ( 𝜇 ( 𝑡 ) − 𝑟 ) + 𝑟 (cid:19) 𝑥𝑢 𝑥 − 𝑢 𝑥 ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) + ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝑢 𝑥 𝑢 𝑥𝑥 𝜎 ( 𝑡 ) . 
So we study the following PDE 𝑢 𝑡 − 𝛿 ( 𝑡 ) 𝑢 𝑥 𝑢 𝑥𝑥 + 𝐾 ( 𝑡, 𝑥, 𝑢 𝑥 ) = , (5.4)with 𝐾 ( 𝑡, 𝑥, 𝑢 𝑥 ) = ℎ 𝑉 ( 𝑡, 𝑢 𝑥 ) + 𝑟𝑥𝑢 𝑥 , (5.5) 𝛿 ( 𝑡 ) = ( 𝜇 ( 𝑡 ) − 𝑟 ) 𝜎 ( 𝑡 ) , where ℎ 𝑉 ( 𝑡, 𝑢 𝑥 ) = exp (− 𝜌𝑡 ) 𝑉 (cid:16) ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) (cid:17) − 𝑢 𝑥 ( 𝑉 ′ ) − ( 𝑢 𝑥 exp ( 𝜌𝑡 )) . We are looking for the symmetry generated by the generating function Ω ( 𝑡, 𝑥, 𝑢, 𝑢 𝑥 ) . Hereafter, we assumethat the function ℎ 𝑉 defined above is a smooth function in a suitable open subset of R .23 heorem 5.1. The function Ω generates a contact symmetry of equation (5.4) if and only if it admits one ofthe following forms Ω = exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) [ 𝑢 · 𝑢 𝑟𝑥 − 𝑥𝑢 𝑟 + 𝑥 ] + 𝐺 ( 𝑡, 𝑢 𝑥 ) , Ω = − 𝑢 + 𝐺 ( 𝑡, 𝑢 𝑥 ) , Ω = exp ( 𝑟𝑡 ) 𝑥𝑢 𝑥 + 𝐺 ( 𝑡, 𝑢 𝑥 ) , Ω = 𝐺 ( 𝑡, 𝑢 𝑥 ) , where 𝐺 , 𝐺 , 𝐺 , 𝐺 : R + × R → R are smooth functions satisfying the PDEs (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) 𝑢 𝑟𝑥 ℎ 𝑉 + 𝛿 ( 𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = , (5.6)2 𝑢 𝑥 𝜕 𝑢 𝑥 ℎ 𝑉 − ℎ 𝑉 + 𝛿 ( 𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = , (5.7)2 exp ( 𝑟𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 ℎ 𝑉 + 𝑥𝑟 exp ( 𝑟𝑡 ) 𝑢 𝑥 + 𝛿 ( 𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = , (5.8) 𝛿 ( 𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = . (5.9)Finally, Theorem 5.1 and Theorem 4.3 allow us to obtain the explicit forms of the local martingales ofMerton’s model. Corollary 5.2.
Let 𝑈 ( 𝑡, 𝑥 ) be a classical solution to equation (5.4) and let 𝑋 𝑡 be the solution to equation (5.1) with ( 𝛾, 𝑐 ) satisfying the equalities (5.2) and (5.3) . Then, the processes 𝑂 ,𝑡 = exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) [ 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝑟 − 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 ) 𝑟 + ] + 𝐺 ( 𝑡, 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 )) ,𝑂 ,𝑡 = 𝑈 ( 𝑡, 𝑋 𝑡 ) + 𝐺 ( 𝑡, 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 )) ,𝑂 ,𝑡 = exp ( 𝑟𝑡 ) 𝑋 𝑡 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 ) + 𝐺 ( 𝑡, 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 )) ,𝑂 ,𝑡 = 𝐺 ( 𝑡, 𝜕 𝑥 𝑈 ( 𝑡, 𝑋 𝑡 )) , are local martingales.Proof of Theorem 5.1. The generating function Ω is a (contact) symmetry of the PDE if and only if thefollowing set of determining equations holds 𝛿 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 Ω + 𝑢 𝑥 𝜕 𝑢 Ω · 𝜕 𝑢 𝑥 𝐾 + 𝜕 𝑥 Ω · 𝜕 𝑢 𝑥 𝐾 − 𝜕 𝑢 Ω · 𝐾 − 𝜕 𝑢 𝑥 Ω · 𝜕 𝑥 𝐾 + 𝜕 𝑡 Ω = , (5.10) 𝛿𝑢 𝑥 𝜕 𝑢𝑢 𝑥 Ω + 𝛿𝑢 𝑥 𝜕 𝑢 𝑥 𝑥 Ω − 𝛿𝜕 𝑥 Ω = , (5.11) 𝛿 𝑢 𝑥 𝜕 𝑢𝑢 Ω + 𝛿𝑢 𝑥 𝜕 𝑢𝑥 Ω + 𝛿 𝜕 𝑥𝑥 Ω = . (5.12)We can differentiate equation (5.11) with respect to 𝑢 and equation (5.12) with respect to 𝑢 𝑥 , and equate theterm 𝜕 𝑢𝑢𝑢 𝑥 Ω to obtain ( 𝜕 𝑢𝑢 Ω + 𝜕 𝑢𝑢 𝑥 𝑥 Ω ) 𝑢 𝑥 + ( 𝜕 𝑢 𝑥 𝑥𝑥 Ω + 𝜕 𝑢𝑥 Ω ) 𝑢 𝑥 + 𝜕 𝑥𝑥 Ω = . (5.13)Differentiating equation (5.11) with respect to 𝑥 , we can get an expression of 𝜕 𝑢𝑢 𝑥 𝑥 Ω in terms of 𝜕 𝑢 𝑥 𝑥𝑥 Ω and 𝜕 𝑥𝑥 Ω . Replacing now the obtained expression in equation (5.13) yields 𝑢 𝑥 𝜕 𝑢𝑢 Ω + 𝜕 𝑢𝑥 Ω = . (5.14)If we differentiate equation (5.11) with respect to 𝑢 and use equation (5.14), then we have 𝜕 𝑢𝑥 Ω = . (5.15)24nserting equation (5.15) in equation (5.14), we get 𝜕 𝑢𝑢 Ω = , from which, thanks to equations (5.12) and (5.15), we obtain 𝜕 𝑥𝑥 Ω = . This means that Ω is a function of the form Ω ( 𝑡, 𝑥, 𝑢, 𝑢 𝑥 ) = 𝑓 ( 𝑡, 𝑢 𝑥 ) 𝑢 + 𝑓 ( 𝑡, 𝑢 𝑥 ) 𝑥 + 𝑓 ( 𝑡, 𝑢 𝑥 ) . (5.16)If we replace expression (5.16) inside the determining equations (5.10), (5.11), and (5.12), we have that 𝑓 , 𝑓 , and 𝑓 have to satisfy the following set of equations 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝑓 + 𝜕 𝑡 𝑓 = , (5.17) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝑓 + 𝜕 𝑡 𝑓 − 𝑓 𝑟 = , (5.18) − 𝑢 𝑥 𝑓 · 𝜕 𝑢 𝑥 𝐾 − 𝑓 · 𝜕 𝑢 𝑥 𝐾 + 𝑓 · 𝐾 + 𝛿𝑢 𝑥 𝜕 𝑢 𝑥 𝑢 𝑥 𝑓 + 𝜕 𝑡 𝑓 = , (5.19) 𝑢 𝑥 𝜕 𝑢 𝑥 𝑓 + 𝑢 𝑥 𝜕 𝑢 𝑥 𝑓 − 𝑓 = . 
(5.20)Solving equation (5.20) with respect to 𝑓 , we obtain that 𝑓 = − 𝑢 𝑥 𝑓 + 𝑔 ( 𝑡 ) 𝑢 𝑥 . (5.21)Replacing the expression (5.21) in equation (5.18) and using equation (5.17), we have the equation − 𝑢 𝑥 𝜕 𝑢 𝑥 𝑓 + 𝑢 𝑥 𝜕 𝑡 𝑔 + 𝑟𝑢 𝑥 𝑓 − 𝑟𝑔 𝑢 𝑥 = , from which we get that 𝑓 = (cid:20) 𝑑 ( 𝑡 ) + 𝑏 ( 𝑡 ) 𝑟 (cid:21) 𝑢 𝑟𝑥 − 𝑏 ( 𝑡 ) 𝑟 , (5.22)where 𝑏 ( 𝑡 ) = 𝜕 𝑡 𝑔 − 𝑟𝑔 . Replacing equation (5.22) in equation (5.17), we obtain 𝑟 ( 𝑟 − ) (cid:20) 𝑑 ( 𝑡 ) + 𝑏 ( 𝑡 ) 𝑟 (cid:21) + (cid:18) 𝑑 ′ ( 𝑡 ) + 𝑏 ′ ( 𝑡 ) 𝑟 (cid:19) = ,𝜕 𝑡,𝑡 𝑔 − 𝑟𝜕 𝑡 𝑔 = , giving 𝑏 ( 𝑡 ) = 𝑐 , 𝑑 ( 𝑡 ) = 𝑑 exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) − 𝑑 𝑟 , 𝑔 ( 𝑡 ) = 𝑑 exp ( 𝑟𝑡 ) − 𝑑 𝑟 , for some arbitrary constants 𝑑 , 𝑑 , and 𝑑 . Therefore, we have 𝑓 = 𝑑 exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) 𝑢 𝑟𝑥 − 𝑑 𝑟 . By (5.21), we obtain 𝑓 = − 𝑑 exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) 𝑢 𝑟 + 𝑥 + 𝑑 exp ( 𝑟𝑡 ) 𝑢 𝑥 . Inserting the previous expression of 𝑓 , 𝑓 , and 𝑓 in (5.19), we get that Ω is a contact symmetry ofequation (5.4) if and only if it is a linear combination of the following expressions Ω = exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 (cid:19) [ 𝑢 · 𝑢 𝑟𝑥 − 𝑥𝑢 𝑟 + 𝑥 ] + 𝐺 ( 𝑡, 𝑢 𝑥 ) , 𝑑 = , 𝑑 = 𝑑 = , = − 𝑢 + 𝐺 ( 𝑡, 𝑢 𝑥 ) , 𝑑 = − 𝑟, 𝑑 = 𝑑 = , Ω = exp ( 𝑟𝑡 ) 𝑥𝑢 𝑥 + 𝐺 ( 𝑡, 𝑢 𝑥 ) , 𝑑 = , 𝑑 = 𝑑 = , Ω = 𝐺 ( 𝑡, 𝑢 𝑥 ) , 𝑑 = 𝑑 = 𝑑 = , where 𝐺 , 𝐺 , 𝐺 , and 𝐺 are smooth solutions to the PDEs satisfying equations (5.6), (5.7), (5.8),and (5.9). (cid:3) Equations (5.6), (5.7), (5.8), and (5.9) can be solved explicitly for some special form of 𝐾 ( 𝑡, 𝑥, 𝑢 𝑥 ) .Taking, in particular, the following two expressions 𝐾 = ℎ + 𝑟𝑥𝑢 𝑥 , ℎ = − exp (− 𝜌𝑡 ) [ log ( 𝑢 𝑥 ) + 𝜌𝑡 + ] , and 𝐾 = ℎ + 𝑟𝑥𝑢 𝑥 , ℎ = − exp (cid:16) 𝜌𝑡𝜃 − (cid:17) 𝑢 𝜃𝜃 − 𝑥 𝜃 − 𝜃 , derived by taking the isoelastic utility functions, also known as constant relative risk aversion utilities(see [50]) defined as 𝑉 ( 𝑧 ) = log ( 𝑧 ) and 𝑉 ( 𝑧 ) = 𝑧 𝜃 𝜃 , 𝜃 ∈ R , respectively. 
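The log-utility expression for $h_V$ displayed above (with the trailing constant, lost in typesetting, restored as 1) can be verified directly from the definition $h_V(t,u_x) = e^{-\rho t}V\big((V')^{-1}(u_x e^{\rho t})\big) - u_x (V')^{-1}(u_x e^{\rho t})$. A sympy sketch:

```python
import sympy as sp

t, p, rho = sp.symbols('t u_x rho', positive=True)

# Log utility: V(z) = log(z), so V'(z) = 1/z and (V')^{-1}(y) = 1/y
V = sp.log
Vp_inv = lambda y: 1/y

hV = sp.exp(-rho*t)*V(Vp_inv(p*sp.exp(rho*t))) - p*Vp_inv(p*sp.exp(rho*t))
target = -sp.exp(-rho*t)*(sp.log(p) + rho*t + 1)

assert sp.simplify(sp.expand_log(hV - target, force=True)) == 0
```

An analogous substitution with $V(z) = z^\theta/\theta$ produces the power-type expression for $h_{V_2}$.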
If we denote by Ω = exp (cid:16) − 𝑟 ( 𝑟 − ) 𝑡 (cid:17) [ 𝑢 · 𝑢 𝑟𝑥 − 𝑥𝑢 𝑟 + 𝑥 ] + 𝐺 ( 𝑡, 𝑢 𝑥 ) and by Ω = exp (cid:16) − 𝑟 ( 𝑟 − ) 𝑡 (cid:17) [ 𝑢 · 𝑢 𝑟𝑥 − 𝑥𝑢 𝑟 + 𝑥 ] + 𝐺 ( 𝑡, 𝑢 𝑥 ) the symmetries of the equation (5.4) when 𝐾 = 𝐾 and 𝐾 = 𝐾 , respectively, wehave that 𝐺 solves the equation Γ ( 𝑡, 𝑢 𝑥 ) + 𝛿 ( 𝑡 ) 𝑢 𝑥 𝜕 𝑢 𝑥 ,𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = , (5.23)where Γ ( 𝑡, 𝑢 𝑥 ) = (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 − 𝜌𝑡 (cid:19) [ − 𝑢 𝑟𝑥 − log ( 𝑢 𝑥 ) 𝑢 𝑟𝑥 − 𝜌𝑡𝑢 𝑟𝑥 ] . (5.24)Making the ansatz 𝐺 ( 𝑡, 𝑢 𝑥 ) = 𝜙 ( 𝑡 ) 𝑢 𝑟𝑥 + 𝜙 ( 𝑡 ) 𝑢 𝑟𝑥 log ( 𝑢 𝑥 ) + 𝜙 ( 𝑡 ) , we have that 𝐺 solves (5.23) if and only if 𝜙 , 𝜙 , 𝜙 solve the following ODEs 𝜙 ′ = (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 − 𝜌𝑡 (cid:19) ( + 𝜌𝑡 ) − 𝛿 ( 𝑡 ) 𝑟 ( 𝑟 − ) 𝜙 − 𝛿 ( 𝑡 ) ( 𝑟 − ) 𝜙 − 𝛿 ( 𝑡 ) 𝜙 ,𝜙 ′ = (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 − 𝜌𝑡 (cid:19) − 𝛿 ( 𝑡 ) 𝑟 ( 𝑟 − ) 𝜙 ,𝜙 ′ = (cid:18) 𝑟 ( 𝑟 − ) 𝑡 − 𝜌𝑡 (cid:19) . In the same way 𝐺 solves Γ ( 𝑡, 𝑢 𝑥 ) + 𝑢 𝑥 𝜕 𝑢 𝑥 ,𝑢 𝑥 𝐺 + 𝜕 𝑡 𝐺 = , (5.25)where Γ ( 𝑡, 𝑢 𝑥 ) = 𝑢 𝜃𝜃 − 𝑥 exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 + 𝜌𝜃 − 𝑡 (cid:19) (cid:20) − 𝜃 − 𝜃 𝑢 𝑟𝑥 (cid:21) . With the ansatz 𝐺 ( 𝑡, 𝑢 𝑥 ) = 𝜙 ( 𝑡 ) 𝑢 𝜃𝜃 − 𝑥 + 𝜙 ( 𝑡 ) 𝑢 𝜃𝜃 − + 𝑟 − 𝑥 , equation (5.25) holds if and only if 𝜙 , 𝜙 , and 𝜙 solve the following ODEs 𝜙 ′ = exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 + 𝜌𝜃 − 𝑡 (cid:19) − 𝛿 ( 𝑡 ) (cid:18) 𝜃𝜃 − (cid:19) (cid:18) 𝜃𝜃 − − (cid:19) 𝜙 , ′ = 𝜃 − 𝜃 exp (cid:18) − 𝑟 ( 𝑟 − ) 𝑡 + 𝜌𝜃 − 𝑡 (cid:19) − 𝛿 ( 𝑡 ) (cid:18) 𝜃𝜃 − + 𝑟 (cid:19) (cid:18) 𝜃𝜃 − + 𝑟 − (cid:19) 𝜙 . If we denote by Ω = − 𝑢 + 𝐺 ( 𝑡, 𝑢 𝑥 ) and by Ω = − 𝑢 + 𝐺 ( 𝑡, 𝑢 𝑥 ) the symmetries of the equation (5.4) when 𝐾 = 𝐾 and 𝐾 = 𝐾 , respectively, then we get that 𝐺 𝑖 , 𝑖 = ,
$2$, solve (5.7) with $h_1$ and $h_2$ given above. With the ansatz $G_2^{(1)}(t,u_x) = \phi_1(t) + \phi_2(t) \log(u_x)$, the function $G_2^{(1)}$ solves (5.7) (with $h = h_1$) if and only if $\phi_1$ and $\phi_2$ solve the following ODEs
\[ \phi_1'(t) = \delta(t)\, \phi_2(t) - \exp(-\rho t) - \exp(-\rho t)(\rho t + 1), \qquad \phi_2'(t) = -\exp(-\rho t). \]
With the ansatz $G_2^{(2)}(t,u_x) = \phi(t)\, u_x^{\theta/(\theta-1)}$, the function $G_2^{(2)}$ solves (5.7) (with $h = h_2$) if and only if $\phi$ solves the following ODE
\[ \phi'(t) = -\delta(t)\, \frac{\theta}{(\theta-1)^2}\, \phi - \frac{1}{\theta} \exp\!\left( \frac{\rho t}{\theta-1} \right). \]
If we denote by $\Omega_3^{(1)} = \exp(rt)\, x u_x + G_3^{(1)}(t,u_x)$ and by $\Omega_3^{(2)} = \exp(rt)\, x u_x + G_3^{(2)}(t,u_x)$ the symmetries of equation (5.4) when $K = K_1$ and $K = K_2$, respectively, then $G_3^{(i)}$, $i = 1$,
$2$, solve (5.8) with $h_1$ and $h_2$ given above, respectively. With the ansatz $G_3^{(1)}(t,u_x) = \phi_1(t) + \phi_2(t) \log(u_x)$, the function $G_3^{(1)}$ solves (5.8) (with $h = h_1$) if and only if $\phi_1$ and $\phi_2$ solve the following ODEs
\[ \phi_1'(t) = \delta(t)\, \phi_2(t) - \exp((r - \rho)t), \qquad \phi_2'(t) = 0. \]
With the ansatz $G_3^{(2)}(t,u_x) = \phi(t)\, u_x^{\theta/(\theta-1)}$, the function $G_3^{(2)}$ solves (5.8) (with $h = h_2$) if and only if $\phi$ solves the following ODE
\[ \phi'(t) = -\delta(t)\, \frac{\theta}{(\theta-1)^2}\, \phi + \exp\!\left( \left( r + \frac{\rho}{\theta-1} \right) t \right). \]

We consider here the case where $\mu(t)$ and $\sigma(t)$ are continuous stochastic processes, predictable with respect to the filtration $\mathcal{F}_t$, so that the problem now fits in the more general model treated in Section 2.3. This case is relevant, for example, for stochastic volatility models (see, e.g., [8, 23, 39] for stochastic volatility models and [46] for the non-Markovian Merton problem of the form approached here). We also assume that $g(x,\omega)$ is an $\mathcal{F}_T$-measurable random field. In this case, the value function is a random field depending on the time $t$ and the variable $x$, of the form
\[ U(t,x) = \mathbb{E}\left[ \int_t^T L(s,\alpha_s)\, \mathrm{d}s + g(X_T,\omega)\, \Big|\, \mathcal{F}_t,\ X_t = x \right]. \]
$U$ satisfies the following backward stochastic PDE
\[ \mathrm{d}U(t,x) + \sup_{(c,\gamma) \in K} \mathcal{H}_S(t,x,\nabla U(t,x), D^2 U(t,x), \nabla\Psi(t,x), (c,\gamma))\, \mathrm{d}t = \Psi(t,x)\, \mathrm{d}W_t, \tag{5.26} \]
where
\[ \mathcal{H}_S(t,x,u_x,u_{xx},\psi_x,(c,\gamma)) = \exp(-\rho t) V(c) + (\gamma(\mu(t) - r) + r)\, x u_x - c u_x + x \sigma(t) \gamma \psi_x + \frac{1}{2} u_{xx}\, \sigma(t)^2 \gamma^2 x^2. \]
The optimal value of $(c,\gamma)$ is given by the solution to the system
\[ \partial_c \mathcal{H} = \exp(-\rho t) V'(c) - u_x = 0, \qquad \partial_\gamma \mathcal{H} = (\mu(t) - r)\, x u_x + x \sigma(t) \psi_x + u_{xx}\, \sigma(t)^2 x^2 \gamma = 0, \]
which means that
\[ \gamma^* = -\frac{(\mu(t) - r)\, u_x + \sigma(t) \psi_x}{x\, u_{xx}\, \sigma(t)^2}, \tag{5.27} \]
while $c^*$ is given by equation (5.2).
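The first-order condition in $\gamma$ and the resulting reduced Hamiltonian can be checked symbolically. The sketch below (our own verification, in our notation, with the $\gamma$-independent consumption terms dropped) recovers (5.27) and the reduction of $\mathcal{H}_S$ at the optimum.

```python
import sympy as sp

t, x, r = sp.symbols('t x r', positive=True)
mu, sigma, ux, uxx, psix, gam = sp.symbols('mu sigma u_x u_xx psi_x gamma', real=True)

# gamma-dependent part of H_S plus the r*x*u_x term; the consumption terms do not involve gamma
H = (gam*(mu - r) + r)*x*ux + x*sigma*gam*psix + sp.Rational(1, 2)*uxx*sigma**2*gam**2*x**2

# first-order condition in gamma reproduces (5.27)
gamma_star = sp.solve(sp.diff(H, gam), gam)[0]
assert sp.simplify(gamma_star + ((mu - r)*ux + sigma*psix)/(x*uxx*sigma**2)) == 0

# substituting gamma* reduces H to -((mu-r)*u_x + sigma*psi_x)^2/(2*sigma^2*u_xx) + r*x*u_x
H_reduced = sp.simplify(H.subs(gam, gamma_star))
target = -((mu - r)*ux + sigma*psix)**2/(2*sigma**2*uxx) + r*x*ux
assert sp.simplify(H_reduced - target) == 0
```

Setting $\psi_x = 0$ in `target` gives back the Markovian Hamiltonian of Subsection 5.1.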
This implies that
\[ H_S(t,x,u_x,u_{xx},\psi_x) = -\frac{\left( (\mu(t) - r)\, u_x + \sigma(t) \psi_x \right)^2}{2\, \sigma(t)^2\, u_{xx}} + K(t,x,u_x), \]
where $K(t,x,u_x)$ is given by equation (5.5). In the following, we write
\[ \delta_S(t) = \frac{(\mu(t) - r)^2}{\sigma(t)^2}, \]
where we recall that here $\mu$ and $\sigma$ are generic predictable continuous stochastic processes. We therefore consider a generator function $\Omega_S(t,u,u_x,\omega)$ depending explicitly on $\omega$.

Theorem 5.3.
The generator function $\Omega_S(t,u,u_x,\omega)$ is a symmetry of equation (5.26) in the sense of Definition 4.6 if and only if $\Omega_S$ has one of the following forms
\[ \Omega_1^S = \exp(-r(r-1)t)\left[ u \cdot u_x^r - x u_x^{r+1} \right] + G_1^S(t,u_x), \]
\[ \Omega_2^S = -u + G_2^S(t,u_x), \]
\[ \Omega_3^S = \exp(rt)\, x u_x + G_3^S(t,u_x), \]
\[ \Omega_4^S = G_4^S(t,u_x), \]
where $G_1^S$, $G_2^S$, $G_3^S$, $G_4^S \colon \mathbb{R}_+ \times \mathbb{R} \times \Omega \to \mathbb{R}$ are smooth predictable random fields satisfying the following random PDEs
\[ \exp(-r(r-1)t)\, u_x^r\, h_V + \delta_S(t)\, u_x^2\, \partial_{u_x u_x} G_1^S + \partial_t G_1^S = 0, \tag{5.28} \]
\[ u_x\, \partial_{u_x} h_V - h_V + \delta_S(t)\, u_x^2\, \partial_{u_x u_x} G_2^S + \partial_t G_2^S = 0, \tag{5.29} \]
\[ \exp(rt)\, u_x\, \partial_{u_x} h_V + \delta_S(t)\, u_x^2\, \partial_{u_x u_x} G_3^S + \partial_t G_3^S = 0, \tag{5.30} \]
\[ \delta_S(t)\, u_x^2\, \partial_{u_x u_x} G_4^S + \partial_t G_4^S = 0. \tag{5.31} \]

Proof.
Since
\[ H_S(t,x,u_x,u_{xx},0) = -\delta_S(t)\, \frac{u_x^2}{2\, u_{xx}} + K(t,x,u_x), \]
which is formally equal to $H$ defined in Subsection 5.1, the theorem can be easily proven using the same argument exploited in the proof of Theorem 5.1. $\square$

Remark 5.4. The symmetries $\Omega_i^S$ of Theorem 5.3 depend on $\omega \in W$ since the functions $G_i^S$ solve the random equations (5.28), (5.29), (5.30), and (5.31) (where the random dependence is given by $\delta_S(t)$).

Corollary 5.5.
Let $(U(t,x), \Psi(t,x))$ be a classical solution to equation (5.26) and let $X_t$ be the solution to equation (5.1) with $(\gamma,c)$ satisfying equalities (5.2) and (5.27). Then, the processes
\[ \tilde{O}_{1,t} = \exp(-r(r-1)t)\left[ U(t,X_t)\, (\partial_x U(t,X_t))^r - X_t\, (\partial_x U(t,X_t))^{r+1} \right] + G_1^S(t,\partial_x U(t,X_t)) - I_1(t,U,\nabla U,\nabla\Psi), \]
\[ \tilde{O}_{2,t} = -U(t,X_t) + G_2^S(t,\partial_x U(t,X_t)) - I_2(t,U,\nabla U,\nabla\Psi), \]
\[ \tilde{O}_{3,t} = \exp(rt)\, X_t\, \partial_x U(t,X_t) + G_3^S(t,\partial_x U(t,X_t)) - I_3(t,U,\nabla U,\nabla\Psi), \]
\[ \tilde{O}_{4,t} = G_4^S(t,\partial_x U(t,X_t)) - I_4(t,U,\nabla U,\nabla\Psi), \]
are local martingales. Here, $I_1$, $I_2$, $I_3$ and $I_4$ are the integral expressions associated with $\tilde{O}_{1,t}$, $\tilde{O}_{2,t}$, $\tilde{O}_{3,t}$ and $\tilde{O}_{4,t}$, respectively, by the relation given in equation (4.12).

Proof. The statement follows from Theorem 5.3 and Theorem 4.8. $\square$
In the particular case where $r = 0$ and $V(z) = z^\theta/\theta$ (with $\theta \in \mathbb{R}$), or without consumption ($V = 0$ and $c = 0$), we can obtain the following stronger result.
Corollary 5.6.
Suppose that $r = 0$ and $V(z) = z^\theta/\theta$. Then, we have that
\[ O_t = -U(t,X_t) - \frac{1}{\theta}\, X_t\, \partial_x U(t,X_t) \]
is a local martingale. Furthermore, if $V = 0$ (and we consider $c = 0$) we have that, for any $c_1$, $c_2 \in \mathbb{R}$,
\[ O_t^{c_1,c_2} = c_1\, U(t,X_t) + c_2\, X_t\, \partial_x U(t,X_t) \]
is a local martingale.

Proof. If $V(z) = z^\theta/\theta$ we have
\[ h_V(t,u_x) = -\exp\!\left( \frac{\rho t}{\theta-1} \right) u_x^{\theta/(\theta-1)}\, \frac{\theta-1}{\theta}. \]
This implies that
\[ \frac{\theta-1}{\theta}\, u_x\, \partial_{u_x} h_V - h_V = 0, \]
and hence
\[ \Omega = \Omega_2^S - \frac{1}{\theta}\, \Omega_3^S = -u - \frac{1}{\theta}\, x u_x + G(t,u_x), \]
where $G(t,u_x)$ is any solution to the equation
\[ \delta_S(t)\, u_x^2\, \partial_{u_x u_x} G + \partial_t G = 0, \tag{5.32} \]
is a symmetry of equation (5.26). A particular solution to equation (5.32) is $G \equiv$
$0$, in which case $\Omega$ has the form $\Omega = -u - x u_x/\theta$. But $-u - x u_x/\theta$ satisfies the hypotheses of Corollary 4.9, from which we get the thesis. The second part of the corollary can be proven in a similar way. $\square$

As already mentioned in the introduction, the construction of the martingales obtained in Corollaries 5.2 and 5.6 could be deeply connected to the well-known explicit solutions of Merton's optimal portfolio problem (see, e.g., [53] for a review and [8, 9] for recent developments on the explicit solutions of Merton's problem). The investigation of the link between these two notions will be the subject of a future paper.
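As a sanity check of Corollary 5.6, one can verify the Itô-drift cancellation directly for the classical closed-form Merton value function $U(t,x) = f(t)\, x^\theta/\theta$ (an assumption of this sketch, not taken from the statement above) with $r = 0$ and no consumption: any combination $c_1 U + c_2 X \partial_x U$ then has zero drift along the optimal wealth process.

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
mu, sigma, theta, c1, c2 = sp.symbols('mu sigma theta c1 c2', positive=True)
f = sp.Function('f')

delta = mu**2/sigma**2
U = f(t)*x**theta/theta                  # closed-form ansatz for the value function (r = 0, no consumption)
Ux, Uxx = U.diff(x), U.diff(x, 2)

# HJB with r = 0, V = 0, c = 0:  U_t - delta*U_x^2/(2*U_xx) = 0 fixes f'(t)
fprime = sp.solve(U.diff(t) - delta*Ux**2/(2*Uxx), f(t).diff(t))[0]

# optimal portfolio fraction with psi_x = 0, and the resulting optimal wealth dynamics
gamma = -mu*Ux/(sigma**2*x*Uxx)
drift_X, vol_X = x*gamma*mu, x*gamma*sigma

def ito_drift(F):
    """Ito drift of F(t, X_t) along the optimal wealth process, using the HJB identity for f'."""
    return (F.diff(t) + drift_X*F.diff(x) + vol_X**2/2*F.diff(x, 2)).subs(f(t).diff(t), fprime)

# c1*U + c2*X*U_x has zero drift, i.e. it is a local martingale (Corollary 5.6)
assert sp.simplify(ito_drift(c1*U + c2*x*Ux)) == 0
```

The same computation with $c_1 = -1$, $c_2 = -1/\theta$ checks the process $O_t$ of the first part of the corollary.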
Acknowledgments.
The first, second, and fourth author are funded by Istituto Nazionale di Alta Matematica “Francesco Severi” (INdAM), Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA): “Lie's Symmetries Analysis of Stochastic Optimal Control Problems with Applications”. The first and third author are funded by the DFG under Germany's Excellence Strategy - GZ 2047/1, project-id 390685813.

References

[1] S. Albeverio, F. C. De Vecchi, P. Morando, and S. Ugolini. Weak symmetries of stochastic differential equations driven by semimartingales with jumps.
Electron. J. Probab., 25(44):34, 2020. [2] S. Albeverio, F. C. De Vecchi, P. Morando, and S. Ugolini. Random transformations and invariance of semimartingales on Lie groups.
Random Oper. Stoch. Equ., 2021. (published online ahead of print 2021). [3] M. Arnaudon and J.-C. Zambrini. A stochastic look at geodesics on the sphere. In F. Nielsen and F. Barbaresco, editors,
Geometric science of information , volume 10589 of
Lecture Notes in Computer Science, pages 470–476. Springer, Cham, 2017. [4] V. I. Arnol′d. Geometrical Methods in the Theory of Ordinary Differential Equations, volume 250 of
Grundlehren der mathematischen Wissenschaften . Springer-Verlag New York, 1988.[5] V. I. Arnol ′ d. Mathematical methods of classical mechanics , volume 60 of
Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1989. Translated from the Russian by K. Vogtmann and A. Weinstein. [6] P. Askenazy. Symmetry and optimal control in economics.
J. Math. Anal. Appl. , 282(2):603–613,2003.[7] J. C. Baez and B. Fong. A Noether theorem for Markov processes.
J. Math. Phys., 54(1):013301, 8, 2013. [8] F. E. Benth, K. H. Karlsen, and K. Reikvam. Merton's portfolio optimization problem in a Black and Scholes market with non-Gaussian stochastic volatility of Ornstein-Uhlenbeck type.
Math. Finance, 13(2):215–244, 2003. [9] S. Biagini and M. Ç. Pınar. The robust Merton problem of an ambiguity averse investor.
Math. and Financ. Econ., 11(1):1–24, 2017. [10] A. Bocharov, V. Chetverikov, S. Duzhin, N. Khor′kova, I. Krasil′shchik, A. Samokhin, Y. Torkhov, A. Verbovetsky, and A. Vinogradov. Symmetries and conservation laws for differential equations of mathematical physics, volume 182 of
Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1999. [11] R. Buckdahn and J. Ma. Pathwise stochastic control problems and stochastic HJB equations.
SIAM J. Control Optim., 45(6):2224–2256, 2007. [12] M.-H. Chang, T. Pang, and J. Yong. Optimal stopping problem for stochastic differential equations with random coefficients.
SIAM J. Control Optim. , 48(2):941–971, 2009.[13] M. de Carvalho Griebeler and J. P. de Araújo. General envelope theorems for multidimensional typespaces. In º Meeting of the Brazilian Econometric Society , 2009.[14] M. C. de Lara. Reduction of the Zakai equation by invariance group techniques.
Stochastic Process.Appl. , 73(1):119–130, 1998.[15] F. C. De Vecchi. Finite dimensional solutions to SPDEs and the geometry of infinite jet bundles.
ArXiv preprint arXiv:1712.08490, 2017. [16] F. C. De Vecchi and P. Morando. The geometry of differential constraints for a class of evolution PDEs.
J. Geom. Phys. , 156:103771, 23, 2020.[17] F. C. De Vecchi, P. Morando, and S. Ugolini. A note on symmetries of diffusions within a martingaleproblem approach.
Stoch. Dyn., 19(2):1950011, 21, 2019. [18] F. C. De Vecchi, P. Morando, and S. Ugolini. Reduction and reconstruction of SDEs via Girsanov and quasi Doob symmetries.
ArXiv preprint arXiv:2011.08986, 2020. [19] F. C. De Vecchi, P. Morando, and S. Ugolini. Symmetries of stochastic differential equations using Girsanov transformations.
J. Phys. A , 53(13):135204, 31, 2020.[20] F. C. De Vecchi, A. Romano, and S. Ugolini. A symmetry-adapted numerical scheme for SDEs.
J. Geom. Mech., 11(3):325–359, 2019. [21] N. Englezos and I. Karatzas. Utility maximization with habit formation: Dynamic programming and stochastic PDEs.
SIAM J. Control Optim. , 48(2):481–520, 2009.[22] W. H. Fleming and R. W. Rishel.
Deterministic and Stochastic Optimal Control , volume 1 of
Applications of Mathematics. Springer-Verlag New York, 1975. [23] J.-P. Fouque, R. Sircar, and T. Zariphopoulou. Portfolio optimization and stochastic volatility asymptotics.
Math. Finance , 27(3):704–745, 2017.[24] G. Gaeta.
Nonlinear Symmetries and Nonlinear Equations , volume 299 of
Mathematics and Its Applications. Springer Netherlands, 1994. [25] G. Gaeta. Symmetry of stochastic non-variational differential equations.
Phys. Rep. , 686:1–62, 2017.[26] G. Gaeta. W-symmetries of Ito stochastic differential equations.
J. Math. Phys. , 60(5):053501, 29,2019.[27] G. Gaeta and F. Spadaro. Symmetry classification of scalar Ito equations with multiplicative noise.
J.Nonlinear Math. Phys. , 27(4):679–687, 2020.[28] H. Geiges. A brief history of contact geometry and topology.
Expo. Math. , 19(1):25–53, 2001.[29] H. Geiges.
An introduction to contact topology , volume 109 of
Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2008. [30] T. Hawkins.
Emergence of the theory of Lie groups: An essay in the history of mathematics 1869–1926 .Sources and Studies in the History of Mathematics and Physical Sciences. Springer-Verlag New York,2012.[31] P. E. Hydon.
Symmetry Methods for Differential Equations: A Beginner's Guide. Cambridge Texts in Applied Mathematics. Cambridge University Press, 2000. [32] N. Ikeda and S. Watanabe.
Stochastic differential equations and diffusion processes. North Holland Publ. Co., 1989. [33] R. Kozlov. Lie point symmetries of Stratonovich stochastic differential equations.
J. Phys. A, 51(50):505201, 15, 2018. [34] H. Kunita. Some extensions of Ito's formula. In J. Azéma and M. Yor, editors,
Séminaire de Probabilités XV 1979/80, volume 850 of
Lecture Notes in Mathematics , pages 118–141. Springer, 1981.[35] H. Kunita.
Stochastic flows and stochastic differential equations. Cambridge University Press, 1990. [36] P. Lescot and J.-C. Zambrini. Isovectors for the Hamilton-Jacobi-Bellman equation, formal stochastic differentials and first integrals in Euclidean quantum mechanics. In R. C. Dalang, M. Dozzi, and F. Russo, editors,
Seminar on Stochastic Analysis, Random Fields and Applications IV , volume 58 of
Progress in Probability, pages 187–202. Birkhäuser, Basel, 2004. [37] P. Lescot and J.-C. Zambrini. Probabilistic deformation of contact geometry, diffusion processes and their quadratures. In R. C. Dalang, M. Dozzi, and F. Russo, editors,
Seminar on Stochastic Analysis, Random Fields and Applications V, volume 59 of
Progress in Probability , pages 203–226. Birkhäuser,Basel, 2008.[38] M. Liao. Invariant diffusion processes under Lie group actions.
Sci. China Math., 62(8):1493–1510, 2019. [39] M. Lorig and R. Sircar. Portfolio optimization under local-stochastic volatility: Coefficient Taylor series approximations and implied Sharpe ratio.
SIAM J. Financial Math. , 7(1):418–447, 2016.[40] S. Luo, J. Shen, and Y. Shen. A Noether theorem for random locations.
ArXiv preprint arXiv:1811.03490, 2018. [41] R. C. Merton. Lifetime portfolio selection under uncertainty: The continuous-time case.
Rev. Econ.Stat. , 51(3):247–257, 1969.[42] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets.
Econometrica , 70(2):583–601,2002.[43] T. Misawa. Conserved quantities and symmetry for stochastic dynamical systems.
Phys. Lett. A ,195(3-4):185–189, 1994.[44] T. Misawa. New conserved quantities derived from symmetry for stochastic dynamical systems.
J.Phys. A , 27(20):L777–L782, 1994.[45] T. Misawa. Conserved quantities and symmetries related to stochastic dynamical systems.
Ann. Inst.Statist. Math. , 51(4):779–802, 1999.[46] B. Øksendal, A. Sulem, and T. Zhang. A stochastic HJB equation for optimal control of forward-backwards SDEs. In
The fascination of probability, statistics and their applications , pages 435–446.Springer, Cham, 2016.[47] P. J. Olver.
Applications of Lie Groups to Differential Equations , volume 107 of
Graduate Texts in Mathematics. Springer-Verlag New York, 2nd edition, 1993. [48] S. G. Peng. Stochastic Hamilton-Jacobi-Bellman equations.
SIAM J. Control Optim. , 30(2):284–304,1992.[49] H. Pham.
Continuous-time Stochastic Control and Optimization with Financial Applications, volume 61 of
Stochastic Modelling and Applied Probability. Springer-Verlag Berlin Heidelberg, 2009. [50] J. W. Pratt. Risk aversion in the small and in the large. In
Uncertainty in Economics, pages 59–79. Elsevier, 1978. [51] N. Privault and J.-C. Zambrini. Stochastic deformation of integrable dynamical systems and random time symmetry.
J. Math. Phys. , 51(8):082104, 19, 2010.[52] J. Qiu. Viscosity solutions of stochastic Hamilton–Jacobi–Bellman equations.
SIAM J. Control Optim. ,56(5):3708–3730, 2018.[53] L. C. G. Rogers.
Optimal Investment , volume 1007 of
SpringerBriefs in Quantitative Finance . Springer-Verlag Berlin Heidelberg, 2013.[54] L. C. G. Rogers and D. Williams.
Diffusions, Markov processes and martingales: Volume 2, Itô calculus. Cambridge University Press, 2000. [55] D. J. Saunders.
The geometry of jet bundles , volume 142 of
London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1989. [56] H. Stephani.
Differential Equations: Their Solution Using Symmetries . Cambridge University Press,1989.[57] M. Thieullen and J.-C. Zambrini. Symmetries in the stochastic calculus of variations.
Probab. Theory Related Fields, 107(3):401–427, 1997. [58] N. Touzi.
Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE , volume 29 of
Fields Institute Monographs . Springer Science & Business Media, 2012.[59] J. Yong and X. Y. Zhou.
Stochastic Controls: Hamiltonian Systems and HJB Equations , volume 43 of
Stochastic Modelling and Applied Probability . Springer-Verlag New York, 1999.[60] J.-C. Zambrini. On the geometry of the Hamilton-Jacobi-Bellman equation.
J. Geom. Mech. , 1(3):369–387, 2009.[61] J.-C. Zambrini. The research program of stochastic deformation (with a view toward geometricmechanics). In
Stochastic analysis: a series of lectures , volume 68 of