Optimal Tracking Portfolio with A Ratcheting Capital Benchmark
arXiv preprint (q-fin.PM)
Lijun Bo∗   Huafu Liao†   Xiang Yu‡

Abstract
This paper studies finite-horizon portfolio management by optimally tracking a ratcheting capital benchmark process. To formulate such an optimal tracking problem, we envision that the fund manager can dynamically inject capital into the portfolio account such that the total capital dominates the nondecreasing benchmark floor process at each intermediate time. The control problem is to minimize the cost of the accumulative capital injection. We first transform the original problem with floor constraints into an unconstrained control problem, however, under a running maximum cost. By identifying a controlled state process with reflection, we next transform the problem further into an equivalent auxiliary problem, which leads to a nonlinear Hamilton-Jacobi-Bellman (HJB) equation with a Neumann boundary condition. By employing the dual transform, the probabilistic representation approach and some stochastic flow arguments, the existence of the unique classical solution to the dual HJB equation is established. The verification theorem is carefully proved, which gives the complete characterization of the primal value function and the feedback optimal portfolio.
Mathematics Subject Classification (2020): 91G10, 93E20, 60H10

Keywords: Ratcheting capital benchmark, optimal tracking, running maximum cost, dual transform, probabilistic representation, verification theorem
Introduction

Portfolio allocation with benchmark performance has been an active research topic in recent years; see some related work in Browne (2000), Gaivoronski et al. (2005), Yao et al. (2006), Strub and Baumann (2018) and many others. The target benchmark is either a prescribed capital process or a fixed portfolio in the financial market, and the goal is to choose the portfolio in a passive way to dynamically track the return or the value of the benchmark process. In practice, both professional and individual investors may measure their portfolio performance using different benchmarks, such as the S&P 500 index, the Goldman Sachs commodity index, special liability, inflation and exchange rates. Some dominant mathematical formulations in the existing studies minimize the difference between the controlled portfolio and the benchmark, either as a linear quadratic control problem using mean-variance analysis or as a utility maximization problem at the terminal time. In the present paper, we aim to enrich the research on optimal tracking by formulating a different tracking procedure and examining the associated control problem. Taking fund portfolio management for instance, we assume that the fund manager can dynamically inject capital into the portfolio account such that the total capital stays above the benchmark process as an American-type floor constraint at each intermediate time. The control problem combines the regular portfolio control and the singular capital injection control, and the optimality is attained when the cost from the accumulative capital injection is minimized.

∗ Email: [email protected], School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China.
† Email: lhfl[email protected], School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China.
‡ Email: [email protected], Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
Market Model and Problem Formulation
Under the filtered probability space $(\Omega, \mathcal{F}, \mathbb{P})$, in which $\mathbb{F} = (\mathcal{F}_t)_{t\in[0,T]}$ satisfies the usual conditions, the process $(W^1, \ldots, W^d)$ is a $d$-dimensional Brownian motion adapted to $\mathbb{F}$. Let $T \in \mathbb{R}_+ := (0,\infty)$ be the finite terminal horizon. The financial market consists of $d$ risky assets whose price processes are described by, for $t \in [0,T]$,
\[
\frac{dS_t^i}{S_t^i} = \mu_i\, dt + \sum_{j=1}^d \sigma_{ij}\, dW_t^j, \qquad i = 1, \ldots, d,
\]
with constant drift $\mu_i \in \mathbb{R}$ and constant volatility $\sigma_{ij} \in \mathbb{R}$ for $i, j = 1, \ldots, d$. To simplify the presentation, we only focus on the zero interest rate $r = 0$. In view of our control problem and underlying processes, the case $r > 0$ can be reduced to the case $r = 0$ by considering the controlled state process discounted by $e^{-rt}$ in the auxiliary problem (3.5). For $t \in [0,T]$, let us denote by $\theta_t^i$ the amount of wealth (as an $\mathbb{F}$-adapted process) that the fund manager allocates in the asset $S^i = (S_t^i)_{t\in[0,T]}$ at time $t$. The self-financing wealth process under the control $\theta = (\theta_t^1, \ldots, \theta_t^d)^\top_{t\in[0,T]}$ is given by
\[
V_t^\theta = v + \int_0^t \theta_s^\top \mu\, ds + \int_0^t \theta_s^\top \sigma\, dW_s, \qquad t \in [0,T], \tag{2.1}
\]
with the initial wealth $V_0^\theta = v \ge 0$, the return vector $\mu = (\mu_1, \ldots, \mu_d)^\top$ and the volatility matrix $\sigma = (\sigma_{ij})_{d\times d}$, which is assumed to be invertible (its inverse is denoted by $\sigma^{-1}$).

In the present paper, we are interested in a passive portfolio selection by a fund manager to optimally track an exogenous ratcheting capital benchmark. In particular, the benchmark process is defined as a nondecreasing process $A = (A_t)_{t\in[0,T]}$ taking the absolutely continuous form
\[
A_t := a + \int_0^t f(s, Z_s)\, ds, \qquad t \in [0,T], \tag{2.2}
\]
where $a \ge 0$ is the initial benchmark level at $t = 0$. The function $f(\cdot,\cdot)$, representing the benchmark growth rate, is required to satisfy the condition:

($\mathbf{A}_f$): the function $f: [0,T]\times\mathbb{R} \to \mathbb{R}_+$ is continuous and, for $t \in [0,T]$, $f(t,\cdot) \in C^2(\mathbb{R})$ with bounded first and second order derivatives.

The stochastic factor process $Z = (Z_t)_{t\in[0,T]}$ appearing in (2.2) satisfies the SDE
\[
dZ_t = \mu_Z(Z_t)\, dt + \sigma_Z(Z_t)\, dW_t^\gamma, \qquad t \in [0,T], \tag{2.3}
\]
with the initial value $Z_0 = z \in \mathbb{R}$, where $W^\gamma = (W_t^\gamma)_{t\in[0,T]}$ is a linear combination of the $d$-dimensional Brownian motion $(W^1, \ldots, W^d)$ with weights $\gamma = (\gamma_1, \ldots, \gamma_d)^\top \in [-1,1]^d$, which itself is a Brownian motion. We enforce the condition on the coefficients $\mu_Z(\cdot)$ and $\sigma_Z(\cdot)$:

($\mathbf{A}_Z$): the coefficients $\mu_Z: \mathbb{R} \to \mathbb{R}$ and $\sigma_Z: \mathbb{R} \to \mathbb{R}$ belong to $C^2(\mathbb{R})$ with bounded first and second order derivatives.

If $Z$ is an OU process or a geometric Brownian motion, the assumption ($\mathbf{A}_Z$) clearly holds. In real life applications, the stochastic factor process $Z$ can be understood as the random inflation rate process affecting the benchmark growth dynamically. The fund manager is required to choose the portfolio in a way such that the fund capital is sufficiently competitive with respect to the growing inflation-driven capital benchmark.
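As a quick numerical illustration of the ratcheting benchmark (2.2), the sketch below simulates $A$ with an Euler scheme when $Z$ is an OU process. All parameter values and the particular growth-rate function `f` are hypothetical choices, not taken from this paper; the point is only that $A$ can never decrease because $f \ge 0$.

```python
import numpy as np

# Illustrative Euler discretization of A_t = a + \int_0^t f(s, Z_s) ds
# with an OU factor Z; all numerical values below are hypothetical.
rng = np.random.default_rng(0)

T, n = 1.0, 1000
dt = T / n
a0, z0 = 1.0, 0.02                       # initial benchmark level and factor value
kappa, z_bar, sigma_z = 2.0, 0.02, 0.01  # OU coefficients: mu_Z(z) = kappa*(z_bar - z)

def f(t, z):
    # benchmark growth rate: continuous and nonnegative, as required by (A_f)
    return 0.05 + max(z, 0.0)

Z = np.empty(n + 1); Z[0] = z0
A = np.empty(n + 1); A[0] = a0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    Z[k + 1] = Z[k] + kappa * (z_bar - Z[k]) * dt + sigma_z * dW
    A[k + 1] = A[k] + f(k * dt, Z[k]) * dt  # ratcheting: increment f*dt >= 0
```

Every increment $A_{k+1} - A_k = f(t_k, Z_{t_k})\,\Delta t$ is nonnegative, so the benchmark only ratchets upward even though the factor $Z$ fluctuates.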
On the other hand, by its definition in (2.3), the stochastic factor process $Z$ can also depend on one or several risky asset price dynamics, which allows us to accommodate the possible scenario when the growing capital benchmark is influenced by some risky asset prices.

Given the nondecreasing benchmark process $A$, unlike the conventional portfolio tracking problem formulated as a linear quadratic control or a utility maximization problem, we aim to choose the portfolio control together with another singular capital injection control such that the total capital outperforms the benchmark $A_t$ as a floor constraint at each intermediate time $t$. This variant of the optimal tracking formulation combines some mathematical features from both the conventional tracking problem and the stochastic control problem with a minimum guarantee constraint, and it is both theoretically and practically interesting and new to the literature.

To be precise, we assume that the fund manager can inject capital into the fund account from time to time whenever necessary such that the total capital dynamically dominates the benchmark floor process $A$. That is, the goal of the fund manager is to optimally track the process $A$ by choosing the regular control $\theta$ as the dynamic portfolio in risky assets and the singular control $C = (C_t)_{t\in[0,T]}$ as the accumulative capital injection such that $C_t + V_t^\theta \ge A_t$ at each intermediate time $t \in [0,T]$. The optimal tracking problem is defined to minimize the expected cost of the discounted accumulative capital injection subject to an American-type floor constraint:
\[
u(a, v, z) := \inf_{(C,\theta)} \mathbb{E}\bigg[ C_0 + \int_0^T e^{-\rho t}\, dC_t \bigg] \quad \text{subject to } A_t \le C_t + V_t^\theta \text{ at each } t \in [0,T], \tag{2.4}
\]
where the constant $\rho \ge 0$ is the discount rate and $C_0 = (a - v)^+$ is the initial injected capital to match the initial benchmark.

Remark 2.1.
For a large initial wealth $v \gg a$ and some special choices of $f(t,z)$ and $(Z_t)_{t\in[0,T]}$, it is possible that the benchmark process $A_t$ is dynamically superhedgeable by a portfolio in risky assets at each time $t \in [0,T]$. That is, there exists a portfolio $\theta^*$ such that $V_t^{\theta^*} \ge A_t$ for any $t \in [0,T]$. Then $C_t^* \equiv 0$ for any $t \in [0,T]$ is an admissible capital injection control, $(0, \theta^*)$ is an optimal control for the problem (2.4), and the value function $u(a,v,z) \equiv 0$. We will characterize the region for $v$ explicitly in Remark 5.3 such that there is no need to inject capital for the problem (2.4).

To make the problem (2.4) more tractable, our first step is to reformulate the problem with dynamic American-type constraints using the observation that, for a fixed control $\theta$, the optimal $C$ is always the smallest adapted right-continuous and nondecreasing process that dominates $A - V^\theta$. Let $\mathcal{U}$ be the set of regular $\mathbb{F}$-adapted control processes $\theta = (\theta_t)_{t\in[0,T]}$ such that (2.1) is well-defined. Then, the following lemma states that this smallest process corresponds to a running maximum process; its proof is given in Section 6.

Lemma 2.2.
For each fixed regular control $\theta$, the optimal singular control $C^*$ satisfies
\[
C_t^* = 0 \vee \sup_{s\in[0,t]} \big( A_s - V_s^\theta \big), \qquad t \in [0,T]. \tag{2.5}
\]
The problem (2.4) with the American-type floor constraints $A_t \le C_t + V_t^\theta$ for all $t \in [0,T]$ admits the equivalent formulation as an unconstrained control problem under a running maximum cost:
\[
u(a, v, z) = (a-v)^+ + \inf_{\theta\in\mathcal{U}} \mathbb{E}\bigg[ \int_0^T e^{-\rho t}\, d\Big( 0 \vee \sup_{s\in[0,t]} \big( A_s - V_s^\theta \big) \Big) \bigg]. \tag{2.6}
\]

Remark 2.3.
To handle the running maximum term in the objective function, one can choose the monotone running maximum process as a controlled state process, as in Barles et al. (1994) and Kröner et al. (2018), to derive the HJB equation with a free boundary condition. Alternatively, one can choose the distance between the underlying process and its running maximum as a state process with reflection, as in Weerasinghe and Zhu.

In this section, we choose to introduce a new controlled state process to replace the current controlled process $V^\theta = (V^\theta_t)_{t\in[0,T]}$ given by (2.1). Let us first define the difference process $D_t := A_t - V^\theta_t + v - a$ with the initial value $D_0 = 0$. For any $x \ge 0$, we then consider its running maximum process $L = (L_t)_{t\in[0,T]}$ defined by
\[
L_t := x \vee \sup_{s\in[0,t]} D_s \ge 0, \qquad t \in [0,T], \tag{3.1}
\]
with the initial value $L_0 = x \ge 0$. The quantity $(a-v)^+ - u(a,v,z)$, with $u(a,v,z)$ given in (2.6), coincides with the value of the auxiliary control problem
\[
\sup_{\theta\in\mathcal{U}} \mathbb{E}\bigg[ -\int_0^T e^{-\rho s}\, dL_s \bigg], \tag{3.2}
\]
when we set the initial level $L_0 = x = (v-a)^+$. We can now introduce our new controlled state process $X = (X_t)_{t\in[0,T]}$ for the problem (3.2), defined as the reflected process $X_t := L_t - D_t$ for $t \in [0,T]$, which satisfies the SDE, for $t \in [0,T]$,
\[
X_t = -\int_0^t f(s,Z_s)\, ds + \int_0^t \theta_s^\top\mu\, ds + \int_0^t \theta_s^\top\sigma\, dW_s + L_t, \tag{3.3}
\]
with the initial value $X_0 = x \ge 0$. In particular, the running maximum process $L_t$ increases if and only if $X_t = 0$, i.e., $L_t = D_t$. In view of the Skorokhod problem, it satisfies the representation
\[
L_t = x + \int_0^t \mathbf{1}_{\{X_s = 0\}}\, dL_s, \qquad t \in [0,T].
\]
We shall change the notation from $L_t$ to $L^X_t$ from this point onwards to emphasize its dependence on the new state process $X$ given in (3.3). Moreover, the stochastic factor process $Z = (Z_t)_{t\in[0,T]}$ defined in (2.3) is chosen as another state process.

To simplify the presentation, we denote the variable domain $\mathcal{D}_T := [0,T]\times\mathbb{R}\times[0,\infty)$. Let $\mathcal{U}_t$ be the set of admissible controls taking the feedback form $\theta_s = \theta(s, Z_s, X_s)$ for $s \in [t,T]$, where $\theta: \mathcal{D}_T \to \mathbb{R}^d$ is a measurable function such that the following reflected SDE has a weak solution:
\[
X_t = -\int_0^t f(s,Z_s)\, ds + \int_0^t \theta(s,Z_s,X_s)^\top\mu\, ds + \int_0^t \theta(s,Z_s,X_s)^\top\sigma\, dW_s + L^X_t, \tag{3.4}
\]
with $X_0 = x \ge 0$, where $L^X_t = x + \int_0^t \mathbf{1}_{\{X_s = 0\}}\, dL^X_s$ is a continuous, nonnegative and nondecreasing process, which increases only when the state process $X_t$ hits the level $0$. For $(t,z,x) \in \mathcal{D}_T$, the dynamic version of the auxiliary stochastic control problem (3.2) is given by
\[
v(t,z,x) := \sup_{\theta\in\mathcal{U}_t} \mathbb{E}_{t,z,x}\bigg[ -\int_t^T e^{-\rho s}\, dL^X_s \bigg], \tag{3.5}
\]
where $\mathbb{E}_{t,z,x}[\,\cdot\,] := \mathbb{E}[\,\cdot\,|\, Z_t = z,\, X_t = x]$ denotes the conditional expectation and the underlying state processes $(Z_t)_{t\in[0,T]}$ and $(X_t)_{t\in[0,T]}$ are given in (2.3) and (3.4), respectively. It is important to note the equivalence $v(0,z,(v-a)^+) = (a-v)^+ - u(a,v,z)$, i.e., we have
\[
u(a,v,z) =
\begin{cases}
(a-v) - v(0,z,0), & \text{if } a \ge v,\\
-v(0,z,v-a), & \text{if } a < v,
\end{cases}
\]
where $u(a,v,z)$ is the value function of the original optimal tracking problem defined by (2.4), and $a$ and $v$ represent the initial benchmark level and the initial wealth, respectively. Starting from this section, we mainly focus on the auxiliary control problem (3.5) and seek to obtain its optimal portfolio in feedback form. We first present some properties of the value function $v$ on $\mathcal{D}_T$ defined in (3.5). The proof is standard following the solution representation of the Skorokhod problem and is hence omitted.

Lemma 3.1.
For $(t,z,x) \in \mathcal{D}_T$, the value function $v(t,z,x)$ defined by (3.5) is nondecreasing in $x \ge 0$. Moreover, for all $(t,z) \in [0,T]\times\mathbb{R}$, we have $|v(t,z,x_1) - v(t,z,x_2)| \le e^{-\rho t}|x_1 - x_2|$ for all $x_1, x_2 \ge 0$.

Remark 3.2.
For $(t,z) \in [0,T]\times\mathbb{R}$, if $x \mapsto v(t,z,x)$ is $C^1([0,\infty))$, Lemma 3.1 implies that only the following range of the partial derivative $v_x(t,z,x)$ needs to be considered: $0 \le v_x(t,z,x) \le e^{-\rho t} \le 1$ for all $(t,z,x) \in \mathcal{D}_T$. Hereafter, we shall use $v_x$, $v_t$, $v_{xx}$, $v_{zx}$ and $v_{zz}$ to denote the (first, second order or mixed) partial derivatives of the value function $v$ with respect to its arguments, whenever they exist.

By some heuristic arguments of dynamic programming, we can show that $v$ satisfies the following HJB equation:
\[
\begin{cases}
v_t + \sup_{\theta\in\mathbb{R}^d}\Big[ v_x\,\theta^\top\mu + \frac{1}{2} v_{xx}\,\theta^\top\sigma\sigma^\top\theta + v_{xz}\,\sigma_Z(z)\,\theta^\top\sigma\gamma \Big] + v_z\,\mu_Z(z) + \frac{\sigma_Z(z)^2}{2} v_{zz} - f(t,z) v_x = \rho v, & (t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+;\\[2pt]
v(T,z,x) = 0, & \forall (z,x) \in \mathbb{R}\times[0,\infty);\\[2pt]
v_x(t,z,0) = 1, & \forall (t,z) \in [0,T]\times\mathbb{R},
\end{cases} \tag{3.6}
\]
in which the Neumann boundary condition $v_x(t,z,0) = 1$ stems from the martingale optimality condition, because the process $L^X_t$ increases whenever the process $X_t$ visits the value $0$. Suppose that $v_{xx} < 0$ on $[0,T)\times\mathbb{R}\times\mathbb{R}_+$; the feedback optimal control determined by (3.6) is then given by
\[
\theta^*(t,z,x) = -(\sigma\sigma^\top)^{-1}\, \frac{v_x(t,z,x)\,\mu + v_{xz}(t,z,x)\,\sigma_Z(z)\,\sigma\gamma}{v_{xx}(t,z,x)}, \qquad (t,z,x) \in \mathcal{D}_T. \tag{3.7}
\]
Plugging (3.7) into the HJB equation (3.6), we have for $(t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+$ that
\[
v_t - \rho v - \alpha \frac{v_x^2}{v_{xx}} + \frac{\sigma_Z(z)^2}{2}\Big( v_{zz} - \frac{v_{xz}^2}{v_{xx}} \Big) - \phi(z)\, \frac{v_x v_{xz}}{v_{xx}} + \mu_Z(z) v_z - f(t,z) v_x = 0, \tag{3.8}
\]
where the coefficients are given by
\[
\alpha := \frac{1}{2}\, \mu^\top(\sigma\sigma^\top)^{-1}\mu, \qquad \phi(z) := \sigma_Z(z)\, \mu^\top(\sigma\sigma^\top)^{-1}\sigma\gamma, \qquad z \in \mathbb{R}. \tag{3.9}
\]
Note that the HJB equation (3.6) is fully nonlinear. To study the existence of a classical solution to (3.6), we will first apply a heuristic dual transform to linearize (3.6), and then establish the existence and uniqueness of a classical solution to the dual PDE using the probabilistic representation approach and stochastic flow analysis in the next section.

Dual Transform and Probabilistic Representation
To reduce the challenge from the nonlinear PDE, we choose to employ the Legendre-Fenchel dual transform on the primal HJB equation (3.6). We start by assuming that the value function $v$ satisfies $v \in C^{1,2,2}([0,T)\times\mathbb{R}\times[0,\infty)) \cap C(\mathcal{D}_T)$ and $v_{xx} < 0$ on $[0,T)\times\mathbb{R}\times\mathbb{R}_+$, which will be verified later. For $(t,z,y) \in [0,T]\times\mathbb{R}\times(0,1]$, let us consider the dual transform
\[
\hat{v}(t,z,y) := \sup_{x>0}\{ v(t,z,x) - xy \} \quad \text{and} \quad x^*(t,z,y) := \big(v_x(t,z,\cdot)\big)^{-1}(y), \tag{4.1}
\]
where $y \mapsto (v_x(t,z,\cdot))^{-1}(y)$ denotes the inverse function of $x \mapsto v_x(t,z,x)$, and $x^* = x^*(t,z,y)$ in (4.1) satisfies the equation
\[
v_x(t,z,x^*) = y, \qquad (t,z) \in [0,T]\times\mathbb{R}. \tag{4.2}
\]
On the other hand, in view of Lemma 3.1 and Remark 3.2, the variable $y$ in fact only takes values in the set $(0,1]$, and for $(t,z,y) \in [0,T]\times\mathbb{R}\times(0,1]$,
\[
\hat{v}(t,z,y) = v(t,z,x^*) - x^* y. \tag{4.3}
\]
Taking the derivative w.r.t. $y$ on both sides of (4.3), we deduce that
\[
\hat{v}_y(t,z,y) = v_x(t,z,x^*)\, x^*_y - x^*_y\, y - x^* = y\, x^*_y - x^*_y\, y - x^* = -x^*. \tag{4.4}
\]
By taking the derivative w.r.t. $y$ on both sides of (4.2), we also have $v_{xx}(t,z,x^*)\, x^*_y = 1$ and hence $x^*_y = \frac{1}{v_{xx}(t,z,x^*)}$. Because of (4.4), we can obtain that
\[
\hat{v}_{yy}(t,z,y) = -x^*_y = -\frac{1}{v_{xx}(t,z,x^*)}, \qquad x^*_z = -\frac{v_{xz}(t,z,x^*)}{v_{xx}(t,z,x^*)}. \tag{4.5}
\]
It follows from (4.2) and (4.3) that
\[
\hat{v}_t(t,z,y) = v_t(t,z,x^*), \qquad \hat{v}_z(t,z,y) = v_z(t,z,x^*), \qquad \hat{v}_{zz}(t,z,y) = v_{zz}(t,z,x^*) - \frac{v_{xz}(t,z,x^*)^2}{v_{xx}(t,z,x^*)}. \tag{4.6}
\]
Moreover, by the second equality in (4.5) and (4.6), we further have that
\[
\hat{v}_{yz}(t,z,y) = v_{xz}(t,z,x^*)\, x^*_y = \frac{v_{xz}(t,z,x^*)}{v_{xx}(t,z,x^*)}. \tag{4.7}
\]
By virtue of (3.8) and (4.3), we obtain that
\[
v_t(t,z,x^*) - \rho v(t,z,x^*) - \alpha \frac{v_x(t,z,x^*)^2}{v_{xx}(t,z,x^*)} - \frac{\sigma_Z(z)^2}{2}\, \frac{v_{xz}(t,z,x^*)^2}{v_{xx}(t,z,x^*)} - \phi(z)\, \frac{v_x(t,z,x^*)\, v_{xz}(t,z,x^*)}{v_{xx}(t,z,x^*)} + \mu_Z(z)\, v_z(t,z,x^*) + \frac{\sigma_Z(z)^2}{2}\, v_{zz}(t,z,x^*) - f(t,z)\, v_x(t,z,x^*) = 0. \tag{4.8}
\]
Plugging (4.2), (4.5), (4.6) and (4.7) into (4.8), we can derive that, for $(t,z,y) \in [0,T)\times\mathbb{R}\times(0,1]$,
\[
\hat{v}_t(t,z,y) - \rho\hat{v}(t,z,y) + \rho y\, \hat{v}_y(t,z,y) + \alpha y^2\, \hat{v}_{yy}(t,z,y) + \mu_Z(z)\,\hat{v}_z(t,z,y) + \frac{\sigma_Z(z)^2}{2}\,\hat{v}_{zz}(t,z,y) - \phi(z)\, y\, \hat{v}_{yz}(t,z,y) - f(t,z)\, y = 0. \tag{4.9}
\]
We next derive the terminal condition and the boundary condition of the linear PDE (4.9). By the terminal condition $v(T,z,x) = 0$ of the HJB equation (3.6), it follows that
\[
\hat{v}(T,z,y) = \sup_{x>0}\{ v(T,z,x) - xy \} = \sup_{x>0}\{ -xy \} = 0, \qquad (z,y) \in \mathbb{R}\times(0,1]. \tag{4.10}
\]
Note that $x^*_y = \frac{1}{v_{xx}(t,z,x^*)} <$
$0$, and for each $(t,z) \in [0,T]\times\mathbb{R}$, the map $y \mapsto x^*(t,z,y) := (v_x(t,z,\cdot))^{-1}(y)$ is one-to-one. Moreover, by the Neumann boundary condition of the HJB equation (3.6), we deduce from (4.2) that $v_x(t,z,0) = 1$ and $x^*(t,z,1) = 0$. Therefore, in view of (4.4), for all $(t,z) \in [0,T]\times\mathbb{R}$,
\[
\hat{v}_y(t,z,1) = -x^*(t,z,1) = 0. \tag{4.11}
\]
In summary, the HJB equation (3.6) can be transformed into the linear dual PDE of $\hat{v}$:
\[
\begin{cases}
\hat{v}_t + \alpha y^2 \hat{v}_{yy} + \rho y \hat{v}_y - \rho\hat{v} - \phi(z)\, y\, \hat{v}_{yz} + \mu_Z(z)\hat{v}_z + \frac{\sigma_Z(z)^2}{2}\hat{v}_{zz} - f(t,z)\, y = 0, & (t,z,y) \in [0,T)\times\mathbb{R}\times(0,1);\\[2pt]
\hat{v}(T,z,y) = 0, & \forall (z,y) \in \mathbb{R}\times[0,1];\\[2pt]
\hat{v}_y(t,z,1) = 0, & \forall (t,z) \in [0,T]\times\mathbb{R}.
\end{cases} \tag{4.12}
\]
We next study the existence and uniqueness of a classical solution to the Neumann boundary problem (4.12), with the extra condition that $\hat{v}_{yy} \ge 0$ on $[0,T)\times\mathbb{R}\times(0,1)$, using the probabilistic representation approach. To this purpose, for $(t,z,u) \in \mathcal{D}_T$, let us define the function
\[
h(t,z,u) := -\mathbb{E}\bigg[ \int_t^T e^{-\rho s}\, f(s, M_s^{t,z})\, e^{-R_s^{t,u}}\, ds \bigg], \tag{4.13}
\]
where the process $(M_s^{t,z})_{s\in[t,T]}$ with $(t,z) \in [0,T]\times\mathbb{R}$ satisfies the SDE, for $s \in [t,T]$,
\[
M_s^{t,z} = z + \int_t^s \mu_Z(M_r^{t,z})\, dr + \varrho \int_t^s \sigma_Z(M_r^{t,z})\, dB_r^1 + \sqrt{1-\varrho^2} \int_t^s \sigma_Z(M_r^{t,z})\, dB_r^2. \tag{4.14}
\]
The processes $B^1 = (B_t^1)_{t\in[0,T]}$ and $B^2 = (B_t^2)_{t\in[0,T]}$ are two independent standard Brownian motions, and the correlation coefficient $\varrho$ is given by
\[
\varrho := \frac{(\sigma^{-1}\mu)^\top}{|\sigma^{-1}\mu|}\, \gamma. \tag{4.15}
\]
Moreover, the process $(R_s^{t,u})_{s\in[t,T]}$ with $(t,u) \in [0,T]\times[0,\infty)$ is a reflected Brownian motion with drift defined by
\[
R_s^{t,u} := u + \sqrt{2\alpha} \int_t^s dB_r^1 + \int_t^s (\alpha-\rho)\, dr + \int_t^s dL_r^R \ge 0, \qquad s \in [t,T], \tag{4.16}
\]
where $s \mapsto L_s^R$ is a continuous and nondecreasing process that increases only on $\{s \in [t,T];\, R_s^{t,u} = 0\}$ with $L_t^R = 0$. By the solution representation of the Skorokhod problem, we obtain that, for $(s,u) \in [t,T]\times[0,\infty)$,
\[
L_s^R = 0 \vee \Big\{ -u + \max_{r\in[t,s]} \big[ -\sqrt{2\alpha}\,(B_r^1 - B_t^1) - (\alpha-\rho)(r-t) \big] \Big\}. \tag{4.17}
\]
It follows from assumptions ($\mathbf{A}_f$) and ($\mathbf{A}_Z$) that, for all $(t,z,u) \in \mathcal{D}_T$,
\[
|h(t,z,u)| = \mathbb{E}\bigg[ \int_t^T e^{-\rho s}\, f(s, M_s^{t,z})\, e^{-R_s^{t,u}}\, ds \bigg] \le C\, \mathbb{E}\bigg[ \int_t^T e^{-\rho s}\big(1 + |M_s^{t,z}|\big)\, ds \bigg] \le C(T-t) + C(T-t)\, \mathbb{E}\Big[ \sup_{s\in[t,T]} |M_s^{t,z}| \Big] \le C(T-t)\big(1 + |z|\big), \tag{4.18}
\]
for some constant $C = C_f > 0$. Hence, the function $h$ given in (4.13) is well-defined. We study the regularity of the function $h$ in the next result, whose proof is reported in Section 6.

Proposition 4.1.
Let assumptions ($\mathbf{A}_f$) and ($\mathbf{A}_Z$) hold. Then $h \in C^{1,2,2}(\mathcal{D}_T)$. Moreover, for $(t,z,u) \in \mathcal{D}_T$, we have
\[
h_u(t,z,u) = \mathbb{E}\bigg[ \int_t^T e^{-\rho s}\, f(s, M_s^{t,z})\, e^{-R_s^{t,u}}\, \mathbf{1}_{\{\max_{r\in[t,s]}[-\sqrt{2\alpha}(B_r^1 - B_t^1) - (\alpha-\rho)(r-t)] \le u\}}\, ds \bigg] = \mathbb{E}\bigg[ \int_t^{\tau_u^t \wedge T} e^{-\rho s}\, f(s, M_s^{t,z})\, e^{-R_s^{t,u}}\, ds \bigg], \tag{4.19}
\]
where $\tau_u^t := \inf\{ s \ge t:\, -\sqrt{2\alpha}(B_s^1 - B_t^1) - (\alpha-\rho)(s-t) = u \}$ (we adopt the convention $\inf\emptyset = +\infty$).

Building upon Proposition 4.1, we have the next important auxiliary result, whose proof is provided in Section 6.
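The representation (4.13) is amenable to direct Monte Carlo evaluation: simulate the factor $M^{t,z}$ from (4.14) and build the reflected path $R^{t,u}$ pathwise from the Skorokhod formula (4.17). The sketch below is illustrative only: the coefficient choices ($\mu_Z$, $\sigma_Z$, $f$, $\rho$, $\alpha$, $\varrho$) and the Euler discretization are hypothetical assumptions, not specifications taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of h(t,z,u) = -E[ \int_t^T e^{-rho*s} f(s, M_s) e^{-R_s} ds ],
# with R reflected at 0 via the Skorokhod formula; all parameters are hypothetical.
rng = np.random.default_rng(1)

T, n, n_paths = 1.0, 200, 500
rho, alpha, varrho = 0.05, 0.08, 0.3     # discount rate, alpha from (3.9), correlation
mu_Z = lambda z: 2.0 * (0.02 - z)        # OU drift (assumption)
sigma_Z = lambda z: 0.01                 # constant volatility (assumption)
f = lambda t, z: 0.05 + np.maximum(z, 0.0)

def h_mc(t, z, u):
    dt = (T - t) / n
    s = t + dt * np.arange(1, n + 1)
    total = 0.0
    for _ in range(n_paths):
        dB1 = rng.normal(0.0, np.sqrt(dt), n)
        dB2 = rng.normal(0.0, np.sqrt(dt), n)
        # factor M driven by varrho*B1 + sqrt(1 - varrho^2)*B2, cf. (4.14)
        M = np.empty(n); m = z
        for k in range(n):
            m += mu_Z(m) * dt + sigma_Z(m) * (varrho * dB1[k]
                 + np.sqrt(1.0 - varrho**2) * dB2[k])
            M[k] = m
        # free part of R (drift alpha - rho, volatility sqrt(2*alpha)),
        # then the running reflection term from (4.17)
        free = np.sqrt(2.0 * alpha) * np.cumsum(dB1) + (alpha - rho) * (s - t)
        L = np.maximum(0.0, np.maximum.accumulate(-free) - u)
        R = u + free + L                 # reflected path, stays nonnegative
        total += -np.sum(np.exp(-rho * s) * f(s, M) * np.exp(-R)) * dt
    return total / n_paths

val = h_mc(0.0, 0.02, 0.5)
```

Since $f > 0$ and $e^{-R} > 0$, any such estimate is strictly negative, consistent with $h \le 0$ in (4.13).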
Theorem 4.2.
Suppose that ($\mathbf{A}_f$) and ($\mathbf{A}_Z$) hold. Then the function $h$ defined in (4.13) solves the Neumann boundary problem
\[
\begin{cases}
h_t + \alpha h_{uu} + (\alpha-\rho) h_u + \phi(z) h_{uz} + \mu_Z(z) h_z + \frac{\sigma_Z(z)^2}{2} h_{zz} = f(t,z)\, e^{-u-\rho t}, & (t,z,u) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+;\\[2pt]
h(T,z,u) = 0, & \forall (z,u) \in \mathbb{R}\times[0,\infty);\\[2pt]
h_u(t,z,0) = 0, & \forall (t,z) \in [0,T]\times\mathbb{R}.
\end{cases} \tag{4.20}
\]
On the other hand, if a function $h$ defined on $\mathcal{D}_T$ with polynomial growth is a classical solution of the Neumann boundary problem (4.20), then $h$ admits the representation (4.13).

Remark 4.3.
Under assumptions ($\mathbf{A}_f$) and ($\mathbf{A}_Z$), it follows from (4.19), together with (6.6) in Section 6, that for all $(t,z,u) \in \mathcal{D}_T$,
\[
h_u(t,z,u) + h_{uu}(t,z,u) = \mathbb{E}\bigg[ e^{-\rho\tau_u^t}\, f\big(\tau_u^t, M_{\tau_u^t}^{t,z}\big)\, \Gamma(\tau_u^t) + 2\int_t^{\tau_u^t} e^{-\rho s}\, f(s, M_s^{t,z})\, e^{-R_s^{t,u}}\, ds \bigg], \tag{4.21}
\]
where the stopping time $\tau_u^t$ is given in Proposition 4.1 and the function $\Gamma(t)$ for $t \in [0,T]$ is given by (6.5) in Section 6. Note that $f > 0$ by assumption ($\mathbf{A}_f$). Then, we have from (4.21) that $h_{uu} + h_u \ge 0$, where the strict inequality holds when $(t,z,u) \in [0,T)\times\mathbb{R}\times[0,\infty)$.

The well-posedness of the Neumann problem (4.12) is now given in the next result, whose proof is postponed to Section 6.
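The Neumann problem (4.20) for $h$ and the dual PDE (4.12) for $\hat{v}$ are linked by the change of variables $y = e^{-u}$, $\hat{v}(t,z,y) = e^{\rho t}\, h(t,z,-\ln y)$. As a sanity check of this correspondence, the following sympy sketch verifies the resulting operator identity on an arbitrary smooth test function; the test function and the frozen coefficient symbols are assumptions made purely for the check (neither operator differentiates the coefficients, so treating them as constants is harmless).

```python
import sympy as sp

t, z, y, u, rho, alpha = sp.symbols('t z y u rho alpha', positive=True)
phi, muZ, sigZ = sp.symbols('phi mu_Z sigma_Z')   # coefficients frozen as symbols
F = sp.Function('f')(t, z)                        # generic growth rate f(t, z)

def dual_op(V):
    # left-hand side of the dual PDE (4.12) acting on V(t, z, y)
    return (sp.diff(V, t) + alpha * y**2 * sp.diff(V, y, 2)
            + rho * y * sp.diff(V, y) - rho * V
            - phi * y * sp.diff(V, y, z) + muZ * sp.diff(V, z)
            + sp.Rational(1, 2) * sigZ**2 * sp.diff(V, z, 2) - F * y)

def neumann_op(H):
    # left-hand side minus right-hand side of (4.20) acting on H(t, z, u)
    return (sp.diff(H, t) + alpha * sp.diff(H, u, 2)
            + (alpha - rho) * sp.diff(H, u) + phi * sp.diff(H, u, z)
            + muZ * sp.diff(H, z)
            + sp.Rational(1, 2) * sigZ**2 * sp.diff(H, z, 2)
            - F * sp.exp(-u - rho * t))

# arbitrary smooth test function with nonvanishing mixed derivatives
H = t * sp.sin(u) * sp.exp(z) + z**2 * sp.exp(-2 * u)

lhs = dual_op(sp.exp(rho * t) * H.subs(u, -sp.log(y)))
rhs = sp.exp(rho * t) * neumann_op(H).subs(u, -sp.log(y))
residual = sp.simplify(sp.expand(lhs - rhs))
```

A zero residual confirms that the two operators agree under $u = -\ln y$, including the inhomogeneous terms $f(t,z)\,e^{-u-\rho t}$ and $f(t,z)\,y$.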
Corollary 4.4.
Let the assumptions of Theorem 4.2 hold. The Neumann problem (4.12) admits a unique classical solution $\hat{v}$ such that, for $(t,z,y) \in [0,T]\times\mathbb{R}\times(0,1]$,
\[
\big|\hat{v}(t,z,y)\big| \le C\big( 1 + |z|^p + |\ln y|^p \big), \quad \text{for some } p > 0, \tag{4.22}
\]
and the function
\[
h(t,z,u) := e^{-\rho t}\, \hat{v}\big(t,z,e^{-u}\big), \qquad (t,z,u) \in \mathcal{D}_T, \tag{4.23}
\]
has the probabilistic representation (4.13). Moreover, for each $(t,z) \in [0,T)\times\mathbb{R}$, the solution $(0,1] \ni y \mapsto \hat{v}(t,z,y)$ is strictly convex.

Optimal Portfolio and Verification Theorem
In the previous section, Corollary 4.4 gives the existence and uniqueness of the classical solution $\hat{v}(t,z,y)$ for $(t,z,y) \in [0,T]\times\mathbb{R}\times(0,1]$ to the dual PDE (4.12). We next recover the classical solution $v(t,z,x)$ of the primal HJB equation (3.8) via $\hat{v}(t,z,y)$ and prove the verification theorem of our original stochastic control problem (3.5).

Theorem 5.1 (Verification theorem). Let assumptions ($\mathbf{A}_f$) and ($\mathbf{A}_Z$) hold. We have that:

(i) The primal HJB equation (3.8) admits a solution $v \in C^{1,2,2}([0,T)\times\mathbb{R}\times[0,\infty)) \cap C(\mathcal{D}_T)$. Moreover, for $(t,z,x) \in \mathcal{D}_T$, the solution $v$ of the HJB equation (3.8) can be written as
\[
v(t,z,x) =
\begin{cases}
\inf_{y\in(0,1]}\{ \hat{v}(t,z,y) + xy \}, & \text{if } (t,z,x) \in \mathcal{O}_T \text{ or } x = 0,\\[2pt]
0, & \text{if } (t,z,x) \in \mathcal{O}_T^c \cap \mathcal{D}_T,
\end{cases} \tag{5.1}
\]
where the region $\mathcal{O}_T$ in (5.1) is given by
\[
\mathcal{O}_T := \{ (t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+;\ x \in (0, \xi(t,z)) \}, \tag{5.2}
\]
and the function $\xi(t,z)$ with $(t,z) \in [0,T)\times\mathbb{R}$ is defined by
\[
\xi(t,z) := \mathbb{E}\bigg[ \int_t^T e^{-\rho(s-t)}\, f(s, M_s^{t,z})\, e^{-\sqrt{2\alpha}(B_s^1 - B_t^1) - (\alpha-\rho)(s-t)}\, ds \bigg], \tag{5.3}
\]
where the process $(M_s^{t,z})_{s\in[t,T]}$ with $(t,z) \in [0,T]\times\mathbb{R}$ is the strong solution of the SDE (4.14). Here, for $(t,z,y) \in [0,T]\times\mathbb{R}\times(0,1]$, the function $\hat{v}(t,z,y) = e^{\rho t}\, h(t,z,-\ln y)$ solves the dual PDE (4.12) with Neumann boundary condition.

(ii) Define the feedback control function, for $(t,z,x) \in \mathcal{D}_T$, by
\[
\theta^*(t,z,x) :=
\begin{cases}
\displaystyle -(\sigma\sigma^\top)^{-1}\, \frac{v_x(t,z,x)\,\mu + v_{xz}(t,z,x)\,\sigma_Z(z)\,\sigma\gamma}{v_{xx}(t,z,x)}, & \text{if } (t,z,x) \in \mathcal{O}_T \text{ or } x = 0,\\[8pt]
\displaystyle (\sigma\sigma^\top)^{-1}\mu \lim_{y\downarrow 0}\big[ y\,\hat{v}_{yy}(t,z,y) \big] - (\sigma\sigma^\top)^{-1}\sigma_Z(z)\,\sigma\gamma \lim_{y\downarrow 0}\hat{v}_{yz}(t,z,y), & \text{if } (t,z,x) \in \mathcal{O}_T^c \cap \mathcal{D}_T.
\end{cases} \tag{5.4}
\]
For the processes $(Z, X) = (Z_t, X_t)_{t\in[0,T]}$ given by (3.4), define $\theta_t^* := \theta^*(t, Z_t, X_t)$ for $t \in [0,T]$. Then $\theta^* = (\theta_t^*)_{t\in[0,T]} \in \mathcal{U}_t$ is an optimal strategy. Moreover, for all $\theta \in \mathcal{U}_t$, it holds that $\tilde{J}(\theta; t,z,x) \le e^{-\rho t} v(t,z,x)$, where $(t,z,x) \in [0,T)\times\mathbb{R}\times[0,\infty)$.

Remark 5.2.
We explain here the role of the function $\xi(t,z)$ defined by (5.3) in Theorem 5.1. In fact, for $(t,z) \in [0,T)\times\mathbb{R}$ and $x \ge \xi(t,z)$, it follows from Theorem 5.1-(i) that the value function $v(t,z,x) = 0$. Then, by Theorem 5.1-(ii), we have that, for the strategy $\theta^* \in \mathcal{U}_t$ given by (5.4),
\[
\mathbb{E}_{t,z,x}\bigg[ -\int_t^T e^{-\rho s}\, dL_s^{X^*} \bigg] = 0,
\]
where the process $L^{X^*} = (L_t^{X^*})_{t\in[0,T]}$ is the reflection term of the process $X^* = (X_t^*)_{t\in[0,T]}$ given by (3.4) with $\theta$ replaced by $\theta^*$. Integration by parts then implies that $e^{-\rho T} L_T^{X^*} + \rho\int_t^T e^{-\rho s} L_s^{X^*}\, ds = e^{-\rho t} x$, $\mathbb{P}$-a.s., and hence $L_T^{X^*} = L_t^{X^*} = x$, $\mathbb{P}$-a.s., because $\xi(t,z) > 0$ for $(t,z) \in [0,T)\times\mathbb{R}$. Therefore, with the strategy $\theta^* \in \mathcal{U}_t$, the (nonnegative) process $X^*$ is given by
\[
X_t^* = x - \int_0^t f(s, Z_s)\, ds + \int_0^t (\theta_s^*)^\top \mu\, ds + \int_0^t (\theta_s^*)^\top \sigma\, dW_s.
\]
On the other hand, for $0 \le x < \xi(t,z)$, we have that $v_x(t,z,x) > 0$ and hence $v(t,z,x) < 0$. This implies that, with this initial value $x$, the reflection term $L_t^{X^*}$ is strictly increasing in $t \in [0,T)$.

Remark 5.3.
Recall the equivalence $u(a,v,z) = -v(0,z,v-a)$ when $v > a$, where $u(a,v,z)$ is the value function of the original optimal tracking problem (2.4). According to Remark 5.2 above, if the initial wealth $v$ is sufficiently large such that $v - a > \xi(0,z)$ for the given benchmark growth rate function $f(\cdot,\cdot)$ and the coefficients $\mu_Z(\cdot)$, $\sigma_Z(\cdot)$ in the definition of the stochastic factor process $Z = (Z_t)_{t\in[0,T]}$, we can conclude that $u(a,v,z) = 0$, the optimal singular control satisfies $C_t^* \equiv 0$ for $t \in [0,T]$, and $A_t$ is dynamically superhedgeable in the sense that $A_t \le V_t^{\theta^*}$ for $t \in [0,T]$.

Proof of Theorem 5.1. We first focus on the proof of (i). For this purpose, recall the function $\xi(t,z)$ with $(t,z) \in [0,T)\times\mathbb{R}$ defined by (5.3). Then, by assumption ($\mathbf{A}_f$), we have $\xi(t,z) > 0$ for all $(t,z) \in [0,T)\times\mathbb{R}$. Moreover, in view of Proposition 4.1 and Theorem 4.2, for $(t,z) \in [0,T]\times\mathbb{R}$, $\xi(t,z) = -\lim_{y\to 0}\hat{v}_y(t,z,y)$, and the derivative $\hat{v}_y$ satisfies, for $(t,z) \in [0,T]\times\mathbb{R}$,
\[
\begin{aligned}
&\hat{v}_y(t,z,1) = -e^{\rho t}\, h_u(t,z,0) = 0, \qquad \hat{v}_y(T,z,y) = -y^{-1} e^{\rho T}\, h_u(T,z,-\ln y) = 0, \quad y \in (0,1],\\
&\hat{v}_{yy}(t,z,e^{-u}) = e^{\rho t + 2u}\big( h_u(t,z,u) + h_{uu}(t,z,u) \big) \ge 0, \quad u \in [0,\infty) \ (\text{with strict inequality for } t \in [0,T)),\\
&\lim_{y\to 0}\hat{v}_y(t,z,y) = \lim_{u\to+\infty}\hat{v}_y(t,z,e^{-u}) = -\lim_{u\to+\infty} e^{\rho t + u}\, h_u(t,z,u) = -\xi(t,z),\\
&\lim_{y\to 0}\hat{v}_{yy}(t,z,y) = \lim_{u\to+\infty}\hat{v}_{yy}(t,z,e^{-u}) = \lim_{u\to+\infty} e^{\rho t + 2u}\big( h_u(t,z,u) + h_{uu}(t,z,u) \big) = +\infty.
\end{aligned}
\]
According to the definitions (5.2) and (5.3), the region $\mathcal{O}_T$ has a boundary that is at least $C^1$. We next consider the original HJB equation (3.6) restricted to the domain $(t,z,x) \in \mathcal{O}_T$:
\[
\begin{cases}
v_t + \sup_{\theta\in\mathbb{R}^d}\Big[ v_x\theta^\top\mu + \frac{1}{2}v_{xx}\theta^\top\sigma\sigma^\top\theta + v_{xz}\sigma_Z(z)\theta^\top\sigma\gamma \Big] + v_z\mu_Z(z) + \frac{\sigma_Z(z)^2}{2}v_{zz} - f(t,z)v_x = \rho v, & (t,z,x) \in \mathcal{O}_T;\\[2pt]
v_x(t,z,0) = 1, & \forall (t,z) \in [0,T)\times\mathbb{R}.
\end{cases} \tag{5.5}
\]
First of all, for $(t,z,x) \in \mathcal{O}_T$, let us define $y^* = y^*(t,z,x) \in (0,1]$ as the solution of
\[
\hat{v}_y(t,z,y^*) = -x. \tag{5.6}
\]
Thanks to (5.6), we have that
\[
v(t,z,x) = \inf_{y\in(0,1]}\{\hat{v}(t,z,y) + xy\} = \hat{v}\big(t,z,y^*(t,z,x)\big) + x\, y^*(t,z,x), \qquad (t,z,x) \in \mathcal{O}_T. \tag{5.7}
\]
Note that $(0,1] \ni y \mapsto \hat{v}_y(t,z,y)$ is strictly increasing for fixed $(t,z) \in [0,T]\times\mathbb{R}$, together with $\hat{v}_y(t,z,1) = 0$ and $\lim_{y\to 0}\hat{v}_y(t,z,y) = -\xi(t,z)$; hence $x \mapsto y^*(t,z,x)$ is decreasing, $\lim_{x\to 0} y^*(t,z,x) = 1$, and $\lim_{x\to\xi(t,z)} y^*(t,z,x) = 0$. It follows from the implicit function theorem that $y^*$ is $C^1$ on $\mathcal{O}_T$. Therefore $v$ in (5.7) is well defined, and it is $C^{1,2,2}$ on $\mathcal{O}_T$. On the other hand, a direct calculation yields that, for $(t,z,x) \in \mathcal{O}_T$,
\[
\begin{aligned}
&y^*(t,z,x) = v_x(t,z,x), \qquad \hat{v}_t\big(t,z,y^*(t,z,x)\big) = v_t(t,z,x), \qquad \hat{v}_z\big(t,z,y^*(t,z,x)\big) = v_z(t,z,x),\\
&\hat{v}_{yy}\big(t,z,y^*(t,z,x)\big) = -\frac{1}{v_{xx}(t,z,x)}, \qquad \hat{v}_{zy}\big(t,z,y^*(t,z,x)\big) = \frac{v_{xz}(t,z,x)}{v_{xx}(t,z,x)},\\
&\hat{v}_{zz}\big(t,z,y^*(t,z,x)\big) = \Big( v_{zz} - \frac{v_{xz}^2}{v_{xx}} \Big)(t,z,x).
\end{aligned} \tag{5.8}
\]
Recall that $v_{xx}(t,z,x) < 0$ for $(t,z,x) \in \mathcal{O}_T$. Plugging (5.8) into (4.12), we deduce that $v$ defined in (5.1) solves the equation (5.5). We next study the behavior of $v$ on $\mathcal{O}_T^c \cap ([0,T)\times\mathbb{R}\times\mathbb{R}_+)$. To this end, for $(t,z) \in [0,T)\times\mathbb{R}$, let $(t_n,z_n,x_n) \in \mathcal{O}_T$ for $n \ge 1$ be such that $(t_n,z_n,x_n) \to (t,z,\xi(t,z))$. We then claim that
\[
\lim_{n\to+\infty} y^*(t_n,z_n,x_n) = 0. \tag{5.9}
\]
Let us verify (5.9) by contradiction. Suppose that, up to a subsequence, there exists a constant $\delta > 0$ such that $\lim_{n\to+\infty} y^*(t_n,z_n,x_n) = \delta$. Then, by (5.6), it yields that
\[
\hat{v}_y(t,z,\delta) = \lim_{n\to+\infty}\hat{v}_y\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) = -\lim_{n\to+\infty} x_n = -\xi(t,z),
\]
which contradicts the definition (5.3) of $\xi(t,z)$, since the strict monotonicity of $y \mapsto \hat{v}_y$ forces $\hat{v}_y(t,z,\delta) > -\xi(t,z)$. Moreover, from (5.9), it follows that
\[
\lim_{n\to+\infty} v(t_n,z_n,x_n) = \lim_{n\to+\infty}\big\{\hat{v}\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) + x_n\, y^*(t_n,z_n,x_n)\big\} = 0, \tag{5.10}
\]
and it holds that
\[
\begin{aligned}
&\lim_{n\to+\infty} v_t(t_n,z_n,x_n) = \lim_{n\to+\infty}\hat{v}_t\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) = 0,\\
&\lim_{n\to+\infty} v_z(t_n,z_n,x_n) = \lim_{n\to+\infty}\hat{v}_z\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) = 0,\\
&\lim_{n\to+\infty} v_x(t_n,z_n,x_n) = \lim_{n\to+\infty} y^*(t_n,z_n,x_n) = 0,\\
&\lim_{n\to+\infty} v_{xx}(t_n,z_n,x_n) = -\lim_{n\to+\infty}\hat{v}_{yy}\big(t_n,z_n,y^*(t_n,z_n,x_n)\big)^{-1} = 0,\\
&\lim_{n\to+\infty} v_{xz}(t_n,z_n,x_n) = -\lim_{n\to+\infty}\frac{\hat{v}_{yz}}{\hat{v}_{yy}}\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) = 0,\\
&\lim_{n\to+\infty} v_{zz}(t_n,z_n,x_n) = \lim_{n\to+\infty}\Big(\hat{v}_{zz} - \frac{\hat{v}_{yz}^2}{\hat{v}_{yy}}\Big)\big(t_n,z_n,y^*(t_n,z_n,x_n)\big) = 0.
\end{aligned} \tag{5.11}
\]
Let us define $v(t,z,x) = 0$ for $(t,z,x) \in \mathcal{O}_T^c \cap ([0,T)\times\mathbb{R}\times\mathbb{R}_+)$. By (5.10) and (5.11), the function $v$ given by (5.1) and its partial derivatives up to order two are continuous on $\partial\mathcal{O}_T \cap ([0,T)\times\mathbb{R}\times\mathbb{R}_+)$. This implies that $v$ is $C^{1,2,2}$ on $[0,T)\times\mathbb{R}\times\mathbb{R}_+$. Moreover, using (5.8) and (4.12) on $[0,T]\times\mathbb{R}\times\mathbb{R}_+$, we have that $v$ given by (5.1) solves the following HJB equation:
\[
\begin{cases}
v_t + \sup_{\theta\in\mathbb{R}^d}\Big[ v_x\theta^\top\mu + \frac{1}{2}v_{xx}\theta^\top\sigma\sigma^\top\theta + v_{xz}\sigma_Z(z)\theta^\top\sigma\gamma \Big] + v_z\mu_Z(z) + \frac{\sigma_Z(z)^2}{2}v_{zz} - f(t,z)v_x = \rho v, & \forall (t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+;\\[2pt]
v_x(t,z,0) = 1, & \forall (t,z) \in [0,T)\times\mathbb{R}.
\end{cases} \tag{5.12}
\]
On the other hand, note that $v_x \ge 0$ for $(t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+$, and $v(t,z,x) \to 0$ as $x \to +\infty$. By (3.5), it is obvious that $v(t,z,0) \le 0$. Therefore $v(t,z,x) \le 0$ for $(t,z,x) \in [0,T)\times\mathbb{R}\times[0,\infty)$, and it follows from (4.18) that, with the constant $C > 0$ from (4.18), for $(t,z,x) \in [0,T)\times\mathbb{R}\times[0,\infty)$,
\[
|v(t,z,x)| = -v(t,z,x) = \sup_{y\in(0,1]}\{-\hat{v}(t,z,y) - xy\} \le \sup_{y\in(0,1]}\{-\hat{v}(t,z,y)\} = \sup_{y\in(0,1]}\big\{-e^{\rho t}\, h(t,z,-\ln y)\big\} \le e^{\rho t}\, C(T-t)(1+|z|), \tag{5.13}
\]
where the function $h(t,z,u)$ is given by (4.13).

We next prove the continuity of $v$ on the boundary of $[0,T)\times\mathbb{R}\times[0,+\infty)$. Note that $v(t,z,0) = \hat{v}(t,z,1)$ for $(t,z) \in [0,T)\times\mathbb{R}$, and let $(t_n,z_n,x_n) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+$ satisfy $(t_n,z_n,x_n) \to (t,z,0)$ as $n\to\infty$. By mimicking the proof of (5.9), one can also attain that
\[
\lim_{n\to\infty} y^*(t_n,z_n,x_n) = 1. \tag{5.14}
\]
An application of L'Hôpital's rule gives that
\[
\lim_{x\downarrow 0}\frac{1}{x}\big( v(t,z,x) - v(t,z,0) \big) = \lim_{x\downarrow 0}\frac{1}{x}\big( \hat{v}(t,z,y^*(t,z,x)) + x\,y^*(t,z,x) - \hat{v}(t,z,1) \big) = \lim_{x\downarrow 0} y^*(t,z,x) + \lim_{x\downarrow 0}\frac{\hat{v}(t,z,y^*(t,z,x)) - \hat{v}(t,z,1)}{y^*(t,z,x) - 1}\times\lim_{x\downarrow 0}\frac{y^*(t,z,x) - 1}{x} = 1 + \hat{v}_y(t,z,1)\times\Big(\lim_{x\downarrow 0} y^*_x(t,z,x)\Big) = 1.
\]
Moreover, noting that $\lim_{n\to\infty} v_x(t_n,z_n,x_n) = \lim_{n\to\infty} y^*(t_n,z_n,x_n) = 1$, it holds that
\[
\lim_{n\to\infty} v_x(t_n,z_n,x_n) = v_x(t,z,0). \tag{5.15}
\]
Similarly, we also have that
\[
\lim_{x\downarrow 0}\frac{1}{x}\big( v_x(t,z,x) - v_x(t,z,0) \big) = \lim_{x\downarrow 0}\frac{1}{x}\big( y^*(t,z,x) - 1 \big) = \lim_{x\downarrow 0} y^*_x(t,z,x) = -\hat{v}_{yy}(t,z,1)^{-1},
\]
and
\[
\lim_{n\to\infty} v_{xx}(t_n,z_n,x_n) = -\lim_{n\to+\infty}\hat{v}_{yy}\big(t_n,z_n,y^*(t_n,z_n,x_n)\big)^{-1} = -\hat{v}_{yy}(t,z,1)^{-1}.
\]
Therefore
\[
\lim_{n\to+\infty} v_{xx}(t_n,z_n,x_n) = v_{xx}(t,z,0). \tag{5.16}
\]
In a similar fashion, the limits (5.15) and (5.16) also hold for $v_z$, $v_{xz}$ and $v_{zz}$. Hence, we have that $v \in C^{1,2,2}([0,T)\times\mathbb{R}\times[0,\infty))$. On the other hand, for $(z,x) \in \mathbb{R}\times[0,+\infty)$, we define $v(T,z,x) = 0$ and consider $(t_n,z_n,x_n) \in [0,T)\times\mathbb{R}\times[0,+\infty)$ satisfying $(t_n,z_n,x_n) \to (T,z,x)$ as $n\to+\infty$. In view of (5.13), we have $\lim_{n\to+\infty} v(t_n,z_n,x_n) = 0$, which yields that $v \in C(\mathcal{D}_T)$. Combining this with Eq. (5.12), we deduce that $v \in C^{1,2,2}([0,T)\times\mathbb{R}\times[0,+\infty)) \cap C(\mathcal{D}_T)$, and $v$ satisfies
\[
\begin{cases}
v_t + \sup_{\theta\in\mathbb{R}^d}\Big[ v_x\theta^\top\mu + \frac{1}{2}v_{xx}\theta^\top\sigma\sigma^\top\theta + v_{xz}\sigma_Z(z)\theta^\top\sigma\gamma \Big] + v_z\mu_Z(z) + \frac{\sigma_Z(z)^2}{2}v_{zz} - f(t,z)v_x = \rho v, & \forall (t,z,x) \in [0,T)\times\mathbb{R}\times\mathbb{R}_+;\\[2pt]
v_x(t,z,0) = 1, & \forall (t,z) \in [0,T)\times\mathbb{R};\\[2pt]
v(T,z,x) = 0, & \forall (z,x) \in \mathbb{R}\times[0,+\infty),
\end{cases} \tag{5.17}
\]
together with the estimate (5.13) for $(t,z,x) \in [0,T]\times\mathbb{R}\times[0,+\infty)$.

We next move on to the proof of (ii). We first show the continuity of $\theta^*(t,z,x)$ on $(t,z,x) \in \mathcal{D}_T$, which verifies the admissibility of $\theta_t^* = \theta^*(t, Z_t, X_t)$ for $t \in [0,T]$ (i.e., $\theta^* \in \mathcal{U}_t$). Let us define $y^*(t,z,$
0) = 1.Thanks to (5.14), y ∗ is continuous at ( t, z, t, z, x ) ∈ D T , we rewrite (5.4) by θ ∗ ( t, z, x )= − µ ( σσ ⊤ ) − y ∗ ( t, z, x )ˆ v yy ( t, z, y ∗ ( t, z, x ))+( σσ ⊤ ) − σ Z ( z ) σγ ˆ v yz ( t, z, y ∗ ( t, z, x )) . (5.18)It is easy to see that θ ∗ ( t, z, x ) is continuous for ( t, z, x ) ∈ O T ∪ { ( t, z, t, z ) ∈ [0 , T ) × R } . Therefore,it remains to show that lim n → + ∞ θ ∗ ( t n , z n , x n ) = θ ∗ ( t, z, x ) , (5.19)where x = ξ ( t, z ), and ( t n , z n , x n ) ∈ O T , lim n → + ∞ ( t n , z n , x n ) = ( t, z, x ). By virtue of (5.18), we have that θ ∗ ( t n , z n , x n ) = − µ ( σσ ⊤ ) − y ∗ ( t n , z n , x n )ˆ v yy ( t, z, y ∗ ( t n , z n , x n ))+ ( σσ ⊤ ) − σ Z ( z ) σγ ˆ v yz ( t, z, y ∗ ( t n , z n , x n )) . (5.20)Note that x ∗ ( t n , z n , x n ) → n → + ∞ . By sending n to + ∞ on both sides of (5.20), we deduce thatlim n →∞ θ ∗ ( t n , z n , x n ) = − µ ( σσ ⊤ ) − lim y ↓ x ˆ v yy ( t, z, y ) + ( σσ ⊤ ) − σ Z ( z ) σγ lim y ↓ ˆ v yz ( t, z, y ) = θ ∗ ( t, z, x ) . Following the same argument, we can establish the convergence (5.19) for any ( t, z, x ) ∈ ∂ O T ∩ ([0 , T ] × R × [0 , + ∞ )) and ( t n , z n , x n ) ∈ O T . Hence θ ∗ ( t, y, z ) is continuous for ( t, z, x ) ∈ [0 , T ] × R × [0 , + ∞ ).Moreover, one can see from (4.23), (5.4) and (5.18) that there exists constant C > | θ ∗ ( t, z, x ) | ≤ C (1 + | z | ) , ∀ ( t, z, x ) ∈ D T . (5.21)With the continuity of θ ∗ on D T and the estimate (5.21), we can apply Theorem 2 .
2, Theorem 2 . . t ∈ [0 , T ]: ˜ X t = − Z t f ( s, Z s ) ds + Z t θ ∗ ( s, Z s , Φ( ˜ X ) s ) ⊤ µds + Z t θ ∗ ( s, Z s , Φ( ˜ X ) s ) ⊤ σdW s ,dZ t = µ Z ( Z t ) dt + σ Z ( Z t ) dW γt , (5.22)where the mapping Φ : C ([0 , T ]; R ) → C ([0 , T ]; R ) satisfies that, for all ϕ ∈ C ([0 , T ]; R ),(i) Φ( ϕ ) t = ϕ t + η t for t ∈ [0 , T ], and Φ( ϕ ) = ϕ .(ii) Φ( ϕ ) t ≥ t ∈ [0 , T ].(iii) t → η t is continuous, nonnegative and nondecreasing, and η t = x ∨ R t { Φ( ϕ ) s =0 } dη s for t ∈ [0 , T ].Define X ∗ := Φ( ˜ X ) and L ∗ := Φ( ˜ X ) − ˜ X . Then ( X ∗ , L ∗ , W ) solves the following reflected SDE: X ∗ t = − Z t f ( s, Z s ) ds + Z t θ ∗ ( s, Z s , X ∗ s ) ⊤ µds + Z t θ ∗ ( s, Z s , X ∗ s ) ⊤ σdW s + L ∗ t , t ∈ [0 , T ] , where L ∗ satisfies (iii). Therefore, we have shown that θ ∗ ∈ U t is admissible.Let us fix any ( t, z, x ) ∈ [0 , T ) × R × [0 , ∞ ), and θ ∈ U t . For any n > T − , we define that τ tn := (cid:18) T − n (cid:19) ∧ inf { s ≥ t : | Z s | + | X s | > n } . (5.23)14t holds that τ tn ↑ T as n → ∞ , P -a.s.. By applying Itˆo’s formula, we arrive at E t,z,x " − Z τ tn t e − ρs dL Xs + e − ρτ tn v ( τ tn , Z τ tn , X τ tn ) − e − ρt v ( t, Z t , X t ) (5.24)= E t,z,x "Z τ tn t e − ρs (cid:0) v t + L θ s v (cid:1) ( s, Z s , X s ) ds + E t,z,x "Z τ tn t e − ρs ( v z ( s, Z s , X s ) − dL Xs , where, for θ ∈ R n , the operator L θt acted on C ( R × [0 , ∞ )) is defined by L θt ϕ ( z, x ) := ϕ x ( z, x ) θ ⊤ µ + ϕ xx ( z, x )2 θ ⊤ σσ ⊤ θ + ϕ xz ( z, x ) σ Z ( z ) θ ⊤ σγ + ϕ z ( z, x ) µ Z ( z )+ ϕ zz ( z, x ) σ Z ( z )2 − f ( t, z ) ϕ x ( z, x ) − ρϕ ( z, x ) , for all ϕ ∈ C ( R × [0 , ∞ )). The boundary condition in (5.17), together with (3.4), yields that E t,z,x "Z τ tn t e − ρs ( v x ( s, Z s , X s ) − dL Xs = E t,z,x "Z τ tn t e − ρs ( v x ( s, Z s , X s ) − { X s =0 } dL Xs = 0 . On the other hand, the HJB equation (5.17) satisfied by v also gives that, for all t ∈ [0 , T ], P -a.s. 
(cid:0) v t + L θt v (cid:1) ( t, Z t , X t ) ≤ , (5.25)where the equality holds in (5.25) if θ = θ ∗ . Hence, we arrive from (5.25) at E t,z,x " − Z τ tn t e − ρs dL Xs ≤ v ( t, z, x ) − E t,z,x h e − ρτ tn v ( τ tn , Z τ tn , X τ tn ) i . (5.26)By applying the estimate (5.13), we have | v ( τ tn , Z τ tn , X τ tn ) | ≤ e ρτ tn C ( T − τ tn ) { s ∈ [ t,T ] | Z s | p } , P -a.s.By sending n → + ∞ and noting that τ tn ↑ T , P -a.s., the dominated convergence theorem yields thatlim n → + ∞ E t,z,x (cid:2) e − ρτ n v ( τ tn , Z τ tn , X τ tn ) (cid:3) = 0 . (5.27)Therefore, as n tends to + ∞ in (5.26), we have that, for all θ ∈ U t ,˜ J ( θ ; t, z, x ) = E t,z,x " − Z Tt e − ρs dL Xs ≤ v ( t, z, x ) , ( t, z, x ) ∈ D T , (5.28)where the equality in (5.28) holds for θ = θ ∗ . This verifies that θ ∗ ∈ U t is an optimal strategy. This section collects the proofs of some important results in the main body of the paper.
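The reflection mapping $\Phi$ used in the reflected SDE (5.22) can be illustrated with a discrete-time sketch: on a discretized path, the regulator is $\eta_t=\max(0,\max_{s\le t}(-\varphi_s))$, so that $\Phi(\varphi)=\varphi+\eta\ge 0$ and $\eta$ increases only when $\Phi(\varphi)=0$. The code below is a minimal illustration only; the coefficients `theta_star`, `mu`, `sigma` and `f` are hypothetical constants standing in for the paper's feedback strategy and cost, not the actual optimal ones.

```python
import numpy as np

def skorokhod_map(phi):
    """Discrete Skorokhod map: return (reflected path, regulator eta)
    for a path phi, where eta_t = max(0, max_{s<=t}(-phi_s))."""
    eta = np.maximum.accumulate(np.maximum(-phi, 0.0))
    return phi + eta, eta

def simulate_reflected_state(T=1.0, n=1000, theta_star=0.5, mu=0.05,
                             sigma=0.2, f=0.1, seed=0):
    """Euler scheme for the unreflected dynamics
    d X~ = (theta* mu - f) dt + theta* sigma dW   (toy constant coefficients),
    followed by reflection at zero via the Skorokhod map."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    increments = (theta_star * mu - f) * dt + theta_star * sigma * dW
    x_tilde = np.concatenate([[0.0], np.cumsum(increments)])
    x_star, eta = skorokhod_map(x_tilde)
    return x_star, eta
```

By construction, the simulated path stays nonnegative and the regulator is nondecreasing and flat off the boundary, mirroring properties (i)-(iii) of $\Phi$.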
Proof of Lemma 2.2.
It is clear that, for $(a,v,z)\in[0,\infty)\times\mathbb{R}\times\mathbb{R}$,
\[
u(a,v,z)=\inf_{\theta}\inf_{C}\mathbb{E}\Big[C_0+\int_0^T e^{-\rho t}\,dC_t\Big],\qquad
C_0+\int_0^T e^{-\rho t}\,dC_t=e^{-\rho T}C_T+\rho\int_0^T e^{-\rho t}C_t\,dt.
\]
For each fixed $\theta=(\theta_t)_{t\in[0,T]}$, we need to choose the optimal singular control $C=(C_t)_{t\in[0,T]}$ to minimize $\inf_C F(C)$, where
\[
F(C):=\mathbb{E}\Big[e^{-\rho T}C_T+\rho\int_0^T e^{-\rho t}C_t\,dt\Big],
\]
subject to $C_t\ge A_t-V^\theta_t$ at each $t\in[0,T]$. Note that the cost functional $F(C)$ is strictly increasing in $C$. That is, if $C^1\le C^2$ and $C^1\neq C^2$, then we have $F(C^1)<F(C^2)$. Therefore, the optimal choice of the control $C$ is the minimal nonnegative and nondecreasing process $C_t$ such that $C_t\ge A_t-V^\theta_t$ for $t\in[0,T]$. We claim the minimal process is the nondecreasing envelope $C^*_t:=0\vee\sup_{s\le t}(A_s-V^\theta_s)$. To wit, $C^*_t$ is nonnegative, nondecreasing and satisfies the dynamic floor constraint. Let $\widetilde C$ be another nonnegative and nondecreasing process satisfying $\widetilde C_t\ge A_t-V^\theta_t$, $t\in[0,T]$. Suppose that $\widetilde C\le C^*$ and $\widetilde C\neq C^*$. That is, there exist $t\in[0,T]$ and a set $O$ with $\mathbb{P}(O)>0$ such that $\widetilde C_t(\omega)<C^*_t(\omega)$ for $\omega\in O$. By definition, we have
\[
A_t(\omega)-V^\theta_t(\omega)\le\widetilde C_t(\omega)<C^*_t(\omega)=\sup_{s\le t}\big(A_s(\omega)-V^\theta_s(\omega)\big).
\]
For each fixed $\omega\in O$, let $t^*<t$ be the time such that $A_{t^*}(\omega)-V^\theta_{t^*}(\omega)=\sup_{s\le t}(A_s(\omega)-V^\theta_s(\omega))$. It follows that $\widetilde C_t(\omega)<A_{t^*}(\omega)-V^\theta_{t^*}(\omega)\le\widetilde C_{t^*}(\omega)$, which contradicts the fact that the process $\widetilde C$ is nondecreasing. Therefore, the original problem can be written as
\[
u(a,v,z)=C^*_0+\inf_\theta\mathbb{E}\Big[\int_0^T e^{-\rho t}\,dC^*_t\Big]=(a-v)^++\inf_\theta\mathbb{E}\Big[\int_0^T e^{-\rho t}\,d\Big(0\vee\sup_{s\le t}\big(A_s-V^\theta_s\big)\Big)\Big],
\]
which completes the proof.

Proof of Proposition 4.1.
We first derive the representation of the partial derivative $h_u$ of the function $h$ w.r.t. the variable $u$. Let $(t,z)\in[0,T]\times\mathbb{R}$ be fixed. For any $u_1>u_2\ge 0$, it follows from (4.13) that
\[
\frac{h(t,z,u_1)-h(t,z,u_2)}{u_1-u_2}=-\int_t^T\mathbb{E}\Big[e^{-\rho s}f(s,M^{t,z}_s)\,\frac{e^{-R^{t,u_1}_s}-e^{-R^{t,u_2}_s}}{u_1-u_2}\Big]\,ds.
\]
A direct calculation yields that, for $s\in[t,T]$,
\[
\lim_{u_1\downarrow u_2}\frac{e^{-R^{t,u_1}_s-\rho s}-e^{-R^{t,u_2}_s-\rho s}}{u_1-u_2}=
\begin{cases}
-e^{-R^{t,u_2}_s-\rho s}, & \max_{r\in[t,s]}\big[-\sqrt{\alpha}B_r-(\alpha-\rho)r\big]\le u_2,\\[2pt]
0, & \max_{r\in[t,s]}\big[-\sqrt{\alpha}B_r-(\alpha-\rho)r\big]>u_2.
\end{cases}
\]
As $\sup_{(s,u_1,u_2)\in[t,T]\times[0,\infty)^2}\big|\frac{e^{-R^{t,u_1}_s-\rho s}-e^{-R^{t,u_2}_s-\rho s}}{u_1-u_2}\big|\le 1$, the dominated convergence theorem yields that
\[
\lim_{u_1\downarrow u_2}\frac{h(t,z,u_1)-h(t,z,u_2)}{u_1-u_2}
=\mathbb{E}\Big[\int_t^T e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_2}_s}\mathbf{1}_{\{\max_{r\in[t,s]}[-\sqrt{\alpha}B_r-(\alpha-\rho)r]\le u_2\}}\,ds\Big]
=\mathbb{E}\Big[\int_t^{\tau^t_{u_2}\wedge T}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_2}_s}\,ds\Big],
\tag{6.1}
\]
where $\tau^t_u:=\inf\{s\ge t;\ -\sqrt{\alpha}B_s-(\alpha-\rho)s=u\}$. On the other hand, for the case $u_1>u_2\ge 0$, similar to the computation of (6.1), we have that $\lim_{u_2\uparrow u_1}\frac{h(t,z,u_1)-h(t,z,u_2)}{u_1-u_2}=\lim_{u\downarrow u_1}\frac{h(t,z,u)-h(t,z,u_1)}{u-u_1}$. Therefore the representation (4.19) holds.

We next derive the representation of $h_{uu}$. In fact, let $(t,z)\in[0,T]\times\mathbb{R}$ be fixed. For any $u_0,u_n\ge 0$ with $u_n\to u_0$ as $n\to\infty$, we have that, for $n\ge 1$,
\[
\begin{aligned}
\Delta_n&=\frac{h_u(t,z,u_n)-h_u(t,z,u_0)}{u_n-u_0}\\
&=\mathbb{E}\Big[\frac{1}{u_n-u_0}\int_{\tau_0}^{\tau_n}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_0}_s}\,ds\Big]
+\mathbb{E}\Big[\frac{1}{u_n-u_0}\int_t^{\tau_0}e^{-\rho s}f(s,M^{t,z}_s)\big(e^{-R^{t,u_n}_s}-e^{-R^{t,u_0}_s}\big)\,ds\Big]\\
&\quad+\mathbb{E}\Big[\frac{1}{u_n-u_0}\int_{\tau_0}^{\tau_n}e^{-\rho s}f(s,M^{t,z}_s)\big(e^{-R^{t,u_n}_s}-e^{-R^{t,u_0}_s}\big)\,ds\Big]
=:\Delta^{(1)}_n+\Delta^{(2)}_n+\Delta^{(3)}_n,
\end{aligned}
\tag{6.2}
\]
where $\tau_0:=\tau^t_{u_0}\wedge T$ and $\tau_n:=\tau^t_{u_n}\wedge T$. In order to deal with $\Delta^{(1)}_n$, we first focus on the case where $u_n\downarrow u_0$ as $n\to\infty$. To this end, we introduce
\[
\widetilde\Delta^{(1)}_n:=\mathbb{E}\Big[\frac{\tau_n-\tau_0}{u_n-u_0}\,e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\Big].
\]
For any $m>0$, it follows from the assumption $(\mathbf{A}_f)$ that
\[
|\Delta^{(1)}_n-\widetilde\Delta^{(1)}_n|\le\mathbb{E}\Big[\frac{\tau_n-\tau_0}{u_n-u_0}\,\xi_n\Big]\le m\,\mathbb{E}[\xi_n]+C\Big(1+\mathbb{E}\Big[\max_{s\in[t,T]}|M^{t,z}_s|\Big]\Big)\mathbb{P}\Big(\frac{\tau_n-\tau_0}{u_n-u_0}>m\Big),
\tag{6.3}
\]
where, for $n\ge 1$,
\[
\xi_n:=\max_{s\in[\tau_0,\tau_n]}\Big|e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_0}_s}-e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\Big|.
\]
Note that $\xi_n\le C(1+\sup_{s\in[t,T]}|M^{t,z}_s|)$ for all $n\ge 1$ by the assumption $(\mathbf{A}_f)$. Moreover, $\tau_n\downarrow\tau_0$, $\mathbb{P}$-a.s. as $n\to+\infty$. Therefore, we have that $\xi_n\downarrow 0$ as $n\to\infty$, $\mathbb{P}$-a.s., and hence $\mathbb{E}[\xi_n]\to 0$ as $n\to\infty$. On the other hand, by setting $\tilde\mu:=\alpha-\rho$, we have that
\[
\mathbb{P}\Big(\frac{\tau_n-\tau_0}{u_n-u_0}>m\Big)\le\int_{m(u_n-u_0)}^{+\infty}\frac{u_n-u_0}{\sqrt{4\alpha\pi t^3}}\,e^{-\frac{(u_n-u_0-\tilde\mu t)^2}{4\alpha t}}\,dt\le\sqrt{u_n-u_0}\int_m^{+\infty}\frac{ds}{\sqrt{4\alpha\pi s^3}}.
\]
Letting $n$ go to $+\infty$ in (6.3), we arrive at
\[
\lim_{n\to+\infty}|\Delta^{(1)}_n-\widetilde\Delta^{(1)}_n|=0.
\tag{6.4}
\]
Moreover, using the strong Markov property of Brownian motion with drift, it follows that
\[
\mathbb{E}\Big[\frac{\tau_n-\tau_0}{u_n-u_0}\,\Big|\,\mathcal{F}_{\tau_0}\Big]=\int_0^{T-\tau_0}\frac{1}{\sqrt{4\alpha\pi s}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds+(T-\tau_0)\int_{T-\tau_0}^{+\infty}\frac{1}{\sqrt{4\alpha\pi s^3}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds.
\]
Therefore, for $n\ge 1$,
\[
\begin{aligned}
\widetilde\Delta^{(1)}_n&=\mathbb{E}\Big[e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\int_0^{T-\tau_0}\frac{1}{\sqrt{4\alpha\pi s}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds\Big]\\
&\quad+\mathbb{E}\Big[(T-\tau_0)e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\int_{T-\tau_0}^{+\infty}\frac{1}{\sqrt{4\alpha\pi s^3}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds\Big].
\end{aligned}
\]
This yields that
\[
\lim_{n\to+\infty}\Delta^{(1)}_n=\lim_{n\to+\infty}\widetilde\Delta^{(1)}_n=\mathbb{E}\big[e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\Gamma(\tau_0)\big],
\]
where, for $t\in[0,T]$,
\[
\Gamma(t):=\int_0^{T-t}\frac{1}{\sqrt{4\alpha\pi s}}\,e^{-\frac{\tilde\mu^2}{4\alpha}s}\,ds+(T-t)\int_{T-t}^{+\infty}\frac{1}{\sqrt{4\alpha\pi s^3}}\,e^{-\frac{\tilde\mu^2}{4\alpha}s}\,ds.
\tag{6.5}
\]
For the case where $u_0>u_n\uparrow u_0$ as $n\to\infty$, we can follow the similar argument above to get that
\[
\begin{aligned}
\lim_{n\to+\infty}\Delta^{(1)}_n&=\lim_{n\to+\infty}\mathbb{E}\Big[e^{-\rho\tau_n}f(\tau_n,M^{t,z}_{\tau_n})e^{-R^{t,u_n}_{\tau_n}}\int_0^{T-\tau_n}\frac{1}{\sqrt{4\alpha\pi s}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds\Big]\\
&\quad+\lim_{n\to+\infty}\mathbb{E}\Big[(T-\tau_n)e^{-\rho\tau_n}f(\tau_n,M^{t,z}_{\tau_n})e^{-R^{t,u_n}_{\tau_n}}\int_{T-\tau_n}^{+\infty}\frac{1}{\sqrt{4\alpha\pi s^3}}\,e^{-\frac{(u_n-u_0-\tilde\mu s)^2}{4\alpha s}}\,ds\Big]\\
&=\mathbb{E}\big[e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})e^{-R^{t,u_0}_{\tau_0}}\Gamma(\tau_0)\big].
\end{aligned}
\]
Similar to the derivation of (4.19), we also have that
\[
\lim_{n\to+\infty}\Delta^{(2)}_n=\lim_{n\to+\infty}\mathbb{E}\Big[\frac{1}{u_n-u_0}\int_t^{\tau_0}e^{-\rho s}f(s,M^{t,z}_s)\big(e^{-R^{t,u_n}_s}-e^{-R^{t,u_0}_s}\big)\,ds\Big]=\mathbb{E}\Big[\int_t^{\tau_0}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_0}_s}\,ds\Big].
\]
Finally, similar to (6.3) and (6.4), we can show by the assumption $(\mathbf{A}_f)$ that
\[
|\Delta^{(3)}_n|\le\mathbb{E}\Big[\frac{1}{u_n-u_0}\int_{\tau_0}^{\tau_n}e^{-\rho s}f(s,M^{t,z}_s)\big|e^{-R^{t,u_n}_s}-e^{-R^{t,u_0}_s}\big|\,ds\Big]\le m\,\mathbb{E}[\tilde\xi_n]+C\Big(1+\mathbb{E}\Big[\max_{s\in[t,T]}|M^{t,z}_s|\Big]\Big)\mathbb{P}\Big(\frac{\tau_n-\tau_0}{u_n-u_0}>m\Big),
\]
where, for $n\ge 1$,
\[
\tilde\xi_n:=\max_{s\in[\tau_0,\tau_n]}\Big|e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_n}_s}-e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u_0}_s}\Big|.
\]
By the assumption $(\mathbf{A}_f)$, we have $\mathbb{E}[\tilde\xi_n]\to 0$ as $n\to\infty$. Hence $\lim_{n\to+\infty}|\Delta^{(3)}_n|=0$. Putting all the pieces together, we can derive from the decomposition (6.2) and $R^{t,u}_{\tau_0}=0$ that
\[
h_{uu}(t,z,u)=\mathbb{E}\big[e^{-\rho\tau_0}f(\tau_0,M^{t,z}_{\tau_0})\Gamma(\tau_0)\big]+\mathbb{E}\Big[\int_t^{\tau_0}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u}_s}\,ds\Big],
\tag{6.6}
\]
where we define $\tau_0:=\tau^t_u\wedge T$, and $\Gamma(t)$ for $t\in[0,T]$ is given by (6.5).

We next derive the representations of $h_z$, $h_{zz}$ and $h_{zu}$. Let $(t,u)\in[0,T]\times[0,\infty)$. In view of the assumption $(\mathbf{A}_Z)$, Theorem 3.3.2 in Kunita (2019) yields that, for $s\in[t,T]$, the family $(M^{t,z}_s)_{z\in\mathbb{R}}$ admits a modification which is continuously differentiable w.r.t. $z$. Moreover, $\partial_z M^{t,z}_s$ is continuous in $z$ and satisfies the following SDE for $s\in[t,T]$:
\[
\partial_z M^{t,z}_s=1+\int_t^s\mu_Z'\big(M^{t,z}_r\big)\partial_z M^{t,z}_r\,dr+\varrho\int_t^s\sigma_Z'(M^{t,z}_r)\partial_z M^{t,z}_r\,dB^1_r+\sqrt{1-\varrho^2}\int_t^s\sigma_Z'(M^{t,z}_r)\partial_z M^{t,z}_r\,dB^2_r,
\tag{6.7}
\]
and for any $p\ge 2$, the following moment estimate holds:
\[
\sup_{z\in\mathbb{R}}\mathbb{E}\Big[\max_{s\in[t,T]}\big|\partial_z M^{t,z}_s\big|^p\Big]<+\infty.
\tag{6.8}
\]
In terms of (4.13), for distinct $z,\hat z\in\mathbb{R}$ and some constant $C>0$, we have that
\[
\frac{h(t,z,u)-h(t,\hat z,u)}{z-\hat z}=-\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}\,\frac{f(s,M^{t,z}_s)-f(s,M^{t,\hat z}_s)}{z-\hat z}\,ds\Big].
\tag{6.9}
\]
By the assumption $(\mathbf{A}_f)$, for $s\in[t,T]$, the next results hold $\mathbb{P}$-a.s.:
\[
\frac{f(s,M^{t,z}_s)-f(s,M^{t,\hat z}_s)}{z-\hat z}\xrightarrow{\hat z\to z}f'(s,M^{t,z}_s)\partial_z M^{t,z}_s,\qquad
\Big|\frac{f(s,M^{t,z}_s)-f(s,M^{t,\hat z}_s)}{z-\hat z}\Big|\le C\Big|\frac{M^{t,z}_s-M^{t,\hat z}_s}{z-\hat z}\Big|.
\]
We have from (6.8) that, for any $p\ge 2$, $\sup_{\hat z\neq z}\mathbb{E}\big[\big|\frac{M^{t,z}_s-M^{t,\hat z}_s}{z-\hat z}\big|^p\big]<+\infty$. This implies that $\big(\frac{M^{t,z}_s-M^{t,\hat z}_s}{z-\hat z}\big)_{\hat z\neq z}$ is uniformly integrable. Therefore, in view of (6.9), we arrive at
\[
h_z(t,z,u)=-\lim_{\hat z\to z}\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}\,\frac{f(s,M^{t,z}_s)-f(s,M^{t,\hat z}_s)}{z-\hat z}\,ds\Big]=-\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}f'(s,M^{t,z}_s)\partial_z M^{t,z}_s\,ds\Big],
\tag{6.10}
\]
where $f'(t,z)$ denotes the partial derivative of $f$ w.r.t. $z$. Similarly, we can obtain that
\[
h_{zu}(t,z,u)=\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}f'(s,M^{t,z}_s)\partial_z M^{t,z}_s\mathbf{1}_{\{\max_{r\in[t,s]}[-\sqrt{\alpha}B_r-(\alpha-\rho)r]\le u\}}\,ds\Big].
\tag{6.11}
\]
We next derive the expression of $h_{zz}$. To this end, we need the dynamics of $\partial_{zz}M^{t,z}_s$ for $s\in[t,T]$. Following a similar argument of Theorem 3.4.2 of Kunita (2019), we can deduce that
\[
\begin{aligned}
\partial_{zz}M^{t,z}_s&=\int_t^s\big\{\mu_Z''(M^{t,z}_r)|\partial_z M^{t,z}_r|^2+\mu_Z'(M^{t,z}_r)\partial_{zz}M^{t,z}_r\big\}\,dr+\varrho\int_t^s\big\{\sigma_Z''(M^{t,z}_r)|\partial_z M^{t,z}_r|^2+\sigma_Z'(M^{t,z}_r)\partial_{zz}M^{t,z}_r\big\}\,dB^1_r\\
&\quad+\sqrt{1-\varrho^2}\int_t^s\big\{\sigma_Z''(M^{t,z}_r)|\partial_z M^{t,z}_r|^2+\sigma_Z'(M^{t,z}_r)\partial_{zz}M^{t,z}_r\big\}\,dB^2_r,
\end{aligned}
\]
and for all $p\ge 1$, it holds that $\sup_{z\in\mathbb{R}}\mathbb{E}[\max_{s\in[t,T]}|\partial_{zz}M^{t,z}_s|^p]<+\infty$. The chain rule with the assumption $(\mathbf{A}_f)$ yields that, $\mathbb{P}$-a.s.,
\[
\frac{f'(s,M^{t,z}_s)\partial_z M^{t,z}_s-f'(s,M^{t,\hat z}_s)\partial_z M^{t,\hat z}_s}{z-\hat z}\xrightarrow{\hat z\to z}f''(s,M^{t,z}_s)\big|\partial_z M^{t,z}_s\big|^2+f'(s,M^{t,z}_s)\partial_{zz}M^{t,z}_s,
\]
and there exists a constant $C>0$ such that, for $\hat z\neq z$,
\[
\Big|\frac{f'(s,M^{t,z}_s)\partial_z M^{t,z}_s-f'(s,M^{t,\hat z}_s)\partial_z M^{t,\hat z}_s}{z-\hat z}\Big|\le C\Big|\frac{M^{t,z}_s-M^{t,\hat z}_s}{z-\hat z}\Big|\,\big|\partial_z M^{t,z}_s\big|+C\Big|\frac{\partial_z M^{t,z}_s-\partial_z M^{t,\hat z}_s}{z-\hat z}\Big|.
\tag{6.12}
\]
Let $p\ge 2$ and set $I(s;z,\hat z):=\mathbb{E}\big[\max_{r\in[t,s]}|\partial_z M^{t,z}_r-\partial_z M^{t,\hat z}_r|^p\big]$ for $(s,z,\hat z)\in[t,T]\times\mathbb{R}\times\mathbb{R}$. It follows from (6.7) and the assumption $(\mathbf{A}_Z)$ that, for some $C>0$,
\[
I(s;z,\hat z)\le C\int_t^s\Big\{I(r;z,\hat z)+\sup_{u\in\mathbb{R}}\mathbb{E}\Big[\max_{s\in[t,T]}|\partial_z M^{t,u}_s|^p\Big]\,\mathbb{E}\big[|M^{t,z}_r-M^{t,\hat z}_r|^p\big]\Big\}\,dr\le C\int_t^s\big\{I(r;z,\hat z)+|z-\hat z|^p\big\}\,dr.
\]
By Gronwall's inequality, for $s\in[t,T]$,
\[
\sup_{\hat z\neq z}\mathbb{E}\Big[\sup_{r\in[t,s]}\Big|\frac{\partial_z M^{t,z}_r-\partial_z M^{t,\hat z}_r}{z-\hat z}\Big|^p\Big]<+\infty,
\]
and hence the left-hand side of (6.12) with $\hat z\neq z$ and $s\in[t,T]$ is uniformly integrable. Then
\[
h_{zz}(t,z,u)=-\lim_{\hat z\to z}\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}\,\frac{f'(s,M^{t,z}_s)\partial_z M^{t,z}_s-f'(s,M^{t,\hat z}_s)\partial_z M^{t,\hat z}_s}{z-\hat z}\,ds\Big]=-\mathbb{E}\Big[\int_t^T e^{-\rho s-R^{t,u}_s}\Big(f''(s,M^{t,z}_s)\big|\partial_z M^{t,z}_s\big|^2+f'(s,M^{t,z}_s)\partial_{zz}M^{t,z}_s\Big)\,ds\Big],
\tag{6.13}
\]
where $f''(t,z)$ denotes the second-order partial derivative of $f$ w.r.t. $z$.

Next, we derive the representation of $h_t$. Let us consider the solutions $M^{t,z}=(M^{t,z}_s)_{s\in[t,T]}$ and $M^{\hat t,z}=(M^{\hat t,z}_s)_{s\in[\hat t,T]}$ of SDE (4.14) with parameters $(t,z)\in[0,T]\times\mathbb{R}$ and $(\hat t,z)\in[0,T]\times\mathbb{R}$ respectively. Moreover, for $r\ge 0$, we introduce $\mathcal{F}^t_r:=\mathcal{F}_{t+r}$, $B^{1,t}_r:=B^1_{t+r}-B^1_t$ and $B^{2,t}_r:=B^2_{t+r}-B^2_t$, and define $\mathcal{F}^{\hat t}_r$, $B^{i,\hat t}_r$, $i=1,2$, for $r\ge 0$ in the same manner. Then
\[
\big(M^{t,z}_{t+r},B^{1,t}_r,B^{2,t}_r\big)_{r\ge 0}\overset{d}{=}\big(M^{\hat t,z}_{\hat t+r},B^{1,\hat t}_r,B^{2,\hat t}_r\big)_{r\ge 0}.
\tag{6.14}
\]
It holds that, for any $\delta\in[0,T-t]$,
\[
h(t+\delta,z,u)=-\mathbb{E}\Big[\int_{t+\delta}^T e^{-\rho s}f(s,M^{t+\delta,z}_s)e^{-R^{t+\delta,u}_s}\,ds\Big]=-\mathbb{E}\Big[\int_t^{T-\delta}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u}_s}\,ds\Big].
\]
It follows from the dominated convergence theorem that
\[
\lim_{\delta\downarrow 0}\frac{1}{\delta}\big(h(t+\delta,z,u)-h(t,z,u)\big)=\lim_{\delta\downarrow 0}\frac{1}{\delta}\mathbb{E}\Big[\int_{T-\delta}^T e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u}_s}\,ds\Big]=\mathbb{E}\big[e^{-\rho T}f(T,M^{t,z}_T)e^{-R^{t,u}_T}\big].
\]
Similarly, for $t\in(0,T]$, we have that
\[
\lim_{\delta\downarrow 0}\frac{1}{\delta}\big(h(t,z,u)-h(t-\delta,z,u)\big)=\lim_{\delta\downarrow 0}\frac{1}{\delta}\mathbb{E}\Big[\int_{T-\delta}^T e^{-\rho s}f(s,M^{t,z}_{s+\delta})e^{-R^{t,u}_{s+\delta}}\,ds\Big]=\mathbb{E}\big[e^{-\rho T}f(T,M^{t,z}_T)e^{-R^{t,u}_T}\big].
\]
Therefore, we conclude that, for $(t,z,u)\in\mathcal{D}_T$,
\[
h_t(t,z,u)=\mathbb{E}\big[e^{-\rho T}f(T,M^{t,z}_T)e^{-R^{t,u}_T}\big].
\tag{6.15}
\]
At last, we verify the continuity of $h_t$, $h_{uu}$, $h_{zu}$ and $h_{zz}$ in $(t,z,u)$ using the expressions (6.15), (6.6), (6.11) and (6.13). In fact, by Theorem 3.4.3 of Kunita (2019), we have that $M^{t,z}_s$, $\partial_z M^{t,z}_s$ and $\partial_{zz}M^{t,z}_s$ for $s\in[t,T]$ admit respective modifications which are continuous in $(t,z,s)$, $\mathbb{P}$-a.s. Moreover, by (4.16) and (4.17), $R^{t,u}_s$ is also continuous in $(t,u,s)$, $\mathbb{P}$-a.s. It then follows from the dominated convergence theorem that $h_t$, $h_{zu}$ and $h_{zz}$ are continuous in $(t,z,u)$. For the continuity of $h_{uu}$, in view of (6.14), for any $\epsilon\in\mathbb{R}$ satisfying $\tau_\epsilon:=\tau^t_u\wedge(T-\epsilon)\in[t,T]$,
\[
h_{uu}(t+\epsilon,z,u)=\mathbb{E}\big[e^{-\rho\tau_\epsilon}f(\tau_\epsilon,M^{t,z}_{\tau_\epsilon})\Gamma(\tau_\epsilon)\big]+\mathbb{E}\Big[\int_t^{\tau_\epsilon}e^{-\rho s}f(s,M^{t,z}_s)e^{-R^{t,u}_s}\,ds\Big].
\tag{6.16}
\]
Note that $\tau_\epsilon$ is continuous in $(u,\epsilon)$, $\mathbb{P}$-a.s. The continuity of $h_{uu}$ in $(t,z,u)$ then follows from (6.16) and the dominated convergence theorem, which completes the whole proof.

Proof of Theorem 4.2.
For $(t,u)\in[0,T]\times[0,\infty)$, we define $B^{t,u}_s:=u+\sqrt{\alpha}(B_s-B_t)+(\alpha-\rho)(s-t)$ for $s\in[t,T]$, and $M^{t,z}=(M^{t,z}_s)_{s\in[t,T]}$ for $(t,z)\in[0,T]\times\mathbb{R}$ is the unique (strong) solution of SDE (4.14) under the assumption $(\mathbf{A}_Z)$. By Remark 4.17 of Chapter 5 in Karatzas and Shreve (1991), the assumption $(\mathbf{A}_Z)$ guarantees that the time-homogeneous martingale problem associated with $(M^{t,z},B^{t,u})=(M^{t,z}_s,B^{t,u}_s)_{s\in[t,T]}$ is well posed (the definition of well-posedness of a time-homogeneous martingale problem can be found in Definition 4.15 of Chapter 5 in Karatzas and Shreve (1991), page 320). By applying Theorem 5.4.20 in Karatzas and Shreve (1991), $(M^{t,z},B^{t,u})$ is a strong Markov process for $(t,z,u)\in[0,T]\times\mathbb{R}\times\mathbb{R}_+$. For $\varepsilon\in(0,u)$, let us define
\[
\tau^t_\varepsilon:=\inf\big\{s\ge t;\ |B^{t,u}_s-u|\ge\varepsilon\ \text{or}\ |M^{t,z}_s-z|\ge\varepsilon\big\}\wedge T.
\tag{6.17}
\]
Because the paths of $(M^{t,z},B^{t,u})$ are continuous, we have $\tau^t_\varepsilon>t$, $\mathbb{P}$-a.s. For any $\hat t\in[t,T]$, it holds that
\[
B^{t,u}_{\hat t\wedge\tau^t_\varepsilon}=R^{t,u}_{\hat t\wedge\tau^t_\varepsilon}.
\tag{6.18}
\]
In fact, if $\tau^t_\varepsilon\le\hat t$, the following two cases may happen:
(i) $|B^{t,u}_{\tau^t_\varepsilon}-u|<\varepsilon$: this yields that $0<-\varepsilon+u<B^{t,u}_{\tau^t_\varepsilon}<\varepsilon+u$, and hence (6.18) holds.
(ii) $|B^{t,u}_{\tau^t_\varepsilon}-u|\ge\varepsilon$: this implies that either $B^{t,u}_{\tau^t_\varepsilon}=u+\varepsilon>0$ or $B^{t,u}_{\tau^t_\varepsilon}=u-\varepsilon>0$, and hence (6.18) holds.
If $\hat t<\tau^t_\varepsilon$, then $|B^{t,u}_{\hat t}-u|<\varepsilon$ and $|M^{t,z}_{\hat t}-z|<\varepsilon$. This results in $0<-\varepsilon+u<B^{t,u}_{\hat t}<\varepsilon+u$, and hence (6.18) holds. It follows from (6.18) and the strong Markov property that
\[
-\mathbb{E}\Big[\int_{\hat t\wedge\tau^t_\varepsilon}^T f(s,M^{t,z}_s)e^{-R^{t,u}_s-\rho s}\,ds\,\Big|\,\mathcal{F}_{\hat t\wedge\tau^t_\varepsilon}\Big]=h\big(\hat t\wedge\tau^t_\varepsilon,M^{t,z}_{\hat t\wedge\tau^t_\varepsilon},B^{t,u}_{\hat t\wedge\tau^t_\varepsilon}\big),
\]
where, thanks to (4.13), the function $h$ is given by $h(t,z,u)=-\mathbb{E}\big[\int_t^T e^{-\rho s}f(s,M^{t,z}_s)e^{-B^{t,u}_s-L^R_s}\,ds\big]$. Therefore, for $(t,z,u)\in\mathcal{D}_T$, it holds that
\[
h(t,z,u)=\mathbb{E}\Big[h\big(\hat t\wedge\tau^t_\varepsilon,M^{t,z}_{\hat t\wedge\tau^t_\varepsilon},B^{t,u}_{\hat t\wedge\tau^t_\varepsilon}\big)-\int_t^{\hat t\wedge\tau^t_\varepsilon}e^{-\rho s}f(s,M^{t,z}_s)e^{-B^{t,u}_s-L^R_s}\,ds\Big].
\tag{6.19}
\]
By Proposition 4.1 and It\^o's formula, we have that
\[
\begin{aligned}
\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}e^{-\rho s}f(s,M^{t,z}_s)e^{-B^{t,u}_s-L^R_s}\,ds\Big]
&=\frac{1}{\hat t-t}\mathbb{E}\Big[h\big(\hat t\wedge\tau^t_\varepsilon,M^{t,z}_{\hat t\wedge\tau^t_\varepsilon},B^{t,u}_{\hat t\wedge\tau^t_\varepsilon}\big)-h(t,z,u)\Big]\\
&=\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}(h_t+\mathcal{L}h)\big(s,M^{t,z}_s,B^{t,u}_s\big)\,ds\Big]+\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}h_u\big(s,M^{t,z}_s,B^{t,u}_s\big)\,dL^R_s\Big],
\end{aligned}
\tag{6.20}
\]
where the operator $\mathcal{L}$ acting on $C^2(\mathbb{R}\times[0,\infty))$ is defined, for $g\in C^2(\mathbb{R}\times[0,\infty))$, by
\[
\mathcal{L}g:=\alpha g_{uu}+(\alpha-\rho)g_u+\phi(z)g_{uz}+\mu_Z(z)g_z+\frac{\sigma_Z(z)^2}{2}g_{zz}.
\tag{6.21}
\]
By (6.17), the assumption $(\mathbf{A}_f)$ implies that $(e^{-\rho s}f(s,M^{t,z}_s)e^{-B^{t,u}_s-L^R_s})_{s\in[t,\hat t\wedge\tau^t_\varepsilon]}$ is bounded. The bounded convergence theorem yields that
\[
\lim_{\hat t\downarrow t}\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}e^{-\rho s}f(s,M^{t,z}_s)e^{-B^{t,u}_s-L^R_s}\,ds\Big]=f(t,z)e^{-u-\rho t}.
\]
Similarly,
\[
\lim_{\hat t\downarrow t}\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}(h_t+\mathcal{L}h)\big(s,M^{t,z}_s,B^{t,u}_s\big)\,ds\Big]=(h_t+\mathcal{L}h)(t,z,u).
\]
Note that $R^{t,u}_s>0$ for $s\in[t,\hat t\wedge\tau^t_\varepsilon]$ for all $(t,u)\in[0,T]\times[0,\infty)$. We then have that
\[
\frac{1}{\hat t-t}\mathbb{E}\Big[\int_t^{\hat t\wedge\tau^t_\varepsilon}h_u\big(s,M^{t,z}_s,B^{t,u}_s\big)\,dL^R_s\Big]=0.
\]
By applying (6.20), we obtain that $(h_t+\mathcal{L}h)(t,z,u)=f(t,z)e^{-u-\rho t}$ for $(t,z,u)\in[0,T)\times\mathbb{R}\times\mathbb{R}_+$.

We next verify that the function $h$ in (4.13) satisfies the boundary conditions of the Neumann problem (4.20). By the representation (4.13), it is easy to see that $h(T,z,u)=0$ for all $(z,u)\in\mathbb{R}\times[0,\infty)$. It remains to show the validity of the homogeneous Neumann boundary condition. In fact, for $s\in[t,T]$ and any positive sequence $(u_n)_{n\ge 1}$ satisfying $u_n\downarrow 0$ as $n\to\infty$, we have
\[
\bigcup_{n\ge 1}A^{u_n}_s:=\bigcup_{n\ge 1}\Big\{\max_{r\in[t,s]}\big[-\sqrt{\alpha}B_r-(\alpha-\rho)r\big]>u_n\Big\}\in\mathcal{F}_s,\qquad\text{and}\qquad\mathbb{P}\Big(\bigcup_{n\ge 1}A^{u_n}_s\Big)=1.
\tag{6.22}
\]
In view of (4.19) in Proposition 4.1 and the assumption $(\mathbf{A}_Z)$, it follows from the dominated convergence theorem that
\[
h_u(t,z,0)=\lim_{n\to\infty}\int_t^T\mathbb{E}\big[f(s,M^{t,z}_s)e^{-R^{t,u_n}_s-\rho s}\mathbf{1}_{(A^{u_n}_s)^c}\big]\,ds=0,\qquad(t,z)\in[0,T]\times\mathbb{R}.
\tag{6.23}
\]
That is, the Neumann boundary condition in (4.20) holds.

We next assume that the Neumann problem (4.20) admits a classical solution $h$ with a polynomial growth. For $n\in\mathbb{N}$ and $t\in[0,T]$, we define $\tau^t_n:=\inf\{s\ge t;\ |M^{t,z}_s|\ge n\ \text{or}\ |R^{t,u}_s|\ge n\}\wedge T$. It\^o's formula gives that, for $(t,z,u)\in\mathcal{D}_T$,
\[
\begin{aligned}
\mathbb{E}\big[h\big(\tau^t_n,M^{t,z}_{\tau^t_n},R^{t,u}_{\tau^t_n}\big)\big]
&=h(t,z,u)+\mathbb{E}\Big[\int_t^{\tau^t_n}(h_t+\mathcal{L}h)\big(r,M^{t,z}_r,R^{t,u}_r\big)\,dr\Big]+\mathbb{E}\Big[\int_t^{\tau^t_n}h_u\big(r,M^{t,z}_r,R^{t,u}_r\big)\mathbf{1}_{\{R^{t,u}_r=0\}}\,dL^R_r\Big]\\
&=h(t,z,u)+\mathbb{E}\Big[\int_t^{\tau^t_n}f(r,M^{t,z}_r)e^{-R^{t,u}_r-\rho r}\,dr\Big].
\end{aligned}
\tag{6.24}
\]
Moreover, the polynomial growth of $h$ implies the existence of a constant $C=C_T>0$ and $p\ge 1$ such that $|h(\tau^t_n,M^{t,z}_{\tau^t_n},R^{t,u}_{\tau^t_n})|\le C\big(1+\max_{r\in[t,T]}|M^{t,z}_r|^p+\max_{r\in[t,T]}|R^{t,u}_r|^p\big)$. Note that $\lim_{n\to\infty}h(\tau^t_n,M^{t,z}_{\tau^t_n},R^{t,u}_{\tau^t_n})=h(T,M^{t,z}_T,R^{t,u}_T)=0$ in view of (4.20). Letting $n\to\infty$ on both sides of (6.24), by the dominated convergence theorem and the monotone convergence theorem, we obtain the representation (4.13) for the solution $h$, which completes the proof.

Proof of Corollary 4.4.
We first show the existence of a classical solution to the Neumann problem (4.12). By Theorem 4.2, the function $h$ defined by (4.13) solves the Neumann problem (4.20). It readily follows that, for $(t,z,y)\in[0,T]\times\mathbb{R}\times(0,1)$, $\hat v(t,z,y):=e^{\rho t}h(t,z,-\ln y)$ solves the Neumann problem (4.12). The existence of a classical solution to the Neumann problem (4.12) then follows by Proposition 4.1. For the uniqueness, let $\hat v^{(i)}$, $i=1,2$, be two classical solutions such that $h^{(i)}(t,z,u):=e^{-\rho t}\hat v^{(i)}(t,z,e^{-u})$ for $(t,z,u)\in\mathcal{D}_T$ satisfies the polynomial growth condition for $i=1,2$. Theorem 4.2 implies that both $h^{(1)}$ and $h^{(2)}$ admit the probabilistic representation (4.13), and hence $h^{(1)}=h^{(2)}$ on $\mathcal{D}_T$. Therefore $\hat v^{(1)}(t,z,y)=e^{\rho t}h^{(1)}(t,z,-\ln y)=e^{\rho t}h^{(2)}(t,z,-\ln y)=\hat v^{(2)}(t,z,y)$ for $(t,z,y)\in[0,T]\times\mathbb{R}\times(0,1)$. Finally, the strict convexity of $(0,1)\ni y\mapsto\hat v(t,z,y)$ for fixed $(t,z)\in[0,T)\times\mathbb{R}$ follows from Remark 4.3. The corollary is proved as desired.

Acknowledgements: L. Bo and H.F. Liao are supported by the Natural Science Foundation of China under grant no. 11971368 and the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences under grant no. QYZDB-SSW-SYS009. X. Yu is supported by the Hong Kong Early Career Scheme under grant no. 25302116.
References
G. Barles, C. Daher and M. Romano (1994): Optimal control on the $L^\infty$ norm of a diffusion process. SIAM J. Contr. Optim. (3), 612-634.
E.N. Barron and H. Ishii (1989): The Bellman equation for minimizing the maximum cost. Nonlinear Anal.: Theor. Meth. Appl. (9), 1067-1090.
E.N. Barron (1993): The Bellman equation for control of the running max of a diffusion and applications to look-back options. Appl. Anal., 205-222.
E. Bayraktar and M. Egami (2008): An analysis of monotone follower problems for diffusion processes. Math. Oper. Res. (2), 336-350.
O. Bokanowski, A. Picarelli and H. Zidani (2015): Dynamic programming and error estimates for stochastic control problems with maximum cost. Appl. Math. Optim., 125-163.
B. Bouchard, R. Elie and C. Imbert (2010): Optimal control under stochastic target constraints. SIAM J. Contr. Optim. (5), 3501-3531.
S. Browne (2000): Risk-constrained dynamic active portfolio management. Manage. Sci. (9), 1188-1199.
Y. Chow, X. Yu and C. Zhou (2020): On dynamic programming principle for stochastic control under expectation constraints. J. Optim. Theor. App. (3), 803-818.
S. Deng, X. Li, H. Pham and X. Yu (2020): Optimal consumption with reference to past spending maximum. Preprint, arXiv:2006.07223.
A. Gaivoronski, S. Krylov and N. Wijst (2005): Optimal portfolio selection and dynamic benchmark tracking. Euro. J. Oper. Res., 115-131.
M. Di Giacinto, S. Federico and F. Gozzi (2011): Pension funds with a minimum guarantee: a stochastic control approach. Finance Stoch., 297-342.
M. Di Giacinto, S. Federico, F. Gozzi and E. Vigna (2014): Income drawdown option with minimum guarantee. Euro. J. Oper. Res., 610-624.
P. Guasoni, G. Huberman and D. Ren (2020): Shortfall aversion. Math. Financ., forthcoming.
M. Harrison (1985): Brownian Motion and Stochastic Flow Systems. John Wiley and Sons, New York.
N. Ikeda and S. Watanabe (1992): Stochastic Differential Equations and Diffusion Processes. North-Holland Mathematical Library, North Holland.
I. Karatzas, J. Lehoczky, S.E. Shreve and G. Xu (1991): Martingale and duality methods for utility maximization in an incomplete market. SIAM J. Contr. Optim. (3), 702-730.
I. Karatzas and S.E. Shreve (1984): Connections between optimal stopping and singular stochastic control: I. Monotone follower problems. SIAM J. Contr. Optim. (6), 856-877.
I. Karatzas and S.E. Shreve (1991): Brownian Motion and Stochastic Calculus, 2nd Ed. Springer-Verlag, New York.
N. El Karoui, M. Jeanblanc and V. Lacoste (2005): Optimal portfolio management with American capital guarantee. J. Econ. Dyn. Contr., 449-468.
N. El Karoui and A. Meziou (2006): Constrained optimization with respect to stochastic dominance: application to portfolio insurance. Math. Financ. (1), 103-117.
A. Kröner, A. Picarelli and H. Zidani (2018): Infinite horizon stochastic optimal control problems with running maximum cost. SIAM J. Contr. Optim. (5), 3296-3319.
H. Kunita (2019): Stochastic Flows and Jump-Diffusions. Springer-Verlag, New York.
J. Sekine (2012): Long-term optimal portfolios with floor. Finance Stoch., 369-401.
O. Strub and P. Baumann (2018): Optimal construction and rebalancing of index-tracking portfolios. Euro. J. Oper. Res., 370-387.
A. Weerasinghe and C. Zhu (2016): Optimal inventory control with path-dependent cost criteria. Stoch. Process. Appl., 1585-1621.
D. Yao, S. Zhang and X. Zhou (2006): Tracking a financial benchmark using a few assets. Oper. Res. (2), 232-246.