Explicit solutions in one-sided optimal stopping problems for one-dimensional diffusions
Fabián Crocce ∗ and Ernesto Mordecki ∗

Abstract
Consider the optimal stopping problem of a one-dimensional diffusion with positive discount. Based on Dynkin's characterization of the value as the minimal excessive majorant of the reward, and considering its Riesz representation, we give an explicit equation to find the optimal stopping threshold for problems with one-sided stopping regions, and an explicit formula for the value function of the problem. This representation also sheds light on the validity of the smooth fit principle. The results are illustrated by solving some classical problems, and also through the solution of the optimal stopping of the skew Brownian motion and of the sticky Brownian motion, including cases in which the smooth fit principle fails.
1 Introduction

Consider a non-terminating and regular one-dimensional (or linear) diffusion X = {X_t : t ≥ 0}, in the sense of Itô and McKean [9] (see also Borodin and Salminen [2]). The state space of X is denoted by I, an interval of the real line R with left endpoint ℓ = inf I and right endpoint r = sup I, where −∞ ≤ ℓ < r ≤ ∞. We exclude the possibility of absorbing and killing boundaries; if one of the boundaries belongs to I, we assume it to be both entrance and exit (i.e. a reflecting boundary). Denote by P_x the probability measure associated with X when starting from x, and by E_x the corresponding mathematical expectation. Denote by T the set of all stopping times with respect to {F_t : t ≥ 0}, the usual augmentation of the natural filtration generated by X (see I.14 in [2]).

Given a non-negative lower semicontinuous reward function g: I → R and a discount factor α > 0, consider the optimal stopping problem consisting in finding a function V_α and a stopping time τ* ∈ T such that

    V_α(x) = E_x(e^{−ατ*} g(X_{τ*})) = sup_{τ ∈ T} E_x(e^{−ατ} g(X_τ)).    (1)

The value function V_α and the optimal stopping time τ* are the solution of the problem.

The first problems in optimal stopping appeared in the framework of statistics, more precisely, in the context of sequential analysis (see the book by Wald [30]). For continuous-time processes a relevant reference is the book by Shiryaev [28], which also has applications to statistics. A new impulse to these problems came from mathematical finance, where arbitrage considerations show that in order to price an American option one has to solve an optimal stopping problem. The first results in this direction were provided by McKean [15] in 1965 and Merton [16] in 1973, who respectively solved the perpetual put and call option pricing problems, by solving the corresponding optimal stopping problems in the context of the Black and Scholes model [1]. For the state of the art in the subject see the book by Peskir and Shiryaev [21] and the references therein. A new approach for solving one-dimensional optimal stopping problems for very general reward functions is provided in the recent paper [20].

When considering optimal stopping problems we typically find two classes of results. The first one consists in the explicit solution to a concrete optimal stopping problem (1). Usually in this case one has to –somehow– guess the solution and prove that this guess in fact solves the optimization problem; we call this approach "verification". As examples we can mention the papers [15], [16], [29], [26]. The second class consists of general results, for wide classes of processes and reward functions.

∗ Iguá 4225, Centro de Matemática, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay
We call this the "theoretical" approach. It typically includes results about properties of the solution. In this class we mention for example [3], [8], [6]. But these two classes do not always meet, as frequently in concrete problems the assumptions of the theoretical studies are not fulfilled, and, what is more important, many of these theoretical studies do not provide concrete ways to find solutions. Concerning the first approach, a usual procedure is to apply the principle of smooth fit, which generally leads to the solution of two equations: the continuous fit equation and the smooth fit equation. Once these equations are solved, a verification procedure is needed in order to prove that the candidate is the effective solution of the problem (see Chapter IV in [21]). This approach, when an explicit solution can be found, is very effective. Concerning the second approach, maybe the most important result is Dynkin's characterization of the value function V_α as the least α-excessive (or α-superharmonic) majorant of the payoff function g [3]. Other ways of classifying approaches to the study of optimal stopping problems include the martingale–Markovian dichotomy as exposed in [21].

In the present work we provide an explicit solution of a right-sided optimal stopping problem for a one-dimensional diffusion process, i.e., when the optimal stopping time has the form

    τ* = inf{t ≥ 0 : X_t ≥ x*}    (2)

for some optimal threshold x* ∈ I, under mild regularity conditions. Right-sided problems are also known as call-type optimal stopping problems. Analogous results are valid for left-sided problems.

An important byproduct of our results has to do with the smooth fit principle. Our results are independent of this principle, but they give sufficient conditions in order to guarantee it.

In section 2 some necessary definitions and preliminary results are given. Our main results are presented in section 3.
In section 4 we discuss the consequences of our results related to the smooth fit principle. Finally, in section 5 we present some examples, including the optimal stopping of the skew Brownian motion and of the sticky Brownian motion (suggested by Paavo Salminen), where particular attention is given to the smooth fit principle.

2 Definitions and preliminary results
Denote by L the infinitesimal generator of the diffusion X, and by D_L its domain. For any stopping time τ and any f ∈ D_L the following discounted version of Dynkin's formula holds:

    f(x) = E_x(∫_0^τ e^{−αt}(α − L)f(X_t) dt) + E_x(e^{−ατ} f(X_τ)).    (3)

The resolvent of the process X is the operator R_α defined by

    R_α u(x) = ∫_0^∞ e^{−αt} E_x(u(X_t)) dt,

applied to a function u ∈ C_b(I) = {u : I → R, u continuous and bounded}. The range of the operator R_α is independent of α > 0 and coincides with D_L. Moreover, for any f ∈ D_L, R_α(α − L)f = f, and for any u ∈ C_b(I), (α − L)R_α u = u. In other terms, R_α and (α − L) are inverse operators. Denoting by s and m the scale function and the speed measure of the diffusion X respectively, we have that, for any f ∈ D_L, the lateral derivatives with respect to the scale function exist for every x ∈ (ℓ, r). Furthermore, they satisfy

    (∂⁺f/∂s)(x) − (∂⁻f/∂s)(x) = m({x}) Lf(x),    (4)

and the following identity holds for z > y:

    (∂⁺f/∂s)(z) − (∂⁺f/∂s)(y) = ∫_(y,z] Lf(x) m(dx).    (5)

This last formula allows us to compute the infinitesimal generator of f at x ∈ (ℓ, r) as Feller's differential operator [7]:

    Lf(x) = (∂/∂m)(∂⁺/∂s) f(x).    (6)

The infinitesimal generator at ℓ and r (if they belong to I) can be computed as Lf(ℓ) = lim_{x→ℓ+} Lf(x) and Lf(r) = lim_{x→r−} Lf(x) respectively. Given a function u: I → R and x ∈ (ℓ, r), we give to Lu(x) the meaning given in (6) if it makes sense.
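The inverse relation R_α(α − L)f = f can be illustrated numerically. The following sketch (the diffusion, the test function and the parameter value are illustrative choices, not taken from the text) uses a standard Brownian motion, for which Lf = f″/2, and f(x) = e^{−x²} ∈ D_L; the resolvent is evaluated directly from its definition by numerical integration, and should return f.

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.7
f = lambda x: np.exp(-x**2)                       # test function, f and Lf bounded and continuous
# (alpha - L)f with L f = f''/2 and f'' = (4x^2 - 2) e^{-x^2}
u = lambda x: alpha * f(x) - 0.5 * (4 * x**2 - 2) * np.exp(-x**2)

def Ex_u(x, t):
    # E_x u(X_t) for standard Brownian motion: X_t = x + sqrt(t) Z, Z standard normal
    g = lambda z: u(x + np.sqrt(t) * z) * np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    return quad(g, -8, 8)[0]

def R_alpha_u(x):
    # resolvent applied to u: \int_0^infty e^{-alpha t} E_x u(X_t) dt
    return quad(lambda t: np.exp(-alpha * t) * Ex_u(x, t), 0, 50)[0]

x0 = 0.3
assert abs(R_alpha_u(x0) - f(x0)) < 1e-4          # R_alpha (alpha - L) f = f
```

The truncation of the integration ranges is harmless here because both the Gaussian kernel and the discount factor decay rapidly.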
We also define, if ℓ ∈ I, Lu(ℓ) = lim_{x→ℓ+} Lu(x) and, if r ∈ I, Lu(r) = lim_{x→r−} Lu(x), whenever the limit exists.

There exist two continuous functions ϕ_α: I → R⁺ decreasing and ψ_α: I → R⁺ increasing, solutions of αu = Lu, such that any other continuous solution u of this differential equation satisfies u = aϕ_α + bψ_α, with a, b in R. Denoting by τ_z = inf{t : X_t = z} the hitting time of level z ∈ I, we have

    E_x(e^{−ατ_z}) = ψ_α(x)/ψ_α(z) if x ≤ z,  and  E_x(e^{−ατ_z}) = ϕ_α(x)/ϕ_α(z) if x ≥ z.    (7)

The functions ϕ_α and ψ_α, though not necessarily in D_L, also satisfy (4) for all x ∈ (ℓ, r), which allows us to conclude that, in case m({x}) = 0, the derivative at x of both functions with respect to the scale exists. From (5) applied to ψ_α, and taking into account that αψ_α = Lψ_α, we obtain for z > y

    (∂⁺ψ_α/∂s)(z) − (∂⁺ψ_α/∂s)(y) = ∫_(y,z] αψ_α(x) m(dx);

the right-hand side is strictly positive, since αψ_α is positive and m charges every open set. We conclude that ∂⁺ψ_α/∂s is strictly increasing. In an analogous way it can be proven that ∂⁺ϕ_α/∂s is increasing as well. The previous considerations, together with the fact that (∂⁺ψ_α/∂s)(x) ≥ 0 and (∂⁺ϕ_α/∂s)(x) ≤ 0, give the following inequalities for x ∈ (ℓ, r):

    −∞ < (∂⁻ϕ_α/∂s)(x) ≤ (∂⁺ϕ_α/∂s)(x) < 0 < (∂⁻ψ_α/∂s)(x) ≤ (∂⁺ψ_α/∂s)(x) < ∞.

The
Green function of the process X with discount factor α is defined by

    G_α(x, y) = ∫_0^∞ e^{−αt} p(t; x, y) dt,

where p(t; x, y) is the transition density of the diffusion with respect to the speed measure m(dx) (this density always exists, see [9] or [2]). The Green function may be expressed in terms of ϕ_α and ψ_α as follows:

    G_α(x, y) = w_α^{−1} ψ_α(x) ϕ_α(y) if x ≤ y,  and  G_α(x, y) = w_α^{−1} ψ_α(y) ϕ_α(x) if x ≥ y,    (8)

where w_α is the Wronskian, given by

    w_α = (∂ψ_α⁺/∂s)(x) ϕ_α(x) − ψ_α(x) (∂ϕ_α⁺/∂s)(x).

Observe that the Wronskian is positive and independent of x [9], [2]. Given u: I → R, under the condition ∫_I G_α(x, y)|u(y)| m(dy) < ∞, an application of Fubini's theorem gives

    R_α u(x) = ∫_I G_α(x, y) u(y) m(dy).    (9)

A non-negative Borel function u: I → R is called α-excessive for the process X if e^{−αt} E_x(u(X_t)) ≤ u(x) for all x ∈ I and t ≥
0, and lim_{t→0} E_x(u(X_t)) = u(x) for all x ∈ I. A 0-excessive function is said to be excessive.

Consider the process killed at an independent exponential time of parameter α, i.e.

    Y_t = X_t if t < e_α, and Y_t = ∆ otherwise,

with e_α a random variable with exponential distribution of parameter α, independent of X, and ∆ the cemetery state, at which any function is defined to be zero. It is easy to see that the Green function G^Y of the process Y = {Y_t : t ≥ 0} coincides with G_α; a Borel function u: I → R is excessive for Y if and only if it is α-excessive for X. In fact, the non-discounted optimal stopping problem for the process Y has the very same solution (value function and optimal stopping time) as the α-discounted optimal stopping problem for X. For general references on diffusions and Markov processes see [2, 9, 22, 5, 11].

3 Main results
Our departure point, inscribed in the Markovian approach, is Dynkin's characterization of the solution of the optimal stopping problem. Dynkin's characterization [3] states that, if the reward function is lower semi-continuous, V is the value function of the non-discounted optimal stopping problem with reward g if and only if V is the least excessive function such that V(x) ≥ g(x) for all x ∈ I. Applying this result to the killed process Y, and taking into account the relation between X and Y, we obtain that V_α, the value function of the problem with discount α, is characterized as the least α-excessive majorant of g.

The second step uses Riesz's decomposition of an α-excessive function. We recall this decomposition in our context (see [12, 13, 4]). A function u: I → R is α-excessive if and only if there exist a non-negative Radon measure µ and an α-harmonic function such that

    u(x) = ∫_(ℓ,r) G_α(x, y) µ(dy) + (α-harmonic function).    (10)

Furthermore, the previous representation is unique. The measure µ is called the representing measure of u.

The third step is based on the fact that the resolvent and the infinitesimal generator of a Markov process are inverse operators. Suppose that we can write

    V_α(x) = ∫_I G_α(x, y)(α − L)V_α(y) m(dy),    (11)

where L is the infinitesimal generator and m(dy) is the speed measure of the diffusion. Assuming that the stopping region has the form I ∩ {x ≥ x*}, and taking into account that V_α is α-harmonic in the continuation region and V_α = g in the stopping region, we obtain as a suitable candidate for the representing measure

    µ(dy) = 0 if y < x*;  µ({x*}) = k;  µ(dy) = (α − L)g(y) m(dy) if y > x*.    (12)

This approach was initiated by Salminen in [23] (see also [17]).
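The representation (10) with the candidate measure (12) can be exercised numerically on a toy case (a sketch with illustrative choices, not taken from the text): for a standard Brownian motion with reward g(x) = x⁺ one has ψ_α(x) = e^{βx}, ϕ_α(x) = e^{−βx} with β = √(2α), scale s(x) = x, speed m(dx) = 2dx and Wronskian w_α = 2β, and the optimal threshold can be computed by hand (e.g. by smooth fit): x* = 1/β. Since m({x*}) = 0, the atom in (12) vanishes (k = 0), and integrating the candidate measure against the Green function should reproduce the value function of the form (14):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5
beta = np.sqrt(2 * alpha)
w = 2 * beta                       # Wronskian of psi(x) = e^{beta x}, phi(x) = e^{-beta x}
x_star = 1 / beta                  # threshold for g(x) = x^+, obtainable by smooth fit

def G(x, y):
    # Green function (8): w^{-1} psi(min) phi(max)
    return np.exp(beta * min(x, y)) * np.exp(-beta * max(x, y)) / w

def V_riesz(x):
    # integrate the representing measure (12); on (x*, oo), (alpha - L)g(y) m(dy) = 2 alpha y dy
    integrand = lambda y: G(x, y) * alpha * y * 2.0
    a = max(x, x_star)             # split at the kink of G(x, .) at y = x
    return quad(integrand, x_star, a)[0] + quad(integrand, a, np.inf)[0]

def V_explicit(x):
    # value function of the form (14)
    return x if x >= x_star else x_star * np.exp(beta * (x - x_star))

for x in [0.2, 0.8, x_star, 2.0, 5.0]:
    assert abs(V_riesz(x) - V_explicit(x)) < 1e-6
```

The agreement for x < x*, at x*, and in the stopping region illustrates that the measure (12), with no atom at x* here, carries the whole value function.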
According to Salminen's approach, once the excessive function is represented as an integral with respect to the Martin kernel M_α(x, y),

    V_α(x) = ∫_I M_α(x, y) κ(dy),    (13)

one has to find the representing measure κ. The Martin and Green kernels are related by M_α(x, y) = G_α(x, y)/G_α(x₀, y), where x₀ is a reference point. Therefore, Riesz's representation of V_α is related to the one in (13) by considering the measure ν(dy) = κ(dy)/G_α(x₀, y) and taking M_α(x, ℓ)κ({ℓ}) + M_α(x, r)κ({r}) as the α-harmonic function.

It is useful to observe that when the optimal stopping problem (1) is right-sided with optimal threshold x*, it has a value function V_α of the form

    V_α(x) = E_x(e^{−ατ_{x*}}) g(x*) for x < x*,  and  V_α(x) = g(x) for x ≥ x*.

Then V_α(x) ≥ g(x) for all x ∈ I and, in virtue of equation (7), we have

    V_α(x) = (g(x*)/ψ_α(x*)) ψ_α(x) for x < x*,  and  V_α(x) = g(x) for x ≥ x*.    (14)

The state space of the process may or may not include the left endpoint ℓ and the right endpoint r. In order to simplify, with a slight abuse of notation, we write [ℓ, x], [ℓ, x), [x, r], (x, r] to denote respectively I ∩ {y ≤ x}, I ∩ {y < x}, I ∩ {y ≥ x}, I ∩ {y > x}.

We say that the function g: I → R satisfies the right regularity condition (RRC) if there exist a point x₀ ∈ I and a function g̃: I → R (not necessarily non-negative) such that g̃(x) = g(x) for x ≥ x₀ and

    g̃(x) = ∫_I G_α(x, y)(α − L)g̃(y) m(dy)  (x ∈ I).    (15)

Proposition 3.5 gives conditions under which the inversion formula (15) can be verified. Informally speaking, the RRC is fulfilled by functions g that satisfy all the local conditions –regularity conditions– to belong to D_L for x ≥ x₀, and do not increase as quickly as ψ_α does when approaching r (in the case r ∉ I).
Observe that if g satisfies the RRC for a certain x₀, it also satisfies it for any greater value; and, of course, if g itself satisfies (15), then it satisfies the RRC for every x₀ in I. To take full advantage of the following result it is desirable to find the least x₀ such that the RRC holds. The main result follows.

Theorem 3.1.
Consider a diffusion X and a reward function g that satisfies the RRC for some x₀. The optimal stopping problem is right-sided with optimal threshold x* ≥ x₀ if and only if:

    g(x*) ≥ w_α^{−1} ψ_α(x*) ∫_(x*,r] ϕ_α(y)(α − L)g(y) m(dy),    (16)

    (α − L)g(x) ≥ 0 for x ∈ I such that x > x*,    (17)

and

    (g(x*)/ψ_α(x*)) ψ_α(x) ≥ g(x) for x ∈ I such that x < x*.    (18)

Furthermore, in the previous situation:

• Riesz's representation of the value function V_α has representing measure as given in (12) with

    k = [g(x*) − w_α^{−1} ψ_α(x*) ∫_(x*,r] ϕ_α(y)(α − L)g(y) m(dy)] / [w_α^{−1} ψ_α(x*) ϕ_α(x*)],

while the α-harmonic part vanishes;

• if x* > x₀ and the inequality (16) is strict, then x* is the smallest number satisfying this strict inequality and (17); in particular

    g(x*) ≤ w_α^{−1} ψ_α(x*) ∫_[x*,r] ϕ_α(y)(α − L)g(y) m(dy),    (19)

which implies that k ≤ (α − L)g(x*) m({x*}).

Remark 3.2. From this theorem we obtain an algorithm to solve right-sided optimal stopping problems, which works in most cases: (i) find the largest root x* of the equation

    g(x*) = w_α^{−1} ψ_α(x*) ∫_(x*,r] ϕ_α(y)(α − L)g(y) m(dy);    (20)

(ii) verify (α − L)g(y) ≥ 0 for y > x*; (iii) verify g(x) ≤ (g(x*)/ψ_α(x*)) ψ_α(x) for x < x*. If these steps are fulfilled, the problem is right-sided with optimal threshold x*. Observe that if m({x*}) = 0, then inequalities (16) and (19) are equalities.

Proof.
We start by observing that if the problem is right-sided with threshold x*, then (17) holds. In general, (α − L)g is non-negative in the stopping region (this can be seen with the help of Dynkin's operator, see ex. 3.17, p. 310 in [22]; see also equation (10.1.35) in [18]). Under the assumption made, the value function V_α is given by (14), which implies (18), since the value function dominates the reward. To finish the proof of the "only if" part it remains to prove (16). Consider W_α: I → R defined by

    W_α(x) := ∫_(x*,r] G_α(x, y)(α − L)g(y) m(dy);

observe that W_α is α-excessive in virtue of (17) and Riesz's representation. Let Ṽ_α: I → R be defined by

    Ṽ_α(x) := W_α(x) + k G_α(x, x*),

where k is such that Ṽ_α(x*) = g(x*), i.e. k = (g(x*) − W_α(x*))/G_α(x*, x*). Observe that, by (8), W_α(x*) is the right-hand side of (16); in fact, (16) holds if and only if k ≥ 0. From the definition of Ṽ_α and the representation (8) of G_α we get, for x ≤ x*,

    Ṽ_α(x) = (ψ_α(x)/ψ_α(x*)) Ṽ_α(x*) = (ψ_α(x)/ψ_α(x*)) g(x*) = V_α(x).

Let us compute Ṽ_α(x) − g(x) for x ≥ x*. In this region we have g = g̃, where g̃ is the extension given by the RRC. For g̃ we can use the inversion formula (15). Denoting ν_g̃(dy) = (α − L)g̃(y) m(dy), we have

    Ṽ_α(x) − g(x) = k G_α(x, x*) − ∫_[ℓ,x*] G_α(x, y) ν_g̃(dy)
                  = k w_α^{−1} ϕ_α(x) ψ_α(x*) − w_α^{−1} ϕ_α(x) ∫_[ℓ,x*] ψ_α(y) ν_g̃(dy)
                  = (ϕ_α(x)/ϕ_α(x*)) (Ṽ_α(x*) − g(x*)) = 0,

because Ṽ_α(x*) = g(x*). So far, we have proved that Ṽ_α(x) = V_α(x) for all x ∈ I. We are ready to prove that k ≥
0, based on the uniqueness of Riesz's decomposition: the α-excessive function W_α has the Riesz representation given by its definition and, if k < 0,

    W_α(x) = −k G_α(x, x*) + V_α(x)

would give another Riesz representation (the representing measure being −k δ_{x*}(dx) + µ(dx), where µ is the representing measure of V_α). An easy way of verifying that the measures are not the same is to observe that the former does not charge {x*}, while the latter does.

To prove the "if" statement, observe that, assuming (16), (17) and (18), the function Ṽ_α already defined is α-excessive (by Riesz's representation, bearing in mind that k ≥ 0) and dominates g. By Dynkin's characterization, the value function V_α is the minimal α-excessive function that dominates g; therefore Ṽ_α ≥ V_α. Since Ṽ_α satisfies (14) (we have proved this in the first part of this proof), it follows that Ṽ_α is the expected reward associated with the hitting time of the set [x*, r]; then

    Ṽ_α(x) ≤ V_α(x) = sup_τ E_x(e^{−ατ} g(X_τ)),

concluding that the problem is right-sided with threshold x*.

The considerations about Riesz's representation of V_α stated in the "furthermore" part are a direct consequence of the proof just given.

To prove the minimality of x*, suppose that there exists x** such that x₀ < x** < x*, satisfying the strict inequality in (16) and also (17). Let us check that V_α(x**) − g(x**) <
0, in contradiction with the fact that V_α is a majorant of g. Considering the extension g̃ given by the RRC and denoting ν_g̃(dy) = (α − L)g̃(y) m(dy), we have

    V_α(x**) − g(x**) = −∫_[ℓ,x*] G_α(x**, y) ν_g̃(dy) + k G_α(x**, x*) = s₁ + s₂ + s₃ + s₄,

where

    s₁ = −∫_[ℓ,x**] G_α(x**, y) ν_g̃(dy),
    s₂ = −w_α^{−1} ψ_α(x**) ∫_(x**,x*] ϕ_α(y) ν_g̃(dy),
    s₃ = (ψ_α(x**)/ψ_α(x*)) (ϕ_α(x*)/ϕ_α(x**)) ∫_[ℓ,x**] G_α(x**, y) ν_g̃(dy),
    s₄ = (ψ_α(x**)/ψ_α(x*)) ∫_(x**,x*] G_α(x*, y) ν_g̃(dy).

To check that s₃ + s₄ = k G_α(x**, x*), use

    k = (1/G_α(x*, x*)) ∫_[ℓ,x*] G_α(x*, y) ν_g̃(dy),  and  G_α(x**, x*)/G_α(x*, x*) = ψ_α(x**)/ψ_α(x*).

Finally, observe that

    s₁ + s₃ = ((ψ_α(x**)/ψ_α(x*)) (ϕ_α(x*)/ϕ_α(x**)) − 1) ∫_[ℓ,x**] G_α(x**, y) ν_g̃(dy) < 0,

because the first factor is negative and the second one positive, by the assumption about x**, and

    s₂ + s₄ = w_α^{−1} (ψ_α(x**)/ψ_α(x*)) ∫_(x**,x*] (ψ_α(y) ϕ_α(x*) − ψ_α(x*) ϕ_α(y)) ν_g̃(dy) ≤ 0,

because the measure ν_g̃(dy) is positive there and the integrand non-positive (it is increasing in y and vanishes at y = x*). We have obtained that V_α(x**) − g(x**) = s₁ + s₂ + s₃ + s₄ < 0, a contradiction, which completes the proof.

Theorem 3.3.
Consider a diffusion X and a reward function g such that (15) is fulfilled with g̃ = g. Suppose that there exists a root c ∈ (ℓ, r) of the equation (α − L)g(x) = 0, such that (α − L)g(x) < 0 if x < c and (α − L)g(x) > 0 if x > c, and that ∫_I ψ_α(y)(α − L)g(y) m(dy) ∈ (0, ∞]. Then the optimal stopping problem (1) is right-sided, with optimal threshold

    x* = min{x : b(x) ≥ 0},    (21)

where

    b(x) = ∫_[ℓ,x] ψ_α(y)(α − L)g(y) m(dy)  (x ∈ I).

Proof.
The idea is to apply Theorem 3.1 with x* defined in (21). By the assumptions on (α − L)g and the fact that m(dy) is strictly positive on any open set, we obtain that the function b(x) is decreasing in [ℓ, c) and increasing in (c, r). Moreover, b(x) < 0 for ℓ < x ≤ c. Since b is right-continuous and increasing in (c, r), the set {x : b(x) ≥ 0} = [x*, r) with x* > c. Observe that, by (15) and (8), we get

    g(x) = w_α^{−1} ϕ_α(x) b(x) + ∫_(x,r] G_α(x, y)(α − L)g(y) m(dy),

and b(x*) ≥ 0 gives (16). Since x* ≥ c, we have (α − L)g(y) > 0 for y > x*, which is (17). It only remains to verify (18). By the definition of x* there exists a signed measure σ_ℓ(dy) whose support is contained in [ℓ, x*], with σ_ℓ(dy) = (α − L)g(y) m(dy) for y < x*, and such that

    ∫_[ℓ,x*] ψ_α(y) σ_ℓ(dy) = 0.

Furthermore, σ_r(dy) = (α − L)g(y) m(dy) − σ_ℓ(dy) is a positive measure supported in [x*, r]. Using the inversion formula for g and (8), we have for x < x*

    g(x) − (ψ_α(x)/ψ_α(x*)) g(x*) = ∫_[ℓ,x*] G_α(x, y) σ_ℓ(dy) ≤ (G_α(x, c)/ψ_α(c)) ∫_[ℓ,x*] ψ_α(y) σ_ℓ(dy) = 0,

where the inequality follows from the following facts: if y < c, then σ_ℓ(dy) ≤ 0 and ψ_α(y) G_α(x, c)/ψ_α(c) ≤ G_α(x, y), while if y > c, then σ_ℓ(dy) ≥ 0 and ψ_α(y) G_α(x, c)/ψ_α(c) ≥ G_α(x, y). We can now apply Theorem 3.1, completing the proof.
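The algorithm of Remark 3.2 can be run on a toy example (a numerical sketch with illustrative parameters, not taken from the text): for a standard Brownian motion with reward g(x) = x⁺ one has ψ_α(x) = e^{βx}, ϕ_α(x) = e^{−βx} with β = √(2α), scale s(x) = x, speed m(dx) = 2dx and w_α = 2β; the threshold obtained by solving equation (20) numerically should match the value x* = 1/β that smooth fit gives by hand, and steps (ii) and (iii) of the remark can be checked on a grid.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Standard Brownian motion: psi(x) = e^{beta x}, phi(x) = e^{-beta x},
# beta = sqrt(2 alpha), scale s(x) = x, speed m(dx) = 2 dx, Wronskian w = 2 beta.
alpha = 0.5
beta = np.sqrt(2 * alpha)
w = 2 * beta

def F(x):
    # g(x) - w^{-1} psi(x) \int_x^oo phi(y) (alpha - L)g(y) m(dy) for g(y) = y^+;
    # on (x, oo) with x > 0 one has (alpha - L)g(y) = alpha * y
    tail = quad(lambda y: np.exp(-beta * y) * alpha * y * 2.0, x, np.inf)[0]
    return x - np.exp(beta * x) * tail / w

# step (i): root of equation (20)
x_star = brentq(F, 1e-9, 50.0)
assert abs(x_star - 1 / beta) < 1e-9          # agrees with smooth fit: x* = 1/sqrt(2 alpha)

# step (ii): (alpha - L)g(y) = alpha*y > 0 for y > x_star.  step (iii):
xs = np.linspace(1e-3, x_star, 200)
assert np.all(xs <= x_star * np.exp(beta * (xs - x_star)) + 1e-12)   # g <= g(x*) psi/psi(x*)
```

Here F is linear in x up to quadrature error, so the root-finder converges immediately; for rewards without a closed-form tail integral the same scheme applies with a purely numerical integrand.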
In order to apply the previous results it is necessary to verify the inversion formula (15). As we have seen in the preliminaries, if f ∈ D_L we have R_α(α − L)f = f and, if equation (9) holds for (α − L)f, we have (15). This is the content of the following result.

Lemma 3.4. Assume f ∈ D_L, and

    ∫_I G_α(x, y)|(α − L)f(y)| m(dy) < ∞ for all x ∈ I.

Then f satisfies equation (15).

The conditions of the previous lemma are very restrictive when it comes to solving concrete problems, as reward functions typically satisfy lim_{x→r} g(x) = ∞. The following result extends the previous one to unbounded functions.

Proposition 3.5.
Consider the case r ∉ I. Suppose u: I → R is such that Lu(x) in (6) can be defined for all x ∈ I. Assume

    ∫_I G_α(x, y)|(α − L)u(y)| m(dy) < ∞,    (22)

and

    lim_{z→r−} u(z)/ψ_α(z) = 0.    (23)

Suppose also that for each y ∈ I there exists a function u_y ∈ D_L such that u_y(x) = u(x) for x ≤ y. Then u satisfies (15).

Proof. By (9) we have ∫_I G_α(x, y)(α − L)u(y) m(dy) = R_α(α − L)u(x). Consider a strictly increasing sequence r_n → r (n → ∞) and denote by u_n the function u_{r_{n+1}} ∈ D_L of the hypothesis. By the continuity of the sample paths and by our assumptions on the right boundary r, we have τ_{r_n} → ∞ (n → ∞). Applying formula (3) to u_n and the stopping time τ_{r_n} we obtain, for x < r_n,

    u_n(x) = E_x(∫_0^{τ_{r_n}} e^{−αt}(α − L)u_n(X_t) dt) + E_x(e^{−ατ_{r_n}}) u_n(r_n);

using u_n(x) = u(x) and (α − L)u(x) = (α − L)u_n(x) for x < r_{n+1}, we have

    u(x) = E_x(∫_0^{τ_{r_n}} e^{−αt}(α − L)u(X_t) dt) + E_x(e^{−ατ_{r_n}}) u(r_n).

Taking limits as n → ∞, by (7) and (23) we have

    E_x(e^{−ατ_{r_n}}) u(r_n) = (ψ_α(x)/ψ_α(r_n)) u(r_n) → 0.

To compute the limit of the first term above we use the dominated convergence theorem and (22). The result is

    u(x) = ∫_0^∞ E_x(e^{−αt}(α − L)u(X_t)) dt = ∫_I G_α(x, y)(α − L)u(y) m(dy),

concluding the proof.

4 The smooth fit principle

The principle of smooth fit (SF) holds when the condition V′(x*) = g′(x*) is satisfied, being a helpful tool to find candidate solutions to optimal stopping problems. In [23] Salminen proposes an alternative version of this principle, considering derivatives with respect to the scale function. We say that there is scale smooth fit (SSF) when the value function has a derivative at x* with respect to the scale function.
Note that if g also has a derivative with respect to the scale function, the two derivatives coincide, since g = V_α in [x*, r]. In [19] Peskir presents two interesting examples: one of them consists of the optimal stopping problem of a regular diffusion with a differentiable payoff function in which the principle of SF does not hold, but the alternative principle of SSF does; while in the other the principle of SF holds but the principle of SSF fails. Later, Samee [25] analysed the validity of the principle of smooth fit for killed diffusions and introduced other alternative principles, considering derivatives of gψ_α and gϕ_α with respect to the scale function s. See also the paper by Jacka [10] for a study of the principle of smooth fit related to the Snell envelope.

We now analyse the relation between Riesz's representation of V_α, stated in the previous section, and the principle of smooth fit. We start by proving that k = ν({x*}) = 0 in (12) implies that the reward function has a derivative with respect to the function ψ_α. Then we state some corollaries.

Theorem 4.1.
Given a diffusion X and a reward function g, if the value function associated with problem (1) satisfies

    V_α(x) = ∫_(x*,r] G_α(x, y)(α − L)g(y) m(dy)

for x* ∈ (ℓ, r), then V_α is differentiable at x* with respect to ψ_α.

Proof. Write ν_g(dy) = (α − L)g(y) m(dy). For x ≤ x*,

    V_α(x) = w_α^{−1} ψ_α(x) ∫_(x*,r] ϕ_α(y) ν_g(dy),

and the left derivative of V_α with respect to ψ_α at x* is

    (∂V_α⁻/∂ψ_α)(x*) = w_α^{−1} ∫_(x*,r] ϕ_α(y) ν_g(dy).

For x > x*,

    V_α(x) = w_α^{−1} ϕ_α(x) ∫_(x*,x) ψ_α(y) ν_g(dy) + w_α^{−1} ψ_α(x) ∫_[x,r] ϕ_α(y) ν_g(dy).

Computing the difference between V_α(x) and V_α(x*) we obtain

    V_α(x) − V_α(x*) = w_α^{−1} (ψ_α(x) − ψ_α(x*)) ∫_[x,r] ϕ_α(y) ν_g(dy)
                     + w_α^{−1} ∫_(x*,x) (ϕ_α(x) ψ_α(y) − ψ_α(x*) ϕ_α(y)) ν_g(dy).

Then

    (∂V_α⁺/∂ψ_α)(x*) = lim_{x→x*+} (V_α(x) − V_α(x*))/(ψ_α(x) − ψ_α(x*))
                     = w_α^{−1} lim_{x→x*+} ∫_[x,r] ϕ_α(y) ν_g(dy)
                     + w_α^{−1} lim_{x→x*+} ∫_(x*,x) (ϕ_α(x) ψ_α(y) − ψ_α(x*) ϕ_α(y)) ν_g(dy) / (ψ_α(x) − ψ_α(x*)).
If the last limit vanishes, we obtain that the right derivative exists, and

    (∂V_α⁺/∂ψ_α)(x*) = (∂V_α⁻/∂ψ_α)(x*) = w_α^{−1} ∫_(x*,r] ϕ_α(y) ν_g(dy).

This means that we have to prove

    lim_{x→x*+} ∫_(x*,x) (ϕ_α(x) ψ_α(y) − ψ_α(x*) ϕ_α(y))/(ψ_α(x) − ψ_α(x*)) ν_g(dy) = 0.    (24)

Denoting by f(y) the numerator of the integrand in (24), observe that

    f(y) = ϕ_α(x)(ψ_α(y) − ψ_α(x*)) + ψ_α(x*)(ϕ_α(x) − ϕ_α(y)).

For the first term we have (observe that x* < y < x)

    0 ≤ ϕ_α(x)(ψ_α(y) − ψ_α(x*)) ≤ ϕ_α(x)(ψ_α(x) − ψ_α(x*)),

while for the second

    0 ≥ ψ_α(x*)(ϕ_α(x) − ϕ_α(y)) ≥ ψ_α(x*)(ϕ_α(x) − ϕ_α(x*)).

We conclude that

    ψ_α(x*)(ϕ_α(x) − ϕ_α(x*)) ≤ f(y) ≤ ϕ_α(x)(ψ_α(x) − ψ_α(x*)).

Dividing by ψ_α(x) − ψ_α(x*), we see that the integrand has a lower bound b(x) given by

    b(x) := ψ_α(x*) (ϕ_α(x) − ϕ_α(x*))/(ψ_α(x) − ψ_α(x*))
          = ψ_α(x*) [(ϕ_α(x) − ϕ_α(x*))/(s(x) − s(x*))] [(s(x) − s(x*))/(ψ_α(x) − ψ_α(x*))],

while ϕ_α(x) is an upper bound. We obtain that the integral in (24) satisfies

    b(x) ν_g((x*, x)) ≤ ∫_(x*,x) (ϕ_α(x) ψ_α(y) − ψ_α(x*) ϕ_α(y))/(ψ_α(x) − ψ_α(x*)) ν_g(dy) ≤ ϕ_α(x) ν_g((x*, x)).

Taking limits as x → x*+ we obtain ϕ_α(x) → ϕ_α(x*), ν_g((x*, x)) →
0, and

    lim_{x→x*+} b(x) = ψ_α(x*) (∂ϕ_α⁺/∂s)(x*) / (∂ψ_α⁺/∂s)(x*),

concluding that (24) holds.

As we have seen in section 3, if the speed measure does not charge x*, neither does the representing measure. This means that, if representation (12) holds, then m({x*}) = 0 is enough to guarantee the differentiability of V_α with respect to ψ_α. We also have the following result.

Corollary 4.2.
Assume the conditions of Theorem 4.1. If the speed measure does not charge x*, there is scale smooth fit.

Proof. By the previous theorem we know that V_α is differentiable with respect to ψ_α. The condition m({x*}) = 0 implies that ψ_α has a derivative with respect to the scale function. We conclude that

    (∂V_α/∂s)(x*) = (∂V_α/∂ψ_α)(x*) · (∂ψ_α/∂s)(x*).

The previous result, under the additional assumption that ψ_α and ϕ_α are differentiable with respect to s, could be derived from Corollary 3.7 in [23]. This result states that the representing measure of V_α does not charge x* if and only if V_α is differentiable with respect to s. Theorem 4.1 can also be derived from the mentioned result, under the additional assumption, by using the chain rule.

As a consequence of the previous results we obtain, by using the chain rule, conditions under which the principle of SF holds.

Corollary 4.3.
Assume that g is differentiable at x*. Under the conditions of Theorem 4.1, if ψ_α is differentiable at x* and ψ′_α(x*) ≠ 0 (or, under the conditions of Corollary 4.2, if s is differentiable at x* and s′(x*) ≠ 0), then the principle of SF holds.

The previous result is closely related to Theorem 2.3 in [19], which states that, in the non-discounted problem, there is smooth fit if the reward and the scale function are differentiable at x*. Theorem 2.3 in [25] ensures the validity of the smooth fit principle for the discounted problem under the assumption that g, ψ_α and ϕ_α are differentiable at x*. It should be noticed that these results are valid in general, not only in one-sided problems.

5 Examples

In this section we show how to solve some optimal stopping problems using the previous results. We present two classical examples (American and Russian options), and also include some new examples in which the smooth fit principle is not useful to find the solution.
Consider a geometric Brownian motion given by X_t = x exp(σW_t + (µ − σ²/2)t), where {W_t} is a standard Brownian motion, µ ∈ R and σ > 0. The state space is I = (0, ∞). We refer to [2], p. 132 for the basic characteristics of this process. The infinitesimal generator is Lf(x) = (σ²x²/2) f′′(x) + µx f′(x), with domain D_L = {f : f, Lf ∈ C_b(I)}.

Consider the payoff function g(x) = (x − K)^+ (x ∈ I), where K is a positive constant, and a positive discount factor α satisfying α > µ. The reward function g satisfies the RRC for x₀ = K: it is enough to consider g̃ ∈ C², bounded in (0, K), such that g̃(x) = x − K for x ≥ K. This function g̃ satisfies the inversion formula (15) as a consequence of Proposition 3.5. Observe that equation (22) holds. Equation (23) is in this case

lim_{z→∞} g̃(z)/ψ_α(z) = lim_{z→∞} z^{1−γ},

with ψ_α(x) = x^γ, and

γ = 1/2 − µ/σ² + √((1/2 − µ/σ²)² + 2α/σ²).

The last limit vanishes if 1 − γ < 0, which is equivalent to µ < α. To find x∗ we solve equation (20). After computations, we find

x∗ = K γ/(γ − 1).
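The threshold x∗ = Kγ/(γ − 1) and the smooth fit at x∗ can be checked numerically. The following minimal sketch uses illustrative parameter values (not taken from the text) with α > µ; the identity γ(x∗ − K)/x∗ = 1 expresses that the candidate value (x∗ − K)(x/x∗)^γ pastes in a C¹ way onto x − K at x∗.

```python
import math

def call_threshold(mu, sigma, alpha, K):
    # gamma is the positive root of (sigma^2/2) g (g - 1) + mu g - alpha = 0,
    # i.e. gamma = 1/2 - mu/sigma^2 + sqrt((1/2 - mu/sigma^2)^2 + 2 alpha/sigma^2);
    # alpha > mu guarantees gamma > 1, so the threshold below is finite.
    a = 0.5 - mu / sigma**2
    gamma = a + math.sqrt(a**2 + 2.0 * alpha / sigma**2)
    return gamma, K * gamma / (gamma - 1.0)

# illustrative parameters with alpha > mu
gamma, x_star = call_threshold(mu=0.05, sigma=0.2, alpha=0.10, K=100.0)
assert gamma > 1.0 and x_star > 100.0

# smooth fit at x*: gamma (x* - K) / x* = 1, i.e. the derivative of
# (x* - K)(x/x*)^gamma at x* equals g'(x*) = 1
assert abs(gamma * (x_star - 100.0) / x_star - 1.0) < 1e-12
```

For these parameters γ ≈ 1.608 and x∗ ≈ 264.3, comfortably above the strike, as expected for a perpetual call with a small discount rate.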
It is not difficult to verify (17) and (18) in order to apply Theorem 3.1. We conclude that the problem is right-sided with optimal threshold x∗. Observe that the hypotheses of Theorem 4.1 and Corollaries 4.2 and 4.3 are fulfilled. In consequence, all the variants of the smooth fit principle hold. This problem was solved by Merton in [16].

The Russian option was introduced by Shepp and Shiryaev in 1993 in [26], where the option pricing problem is solved by reduction to an optimal stopping problem for a two-dimensional Markov process. Later, in [27], the authors gave an alternative approach to the same problem, solving a one-dimensional optimal stopping problem. In 2000, Salminen [24], making use of a generalization of Lévy's theorem for Brownian motion with drift, shortened the derivation of the valuation formula in [27] and solved the related optimal stopping problem.

Consider α > 0, r > 0 and σ > 0. Let {X_t} be a Brownian motion on I = [0, ∞), with drift −δ < 0, where δ = (r + σ²/2)/σ, reflected at 0. In [24] it is shown that the optimal stopping problem to be solved has underlying process {X_t} and reward function g(x) = e^{σx}. For the basic characteristics of the process we refer to [2], p. 129. The infinitesimal generator is Lf(x) = f′′(x)/2 − δf′(x) for x > 0 and Lf(0) = lim_{x→0+} Lf(x), with domain D_L = {f : f, Lf ∈ C_b(I), lim_{x→0+} f′(x) = 0}.

The payoff function g(x) satisfies the RRC for every x₀ > 0: for x₀ > 0 it is enough to consider g̃ with continuous second derivative, such that g̃ = g in [x₀, ∞) and such that the right derivative at 0 is 0. By the application of Proposition 3.5 we obtain that g̃ satisfies the inversion formula (15), and then the RRC holds. We obtain

(α − L)g(x) = (α − σ²/2 + δσ) e^{σx} = (α + r) e^{σx} > 0.

In order to apply Theorem 3.1 we solve equation (20), which in this case is

e^{σx} = (1/(γ − δ)) ( ((γ − δ)/(2γ)) e^{(γ+δ)x} + ((γ + δ)/(2γ)) e^{−(γ−δ)x} ) ∫_x^∞ 2(α + r) e^{(−γ−δ+σ)y} dy,

with γ = √(2α + δ²), obtaining (observe that −γ − δ + σ < 0)

x∗ = (1/(2γ)) ln( ((γ + δ)/(γ − δ)) ((γ − δ + σ)/(γ + δ − σ)) ).

It is easy to verify conditions (17) and (18), to obtain, by application of Theorem 3.1, that the problem is right-sided with threshold x∗, as proved in [27].

We consider a Brownian motion skew at zero. This process behaves like a standard Brownian motion outside the origin, but has an asymmetric behaviour when hitting x = 0, modelling a permeable barrier. The behaviour at x = 0 is regulated by a parameter β ∈ (0, 1), called the skewness parameter. The state space of this process is I = R. For details on this process and its basic characteristics we refer to [2], p. 126 or [14]. The infinitesimal generator is Lf(x) = f′′(x)/2 for x ≠ 0 and Lf(0) = lim_{x→0} Lf(x), with domain D_L = {f : f, Lf ∈ C_b(I), βf′(0+) = (1 − β)f′(0−)}.

Consider the payoff function g(x) = x^+. The function g satisfies the RRC for x₀ = 0: to see this it is necessary to construct g̃ such that g̃ = g in [0, ∞), with second derivative bounded in (−∞, 0) and such that g̃′(0−) = β/(1 − β) (so that g̃ satisfies the local conditions to belong to D_L). Applying Proposition 3.5 it can be concluded that g̃ satisfies (15), so the RRC holds. We have (α − L)g(x) = αx for x ≥ 0. Equation (20) is in this case

x∗ = (1/√(2α)) ( ((1 − 2β)/β) sinh(√(2α) x∗) + e^{√(2α) x∗} ) ∫_{x∗}^∞ e^{−√(2α) t} αt 2β dt,

or equivalently

x∗ = (1/(2√(2α))) ( (2β − 1) e^{−2√(2α) x∗} (√(2α) x∗ + 1) + √(2α) x∗ + 1 ). (25)

In general, this equation cannot be solved analytically. If we consider the particular case β = 1/2, in which the process is the ordinary Brownian motion, we obtain the known result x∗ = 1/√(2α) (see [29]). Consider a particular case, in which α = 1 and β = 0.
9. Solving numerically equation (25) we obtain x∗ ≈ 0.826. Checking (17) and (18), we conclude that the problem is right-sided with optimal threshold x∗.

Consider again the skew Brownian motion with parameter β = 1/3, a payoff function g(x) = (x + 1)^+ and a discount α = 1/8. We have (α − L)g(x) = α(x + 1) for x ≥ 0, and x∗ = 0 is a solution of (16) (with equality). It is easy to see that the assumptions of Theorem 3.1 are fulfilled. We conclude that the problem is right-sided with threshold x∗ = 0. Moreover, we know

V_α(x) = x + 1 for x ≥ 0, and V_α(x) = ψ_α(x) for x ≤ 0,

where

ψ_α(x) = e^{√(2α) x} for x ≤ 0, and ψ_α(x) = ((1 − 2β)/β) sinh(√(2α) x) + e^{√(2α) x} for x ≥ 0.

Unlike the previous examples, the value function V_α is not differentiable at x∗. As we see in Figure 1, the graph of V_α shows an angle at x = 0. By application of Theorem 4.1 and Corollary 4.2 we conclude that V_α is differentiable with respect to ψ_α and SSF holds. An example considering a regular diffusion with non-differentiable scale function, in which the SF fails to hold, was provided for the first time by Peskir [19].
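Equation (25) lends itself to simple fixed-point iteration. The sketch below (a hedged illustration, with the map written exactly as in (25) and c = √(2α)) recovers the ordinary Brownian threshold 1/√(2α) when β = 1/2, and solves the α = 1, β = 0.9 case discussed above.

```python
import math

def skew_threshold(alpha, beta, tol=1e-12, max_iter=500):
    """Solve (25), x = ((2b - 1) e^{-2cx} (cx + 1) + cx + 1) / (2c) with
    c = sqrt(2 alpha), by fixed-point iteration started at the ordinary
    Brownian value 1/c; the map is a contraction near the solution."""
    c = math.sqrt(2.0 * alpha)
    x = 1.0 / c
    for _ in range(max_iter):
        x_new = ((2.0 * beta - 1.0) * math.exp(-2.0 * c * x) * (c * x + 1.0)
                 + c * x + 1.0) / (2.0 * c)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# beta = 1/2 is the ordinary Brownian motion: threshold 1/sqrt(2 alpha), cf. [29]
assert abs(skew_threshold(1.0, 0.5) - 1.0 / math.sqrt(2.0)) < 1e-9

# the alpha = 1, beta = 0.9 case treated in the text
assert 0.82 < skew_threshold(1.0, 0.9) < 0.83
```

In the second example (β = 1/3, α = 1/8) no iteration is needed: the threshold is x∗ = 0, and the angle of V_α there is visible from the one-sided derivatives V′_α(0−) = ψ′_α(0−) = √(2α) = 1/2 versus g′(0+) = 1.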
Figure 1: Solution of the OSP for the skew Brownian motion with α = 1/8, β = 1/3 and g(x) = (x + 1)^+. The black graph corresponds to g, while the gray one corresponds to ψ_α; V_α is the dotted line.

Consider a Brownian motion, sticky at 0. We refer to [2], p. 123 for the basic characteristics of this process. It behaves like a Brownian motion out of 0, but spends a positive time at x = 0; this time depends on a positive parameter that we assume to be 1 in this example. The state space of this process is I = R. The scale function is s(x) = x and the speed measure is m(dx) = 2dx + 2δ_{0}(dx). The infinitesimal generator is Lf(x) = f′′(x)/2 for x ≠ 0, and Lf(0) = lim_{x→0} Lf(x), with domain D_L = {f : f, Lf ∈ C_b(I), f′′(0+) = f′(0+) − f′(0−)}.

Consider the reward function g(x) = (x + 1)^+, which satisfies the RRC for x₀ > −1. We look for a discount α such that the optimal threshold is the sticky point x = 0. We use (20) in a different way: we fix x∗ = 0 and solve the resulting equation in α. We obtain

1 = w_α⁻¹ ∫_{(0,∞)} e^{−√(2α) y} α(y + 1) 2 dy,

and we find that α₀ = (3 − √5)/4 ≈ 0.19 is the unique solution. It can be seen, by application of Theorem 3.1, that if α = α₀ the problem is right-sided with threshold x∗ = 0. In this case the representing measure of V_α does not charge x∗, although the speed measure does. Furthermore, Theorem 4.1 can be applied to conclude that V_α is differentiable with respect to ψ_α. It should also be noticed that both SF and SSF fail to hold in this case. This was to be expected, because the sufficient conditions given in [25] and [23] for the different types of smooth fit are not fulfilled. Another interesting fact to remark is that d(V_α/ϕ_α)/ds (and also d(V_α/ψ_α)/ds) exists at 0, which is part of the conclusion of Theorem 2.1 in [25], even though this result is not applicable in this case. This last fact seems to be related to the existence of dV_α/dψ_α.

Another situation in which x∗ = 0 occurs is when the strict inequality holds in (16) and also (19) holds. We solve (in α) equation (19) (with equality), which is

g(0) = w_α⁻¹ ψ_α(0) ∫_{[0,∞)} ϕ_α(y)(α − L)g(y) m(dy). (26)

Since the measure m(dy) has an atom at y = 0, the solution of the previous equation differs from α₀. Solving (26) we find the root α₁ = 1/
2. It is easy to see that for α ∈ [α₀, α₁] the smallest x satisfying the inequality (16) is 0. Theorem 3.1 can be applied to conclude that the problem is right-sided with threshold x∗ = 0. For α ∈ (α₀, α₁] we cannot apply any of the results of Section 4, and in fact, for α ∈ (α₀, α₁) none of the smooth fit principles is fulfilled. With α = α₁ there is SF (and also SSF, since s(x) = x), but this is not a consequence of (26); it is due to the particular reward function. This example shows that the theorems on smooth fit in Section 4 only give sufficient conditions. In Table 5.4 we summarize the information about the solution of the optimal stopping problem. We also give, in Figure 2, some graphs showing the solution for different values of α.

Table 5.4: Sticky BM: solution of the OSP depending on α.

α              x∗        (16)  (19)  SF & SSF  ∃ dV_α/dψ_α  ∃ d(V_α/ϕ_α)/ds  Fig.
α ∈ (0, α₀)    x∗ > 0                                                         2(a)
α = α₀         x∗ = 0    =     <     no        yes          yes               2(b)
α ∈ (α₀, α₁)   x∗ = 0    >     <     no        no           no                2(c)
α = α₁         x∗ = 0    >     =     yes       no           no                2(d)
α ∈ (α₁, +∞)   x∗ < 0                                                         2(e)

References

[1] F. Black and M. Scholes,
The pricing of options and corporate liabilities, The Journal of Political Economy 81 (1973), pp. 637–654.

[2] A.N. Borodin and P. Salminen, Handbook of Brownian motion—facts and formulae, 2nd ed., Birkhäuser Verlag, Basel, 2002.

[3] E.B. Dynkin, Optimal choice of the stopping moment of a Markov process, Dokl. Akad. Nauk SSSR 150 (1963), pp. 238–240.

[4] E.B. Dynkin, The exit space of a Markov process, Uspehi Mat. Nauk 24 (1969), pp. 89–152.

[5] E.B. Dynkin, Theory of Markov processes, Dover Publications Inc., Mineola, NY, 2006. Translated from the Russian by D.E. Brown and edited by T. Köváry; reprint of the 1961 English translation.

[6] N. El Karoui, J. Lepeltier, and A. Millet, A probabilistic approach of the reduite, Probability and Mathematical Statistics 13 (1992), pp. 97–121.

[7] W. Feller, Generalized second order differential operators and their lateral conditions, Illinois Journal of Mathematics 1 (1957), pp. 459–504.

[8] B.I. Grigelionis and A.N. Shiryaev, On Stefan's problem and optimal stopping rules for Markov processes, Theory of Probability & Its Applications 11 (1966), pp. 541–558.

[9] K. Itô and H.P. McKean Jr., Diffusion processes and their sample paths, Springer-Verlag, Berlin, 1974. Second printing, corrected; Die Grundlehren der mathematischen Wissenschaften, Band 125.

[10] S. Jacka, Local times, optimal stopping and semimartingales, The Annals of Probability 21 (1993), pp. 329–339.

[11] I. Karatzas and S.E. Shreve, Brownian motion and stochastic calculus, Graduate Texts in Mathematics, Vol. 113, 2nd ed., Springer-Verlag, New York, 1991.

[Figure 2: five panels showing the solution for increasing values of α, including α = α₀ ≈ 0.19 (b), α = α₁ = 0.5 (d) and α = 2 (e).]
Figure 2: Solution of the OSP for the sticky Brownian motion with different values of α, with reward function (x + 1)^+ (graph in black). The graph in gray corresponds to (cte)ψ_α. The value function V_α is indicated with dots; it coincides with (cte)ψ_α for x ≤ x∗ and with g for x ≥ x∗.

[12] H. Kunita and T. Watanabe, Markov processes and Martin boundaries, Bull. Amer. Math. Soc. 69 (1963), pp. 386–391.

[13] H. Kunita and T. Watanabe,
Markov processes and Martin boundaries. I, Illinois J. Math. 9 (1965), pp. 485–526.

[14] A. Lejay, On the constructions of the skew Brownian motion, Probab. Surv. 3 (2006), pp. 413–466, URL http://dx.doi.org/10.1214/154957807000000013.

[15] H.P. McKean Jr., Appendix: A free boundary problem for the heat equation arising from a problem in mathematical economics, Industrial Management Review 6 (1965), pp. 32–39.

[16] R.C. Merton, Theory of rational option pricing, Bell J. Econom. and Management Sci. 4 (1973), pp. 141–183.

[17] E. Mordecki and P. Salminen, Optimal stopping of Hunt and Lévy processes, Stochastics 79 (2007), pp. 233–251, URL http://dx.doi.org/10.1080/17442500601100232.

[18] B. Øksendal, Stochastic differential equations, 6th ed., Springer-Verlag, Berlin, 2003. An introduction with applications.

[19] G. Peskir, Principle of smooth fit and diffusions with angles, Stochastics 79 (2007), pp. 293–302, URL http://dx.doi.org/10.1080/17442500601040461.

[20] G. Peskir, A duality principle for the Legendre transform, Journal of Convex Analysis 19 (2012), pp. 609–630.

[21] G. Peskir and A. Shiryaev, Optimal stopping and free-boundary problems, Birkhäuser Verlag, Basel, 2006.

[22] D. Revuz and M. Yor, Continuous martingales and Brownian motion, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Vol. 293, 3rd ed., Springer-Verlag, Berlin, 1999.

[23] P. Salminen, Optimal stopping of one-dimensional diffusions, Math. Nachr. 124 (1985), pp. 85–101, URL http://dx.doi.org/10.1002/mana.19851240107.

[24] P. Salminen, On Russian options, Theory of Stochastic Processes 6 (2000), pp. 3–4.

[25] F. Samee, On the principle of smooth fit for killed diffusions, Electronic Communications in Probability 15 (2010), pp. 89–98.

[26] L.A. Shepp and A.N. Shiryaev, The Russian option: reduced regret, Ann. Appl. Probab. 3 (1993), pp. 631–640.

[27] L.A. Shepp and A.N. Shiryaev, A new look at the “Russian option”, Teor. Veroyatnost. i Primenen. 39 (1994), pp. 130–149, URL http://dx.doi.org/10.1137/1139004.

[28] A.N. Shiryaev, Optimal stopping rules, Stochastic Modelling and Applied Probability, Vol. 8, Springer-Verlag, Berlin, 2008. Translated from the 1976 Russian second edition by A.B. Aries; reprint of the 1978 translation.

[29] H.M. Taylor, Optimal stopping in a Markov process, Ann. Math. Statist. 39 (1968), pp. 1333–1344.

[30] A. Wald, Sequential analysis, John Wiley & Sons, New York, 1947.