On the One-Dimensional Optimal Switching Problem
ERHAN BAYRAKTAR AND MASAHIKO EGAMI
Abstract.
We explicitly solve the optimal switching problem for one-dimensional diffusions by directly employing the dynamic programming principle and the excessive characterization of the value function. The shape of the value function and the smooth fit principle can then be proved using the properties of concave functions.

1. Introduction
Stochastic optimal switching problems (or starting and stopping problems) are important subjects both in mathematics and economics. Switching problems were introduced into the study of real options by Brennan and Schwartz (1985) to determine the manager's optimal decision making in resource extraction problems, and by Dixit (1989) to analyze production facility problems. A switching problem in the case of a resource extraction problem can be described as follows: The controller monitors the price of natural resources and wants to optimize her profit by operating an extraction facility in an optimal way. She can choose when to start extracting this resource and when to temporarily stop doing so, based upon the price fluctuations she observes. The problem is concerned with finding an optimal starting/stopping (switching) policy and the corresponding value function.

There have been many recent developments in understanding the nature of optimal switching problems. When the underlying state variable is a geometric Brownian motion, and for some special reward/cost structures, Brekke and Øksendal (1994), Duckworth and Zervos (2001) and Zervos (2003) apply a verification approach for solving the variational inequality associated with the optimal switching problem. By using a viscosity solution approach, Pham and Ly Vath (2007) generalize the previous results by solving the optimal switching problem for more general reward functions. They do not assume a specific form but only Hölder continuity of the reward function. In contrast, our aim is to obtain general results that apply to all one-dimensional diffusions (in some switching problems a mean-reverting process might be a more reasonable model for the underlying state process).
Also, we will not assume Hölder continuity of the running reward function.

The verification approach applied in the above papers is indirect in the sense that one first conjectures the form of the value function and the switching policy, and next verifies the optimality of the candidate function by proving that the candidate satisfies the variational inequalities. In finding the specific form
Mathematics Subject Classification.
Key words and phrases. Optimal switching problems, optimal stopping problems, Itô diffusions.
E. Bayraktar is supported in part by the National Science Foundation. M. Egami is supported in part by Grant-in-Aid for Scientific Research (C) No. 20530340, Japan Society for the Promotion of Science. An earlier version of this article is available at Arxiv, see Bayraktar and Egami (2007). We thank Savas Dayanik for his feedback in the early stage of this work.
of the candidate function, appropriate boundary conditions, including the smooth-fit principle, are employed. This formulation leads to a system of non-linear equations that are often hard to solve, and the existence of a solution to this system of equations is difficult to prove. Moreover, this indirect solution method is specific to the underlying process and the reward/cost structure of the problem. Hence a slight change in the original problem often causes a complete overhaul of the highly technical solution procedures.

Our solution method is direct in the sense that we work with the value function itself. First we characterize the value function as the solution of two coupled optimal stopping problems. In other words, we prove a dynamic programming principle. A proof of a dynamic programming principle for switching problems was given by Tang and Yong (1993) assuming a Hölder continuity condition on the reward function. We give a new proof using a sequential approximation method (see Lemma 2.1 and Proposition 2.1) and avoid making this assumption. The properties of the essential supremum and the optimal stopping theory for Markov processes play a key role in our proof. Second, we give a sufficient condition which guarantees that the hitting times of certain closed sets (the switching regions) are optimal (see Proposition 2.2). Next, making use of our sequential approximation, we show when the optimal switching problem reduces to an ordinary stopping problem (see Proposition 2.3).
Finally, in the non-degenerate cases we construct an explicit solution (see Proposition 2.5) using the excessive characterization of the value functions of optimal stopping problems (which corresponds to the concavity of the value function after a certain transformation), see Dayanik and Karatzas (2003) (also see Dynkin (1965), Alvarez (2001; 2003)) and Lemma 2.3. In Proposition 2.5 we see that the continuation regions do not necessarily have to be connected. We give two examples, one of which illustrates this point. In the other example, we consider a problem in which the underlying state variable is an Ornstein-Uhlenbeck process.

It is worth mentioning the work of Pham (2007), which provides another direct method to solve optimal switching problems through the use of viscosity solution techniques. Pham shows that the value function of the optimal switching problem is continuously differentiable and is the classical solution of its quasi-variational inequality under the assumption that the reward function is Lipschitz continuous. Johnson and Zervos (2009), on the other hand, by using a verification theorem, determine sufficient conditions that guarantee that the problem has connected continuation regions or is degenerate (see Section 5 and Theorem 7 of that paper). A somewhat related problem to the optimal switching problem we study here is the infinite horizon optimal multiple stopping problem of Carmona and Dayanik (2008), which was introduced to give a complete mathematical analysis of energy swing contracts. This problem is posed in the context of pricing American options when the holder of the option has multiple exercise rights. To make the problem non-trivial it is assumed that the holder chooses the consecutive stopping times with a strictly positive break period (otherwise the holder would use all his rights at the same time).
It is difficult to explicitly determine the solution, and Carmona and Dayanik describe a recursive algorithm to calculate the value of the American option. In switching problems, however, there is no limit on how many times the controller can switch from one state to another, and one does not need to assume a strictly positive break period. Moreover, we are able to construct explicit solutions. Other related works include Hamadène and Jeanblanc (2007), which analyzes a finite time horizon optimal switching problem with a general adapted observation process using the recently developed theory of reflected backward stochastic differential equations. Carmona and Ludkovski (2008) focus on a numerical resolution based on Monte-Carlo regressions. Recently an interesting connection between singular control problems and switching problems was given by Guo and Tomecek (2008).

The rest of the paper is organized as follows: In Section 2.1 we define the optimal switching problem. In Section 2.2 we study the problem in which the controller can switch only finitely many times. Using the results of Section 2.2, in Section 2.3 we give a characterization of the optimal switching problem as two coupled optimal stopping problems. In Section 2.4, we show that the usual hitting times of the stopping regions are optimal. In Section 2.5 we give an explicit solution. In Section 2.6 we give two examples illustrating our solution.
2. The Optimal Switching Problem
2.1. Statement of the Problem.
Let (Ω, F, P) be a complete probability space hosting a Brownian motion W = {W_t; t ≥ 0}. Let F = (F_t)_{t≥0} be the natural filtration of W. The controlled stochastic process X with state space (c, d) (−∞ ≤ c < d ≤ ∞) is a continuous process, which is defined as the solution of

(2.1) dX_t = μ(X_t, I(t)) dt + σ(X_t, I(t)) dW_t,  X_0 = x,

in which the right-continuous switching process I is defined as

(2.2) I(t) = I_0 1_{t < τ_1} + I_1 1_{τ_1 ≤ t < τ_2} + ··· + I_n 1_{τ_n ≤ t < τ_{n+1}} + ··· ,

where I_i ∈ {0, 1} and I_{i+1} = 1 − I_i for all i ∈ N. Here, the sequence (τ_n)_{n≥1} is an increasing sequence of F-stopping times with lim_{n→∞} τ_n = τ_{c,d}, almost surely (a.s.), where τ_{c,d} := inf{t ≥ 0 : X_t = c or X_t = d}. The stopping time τ_{c,d} = ∞ when both c and d are natural boundaries. We will denote the set of such sequences by S. We will assume that the boundaries are either absorbing or natural.

We are going to measure the performance of a strategy T = (τ_1, τ_2, ..., τ_n, ...) by

(2.3) J^T(x, i) = E^{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} f(X_s, I_s) ds − Σ_j e^{−ατ_j} H(X_{τ_j}, I_{j−1}, I_j) ],

in which H : (c, d) × {0, 1} × {0, 1} → R is the immediate benefit/cost of switching from I_{j−1} to I_j. We assume that H is continuous in its first variable and

(2.4) |H(x, i, 1 − i)| ≤ C_H (1 + |x|),  x ∈ (c, d), i ∈ {0, 1},

for some strictly positive constant C_H < ∞. Moreover, we assume that

(2.5) H(x, 0, 1) + H(x, 1, 0) > 0.

We also assume that the running benefit f : (c, d) × {0, 1} → R is a continuous function and satisfies the linear growth condition

(2.6) |f(x, i)| ≤ C_f (1 + |x|),

for some strictly positive constant C_f < ∞. This assumption will be crucial in what follows; for example, it guarantees that

(2.7) E^{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} |f(X_s, I_s)| ds ] < B (1 + |x|),

for some B > 0, if we assume that the discount rate α is large enough, which will be a standing assumption in the rest of our paper (see page 5 of Pham (2007)).

The goal of the switching problem then is to find

(2.8) v(x, i) := sup_{T ∈ S} J^T(x, i),  x ∈ (c, d), i ∈ {0, 1},

and also to find an optimal T ∈ S if it exists.

2.2. When the Controller Can Switch Finitely Many Times.
For any F-stopping time σ let us define

(2.9) S^n_σ := {(τ_1, ..., τ_n) : τ_i is an F-stopping time for all i ∈ {1, ..., n} and σ ≤ τ_1 ≤ ··· ≤ τ_n < τ_{c,d}}.

In this section, we will consider switching processes of the form

(2.10) I^{(n)}(t) = I_0 1_{t < τ_1} + ··· + I_{n−1} 1_{τ_{n−1} ≤ t < τ_n} + I_n 1_{t ≥ τ_n},

in which the stopping times (τ_1, ..., τ_n) ∈ S^n_0. By X^{(n)} we will denote the solution of (2.1) when we replace I with I^{(n)}. So with this notation we have that

(2.11) dX^{(0)}_t = μ(X^{(0)}_t, I_0) dt + σ(X^{(0)}_t, I_0) dW_t,  X^{(0)}_0 = x.

We assume that a strong solution to (2.11) exists and that

(2.12) |μ(x, i)| + |σ(x, i)| ≤ C (1 + |x|),

for some positive constant C < ∞, which guarantees the uniqueness of the strong solution. We should note that

(2.13) X^{(n)}_t = X^{(0)}_t, t ≤ τ_1;  ··· ;  X^{(n)}_t = X^{(n−1)}_t, t ≤ τ_n.

The value function of the problem in which the controller chooses n switches is defined as

(2.14) q^{(n)}(x, i) := sup_{(τ_1, ..., τ_n) ∈ S^n_0} E^{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} f(X^{(n)}_s, I^{(n)}_s) ds − Σ_{j=1}^n e^{−ατ_j} H(X^{(n)}_{τ_j}, I_{j−1}, I_j) ].

We will denote the value of making no switches by q^{(0)}, which we define as

(2.15) q^{(0)}(x, i) := E^{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} f(X^{(0)}_s, i) ds ],

which is well defined due to our assumption in (2.7).
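The functions ψ_i and φ_i introduced next are available in closed form for simple diffusions. As an illustration, for Brownian motion with drift (an assumed, made-up choice of dynamics, not one of the paper's examples) they are exponentials, and the Wronskian (2.17) and the increasing function F of (2.16) can be checked directly:

```python
import math

def psi_phi(alpha, mu, sigma):
    """For dX = mu dt + sigma dW on R, the increasing/decreasing solutions of
    (1/2) sigma^2 u'' + mu u' - alpha u = 0 are e^{theta x} with theta the
    roots of the quadratic (1/2) sigma^2 theta^2 + mu theta - alpha = 0."""
    disc = math.sqrt(mu * mu + 2.0 * alpha * sigma * sigma)
    th_plus = (-mu + disc) / (sigma * sigma)    # > 0, gives increasing psi
    th_minus = (-mu - disc) / (sigma * sigma)   # < 0, gives decreasing phi
    psi = lambda x: math.exp(th_plus * x)
    phi = lambda x: math.exp(th_minus * x)
    return psi, phi, th_plus, th_minus

alpha, mu, sigma = 0.5, 0.2, 1.0            # illustrative parameters
psi, phi, tp, tm = psi_phi(alpha, mu, sigma)

# Wronskian W(x) = psi'(x) phi(x) - psi(x) phi'(x) = (tp - tm) e^{(tp+tm)x} > 0
W = lambda x: (tp - tm) * math.exp((tp + tm) * x)

# F = psi / phi is strictly increasing, as required for (2.16)
F = lambda x: psi(x) / phi(x)
```

For the Ornstein-Uhlenbeck example mentioned later, ψ and φ are instead expressed through parabolic cylinder (Hermite) functions; the exponential forms above are specific to constant coefficients.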
Let τ_y be the first hitting time of y ∈ (c, d) by X^{(0)}, and let x_0 ∈ (c, d) be a fixed point of the state space. We set

ψ_i(x) := E^{x,i}[e^{−ατ_{x_0}} 1_{τ_{x_0} < ∞}] if x ≤ x_0, and ψ_i(x) := 1 / E^{x_0,i}[e^{−ατ_x} 1_{τ_x < ∞}] if x > x_0;
φ_i(x) := 1 / E^{x_0,i}[e^{−ατ_x} 1_{τ_x < ∞}] if x ≤ x_0, and φ_i(x) := E^{x,i}[e^{−ατ_{x_0}} 1_{τ_{x_0} < ∞}] if x > x_0.

It should be noted that ψ_i(·) and φ_i(·) are, respectively, an increasing and a decreasing solution of the second-order differential equation (A_i − α)u = 0 in (c, d), where A_i is the infinitesimal generator of X^{(0)} when I_0 = i in (2.11). They are linearly independent positive solutions and are uniquely determined up to multiplication. For the complete characterization of the functions ψ_i(·) and φ_i(·) corresponding to various types of boundary behavior see Itô and McKean (1974). For future use let us define the increasing functions

(2.16) F_i(x) := ψ_i(x)/φ_i(x) and G_i(x) := −φ_i(x)/ψ_i(x),  x ∈ (c, d), i ∈ {0, 1}.

In terms of the Wronskian of ψ_i(·) and φ_i(·),

(2.17) W_i(x) := ψ'_i(x) φ_i(x) − ψ_i(x) φ'_i(x),

we can express q^{(0)}(x, i) as

(2.18) q^{(0)}(x, i) = [ψ_i(x) − (ψ_i(c)/φ_i(c)) φ_i(x)] ∫_x^d [φ_i(y) − (φ_i(d)/ψ_i(d)) ψ_i(y)] (2 f(y, i))/(σ²(y, i) W_i(y)) dy + [φ_i(x) − (φ_i(d)/ψ_i(d)) ψ_i(x)] ∫_c^x [ψ_i(y) − (ψ_i(c)/φ_i(c)) φ_i(y)] (2 f(y, i))/(σ²(y, i) W_i(y)) dy,

x ∈ (c, d); see e.g. Karlin and Taylor (1981), pages 191-204, and Alvarez (2004), page 272.

Now, consider the following sequential optimal stopping problems:

(2.19) w^{(n)}(x, i) := sup_τ E^{x,i}[ ∫_0^τ e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ} ( w^{(n−1)}(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ],

where the supremum is taken over F-stopping times and w^{(0)}(x, i) = q^{(0)}(x, i), x ∈ (c, d), i ∈ {0, 1}.

Lemma 2.1.
For n ∈ N, we have that q^{(n)}(x, i) = w^{(n)}(x, i) for all x ∈ (c, d) and i ∈ {0, 1}. Moreover, q^{(n)} is continuous in the x-variable.

Proof. See the Appendix. □
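Any admissible strategy provides a lower bound for the value function in (2.8). As an illustration, the following sketch estimates J^T of (2.3) by Euler simulation for a simple two-threshold strategy; the dynamics, running reward, switching costs and thresholds are made up for the example, not taken from the paper:

```python
import math
import random

def simulate_J(x0, i0, horizon=20.0, dt=1e-3, alpha=0.25,
               mu=(-0.5, 0.5), sigma=1.0, H=(1.0, 1.0),
               lo=-1.0, hi=1.0, seed=0):
    """Euler scheme for (2.1) under a threshold rule: switch 0 -> 1 when
    X >= hi (cost H[0]), switch 1 -> 0 when X <= lo (cost H[1]).  Returns the
    discounted payoff of one path; averaging over seeds estimates the
    expectation in (2.3) for this particular (generally suboptimal) strategy."""
    rng = random.Random(seed)
    f = lambda x, i: x if i == 1 else 0.0   # made-up running reward
    x, i, t, J = x0, i0, 0.0, 0.0
    while t < horizon:                      # finite horizon truncates tau_{c,d}
        J += math.exp(-alpha * t) * f(x, i) * dt
        if i == 0 and x >= hi:
            J -= math.exp(-alpha * t) * H[0]
            i = 1
        elif i == 1 and x <= lo:
            J -= math.exp(-alpha * t) * H[1]
            i = 0
        x += mu[i] * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return J

estimate = sum(simulate_J(0.0, 0, seed=s) for s in range(10)) / 10.0
```

Since v(x, i) is a supremum over all strategies, such a Monte Carlo estimate always sits (up to sampling error and truncation) below the explicit value functions constructed in Section 2.5.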
2.3. Characterization of the Optimal Switching Problem as Two Coupled Optimal Stopping Problems.
Using the results of the previous section, here we will show that the optimal switching problem can be converted into two coupled optimal stopping problems.
Corollary 2.1.
For all x ∈ (c, d) and i ∈ {0, 1}, the increasing sequence (q^{(n)}(x, i))_{n∈N} converges:

(2.20) lim_{n→∞} q^{(n)}(x, i) = v(x, i).

Moreover, v is continuous in the x-variable.
Proof.
Since S^n_σ ⊂ S^{n+1}_σ ⊂ S, it follows that (q^{(n)}(x, i))_{n∈N} is a non-decreasing sequence and

(2.21) lim_{n→∞} q^{(n)}(x, i) ≤ v(x, i),  x ∈ (c, d), i ∈ {0, 1}.

Assume that v(x, i) < ∞ and let us fix x and i. For a given ε > 0, let T = (τ_1, ..., τ_n, ...) ∈ S be an ε-optimal strategy, i.e.,

(2.22) J^T(x, i) ≥ v(x, i) − ε.

Note that T depends on x. Now T^{(n)} := (τ_1, ..., τ_n) ∈ S^n_0, and

(2.23) X^{(n)}_t = X_t and I^{(n)}_t = I_t,  t ≤ τ_n.

Let τ_{c,d} be the smallest time that X reaches c or d, and τ^{(n)}_{c,d} be the smallest time X^{(n)} reaches c or d. Since τ_n → τ_{c,d} as n → ∞, almost surely, it follows from the growth assumptions on f and H that

(2.24) E^{x,i}[ ∫_{τ_n}^{τ_{c,d}} e^{−αt} |f(X_t, I_t)| dt + ∫_{τ_n}^{τ^{(n)}_{c,d}} e^{−αt} |f(X^{(n)}_t, I^{(n)}_t)| dt ] < ε,

and

(2.25) E^{x,i}[ Σ_{j>n} e^{−ατ_j} |H(X_{τ_j}, I_{j−1}, I_j)| ] < ε,

for large enough n. It follows from (2.24) and (2.25) that

(2.26) lim inf_{n→∞} J^{T^{(n)}}(x, i) = lim inf_{n→∞} E^{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} f(X^{(n)}_s, I^{(n)}_s) ds − Σ_{j=1}^n e^{−ατ_j} H(X^{(n)}_{τ_j}, I_{j−1}, I_j) ] ≥ J^T(x, i) − 2ε.

Therefore, using (2.22) we get

(2.27) lim inf_{n→∞} q^{(n)}(x, i) ≥ lim inf_{n→∞} J^{T^{(n)}}(x, i) ≥ v(x, i) − 3ε.

Since ε is arbitrary, this along with (2.21) yields the proof of the corollary when v(x, i) < ∞.

When v(x, i) = ∞, then for each positive constant B < ∞ there exists T ∈ S such that J^T(x, i) ≥ B. Then, if we choose T^{(n)} ∈ S^n_0 as before with ε = 1, we get J^{T^{(n)}} ≥ B − 2, which leads to

(2.28) lim inf_{n→∞} q^{(n)}(x, i) ≥ lim inf_{n→∞} J^{T^{(n)}} ≥ B − 2.

Since B is arbitrary, we have that

(2.29) lim_{n→∞} q^{(n)}(x, i) = ∞.

It is clear from our proof that q^{(n)}(x, i) converges to v(x, i) locally uniformly. Since x → q^{(n)}(x, i) is continuous, the continuity of x → v(x, i) follows. □

The next result shows that the optimal switching problem is equivalent to solving two coupled optimal stopping problems.
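The monotone convergence q^{(n)} ↑ v just established suggests computing the value function by iterating the coupled dynamic programming relations numerically. A minimal sketch on a birth-death-chain discretization of the diffusion, with made-up reward and cost data (grid, time step and all parameters are illustrative assumptions, not from the paper):

```python
import math

def coupled_value_iteration(n_grid=101, x_lo=-2.0, x_hi=2.0, alpha=0.5,
                            mu=(-0.3, 0.3), sigma=1.0, H=(0.5, 0.5),
                            n_iter=1500):
    """Iterate  v(x,i) <- max( v(x,1-i) - H[i],
                               f(x,i)*dt + e^{-alpha*dt} * E[v(X_dt, i)] )
    on a birth-death discretization with absorbing endpoints, mimicking the
    coupled system of stopping problems.  All data are made up."""
    h = (x_hi - x_lo) / (n_grid - 1)
    xs = [x_lo + j * h for j in range(n_grid)]
    dt = h * h / (sigma * sigma)             # step matching the variance
    disc = math.exp(-alpha * dt)
    f = lambda x, i: x if i == 1 else 0.0    # earn X while in regime 1
    v = [[0.0] * n_grid for _ in range(2)]
    for _ in range(n_iter):
        new = [row[:] for row in v]
        for i in (0, 1):
            p_up = 0.5 + mu[i] * h / (2.0 * sigma * sigma)  # drift tilt
            for j in range(1, n_grid - 1):
                cont = f(xs[j], i) * dt + disc * (
                    p_up * v[i][j + 1] + (1.0 - p_up) * v[i][j - 1])
                new[i][j] = max(cont, v[1 - i][j] - H[i])
        v = new
    return xs, v

xs, v = coupled_value_iteration()
```

With this reward, never switching in regime 0 earns nothing, so v(·, 0) stays non-negative, while regime 1 collects a positive discounted reward on the right half of the grid; the iteration produces an increasing approximation in the spirit of the sequence q^{(n)}.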
Proposition 2.1.
The value function of the optimal switching problem has the following representation for any x ∈ (c, d) and i ∈ {0, 1}:

(2.30) v(x, i) = sup_{τ ∈ S} E^{x,i}[ ∫_0^τ e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ} ( v(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ],

which can also be written as

(2.31) v(x, i) = q^{(0)}(x, i) + sup_{τ ∈ S} E^{x,i}[ e^{−ατ} ( −q^{(0)}(X^{(0)}_τ, i) + v(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ],

due to the strong Markov property of X^{(0)}.

Proof. First note that

(2.32) w^{(n)}(x, i) ↑ v(x, i), as n → ∞,

as a result of Corollary 2.1 and Lemma 2.1. Therefore, it follows from (2.19) that

(2.33) w^{(n)}(x, i) ≤ sup_{τ ∈ S} E^{x,i}[ ∫_0^τ e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ} ( v(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ].

To obtain the opposite inequality let us choose τ̃ such that

(2.34) E^{x,i}[ ∫_0^{τ̃} e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ̃} ( v(X^{(0)}_{τ̃}, 1 − i) − H(X^{(0)}_{τ̃}, i, 1 − i) ) ] ≥ sup_{τ ∈ S} E^{x,i}[ ∫_0^τ e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ} ( v(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ] − ε.

Then by the monotone convergence theorem

(2.35) v(x, i) = lim_{n→∞} w^{(n)}(x, i) ≥ lim_{n→∞} E^{x,i}[ ∫_0^{τ̃} e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ̃} ( w^{(n−1)}(X^{(0)}_{τ̃}, 1 − i) − H(X^{(0)}_{τ̃}, i, 1 − i) ) ] = E^{x,i}[ ∫_0^{τ̃} e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ̃} ( v(X^{(0)}_{τ̃}, 1 − i) − H(X^{(0)}_{τ̃}, i, 1 − i) ) ] ≥ sup_{τ ∈ S} E^{x,i}[ ∫_0^τ e^{−αs} f(X^{(0)}_s, i) ds + e^{−ατ} ( v(X^{(0)}_τ, 1 − i) − H(X^{(0)}_τ, i, 1 − i) ) ] − ε.

This proves the statement of the proposition. □

Remark 2.1. (i)
It is clear that the result of the previous proposition holds even for finite horizon problems, which can be shown by making slight modifications to the proofs above (by setting the cost functions to be equal to zero after the maturity). (ii)
Also, if there are more than two regimes the controller can choose from, (2.31) can be modified to read

(2.36) v(x, i) = q^{(0)}(x, i) + sup_{τ ∈ S} E^{x,i}[ e^{−ατ} ( −q^{(0)}(X^{(0)}_τ, i) + M v(X^{(0)}_τ, i) ) ],

where

(2.37) M v(x, i) = max_{j ∈ I − {i}} ( v(x, j) − H(x, i, j) ),

and I is the set of regimes.

2.4. A Class of Optimal Stopping Times.
In this section, using the classical theory of optimal stopping, we will show that hitting times of a certain kind are optimal. We will first show that the assumed growth conditions on f and H lead to a growth condition on the value function v, from which we can conclude that v is finite on (c, d).

Lemma 2.2.
There exists a constant C_v such that

(2.38) v(x, i) ≤ C_v (1 + |x|),  x ∈ (c, d), i ∈ {0, 1}.

In fact, the same holds for all q^{(n)}, n ∈ N.

Proof. As in Pham (2007), due to the linear growth condition on μ and σ, the process X defined in (2.1) satisfies the second moment estimate

(2.39) E^{x,i}[X_t²] ≤ C e^{Ct} (1 + |x|²),

for some positive constant C. Due to the linear growth assumption on f we have that

(2.40) E^{x,i}[ ∫_0^∞ e^{−αt} |f(X_t, I_t)| dt ] ≤ C_f E^{x,i}[ ∫_0^∞ e^{−αt} (1 + |X_t|) dt ] ≤ √C C_f ∫_0^∞ e^{−αt} e^{Ct/2} (1 + |x|) dt ≤ C_v (1 + |x|),

for some large enough constant C_v. Here the second inequality follows from Jensen's inequality and the fact that √(1 + |x|²) ≤ 1 + |x|. Also recall that we have assumed the discount factor α to be large enough (this is similar to the assumption in Pham (2007)). Taking the supremum over T ∈ S in (2.40) we obtain that

(2.41) v(x, i) ≤ sup_{T ∈ S} E^{x,i}[ ∫_0^∞ e^{−αt} |f(X_t, I_t)| dt ] ≤ C_v (1 + |x|).

The linear growth of q^{(n)} can be shown similarly. □

Proposition 2.2.
Let us define

(2.42) Γ_i := {x ∈ (c, d) : v(x, i) = v(x, 1 − i) − H(x, i, 1 − i)},  i ∈ {0, 1}.

Let us assume that c = 0 and d = ∞ and that one of the following two conditions holds: (1) c is absorbing and d is natural; (2) both c and d are natural. Then if, for i ∈ {0, 1}, lim_{x→∞} x/ψ_i(x) = 0, the stopping times

(2.43) τ^{*,i} := inf{t ≥ 0 : X^{(0)}_t ∈ Γ_i}

are optimal. Note that X^{(0)} in (2.11) depends on I_0 = i through its drift and volatility.
Proof.
Let us prove the statement for Case 1. First, we define

(2.44) l^i_d := lim_{x→d} ( v(x, 1 − i) − q^{(0)}(x, i) − H(x, i, 1 − i) )⁺ / ψ_i(x),  i ∈ {0, 1}.

By Lemma 2.2, v and q^{(0)} satisfy a linear growth condition. We assumed that H also satisfies a linear growth condition. Therefore the assumption on ψ_i guarantees that l^i_d = 0 for i ∈ {0, 1}. But then the result follows from Proposition 5.7 of Dayanik and Karatzas (2003).

For Case 2, we will also need to show that

(2.45) l^i_c := lim_{x→c} ( v(x, 1 − i) − q^{(0)}(x, i) − H(x, i, 1 − i) )⁺ / φ_i(x) = 0,

and use Proposition 5.13 of Dayanik and Karatzas (2003). But the result is immediate since v, q^{(0)} and H are bounded in a neighborhood of c = 0 and lim_{x→c} φ_i(x) = ∞, since c is a natural boundary. □

Remark 2.2.
If both c and d are absorbing, it follows from Proposition 4.4 of Dayanik and Karatzas (2003) that the stopping times in (2.43) are optimal, since H, q^{(0)} and v are continuous. Also, observe that when c is absorbing (2.45) still holds, since v(c, i) = 0, i ∈ {0, 1}. Similarly, when d is absorbing, l^i_d in (2.44) is equal to zero.

Remark 2.3.
Since H(·, i, 1 − i) + H(·, 1 − i, i) is strictly positive, it can easily be seen from the definition that Γ_0 ∩ Γ_1 = ∅.

2.5. Explicit Solutions.
In this section, we let c = 0 and d = ∞, and assume that c is either natural or absorbing, and that d is natural.

Proposition 2.3.
Let us introduce the functions

(2.46) h_0(x) := q^{(0)}(x, 1) − q^{(0)}(x, 0) − H(x, 0, 1) and h_1(x) := q^{(0)}(x, 0) − q^{(0)}(x, 1) − H(x, 1, 0).

(i) If for all x ∈ (0, ∞) we have that h_0(x) ≤ 0 and h_1(x) ≤ 0, then Γ_0 = Γ_1 = ∅.

(ii) Let us assume that the dynamics of (2.1) do not depend on I(t) (as a result X^{(0)} = X and we will denote E^{x,0} = E^{x,1} by E^x). Then h_1(x) ≤ 0 for all x ∈ (0, ∞) implies that Γ_1 = ∅. Similarly, if h_0(x) ≤ 0 for all x ∈ (0, ∞), then Γ_0 = ∅. (Observe that, in this case, the optimal switching problem reduces to an ordinary optimal stopping problem.)

Proof. (i) For any n ≥ 1, let us introduce

(2.47) u^{(n)}(x, i) := w^{(n)}(x, i) − q^{(0)}(x, i),  x ∈ (0, ∞), i ∈ {0, 1}.

Using the strong Markov property of X^{(0)} and (2.19) we can write

(2.48) u^{(n)}(x, i) = sup_{τ ∈ S} E^{x,i}[ e^{−ατ} ( u^{(n−1)}(X^{(0)}_τ, 1 − i) + h_i(X^{(0)}_τ) ) ].

Since u^{(0)}(x, i) = 0 and h_i(x) ≤ 0, it follows from (2.48) that u^{(1)}(x, i) = 0, x ∈ (0, ∞), i ∈ {0, 1}. If we assume that for m ∈ {1, ..., n − 1} we have that u^{(m)}(x, i) = 0, x ∈ (0, ∞), i ∈ {0, 1}, it follows from (2.48) that u^{(m+1)}(x, i) = 0, x ∈ (0, ∞), i ∈ {0, 1}. For a given n, we can carry out this induction argument to show that u^{(n)}(x, i) = 0, x ∈ (0, ∞), i ∈ {0, 1}. Now using Lemma 2.1 and Corollary 2.1 we have that v(x, i) = q^{(0)}(x, i), which yields the desired result.

(ii) We will only prove the first statement since the proof of the second statement is similar. As in the proof of (i), h_1(x) ≤ 0, x ∈ (0, ∞), implies that u^{(1)}(·, 1) = 0. On the other hand,

u^{(1)}(x, 0) = sup_{τ ∈ S} E^x[ e^{−ατ} h_0(X_τ) ].

Let us assume that for m ∈ {1, ..., n − 1},

u^{(m)}(x, 1) = 0 and u^{(m)}(x, 0) = sup_{τ ∈ S} E^x[ e^{−ατ} h_0(X_τ) ].

Since H(x, 0, 1) + H(x, 1, 0) > 0, we have h_0(x) + h_1(x) < 0, which in turn implies that

(2.49) u^{(m)}(x, 0) ≤ − inf_{τ ∈ S} E^x[ e^{−ατ} h_1(X_τ) ].

Using (2.49) we can write

(2.50) u^{(m+1)}(x, 1) = sup_{τ ∈ S} E^x[ e^{−ατ} ( u^{(m)}(X_τ, 0) + h_1(X_τ) ) ] ≤ sup_{τ ∈ S} E^x[ e^{−ατ} u^{(m)}(X_τ, 0) ] + inf_{τ ∈ S} E^x[ e^{−ατ} h_1(X_τ) ] = u^{(m)}(x, 0) + inf_{τ ∈ S} E^x[ e^{−ατ} h_1(X_τ) ] ≤ 0.

The second equality in the above equation follows from the assumption that the dynamics of (2.1) do not depend on I(t): Indeed, since the function u^{(m)}(·, 0) is the value function of an optimal stopping problem, it is non-negative and F-concave (see e.g. Proposition 5.11 of Dayanik and Karatzas (2003)). On the other hand, by the same proposition of Dayanik and Karatzas (2003), sup_{τ ∈ S} E^x[ e^{−ατ} u^{(m)}(X_τ, 0) ] is the smallest non-negative F-concave majorant of the function u^{(m)}(·, 0); since this function is itself non-negative and F-concave, it follows that

sup_{τ ∈ S} E^x[ e^{−ατ} u^{(m)}(X_τ, 0) ] = u^{(m)}(x, 0).

Since the function u^{(m+1)}(·, 1) is non-negative, it follows from (2.50) that u^{(m+1)}(·, 1) ≡ 0. On the other hand,

u^{(m+1)}(x, 0) = sup_{τ ∈ S} E^x[ e^{−ατ} ( u^{(m)}(X_τ, 1) + h_0(X_τ) ) ] = sup_{τ ∈ S} E^x[ e^{−ατ} h_0(X_τ) ],

thanks to our induction hypothesis. Using this induction on m, we see that

u^{(n)}(x, 1) = 0 and u^{(n)}(x, 0) = sup_{τ ∈ S} E^x[ e^{−ατ} h_0(X_τ) ],

for any given n ∈ N. At this point, applying Lemma 2.1 and Corollary 2.1 we obtain that

v(x, 1) = q^{(0)}(x, 1),  v(x, 0) = sup_{τ ∈ S} E^x[ e^{−ατ} h_0(X_τ) ] + q^{(0)}(x, 0). □
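The explicit construction of Section 2.5 below turns on the concavity of the transformed obstacle K_0(y) = h_0(F^{−1}(y))/φ(F^{−1}(y)) introduced in (2.51). A toy numerical check, assuming standard Brownian motion with α = 1/2 (so ψ(x) = e^x, φ(x) = e^{−x}, F(x) = e^{2x}) and a made-up h_0 chosen so that K_0 has a closed form; neither the dynamics nor h_0 come from the paper's examples:

```python
import math

# Assumed dynamics for illustration: standard BM, alpha = 1/2, so that
# psi(x) = e^x, phi(x) = e^{-x}, F(x) = psi/phi = e^{2x}, F^{-1}(y) = ln(y)/2.
phi = lambda x: math.exp(-x)
F = lambda x: math.exp(2.0 * x)
F_inv = lambda y: 0.5 * math.log(y)

# Made-up reward gap h0, chosen so that K0 is available in closed form.
h0 = lambda x: math.exp(-x) * (1.0 - math.exp(-x))

def K0(y):
    """K0(y) = h0(F^{-1}(y)) / phi(F^{-1}(y)), as in (2.51)."""
    x = F_inv(y)
    return h0(x) / phi(x)

# Here K0(y) simplifies to 1 - y^{-1/2}, which is concave on (0, infinity);
# a central second difference confirms this numerically.
def second_diff(g, y, eps=1e-4):
    return (g(y + eps) - 2.0 * g(y) + g(y - eps)) / (eps * eps)
```

When K_0 fails to be concave on all of (0, ∞), the set where its second derivative is negative is exactly the set 𝒦_0 that appears in Lemma 2.3 and Proposition 2.5.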
In the rest of this section we will assume that the dynamics of (2.1) do not depend on I(t). We will denote F_i by F and G_i by G. We will also assume that x → H(x, i, 1 − i) is continuously differentiable for i ∈ {0, 1}.

Proposition 2.4.
Let us define

(2.51) K_0(y) := h_0(F^{−1}(y)) / φ(F^{−1}(y)),  y ∈ (0, ∞),

and

(2.52) K_1(y) := h_1(G^{−1}(y)) / ψ(G^{−1}(y)),  y ∈ (−∞, 0).

Here F^{−1} and G^{−1} are the functional inverses of F and G, respectively. If y → K_0(y) is non-negative and concave on (0, ∞), then Γ_0 = (0, ∞) and Γ_1 = ∅. Similarly, if y → K_1(y) is non-negative and concave on (−∞, 0), then Γ_1 = (0, ∞) and Γ_0 = ∅.

Proof. We will only prove the first statement. In this case the function P_0 in (2.56) is non-negative and concave, and its smallest non-negative concave majorant in (2.54) satisfies V_0 = P_0, which implies that v(x, 0) = v(x, 1) − H(x, 0, 1) for all x. Therefore Γ_0 = (0, ∞). Thanks to Remark 2.3 we necessarily have that Γ_1 = ∅. □

Lemma 2.3.
Let us define

(2.53) u(x, i) := v(x, i) − q^{(0)}(x, i),  x ∈ (0, ∞), i ∈ {0, 1},

and

(2.54) V_0(y) := u(F^{−1}(y), 0) / φ(F^{−1}(y)),  y ∈ (0, ∞),

(2.55) V_1(y) := u(G^{−1}(y), 1) / ψ(G^{−1}(y)),  y ∈ (−∞, 0).

Then the following statements hold:

(i) y → V_0(y) and y → V_1(y) are the smallest non-negative concave majorants of

(2.56) P_0(y) := u(F^{−1}(y), 1) / φ(F^{−1}(y)) + K_0(y),  y ∈ (0, ∞),

and

(2.57) P_1(y) := u(G^{−1}(y), 0) / ψ(G^{−1}(y)) + K_1(y),  y ∈ (−∞, 0),

respectively.

(ii) V_0(0+) = V_1(0−) = 0.

(iii) V_0 is piecewise linear on {y ∈ R_+ : V_0(y) > P_0(y)} and

(2.58) C_0 := {y ∈ R_+ : V_0(y) = P_0(y)} ⊂ 𝒦_0 := {y ∈ R_+ : d²K_0(y)/dy² < 0}.

Moreover, the function P_0 is concave on 𝒦_0.

(iv) V_1 is piecewise linear on {y ∈ R_− : V_1(y) > P_1(y)} and

(2.59) C_1 := {y ∈ R_− : V_1(y) = P_1(y)} ⊂ 𝒦_1 := {y ∈ R_− : d²K_1(y)/dy² < 0}.

The function P_1 is concave on 𝒦_1.

Proof. (i) It follows from (2.31) that u(·, i) satisfies

(2.60) u(x, i) = sup_{τ ∈ S} E^x[ e^{−ατ} ( u(X^{(0)}_τ, 1 − i) + q^{(0)}(X^{(0)}_τ, 1 − i) − q^{(0)}(X^{(0)}_τ, i) − H(X^{(0)}_τ, i, 1 − i) ) ].

The statement follows from Theorem 16.4 of Dynkin (1965) (also see Proposition 5.11 of Dayanik and Karatzas (2003)). (ii)
The result follows from (2.45) and (2.44). We use Remark 2.2 when 0 is absorbing. (iii), (iv).
First, we want to show that P_0 is concave on 𝒦_0. It is enough to show that u(F^{−1}(y), 1)/φ(F^{−1}(y)) is concave on 𝒦_0. But this can be shown using item (i). Now the rest of the statement follows since V_0 is the smallest non-negative concave majorant of P_0. The proof of (iv) follows similarly. □

In the next proposition we will give sufficient conditions under which the switching regions are connected and provide explicit solutions for the value function of the switching problem. We will also show that the value functions of the switching problem, x → v(x, i), i ∈ {0, 1}, are continuously differentiable under our assumptions.

In what follows we will assume that H(x, i, 1 − i) ≡ H(i, 1 − i), i ∈ {0, 1}.

Proposition 2.5.
Let us assume that the functions h_0 and h_1 defined in Proposition 2.3 satisfy

(2.61) lim_{x→∞} h_0(x) > 0,  sup_{x>0} h_1(x) > 0,

and that 𝒦_0 = (M, ∞) for some M > 0. Then

(2.62) v(x, 0) = β_0 ψ(x) + q^{(0)}(x, 0) for x ∈ (0, a), and v(x, 0) = β_1 φ(x) + q^{(0)}(x, 1) − H(0, 1) for x ∈ [a, ∞),

for some positive constants a, β_0 and β_1. Moreover, the following statements hold:

(i) If 𝒦_1 = (−∞, −M_1) for some M_1 > 0, then x → v(x, 1) has the following form:

(2.63) v(x, 1) = β_2 ψ(x) + q^{(0)}(x, 0) − H(1, 0) for x ∈ (0, b], and v(x, 1) = β_3 φ(x) + q^{(0)}(x, 1) for x ∈ (b, ∞),

for positive constants β_2, β_3 and b < a.

(ii) If 𝒦_1 = (−L, −N) for some L, N > 0, then v(x, 1) is of the form

(2.64) v(x, 1) = β̂ ψ(x) + q^{(0)}(x, 1) for x ∈ (0, b̃); v(x, 1) = β_2 ψ(x) + q^{(0)}(x, 0) − H(1, 0) for x ∈ [b̃, c̄]; v(x, 1) = β̃ φ(x) + q^{(0)}(x, 1) for x ∈ (c̄, ∞),

for positive constants c̄ < a and b̃ < c̄.

In both cases the value functions are continuously differentiable. As a result, the positive constants a, b, b̃, c̄, β_0, β_1, β_2, β_3, β̂ and β̃ can be determined from the continuous and smooth fit conditions.

Before giving the proof we make two quick remarks.
Remark 2.4.
Note that h_0(x) + h_1(x) < 0 for all x ∈ R_+. So when h_0(x) > 0, we have that h_1(x) < 0. The assumption that lim_{x→∞} h_0(x) > 0 ensures that, for large values of the state, the controller prefers regime 1 to regime 0, which can also be seen from the form of the value function v(·, 0) in (2.62).

Remark 2.5. The following two identities can be checked to see when the functions K_i, i ∈ {0, 1}, are concave:

(2.65) (d²K_0(y)/dy²) · ((A − α) h_0(x)) ≥ 0, where y = F(x),

and

(2.66) (d²K_1(z)/dz²) · ((A − α) h_1(x)) ≥ 0, where z = G(x),

where A is the infinitesimal generator of X.

It follows from (2.65) and (2.66) that if d²K_0(y)/dy² ≤ 0, then d²K_1(z)/dz² ≥ 0. Let us prove this statement. First, due to (2.65), d²K_0(y)/dy² ≤ 0 implies

(A − α)( q^{(0)}(x, 1) − q^{(0)}(x, 0) ) + αH(0, 1) ≤ 0.

As a result,

(A − α)( −q^{(0)}(x, 1) + q^{(0)}(x, 0) ) + αH(1, 0) ≥ αH(0, 1) + αH(1, 0) > 0,

which implies d²K_1(z)/dz² ≥ 0 thanks to (2.66).

Proof of Proposition 2.5.
The proof is a corollary of Lemma 2.3. Let us denote k := inf C_0. We will argue that k < ∞. If we assume that k = ∞, then it follows that Γ_0 = R_+, and hence v(x, 0) = v(x, 1) − H(0, 1) = q^(0)(x, 1) − H(0, 1). Since q^(0)(x, 0) ≤ v(x, 0), this yields

h_1(x) = q^(0)(x, 0) − q^(0)(x, 1) − H(1, 0) ≤ −( H(0, 1) + H(1, 0) ) < 0, x ∈ R_+.

This contradicts our assumption on h_1 in (2.61).

Since V_0(k) = P_0(k) and y → P_0(y) is concave on (k, ∞) (by Lemma 2.3 (iii)), it follows that C_0 = [k, ∞), due to the fact that V_0 is the smallest concave majorant of P_0. As a result, thanks also to Lemma 2.3 (ii), we have that

(2.67) V_0(y) = α_0 y, y ∈ (0, k); P_0(y), y ∈ [k, ∞),

for some constant α_0 > 0 satisfying α_0 k = P_0(k), which proves (2.62).

(i) Similarly, if we let l := sup C_1, then we have that this quantity is a finite negative number and that C_1 = (−∞, l]. As a result,

(2.69) V_1(y) = P_1(y), y ∈ (−∞, l]; −β_0 y, y ∈ (l, 0),

for some constant β_0 > 0 satisfying −β_0 l = P_1(l).

Next, we are going to determine α_0, β_0, k and l, making further use of the fact that V_0 and V_1 are the smallest non-negative concave majorants of P_0 and P_1. First observe that y → K_0(y) is continuously differentiable, since by (2.18) x → q^(0)(x, i) is continuously differentiable. Second, by using (2.53) and (2.54) we obtain

(2.71) u(x, 0) = α_0 ψ(x), x ∈ [0, F^{-1}(k)); β_0 ϕ(x) + q^(0)(x, 1) − q^(0)(x, 0) − H(0, 1), x ∈ [F^{-1}(k), ∞),

and

(2.72) u(x, 1) = α_0 ψ(x) + q^(0)(x, 0) − q^(0)(x, 1) − H(1, 0), x ∈ [0, G^{-1}(l)]; β_0 ϕ(x), x ∈ (G^{-1}(l), ∞).

It follows from Remark 2.3 that

(2.73) G^{-1}(l) < F^{-1}(k).

As a result, we have that the function x → u(x, 1) is differentiable on (F^{-1}(k) − ε, ∞) for some ε > 0. Together with the continuous differentiability of K_0, this observation yields that

(2.74) y → P_0(y) is differentiable on [k − δ_0, ∞), for some δ_0 > 0.

Similarly, the differentiability of x → u(x, 0) on (0, G^{-1}(l)) implies that

(2.75) y → P_1(y) is differentiable on (−∞, l + δ_1], for some δ_1 > 0.

From (2.74) and (2.75), together with the fact that V_0 and V_1 are the smallest non-negative concave majorants of P_0 and P_1, we can determine α_0, β_0, k and l from the following additional equations they satisfy:

(2.76) α_0 = ∂P_0(y)/∂y |_{y=k}, β_0 = −∂P_1(y)/∂y |_{y=l}.
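All of the objects above are built from smallest non-negative concave majorants. On a grid, such a majorant can be computed with an upper-convex-hull pass; the sketch below uses a made-up obstacle P (not the paper's transformed rewards P_0, P_1), purely to illustrate the construction.

```python
import numpy as np

def concave_majorant(y, p):
    """Upper concave envelope of the points (y[i], p[i]) on [y[0], y[-1]],
    returned as values on the same (increasing) grid y."""
    hull = [0]                     # indices of the envelope's vertices
    for i in range(1, len(y)):
        # pop the last vertex while it lies on/below the chord hull[-2] -> i
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            cross = (y[i1] - y[i0]) * (p[i] - p[i0]) - (y[i] - y[i0]) * (p[i1] - p[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(y, y[hull], p[hull])

# Hypothetical obstacle: P(y) = sin(y) - 0.3 on [0, 6] (illustrative only).
y = np.linspace(0.0, 6.0, 2001)
P = np.sin(y) - 0.3
# smallest non-negative concave majorant = concave envelope of max(P, 0)
V = concave_majorant(y, np.maximum(P, 0.0))

assert np.all(V >= P - 1e-12) and np.all(V >= -1e-12)   # majorizes P and 0
assert np.max(np.diff(V, 2)) <= 1e-8                     # concave on the grid
```

The envelope is piecewise linear off the coincidence set {V = P}, mirroring the linear pieces α_0 y and −β_0 y in (2.67) and (2.69).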
Using (2.53) we can write the value functions v(·, i), i ∈ {0, 1}, as

(2.77) v(x, 0) = α_0 ψ(x) + q^(0)(x, 0), x ∈ (0, F^{-1}(k)); β_0 ϕ(x) + q^(0)(x, 1) − H(0, 1), x ∈ [F^{-1}(k), ∞),

and

(2.78) v(x, 1) = α_0 ψ(x) + q^(0)(x, 0) − H(1, 0), x ∈ (0, G^{-1}(l)]; β_0 ϕ(x) + q^(0)(x, 1), x ∈ (G^{-1}(l), ∞).

Now, a direct calculation shows that the left derivative and the right derivative of x → v(x, 0) are equal at x = F^{-1}(k). Similarly, one can show that the same holds for the function x → v(x, 1) at x = G^{-1}(l). This completes the proof of (i).

(ii) Let us denote s := inf C_1 and t := sup C_1. The function P_1 is concave on the interval [s, t] by Lemma 2.3 (iv). Moreover, because V_1 is the smallest non-negative concave majorant of P_1, it follows that C_1 = [s, t]. Using the facts that V_1 is piecewise linear on {y ∈ R_− : V_1(y) > P_1(y)}, that V_1(0) = 0, and that lim_{y→−∞} K_1(y) < 0 (the last being equivalent to lim_{x→0} h_1(x) < 0; see the assumption on h_1 in (2.61), the relation between h_0 and h_1 pointed out in Remark 2.4, and our assumption on the set K_1), we can write V_1 as

V_1(y) = γ, y ∈ (−∞, s); P_1(y), y ∈ [s, t]; −β̃ y, y ∈ (t, 0),

for some constant γ > 0, from which (2.64) follows. Note that s ≠ t, since the function V_1 is the smallest non-negative concave majorant of the function P_1. The proof that the smooth fit property is satisfied at the boundaries follows a similar line of argument to the proof of item (i). □

Examples.

Example 2.1.
In this example we will show how, by changing the switching costs, we can move from having one connected continuation region (item (i) in Proposition 2.5) to disconnected continuation regions (item (ii) in Proposition 2.5). Let the running reward function in (2.3) be given by f_i(x) = k_i x^{γ_i} for i = 0, 1, with 0 < γ_0 < γ_1 < 1, k_1 > 0 and k_0 ∈ R_+. We assume that the dynamics of the underlying state variable follow

dX_t = m X_t dt + β X_t dW_t,

where m and β are given constants, and

(2.79) m γ_i + (β²/2) γ_i (γ_i − 1) < α, i = 0, 1.

Case 1. A connected continuation region. We assume that H(1, 0) < 0 and H(0, 1) > 0. (Recall that H(1, 0) + H(0, 1) > 0.) We will compute ψ, ϕ, F, G and q^(0)(·, i), i ∈ {0, 1}, in terms of which we stated our assumptions. The increasing and decreasing solutions of the ordinary differential equation (A − α)u = 0 are given by ψ(x) = x^{µ_+} and ϕ(x) = x^{µ_−}, where

µ_{+,−} = (1/β²) [ (β²/2 − m) ± √( (m − β²/2)² + 2αβ² ) ].

Note that under the assumption α > m we have µ_+ > 1 and µ_− < 0. Observe that lim_{x→∞} x/ψ(x) = 0 (the main assumption of Proposition 2.2). It follows that F(x) = x^{2∆/β²} and G(x) = −x^{−2∆/β²}, in which ∆ := √( (m − β²/2)² + 2αβ² ). We can calculate q^(0)(·, i), i ∈ {0, 1}, explicitly:

q^(0)(x, i) = E_{x,i}[ ∫_0^∞ e^{−αs} f_i(X_s^(0)) ds ] = k_i x^{γ_i} / C_i,

where C_i := α − ( m γ_i − (β²/2) γ_i (1 − γ_i) ) > 0, since 0 < γ_i < 1 and α > m. On the other hand,

h_0(x) = k_1 x^{γ_1}/C_1 − k_0 x^{γ_0}/C_0 − H(0, 1), and h_1(x) = k_0 x^{γ_0}/C_0 − k_1 x^{γ_1}/C_1 − H(1, 0).

The limits are

(2.80) lim_{x→∞} h_0(x) = ∞, lim_{x→0} h_1(x) = −H(1, 0) > 0.

Hence (2.61) in Proposition 2.5 is satisfied. Let us show that K_0 = (M_0, ∞) and K_1 = (−∞, −M_1) for some M_0, M_1 > 0. Remark 2.5 will be used to achieve this final goal. A direct computation gives

(2.81) (A − α)h_0(x) = (k_1/C_1)( m γ_1 + (β²/2) γ_1 (γ_1 − 1) − α ) x^{γ_1} + αH(0, 1) − (k_0/C_0)( m γ_0 + (β²/2) γ_0 (γ_0 − 1) − α ) x^{γ_0},

which is negative only for large enough x (since we assumed (2.79)). On the other hand,

(2.82) (A − α)h_1(x) = (k_0/C_0)( m γ_0 + (β²/2) γ_0 (γ_0 − 1) − α ) x^{γ_0} − (k_1/C_1)( m γ_1 + (β²/2) γ_1 (γ_1 − 1) − α ) x^{γ_1} + αH(1, 0),

which is negative only for small enough non-negative x. Thanks to Proposition 2.5 the value function v(x, i) is given by

v(x, 0) = β_0 x^{µ_+} + (k_0/C_0) x^{γ_0}, x ∈ [0, a); β_1 x^{µ_−} + (k_1/C_1) x^{γ_1} − H(0, 1), x ∈ [a, ∞);

v(x, 1) = β_0 x^{µ_+} + (k_0/C_0) x^{γ_0} − H(1, 0), x ∈ [0, b]; β_1 x^{µ_−} + (k_1/C_1) x^{γ_1}, x ∈ (b, ∞),

in which the positive constants β_0, β_1, a and b can be determined from the continuous and smooth fit conditions. Figure 2.1 illustrates a numerical example. Note that since the closing cost C := H(1, 0) is negative, v(x, 1) ≥ v(x, 0) for all x ∈ R_+.

Case 2. Multiple continuation regions.
We will change the value of the cost of switching from 1 to 0 and assume that it is positive, i.e., H(1, 0) > 0 and H(0, 1) > 0.

Figure 2.1. A numerical example illustrating Case 1, for a particular choice of (m, β, α, L, C) and (γ_0, γ_1, k_0, k_1), with the resulting (a, b, β_0, β_1). v(x, 0) is plotted in a blue line on x ∈ (0, a] and in a dashed line on x ∈ (a, ∞). v(x, 1) is plotted in a dashed line on x ∈ (0, b] and in a red line on x ∈ (b, ∞).

Clearly, sup_{x>0} h_1(x) > 0. Moreover, the analysis of (2.82) easily shows that

(A − α)h_1(0) = αH(1, 0) > 0 and lim_{x→∞} (A − α)h_1(x) = ∞ > 0.

As a result, (A − α)h_1(x) = 0 has two real roots. This fact translates into the fact that K_1 = (−L, −N) for some L, N > 0. See Figure 2.2 for the shape of K_1(y) for a particular set of parameters.

Figure 2.2. A numerical example illustrating Case 2, for a particular choice of (m, β, α, H(0, 1), H(1, 0)) and (γ_0, γ_1, k_0, k_1). The K_1(y) function (a) in the neighborhood of the origin in the transformed space and (b) for large negative values of y.

Thanks to Proposition 2.5, the value function v(x, 1) is given by (2.64). Figure 2.3 displays the solution for a particular set of parameters.
Example 2.2. Ornstein–Uhlenbeck process.

Figure 2.3. (a) A numerical example illustrating Case 2, for a particular choice of (m, β, α, H(0, 1), H(1, 0)), (γ_0, γ_1, k_0, k_1), (β, β̃, β̂) and (a, b̃, c). v(x, 0) is plotted in a blue line on x ∈ (0, a) and in a dashed line on x ∈ [a, ∞). v(x, 1) is plotted in a red line on x ∈ (0, b̃) and (c, ∞); it is plotted in a dashed line on x ∈ [b̃, c]. (b) To show the multiple continuation regions more clearly, we magnify the left picture (a) near the origin.

The purpose of this example is to solve an optimal switching problem for a mean-reverting process. Let the dynamics of the state variable be given by

dX_t = δ(m − X_t) dt + σ dW_t.

Let the value function be defined by

v(x, i) = sup_{T ∈ S} E_{x,i}[ ∫_0^{τ_0} e^{−αt} (X_t − K) I_t dt − Σ_{τ_i < τ_0} e^{−ατ_i} H(X_{τ_i}, I_i, I_{i+1}) ],

in which τ_0 = inf{ t > 0 : X_t = 0 }. Our assumptions in Section 2.1, namely (2.12), (2.4) and (2.6), are satisfied by our model. Let us introduce

(2.83) ψ̃(x) := e^{δx²/2} D_{−α/δ}( −x√(2δ) ) and ϕ̃(x) := e^{δx²/2} D_{−α/δ}( x√(2δ) ),

where D_ν(·) is the parabolic cylinder function (see Borodin and Salminen (2002), Appendices 1.24 and 2.9), which is given in terms of the Hermite function as

(2.84) D_ν(z) = 2^{−ν/2} e^{−z²/4} H_ν(z/√2), z ∈ R.

Recall the Hermite function H_ν of degree ν and its integral representation

(2.85) H_ν(z) = (1/Γ(−ν)) ∫_0^∞ e^{−t² − 2tz} t^{−ν−1} dt, Re(ν) < 0

(see, for example, Lebedev (1972), pages 284 and 290). In terms of the functions in (2.83), the increasing and decreasing fundamental solutions of (A − α)u = 0 are given by

ψ(x) = ψ̃( (x − m)/σ ), ϕ(x) = ϕ̃( (x − m)/σ ).
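The fundamental solutions in (2.83) can be evaluated with SciPy's parabolic cylinder routine, and one can check by finite differences that ψ indeed solves (A − α)u = 0 for the Ornstein–Uhlenbeck generator A u = δ(m − x)u′ + (σ²/2)u″. The parameter values below are illustrative only.

```python
# Numerical sanity check of (2.83): psi(x) = exp(delta (x-m)^2 / (2 sigma^2))
#   * D_{-alpha/delta}( -(x-m) sqrt(2 delta) / sigma )
# should solve (A - alpha)u = 0 with A u = delta (m - x) u' + (sigma^2/2) u''.
import numpy as np
from scipy.special import pbdv   # pbdv(v, z) returns (D_v(z), D_v'(z))

delta, m, sigma, alpha = 0.5, 1.0, 0.4, 0.1

def psi(x):
    z = (x - m) * np.sqrt(2 * delta) / sigma
    return np.exp(delta * (x - m) ** 2 / (2 * sigma**2)) * pbdv(-alpha / delta, -z)[0]

def residual(x, h=1e-4):
    """(A - alpha)psi at x, with derivatives by central differences."""
    d1 = (psi(x + h) - psi(x - h)) / (2 * h)
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return delta * (m - x) * d1 + 0.5 * sigma**2 * d2 - alpha * psi(x)

for x in [0.2, 0.8, 1.0, 1.5, 2.5]:
    assert abs(residual(x)) < 1e-4 * max(1.0, abs(psi(x)))
print("psi solves (A - alpha)u = 0 up to finite-difference error")
```

Replacing `-z` by `z` in `psi` gives the decreasing solution ϕ, which satisfies the same equation.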
Observe that lim_{x→∞} x/ψ(x) = 0 (the main assumption of Proposition 2.2). Since E_x[X_t^(0)] = e^{−δt} x + m(1 − e^{−δt}), we have

(2.86) q^(0)(x, 0) = 0 and q^(0)(x, 1) = (x − m)/(δ + α) + (m − K)/α.

Note that the limits of the functions

h_0(x) = q^(0)(x, 1) − q^(0)(x, 0) − H(0, 1) = (x − m)/(δ + α) + (m − K)/α − H(0, 1),

and

h_1(x) = −( (x − m)/(δ + α) + (m − K)/α + H(1, 0) )

are given by

lim_{x→∞} h_0(x) = ∞, and lim_{x→0} h_1(x) = −( −m/(δ + α) + (m − K)/α + H(1, 0) ).

When ( K(α + δ) − δm ) / ( α(α + δ) ) > H(1, 0), we have lim_{x→0} h_1(x) > 0, so that (2.61) holds. Let us show that K_0 = (M_0, ∞) and K_1 = (−∞, −M_1) for some M_0, M_1 > 0. For this purpose we will again use Remark 2.5. A direct computation gives

(2.87) (A − α)h_0(x) = (A − α)( (x − m)/(δ + α) + (m − K)/α − H(0, 1) ) = −x + K + αH(0, 1),

which implies that the function K_0 is concave only on (M_0, ∞) for some M_0 > 0. On the other hand,

(2.88) (A − α)h_1(x) = −(A − α)( (x − m)/(δ + α) + (m − K)/α + H(1, 0) ) = x − K + αH(1, 0),

which implies that the function K_1(·) is concave only on (−∞, −M_1) for some M_1 > 0. Now, as a result of Proposition 2.5, we have that

v(x, 0) = v̂_0(x), x ∈ (0, a); v̂_1(x) − L, x ∈ [a, ∞); and v(x, 1) = v̂_0(x) − C, x ∈ (0, b]; v̂_1(x), x ∈ (b, ∞),

in which L := H(0, 1), C := H(1, 0),

v̂_0(x) = β_0 ( ψ(x) − F(0) ϕ(x) ) + q^(0)(x, 0) = β_0 e^{δ(x − m)²/(2σ²)} { D_{−α/δ}( −((x − m)/σ)√(2δ) ) − F(0) D_{−α/δ}( ((x − m)/σ)√(2δ) ) },

and

v̂_1(x) = β_1 ϕ(x) + q^(0)(x, 1) = β_1 e^{δ(x − m)²/(2σ²)} D_{−α/δ}( ((x − m)/σ)√(2δ) ) + (x − m)/(δ + α) + (m − K)/α.
The parameters a, b, β_0 and β_1 can now be obtained from the continuous and smooth fit conditions, since we know that the value functions v(·, i), i ∈ {0, 1}, are continuously differentiable. See Figure 2.4 for a numerical example.

Figure 2.4. A numerical solution illustrating Example 2.2, for a particular choice of (m, α, σ, δ, K, H(0, 1), H(1, 0)) and the resulting (a, b, β_0, β_1). v(x, 0) is plotted in a blue line on x ∈ (0, a] and in a dashed black line on x ∈ (a, ∞). v(x, 1) is plotted in a dashed line on x ∈ (0, b] and in a red line on x ∈ (b, ∞).

Appendix A. Proof of Lemma 2.1
We will approximate the switching problem by iterating optimal stopping problems. This approach is motivated by Davis (1993) (especially the section on impulse control) and Øksendal and Sulem (2005). To establish our goal we will use the properties of the essential supremum (see Karatzas and Shreve (1998), Appendix A) and the optimal stopping theory for Markov processes in Fakeev (1971). A similar proof, in the context of "multiple optimal stopping problems", is carried out by Carmona and Dayanik (2008). For any F-stopping time σ, let us define

(A.1) Z_σ^(n) := ess sup_{(τ_1, …, τ_n) ∈ S_σ^n} E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^(n), I_s^(n)) ds − Σ_{j=1}^n e^{−ατ_j} H(X_{τ_j}^(n), I_{j−1}, I_j) | F_σ ] ≥ 0,

for n ≥ 1, and

(A.2) Z_σ^(0) := E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^(0), I_0) ds | F_σ ] ≥ 0.

We will perform the proof of the lemma in four steps.
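The quantities Z^(n) (equivalently, the functions q^(n) of Step 3 below) have a transparent discrete-state analogue: q^(n) is obtained from q^(n−1) by solving an optimal stopping problem in which "stopping" means switching regimes and collecting the remaining value. A toy sketch on a reflected random-walk chain (all parameters, rewards and costs below are made up):

```python
# Toy discrete analogue of the iteration q^(0) -> q^(1) -> ...: q^(n) is the
# value with at most n regime switches left, computed from q^(n-1) by an
# optimal stopping problem ("stop" = switch and collect the remaining value).
import numpy as np

nx = 101
x = np.linspace(0.0, 2.0, nx)
gamma_disc = 0.95                       # one-step discount factor
f = np.stack([np.zeros(nx), x - 0.5])   # running reward f(x, i), i = 0, 1
H = {(0, 1): 0.1, (1, 0): 0.1}          # switching costs

def step_expectation(q):
    """E[q(X')] for a +-dx random walk with reflection at the endpoints."""
    up = np.empty_like(q); up[:-1] = q[1:]; up[-1] = q[-1]
    dn = np.empty_like(q); dn[1:] = q[:-1]; dn[0] = q[0]
    return 0.5 * (up + dn)

def solve_stopping(fi, payoff):
    """Value iteration for v = max(fi + gamma * E[v(X')], payoff)."""
    v = np.maximum(payoff, 0.0)
    for _ in range(2000):
        v_new = np.maximum(fi + gamma_disc * step_expectation(v), payoff)
        if np.max(np.abs(v_new - v)) < 1e-12:
            break
        v = v_new
    return v

# q^(0): no switches allowed -- the discounted running reward alone.
q = np.empty((2, nx))
for i in (0, 1):
    q[i] = solve_stopping(f[i], np.full(nx, -np.inf))

for n in range(1, 4):                    # q^(1), q^(2), q^(3)
    q_prev = q.copy()
    for i in (0, 1):
        q[i] = solve_stopping(f[i], q_prev[1 - i] - H[(i, 1 - i)])
    assert np.all(q >= q_prev - 1e-10)   # q^(n) is increasing in n
print("q^(n) computed for n = 0,...,3; monotone in n as in the proof")
```

The monotonicity asserted in the loop is the discrete counterpart of the fact, used in Step 3, that (q^(n))_{n ∈ N} is an increasing sequence of functions.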
Step 1. If we can show that the family

(A.3) Z := { E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^(n), I_s^(n)) ds − Σ_{j=1}^n e^{−ατ_j} H(X_{τ_j}^(n), I_{j−1}, I_j) | F_σ ] : (τ_1, …, τ_n) ∈ S_σ^n }

is directed upwards, then it follows from the properties of the essential supremum (see Karatzas and Shreve (1998), Appendix A) that for all n ∈ N

(A.4) Z_σ^(n) = lim_{k→∞} ↑ E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^{(n),k}, I_s^{(n),k}) ds − Σ_{j=1}^n e^{−ατ_j^k} H(X_{τ_j^k}^{(n),k}, I_{j−1}, I_j) | F_σ ]

for some sequence {(τ_1^k, …, τ_n^k)}_{k ∈ N} ⊂ S_σ^n. Here, X^{(n),k} is the solution of (2.1) when we replace I by I^{(n),k}, which is defined as

(A.5) I^{(n),k}(t) := I_0 1_{{t < τ_1^k}} + … + I_{n−1} 1_{{τ_{n−1}^k ≤ t < τ_n^k}} + I_n 1_{{t ≥ τ_n^k}}.

We will now argue that the family in (A.3) is directed upwards (see Karatzas and Shreve (1998), Appendix A, for the definition of this concept): For any (τ_1^1, …, τ_n^1), (τ_1^2, …, τ_n^2) ∈ S_σ^n, let us define the event

(A.6) A := { E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^{(n),1}, I_s^{(n),1}) ds − Σ_{j=1}^n e^{−ατ_j^1} H(X_{τ_j^1}^{(n),1}, I_{j−1}, I_j) | F_σ ] ≥ E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^{(n),2}, I_s^{(n),2}) ds − Σ_{j=1}^n e^{−ατ_j^2} H(X_{τ_j^2}^{(n),2}, I_{j−1}, I_j) | F_σ ] },

and the stopping times

(A.7) τ̃_i := τ_i^1 1_A + τ_i^2 1_{Ω − A}, i ∈ {1, …, n}.

Then (τ̃_1, …, τ̃_n) ∈ S_σ^n, and

(A.8) E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X̃_s^(n), Ĩ_s^(n)) ds − Σ_{j=1}^n e^{−ατ̃_j} H(X̃_{τ̃_j}^(n), I_{j−1}, I_j) | F_σ ] = max of the two conditional expectations appearing in (A.6),

where X̃^(n) and Ĩ^(n) are constructed from (τ̃_1, …, τ̃_n) as above, and therefore Z is directed upwards.

Step 2.
In this step we will show that

(A.9) Z_σ^(n) = ess sup_{τ ∈ S_σ} E_{x,i}[ ∫_σ^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ} H(X_τ^(0), I_0, I_1) + Z_τ^(n−1) | F_σ ].

Let us fix τ ∈ S_σ. It follows from Step 1 that there exists a sequence {(τ_2^k, …, τ_n^k)}_{k ∈ N} ⊂ S_τ^{n−1} such that

(A.10) Z_τ^(n−1) = lim_{k→∞} ↑ E_{x,i}[ ∫_τ^{τ_{c,d}} e^{−αs} f(X_s^{(n−1),k}, I_s^{(n−1),k}) ds − Σ_{j=2}^n e^{−ατ_j^k} H(X_{τ_j^k}^{(n−1),k}, I_{j−1}, I_j) | F_τ ].

Here, X^{(n−1),k} is the solution of (2.1) when we replace I by I^{(n−1),k}, which is defined as

(A.11) I^{(n−1),k}(t) := I_1 1_{{t < τ_2^k}} + … + I_{n−1} 1_{{τ_{n−1}^k ≤ t < τ_n^k}} + I_n 1_{{t ≥ τ_n^k}}.

For every k ∈ N we have that (τ, τ_2^k, …, τ_n^k) ∈ S_σ^n, and that

(A.12) Z_σ^(n) ≥ lim sup_{k→∞} E_{x,i}[ ∫_σ^{τ_{c,d}} e^{−αs} f(X_s^{(n),k}, I_s^{(n),k}) ds − Σ_{j=1}^n e^{−ατ_j^k} H(X_{τ_j^k}^{(n),k}, I_{j−1}, I_j) | F_σ ],

in which we take τ_1^k = τ, and X^{(n),k} is the solution of (2.1) when we replace I by I^{(n),k}, which is defined as

(A.13) I^{(n),k}(t) := I_0 1_{{t < τ}} + I_1 1_{{τ ≤ t < τ_2^k}} + … + I_{n−1} 1_{{τ_{n−1}^k ≤ t < τ_n^k}} + I_n 1_{{t ≥ τ_n^k}}.

We can then write

Z_σ^(n) ≥ lim sup_{k→∞} { E_{x,i}[ ∫_σ^τ e^{−αs} f(X_s^{(n),k}, I_s^{(n),k}) ds − e^{−ατ} H(X_τ^{(n),k}, I_0, I_1) | F_σ ] + E_{x,i}[ ∫_τ^{τ_{c,d}} e^{−αs} f(X_s^{(n),k}, I_s^{(n),k}) ds − Σ_{j=2}^n e^{−ατ_j^k} H(X_{τ_j^k}^{(n),k}, I_{j−1}, I_j) | F_σ ] }

= E_{x,i}[ ∫_σ^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ} H(X_τ^(0), I_0, I_1) | F_σ ] + E_{x,i}[ lim_{k→∞} E_{X_τ^(0), I_1}[ ∫_τ^{τ_{c,d}} e^{−αs} f(X_s^{(n−1),k}, I_s^{(n−1),k}) ds − Σ_{j=2}^n e^{−ατ_j^k} H(X_{τ_j^k}^{(n−1),k}, I_{j−1}, I_j) ] | F_σ ]

= E_{x,i}[ ∫_σ^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ} H(X_τ^(0), I_0, I_1) + Z_τ^(n−1) | F_σ ]. (A.14)

Here, the first equality follows from the Monotone Convergence Theorem (we use the boundedness assumption on H; see (2.4)). Since τ ∈ S_σ is arbitrary, this implies that the left-hand side of (A.14) is greater than the right-hand side of (A.9). Let us now show the reverse inequality. For any (τ_1, …, τ_n) ∈ S_σ^n, let I^(n) be given by (2.10) and let X^(n) be the solution of (2.1) when I is replaced by I^(n). And let us define I^(n−1) by

(A.15) I^(n−1)(t) := I_1 1_{{t < τ_2}} + … + I_{n−1} 1_{{τ_{n−1} ≤ t < τ_n}} + I_n 1_{{t ≥ τ_n}},

and let X^(n−1) be the solution of (2.1) when I is replaced by I^(n−1). Then

E_{x,i}[ ∫_σ^{τ_1} e^{−αs} f(X_s^(n), I_s^(n)) ds − e^{−ατ_1} H(X_{τ_1}^(n), I_0, I_1) | F_σ ] + E_{x,i}[ ∫_{τ_1}^{τ_{c,d}} e^{−αs} f(X_s^(n), I_s^(n)) ds − Σ_{j=2}^n e^{−ατ_j} H(X_{τ_j}^(n), I_{j−1}, I_j) | F_σ ]

= E_{x,i}[ ∫_σ^{τ_1} e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ_1} H(X_{τ_1}^(0), I_0, I_1) | F_σ ] + E_{x,i}[ E_{x,i}[ ∫_{τ_1}^{τ_{c,d}} e^{−αs} f(X_s^(n−1), I_s^(n−1)) ds − Σ_{j=2}^n e^{−ατ_j} H(X_{τ_j}^(n−1), I_{j−1}, I_j) | F_{τ_1} ] | F_σ ]

≤ E_{x,i}[ ∫_σ^{τ_1} e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ_1} H(X_{τ_1}^(0), I_0, I_1) + Z_{τ_1}^(n−1) | F_σ ]. (A.16)

Now, taking the essential supremum of the left-hand side over all sequences in S_σ^n, we establish the desired inequality. Our proof in this step can be contrasted with the approach of Hamadène and Jeanblanc (2007), which uses the recently developed theory of Reflected Backward Stochastic Differential Equations to establish a similar result. The proof method we use above is more direct. On the other hand, as pointed out on page 14 of Carmona and Ludkovski (2008), it may be difficult to generalize the method of Hamadène and Jeanblanc (2007) to cases in which there are more than two regimes.

Step 3.
In this step we will argue that

(A.17) Z_t^(n) = e^{−αt} q^(n)(X_t^(n), I_t^(n)), t ≥ 0,

in which I_t^(0) = I_0, t ≥ 0, and q^(n) is continuous in the x-variable. We will carry out the proof using induction. First, let us write q^(1) as

q^(1)(x, i) = sup_{τ ∈ S} E_{x,i}[ ∫_0^{τ_{c,d}} e^{−αs} f(X_s^(1), I_s^(1)) ds − e^{−ατ} H(X_τ^(1), I_0, I_1) ]
= sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds + ∫_τ^{τ_{c,d}} e^{−αs} f(X_s^(1), I_1) ds − e^{−ατ} H(X_τ^(1), I_0, I_1) ]
= sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds + e^{−ατ} E_{X_τ^(0), I_1}[ ∫_0^{τ_{c,d}} e^{−αs} f(X_s^(0), I_1) ds ] − e^{−ατ} H(X_τ^(0), I_0, I_1) ]
= q^(0)(x, i) + sup_{τ ∈ S} E_{x,i}[ e^{−ατ}( −q^(0)(X_τ^(0), I_0) + q^(0)(X_τ^(0), I_1) − H(X_τ^(0), I_0, I_1) ) ]. (A.18)

Let θ be the shift operator. The third equality in (A.18) follows from the strong Markov property of (X_s^(0))_{s ≥ 0} and (X_s^(1), I_s^(1))_{s ≥ 0} and the fact that

(A.19) τ_{c,d} = τ + τ_{c,d} ∘ θ_τ, for any τ ∈ S,

using which we can write

(A.20) E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds ] = q^(0)(x, i) − E_{x,i}[ e^{−ατ} q^(0)(X_τ^(0), I_0) ],

and

E_{x,i}[ ∫_τ^{τ_{c,d}} e^{−αs} f(X_s^(1), I_1) ds ] = E_{x,i}[ e^{−ατ} E_{X_τ^(0), I_1}[ ∫_0^{τ_{c,d}} e^{−αs} f(X_s^(0), I_1) ds ] ] = E_{x,i}[ e^{−ατ} q^(0)(X_τ^(0), I_1) ]. (A.21)

It is well known in the optimal stopping theory that (A.17) holds for n = 1 if

(A.22) A := E_{x,i}[ sup_{t ≥ 0} e^{−αt}( −q^(0)(X_t^(0), I_0) + q^(0)(X_t^(0), I_1) − H(X_t^(0), I_0, I_1) )^− ] < ∞,

and

(A.23) x → −q^(0)(x, I_0) + q^(0)(x, I_1) − H(x, I_0, I_1), x ∈ (c, d), is continuous;

see Theorem 1 of Fakeev (1971). (Fakeev requires the continuity of t → e^{−αt}( −q^(0)(X_t^(0), I_0) + q^(0)(X_t^(0), I_1) − H(X_t^(0), I_0, I_1) ); but this requirement is readily satisfied in our case, since X^(0) is continuous and since the function in (A.23) is continuous, by the continuity assumption on H and (2.18).) On the other hand, the growth conditions (2.4) and (2.6) guarantee that (A.22) holds (using (2.7)).

Now let us assume that (A.17) holds when n is replaced by n − 1 and that q^(n−1) is continuous in the x-variable, and let us show that (A.17) holds and that q^(n) is continuous in the x-variable. From Step 2 and the induction hypothesis we can write q^(n) as

q^(n)(x, i) = sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ} H(X_τ^(0), I_0, I_1) + Z_τ^(n−1) ]
= sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ}( H(X_τ^(0), I_0, I_1) − q^(n−1)(X_τ^(n−1), I_τ^(n−1)) ) ]
= sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), I_0) ds − e^{−ατ}( H(X_τ^(0), I_0, I_1) − q^(n−1)(X_τ^(0), I_1) ) ]
= q^(0)(x, i) + sup_{τ ∈ S} E_{x,i}[ e^{−ατ}( −q^(0)(X_τ^(0), I_0) + q^(n−1)(X_τ^(0), I_1) − H(X_τ^(0), I_0, I_1) ) ], (A.24)

where the third equality follows since X_t^(n−1) = X_t^(0) for t ≤ τ (and I_τ^(n−1) = I_1), and the last equality can be derived using the strong Markov property of (X_t^(0))_{t ≥ 0} and (X_t^(n−1), I_t^(n−1))_{t ≥ 0}. The functions H and q^(0) are continuous in the x-variable, and q^(n−1) is assumed to satisfy the same property. On the other hand, the quantity

(A.25) B := E_{x,i}[ sup_{t ≥ 0} e^{−αt}( −q^(0)(X_t^(0), I_0) + q^(n−1)(X_t^(0), I_1) − H(X_t^(0), I_0, I_1) )^− ]

satisfies B ≤ A < ∞, in which A is defined in (A.22), since (q^(n))_{n ∈ N} is an increasing sequence of functions. Therefore, Theorem 1 of Fakeev (1971) implies that (A.17) holds. On the other hand, Lemma 4.2, Proposition 5.6 and Proposition 5.13 of Dayanik and Karatzas (2003) guarantee that q^(n) is continuous. This concludes our induction argument and hence Step 3.

Step 4.
In this step we will show that the statement of the lemma holds, using the results proved in the previous steps. By definition we already have that

(A.26) q^(0)(x, i) = w^(0)(x, i).

Let us assume that the statement holds for n replaced by n − 1. From the previous step and the induction hypothesis we have that

q^(n)(x, i) = q^(0)(x, i) + sup_{τ ∈ S} E_{x,i}[ e^{−ατ}( −q^(0)(X_τ^(0), I_0) + q^(n−1)(X_τ^(0), I_1) − H(X_τ^(0), I_0, I_1) ) ]
= q^(0)(x, i) + sup_{τ ∈ S} E_{x,i}[ e^{−ατ}( −q^(0)(X_τ^(0), I_0) + w^(n−1)(X_τ^(0), I_1) − H(X_τ^(0), I_0, I_1) ) ]
= sup_{τ ∈ S} E_{x,i}[ ∫_0^τ e^{−αs} f(X_s^(0), i) ds + e^{−ατ}( w^(n−1)(X_τ^(0), 1 − i) − H(X_τ^(0), i, 1 − i) ) ]
= w^(n)(x, i), (A.27)

where the third equality follows from (A.20). This completes the proof.

References
Alvarez, L. H. R. (2001). Singular stochastic control, linear diffusions, and optimal stopping: A class of solvable problems, SIAM Journal on Control and Optimization 39(6): 1697–1710.
Alvarez, L. H. R. (2003). On the properties of r-excessive mappings for a class of diffusions, Annals of Applied Probability 13(4): 1517–1533.
Alvarez, L. H. R. (2004). A class of solvable impulse control problems, Applied Mathematics and Optimization: 265–295.
Bayraktar, E. and Egami, M. (2007). On the one-dimensional optimal switching problem.
Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion: Facts and Formulae, Birkhäuser, Boston.
Brekke, K. A. and Øksendal, B. (1994). Optimal switching in an economic activity under uncertainty, SIAM Journal on Control and Optimization 32(4): 1021–1036.
Brennan, M. and Schwarz, E. (1985). Evaluating natural resource investments, Journal of Business: 135–157.
Carmona, R. and Dayanik, S. (2008). Optimal multiple-stopping of linear diffusions and swing options, Mathematics of Operations Research 33(2): 446–460.
Carmona, R. and Ludkovski, M. (2008). Pricing asset scheduling flexibility using optimal switching agreements, Applied Mathematical Finance 15(6): 405–447.
Davis, M. H. A. (1993). Markov Models and Optimization, Vol. 49 of Monographs on Statistics and Applied Probability, Chapman & Hall, London.
Dayanik, S. and Karatzas, I. (2003). On the optimal stopping problem for one-dimensional diffusions, Stochastic Processes and their Applications 107(2): 173–212.
Dixit, A. (1989). Entry and exit decisions under uncertainty, Journal of Political Economy: 620–638.
Duckworth, K. and Zervos, M. (2001). A model for investment decisions with switching costs, Annals of Applied Probability 11(1): 239–260.
Dynkin, E. (1965). Markov Processes, Volume II, Springer-Verlag, Berlin.
Fakeev, A. G. (1971). Optimal stopping of a Markov process, Theory of Probability and Its Applications: 694–696.
Guo, X. and Tomecek, P. (2008). Connections between singular control and optimal switching, SIAM Journal on Control and Optimization (1): 421–443.
Hamadène, S. and Jeanblanc, M. (2007). On the starting and stopping problem: Applications in reversible investments, Mathematics of Operations Research: 182–192.
Itô, K. and McKean, H. P. (1974). Diffusion Processes and Their Sample Paths, Springer-Verlag, New York.
Johnson, T. C. and Zervos, M. (2009). The explicit solution to a sequential switching problem with non-smooth data, to appear in Stochastics.
Karatzas, I. and Shreve, S. E. (1998). Methods of Mathematical Finance, Springer-Verlag, New York.
Karlin, S. and Taylor, H. M. (1981). A Second Course in Stochastic Processes, Academic Press, Orlando, FL.
Lebedev, N. N. (1972). Special Functions and Their Applications, Dover Publications, New York.
Øksendal, B. and Sulem, A. (2005). Applied Stochastic Control of Jump Diffusions, Springer-Verlag, New York.
Pham, H. (2007). On the smooth-fit property of one-dimensional optimal switching problems, Séminaire de Probabilités XL: 187–201.
Pham, H. and Vath, V. L. (2007). Explicit solution to an optimal switching problem in the two-regime case, SIAM Journal on Control and Optimization 46(2): 395–426.
Tang, S. J. and Yong, J. M. (1993). Finite horizon stochastic optimal switching and impulse controls with a viscosity solution approach, Stochastics and Stochastics Reports (3-4): 145–176.
Zervos, M. (2003). A problem of sequential entry and exit decisions combined with discretionary stopping, SIAM Journal on Control and Optimization 42(1): 397–421.

(E. Bayraktar) Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
E-mail address: [email protected]

(M. Egami) Graduate School of Economics, Kyoto University, Sakyo-Ku, Kyoto 606-8501, Japan
E-mail address: