Perfect and partial hedging for swing game options in discrete time
arXiv [q-fin.PR]

YAN DOLINSKY, YONATHAN IRON AND YURI KIFER
INSTITUTE OF MATHEMATICS, HEBREW UNIVERSITY OF JERUSALEM
JERUSALEM, ISRAEL
Abstract.
The paper introduces and studies hedging for a game (Israeli) style extension of swing options considered as multiple exercise derivatives. Assuming that the underlying security can be traded without restrictions, we derive a formula for valuation of multiple exercise options via classical hedging arguments. Introducing the notion of the shortfall risk for such options, we also study partial hedging, which leads to minimization of this risk.

1. Introduction
Swing contracts emerging in energy and commodity markets (see [1] and [4]) are often modeled by multiple exercising of American style options, which leads to multiple stopping problems (see, for instance, [6], [2] and [8]). Most closely, such models describe options consisting of a package of claims or rights which can be exercised in a prescribed (or in any) order with some restrictions, such as a delay time between successive exercises. Observe that the peculiarities of multiple exercise options are due only to restrictions such as an order of exercises and a delay time between them, since without restrictions the above claims or rights could be considered as separate options which should be dealt with independently.

Attempts to valuate swing options in multiple exercise models are usually reduced to maximizing the total expected gain of the buyer, which is the expected payoff in the corresponding multiple stopping problem, deviating from what has now become the classical and generally accepted methodology of pricing derivatives via hedging and replication arguments. This digression is sometimes explained by difficulties in using an underlying commodity in a hedging portfolio in view of the high cost of storage, for instance, in the case of electricity. We will not discuss here in depth practical possibilities of hedging in energy markets but only observe that the seller of a swing option could, for instance, use for hedging certain securities linked to a corresponding commodity (electricity, gas, oil etc.) index. Another instrument which can be used for hedging is an appropriate basket of stocks of major companies in the corresponding branch whose profit depends in a computable way on the price of the commodity in question. Though such indirect hedging may seem not very precise, it may still be helpful taking into account that all usable mathematical models of financial markets cannot describe them precisely and are used
Date: October 26, 2018.
2000 Mathematics Subject Classification. Primary: 91B28; Secondary: 60G40, 91B30.
Key words and phrases. Hedging, multiple exercise derivatives, game options, shortfall risk.
Partially supported by the ISF grant no. 130/06.

usually only as an auxiliary tool. Another theoretical, though perhaps not very realistic in practice, possibility is to buy from (and sell to) power stations an extra capacity for electricity production instead of storing electricity itself and to use it as the underlying risky security for a hedging portfolio. We observe also that multiple exercise options may appear in their own right when an investor wants to buy or sell an underlying security in several instalments at times of his choosing. In any case, the study of hedging for multiple exercise options is sufficiently motivated from the financial point of view, and it leads to interesting mathematical problems. In this paper we assume that the underlying security can be used for construction of a hedging portfolio without restrictions, as in the usual theory of derivatives, and, moreover, we will deal here with the more general game (Israeli) option (contingent claim) setup where both the buyer (holder) and the seller (writer) of the option can exercise or cancel, respectively, the claims (or rights) in a given order but, as in [5], each cancellation entails a penalty payment by the seller. This required us, in particular, to extend the Dynkin games machinery to the multiple stopping setup.

In this paper a discrete time swing (multi stopping) game option is a contract between its seller and buyer which allows the seller to cancel (or terminate) and the buyer to exercise L specific claims or rights in a particular order. Such a contract is determined by 2L payoff processes X_i(n) ≥ Y_i(n) ≥ 0, n = 0, 1, ...
, N, i = 1, 2, ..., L, adapted to a filtration F_n, n ≥ 0, which is generated, as usual, by the price process S_n, n ≥ 0, of the underlying security. If the buyer exercises the k-th claim, k ≤ L, at the time n then the seller pays him the amount Y_k(n), but if the latter cancels the claim k at the time n before the buyer, he has to pay to the buyer the amount X_k(n), and the difference δ_k(n) = X_k(n) − Y_k(n) is viewed as the cancellation penalty. In addition, we require a delay of one unit of time between successive exercises and cancellations. Observe that, unlike some other papers (cf. [2]), we allow payoffs depending on the exercise number, so, for instance, our options may change from call to put and vice versa after different exercises.

The first goal of this paper is to develop a mathematical theory for pricing of swing game options. The standard definition of the fair price of a derivative security in a complete market is the minimal initial capital needed to create a (perfect) hedging portfolio, and so we have to start with a precise definition of a perfect hedge. Observe that a natural definition of a perfect hedge in a multi exercise framework is not a straightforward extension of the standard one and it has certain peculiarities. Namely, the seller of the option does not know in advance when the buyer will exercise the (j − 1)-th claim, so his hedging of the j-th claim should depend on this (random) time and on the capital he is left with in the portfolio after the (j − 1)-th payoff; thus the hedging of the j-th claim should depend on the past behavior of both the seller and the buyer of the option. Actually, an optimal portfolio allocation depends also on the payoff processes of the future claims. The construction of hedging strategies in the multiple exercise setup requires a nontrivial additional iterative procedure, in contrast to the 1-exercise case where perfect hedging strategies are obtained directly from the martingale representation. Several papers dealt with mathematical analysis of swing American options (see, for instance, [2] and [8]) but none of these papers defined explicitly what is a perfect hedge and what is the option price.
In [8] the authors studied a specific type of swing American options, but they treated the problem from the buyer's point of view, who in general is not interested in hedging but only in a stopping strategy which will provide him a maximal profit. In [2] the authors studied an optimal multi stopping problem for continuous time models but they did not explain why the value of the above problem under the martingale measure in a complete market is the option price. In this paper we define the notion of a perfect hedge for swing game options, which generalize swing American options, prove that in the binomial Cox–Ross–Rubinstein (CRR) market the option price V* is equal to the value of the multi stopping Dynkin game with discounted payoffs under the unique martingale measure, and provide a dynamical programming algorithm which allows to compute both this value and a corresponding perfect hedge. Similar results can be obtained for the continuous time Black–Scholes market with the stock price evolving according to the geometric Brownian motion, but in this paper we restrict ourselves to the discrete time setup.

Our second goal is to study hedging with risk for swing game options. In real market conditions a writer of an option may not be willing, for various reasons, to tie in a hedging portfolio the full initial capital required for a perfect hedge. In this case the seller is ready to accept a risk that his portfolio value will be less than his obligation to pay and he will need additional funds to fulfill the contract, i.e. the writer must add money to his portfolio from other sources. In our setup the writer is allowed to add money to his portfolio only at moments when the contract is exercised. The shortfall risk is defined as the expectation with respect to the market probability measure of the total sum that the seller added from other sources.
We will show that for any initial capital x < V* there exists a hedge which minimizes the shortfall risk, and this hedge can be computed by a dynamical programming algorithm. Observe that the existence of a hedge minimizing the shortfall risk is not known in continuous time even for usual (one stopping) game options (see [3]). Hedging with risk was not studied before for swing options of any type.

In Section 2 we define explicitly the notions of perfect and partial hedges (the latter, for the shortfall risk case). Relying on these we define the option price and the shortfall risk. Then we state Theorem 2.4 which yields the option price together with the corresponding perfect hedge. Next, we formulate Theorem 2.7 which for a given initial capital provides the shortfall risk and the corresponding optimal hedge together with the dynamical programming algorithm for their computation. In Section 3 we derive auxiliary lemmas needed in the proofs, introduce the concept of a multi stopping Dynkin game and prove existence of a saddle point for this game. Sections 4 and 5 are devoted to the proofs of Theorems 2.4 and 2.7, respectively.

2. Preliminaries and main results
Let Ω = {1, −1}^N be the space of finite sequences ω = (ω_1, ω_2, ..., ω_N), ω_i ∈ {1, −1}, with the product probability P = {p, 1 − p}^N, p > 0. Consider the binomial model of a financial market which is active at times n = 0, 1, ..., N < ∞ and consists of a savings account B_n with an interest rate r, which without loss of generality (by discounting) we assume to be zero, i.e.

(2.1) B_n = B_0 > 0,

and of a stock whose price at time n equals

(2.2) S_n = S_0 ∏_{i=1}^n (1 + ρ_i), S_0 > 0,

where ρ_i(ω_1, ω_2, ..., ω_N) = (a + b)/2 + ((b − a)/2) ω_i and −1 < a < 0 < b. Thus ρ_i, i = 1, ..., N form a sequence of independent identically distributed (i.i.d.) random variables on the probability space (Ω, P) taking values b and a with probabilities p and 1 − p, respectively. Recall that the binomial CRR model is complete (see [9]), S_n, n ≥ 0 is adapted to the filtration F_n = σ{ρ_k, k ≤ n}, F_0 = {∅, Ω}, and the unique martingale measure is given by P̃ = {p̃, 1 − p̃}^N where p̃ = a/(a − b).

We consider a swing option of the game type which has the i-th payoff, i ≥ 1,

(2.3) H^{(i)}(m, n) = X_i(m) I_{m<n} + Y_i(n) I_{n≤m}

when the seller cancels the i-th claim at the time m and the buyer exercises it at the time n. For n ≤ N denote by Γ_n the set of stopping times with values in {n, n + 1, ..., N} and set Γ = Γ_0.

Definition 2.1. A stopping strategy is a sequence s = (s_1, ..., s_L) such that s_1 ∈ Γ is a stopping time and for i > 1, s_i : C_{i−1} → Γ is a map which satisfies s_i((a_1, ..., a_{i−1}), (d_1, ..., d_{i−1})) ∈ Γ_{N∧(1+a_{i−1})}, where C_{i−1} = {0, 1, ..., N}^{i−1} × {0, 1}^{i−1} records the times of the first i − 1 payoffs and the indicators of whether each of them was a cancellation.

In other words, for the i-th payoff both the seller and the buyer choose stopping times taking into account the history of payoffs so far. Denote by S the set of all stopping strategies and define the map F : S × S → Γ^L × Γ^L by F(s, b) = ((σ_1, ..., σ_L), (τ_1, ..., τ_L)) where σ_1 = s_1, τ_1 = b_1 and for i > 1,

(2.4) σ_i = s_i((σ_1 ∧ τ_1, ..., σ_{i−1} ∧ τ_{i−1}), (I_{σ_1<τ_1}, ..., I_{σ_{i−1}<τ_{i−1}}))

and

(2.5) τ_i = b_i((σ_1 ∧ τ_1, ..., σ_{i−1} ∧ τ_{i−1}), (I_{σ_1<τ_1}, ..., I_{σ_{i−1}<τ_{i−1}})).

Set

(2.6) c_k(s, b) = Σ_{i=1}^L I_{σ_i∧τ_i ≤ k},

which is a random variable equal to the number of payoffs until the moment k. For swing options the notion of a self-financing portfolio involves not only allocation of capital between stocks and the bank account but also payoffs at exercise times.
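The market model (2.1)–(2.2) can be sketched numerically as follows. The parameters below are illustrative (they are not taken from the paper), and the check confirms that under p̃ = a/(a − b) the one-step return has zero mean, so that S_n is a P̃-martingale.

```python
# A minimal numerical sketch (illustrative parameters, not from the paper)
# of the CRR dynamics (2.2): S_n = S_0 * prod_{i<=n} (1 + rho_i) with
# rho_i in {a, b}, -1 < a < 0 < b, and p_tilde = a / (a - b).

def stock_price(S0, rhos):
    """Stock price after the successive moves rho_1, ..., rho_n."""
    S = S0
    for rho in rhos:
        S *= 1.0 + rho
    return S

a, b = -0.1, 0.2
p_tilde = a / (a - b)                       # ~ 1/3 for these parameters

# Under p_tilde the one-step return has zero mean, so S_n is a martingale:
mean_return = p_tilde * b + (1.0 - p_tilde) * a
print(p_tilde, mean_return)                 # ~ 0.3333 and ~ 0.0

print(stock_price(100.0, [b, a, b]))        # 100 * 1.2 * 0.9 * 1.2 ~ 129.6
```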
At the time k the writer's decision how much money to invest in stocks (while depositing the remaining money into a bank account) depends not only on his present portfolio value but also on the current claim. Denote by Ξ the set of functions on the (finite) probability space Ω.

Definition 2.2. A portfolio strategy with an initial capital x > 0 is a pair π = (x, γ) where γ : {0, ..., N − 1} × {1, ..., L} × R → Ξ is a map such that γ(k, i, y) is an F_k-measurable random variable which represents the number of stocks which the seller buys at the moment k provided that the current claim has the number i and the present portfolio value is y. At the same time the sum y − γ(k, i, y) S_k is deposited to the bank account of the portfolio. We call a portfolio strategy π = (x, γ) admissible if for any y ≥ 0,

(2.7) −y/(S_k b) ≤ γ(k, i, y) ≤ −y/(S_k a).

For any y ≥ 0 denote K(y) = [−y/b, −y/a]. Notice that if the portfolio value at the moment k is y ≥ 0 then the portfolio value at the moment k + 1 before the payoffs (if there are any payoffs at this time) is given by y + γ(k, i, y) S_k (S_{k+1}/S_k − 1), where i is the number of the next payoff. In view of the independence of S_{k+1}/S_k − 1 and γ(k, i, y) S_k we conclude that the inequality (2.7) is equivalent to the inequality y + γ(k, i, y) S_k (S_{k+1}/S_k − 1) ≥ 0, i.e. the portfolio value at the moment k + 1 before the payoffs is nonnegative. Denote by A(x) the set of all admissible portfolio strategies with an initial capital x > 0 and put A = ∪_{x>0} A(x). Let π = (x, γ) be a portfolio strategy and s, b ∈ S. Set ((σ_1, ..., σ_L), (τ_1, ..., τ_L)) = F(s, b) and c_k = c_k(s, b).
The portfolio value at the moment k after the payoffs (if there are any payoffs at this moment) is given by V^{(π,s,b)}_0 = x − H^{(1)}(σ_1, τ_1) I_{σ_1∧τ_1=0} and for k > 0,

(2.8) V^{(π,s,b)}_k = V^{(π,s,b)}_{k−1} + I_{c_{k−1}<L} γ(k − 1, c_{k−1} + 1, V^{(π,s,b)}_{k−1})(S_k − S_{k−1}) − Σ_{i=c_{k−1}+1}^{c_k} H^{(i)}(σ_i, τ_i).

Definition 2.3. A perfect hedge is a pair (π, s) which consists of a portfolio strategy and a stopping strategy such that V^{(π,s,b)}_k ≥ 0 for any b ∈ S and k ≤ N.

Observe that if (π, s) is a perfect hedge then without loss of generality we can assume that π is an admissible portfolio strategy, and throughout this paper we will consider only admissible portfolio strategies. As usual, the option price V* is defined as the infimum of initial capitals x such that there exists a perfect hedge (π, s) with π ∈ A(x). The following theorem provides a dynamical programming algorithm for computation of both the option price and the corresponding perfect hedge.

Theorem 2.4. Denote by Ẽ the expectation with respect to the unique martingale measure P̃. For any n ≤ N set

(2.9) X^{(1)}_n = X_L(n), Y^{(1)}_n = Y_L(n) and V^{(1)}_n = min_{σ∈Γ_n} max_{τ∈Γ_n} Ẽ(H^{(L)}(σ, τ)|F_n)

and for 1 < k ≤ L,

(2.10) X^{(k)}_n = X_{L−k+1}(n) + Ẽ(V^{(k−1)}_{(n+1)∧N}|F_n), Y^{(k)}_n = Y_{L−k+1}(n) + Ẽ(V^{(k−1)}_{(n+1)∧N}|F_n) and V^{(k)}_n = min_{σ∈Γ_n} max_{τ∈Γ_n} Ẽ(X^{(k)}_σ I_{σ<τ} + Y^{(k)}_τ I_{σ≥τ}|F_n).

Then

(2.11) V* = V^{(L)}_0 = min_{s∈S} max_{b∈S} G(s, b),

where G(s, b) = Ẽ Σ_{i=1}^L H^{(i)}(σ_i, τ_i) and ((σ_1, ..., σ_L), (τ_1, ..., τ_L)) = F(s, b).
Furthermore, the stopping strategies s* = (s*_1, ..., s*_L) ∈ S and b* = (b*_1, ..., b*_L) ∈ S given by

s*_1 = N ∧ min{k | X^{(L)}_k = V^{(L)}_k}, b*_1 = min{k | Y^{(L)}_k = V^{(L)}_k},

(2.12) s*_i((a_1, ..., a_{i−1}), (d_1, ..., d_{i−1})) = N ∧ min{k > a_{i−1} | X^{(L−i+1)}_k = V^{(L−i+1)}_k},

b*_i((a_1, ..., a_{i−1}), (d_1, ..., d_{i−1})) = N ∧ min{k > a_{i−1} | Y^{(L−i+1)}_k = V^{(L−i+1)}_k}, i > 1,

satisfy

(2.13) G(s*, b) ≤ G(s*, b*) ≤ G(s, b*) for all s, b,

and there exists a portfolio strategy π* ∈ A(V^{(L)}_0) such that (π*, s*) is a perfect hedge.

Next, consider an option seller whose initial capital x is less than the option price, i.e. x < V*. In this case the seller must (in order to fulfill his obligation to the buyer) add money to his portfolio from other sources. In our setup the seller is allowed to add money to his portfolio only at times when the contract is exercised. We also require that after the addition of money by the seller the portfolio value must be positive.

Definition 2.5. An infusion of capital is a map I : {0, ..., N} × {1, ..., L} × R → Ξ such that I(k, j, y) ≥ (−y)^+ is F_k-measurable, I(k, L, y) = (−y)^+ for any k, and for any j < L, I(N, j, y) = ((Σ_{i=j+1}^L Y_i(N)) − y)^+. The set of such maps will be denoted by I.

Thus I(k, j, y) is the amount that the seller adds to his portfolio after the j-th payoff paid at the moment k when the portfolio value after this payment is y. When k = N or j = L then clearly I(k, j, y) is the minimal amount which the seller should add in order to fulfill his obligation to the buyer. Observe that when k = N one infusion of capital into the seller's portfolio is already sufficient in order to fulfill his obligations even if there are additional payoffs at this moment, so we conclude that at each step at which the contract is exercised there is no more than one infusion of capital.
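Before passing to partial hedging, the backward recursion (2.9)–(2.10) of Theorem 2.4 can be illustrated numerically. The sketch below (all payoff processes and parameters are illustrative choices of ours, not taken from the paper) computes V^{(L)}_0 on the binomial tree by nesting one-claim Dynkin game values; for each Dynkin game we use its standard dynamical programming form V_n = min(X_n, max(Y_n, Ẽ(V_{n+1}|F_n))) with V_N = Y_N.

```python
# A hedged numerical sketch (illustrative payoffs and parameters, not from
# the paper) of the recursion (2.9)-(2.10): V^{(k)} is the value with k
# claims left, obtained by nesting one-claim Dynkin game values, each
# computed by the dynamical programming form
#   V_n = min(X_n, max(Y_n, E~(V_{n+1} | F_n))),  V_N = Y_N.

a, b, S0, N, L = -0.1, 0.2, 100.0, 3, 2
pt = a / (a - b)                       # martingale probability p~

def S(n, j):                           # stock price after j up-moves out of n
    return S0 * (1 + b) ** j * (1 + a) ** (n - j)

def Y_pay(i, n, j):                    # exercise payoff of claim i: a put
    return max(110.0 - S(n, j), 0.0)   # (all claims identical here for brevity)

def X_pay(i, n, j):                    # cancellation payoff: exercise + penalty
    return Y_pay(i, n, j) + 5.0

def dynkin(X, Y):
    """Values V_n of the Dynkin game with payoffs X >= Y on the binomial tree."""
    V = {(N, j): Y(N, j) for j in range(N + 1)}
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            cont = pt * V[(n + 1, j + 1)] + (1 - pt) * V[(n + 1, j)]
            V[(n, j)] = min(X(n, j), max(Y(n, j), cont))
    return V

V_prev = {(n, j): 0.0 for n in range(N + 1) for j in range(n + 1)}
for k in range(1, L + 1):              # claims are settled in the order L, ..., 1
    i = L - k + 1
    def cont_next(n, j, Vp=V_prev):    # E~(V^{(k-1)}_{(n+1) ^ N} | F_n)
        if n == N:
            return Vp[(N, j)]
        return pt * Vp[(n + 1, j + 1)] + (1 - pt) * Vp[(n + 1, j)]
    Xk = lambda n, j, i=i, c=cont_next: X_pay(i, n, j) + c(n, j)
    Yk = lambda n, j, i=i, c=cont_next: Y_pay(i, n, j) + c(n, j)
    V_prev = dynkin(Xk, Yk)

price = V_prev[(0, 0)]                 # the option price V* = V^{(L)}_0
print(price)                           # ~ 31.0 for these parameters
```

The default arguments `Vp=V_prev` and `c=cont_next` freeze the previous level of the recursion inside the loop, which mirrors the iterative structure of (2.10): the modified payoffs of level k are the raw payoffs plus the conditional expectation of the level k − 1 value one step (the mandatory delay) ahead.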
Definition 2.6. A hedge with an initial capital x < V* is a triple (π, I, s) ∈ A(x) × I × S which consists of an admissible portfolio strategy with an initial capital x, an infusion of capital and a stopping strategy.

Let (π, I, s) be a hedge and b ∈ S be a stopping strategy for the buyer. Set ((σ_1, ..., σ_L), (τ_1, ..., τ_L)) = F(s, b) and c_k = c_k(s, b). Define the stochastic processes {W^{(π,I,s,b)}_k}_{k=0}^N and {V^{(π,I,s,b)}_k}_{k=0}^N of the portfolio values before and after a possible infusion of capital by

(2.14) W^{(π,I,s,b)}_0 = x, V^{(π,I,s,b)}_0 = x − I_{σ_1∧τ_1=0}(H^{(1)}(σ_1, τ_1) − I(0, 1, x − H^{(1)}(σ_1, τ_1)))

and for k > 0,

(2.15) W^{(π,I,s,b)}_k = V^{(π,I,s,b)}_{k−1} + I_{c_{k−1}<L} γ(k − 1, c_{k−1} + 1, V^{(π,I,s,b)}_{k−1})(S_k − S_{k−1}) − Σ_{i=c_{k−1}+1}^{c_k} H^{(i)}(σ_i, τ_i),

while V^{(π,I,s,b)}_k is obtained from W^{(π,I,s,b)}_k by adding the infusion of capital I(k, c_k, W^{(π,I,s,b)}_k) whenever a payoff occurs at the moment k. Denote by C(π, I, s, b) the total amount of capital infused into the portfolio up to the maturity N. Given a hedge (π, I, s) ∈ A × I × S the shortfall risk for it is defined by

(2.16) R(π, I, s) = max_{b∈S} E C(π, I, s, b),

which is the maximal expectation with respect to the market probability measure P of the total infusion of capital. The shortfall risk for the initial capital x is defined by

(2.17) R(x) = inf_{(π,I,s)∈A(x)×I×S} R(π, I, s).

The following result asserts that for any initial capital x there exists a hedge (π, I, s) ∈ A(x) × I × S which minimizes the shortfall risk, and that both the risk and the optimal hedge can be obtained recursively.
Theorem 2.7. For 0 ≤ k ≤ N and (u_1, ..., u_k) ∈ {a, b}^k let f^{(i)}_k(u_1, ..., u_k) and g^{(i)}_k(u_1, ..., u_k) denote the payoffs Y_i(k) and X_i(k), respectively, written as functions of the returns ρ_1 = u_1, ..., ρ_k = u_k. Define a sequence of functions J_k : R_+ × {0, ..., L} × {a, b}^k → R_+, 0 ≤ k ≤ N by the following formulas:

J_N(y, j, u_1, ..., u_N) = ((Σ_{i=L−j+1}^L f^{(i)}_N(u_1, ..., u_N)) − y)^+, j > 0,

(2.18) J_k(y, 0, u_1, ..., u_k) = 0, 0 ≤ k ≤ N,

and for k < N and j > 0,

(2.19) J_k(y, j, u_1, ..., u_k) = min( inf_{z ≥ (g^{(L−j+1)}_k(u_1,...,u_k) − y)^+} inf_{α ∈ K(y+z−g^{(L−j+1)}_k(u_1,...,u_k))} (z + p J_{k+1}(y + z − g^{(L−j+1)}_k(u_1, ..., u_k) + bα, j − 1, u_1, ..., u_k, b) + (1 − p) J_{k+1}(y + z − g^{(L−j+1)}_k(u_1, ..., u_k) + aα, j − 1, u_1, ..., u_k, a)), max( inf_{z ≥ (f^{(L−j+1)}_k(u_1,...,u_k) − y)^+} inf_{α ∈ K(y+z−f^{(L−j+1)}_k(u_1,...,u_k))} (z + p J_{k+1}(y + z − f^{(L−j+1)}_k(u_1, ..., u_k) + bα, j − 1, u_1, ..., u_k, b) + (1 − p) J_{k+1}(y + z − f^{(L−j+1)}_k(u_1, ..., u_k) + aα, j − 1, u_1, ..., u_k, a)), inf_{α ∈ K(y)} (p J_{k+1}(y + bα, j, u_1, ..., u_k, b) + (1 − p) J_{k+1}(y + aα, j, u_1, ..., u_k, a)))).

Then the shortfall risk for an initial capital x is given by

(2.20) R(x) = J_0(x, L).

Furthermore, the hedge (π̃ = (x, γ̃), Ĩ, s̃) ∈ A(x) × I × S given by the formulas (5.34), (5.37) and (5.46) satisfies

(2.21) R(π̃, Ĩ, s̃) = R(x).

Not surprisingly, the formulas above and their proof are quite technical and complex, since already for one stopping game options the corresponding recurrent formulas for the shortfall risk in [3] and their proof are rather complicated. Our method extends the approach of [3] by relying on the dynamical programming algorithm for Dynkin games with appropriately modified payoff processes.

Remark 2.8. Some applications may require a more general setup where the first payoff is as before but the i-th payoff for i > 1 depends also on the first time when the i-th claim can be exercised, i.e. the i-th payoff depends on the time of the (i − 1)-th payoff.
The first payoff is exactly as in formula (2.3). For i > 1 we set, for all m, n ≥ k,

H^{(i,k)}(m, n) = X_{i,k}(m) I_{m<n} + Y_{i,k}(n) I_{n≤m},

where k is the first time at which the i-th claim can be exercised.

3. Auxiliary lemmas and Dynkin games

The following lemma is a well known result about Dynkin games (see [7]) which will be used for proving Theorems 2.4 and 2.7.

Lemma 3.1. Let {X_n}_{n=0}^N and {Y_n}_{n=0}^N be two adapted stochastic processes with X_n ≥ Y_n ≥ 0. Set R(m, n) = I_{m<n} X_m + I_{n≤m} Y_n and for n ≤ N let V_n = min_{σ∈Γ_n} max_{τ∈Γ_n} E(R(σ, τ)|F_n) be the value of the corresponding Dynkin game. Then the stopping times σ*_n = N ∧ min{k ≥ n | X_k = V_k} and τ*_n = min{k ≥ n | Y_k = V_k} form a saddle point of the game, i.e. E(R(σ*_n, τ)|F_n) ≤ V_n ≤ E(R(σ, τ*_n)|F_n) for any σ, τ ∈ Γ_n; moreover, the stopped sequence {V_{σ*_n∧k}}_{k=n}^N is a supermartingale, and so E V_{σ∧τ} ≤ E V_θ for any stopping times σ, τ and θ satisfying θ ≤ σ ∧ τ and σ ≤ σ*_θ.

Proposition 3.2. For any s, b ∈ S,

(3.1) G(s*, b) ≤ G(s*, b*) ≤ G(s, b*),

where s* and b* are the same as in (2.12).

The above statement is, actually, a part of Theorem 2.4 (see (2.13)) but since it holds true in a wider setting we give it separately. Observe also that the above result is correct for different definitions of strategies. For instance, we could take s_i to be dependent only on the last time a_{i−1}, but in order to be consistent we provide the argument only for the strategies set S. In fact, it is easy to see that in the proof we just use the assumption σ_i, τ_i ≥ (σ_{i−1} ∧ τ_{i−1} + 1) ∧ N. Before we pass to the proof of Proposition 3.2 we shall derive the following key lemma.

Lemma 3.3. For s, b ∈ S set F(s*, b) = ((σ*_1, ..., σ*_L), (τ_1, ..., τ_L)) and F(s, b*) = ((σ_1, ..., σ_L), (τ*_1, ..., τ*_L)). For every 0 ≤ n ≤ N put X^{(0)}_n = Y^{(0)}_n = V^{(0)}_n = 0 and for any 1 ≤ i ≤ L define R^{(i)}(σ, τ) = I_{σ<τ} X^{(i)}_σ + I_{σ≥τ} Y^{(i)}_τ. Then

(3.2) E(R^{(i−1)}(σ*_{L−i+2}, τ_{L−i+2}) + H^{(L−i+1)}(σ*_{L−i+1}, τ_{L−i+1})) ≤ E(R^{(i)}(σ*_{L−i+1}, τ_{L−i+1}))

and

(3.3) E(R^{(i−1)}(σ_{L−i+2}, τ*_{L−i+2}) + H^{(L−i+1)}(σ_{L−i+1}, τ*_{L−i+1})) ≥ E(R^{(i)}(σ_{L−i+1}, τ*_{L−i+1})).

Proof. We shall give only the proof of inequality (3.2) since (3.3) can be proven in a similar way.
Set η_i = (σ*_i ∧ τ_i + 1) ∧ N. Then we obtain from the definitions that

R^{(i)}(σ*_{L−i+1}, τ_{L−i+1}) = I_{σ*_{L−i+1}<τ_{L−i+1}} X^{(i)}_{σ*_{L−i+1}∧τ_{L−i+1}} + I_{σ*_{L−i+1}≥τ_{L−i+1}} Y^{(i)}_{σ*_{L−i+1}∧τ_{L−i+1}} = I_{σ*_{L−i+1}<τ_{L−i+1}} (X_{L−i+1}(σ*_{L−i+1} ∧ τ_{L−i+1}) + E(V^{(i−1)}_{η_{L−i+1}}|F_{σ*_{L−i+1}∧τ_{L−i+1}})) + I_{σ*_{L−i+1}≥τ_{L−i+1}} (Y_{L−i+1}(σ*_{L−i+1} ∧ τ_{L−i+1}) + E(V^{(i−1)}_{η_{L−i+1}}|F_{σ*_{L−i+1}∧τ_{L−i+1}})) = H^{(L−i+1)}(σ*_{L−i+1}, τ_{L−i+1}) + E(V^{(i−1)}_{η_{L−i+1}}|F_{σ*_{L−i+1}∧τ_{L−i+1}}),

and so

(3.4) E(R^{(i)}(σ*_{L−i+1}, τ_{L−i+1})) = E(H^{(L−i+1)}(σ*_{L−i+1}, τ_{L−i+1})) + E(V^{(i−1)}_{η_{L−i+1}}).

On the other hand,

R^{(i−1)}(σ*_{L−i+2}, τ_{L−i+2}) = I_{σ*_{L−i+2}<τ_{L−i+2}} X^{(i−1)}_{σ*_{L−i+2}} + I_{σ*_{L−i+2}≥τ_{L−i+2}} Y^{(i−1)}_{τ_{L−i+2}} ≤ I_{σ*_{L−i+2}<τ_{L−i+2}} V^{(i−1)}_{σ*_{L−i+2}} + I_{σ*_{L−i+2}≥τ_{L−i+2}} V^{(i−1)}_{τ_{L−i+2}} = V^{(i−1)}_{σ*_{L−i+2}∧τ_{L−i+2}},

which holds true by the definition of σ*_{L−i+2} and the fact that Y^{(i)}_n ≤ V^{(i)}_n for every 0 ≤ n ≤ N and 1 ≤ i ≤ L. Applying the last inequality in Lemma 3.1 with θ = η_{L−i+1} we obtain that

(3.5) E(R^{(i−1)}(σ*_{L−i+2}, τ_{L−i+2})) ≤ E(V^{(i−1)}_{η_{L−i+1}}).

Now (3.2) follows from (3.4) and (3.5). □

Observe that in the special case s = s* and b = b*, if ((σ*_1, ..., σ*_L), (τ*_1, ..., τ*_L)) = F(s*, b*) then the inequalities (3.2) and (3.3) become equalities and

(3.6) E(R^{(i−1)}(σ*_{L−i+2}, τ*_{L−i+2}) + H^{(L−i+1)}(σ*_{L−i+1}, τ*_{L−i+1})) = E(R^{(i)}(σ*_{L−i+1}, τ*_{L−i+1}))

for every 1 < i ≤ L.

Proof of Proposition 3.2.
For b ∈ S let F(s*, b) = ((σ_1(s*, b), ..., σ_L(s*, b)), (τ_1(s*, b), ..., τ_L(s*, b))) and F(s*, b*) = ((σ_1(s*, b*), ..., σ_L(s*, b*)), (τ_1(s*, b*), ..., τ_L(s*, b*))). We shall prove only the left hand side of (3.1) while its right hand side follows in the same way. By Lemma 3.3 we see that for every 1 < i ≤ L,

E(R^{(i−1)}(σ_{L−i+2}(s*, b), τ_{L−i+2}(s*, b)) + Σ_{j=1}^{L−i+1} H^{(j)}(σ_j(s*, b), τ_j(s*, b))) ≤ E(R^{(i)}(σ_{L−i+1}(s*, b), τ_{L−i+1}(s*, b)) + Σ_{j=1}^{L−i} H^{(j)}(σ_j(s*, b), τ_j(s*, b)))

and for (s*, b*),

E(R^{(i−1)}(σ_{L−i+2}(s*, b*), τ_{L−i+2}(s*, b*)) + Σ_{j=1}^{L−i+1} H^{(j)}(σ_j(s*, b*), τ_j(s*, b*))) = E(R^{(i)}(σ_{L−i+1}(s*, b*), τ_{L−i+1}(s*, b*)) + Σ_{j=1}^{L−i} H^{(j)}(σ_j(s*, b*), τ_j(s*, b*))).

By induction it follows that

(3.7) G(s*, b) = E(Σ_{j=1}^L H^{(j)}(σ_j(s*, b), τ_j(s*, b))) ≤ E(R^{(L)}(σ_1(s*, b), τ_1(s*, b)))

and for (s*, b*),

(3.8) G(s*, b*) = E(R^{(L)}(σ_1(s*, b*), τ_1(s*, b*))) = V^{(L)}_0,

where the last term is the value of the usual (one stopping) Dynkin game. Observe that from the definition of s*_1, b*_1, for every b ∈ S the inequality

(3.9) E(R^{(L)}(σ_1(s*, b), τ_1(s*, b))) ≤ E(R^{(L)}(σ_1(s*, b*), τ_1(s*, b*))) = V^{(L)}_0

is just the saddle point property of the usual Dynkin game. From (3.7), (3.9) and (3.8) it follows that

G(s*, b) ≤ E(R^{(L)}(σ_1(s*, b), τ_1(s*, b))) ≤ E(R^{(L)}(σ_1(s*, b*), τ_1(s*, b*))) = G(s*, b*) = V^{(L)}_0. □

As a consequence we obtain:

Corollary 3.4. The multi stopping Dynkin game possesses a saddle point (s*, b*), and so it has a value which is equal to G(s*, b*).
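In the simplest case N = 1 the saddle point property can be checked by brute force: with no information at time 0, the only stopping times are the constants 0 and 1, so the upper and lower values of the one-claim Dynkin game can be compared directly. The numbers below are illustrative, not from the paper.

```python
# A toy verification (illustrative payoffs, not from the paper) that a
# one-period Dynkin game has a value: upper and lower values coincide.
from itertools import product

p = 0.5
X = {0: 6.0, 1: {"u": 5.0, "d": 4.0}}   # seller's cancellation payoff X_n
Y = {0: 3.0, 1: {"u": 5.0, "d": 1.0}}   # buyer's exercise payoff Y_n <= X_n

def R(m, n, w):                          # R(m, n) = X_m I_{m<n} + Y_n I_{n<=m}
    if m < n:
        return X[m] if m == 0 else X[1][w]
    return Y[n] if n == 0 else Y[1][w]

def ER(m, n):                            # expectation over the two scenarios
    return p * R(m, n, "u") + (1 - p) * R(m, n, "d")

# Stopping times here are the constants 0 and 1 (no information at time 0).
values = {(m, n): ER(m, n) for m, n in product([0, 1], repeat=2)}
upper = min(max(values[(m, 0)], values[(m, 1)]) for m in [0, 1])
lower = max(min(values[(0, n)], values[(1, n)]) for n in [0, 1])
print(upper, lower)                      # here both equal 3.0
```

In this example the pair (σ, τ) = (1, 0) attains the common value 3.0, which is the saddle point whose multi stopping analogue is asserted by Corollary 3.4.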
In the remaining part of this section we derive auxiliary lemmas which will be used for the proof of Theorem 2.7.

Definition 3.5. A function ψ : R_+ → R_+ is a piecewise linear function vanishing at ∞ if there exists a natural number n such that

(3.10) ψ(y) = Σ_{i=1}^n I_{[a_i,b_i)}(y)(c_i y + d_i),

where c_1, ..., c_n, d_1, ..., d_n ∈ R and {[a_i, b_i)}_{i=1}^n is a sequence of disjoint finite intervals.

Lemma 3.6. Let A ≥ 0 and let ψ_1, ψ_2 : R_+ → R_+ be continuous, decreasing and piecewise linear functions vanishing at ∞. Define ψ : R_+ → R_+ and ψ_A : R → R_+ by

ψ(y) = min_{λ∈K(y)} (p ψ_1(y + bλ) + (1 − p) ψ_2(y + aλ))

and

ψ_A(y) = inf_{z≥(A−y)^+} (z + ψ(y + z − A)).

Then ψ and ψ_A are continuous, decreasing and piecewise linear functions vanishing at ∞. Furthermore, for each y there exists u ≥ (A − y)^+ such that

(3.11) ψ_A(y) = u + ψ(y + u − A).

Proof. From Lemma 3.3 in [3] it follows that ψ(y) is a decreasing continuous function. Let us show that ψ(y) is a piecewise linear function vanishing at ∞. Since 0 ∈ K(y),

(3.12) ψ(y) ≤ p ψ_1(y) + (1 − p) ψ_2(y) ≤ max(ψ_1(y), ψ_2(y)).

There exists a natural number n such that

(3.13) ψ_i(y) = Σ_{j=1}^n I_{[a_j,b_j)}(y)(c^{(i)}_j y + d^{(i)}_j), i = 1, 2,

where c^{(i)}_j, d^{(i)}_j ∈ R and {[a_j, b_j)}_{j=1}^n is a sequence of disjoint finite intervals. Fix y and define the function φ_y(λ) = p ψ_1(y + bλ) + (1 − p) ψ_2(y + aλ). From (3.13) it follows that there exists

(3.14) λ ∈ {−y/b, −y/a} ∪ {(a_j − y)/b, (b_j − y)/b, (a_j − y)/a, (b_j − y)/a}_{j=1}^n

such that ψ(y) = φ_y(λ). Thus, there exists a finite sequence of real numbers u_1, ..., u_m, v_1, ..., v_m such that for any y,

(3.15) ψ(y) = u_i y + v_i

for some i (which depends on y). This together with (3.12) and the fact that ψ(y) is a continuous function gives that ψ(y) is a piecewise linear function vanishing at ∞. Next, we deal with ψ_A(y). Observe that ψ_A(y) ≤ ψ(0) + (A − y)^+.
Thus

(3.16) ψ_A(y) = inf_{(A−y)^+ ≤ z ≤ (A−y)^+ + ψ(0)} (z + ψ(y + z − A)),

and (3.11) follows from the fact that ψ is continuous. Choose y_1 < y_2. Since ψ(y) is a decreasing function,

(3.17) ψ_A(y_2) ≤ inf_{z≥(A−y_1)^+} (z + ψ(y_2 + z − A)) ≤ inf_{z≥(A−y_1)^+} (z + ψ(y_1 + z − A)) = ψ_A(y_1).

Thus ψ_A(y) is a decreasing function. Now we want to prove continuity. Choose ǫ > 0. Since ψ(y) is a continuous piecewise linear function vanishing at ∞, there exists a δ_1 > 0 such that

(3.18) |y_1 − y_2| < δ_1 ⇒ |ψ(y_1) − ψ(y_2)| < ǫ.

Set δ = min(ǫ, δ_1). We will show that

(3.19) |y_1 − y_2| < δ ⇒ |ψ_A(y_1) − ψ_A(y_2)| ≤ 2ǫ,

assuming without loss of generality that y_1 < y_2. There exists u ≥ (A − y_2)^+ such that

(3.20) ψ_A(y_2) = u + ψ(y_2 + u − A).

If u ≥ (A − y_1)^+ then using (3.18),

(3.21) ψ_A(y_1) − ψ_A(y_2) ≤ u + ψ(y_1 + u − A) − (u + ψ(y_2 + u − A)) ≤ ǫ.

If u < (A − y_1)^+ then |u − (A − y_1)^+| ≤ (A − y_1)^+ − (A − y_2)^+ ≤ δ and |(y_1 + (A − y_1)^+ − A) − (y_2 + u − A)| ≤ δ. Thus from (3.18) it follows that

(3.22) ψ_A(y_1) − ψ_A(y_2) ≤ (A − y_1)^+ + ψ(y_1 + (A − y_1)^+ − A) − (u + ψ(y_2 + u − A)) ≤ 2ǫ.

By (3.21) and (3.22) we obtain (3.19) and conclude that ψ_A(y) is a continuous function. Next, let

(3.23) ψ(y) = Σ_{i=1}^k I_{[α_i,β_i)}(y)(w_i y + x_i),

where k is a natural number, w_i, x_i ∈ R and {[α_i, β_i)}_{i=1}^k is a sequence of disjoint finite intervals. Fix y and define the function φ_{A,y}(z) = z + ψ(y + z − A). From (3.16) and (3.23) it follows that there exists z ∈ {(A − y)^+} ∪ {α_i + A − y, β_i + A − y}_{i=1}^k such that ψ_A(y) = φ_{A,y}(z). Hence, as before, we see that there exists a finite sequence of real numbers U_1, ..., U_M, V_1, ..., V_M such that for any y, ψ_A(y) = U_i y + V_i for some i which depends on y. This together with (3.16) and the fact that ψ_A(y) is a continuous function gives that ψ_A(y) is a piecewise linear function vanishing at ∞.
□

Lemma 3.7. For any 0 ≤ k ≤ N, 0 ≤ j ≤ L and u_1, ..., u_k ∈ {a, b} the function J_k(·, j, u_1, ..., u_k) is continuous, decreasing, piecewise linear and vanishing at ∞.

Proof. We will use backward induction in k. For k = N the statement follows from (2.18). Suppose the statement is correct for k = n + 1 and prove it for k = n. Fix j > 0 (for j = 0 the statement is clear) and u_1, ..., u_n ∈ {a, b}. Set ψ^{(i)}_1(y) = J_{n+1}(y, i, u_1, ..., u_n, b) and ψ^{(i)}_2(y) = J_{n+1}(y, i, u_1, ..., u_n, a). From the induction hypothesis it follows that ψ^{(i)}_1, ψ^{(i)}_2 are continuous, decreasing and piecewise linear functions vanishing at ∞. Thus, applying Lemma 3.6 to the functions ψ^{(j−1)}_1(y), ψ^{(j−1)}_2(y) and A = g^{(L−j+1)}_n(u_1, ..., u_n) we obtain that the first term in (2.19) is a continuous, decreasing and piecewise linear function vanishing at ∞ (with respect to y). Similarly we obtain that the second term in (2.19) is a continuous, decreasing and piecewise linear function vanishing at ∞. Using Lemma 3.6 for the functions ψ^{(j)}_1(y), ψ^{(j)}_2(y) we see that the third term in (2.19) is a continuous, decreasing and piecewise linear function vanishing at ∞. Thus J_n(·, j, u_1, ..., u_n) is a continuous, decreasing and piecewise linear function vanishing at ∞, completing the proof. □

4. Hedging and fair price

In this section we prove Theorem 2.4 starting with the following observation.

Lemma 4.1. Assume Y_k, V_{k+1} are random variables which are respectively F_k and F_{k+1} measurable. Assume that Y_k ≥ Ẽ(V_{k+1}|F_k). Then there exists an F_k-measurable random variable γ_k such that

(4.1) Y_k + γ_k(S_{k+1} − S_k) ≥ V_{k+1}.

Proof. Set V_k = Ẽ(V_{k+1}|F_k). Then by the martingale representation theorem in the binomial model (see, for instance, [9]) there exists an F_k-measurable random variable γ_k such that

V_{k+1} = V_k + γ_k(S_{k+1} − S_k),

and (4.1) follows.
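In the binomial model the random variable γ_k of Lemma 4.1 is explicit: writing V_{k+1}(up) and V_{k+1}(down) for the two possible values of V_{k+1}, the martingale representation gives γ_k = (V_{k+1}(up) − V_{k+1}(down))/(S_k(b − a)). A minimal one-step sketch with illustrative numbers (not from the paper):

```python
# A minimal sketch (illustrative numbers) of the F_k-measurable gamma_k in
# Lemma 4.1: in the binomial model the martingale representation gives the
# explicit one-step hedge ratio
#   gamma_k = (V_{k+1}(up) - V_{k+1}(down)) / (S_k * (b - a)).

a, b = -0.1, 0.2
pt = a / (a - b)                        # martingale probability p~

def one_step_hedge(S_k, V_up, V_down):
    gamma = (V_up - V_down) / (S_k * (b - a))
    V_k = pt * V_up + (1 - pt) * V_down     # V_k = E~(V_{k+1} | F_k)
    return gamma, V_k

S_k = 100.0
V_up, V_down = 12.0, 3.0                # target values at the two successors
gamma, V_k = one_step_hedge(S_k, V_up, V_down)

# The portfolio V_k + gamma * (S_{k+1} - S_k) replicates V_{k+1} in both moves:
print(V_k + gamma * S_k * b)            # ~ 12.0, i.e. V_up
print(V_k + gamma * S_k * a)            # ~ 3.0, i.e. V_down
```

When Y_k is strictly larger than Ẽ(V_{k+1}|F_k), the same γ_k turns the exact replication above into the super-replication inequality (4.1), since the surplus Y_k − V_k is carried through unchanged.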
□

Next, we define a special portfolio strategy π* = (x*, γ*) setting x* = G(s*, b*) = V^{(L)}_0 and taking γ*(k, i, y) to be the random variable γ_k from Lemma 4.1 with respect to Y_k = y and V_{k+1} = V^{(L−i+1)}_{k+1} I_{{y ≥ Ẽ(V^{(L−i+1)}_{k+1}|F_k)}}. Note that if y ≥ Ẽ(V^{(L−i+1)}_{k+1}|F_k) then by Lemma 4.1,

(4.2) y + γ*(k, i, y)(S_{k+1} − S_k) ≥ V^{(L−i+1)}_{k+1}.

Now we obtain.

Lemma 4.2. The pair (π*, s*) is a perfect hedge.

Proof. Let b ∈ S be any stopping strategy. Set F(s*, b) = ((σ*_1, ..., σ*_L), (τ_1, ..., τ_L)). In order to derive that the pair (π*, s*) is a perfect hedge we have to show that for every 0 ≤ k ≤ N, V^{(π*,s*,b)}_k ≥ 0. In fact, we shall see that for every 0 ≤ k ≤ N,

(4.3) V^{(π*,s*,b)}_k ≥ Ẽ(V^{(L−c_k)}_{(k+1)∧N}|F_k),

where c_k = Σ_{i=1}^L I_{{σ*_i∧τ_i ≤ k}}. Since c_k is measurable with respect to the σ-algebra F_k the inequality (4.3) is a consequence of the following inequalities:

V^{(π*,s*,b)}_k I_{c_k=i} ≥ Ẽ(V^{(L−i)}_{(k+1)∧N}|F_k) I_{c_k=i}, 0 ≤ i ≤ L.

For every 0 ≤ i ≤ L the above inequality will be proved by induction in k. For k = 0 we may have either c_0 = 0 or c_0 = 1, where the second event occurs when either the writer or the holder exercised the first claim at the time k = 0. If c_0 = 0 then by (2.8), V^{(π*,s*,b)}_0 = x* = V^{(L)}_0. Since V^{(L)}_{σ*_1∧k} is a supermartingale with respect to {F_k}_{k=0}^N and 1 ≤ σ*_1 ∧ τ_1 ≤ σ*_1 on the event c_0 = 0, which is F_0 measurable, we have on this event

V^{(L)}_0 ≥ Ẽ(V^{(L)}_{σ*_1∧1}) = Ẽ(V^{(L)}_1).
If $c_0 = 1$ we obtain
$V^{(\pi^*,s^*,b)}_0 = V^{(L)}_0 - H^{(1)}(\sigma^*_1, \tau_1) \ge \big(X^{(L)}_{\sigma^*_1} - X_1(\sigma^*_1)\big) \mathbb I_{\{\sigma^*_1 < \tau_1\}} + \big(Y^{(L)}_{\tau_1} - Y_1(\tau_1)\big) \mathbb I_{\{\sigma^*_1 \ge \tau_1\}} = \tilde E(V^{(L-1)}_{\sigma^*_1 \wedge \tau_1 + 1}|\mathcal F_{\sigma^*_1 \wedge \tau_1}) = \tilde E(V^{(L-1)}_1)$,
where the first equality is (2.8), the inequality is derived from the definition of the stopping time $\sigma^*_1$ and the fact that $V^{(L)} \ge Y^{(L)}$, and the last equalities follow from the definitions of $X^{(L)}$ and $Y^{(L)}$ and the fact that $\sigma^*_1 \wedge \tau_1 = 0$ when $c_0 = 1$.

Next, let $0 < k \le N$. Assume, first, that $c_k = i < L$. Then by the definition of $c_k$ it follows that $\sigma^*_i \wedge \tau_i \le k$. Similarly to the case $k = 0$ we may have either $\sigma^*_i \wedge \tau_i < k$ or $\sigma^*_i \wedge \tau_i = k$. If $\sigma^*_i \wedge \tau_i < k$ then $c_{k-1} = i$, and so by (2.8),
$V^{(\pi^*,s^*,b)}_k = V^{(\pi^*,s^*,b)}_{k-1} + \gamma^*(k-1, i+1, V^{(\pi^*,s^*,b)}_{k-1})(S_k - S_{k-1})$,
where the equality holds on the $\mathcal F_{k-1}$-measurable event $\{\sigma^*_i \wedge \tau_i < k\}$. By the induction hypothesis we obtain on this event that
$V^{(\pi^*,s^*,b)}_{k-1} \ge \tilde E(V^{(L-i)}_k|\mathcal F_{k-1})$.
By (4.2) it follows that
$V^{(\pi^*,s^*,b)}_k = V^{(\pi^*,s^*,b)}_{k-1} + \gamma^*(k-1, i+1, V^{(\pi^*,s^*,b)}_{k-1})(S_k - S_{k-1}) \ge V^{(L-i)}_k$.
Since $c_k = i$, the definition of $c_k$ yields that $\sigma^*_{i+1} \ge \sigma^*_{i+1} \wedge \tau_{i+1} \ge k+1$, and so from the supermartingale property of $V^{(L-i)}_{\sigma^*_{i+1} \wedge l}$ for $l \ge k+1 \ge \sigma^*_i \wedge \tau_i + 1$ we obtain
$V^{(L-i)}_k \ge \tilde E(V^{(L-i)}_{\sigma^*_{i+1} \wedge (k+1)}|\mathcal F_k) = \tilde E(V^{(L-i)}_{k+1}|\mathcal F_k)$.
Now consider the $\mathcal F_k$-measurable event $\{\sigma^*_i \wedge \tau_i = k\}$. Then $c_{k-1} = i - 1$ and
$V^{(\pi^*,s^*,b)}_k = V^{(\pi^*,s^*,b)}_{k-1} + \gamma^*(k-1, i, V^{(\pi^*,s^*,b)}_{k-1})(S_k - S_{k-1}) - H^{(i)}(\sigma^*_i, \tau_i)$.
Since $c_{k-1} = i - 1$, the induction hypothesis yields $V^{(\pi^*,s^*,b)}_{k-1} \ge \tilde E(V^{(L-i+1)}_k|\mathcal F_{k-1})$, and so from the definition of $\gamma^*(k-1, i, y)$ we obtain that
$V^{(\pi^*,s^*,b)}_{k-1} + \gamma^*(k-1, i, V^{(\pi^*,s^*,b)}_{k-1})(S_k - S_{k-1}) - H^{(i)}(\sigma^*_i, \tau_i) \ge V^{(L-i+1)}_k - H^{(i)}(\sigma^*_i, \tau_i) = V^{(L-i+1)}_{\sigma^*_i \wedge \tau_i} - H^{(i)}(\sigma^*_i, \tau_i)$.
From the definition of $\sigma^*_i$, the fact that $V^{(i)} \ge Y^{(i)}$ and the definitions of $X^{(i)}, Y^{(i)}$ it follows that
$V^{(L-i+1)}_{\sigma^*_i \wedge \tau_i} - H^{(i)}(\sigma^*_i, \tau_i) \ge \big(X^{(L-i+1)}_{\sigma^*_i} - X_i(\sigma^*_i)\big) \mathbb I_{\{\sigma^*_i < \tau_i\}} + \big(Y^{(L-i+1)}_{\tau_i} - Y_i(\tau_i)\big) \mathbb I_{\{\sigma^*_i \ge \tau_i\}} = \tilde E(V^{(L-i)}_{\sigma^*_i \wedge \tau_i + 1}|\mathcal F_{\sigma^*_i \wedge \tau_i}) = \tilde E(V^{(L-i)}_{k+1}|\mathcal F_k)$.
We are left only with the event $c_k = L$. On this event the inequality (4.3) reduces to $V^{(\pi^*,s^*,b)}_k \ge 0$. If $\sigma^*_L \wedge \tau_L = k$ then the proof is the same as above in the case $\sigma^*_i \wedge \tau_i = k$ for $i < L$. In the case $\sigma^*_L \wedge \tau_L < k$ there are no claims left to exercise or cancel, and so by the definition of $\gamma^*$ the portfolio value stays nonnegative till the time $N$. $\square$

Next, we show that $x^* = V^{(L)}_0$ is the minimal initial capital for a perfect hedge.

Lemma 4.3. Assume that the pair $(\pi, s) = ((x, \gamma), s)$ is a perfect hedge. Then $x \ge x^* = V^{(L)}_0$.

Proof. Let $b^*$ be the stopping strategy for the buyer defined in (2.12) and set $F(s, b^*) = ((\sigma_1,\dots,\sigma_L), (\tau^*_1,\dots,\tau^*_L))$. We want to show that
(4.4) $V^{(\pi,s,b^*)}_k \ge \tilde E(V^{(L-c_k)}_{(k+1) \wedge N}|\mathcal F_k)$,
where $c_k$ is computed with respect to $(s, b^*)$. Recall that for every $0 \le k \le N$ the function $c_k$ is $\mathcal F_k$-measurable, and since the inequality (4.4) is between $\mathcal F_k$-measurable functions, we can prove (4.4) separately on the events $c_k = i$. The inequality (4.4) will be proved by backward induction in $k$. When $c_k = L$ the right hand side of (4.4) is zero while the definition of a perfect hedge yields that the left hand side of (4.4) is nonnegative, hence (4.4) is true in these cases. Next, assume that $c_k = i$ where $0 \le i \le L - 1$ (and $k < N$). We split the proof into the two events $c_{k+1} = i$ and $c_{k+1} = i + 1$.
In the second event the $(i+1)$-th claim was exercised or canceled at the time $k+1$. We begin with the event $c_{k+1} = i$ (thus $k < N - 1$). By the induction hypothesis,
$V^{(\pi,s,b^*)}_{k+1} \ge \tilde E(V^{(L-i)}_{k+2}|\mathcal F_{k+1}) = \tilde E(V^{(L-i)}_{\tau^*_{i+1} \wedge (k+2)}|\mathcal F_{k+1}) \ge V^{(L-i)}_{\tau^*_{i+1} \wedge (k+1)}$.
The equality here holds true since $\tau^*_{i+1} \ge \tau^*_{i+1} \wedge \sigma_{i+1} \ge k+2 > k+1 > \sigma_i \wedge \tau^*_i$ when $c_{k+1} = c_k = i$, and the last inequality follows from the submartingale property of $V^{(L-i)}_{\tau^*_{i+1} \wedge l}$ for $l > \sigma_i \wedge \tau^*_i$. Since $c_k = i$ we have from (2.8) that
$V^{(\pi,s,b^*)}_{k+1} = V^{(\pi,s,b^*)}_k + \gamma(k, i+1, V^{(\pi,s,b^*)}_k)(S_{k+1} - S_k)$.
Since $S_k$ is a martingale with respect to $\tilde P$, using this equality and taking the conditional expectation with respect to $\mathcal F_k$ in the above inequality we obtain
$V^{(\pi,s,b^*)}_k \ge \tilde E(V^{(L-i)}_{\tau^*_{i+1} \wedge (k+1)}|\mathcal F_k) = \tilde E(V^{(L-i)}_{k+1}|\mathcal F_k)$.
Next, assume that $c_{k+1} = i+1$, which together with the assumption $c_k = i$ yields that $\sigma_{i+1} \wedge \tau^*_{i+1} = k+1$. By the induction hypothesis it follows that
$V^{(\pi,s,b^*)}_{k+1} \ge \tilde E(V^{(L-i-1)}_{k+2}|\mathcal F_{k+1}) = \tilde E(V^{(L-i-1)}_{\sigma_{i+1} \wedge \tau^*_{i+1} + 1}|\mathcal F_{\sigma_{i+1} \wedge \tau^*_{i+1}})$,
and so
$V^{(\pi,s,b^*)}_{k+1} + H^{(i+1)}(\sigma_{i+1}, \tau^*_{i+1}) \ge \tilde E(V^{(L-i-1)}_{\sigma_{i+1} \wedge \tau^*_{i+1} + 1}|\mathcal F_{\sigma_{i+1} \wedge \tau^*_{i+1}}) + H^{(i+1)}(\sigma_{i+1}, \tau^*_{i+1}) = X^{(L-i)}_{\sigma_{i+1}} \mathbb I_{\{\sigma_{i+1} < \tau^*_{i+1}\}} + Y^{(L-i)}_{\tau^*_{i+1}} \mathbb I_{\{\sigma_{i+1} \ge \tau^*_{i+1}\}} \ge V^{(L-i)}_{\sigma_{i+1} \wedge \tau^*_{i+1}} = V^{(L-i)}_{k+1}$,
where the second inequality holds true since $X^{(L-i)} \ge V^{(L-i)}$ and in view of the definition of the stopping time $\tau^*_{i+1}$. On the event $c_k = i$ and $c_{k+1} = i+1$ the equality (2.8) becomes
$V^{(\pi,s,b^*)}_{k+1} + H^{(i+1)}(\sigma_{i+1}, \tau^*_{i+1}) = V^{(\pi,s,b^*)}_k + \gamma(k, i+1, V^{(\pi,s,b^*)}_k)(S_{k+1} - S_k)$,
and taking the conditional expectation of the above inequality with respect to the $\sigma$-algebra $\mathcal F_k$ we obtain that
$V^{(\pi,s,b^*)}_k \ge \tilde E(V^{(L-i)}_{k+1}|\mathcal F_k)$,
completing the proof of (4.4).
As a special case of (4.4) for $k = 0$ it follows that $V^{(\pi,s,b^*)}_0 \ge \tilde E(V^{(L-c_0)}_1)$. If $c_0 = 0$ then $\tau^*_1 \ge \sigma_1 \wedge \tau^*_1 \ge 1$, and so by the submartingale property of $V^{(L)}_{\tau^*_1 \wedge l}$, $l \ge 0$,
$x = V^{(\pi,s,b^*)}_0 \ge \tilde E(V^{(L-c_0)}_1) = \tilde E(V^{(L)}_{\tau^*_1 \wedge 1}) \ge V^{(L)}_0 = x^*$.
If $c_0 = 1$ then $\sigma_1 \wedge \tau^*_1 = 0$, and so
$x - H^{(1)}(\sigma_1, \tau^*_1) = V^{(\pi,s,b^*)}_0 \ge \tilde E(V^{(L-1)}_1)$,
which can also be written in the form
$x \ge \tilde E(V^{(L-1)}_{\sigma_1 \wedge \tau^*_1 + 1}) + H^{(1)}(\sigma_1, \tau^*_1) = X^{(L)}_{\sigma_1} \mathbb I_{\{\sigma_1 < \tau^*_1\}} + Y^{(L)}_{\tau^*_1} \mathbb I_{\{\sigma_1 \ge \tau^*_1\}} \ge V^{(L)}_{\sigma_1 \wedge \tau^*_1} = V^{(L)}_0 = x^*$,
or, in short, $x \ge x^* = V^{(L)}_0$. $\square$

We can now prove the main theorem of this section.

Proof of Theorem 2.4. From Lemma 4.2 and the definition of the fair price $V^*$ we obtain that $V^{(L)}_0 = x^* \ge V^*$. On the other hand, Lemma 4.3 yields that $x^* \le V^*$. By Proposition 3.1,
$G(b, s^*) \le G(b^*, s^*) = V^{(L)}_0 \le G(b^*, s)$
for any pair of stopping strategies $b, s \in \mathcal S$, which gives (2.13), and collecting together the above inequalities we obtain (2.11). Since $\pi^* = (V^{(L)}_0, \gamma^*)$, it follows that $\pi^* \in \mathcal A(V^{(L)}_0)$ and by Lemma 3.3 the pair $(\pi^*, s^*)$ is a perfect hedge, completing the proof of Theorem 2.4. $\square$

5. Shortfall risk and its hedging

In this section we derive Theorem 2.7, whose proof is quite technical, but the main idea is to apply Lemma 3.1 to Dynkin games with appropriately constructed payoff processes, which via Lemma 5.1 below enables us to produce a hedge for the shortfall risk whose optimality is established by means of Lemmas 5.2 and 5.4 below. For any $I \in \mathcal I$ set
(5.1) $Z^{(I)}(y, k, j, u_1,\dots,u_k) = y - f^{(L-j+1)}_k(u_1,\dots,u_k) + I\big(k, L-j+1, y - f^{(L-j+1)}_k(u_1,\dots,u_k)\big)$
and
$\tilde Z^{(I)}(y, k, j, u_1,\dots,u_k) = y - g^{(L-j+1)}_k(u_1,\dots,u_k) + I\big(k, L-j+1, y - g^{(L-j+1)}_k(u_1,\dots,u_k)\big)$.
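In code, the map $Z^{(I)}$ of (5.1) is a one-liner. The sketch below uses the smallest admissible infusion rule $I(u) = (-u)^+$, which every $I \in \mathcal I$ dominates and which simply tops the wealth up to zero after the payoff; the function names are ours:

```python
def post_payoff_wealth(y, payoff, infusion=lambda u: max(-u, 0.0)):
    """Z^{(I)} of (5.1): wealth y minus the payoff (f for an exercise,
    g for a cancellation) plus the capital infusion I evaluated at the
    post-payoff position u = y - payoff."""
    u = y - payoff
    return u + infusion(u)
```

With the minimal rule the post-payoff wealth never goes negative: `post_payoff_wealth(2.0, 3.0)` returns `0.0`, while `post_payoff_wealth(5.0, 3.0)` returns `2.0`.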
Observe that if at the moment $k$ the seller pays his $(L-j+1)$-th payoff and this is his first payoff at this moment (at $k = N$ more than one payoff can occur), then his portfolio value after this payoff is either $Z^{(I)}(y, k, j, \rho_1,\dots,\rho_k)$ or $\tilde Z^{(I)}(y, k, j, \rho_1,\dots,\rho_k)$ in the case of an exercise or a cancellation, respectively, provided that his portfolio value with an infusion of capital before the payoff is $y$ (where $\rho_i$ is the same as in (2.2)). Next, for any $\pi = (x, \gamma) \in \mathcal A(x)$ and $I \in \mathcal I$ define
(5.2) $U^{(\pi,I)}(y, k, j, u_1,\dots,u_{k+1}) = Z^{(I)}(y, k, j, u_1,\dots,u_k) + \mathbb I_{\{j>1\}}\, \gamma\big(k, L-j+2, Z^{(I)}(y, k, j, u_1,\dots,u_k)\big)\, S_0\, u_{k+1} \prod_{i=1}^k (1+u_i)$
and
$\tilde U^{(\pi,I)}(y, k, j, u_1,\dots,u_{k+1}) = \tilde Z^{(I)}(y, k, j, u_1,\dots,u_k) + \mathbb I_{\{j>1\}}\, \gamma\big(k, L-j+2, \tilde Z^{(I)}(y, k, j, u_1,\dots,u_k)\big)\, S_0\, u_{k+1} \prod_{i=1}^k (1+u_i)$.
Note that if at the moment $k < N$ the seller pays his $(L-j+1)$-th payoff, then his portfolio value at the time $k+1$ before any payoffs is either $U^{(\pi,I)}(y, k, j, \rho_1,\dots,\rho_{k+1})$ or $\tilde U^{(\pi,I)}(y, k, j, \rho_1,\dots,\rho_{k+1})$ in the case of an exercise or a cancellation, respectively, at the time $k$, provided that his portfolio value before payoffs was $y$. Finally, for any $(\pi, I) \in \mathcal A \times \mathcal I$ define a sequence of functions $J^{(\pi,I)}_k : \mathbb R_+ \times \{0,\dots,L\} \times \{a,b\}^k \to \mathbb R_+$, $0 \le k \le N$, setting, first,
(5.3) $J^{(\pi,I)}_N(y, j, u_1,\dots,u_N) = \Big(\sum_{i=L-j+1}^L f^{(i)}_N(u_1,\dots,u_N) - y\Big)^+$ for $j > 0$, and $J^{(\pi,I)}_k(y, 0, u_1,\dots,u_k) = 0$ for $0 \le k \le N$.
Next, for $k < N$ and $j > 0$,
(5.4) $J^{(\pi,I)}_k(y, j, u_1,\dots,u_k) = I\big(k, L-j+1, y - f^{(L-j+1)}_k(u_1,\dots,u_k)\big) + pJ^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, b), j-1, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, a), j-1, u_1,\dots,u_k, a\big)$
if
(5.5) $I\big(k, L-j+1, y - f^{(L-j+1)}_k(u_1,\dots,u_k)\big) + pJ^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, b), j-1, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, a), j-1, u_1,\dots,u_k, a\big) \ge I\big(k, L-j+1, y - g^{(L-j+1)}_k(u_1,\dots,u_k)\big) + pJ^{(\pi,I)}_{k+1}\big(\tilde U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, b), j-1, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(\tilde U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, a), j-1, u_1,\dots,u_k, a\big)$,
and
(5.6) $J^{(\pi,I)}_k(y, j, u_1,\dots,u_k) = \min\Big( I\big(k, L-j+1, y - g^{(L-j+1)}_k(u_1,\dots,u_k)\big) + pJ^{(\pi,I)}_{k+1}\big(\tilde U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, b), j-1, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(\tilde U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, a), j-1, u_1,\dots,u_k, a\big),\ \max\big( I\big(k, L-j+1, y - f^{(L-j+1)}_k(u_1,\dots,u_k)\big) + pJ^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, b), j-1, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(U^{(\pi,I)}(y, k, j, u_1,\dots,u_k, a), j-1, u_1,\dots,u_k, a\big),\ pJ^{(\pi,I)}_{k+1}\big(y + \gamma(k, L-j+1, y)\, S_0\, b \prod_{i=1}^k (1+u_i), j, u_1,\dots,u_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(y + \gamma(k, L-j+1, y)\, S_0\, a \prod_{i=1}^k (1+u_i), j, u_1,\dots,u_k, a\big)\big)\Big)$
if the inequality in (5.5) does not hold true.

For any $j \ge 1$ and $k \le N$ consider the set $\mathcal S^{(j)}_k$ of sequences $s = (s_1,\dots,s_j)$ such that $s_1 \in \Gamma_k$ and for $i > 1$, $s_i : \mathcal C_{i-1} \to \Gamma_0$ is a map which satisfies $s_i\big((a_1,\dots,a_{i-1}), (d_1,\dots,d_{i-1})\big) \in \Gamma_{N \wedge (1+a_{i-1})}$. Next, define a map $F : \mathcal S^{(j)}_k \times \mathcal S^{(j)}_k \to \Gamma^j \times \Gamma^j$ by $F(s, b) = ((\sigma_1,\dots,\sigma_j), (\tau_1,\dots,\tau_j))$ in the same way as in (2.5).
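The recursion (5.3)-(5.6) chooses at each step between cancelling now, letting the holder exercise now, or carrying the hedge one more step. Below is a toy implementation for path-independent payoffs, identical claims settling at $j \cdot f(N, S_N)$ at maturity, a given hedge rule $\gamma$ and the minimal infusion rule $I(u) = (-u)^+$; all of these simplifications, and the names, are our assumptions, not the paper's:

```python
def shortfall_J(N, s0, a, b, p, f, g, gamma, y0, L,
                infusion=lambda u: max(-u, 0.0)):
    """J_0(y0, L) in the spirit of (5.3)-(5.6) for payoffs f(k, s)
    (exercise) and g(k, s) (cancellation) depending only on time
    and price, on a binomial tree s0 * prod(1 + rho_i)."""
    def J(k, ups, y, j):
        if j == 0:
            return 0.0                         # no claims left, cf. (5.3)
        s = s0 * (1 + b) ** ups * (1 + a) ** (k - ups)
        if k == N:
            return max(j * f(N, s) - y, 0.0)   # remaining claims settle at N
        def after_payoff(pay):                 # infusion cost + continuation, cf. (5.4)
            z = y - pay + infusion(y - pay)    # Z of (5.1)
            h = gamma(k, z) if j > 1 else 0.0  # indicator I_{j>1} in (5.2)
            return (infusion(y - pay)
                    + p * J(k + 1, ups + 1, z + h * s * b, j - 1)
                    + (1 - p) * J(k + 1, ups, z + h * s * a, j - 1))
        ex, ca = after_payoff(f(k, s)), after_payoff(g(k, s))
        hold_h = gamma(k, y)
        hold = (p * J(k + 1, ups + 1, y + hold_h * s * b, j)
                + (1 - p) * J(k + 1, ups, y + hold_h * s * a, j))
        return ex if ex >= ca else min(ca, max(ex, hold))  # the (5.4)-(5.6) selection
    return J(0, 0, y0, L)
```

For instance, with a single claim paying a flat 1.0, cancellation fee 2.0, no hedging and zero initial capital, the recursion yields shortfall 1.0 at every horizon, while starting capital 5.0 removes the shortfall entirely.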
Fix $(\pi, I) \in \mathcal A \times \mathcal I$, $j \ge m \ge 1$, $k \le N$ and $y \ge 0$, where the initial capital of the seller equals $y$, the number of remaining payoffs is $m$ and he starts from the $(L-j+1)$-th claim. Let $z = (a, d) = ((a_1,\dots,a_m), (d_1,\dots,d_m)) \in \mathcal C_m$ be a sequence which represents the history of the payoffs. Set $c_n = c_n(z) = L - j + \sum_{i=1}^m \mathbb I_{\{a_i \le n\}}$. Define the stochastic processes $\{W^{(y,\pi,I,k,j,z)}_n\}_{n=k}^N$ and $\{V^{(y,\pi,I,k,j,z)}_n\}_{n=k}^N$ by
(5.7) $W^{(y,\pi,I,k,j,z)}_k = y$, $V^{(y,\pi,I,k,j,z)}_k = W^{(y,\pi,I,k,j,z)}_k - \mathbb I_{\{a_1 = k\}}\Big( \mathbb I_{\{d_1 = 1\}} X_{L-j+1}(k) + \mathbb I_{\{d_1 = 0\}} Y_{L-j+1}(k) + \mathbb I_{\{k = N\}} \sum_{i=L-j+2}^L Y_i(N) - I\big(k, L-j+1, W^{(y,\pi,I,k,j,z)}_k - \mathbb I_{\{d_1 = 1\}} X_{L-j+1}(k) - \mathbb I_{\{d_1 = 0\}} Y_{L-j+1}(k)\big)\Big)$,
and for $n > k$, $V^{(y,\pi,I,k,j,z)}_n = W^{(y,\pi,I,k,j,z)}_{n-1} + \dots$

Define $\tilde s(y, \pi, I, k, j) = (\tilde s_1,\dots,\tilde s_j) \in \mathcal S^{(j)}_k$ and $\tilde b(y, \pi, I, k, j) = (\tilde b_1,\dots,\tilde b_j) \in \mathcal S^{(j)}_k$ by
(5.12) $\tilde s_1 = N \wedge \min\big\{ n \ge k \,\big|\, J^{(\pi,I)}_n(W^{(y,\pi,I,k,j)}_n, j, \rho_1,\dots,\rho_n) \ge I\big(n, L-j+1, W^{(y,\pi,I,k,j)}_n - X_{L-j+1}(n)\big) + E\big(J^{(\pi,I)}_{n+1}(\tilde U^{(\pi,I)}(W^{(y,\pi,I,k,j)}_n, n, j, \rho_1,\dots,\rho_{n+1}), j-1, \rho_1,\dots,\rho_{n+1})\,\big|\,\mathcal F_n\big) \big\}$,
$\tilde b_1 = N \wedge \min\big\{ n \ge k \,\big|\, J^{(\pi,I)}_n(W^{(y,\pi,I,k,j)}_n, j, \rho_1,\dots,\rho_n) = I\big(n, L-j+1, W^{(y,\pi,I,k,j)}_n - Y_{L-j+1}(n)\big) + E\big(J^{(\pi,I)}_{n+1}(U^{(\pi,I)}(W^{(y,\pi,I,k,j)}_n, n, j, \rho_1,\dots,\rho_{n+1}), j-1, \rho_1,\dots,\rho_{n+1})\,\big|\,\mathcal F_n\big) \big\}$.
For $i > 1$ fix $z = (a, d) = ((a_1,\dots,a_{i-1}), (d_1,\dots,d_{i-1})) \in \mathcal C_{i-1}$ and define
(5.13) $\tilde s_i(z) = N \wedge \min\big\{ n > a_{i-1} \,\big|\, J^{(\pi,I)}_n(W^{(y,\pi,I,k,j,z)}_n, j-i+1, \rho_1,\dots,\rho_n) \ge I\big(n, L-j+i, W^{(y,\pi,I,k,j,z)}_n - X_{L-j+i}(n)\big) + E\big(J^{(\pi,I)}_{n+1}(\tilde U^{(\pi,I)}(W^{(y,\pi,I,k,j,z)}_n, n, j-i+1, \rho_1,\dots,\rho_{n+1}), j-i, \rho_1,\dots,\rho_{n+1})\,\big|\,\mathcal F_n\big) \big\}$,
$\tilde b_i(z) = N \wedge \min\big\{ n > a_{i-1} \,\big|\, J^{(\pi,I)}_n(W^{(y,\pi,I,k,j,z)}_n, j-i+1, \rho_1,\dots,\rho_n) = I\big(n, L-j+i, W^{(y,\pi,I,k,j,z)}_n - Y_{L-j+i}(n)\big) + E\big(J^{(\pi,I)}_{n+1}(U^{(\pi,I)}(W^{(y,\pi,I,k,j,z)}_n, n, j-i+1, \rho_1,\dots,\rho_{n+1}), j-i, \rho_1,\dots,\rho_{n+1})\,\big|\,\mathcal F_n\big) \big\}$.

The following two lemmas will be crucial for the proof of Theorem 2.7.

Lemma 5.1. Let $(\pi, I) \in \mathcal A \times \mathcal I$, $n \le N$, $j \ge 1$ and $y \ge 0$. Define the stochastic processes $\{A_k\}_{k=n}^N$ and $\{D_k\}_{k=n}^N$ by $A_N = D_N = \big(\sum_{q=L-j+1}^L Y_q(N) - W^{(y,\pi,I,n,j)}_N\big)^+$ and for $k < N$,
(5.14) $A_k = I\big(k, L-j+1, W^{(y,\pi,I,n,j)}_k - Y_{L-j+1}(k)\big) + E\big(J^{(\pi,I)}_{k+1}(U^{(\pi,I)}(W^{(y,\pi,I,n,j)}_k, k, j, \rho_1,\dots,\rho_{k+1}), j-1, \rho_1,\dots,\rho_{k+1})\,\big|\,\mathcal F_k\big)$,
$D_k = I\big(k, L-j+1, W^{(y,\pi,I,n,j)}_k - X_{L-j+1}(k)\big) + E\big(J^{(\pi,I)}_{k+1}(\tilde U^{(\pi,I)}(W^{(y,\pi,I,n,j)}_k, k, j, \rho_1,\dots,\rho_{k+1}), j-1, \rho_1,\dots,\rho_{k+1})\,\big|\,\mathcal F_k\big)$.
Set
(5.15) $V_k = \min_{\sigma \in \Gamma_k} \max_{\tau \in \Gamma_k} E\big(D_\sigma \mathbb I_{\{\sigma < \tau\}} + A_\tau \mathbb I_{\{\tau \le \sigma\}}\,\big|\,\mathcal F_k\big)$.
Then for any $k \ge n$,
(5.16) $V_k = J^{(\pi,I)}_k(W^{(y,\pi,I,n,j)}_k, j, \rho_1,\dots,\rho_k)$.
Furthermore, the stopping times
(5.17) $\tilde\sigma = \tilde s_1 = \tilde s_1(y, \pi, I, n, j)$ and $\tilde\tau = \tilde b_1 = \tilde b_1(y, \pi, I, n, j)$
given by (5.12) with $\tilde s(y, \pi, I, n, j) = (\tilde s_1,\dots,\tilde s_j)$ and $\tilde b(y, \pi, I, n, j) = (\tilde b_1,\dots,\tilde b_j)$ satisfy
(5.18) $E\big(D_{\tilde\sigma} \mathbb I_{\{\tilde\sigma < \tau\}} + A_\tau \mathbb I_{\{\tilde\sigma \ge \tau\}}\,\big|\,\mathcal F_n\big) \le V_n \le E\big(D_\sigma \mathbb I_{\{\sigma < \tilde\tau\}} + A_{\tilde\tau} \mathbb I_{\{\sigma \ge \tilde\tau\}}\,\big|\,\mathcal F_n\big)$
for any $\sigma, \tau \in \Gamma_n$.

Proof. Fix $(\pi, I) \in \mathcal A \times \mathcal I$ and $j \ge 1$. We will use backward induction on $n$.
For $n = N$ the statement is obvious since all the terms in (5.16) and (5.18) are equal to $\big(\sum_{i=L-j+1}^L f^{(i)}_N(\rho_1,\dots,\rho_N) - y\big)^+$. Suppose that the assertion holds true for $n+1,\dots,N$ and prove it for $n$. Fix $y \ge 0$ and $n \le k < N$ (for $k = N$ the statement is obvious). Fix $m > k$ and denote $Z_m = W^{(y,\pi,I,n,j)}_m$. For any $i \ge m$ we have $W^{(Z_m,\pi,I,m,j)}_i = W^{(y,\pi,I,n,j)}_i$, and so $A_N = \big(\sum_{q=L-j+1}^L Y_q(N) - W^{(Z_m,\pi,I,m,j)}_N\big)^+$ and for $m \le i < N$,
(5.19) $A_i = I\big(i, L-j+1, W^{(Z_m,\pi,I,m,j)}_i - Y_{L-j+1}(i)\big) + E\big(J^{(\pi,I)}_{i+1}(U^{(\pi,I)}(W^{(Z_m,\pi,I,m,j)}_i, i, j, \rho_1,\dots,\rho_{i+1}), j-1, \rho_1,\dots,\rho_{i+1})\,\big|\,\mathcal F_i\big)$,
$D_i = I\big(i, L-j+1, W^{(Z_m,\pi,I,m,j)}_i - X_{L-j+1}(i)\big) + E\big(J^{(\pi,I)}_{i+1}(\tilde U^{(\pi,I)}(W^{(Z_m,\pi,I,m,j)}_i, i, j, \rho_1,\dots,\rho_{i+1}), j-1, \rho_1,\dots,\rho_{i+1})\,\big|\,\mathcal F_i\big)$.
Since $Z_m$ is $\mathcal F_m$-measurable, using the induction hypothesis for $m > k \ge n$ (with $Z_m$ in place of $y$) we obtain that for any $m > k$,
(5.20) $V_m = J^{(\pi,I)}_m(Z_m, j, \rho_1,\dots,\rho_m)$.
Thus
(5.21) $E(V_{k+1}|\mathcal F_k) = pJ^{(\pi,I)}_{k+1}\big(W^{(y,\pi,I,n,j)}_k + \gamma(k, L-j+1, W^{(y,\pi,I,n,j)}_k)\, S_0\, b \prod_{i=1}^k (1+\rho_i), j, \rho_1,\dots,\rho_k, b\big) + (1-p)J^{(\pi,I)}_{k+1}\big(W^{(y,\pi,I,n,j)}_k + \gamma(k, L-j+1, W^{(y,\pi,I,n,j)}_k)\, S_0\, a \prod_{i=1}^k (1+\rho_i), j, \rho_1,\dots,\rho_k, a\big)$.
Using Lemma 3.1 for the processes $\{A_i\}_{i=k}^N$ and $\{D_i\}_{i=k}^N$ together with (5.3)-(5.6) and (5.21) we obtain that for any $k \ge n$,
(5.22) $V_k = J^{(\pi,I)}_k(W^{(y,\pi,I,n,j)}_k, j, \rho_1,\dots,\rho_k)$.
From (5.12) and (5.22) it follows that
(5.23) $\tilde\sigma = N \wedge \min\{i \ge n \,|\, V_i \ge D_i\}$, $\tilde\tau = N \wedge \min\{i \ge n \,|\, V_i = A_i\}$.
Thus, applying Lemma 3.1 to the processes $\{A_i\}_{i=n}^N$ and $\{D_i\}_{i=n}^N$ we obtain (5.18). $\square$

Lemma 5.2. For any $(\pi, I) \in \mathcal A \times \mathcal I$, $n \le N$, $j \ge 1$, $s, b \in \mathcal S^{(j)}_n$ and $y \ge 0$,
(5.24) $R\big(y, \pi, I, n, j, \tilde s(y, \pi, I, n, j), b\big) \le R(y, \pi, I, n, j) = J^{(\pi,I)}_n(y, j, \rho_1,\dots,\rho_n) \le R\big(y, \pi, I, n, j, s, \tilde b(y, \pi, I, n, j)\big)$.

Proof.
Fix $(\pi, I) \in \mathcal A \times \mathcal I$. We will use backward induction in $n$. For $n = N$ the statement is obvious since all the terms are equal to $\big(\sum_{i=L-j+1}^L f^{(i)}_N(\rho_1,\dots,\rho_N) - y\big)^+$. Suppose that the assertion is correct for $n+1,\dots,N$ and let us prove it for $n$. For $j > 1$, $n \le k_1 < N$ and $k_2 \in \{0, 1\}$ define the map $Q^{(k_1,k_2)} : \mathcal S^{(j)}_n \to \mathcal S^{(j-1)}_{k_1+1}$ by $Q^{(k_1,k_2)}(s_1,\dots,s_j) = (s'_1,\dots,s'_{j-1})$ where $s'_1 = s_2(k_1, k_2)$ and for $m > 1$,
(5.25) $s'_m\big((a_1,\dots,a_{m-1}), (d_1,\dots,d_{m-1})\big) = s_{m+1}\big((k_1, a_1,\dots,a_{m-1}), (k_2, d_1,\dots,d_{m-1})\big)$.
For any $j \ge 1$ and $y \ge 0$ set $\tilde s = \tilde s(y, \pi, I, n, j)$. From (5.12)-(5.13) it follows that for any $j > 1$, $\tilde s^{(k_1,k_2)} = Q^{(k_1,k_2)}(\tilde s)$ satisfies
(5.26) $\tilde s^{(k_1,k_2)} = \tilde s\big(W^{(y,\pi,I,n,j,(k_1,k_2))}_{k_1+1}, \pi, I, k_1+1, j-1\big)$.
Thus by the induction hypothesis we obtain that for any $n \le k_1 < N$, $k_2 \in \{0, 1\}$, $j > 1$ and $b' \in \mathcal S^{(j-1)}_{k_1+1}$,
(5.27) $R\big(W^{(y,\pi,I,n,j,(k_1,k_2))}_{k_1+1}, \pi, I, k_1+1, j-1, \tilde s^{(k_1,k_2)}, b'\big) \le J^{(\pi,I)}_{k_1+1}\big(W^{(y,\pi,I,n,j,(k_1,k_2))}_{k_1+1}, j-1, \rho_1,\dots,\rho_{k_1+1}\big)$.
Fix $j \ge 1$, $y \ge 0$ and $b \in \mathcal S^{(j)}_n$. Set $F(\tilde s, b) = ((\sigma_1,\dots,\sigma_j), (\tau_1,\dots,\tau_j))$, $A = \{\sigma_1 < \tau_1\}$ and $z = (\sigma_1 \wedge \tau_1, \mathbb I_A)$. If $j > 1$ set $s' = \tilde s^{(\sigma_1 \wedge \tau_1, \mathbb I_A)}$ and $b' = \dots$

Definition 5.3. Let $D \subset \mathbb R$ be an interval of the form $[a, b]$ or $[a, \infty)$, let $H$ be a set and let $f : D \times H \to \mathbb R$ be such that $f(\,\cdot\,, h)$ is a continuous function which has a minimum on $D$. Define the function $\mathrm{argmin}_f : H \to D$ by
$\mathrm{argmin}_f(h) = \min\{y \in D \,|\, f(y, h) = \min_{z \in D} f(z, h)\}$.

Lemmas 3.6 and 3.7 enable us to consider the following functions. Define $\tilde\gamma : \{0,\dots,N-1\} \times \{1,\dots,L\} \times \mathbb R \to \Xi$ by
(5.34) $\tilde\gamma(k, j, y) = \dfrac{\mathrm{argmin}_f(k, j, \rho_1,\dots,\rho_k)}{S_0 \prod_{i=1}^k (1+\rho_i)}$
where $f : K(y) \times \{0,\dots,N-1\} \times \{1,\dots,L\} \times \{a,b\}^k \to \mathbb R$ is given by
(5.35) $f(\alpha, k, j, u_1,\dots,u_k) = pJ_{k+1}(y + b\alpha, L-j+1, u_1,\dots,u_k, b) + (1-p)J_{k+1}(y + a\alpha, L-j+1, u_1,\dots,u_k, a)$.
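Since $J_{k+1}(\,\cdot\,)$ is continuous, decreasing and piecewise linear (Lemma 3.7), the minimization behind (5.34)-(5.35) is one-dimensional and easy to carry out numerically. A sketch via plain grid search over the admissible interval $K(y)$, returning the smallest minimizer as Definition 5.3 requires; the grid resolution and the function names are our choices:

```python
def argmin_grid(f, lo, hi, steps=10_000):
    """Smallest minimizer of f on [lo, hi] over a uniform grid;
    adequate for piecewise linear f once the grid is fine enough."""
    best_a, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        alpha = lo + (hi - lo) * i / steps
        v = f(alpha)
        if v < best_v - 1e-12:   # strict improvement keeps the leftmost minimizer
            best_a, best_v = alpha, v
    return best_a

def gamma_tilde(y, a, b, p, j_next, lo, hi, s_k):
    """Sketch of (5.34)-(5.35): alpha is the cash amount bet on the stock
    move, so the hedge ratio is alpha / S_k; j_next plays J_{k+1}."""
    f = lambda alpha: p * j_next(y + b * alpha) + (1 - p) * j_next(y + a * alpha)
    return argmin_grid(f, lo, hi) / s_k
```

For a decreasing piecewise linear `j_next` the minimizing set is an interval, and the strict-improvement rule above picks out its left endpoint, matching Definition 5.3.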
Also define $\tilde I : \{0,\dots,N\} \times \{1,\dots,L\} \times \mathbb R \to \Xi$ by
(5.36) $\tilde I(N, j, y) = \big(\sum_{i=j+1}^L Y_i(N) - y\big)^+$, $\tilde I(k, L, y) = (-y)^+$.
Then for $k < N$ and $j < L$,
(5.37) $\tilde I(k, j, y) = \mathrm{argmin}_g(k, j, \rho_1,\dots,\rho_k)$
where $g : [(-y)^+, \infty) \times \{0,\dots,N-1\} \times \{1,\dots,L-1\} \times \{a,b\}^k \to \mathbb R$ is given by
(5.38) $g(z, k, j, u_1,\dots,u_k) = z + \min_{\alpha \in K(y+z)} \big( pJ_{k+1}(y + z + b\alpha, L-j, u_1,\dots,u_k, b) + (1-p)J_{k+1}(y + z + a\alpha, L-j, u_1,\dots,u_k, a) \big)$.
Clearly $\tilde I \in \mathcal I$. For any initial capital $x$ consider the portfolio strategy $\tilde\pi = (x, \tilde\gamma)$. Observe that $\tilde\gamma$ satisfies (2.7), and so $\tilde\pi \in \mathcal A(x)$.

Lemma 5.4. For any $k \le N$ and $(\pi, I) \in \mathcal A(x) \times \mathcal I$,
(5.39) $J_k(y, j, \rho_1,\dots,\rho_k) = J^{(\tilde\pi,\tilde I)}_k(y, j, \rho_1,\dots,\rho_k) \le J^{(\pi,I)}_k(y, j, \rho_1,\dots,\rho_k)$.

Proof. We will use backward induction. Fix $\pi = (x, \gamma) \in \mathcal A(x)$ and $I \in \mathcal I$. For $k = N$ the statement is obvious. Suppose the assertion holds true for $n+1$ and prove it for $n$. For $j = 0$ the statement is clear. Fix $j \ge 1$. From the induction hypothesis and the definition of $\tilde\gamma, \tilde I$ we obtain that
(5.40) $\inf_{\alpha \in K(y)} \big( pJ_{n+1}(y + b\alpha, j, \rho_1,\dots,\rho_n, b) + (1-p)J_{n+1}(y + a\alpha, j, \rho_1,\dots,\rho_n, a) \big) = pJ_{n+1}\big(y + \tilde\gamma(n, L-j+1, y)\, S_0\, b \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(y + \tilde\gamma(n, L-j+1, y)\, S_0\, a \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, a\big) = pJ^{(\tilde\pi,\tilde I)}_{n+1}\big(y + \tilde\gamma(n, L-j+1, y)\, S_0\, b \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\tilde\pi,\tilde I)}_{n+1}\big(y + \tilde\gamma(n, L-j+1, y)\, S_0\, a \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, a\big)$.
From the induction hypothesis and the fact that $\gamma$ satisfies (2.7) it follows that
(5.41) $\inf_{\alpha \in K(y)} \big( pJ_{n+1}(y + b\alpha, j, \rho_1,\dots,\rho_n, b) + (1-p)J_{n+1}(y + a\alpha, j, \rho_1,\dots,\rho_n, a) \big) \le \inf_{\alpha \in K(y)} \big( pJ^{(\pi,I)}_{n+1}(y + b\alpha, j, \rho_1,\dots,\rho_n, b) + (1-p)J^{(\pi,I)}_{n+1}(y + a\alpha, j, \rho_1,\dots,\rho_n, a) \big) \le pJ^{(\pi,I)}_{n+1}\big(y + \gamma(n, L-j+1, y)\, S_0\, b \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\pi,I)}_{n+1}\big(y + \gamma(n, L-j+1, y)\, S_0\, a \prod_{i=1}^n (1+\rho_i), j, \rho_1,\dots,\rho_n, a\big)$.
From the induction hypothesis and the definition of $\tilde\gamma, \tilde I$ we obtain
(5.42) $\inf_{z \ge (g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) - y)^+}\ \inf_{\alpha \in K(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n))} \Big( z + pJ_{n+1}\big(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + b\alpha, j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + a\alpha, j-1, \rho_1,\dots,\rho_n, a\big) \Big) = \tilde I\big(n, L-j+1, y - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ_{n+1}\big(\tilde U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(\tilde U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big) = \tilde I\big(n, L-j+1, y - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ^{(\tilde\pi,\tilde I)}_{n+1}\big(\tilde U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\tilde\pi,\tilde I)}_{n+1}\big(\tilde U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big)$.
Using that $\gamma$ satisfies (2.7) and $I(\,\cdot\,, \cdot\,, u) \ge (-u)^+$, it follows by the induction hypothesis that
(5.43) $\inf_{z \ge (g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) - y)^+}\ \inf_{\alpha \in K(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n))} \Big( z + pJ_{n+1}\big(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + b\alpha, j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(y + z - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + a\alpha, j-1, \rho_1,\dots,\rho_n, a\big) \Big) \le I\big(n, L-j+1, y - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ_{n+1}\big(\tilde U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(\tilde U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big) \le I\big(n, L-j+1, y - g^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ^{(\pi,I)}_{n+1}\big(\tilde U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\pi,I)}_{n+1}\big(\tilde U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big)$.
In a similar way we obtain
(5.44) $\inf_{z \ge (f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) - y)^+}\ \inf_{\alpha \in K(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n))} \Big( z + pJ_{n+1}\big(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + b\alpha, j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + a\alpha, j-1, \rho_1,\dots,\rho_n, a\big) \Big) = \tilde I\big(n, L-j+1, y - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ_{n+1}\big(U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big) = \tilde I\big(n, L-j+1, y - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ^{(\tilde\pi,\tilde I)}_{n+1}\big(U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\tilde\pi,\tilde I)}_{n+1}\big(U^{(\tilde\pi,\tilde I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big)$
and
(5.45) $\inf_{z \ge (f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) - y)^+}\ \inf_{\alpha \in K(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n))} \Big( z + pJ_{n+1}\big(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + b\alpha, j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(y + z - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n) + a\alpha, j-1, \rho_1,\dots,\rho_n, a\big) \Big) \le I\big(n, L-j+1, y - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ_{n+1}\big(U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J_{n+1}\big(U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big) \le I\big(n, L-j+1, y - f^{(L-j+1)}_n(\rho_1,\dots,\rho_n)\big) + pJ^{(\pi,I)}_{n+1}\big(U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, b), j-1, \rho_1,\dots,\rho_n, b\big) + (1-p)J^{(\pi,I)}_{n+1}\big(U^{(\pi,I)}(y, n, j, \rho_1,\dots,\rho_n, a), j-1, \rho_1,\dots,\rho_n, a\big)$.
Now, (5.39) follows from (5.40)-(5.45). $\square$

Finally, fix an initial capital $x \ge 0$ and set $\tilde\pi = (x, \tilde\gamma)$ and
(5.46) $\tilde s = \tilde s(x, \tilde\pi, \tilde I, 0, L)$.
Using Lemmas 5.2 and 5.4 (for $j = L$ and $n = 0$) we obtain that for any $(\pi, I, s) \in \mathcal A(x) \times \mathcal I \times \mathcal S$,
$R(\tilde\pi, \tilde I, \tilde s) = J^{(\tilde\pi,\tilde I)}_0(x, L) = J_0(x, L) \le J^{(\pi,I)}_0(x, L) \le R(\pi, I, s)$.
Thus $R(\tilde\pi, \tilde I, \tilde s) = R(x) = J_0(x, L)$, completing the proof of Theorem 2.7. $\square$

References

[1] Barbieri, A., and M. B. Garman (1996): Understanding the valuation of swing contracts. In: Energy and Power Risk Management, FEA.
[2] Carmona, R., and N. Touzi (2008): Optimal multiple stopping and valuation of swing options, Math. Finance 18, 239-268.
[3] Dolinsky, Ya., and Yu. Kifer (2007): Hedging with risk for game options in discrete time, Stochastics 79, 169-195.
[4] Jaillet, P., E. I. Ronn and S. Tompaidis (2004): Valuation of commodity-based swing options, Manage. Sci. 50(7), 909-921.
[5] Kifer, Yu. (2000): Game options, Finance Stoch. 4, 443-463.
[6] Meinshausen, N., and B. M. Hambly (2004): Monte Carlo methods for the valuation of multiple exercise options, Math. Finance 14, 557-583.
[7] Ohtsubo, Y. (1986): Optimal stopping in sequential games with or without a constraint of always terminating, Math. Oper. Res. 11, 591-607.
[8] Ross, S. M., and Z. Zhu (2008): On the structure of a swing contract's optimal value and optimal strategy, J. Appl. Prob. 45, 1-15.
[9] Shiryaev, A. N., Y. M. Kabanov, D. O. Kramkov and A. B. Melnikov (1994): To the theory of computations of European and American options, discrete case.