American Options with Asymmetric Information and Reflected BSDE
Neda Esmaeeli
Department of Mathematical Sciences, Sharif University of Technology, Azadi Avenue, Tehran 14588-89694, Iran
E-mail: [email protected]

Peter Imkeller
Institut für Mathematik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
E-mail: [email protected]
Abstract
We consider an American contingent claim on a financial market where the buyer has additional information. Both agents (seller and buyer) observe the same prices, while the information available to them may differ due to some extra exogenous knowledge the buyer has. The buyer's information flow is modeled by an initial enlargement of the reference filtration. It seems natural to investigate the value of the American contingent claim with asymmetric information. We provide a representation for the cost of the additional information relying on some results on reflected backward stochastic differential equations (RBSDE). This is done by using an interpretation of prices of American contingent claims with extra information for the buyer by solutions of appropriate RBSDE.

AMS subject classifications: primary 60G40, 91G20; secondary 91G80, 60H07.
Keywords and phrases: American contingent claims; asymmetric information; cost of information; initial enlargement of filtrations; reflected BSDE.
1 Introduction

A European contingent claim is a contract on a financial market whose payoff depends on the market state at maturity or exercise time. The problem of valuation and hedging of contingent claims on complete markets, first studied by Black and Scholes [5], Merton [30, 31], Harrison and Kreps [20], Harrison and Pliska [21], Duffie [8], and Karatzas [26], among others, can be formulated in terms of backward stochastic differential equations (BSDE). Pricing and hedging on incomplete markets has been investigated by many authors for some decades. We only mention the pioneering papers by Föllmer and Schweizer [13], Müller [32], Föllmer and Sondermann [14], Schweizer [38], Schäl [37], Bouchaud and Sornette [6], and El Karoui and Quenez [10], who were among the first to link this problem to BSDE. BSDE were introduced, on a Brownian filtration, by Bismut [4]. Pardoux and Peng [35] proved existence and uniqueness of adapted solutions under suitable square-integrability assumptions on the coefficients and terminal condition. For decades, BSDE have been a vibrant field of research, due to their close ties with stochastic control and mathematical finance.

In contrast to their European counterparts, American contingent claims (ACC), such as American call or put options, can be exercised at any time before maturity. Ignoring interest rates, it is well known that the value process of an American contingent claim is related to the Snell envelope of the payoff process, i.e. the smallest supermartingale dominating it. The optimal exercise time is given by the hitting time of the payoff process by the Snell envelope. This key observation links optimal stopping problems to reflected backward stochastic differential equations (RBSDE), i.e. BSDE constrained to stay above a given barrier, which in the case of the ACC is given by the payoff process. RBSDE in continuous time, the variant related to ACC, were first investigated in El Karoui et al. [9].
In this context the solution process is kept above the reflecting barrier by means of an additional process.
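As a numerical illustration of this Snell-envelope characterization, the value of an American put can be approximated by backward induction on a Cox-Ross-Rubinstein binomial tree: the induction computes exactly the discrete Snell envelope, and taking the maximum with the immediate payoff implements the reflection at the barrier. This is a sketch of ours, with hypothetical market parameters, not a construction from the paper.

```python
import math

def crr_put(S0, K, r, sigma, T, N, american=True):
    """Price a put on a Cox-Ross-Rubinstein binomial tree.

    With american=True the backward induction computes the discrete Snell
    envelope of the payoff, i.e. the smallest supermartingale dominating it;
    the option may be exercised at every node.
    """
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoff at maturity (node j = number of up moves)
    V = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * V[j + 1] + (1.0 - p) * V[j])  # continuation value
            if american:
                exercise = max(K - S0 * u**j * d**(n - j), 0.0)
                cont = max(cont, exercise)  # keep the value above the payoff barrier
            V[j] = cont
    return V[0]
```

With the hypothetical parameters $S_0 = K = 100$, $r = 5\%$, $\sigma = 20\%$, $T = 1$, the American value strictly exceeds the European one; the difference is the value of the early-exercise right.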
The situation we study is the following. Besides the reference filtration $\mathbb F = (\mathcal F_t)_{t\in[0,T]}$, the buyer possesses additional information modeled by some random variable $G$ which is already available initially. So his information evolution is described by the enlarged filtration $\mathbb G = (\mathcal G_t)_{t\in[0,T]}$ with $\mathcal G_t = \mathcal F_t \vee \sigma(G)$. We study the effect of this additional information on the value and the optimal exercise time of an American contingent claim. The situation is similar to an insider's optimal investment problem in the simplest possible model, where he aims to maximize expected utility from the terminal value of his portfolio, and his investment decisions are based on the associated larger flow of information. Pikovsky and Karatzas [36] first studied this problem in the framework of an initially enlarged filtration. Variants of the model were investigated among others by Elliott et al. [11], Grorud and Pontier [16, 17], Amendinger et al. [2], and Ankirchner et al. [3].

Building on results about initial enlargements of filtrations by Jacod [25], in the first part of the paper we reduce the problem to a standard optimal stopping problem on an enlarged probability space in case $G$ possesses conditional laws with respect to the smaller filtration that are smooth enough (density hypothesis). Under the density hypothesis we write the value function of an American contingent claim obtained with additional information as the value function of a modified American contingent claim on the enlarged space. To define it on the product of the underlying probability space and the (real) space of possible values of $G$, we give a factorization of $\mathbb G$-stopping times in terms of parametrized $\mathbb F$-stopping times. This is a natural choice, since the initial enlargement is related to a measure change on this product space; see for instance Jacod [25] or Amendinger et al. [2].

In the second part, following the well known link between optimal stopping problems and RBSDE in El Karoui et al.
[9], on a Brownian basis we define a corresponding RBSDE on the product space associated to the initial enlargement of the filtration. BSDE for (initially or progressively) enlarged filtrations have been studied by Eyraud-Loisel [12] and Kharroubi et al. [27]. The approach used in [12] is based on measure changes, which is one, but not the main, tool for our approach. Our treatment of the RBSDE is based on Itô calculus and the canonical decomposition of semimartingales in $\mathbb G$. Extending results in El Karoui et al. [9], we rewrite the value function of the American contingent claim with asymmetric information in terms of the solution of the RBSDE on the product space. This provides a solution of the RBSDE with respect to the larger filtration. Possessing additional information, the buyer has a larger value of the expected payoff than the seller. We study the advantage of the buyer in terms of the solutions of two different RBSDE.

The outline of the paper is the following. After presenting notations and assumptions in Section 2, we introduce the financial market model with asymmetric information. In Section 3, we factorize $\mathbb G$-stopping times as parametrized $\mathbb F$-stopping times, and give a formula for the value of an ACC with asymmetric information. We also study the value function for conditional expectations with respect to the small filtration, an optimal projection problem. Section 4 is concerned with the link between optimal stopping problems and RBSDE. We recall some results from El Karoui et al. [9] and extend them to parametrized RBSDE. We define an RBSDE that corresponds to the optimal stopping problem on the product space. By changing variables in the solution of this RBSDE, we obtain an alternative expression for the value function with additional information in terms of the solution of the RBSDE in the initially enlarged filtration. In Section 5, we define the cost of additional information by utility indifference.
We obtain a formula for the cost in terms of a difference of solutions of two RBSDE on different spaces. Finally, we compute it in a simple case.

2 Setup and preliminaries

Let $T > 0$ be a finite time horizon. We consider a filtered probability space $(\Omega, \mathcal F, \mathbb F, P)$, where $\mathbb F = (\mathcal F_t)_{t\in[0,T]}$ is the reference filtration satisfying the usual conditions of right-continuity and completeness. Moreover, we assume that $\mathcal F_0$ is trivial. Equations resp. inequalities involving random variables are usually understood in the almost sure sense. We consider a random variable $G : \Omega \to \mathbb R$. Let $\mathbb G$ be the initial enlargement of $\mathbb F$ by $G$, i.e. $\mathbb G = (\mathcal G_t)_{t\in[0,T]}$ where $\mathcal G_t = \mathcal F_t \vee \sigma(G)$, $t \in [0,T]$. We denote by $P_G$ the law of $G$ and, for $t \in [0,T]$, by $P_t^G(\omega, du)$ the regular version of the conditional law of $G$ given $\mathcal F_t$. Throughout this paper, we will assume that Jacod's density hypothesis ([24], [25]), stated in the following assumption, is satisfied.

Assumption 2.1.
For $t \in [0,T]$, the regular conditional law of $G$ given $\mathcal F_t$ is equivalent to the law of $G$ for $P$-almost all $\omega \in \Omega$, i.e.
$$P[G \in \cdot \mid \mathcal F_t] \sim P(G \in \cdot), \quad P\text{-a.s.}$$

According to [25], for each $t \in [0,T]$ there exists an $\mathcal F_t \otimes \mathcal B(\mathbb R)$-measurable version of
$$\alpha_t(u)(\omega) := \frac{dP_t^G(\omega, \cdot)}{dP_G}(u)$$
which is strictly positive, and for each $u \in \mathbb R$, $\{\alpha_t(u)\}_{t\in[0,T]}$ is a martingale w.r.t. $\mathbb F$. We recall that it is shown in [1, Proposition 1.10] that the strict positivity of $\alpha$ implies the right-continuity of the filtration $\mathbb G$.

Let $t \in \mathbb R_+$ and $\mathbb H$ a filtration in $\mathcal F$. We denote by $\mathcal T_{t,T}(\mathbb H)$ the set of $\mathbb H$-stopping times with values in $[t,T]$.

Definition 2.2.
Consider the following payoff process:
$$R = L\,\mathbf 1_{[0,T)} + \xi\,\mathbf 1_{\{T\}}, \qquad (1)$$
where $L$ is an $\mathbb F$-adapted real-valued càdlàg process and $\xi$ an $\mathcal F_T$-measurable random variable, satisfying the integrability condition
$$E\Big[\sup_{t\in[0,T]} |L_t| + |\xi|\Big] < \infty. \qquad (2)$$
For $t \in [0,T]$, the value function of an American contingent claim is defined by
$$V_t = \operatorname*{ess\,sup}_{\tau \in \mathcal T_{t,T}(\mathbb F)} E[R(\tau) \mid \mathcal F_t]. \qquad (3)$$
Here $\tau$ is the buyer's stopping time and plays the role of a control tool. We suppose throughout this paper that $0 \le L_T \le \xi < +\infty$.

We consider an American contingent claim where, in contrast to the seller, the buyer possesses additional information, based for instance on a good analyst or better software. The additional information is described by the random variable denoted by $G$. A natural question one may ask is: what is the value of an American contingent claim with extra information? Another one addresses the following problem. As the buyer has more information, he has access to a larger set of available stopping times, leading to a higher expected payoff. This immediately leads to the question: what is the cost of this extra information?

A filtration usually encodes a flow of information, so it is natural to model extra information by an enlargement of a filtration. We will consider an initial enlargement of the reference filtration; this means that we add all the extra information at the initial time. As introduced above, $\mathbb G = (\mathcal G_t)_{t\in[0,T]}$ is the initial enlargement of $\mathbb F$ by $G$. Formally, incorporating extra information leads to working on the following product space, whose second component is the space of possible values of the additional information given by a real-valued random variable.
So we consider the probability space $(\widehat\Omega, \widehat{\mathcal F}, \widehat{\mathbb F}, \widehat P)$, where
$$\widehat\Omega := \Omega \times \mathbb R, \quad \widehat{\mathcal F}_t := \bigcap_{s>t} (\mathcal F_s \otimes \mathcal B(\mathbb R)),\ t \in [0,T], \quad \widehat{\mathbb F} := (\widehat{\mathcal F}_t)_{t\in[0,T]}, \quad \widehat{\mathcal F} = \mathcal F \otimes \mathcal B(\mathbb R), \quad \widehat P := P \otimes \eta, \qquad (4)$$
where $\eta$ is a probability measure on $(\mathbb R, \mathcal B(\mathbb R))$ playing the role of the law of the additional information. Without loss of generality we may assume that $(\widehat\Omega, \widehat{\mathcal F}, \widehat P)$ is complete and that $\widehat{\mathcal F}_0$ contains all $\widehat P$-null sets of $\widehat{\mathcal F}$. We denote by $\widehat E$ the expectation w.r.t. $\widehat P$. Taking expectations with respect to $\widehat P$ takes into account averaging over the parameter: for an integrable random variable $X$ defined on $\widehat\Omega$ we have
$$\widehat E(X) = E\Big(\int_{\mathbb R} X(u)\, d\eta(u)\Big), \qquad (5)$$
where $X(u) = X(\cdot, u)$, $u \in \mathbb R$.

Due to the definition of the value function of an American contingent claim (3), our first step on the way to answer the above questions is to study
$$\operatorname*{ess\,sup}_{\tau \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau) \mid \mathcal H_t], \qquad (6)$$
where $\mathcal H_t = \mathcal G_t$. We also study the case $\mathcal H_t = \mathcal F_t$, which will be seen to be understood as an optimal projection problem. Our main idea is to look for a suitable representation of $\mathbb G$-stopping times as parametrized $\mathbb F$-stopping times, and then to reduce the problem to a corresponding problem in a product filtration which contains the reference filtration. We will answer the first question in this section, while the second one is treated in Section 5. We denote by
$$V_0^G := \operatorname*{ess\,sup}_{\tau \in \mathcal T_{0,T}(\mathbb G)} E[R(\tau) \mid \mathcal G_0]$$
the value of the American contingent claim with extra information. We will use the density hypothesis to write this value as the value of an American contingent claim in the product filtration $\widehat{\mathbb F}$. For this purpose, we need some properties of the filtration $\mathbb G$. We begin with the following remark.
Remark 2.3. $\mathcal G_0 = \sigma(G)$. This holds true by the fact that $\mathbb F$ is right-continuous and $\mathcal F_0$ is trivial. It is clear that $V_0^G$ is a $\mathcal G_0$-measurable random variable. Hence by factorization it is of the form $f(G)$, where $f$ is a real-valued measurable function.

3 American contingent claims in an initially enlarged filtration

In this section, we present a characterization of $\mathbb G$-stopping times. We then derive a formula representing the value function of an American contingent claim with extra information. Throughout this section, we work on the probability space $(\widehat\Omega, \widehat{\mathcal F}, \widehat{\mathbb F}, \widehat P)$ from (4), where $\eta = P_G$.

3.1 Factorization of $\mathbb G$-stopping times

We start with the following proposition.
Proposition 3.1.
Let $X : \widehat\Omega \times \mathbb R_+ \to \mathbb R$ be an $\widehat{\mathbb F}$-adapted process. Then, for the random variable $G$, the process $X(G) : \Omega \times \mathbb R_+ \to \mathbb R$ is $\mathbb G$-adapted.

Proof. We define
$$\bar G : \Omega \to \widehat\Omega, \quad \omega \mapsto (\omega, G(\omega)).$$
Then for fixed $t \in [0,T]$ we have, for each $B \in \mathcal B(\mathbb R)$,
$$X_t(\cdot, G(\cdot))^{-1}(B) = (X_t \circ \bar G)^{-1}(B) = \bar G^{-1}(X_t^{-1}(B)). \qquad (7)$$
Since $X_t^{-1}(B) \in \widehat{\mathcal F}_t$, it is sufficient to prove that $\bar G^{-1}(C \times D) \in \mathcal G_t$ for $C \in \mathcal F_t$ and $D \in \mathcal B(\mathbb R)$. Indeed, we have
$$\bar G^{-1}(C \times D) = C \cap G^{-1}(D) \in \mathcal F_t \vee \sigma(G) = \mathcal G_t.$$

Remark 3.2.
With similar arguments, one can show that if $X : \widehat\Omega \times \mathbb R_+ \to \mathbb R$ is an $\widehat{\mathbb F}$-progressively measurable (resp. predictable) process, then $X(G) : \Omega \times \mathbb R_+ \to \mathbb R$ is $\mathbb G$-progressively measurable (resp. predictable).

The following proposition characterizes $\mathbb G$-stopping times in terms of $\widehat{\mathbb F}$-stopping times.

Proposition 3.3.
Let $\tau' : \Omega \to \mathbb R_+$ be a random time. Then $\tau'$ is a $\mathbb G$-stopping time if and only if there exists an $\widehat{\mathbb F}$-stopping time $\tau : \widehat\Omega \to \mathbb R_+$ such that $\tau'(\omega) = \tau(\omega, G(\omega))$ for $P$-a.e. $\omega \in \Omega$.

Proof. Suppose first that $\tau$ is an $\widehat{\mathbb F}$-stopping time. For $t \in [0,T]$ we have to show that $\{\tau(\omega, G(\omega)) \le t\} \in \mathcal G_t$. With $\bar G$ as defined in the previous proposition we have
$$\{\tau(\omega, G(\omega)) \le t\} = (\tau \circ \bar G)^{-1}((-\infty, t]) = \bar G^{-1}\big(\tau^{-1}((-\infty, t])\big) \in \mathcal G_t, \qquad (8)$$
where the last relation follows from the proof of that proposition.

To prove the converse claim, we first show that for every $\mathbb G$-predictable set $H$ there exists an $\widehat{\mathbb F}$-predictable process $\{J_t(\omega,u)\}_{t\in[0,T]}$, measurable in $(t,\omega,u)$, such that
$$\mathbf 1_H(s,\omega) = J_s(\omega, G(\omega)), \quad P\text{-a.s.},\ s \in [0,T].$$
We have
$$\mathcal G_t = \mathcal F_t \vee \sigma(G) = \sigma\big(\{F \cap G^{-1}(B) : F \in \mathcal F_t,\ B \in \mathcal B(\mathbb R)\}\big), \quad t \in [0,T].$$
From the definition of the predictable $\sigma$-algebra, we get
$$\mathcal P(\mathbb G) = \sigma\big(\{(t,\infty) \times (F \cap G^{-1}(B)) : F \in \mathcal F_t,\ B \in \mathcal B(\mathbb R),\ t \in [0,T]\} \cup \{\{0\} \times (F \cap G^{-1}(B)) : F \in \mathcal F_0,\ B \in \mathcal B(\mathbb R)\}\big).$$
We start with a set in the generator of $\mathcal P(\mathbb G)$. So let $t \in [0,T]$, $F \in \mathcal F_t$, $B \in \mathcal B(\mathbb R)$, and suppose that $H = (t,\infty) \times (F \cap G^{-1}(B))$. Define
$$J_s(\omega, u) := \mathbf 1_{(t,\infty) \times F \times B}(s, \omega, u), \quad s \in [0,T].$$
Then $J$ is $\widehat{\mathbb F}$-predictable, because $(t,\infty) \times F \times B$ is an $\widehat{\mathbb F}$-predictable set. Moreover, for $(s,\omega) \in [0,T] \times \Omega$ we have
$$\mathbf 1_H(s,\omega) = \mathbf 1_{(t,\infty) \times F}(s,\omega) \cdot \mathbf 1_B(G(\omega)) = J_s(\omega, G(\omega)).$$
For $H = \{0\} \times (F \cap G^{-1}(B))$ with $F \in \mathcal F_0$, $B \in \mathcal B(\mathbb R)$, we argue similarly. Now define
$$\Lambda := \big\{H \in \mathcal P(\mathbb G) \mid \exists J\ \widehat{\mathbb F}\text{-predictable such that } \mathbf 1_H(t, \omega) = J_t(\omega, G(\omega)),\ P\text{-a.s., for } t \in [0,T]\big\}.$$
We know that the generating sets of $\mathcal P(\mathbb G)$ belong to $\Lambda$. Furthermore, $\Lambda$ is a $\lambda$-system, so that according to Dynkin's $\pi$-$\lambda$ theorem we have $\mathcal P(\mathbb G) \subseteq \Lambda$. Now suppose that $\tau'$ is a $\mathbb G$-stopping time.
Then $[\![0, \tau']\!] \in \mathcal P(\mathbb G)$. So by what has been shown, there exists an $\widehat{\mathbb F}$-predictable process $J$, measurable in $(\omega,u)$, such that
$$\mathbf 1_{[\![0,\tau']\!]}(t, \omega) = J_t(\omega, G(\omega)), \quad P\text{-a.s.},\ t \in [0,T].$$
Now define
$$\tau(\omega, u) := \inf\{t > 0 : J_t(\omega, u) = 0\}.$$
The process $J$ is $\widehat{\mathbb F}$-predictable, hence $\widehat{\mathbb F}$-progressively measurable. So by the Début theorem, $\tau$ is an $\widehat{\mathbb F}$-stopping time. Moreover, for $P$-a.e. $\omega \in \Omega$ we have $\tau'(\omega) = \tau(\omega, G(\omega))$. This completes the proof.

3.2 Value function in an initially enlarged filtration

Corollary 3.4.
Let $\tau : \widehat\Omega \to \mathbb R_+$ be an $\widehat{\mathbb F}$-stopping time. Then for every $u \in \mathbb R$, $\tau(u) = \tau(\cdot, u)$ is an $\mathbb F$-stopping time.

Proof. Let $u \in \mathbb R$ and $t \in [0,T]$. Since
$$\{(\omega, u') \mid \tau(\omega, u') \le t\} \in \widehat{\mathcal F}_t = \bigcap_{s>t} (\mathcal F_s \otimes \mathcal B(\mathbb R)),$$
its $u$-section satisfies
$$\{\omega \mid \tau(\omega, u) \le t\} \in \bigcap_{s>t} \mathcal F_s = \mathcal F_t.$$
Since $u$ is arbitrary, the proof is complete.

We recall a "parametrized" version of the conditional expectation.
Lemma 3.5.
Let $(U, \mathcal U)$ be a measurable space and $X : \Omega \times U \to \mathbb R$ be an $\mathcal F \otimes \mathcal U$-measurable random variable satisfying one of the following conditions:
(1) $X$ is positive;
(2) $E[|X(\cdot, u)|] < \infty$ for all $u \in U$.
Then there exists a $\mathcal G \otimes \mathcal U$-measurable random variable $Y : \Omega \times U \to \mathbb R$ such that for all $u \in U$,
$$Y(\cdot, u) = E[X(\cdot, u) \mid \mathcal G], \quad P\text{-a.s.}$$
See [39], p. 115.
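The content of Lemma 3.5 can be made concrete on a finite probability space (a toy example of ours, not taken from [39]): the conditional expectation is computed parameter by parameter, and the results assemble into one jointly measurable function $Y(\omega, u)$.

```python
from fractions import Fraction

# Toy setup: Omega = {0,1,2,3} with uniform P, and the sub-sigma-algebra
# G generated by the partition {0,1} | {2,3}.
omega_space = [0, 1, 2, 3]
blocks = [[0, 1], [2, 3]]                          # atoms of G
params = [Fraction(1), Fraction(2), Fraction(5)]   # parameter set U

def X(omega, u):
    # a jointly measurable X(omega, u); measurability is trivial here
    return u * omega

def cond_exp_given_G(omega, u):
    """Y(omega, u) = E[X(., u) | G](omega): average X over the G-atom of omega."""
    block = next(b for b in blocks if omega in b)
    return sum(X(w, u) for w in block) / len(block)

# For each fixed u this is the usual conditional expectation, and
# (omega, u) -> Y(omega, u) is a single well-defined function as in the lemma.
Y = {(w, u): cond_exp_given_G(w, u) for w in omega_space for u in params}
```

For each parameter value the tower property $E[Y(\cdot,u)] = E[X(\cdot,u)]$ holds, which is the per-$u$ statement of the lemma.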
Remark 3.6.
We denote a random variable $X : \widehat\Omega \to \mathbb R$ by $X(\cdot)$ to emphasize its dependence on a parameter; we mean $X(u) = X(\omega, u)$, $\omega \in \Omega$.

For our next steps we need to introduce the following notation. Recall the payoff process $R$, and set
$$R : \widehat\Omega \times \mathbb R_+ \to \mathbb R, \quad (u, t) \mapsto L_t\, \alpha_t(u)\, \mathbf 1_{[0,T)}(t) + \xi\, \alpha_T(u)\, \mathbf 1_{\{T\}}(t). \qquad (9)$$
We denote this new payoff function on the product space again by $R$. Note that, as opposed to the first one, it now acts on two variables.

Remark 3.7.
Note that for an $\widehat{\mathbb F}$-stopping time $\tau : \widehat\Omega \to \mathbb R_+$, $R(\cdot, \tau(\cdot)) : \widehat\Omega \to \mathbb R$ is a positive $\widehat{\mathcal F}$-measurable random variable. Since it is a payoff function, Lemma 3.5 guarantees the existence of an $\mathcal F_t \otimes \mathcal B(\mathbb R)$-measurable version of $E[R(u, \tau(u)) \mid \mathcal F_t]$ for $u \in \mathbb R$, $t \in [0,T]$.

Proposition 3.8.
Let $t \in [0,T]$. Then the following equation holds:
$$\big(E[R(u, \tau(u)) \mid \mathcal F_t]\big)_{u=G} = \big(\widehat E[R(\cdot, \tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G, \quad P\text{-a.s.}$$

Proof.
We will show that for every bounded $\mathcal F_t \otimes \mathcal B(\mathbb R)$-measurable random variable $K : \Omega \times \mathbb R \to \mathbb R$ we have
$$E\big[\big(E[R(u,\tau(u)) \mid \mathcal F_t]\big)_{u=G}\, K(G)\big] = E\big[\big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G\, K(G)\big]. \qquad (10)$$
Since $\big(E[R(u,\tau(u)) \mid \mathcal F_t]\big)_{u=G}$ and $\big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G$ are $\mathcal G_t$-measurable random variables, the assertion then follows from (10) and monotone class arguments.

To show (10), note that $K(\cdot)$ and $\alpha_t(\cdot)$ are $\mathcal F_t \otimes \mathcal B(\mathbb R)$-measurable, hence $K(u)$ and $\alpha_t(u)$ are $\mathcal F_t$-measurable for $u \in \mathbb R$. We obtain
$$E\big[\big(E[R(u,\tau(u)) \mid \mathcal F_t]\big)_{u=G}\, K(G)\big] = E\Big[E\big[\big(E[R(u,\tau(u)) \mid \mathcal F_t]\big)_{u=G}\, K(G) \mid \mathcal F_t\big]\Big]$$
$$= E\Big[\int_{\mathbb R} E[R(u,\tau(u)) \mid \mathcal F_t]\, K(u)\, \alpha_t(u)\, dP_G(u)\Big] = E\Big[\int_{\mathbb R} R(u,\tau(u))\, K(u)\, \alpha_t(u)\, dP_G(u)\Big].$$
On the other hand,
$$E\big[\big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G\, K(G)\big] = E\Big[E\big[\big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G\, K(G) \mid \mathcal F_t\big]\Big]$$
$$= E\Big[\int_{\mathbb R} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, K(u)\, \alpha_t(u)\, dP_G(u)\Big] = \widehat E\Big[\widehat E\big[R(\cdot,\tau(\cdot))\, K(\cdot)\, \alpha_t(\cdot) \mid \widehat{\mathcal F}_t\big]\Big] = E\Big[\int_{\mathbb R} R(u,\tau(u))\, K(u)\, \alpha_t(u)\, dP_G(u)\Big].$$
The last two equalities follow from the definition of $\widehat E$ in (5).

Remark 3.9.
Let $t \in [0,T]$, $u \in \mathbb R$, and suppose $G = u$ is constant $P$-a.s. Then from Remark 3.7 we have
$$E[R(u, \tau(u)) \mid \mathcal F_t] = \big(\widehat E[R(\cdot, \tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u, \quad \widehat P\text{-a.s.}$$

The following result gives a useful clue for calculating conditional expectations with respect to the larger filtration.
Lemma 3.10.
Suppose that $X : \widehat\Omega \times \mathbb R_+ \to \mathbb R$ is a process, $t \in [0,T]$, and $G : \Omega \to \mathbb R$ a random variable such that $X_t(G)$ is $\mathcal G_t$-measurable and $P$-integrable. Then for $s \le t$,
$$E[X_t(G) \mid \mathcal G_s] = \frac{1}{\alpha_s(G)}\, \big(E[X_t(u)\, \alpha_t(u) \mid \mathcal F_s]\big)_{u=G}.$$

Proof.
See [7], p. 5.
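A quick consistency check of this formula (a remark of ours, not from [7]): taking $X_t(u) \equiv 1$, the $\mathbb F$-martingale property of $\alpha(u)$ gives

```latex
E[1 \mid \mathcal G_s]
  \;=\; \frac{1}{\alpha_s(G)}\,\big( E[\alpha_t(u) \mid \mathcal F_s] \big)_{u=G}
  \;=\; \frac{\alpha_s(G)}{\alpha_s(G)} \;=\; 1,
```

so the normalization by $\alpha_s(G)$ is exactly what turns the parametrized $\mathbb F$-conditional expectation into a genuine $\mathbb G$-conditional expectation.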
Theorem 3.11.
Let $t \in [0,T]$. Under Assumption 2.1 on $G$ and the integrability condition (2) on $R$, we have
$$V_t^G := \operatorname*{ess\,sup}_{\tau' \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau') \mid \mathcal G_t] = \frac{1}{\alpha_t(G)} \Big(\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot, \tau(\cdot)) \mid \widehat{\mathcal F}_t]\Big)_G.$$

Proof.
Let $\tau' \in \mathcal T_{t,T}(\mathbb G)$. From Proposition 3.3 and Lemma 3.10 we have
$$E[R(\tau') \mid \mathcal G_t] = E[R(\tau(G)) \mid \mathcal G_t] = \frac{1}{\alpha_t(G)}\,\big(E[R(\tau(u))\, \alpha_T(u) \mid \mathcal F_t]\big)_{u=G},$$
where $\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})$. By Corollary 3.4, for each $u \in \mathbb R$, $\tau(u)$ is an $\mathbb F$-stopping time. So, using iterated conditional expectations and the martingale property of $(\alpha_t(u))_{t\in[0,T]}$ w.r.t. $\mathbb F$, we get
$$E[R(\tau(u))\, \alpha_T(u) \mid \mathcal F_t] = E\Big[E\big[\big(L_{\tau(u)}\,\mathbf 1_{[0,T)}(\tau(u)) + \xi\,\mathbf 1_{\{T\}}(\tau(u))\big)\, \alpha_T(u) \mid \mathcal F_{\tau(u)}\big] \mid \mathcal F_t\Big]$$
$$= E\big[L_{\tau(u)}\, \alpha_{\tau(u)}(u)\, \mathbf 1_{[0,T)}(\tau(u)) + \xi\, \alpha_T(u)\, \mathbf 1_{\{T\}}(\tau(u)) \mid \mathcal F_t\big] = E[R(u, \tau(u)) \mid \mathcal F_t].$$
Thus we have
$$\operatorname*{ess\,sup}_{\tau' \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau') \mid \mathcal G_t] = \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} E[R(\tau(G)) \mid \mathcal G_t] = \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \frac{1}{\alpha_t(G)}\,\big(E[R(\tau(u))\, \alpha_T(u) \mid \mathcal F_t]\big)_{u=G}$$
$$= \frac{1}{\alpha_t(G)} \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(E[R(u, \tau(u)) \mid \mathcal F_t]\big)_{u=G} = \frac{1}{\alpha_t(G)} \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot, \tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_G.$$
The last equality comes from Proposition 3.8. Moreover, $\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]$ is measurable in $(\omega,u)$, and the essential supremum of the measurable family $\{\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t];\ \tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})\}$ is again measurable in $(\omega,u)$. Therefore we have
$$\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u = \Big(\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\Big)_u, \quad P\text{-a.s.},$$
and this still holds $P$-a.s. if we replace $u$ by $G(\cdot)$. All in all, we obtain the claimed identity.

From Neveu [33] it is known that the essential supremum of a family $A$ of non-negative random variables is a well-defined, almost surely unique random variable.
Moreover, if $A$ is directed above, i.e. $a \vee a' \in A$ for all $a, a' \in A$, then there exists a sequence $(a_n)_{n\in\mathbb N}$ in $A$ such that $a_n \uparrow \operatorname{ess\,sup} A$ as $n \to \infty$. See Proposition VI-1-1 in [33] for a complete proof.

Proposition 3.12.
There exists a sequence of stopping times $(\tau_n)_{n\in\mathbb N}$ with $\tau_n \in \mathcal T_{t,T}$ for $n \in \mathbb N$ such that the sequence $(E[R(\tau_n) \mid \mathcal F_t])_{n\in\mathbb N}$ is increasing and such that
$$V_t = \lim_{n\to\infty} \uparrow E[R(\tau_n) \mid \mathcal F_t], \quad P\text{-a.s.}$$

Proof.
It is sufficient to show that the set $\{E[R(\tau) \mid \mathcal F_t];\ \tau \in \mathcal T_{t,T}\}$ is directed above. The result then follows from the above-cited results on the essential supremum by Neveu [33]. See Kobylanski and Quenez [28] for details of the proof and a complete discussion of the general case in which the deterministic time $t$ is replaced by a stopping time in $\mathcal T_{0,T}$.

Theorem 3.13.
Let $t \in [0,T]$. Under Assumption 2.1 on $G$ and the integrability condition (2) on $R$, we have
$$\operatorname*{ess\,sup}_{\tau' \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau') \mid \mathcal F_t] = \int_{\mathbb R} \Big(\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot, \tau(\cdot)) \mid \widehat{\mathcal F}_t]\Big)_u\, dP_G(u).$$

Proof.
Let $\tau' \in \mathcal T_{t,T}(\mathbb G)$. By Proposition 3.3 there exists an $\widehat{\mathbb F}$-stopping time $\tau(\cdot)$ such that $\tau' = \tau(G)$, $P$-a.s. We therefore have
$$E[R(\tau') \mid \mathcal F_t] = E[R(\tau(G)) \mid \mathcal F_t].$$
Using the conditional law of $G$ given $\mathcal F_T$ we get
$$E[R(\tau(G)) \mid \mathcal F_t] = E\big[E[R(\tau(G)) \mid \mathcal F_T] \mid \mathcal F_t\big] = E\Big[\int_{\mathbb R} R(\tau(u))\, \alpha_T(u)\, dP_G(u) \,\Big|\, \mathcal F_t\Big] = \int_{\mathbb R} E[R(\tau(u))\, \alpha_T(u) \mid \mathcal F_t]\, dP_G(u)$$
$$= \int_{\mathbb R} E\big[E[R(\tau(u))\, \alpha_T(u) \mid \mathcal F_{\tau(u)}] \mid \mathcal F_t\big]\, dP_G(u) = \int_{\mathbb R} E\big[L_{\tau(u)}\, \alpha_{\tau(u)}(u)\, \mathbf 1_{[0,T)}(\tau(u)) + \xi\, \alpha_T(u)\, \mathbf 1_{\{T\}}(\tau(u)) \mid \mathcal F_t\big]\, dP_G(u) = \int_{\mathbb R} E[R(u, \tau(u)) \mid \mathcal F_t]\, dP_G(u).$$
Here we use the martingale property of $(\alpha_t(u))_{t\in[0,T]}$ w.r.t. $\mathbb F$. From Remark 3.9 we further deduce
$$\operatorname*{ess\,sup}_{\tau' \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau') \mid \mathcal F_t] = \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} E[R(\tau(G)) \mid \mathcal F_t] = \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \int_{\mathbb R} E[R(u,\tau(u)) \mid \mathcal F_t]\, dP_G(u)$$
$$= \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \int_{\mathbb R} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u) = \int_{\mathbb R} \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u), \quad P\text{-a.s.}$$
To show the last equation we need to prove
$$\int_{\mathbb R} \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u) \le \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \int_{\mathbb R} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u), \quad P\text{-a.s.},$$
the reverse inequality being standard. The measurability of the family $\{\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t];\ \tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})\}$ in $(\omega, u)$ implies
$$\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u = \Big(\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\Big)_u.$$
Moreover, the family is directed above, so there exists a sequence $\tau_n(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})$ such that $\widehat P$-a.e.
$$\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t] = \lim_{n\to\infty} \widehat E[R(\cdot,\tau_n(\cdot)) \mid \widehat{\mathcal F}_t].$$
Therefore, by dominated convergence,
$$\int_{\mathbb R} \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u) = \lim_{n\to\infty} \int_{\mathbb R} \big(\widehat E[R(\cdot,\tau_n(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u) \le \operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \int_{\mathbb R} \big(\widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\big)_u\, dP_G(u), \quad P\text{-a.s.}$$
This finally allows us to deduce
$$\operatorname*{ess\,sup}_{\tau' \in \mathcal T_{t,T}(\mathbb G)} E[R(\tau') \mid \mathcal F_t] = \int_{\mathbb R} \Big(\operatorname*{ess\,sup}_{\tau(\cdot) \in \mathcal T_{t,T}(\widehat{\mathbb F})} \widehat E[R(\cdot,\tau(\cdot)) \mid \widehat{\mathcal F}_t]\Big)_u\, dP_G(u).$$

For both $\mathcal H_t = \mathcal F_t$ and $\mathcal H_t = \mathcal G_t$, we could thus express the optimal expected payoff (6) through the value function of a new optimal stopping problem on the product space. Since optimal stopping problems and reflected BSDE are known to be connected via the Snell envelope, it seems natural to look for the corresponding RBSDE on the product space. This will lead us to consider parametrized RBSDE, where the parameter ranges over the possible values of a random variable $G$ initially enlarging an underlying filtration. It will be of independent interest to investigate such parametrized RBSDE. This is the goal of the following section. Since the martingale representation property plays an important role for RBSDE, we need to suppose that the reference filtration $\mathbb F$ is the natural filtration of a Brownian motion.

4 RBSDE in an initially enlarged filtration

4.1 Reflected BSDE

Reflected BSDE (RBSDE) were studied by El Karoui et al. [9] on a Brownian basis. Solution processes of such equations are constrained to stay above a given process called the obstacle or barrier. Our work generalizes [9] to the setting of parametrized RBSDE where the reference filtration is the natural filtration of a Brownian motion. Let $B = (B_t)_{0\le t\le T}$ be a one-dimensional Brownian motion defined on a probability space $(\Omega, \mathcal F, P)$, and let $\mathbb F = (\mathcal F_t)_{0\le t\le T}$ be the natural filtration of $B$, which satisfies the usual conditions of completeness and right-continuity.
Denote
$$\mathbb L^2 = \{X : X\ \mathcal F_T\text{-measurable random variable},\ E(|X|^2) < \infty\},$$
$$\mathbb H^2 = \{X : X = (X_t)_{0\le t\le T}\ \text{predictable process},\ E\int_0^T |X_t|^2\, dt < \infty\},$$
$$\mathbb S^2 = \{X : X = (X_t)_{0\le t\le T}\ \text{continuous predictable process},\ E\big(\sup_{0\le t\le T} |X_t|^2\big) < \infty\},$$
$$\mathbb I^2 = \{K : K = (K_t)_{0\le t\le T}\ \text{non-decreasing continuous process},\ K_0 = 0,\ K_T \in \mathbb L^2\}.$$
As in El Karoui et al. [9], consider a triplet of standard parameters $(\xi, f, L)$ satisfying the following conditions:
(i) $\xi \in \mathbb L^2$;
(ii) $f : \Omega \times [0,T] \times \mathbb R \times \mathbb R \to \mathbb R$ is such that $f(\cdot, \cdot, y, z)$ is predictable, $E\big[\int_0^T f(\cdot, t, 0, 0)^2\, dt\big] < \infty$, and $f$ is globally Lipschitz continuous in $(y,z)$, uniformly in $(\omega, t) \in \Omega \times [0,T]$;
(iii) $L \in \mathbb S^2$.
$\xi$ is called the terminal variable, $f$ the driver, and $L$ the barrier process. We shall always assume that $L_T \le \xi$. A triplet $(Y, Z, K) \in \mathbb S^2 \times \mathbb H^2 \times \mathbb I^2$ is a solution of the reflected backward stochastic differential equation (RBSDE) associated with $(\xi, f, L)$ if it satisfies the following equations resp. inequalities for any $t \in [0,T]$:
$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\, ds + K_T - K_t - \int_t^T Z_s\, dB_s, \qquad Y_t \ge L_t, \qquad \int_0^T (Y_t - L_t)\, dK_t = 0. \qquad (11)$$
$K$ controls $Y$ to stay above the barrier $L$. The condition $\int_0^T (Y_t - L_t)\, dK_t = 0$, known as the Skorokhod condition, guarantees that the process $K$ acts in a minimal fashion. If the standard parameters satisfy (i)-(iii), there exists a unique solution of (11) (see El Karoui et al. [9]). In case the barrier $L$ is just optional and right upper semicontinuous, the existence of a unique solution of the RBSDE is shown in Grigorova et al. [15]; the component $Y$ is càdlàg in this case.

Remark 4.1. If $f$ does not depend on $y$ and $z$, condition (ii) can be simplified to
(ii*) $f : \Omega \times [0,T] \to \mathbb R$ is a predictable process s.t. $E\big[\int_0^T f_t^2\, dt\big] < \infty$.

4.2 RBSDE and optimal stopping problems

Snell envelopes provide the well known link between value functions of optimal stopping problems and solutions of corresponding RBSDE (see for example El Karoui et al. [9]).
We shall extend this link to the framework of parametrized RBSDE defined on the product space. We start by recalling some basic facts from the classical theory.
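The mechanics of the RBSDE (11) can be illustrated by an explicit backward scheme on a binomial approximation of $B$ (a sketch of ours; the discretization, driver and barrier below are hypothetical choices, with a driver $f(t,y)$ not depending on $z$ for simplicity):

```python
import math

def solve_rbsde(xi, f, L, T, N):
    """Explicit backward scheme for the RBSDE (11) on a binomial random walk.

    The Brownian motion is approximated by a scaled walk: node j at step n
    carries the value b = (2*j - n) * sqrt(dt).  At each step we compute the
    unreflected value and then push it up to the barrier; the amount added is
    the increment of the non-decreasing process K, so a discrete Skorokhod
    condition (K grows only where Y would fall below L) holds by construction.
    """
    dt = T / N
    sdt = math.sqrt(dt)
    # terminal condition Y_T = xi(B_T)
    Y = [xi((2 * j - N) * sdt) for j in range(N + 1)]
    for n in range(N - 1, -1, -1):
        for j in range(n + 1):
            b = (2 * j - n) * sdt
            cond = 0.5 * (Y[j + 1] + Y[j])          # E[Y_{n+1} | F_n]
            y_unrefl = cond + f(n * dt, cond) * dt  # driver term
            Y[j] = max(y_unrefl, L(n * dt, b))      # reflection at the barrier
    return Y[0]
```

With $f \equiv 0$, $\xi = B_T^+$ and barrier $L_t = B_t^+$, the payoff is a submartingale, stopping at $T$ is optimal, and $Y_0 \approx E[B_T^+] = \sqrt{T/(2\pi)} \approx 0.399$ for $T = 1$; a negative driver lowers the solution, consistent with the comparison theorem for RBSDE.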
Proposition 4.2.
Let $(Y, Z, K)$ be the solution of the RBSDE (11). Then
$$Y_t = \operatorname*{ess\,sup}_{\tau \in \mathcal T_{t,T}(\mathbb F)} E\Big[\int_t^\tau f(s, Y_s, Z_s)\, ds + L_\tau\, \mathbf 1_{[0,T)}(\tau) + \xi\, \mathbf 1_{\{T\}}(\tau) \,\Big|\, \mathcal F_t\Big],$$
where $\mathcal T_{t,T}(\mathbb F)$ is the set of all $\mathbb F$-stopping times with values in $[t,T]$.

Proof.
See [9].
Proposition 4.3.
Suppose that $f = (f_t)_{0\le t\le T}$ is an $\mathbb F$-progressively measurable process that does not depend on $y$ and $z$. Under assumptions (i), (ii*), and (iii), the RBSDE (11) with driver $f$ has a unique solution $\{(Y_t, Z_t, K_t);\ 0 \le t \le T\}$.

Proof. See [9].

When $f$ does not depend on $y, z$, the link between RBSDE and optimal stopping problems via the Snell envelope becomes very explicit. This is stated in the following proposition, which is mentioned in [9].

Proposition 4.4.
Suppose that $f$ is an $\mathbb F$-progressively measurable process that does not depend on $y$ and $z$. Under the assumptions (i), (ii*), and (iii), $Y + \int_0^\cdot f_s\, ds$ is the value function of an optimal stopping problem with payoff
$$\int_0^\cdot f_s\, ds + L\, \mathbf 1_{[0,T)} + \xi\, \mathbf 1_{\{T\}},$$
where $Y$ is the first component of the solution triplet of the RBSDE (11) with driver $f$. Furthermore, for $t \in [0,T]$ the stopping time $\tau^* = \inf\{s \in [t,T] : Y_s = L_s\} \wedge T$ is optimal, in the sense that
$$Y_t = E\Big[\int_t^{\tau^*} f_s\, ds + L_{\tau^*}\, \mathbf 1_{[0,T)}(\tau^*) + \xi\, \mathbf 1_{\{T\}}(\tau^*) \,\Big|\, \mathcal F_t\Big].$$

Remark 4.5. If $f \equiv 0$ then $Y$, the first component of the solution of RBSDE (11), is the value function of the American contingent claim with payoff $L_t\, \mathbf 1_{[0,T)}(t) + \xi\, \mathbf 1_{\{T\}}(t)$, and $\tau^*$ is the optimal stopping time for the buyer after time $t$.

4.3 Parametrized RBSDE

In the sequel, we suppose that $(\Omega, \mathcal F, \mathbb F, P)$ is the filtered probability space carrying a one-dimensional Brownian motion $B$, and $\mathbb F = (\mathcal F_t)_{0\le t\le T}$ is the Brownian standard filtration. Recall our product space $(\widehat\Omega, \widehat{\mathcal F}, \widehat{\mathbb F}, \widehat P)$ from (4). In order to obtain solutions of RBSDE in the initially enlarged filtration in the following section, the statement of problems with initial enlargements in the framework of product spaces now leads us to consider RBSDE on such product spaces. As the main ingredient for obtaining solutions of RBSDE, we need a martingale representation theorem in this setting. For this purpose, some preparations are needed.

Remark 4.6.
If for a random variable $X : \widehat\Omega \to \mathbb R$ we write $X(\cdot)$, the "$\cdot$" stands for the parameter $u \in \mathbb R$.

Proposition 4.7.
Suppose that $M : \widehat\Omega \times [0,T] \to \mathbb R$ is an $\widehat{\mathcal F}$-measurable function such that for each $u \in \mathbb R$, $\{M_t(u)\}_{t\in[0,T]}$ is a martingale w.r.t. $\mathbb F$ and $\int_{\mathbb R} E[|M_t(u)|]\, d\eta(u) < +\infty$. Then $\{M_t(\cdot)\}_{t\in[0,T]}$ is a martingale w.r.t. $\widehat{\mathbb F}$.

Proof.
For $t \in [0,T]$ we have from Fubini's theorem
$$\widehat E[|M_t(\cdot)|] = \int_{\mathbb R} E[|M_t(u)|]\, d\eta(u) < +\infty.$$
Suppose that $s \le t$, $C \in \mathcal F_s$, and $D \in \mathcal B(\mathbb R)$. From Fubini's theorem and the martingale property of $\{M_t(u)\}_{t\in[0,T]}$ w.r.t. $\mathbb F$ we have
$$\widehat E[M_t(\cdot)\, \mathbf 1_{C\times D}(\cdot)] = E\Big[\int_{\mathbb R} M_t(u)\, \mathbf 1_C\, \mathbf 1_D(u)\, d\eta(u)\Big] = \int_{\mathbb R} E[M_t(u)\, \mathbf 1_C]\, \mathbf 1_D(u)\, d\eta(u)$$
$$= \int_{\mathbb R} E[M_s(u)\, \mathbf 1_C]\, \mathbf 1_D(u)\, d\eta(u) = E\Big[\int_{\mathbb R} M_s(u)\, \mathbf 1_C\, \mathbf 1_D(u)\, d\eta(u)\Big] = \widehat E[M_s(\cdot)\, \mathbf 1_{C\times D}(\cdot)]. \qquad (12)$$
Define
$$\mathcal E := \{A \in \widehat{\mathcal F}_s : \widehat E[M_t(\cdot)\, \mathbf 1_A(\cdot)] = \widehat E[M_s(\cdot)\, \mathbf 1_A(\cdot)]\} \quad \text{and} \quad \mathcal H := \{C \times D;\ C \in \mathcal F_s,\ D \in \mathcal B(\mathbb R)\}.$$
From (12), we have $\mathcal H \subseteq \mathcal E$. Moreover, $\mathcal H$ is a $\pi$-system and $\mathcal E$ is a $\lambda$-system, so by Dynkin's $\pi$-$\lambda$ theorem we have
$$\widehat E[M_t(\cdot)\, \mathbf 1_A(\cdot)] = \widehat E[M_s(\cdot)\, \mathbf 1_A(\cdot)] \quad \text{for all } A \in \widehat{\mathcal F}_s.$$

Corollary 4.8.
We define $\hat B:\hat\Omega\times[0,T]\to\mathbb R$ by $\hat B_t(\omega,u) := B_t(\omega)$, where $B = (B_t)_{t\in[0,T]}$ is a Brownian motion w.r.t. $\mathbb F$. Then from the above proposition, $\hat B(\cdot)$ is a Brownian motion w.r.t. $\hat{\mathbb F}$.

Proposition 4.9.
Let $\hat X:\hat\Omega\times[0,T]\to\mathbb R$ and $X:\Omega\times[0,T]\to\mathbb R$ be two stochastic processes such that for each $t\in[0,T]$, $\hat X_t(\omega,u) = X_t(\omega)$, $u\in\mathbb R$. Then we have
\[ \sigma(\hat X_s(\cdot),\ 0\le s\le t) = \sigma(X_s,\ 0\le s\le t)\otimes\{\emptyset,\mathbb R\},\quad t\in[0,T]. \]

Proof.
It is clear that $\sigma(X_s,\ 0\le s\le t)\otimes\{\emptyset,\mathbb R\}$ is contained in the natural filtration of $\hat X(\cdot)$ on $[0,t]$. On the other hand, since $\hat X$ is constant in the second variable, the natural filtration of $\hat X(\cdot)$ on $[0,t]$ is also contained in $\sigma(X_s,\ 0\le s\le t)\otimes\{\emptyset,\mathbb R\}$.

Corollary 4.10.
Proposition 4.9 implies that for $\hat B(\cdot)$ defined in Corollary 4.8, we have
\[ \sigma(\hat B_s(\cdot),\ 0\le s\le t) = \sigma(B_s,\ 0\le s\le t)\otimes\{\emptyset,\mathbb R\},\quad t\in[0,T]. \]
Furthermore, since $\mathbb F = (\mathcal F_t)_{0\le t\le T}$ is the natural filtration generated by $B$,
\[ \sigma(\hat B_s(\cdot),\ 0\le s\le t) = \mathcal F_t\otimes\{\emptyset,\mathbb R\},\quad t\in[0,T]. \]
The proposition above implies that the natural filtration generated by $\hat B(\cdot)$ is a subset of the product filtration $\hat{\mathbb F} = (\hat{\mathcal F}_t)_{0\le t\le T}$ given by $\hat{\mathcal F}_t = \bigcap_{s>t}(\mathcal F_s\otimes\mathcal B(\mathbb R))$, $t\in[0,T]$. So it is not clear a priori that the martingale representation property extends to the product space. However, a simple direct argument making use of the product structure will show that the martingale representation theorem carries over from the first factor to the whole space. For this we need the following preliminaries.

Corollary 4.11.
Let $X:\hat\Omega\times[0,T]\to\mathbb R$ be an $\hat{\mathcal F}\otimes\mathcal B([0,T])$-measurable function such that $\int_0^T X_s(u)\,dB_s$ is defined for $u\in\mathbb R$. Then
\[ \hat E\Big[\Big(\int_0^T X_s(\cdot)\,dB_s\Big)^2\Big] = \hat E\Big[\int_0^T X_s^2(\cdot)\,ds\Big]. \]

Proof.
It can be easily deduced from the definition of $\hat E$, Fubini's theorem, and Itô's isometry that
\[ \hat E\Big[\Big(\int_0^T X_s(\cdot)\,dB_s\Big)^2\Big] = \int_{\mathbb R} E\Big[\Big(\int_0^T X_s(u)\,dB_s\Big)^2\Big]\,d\eta(u) = \int_{\mathbb R} E\Big[\int_0^T X_s^2(u)\,ds\Big]\,d\eta(u) = \hat E\Big[\int_0^T X_s^2(\cdot)\,ds\Big]. \]

Consider the probability space $(\hat\Omega,\hat{\mathcal F}_T,\hat P)$. Let $L^2$ be the space of $\hat{\mathcal F}_T$-measurable random variables $X$ that are square integrable, i.e. $\hat E(|X|^2) < +\infty$. We denote by $BL^2$ the subspace of $L^2$ consisting of bounded elements, and by $SBL^2$ the subspace of $BL^2$ composed of linear combinations of random variables of the form $H(\omega)K(u)$, where $H$ is $\mathcal F_T$-measurable and bounded and $K$ is $\mathcal B(\mathbb R)$-measurable and bounded.

Theorem 4.12.
Let $M:\hat\Omega\to\mathbb R$ be an $\hat{\mathcal F}_T$-measurable random variable such that $\hat E(|M(\cdot)|^2) < +\infty$. Then there exists a unique $\hat{\mathcal F}$-measurable function $\hat Z:\hat\Omega\times[0,T]\to\mathbb R$, predictable w.r.t. $\hat{\mathbb F}$, such that $\hat E(\int_0^T |\hat Z_s(\cdot)|^2\,ds) < +\infty$ and
\[ M(\cdot) = M_0(\cdot) + \int_0^T \hat Z_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.}, \]
where $M_0(\cdot)$ is $\hat{\mathcal F}_0$-measurable.

Proof.
First we suppose that $M(\cdot)\in BL^2$. Since $SBL^2$ is dense in $BL^2$, for each $M(\cdot)\in BL^2$ there exists a sequence $\{M^n(\cdot)\}_{n\in\mathbb N}\subseteq SBL^2$ such that $M^n(\cdot)\to M(\cdot)$ in $L^2$. Thus, by linearity it suffices to prove the theorem for $M(\omega,u) = H(\omega)K(u)\in SBL^2$. Since $H\in L^2(\Omega,\mathcal F_T,P)$ and $\mathbb F$ is a Brownian filtration, the martingale representation theorem yields a unique $\mathcal F$-measurable process $Z = (Z_t)_{t\in[0,T]}$, predictable w.r.t. $\mathbb F$ with $E(\int_0^T |Z_s|^2\,ds) < +\infty$, such that
\[ H = H_0 + \int_0^T Z_s\,dB_s,\quad P\text{-a.s.}, \]
where $H_0$ is $\mathcal F_0$-measurable. Multiplying by $K$, we get for $\eta$-a.e. $u\in\mathbb R$
\[ M(u) = M_0(u) + \int_0^T \hat Z_s(u)\,dB_s,\quad P\text{-a.s.}, \tag{13} \]
where $M_0(u) := H_0 K(u)$ and $\hat Z_s(u) := Z_s K(u)$. It is easily seen that $M_0(\cdot)$ is $\hat{\mathcal F}_0$-measurable and $\hat Z(\cdot)$ is $\hat{\mathbb F}$-predictable. Furthermore, from the boundedness of $K$ we have
\[ \hat E\Big(\int_0^T |\hat Z_s(\cdot)|^2\,ds\Big) = E\Big(\int_0^T |Z_s|^2\,ds\Big)\Big(\int_{\mathbb R} K^2(u)\,d\eta(u)\Big) < +\infty. \]
Since the null sets do not depend on $u$, we have
\[ M(\cdot) = M_0(\cdot) + \int_0^T \hat Z_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.} \]
Now for $M(\cdot)\in BL^2$, take $\{M^n(\cdot)\}_{n\in\mathbb N}\subseteq SBL^2$ with $M^n(\cdot)\to M(\cdot)$ in $L^2$; then $\{M^n(\cdot)\}_{n\in\mathbb N}$ is Cauchy in $L^2$. On the other hand, we have
\[ \hat E\big(|M^n(\cdot)-M^m(\cdot)|^2\big) = \hat E\big(|M^n_0(\cdot)-M^m_0(\cdot)|^2\big) + \hat E\Big(\Big|\int_0^T(\hat Z^n_s(\cdot)-\hat Z^m_s(\cdot))\,dB_s\Big|^2\Big) + 2\,\hat E\Big((M^n_0(\cdot)-M^m_0(\cdot))\int_0^T(\hat Z^n_s(\cdot)-\hat Z^m_s(\cdot))\,dB_s\Big). \]
The cross term vanishes: $M^n_0(\cdot)-M^m_0(\cdot)$ is bounded and $\hat{\mathcal F}_0$-measurable, while by Proposition 4.7 the stochastic integral is an $\hat{\mathbb F}$-martingale starting at $0$, so conditioning on $\hat{\mathcal F}_0$ gives
\[ \hat E\Big((M^n_0(\cdot)-M^m_0(\cdot))\int_0^T(\hat Z^n_s(\cdot)-\hat Z^m_s(\cdot))\,dB_s\Big) = 0. \]
Therefore $\{M^n_0(\cdot)\}_{n\in\mathbb N}$ is Cauchy in $L^2$, and by Corollary 4.11 $\{\hat Z^n(\cdot)\}_{n\in\mathbb N}$ is Cauchy in $L^2(\hat\Omega\times[0,T])$.
Thus the sequences converge to some $M_0(\cdot)$ resp. $\hat Z(\cdot)$. A subsequence of $\{M^n_0(\cdot)\}$ converges to $M_0(\cdot)$ for $\hat P$-a.e. $(\omega,u)\in\hat\Omega$, so $M_0(\cdot)$ is $\hat{\mathcal F}_0$-measurable. Similarly, by extracting a subsequence we obtain that $\hat Z(\cdot)$ is $\hat{\mathbb F}$-predictable after a modification on a set of measure zero in the product space. Using Corollary 4.11 and uniqueness of limits in $L^2$, the proof of existence of a representation is complete for $M(\cdot)\in BL^2$.

Finally, for $M(\cdot)\in L^2$ we define $M^n(\cdot) := M(\cdot)\mathbf 1_{\{|M(\cdot)|\le n\}}$, $n\in\mathbb N$. Then $\{M^n(\cdot)\}_{n\in\mathbb N}$ is a sequence of bounded random variables, and since $\hat E(|M(\cdot)|^2) < \infty$ we have $M^n(\cdot)\to M(\cdot)$ in $L^2$. On the other hand, since $M^n(\cdot)\in BL^2$, $n\in\mathbb N$, for each individual $n$ we get a representation of the form
\[ M^n(\cdot) = M^n_0(\cdot) + \int_0^T \hat Z^n_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.}, \]
where $M^n_0(\cdot)$ is $\hat{\mathcal F}_0$-measurable and $\hat Z^n(\cdot)$ is an $\hat{\mathbb F}$-predictable process. Using Corollary 4.11 again, in the same way as in the preceding part of the proof we obtain the assertion for $M(\cdot)\in L^2$.

To prove uniqueness, suppose that there are two predictable processes $\hat Z^1(\cdot)$ and $\hat Z^2(\cdot)$ such that
\[ M(\cdot) = M_0(\cdot) + \int_0^T \hat Z^1_s(\cdot)\,dB_s = M_0(\cdot) + \int_0^T \hat Z^2_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.} \]
Then from Corollary 4.11 we get
\[ 0 = \hat E\Big[\Big(\int_0^T(\hat Z^1_s(\cdot)-\hat Z^2_s(\cdot))\,dB_s\Big)^2\Big] = \hat E\Big[\int_0^T(\hat Z^1_s(\cdot)-\hat Z^2_s(\cdot))^2\,ds\Big]. \]
This implies that $\hat Z^1_s(\omega,u) = \hat Z^2_s(\omega,u)$ for a.a. $(\omega,u,s)\in\hat\Omega\times[0,T]$.

Theorem 4.13.
Let $M:\hat\Omega\times[0,T]\to\mathbb R$ be an $\hat{\mathcal F}\otimes\mathcal B([0,T])$-measurable function such that $\{M_t(\cdot)\}_{t\in[0,T]}$ is a martingale w.r.t. $\hat{\mathbb F}$ and $\hat E(|M_t(\cdot)|^2) < +\infty$ for $t\in[0,T]$. Then there exists a unique $\hat{\mathcal F}$-measurable function $\hat Z:\hat\Omega\times[0,T]\to\mathbb R$, predictable w.r.t. $\hat{\mathbb F}$, such that $\hat E(\int_0^T |\hat Z_s(\cdot)|^2\,ds) < +\infty$ and for $t\in[0,T]$
\[ M_t(\cdot) = M_0(\cdot) + \int_0^t \hat Z_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.}, \]
where $M_0(\cdot)$ is $\hat{\mathcal F}_0$-measurable.

Proof. From the martingale property, since $M_T(\cdot)\in L^2$ we have for $t\in[0,T]$
\[ M_t(\cdot) = \hat E(M_T(\cdot)\,|\,\hat{\mathcal F}_t),\quad \hat P\text{-a.e.} \]
Thus by the previous theorem there exist a unique $\hat{\mathcal F}_0$-measurable $M_0(\cdot)\in L^2$ and a unique $\hat{\mathbb F}$-predictable process $\hat Z(\cdot)$ such that
\[ M_T(\cdot) = M_0(\cdot) + \int_0^T \hat Z_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.} \]
Therefore for $t\in[0,T]$
\[ M_t(\cdot) = \hat E(M_T(\cdot)\,|\,\hat{\mathcal F}_t) = M_0(\cdot) + \int_0^t \hat Z_s(\cdot)\,dB_s,\quad \hat P\text{-a.e.} \]

4.3 Parametrized RBSDE

In our setting, the driver $f$ of an RBSDE is just an $\hat{\mathbb F}$-progressively measurable process. By employing the representation property of the preceding theorem for martingales depending on a parameter, we can define and solve parametrized reflected BSDE on the product space. For a probability measure $Q$ on $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F})$, we consider the following spaces:
\[ \hat L^2_Q = \{X(\cdot) : X(\cdot)\ \hat{\mathcal F}_T\text{-measurable random variable},\ \hat E_Q(|X(\cdot)|^2) < \infty\}, \]
\[ \hat H^2_Q = \{X(\cdot) : X(\cdot) = (X_t(\cdot))_{0\le t\le T}\ \text{predictable process},\ \hat E_Q\int_0^T |X_t(\cdot)|^2\,dt < \infty\}, \]
\[ \hat S^2_Q = \{X(\cdot) : X(\cdot) = (X_t(\cdot))_{0\le t\le T}\ \text{continuous predictable process},\ \hat E_Q\big(\sup_{0\le t\le T}|X_t(\cdot)|^2\big) < \infty\}, \]
\[ \hat I^2_Q = \{K(\cdot) : K(\cdot) = (K_t(\cdot))_{0\le t\le T}\ \text{non-decreasing continuous process},\ K_0(\cdot) = 0,\ K_T(\cdot)\in\hat L^2_Q\}. \]
Here $\hat E_Q$ stands for the expectation w.r.t. the measure $Q$. If $Q = \hat P$, we simply write $\hat E$ and also $\hat S^2$, $\hat H^2$, $\hat I^2$. Now consider the filtered probability space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat P)$. By Corollary 4.8, $\hat B$ is still a Brownian motion on this probability space.
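The factorized integrand in the proof of Theorem 4.12 can be made concrete in a small numerical experiment: for $M(\omega,u) = H(\omega)K(u)$ with $H = B_T$ (whose Brownian representation has $Z\equiv 1$ and $H_0 = 0$), the lifted integrand is $\hat Z_s(u) = K(u)$. The following seeded sketch (our own illustration; the names and parameters are not from the paper) checks the parametrized representation path by path on a discretized Brownian motion:

```python
import random

random.seed(42)
n, T = 1000, 1.0
dt = T / n
dB = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]  # Brownian increments
B_T = sum(dB)

# bounded B(R)-measurable factor K, as in the definition of SBL^2
K = lambda u: 1.0 / (1.0 + u * u)

for u in (-1.0, 0.0, 2.5):
    M = B_T * K(u)                        # M(omega, u) = H(omega) K(u), H = B_T
    # H = B_T = int_0^T 1 dB_s, so Z_hat_s(u) = 1 * K(u) and M_0(u) = 0
    stoch_int = sum(K(u) * db for db in dB)
    assert abs(M - stoch_int) < 1e-9
```

The identity is exact here up to floating-point rounding, because the discrete stochastic integral of a constant-in-time integrand telescopes to $K(u)B_T$; this mirrors how the representation on the product space is inherited factor by factor.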
Now let a triplet $(\xi(\cdot), f(\cdot), L(\cdot))$ be given satisfying

(i') $\xi(\cdot)\in\hat L^2$; (ii') $f(\cdot)$ is a predictable process s.t. $\hat E\big[\int_0^T f_t^2(\cdot)\,dt\big] < \infty$; (iii') $L(\cdot)\in\hat S^2$.

We call a triplet $(Y(\cdot),Z(\cdot),K(\cdot))\in\hat S^2\times\hat H^2\times\hat I^2$ a solution of the parametrized RBSDE with driver $f(\cdot)$, terminal variable $\xi(\cdot)$, and barrier $L(\cdot)$ if
\[ Y_t(\cdot) = \xi(\cdot) + \int_t^T f_s(\cdot)\,ds + K_T(\cdot) - K_t(\cdot) - \int_t^T Z_s(\cdot)\,dB_s,\quad 0\le t\le T, \]
\[ Y_t(\cdot)\ge L_t(\cdot),\quad 0\le t\le T,\qquad \int_0^T (Y_t(\cdot)-L_t(\cdot))\,dK_t(\cdot) = 0. \tag{14} \]

Remark 4.14.
For $\eta$-a.e. $u\in\mathbb R$,
\[ Y_t(u) = \xi(u) + \int_t^T f_s(u)\,ds + K_T(u) - K_t(u) - \int_t^T Z_s(u)\,dB_s,\quad 0\le t\le T, \]
\[ Y_t(u)\ge L_t(u),\quad 0\le t\le T,\qquad \int_0^T (Y_t(u)-L_t(u))\,dK_t(u) = 0, \tag{15} \]
is an RBSDE w.r.t. $(\Omega,\mathcal F,\mathbb F,P)$. This is the reason we call (14) a parametrized RBSDE.
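For fixed $u$, (15) can be approximated by the classical discrete-time reflected scheme: going backward, compute the conditional expectation of the next value, reflect it at the barrier, and record the amount of pushing as the increment of $K$. The sketch below (our own illustration on a binomial tree with an American-put barrier; the paper ignores interest rates, and we include a rate $r > 0$ only so that the reflection term is visibly non-zero) prices an American put against its European counterpart:

```python
import math

def put_prices_binomial(T=1.0, n=200, s0=100.0, sigma=0.2, r=0.05, strike=100.0):
    """American vs. European put on a CRR binomial tree.

    The American value follows the discrete reflected backward scheme:
    cont = discounted conditional expectation of the next value,
    Y = max(cont, L) (reflection at the barrier), and
    dK = Y - cont >= 0 is the reflection increment, positive only on the barrier.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    payoff = lambda s: max(strike - s, 0.0)

    am = [payoff(s0 * u ** j * d ** (n - j)) for j in range(n + 1)]  # Y_T = xi
    eu = list(am)
    k_total = 0.0
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            s = s0 * u ** j * d ** (i - j)
            cont = disc * (p * am[j + 1] + (1.0 - p) * am[j])
            y = max(cont, payoff(s))       # reflection: Y >= L
            k_total += y - cont            # discrete Skorokhod: dK > 0 only if Y = L
            am[j] = y
            eu[j] = disc * (p * eu[j + 1] + (1.0 - p) * eu[j])
    return am[0], eu[0], k_total

y_am, y_eu, k = put_prices_binomial()
assert y_am >= y_eu and k > 0.0            # reflection can only raise the value
```

The reflection increments $y - \text{cont}$ vanish off the exercise region, which is the discrete form of the Skorokhod minimality condition in (15).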
Similarly to the usual case, we shall always assume that $L_T(\cdot)\le\xi(\cdot)$. Equipped with these concepts, we can extend the classical existence and uniqueness theorem for solutions of RBSDE to the parametrized RBSDE (14), and then restate Proposition 4.4 for the product space in the subsequent remark.

Theorem 4.15.
Under assumptions (i'), (ii'), and (iii'), the RBSDE (14) has a unique solution $(Y(\cdot),Z(\cdot),K(\cdot))$.

Proof. From (ii'), $f$ is an $\hat{\mathbb F}$-progressively measurable process such that $\hat E\big[\int_0^T f_t^2(\cdot)\,dt\big] < \infty$. Thus the RBSDE (14) is a backward reflection problem (BRP) in the terminology of [9], and we can rewrite the proof given there. Consider $Y(\cdot) = (Y_t(\cdot))_{t\in[0,T]}$ defined by
\[ Y_t(\cdot) = \operatorname*{ess\,sup}_{\tau(\cdot)\in T_{t,T}(\hat{\mathbb F})} \hat E\Big[\int_t^{\tau(\cdot)} f_s(\cdot)\,ds + L_{\tau(\cdot)}(\cdot)\mathbf 1_{[0,T[}(\tau(\cdot)) + \xi(\cdot)\mathbf 1_{\{T\}}(\tau(\cdot))\,\Big|\,\hat{\mathcal F}_t\Big],\quad t\in[0,T]. \]
Then, with the argument given in [9], $Y_t(\cdot) + \int_0^t f_s(\cdot)\,ds$ is the value function of an optimal stopping problem with payoff
\[ H_t(\cdot) = \int_0^t f_s(\cdot)\,ds + L_t(\cdot)\mathbf 1_{[0,T[}(t) + \xi(\cdot)\mathbf 1_{\{T\}}(t). \]
By the theory of Snell envelopes, it is the smallest supermartingale dominating $H(\cdot)$. $Y(\cdot)$ is continuous because of the continuity of $H(\cdot)$ on $[0,T)$ and the assumption $L_T(\cdot)\le\xi(\cdot)$, which means that the jump of $H(\cdot)$ at time $T$ is non-negative. Moreover $Y(\cdot)\in\hat S^2$ by the inequality
\[ \hat E\Big(\sup_{0\le t\le T} Y_t^2(\cdot)\Big) \le c\,\hat E\Big(\xi^2(\cdot) + \int_0^T f_s^2(\cdot)\,ds + \sup_{0\le t\le T} L_t^2(\cdot)\Big), \]
which is obtained from Burkholder's inequality and conditions (i'), (ii'), and (iii'). Denote by $\tau^*(\cdot)$ the stopping time
\[ \tau^*(\cdot) = \inf\{s\in[t,T] : Y_s(\cdot)\le L_s(\cdot)\}\wedge T. \]
Then $\tau^*(\cdot)$ is optimal, in the sense that
\[ Y_t(\cdot) = \hat E\Big[\int_t^{\tau^*(\cdot)} f_s(\cdot)\,ds + L_{\tau^*(\cdot)}(\cdot)\mathbf 1_{[0,T[}(\tau^*(\cdot)) + \xi(\cdot)\mathbf 1_{\{T\}}(\tau^*(\cdot))\,\Big|\,\hat{\mathcal F}_t\Big]. \tag{16} \]
Now the Doob-Meyer decomposition of the continuous supermartingale $Y_t(\cdot) + \int_0^t f_s(\cdot)\,ds$ yields an adapted continuous increasing process $K(\cdot) = (K_t(\cdot))_{t\in[0,T]}$ and a continuous uniformly integrable martingale $M(\cdot) = (M_t(\cdot))_{t\in[0,T]}$ such that
\[ Y_t(\cdot) = M_t(\cdot) - \int_0^t f_s(\cdot)\,ds - K_t(\cdot), \]
where $K_0(\cdot) = 0$ and $K_t(\cdot) = K_{t\wedge\tau^*(\cdot)}(\cdot)$. The Skorokhod condition and square integrability of $K_T(\cdot)$ follow from arguments similar to those of [9], carried out in the product space. Hence the $\hat{\mathbb F}$-martingale
\[ M_t(\cdot) = \hat E(M_T(\cdot)\,|\,\hat{\mathcal F}_t) = \hat E\Big(\xi(\cdot) + \int_0^T f_s(\cdot)\,ds + K_T(\cdot)\,\Big|\,\hat{\mathcal F}_t\Big) \]
is also square integrable, i.e. $\hat E(|M_t(\cdot)|^2) < \infty$, $t\in[0,T]$. Thus we can use Theorem 4.13 to find a process $\hat Z(\cdot)$ such that $M_t(\cdot) = M_0(\cdot) + \int_0^t \hat Z_s(\cdot)\,dB_s$, where $\hat E(\int_0^T |\hat Z_s(\cdot)|^2\,ds) < +\infty$. Uniqueness of the solution follows from Corollary 3.7 in [9], which holds on the product space under assumptions (i'), (ii'), and (iii').

Remark 4.16.
Under the assumptions (i'), (ii'), and (iii'), $Y_t(\cdot) + \int_0^t f_s(\cdot)\,ds$ is the value function of an optimal stopping problem with payoff
\[ \int_0^t f_s(\cdot)\,ds + L_t(\cdot)\mathbf 1_{[0,T[}(t) + \xi(\cdot)\mathbf 1_{\{T\}}(t), \]
where $Y(\cdot)$ is the first component of the solution of the RBSDE (14). Furthermore the stopping time $\tau^*(\cdot) = \inf\{s\in[t,T] : Y_s(\cdot) = L_s(\cdot)\}\wedge T$ is optimal, in the sense that
\[ Y_t(\cdot) = \hat E\Big[\int_t^{\tau^*(\cdot)} f_s(\cdot)\,ds + L_{\tau^*(\cdot)}(\cdot)\mathbf 1_{[0,T[}(\tau^*(\cdot)) + \xi(\cdot)\mathbf 1_{\{T\}}(\tau^*(\cdot))\,\Big|\,\hat{\mathcal F}_t\Big]. \]
In particular, in the case $f\equiv 0$, $Y_t(\cdot)$, the first component of the solution of the RBSDE (14), is the value function of an American contingent claim with payoff $L_t(\cdot)\mathbf 1_{[0,T[}(t) + \xi(\cdot)\mathbf 1_{\{T\}}(t)$, and $\tau^*(\cdot)$ is the optimal stopping time for the buyer.

Remark 4.17.
An extension of Theorem 4.15 to the case in which $f$ also depends on $y$ and $z$, with global Lipschitz continuity in these two variables, can be obtained by means of the proof of Theorem 5.2 in [9].

4.4 RBSDE in an initially enlarged filtration

We will now show that under suitable conditions on the parametrized payoff function $R$ in (9), the corresponding value function is the solution of a parametrized RBSDE on the same product space. For this purpose, consider the product space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat P)$ from (4), where $\hat P = P\otimes P^G$ and $P^G$ is the law of the random variable $G$ which carries the extra information. We consider the RBSDE (14) with $f\equiv 0$, $L(\cdot) = L\alpha(\cdot)$, and $\xi(\cdot) = \xi\alpha_T(\cdot)$, where $L$ and $\xi$ are the barrier resp. terminal variable of the usual RBSDE (11). Then we obtain the following parametrized RBSDE:
\[ -dY_t(\cdot) = dK_t(\cdot) - Z_t(\cdot)\,dB^{\mathbb F}_t,\quad 0\le t\le T, \]
\[ Y_T(\cdot) = \xi\alpha_T(\cdot),\qquad Y_t(\cdot)\ge L_t\alpha_t(\cdot),\quad 0\le t\le T,\qquad \int_0^T (Y_t(\cdot)-L_t\alpha_t(\cdot))\,dK_t(\cdot) = 0. \tag{17} \]
Since we work with two different filtrations in this section, we denote by $B^{\mathbb F}$ a Brownian motion w.r.t. $\mathbb F$. From Theorem 4.15 and Remark 4.16 of the previous section, under conditions (i') and (iii') for $\xi\alpha_T(\cdot)$ and $L_t\alpha_t(\cdot)$, the RBSDE (17) has a unique solution $(Y(\cdot),Z(\cdot),K(\cdot))\in\hat S^2\times\hat H^2\times\hat I^2$, and $Y_t(\cdot)$ is the value function of an optimal stopping problem with payoff
\[ R_t(\cdot) = L_t\alpha_t(\cdot)\mathbf 1_{[0,T[}(t) + \xi\alpha_T(\cdot)\mathbf 1_{\{T\}}(t),\quad t\in[0,T]. \]
Theorem 3.11 motivates us to define $\hat Y(\cdot) := \frac{Y(\cdot)}{\alpha(\cdot)}$. Recall that, due to our hypotheses on $G$, $\alpha(\cdot)$ is a positive continuous martingale, so that $\sup_{s\in[0,T]}\frac{1}{\alpha_s(\cdot)} < \infty$. This implies that our definition makes sense. We will prove that $\hat Y(G)$ is the solution of an RBSDE that corresponds to the optimization problem in the enlarged filtration. Note that for each $u\in\mathbb R$, $\alpha(u)$ is a martingale w.r.t. $\mathbb F$, and for each $t\in[0,T]$ it has an $\hat{\mathcal F}$-measurable version. Therefore from Proposition 4.7, $\{\alpha_t(\cdot)\}_{t\in[0,T]}$ is a martingale w.r.t. $\hat{\mathbb F}$. If we suppose that it is $\hat P$-square integrable, then the martingale representation Theorem 4.13 yields $d\alpha_t(\cdot) = \beta_t(\cdot)\,dB^{\mathbb F}_t$, where $\beta(\cdot)$ is an $\hat{\mathbb F}$-predictable process which is square integrable with respect to $\hat P$.

By Itô's formula, we get that $\hat Y(\cdot)$ satisfies the following RBSDE:
\[ -d\hat Y_t(\cdot) = -\Big[\Big(\frac{\beta_t(\cdot)}{\alpha_t(\cdot)}\Big)^2\hat Y_t(\cdot) - \frac{Z_t(\cdot)\beta_t(\cdot)}{\alpha_t^2(\cdot)}\Big]\,dt + \frac{1}{\alpha_t(\cdot)}\,dK_t(\cdot) - \Big[\frac{Z_t(\cdot)}{\alpha_t(\cdot)} - \frac{\beta_t(\cdot)}{\alpha_t(\cdot)}\hat Y_t(\cdot)\Big]\,dB^{\mathbb F}_t,\quad 0\le t\le T, \]
\[ \hat Y_T(\cdot) = \xi,\qquad \hat Y_t(\cdot)\ge L_t,\quad 0\le t\le T,\qquad \int_0^T (\hat Y_t(\cdot)-L_t)\,\alpha_t(\cdot)\,dK_t(\cdot) = 0. \tag{18} \]
The Skorokhod condition has the stated form because
\[ \int_0^T (\hat Y_t(\cdot)-L_t)\,\alpha_t(\cdot)\,dK_t(\cdot) = \int_0^T \Big(\frac{Y_t(\cdot)}{\alpha_t(\cdot)}-L_t\Big)\alpha_t(\cdot)\,dK_t(\cdot) = \int_0^T (Y_t(\cdot)-L_t\alpha_t(\cdot))\,dK_t(\cdot) = 0,\quad \hat P\text{-a.e.} \]
We now define $\hat K(\cdot) = \int_0^\cdot \frac{1}{\alpha_s(\cdot)}\,dK_s(\cdot)$ and $\hat Z(\cdot) = \frac{Z(\cdot)}{\alpha(\cdot)} - \frac{\beta(\cdot)}{\alpha(\cdot)}\hat Y(\cdot)$. Since $\alpha(\cdot)$ is continuous in $t$ and positive, $\hat K(\cdot)$ is an increasing continuous process such that $\hat K_0\equiv 0$ and $d\hat K_t(\cdot) = \frac{dK_t(\cdot)}{\alpha_t(\cdot)}$. Furthermore, $\hat Y(\cdot)$, $\hat K(\cdot)$ and $\hat Z(\cdot)$ are $\hat{\mathbb F}$-predictable processes. This follows from the $\hat{\mathbb F}$-predictability of $Y(\cdot)$, $Z(\cdot)$, $K(\cdot)$, $\alpha(\cdot)$, and $\beta(\cdot)$. In addition, we have
\[ \int_0^T \hat Z_s^2(\cdot)\,ds \le 2\int_0^T \Big(\frac{Z_s(\cdot)}{\alpha_s(\cdot)}\Big)^2 ds + 2\int_0^T \Big(\frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\hat Y_s(\cdot)\Big)^2 ds \le 2\Big(\sup_{s\in[0,T]}\frac{1}{\alpha_s^2(\cdot)}\Big)\int_0^T Z_s^2(\cdot)\,ds + 2\Big(\sup_{s\in[0,T]} Y_s^2(\cdot)\Big)\Big(\sup_{s\in[0,T]}\frac{1}{\alpha_s^4(\cdot)}\Big)\int_0^T \beta_s^2(\cdot)\,ds < \infty,\quad \hat P\text{-a.e.}, \]
because $Y(\cdot)$ is continuous in $t$, $\alpha(\cdot)$ is continuous and strictly positive, and $Z(\cdot)$ and $\beta(\cdot)$ are square integrable on $\hat\Omega\times[0,T]$. Thus the Itô integral process of $\hat Z$ with respect to $B^{\mathbb F}$ is still defined and is a local martingale (see [34], p. 35).
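For the reader's convenience, the Itô computation behind (18) can be spelled out. With $dY_t = -dK_t + Z_t\,dB^{\mathbb F}_t$ from (17) and $d\alpha_t = \beta_t\,dB^{\mathbb F}_t$ (suppressing the parameter $\cdot$):

```latex
d\Big(\frac{1}{\alpha_t}\Big)
  = -\frac{\beta_t}{\alpha_t^{2}}\,dB^{\mathbb F}_t
    + \frac{\beta_t^{2}}{\alpha_t^{3}}\,dt,
\qquad
d\hat Y_t
  = \frac{1}{\alpha_t}\,dY_t + Y_t\,d\Big(\frac{1}{\alpha_t}\Big)
    + d\Big\langle Y,\frac{1}{\alpha}\Big\rangle_t
  = -\frac{dK_t}{\alpha_t}
    + \Big[\Big(\frac{\beta_t}{\alpha_t}\Big)^{2}\hat Y_t
         - \frac{Z_t\beta_t}{\alpha_t^{2}}\Big]\,dt
    + \Big[\frac{Z_t}{\alpha_t} - \frac{\beta_t}{\alpha_t}\hat Y_t\Big]\,dB^{\mathbb F}_t,
```

which is exactly (18); and since $\hat Z = \frac{Z}{\alpha} - \frac{\beta}{\alpha}\hat Y$ satisfies $\frac{\beta}{\alpha}\hat Z = \frac{Z\beta}{\alpha^2} - \big(\frac{\beta}{\alpha}\big)^2\hat Y$, the drift above equals $-\frac{\beta_t}{\alpha_t}\hat Z_t\,dt$, producing the driver appearing in (19) below.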
With similar arguments it can be shown that
\[ \sup_{s\in[0,T]}|\hat Y_s(\cdot)| \le \Big(\sup_{s\in[0,T]}\frac{1}{\alpha_s(\cdot)}\Big)\Big(\sup_{s\in[0,T]}|Y_s(\cdot)|\Big) < \infty,\quad \hat P\text{-a.e.} \]
Furthermore, since $K(\cdot)\in\hat I^2$, we have
\[ \hat K_T(\cdot) \le \Big(\sup_{s\in[0,T]}\frac{1}{\alpha_s(\cdot)}\Big) K_T(\cdot) < \infty,\quad \hat P\text{-a.e.} \]
Now we introduce the following spaces, corresponding to a filtration $\mathbb H = (\mathcal H_t)_{t\in[0,T]}$ on an arbitrary probability space:
\[ \bar H_{\mathbb H} = \{X : X = (X_t)_{0\le t\le T}\ \mathbb H\text{-predictable process},\ \int_0^T X_t^2\,dt < \infty\ \text{a.s.}\}, \]
\[ \bar S_{\mathbb H} = \{X : X = (X_t)_{0\le t\le T}\ \text{continuous } \mathbb H\text{-predictable process},\ \sup_{0\le t\le T}|X_t| < \infty\ \text{a.s.}\}, \]
\[ \bar I_{\mathbb H} = \{K : K = (K_t)_{0\le t\le T}\ \text{increasing continuous process},\ K_0 = 0,\ K_T\ \mathcal H_T\text{-measurable},\ K_T < \infty\ \text{a.s.}\}. \]
Therefore $(\hat Y_t(\cdot),\hat Z_t(\cdot),\hat K_t(\cdot))_{0\le t\le T}\in\bar S_{\hat{\mathbb F}}\times\bar H_{\hat{\mathbb F}}\times\bar I_{\hat{\mathbb F}}$ solves the RBSDE
\[ -d\hat Y_t(\cdot) = \frac{\beta_t(\cdot)}{\alpha_t(\cdot)}\hat Z_t(\cdot)\,dt + d\hat K_t(\cdot) - \hat Z_t(\cdot)\,dB^{\mathbb F}_t,\quad 0\le t\le T, \]
\[ \hat Y_T(\cdot) = \xi,\qquad \hat Y_t(\cdot)\ge L_t,\quad 0\le t\le T,\qquad \int_0^T (\hat Y_t(\cdot)-L_t)\,d\hat K_t(\cdot) = 0, \tag{19} \]
in $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat P)$. The Skorokhod condition follows from (18), since
\[ \int_0^T (\hat Y_t(\cdot)-L_t)\,d\hat K_t(\cdot) = \int_0^T (\hat Y_t(\cdot)-L_t)\frac{1}{\alpha_t(\cdot)}\,dK_t(\cdot) \le \Big(\sup_{t\in[0,T]}\frac{1}{\alpha_t^2(\cdot)}\Big)\int_0^T (\hat Y_t(\cdot)-L_t)\,\alpha_t(\cdot)\,dK_t(\cdot) = 0. \]
The inequality holds because $\hat Y(\cdot)\ge L$ and $\alpha(\cdot)$ is positive and continuous. The following proposition recalls the canonical decomposition of a local martingale in the smaller filtration with respect to the larger one.

Proposition 4.18.
Any $\mathbb F$-local martingale $M$ is a $\mathbb G$-semimartingale with canonical decomposition
\[ M_t = M^{\mathbb G}_t + \int_0^t \frac{d\langle M,\alpha_\cdot(G)\rangle_s}{\alpha_{s-}(G)}, \]
where $M^{\mathbb G}$ is a $\mathbb G$-local martingale.

Proof. See Theorem 2.5.c in [25]; see also [2] and [7].

The preceding proposition and the continuity of $\alpha(\cdot)$ imply
\[ B^{\mathbb F}_t = B^{\mathbb G}_t + \int_0^t \frac{d\langle B^{\mathbb F},\alpha_\cdot(G)\rangle_s}{\alpha_{s-}(G)} = B^{\mathbb G}_t + \int_0^t \frac{\beta_s(G)}{\alpha_s(G)}\,ds. \tag{20} \]
Now consider $(\hat Y_t(G),\hat Z_t(G),\hat K_t(G))_{0\le t\le T}$. By Remark 3.2, it is a triplet of $\mathbb G$-predictable processes. Evaluating (19) at $G$ and replacing $B^{\mathbb F}_t$ by means of (20), $(\hat Y(G),\hat Z(G),\hat K(G))\in\bar S_{\mathbb G}\times\bar H_{\mathbb G}\times\bar I_{\mathbb G}$ satisfies the following RBSDE in $(\Omega,\mathcal F,\mathbb G,P)$:
\[ -d\hat Y_t(G) = d\hat K_t(G) - \hat Z_t(G)\,dB^{\mathbb G}_t,\quad 0\le t\le T, \]
\[ \hat Y_T(G) = \xi,\qquad \hat Y_t(G)\ge L_t,\quad 0\le t\le T,\qquad \int_0^T (\hat Y_t(G)-L_t)\,d\hat K_t(G) = 0. \tag{21} \]
RBSDE (21) is an RBSDE in the initially enlarged filtration $\mathbb G$ with generator $f\equiv 0$. As we will see in the following section, it relates to our optimal stopping problem in the initially enlarged filtration. We will comment on the square integrability of the solution components below.

RBSDE (19) possesses a non-trivial driver independent of $y$. Similarly to SDE, we can apply Girsanov's theorem to remove it. To do so, we set for $t\in[0,T]$
\[ q_t(\cdot) := \exp\Big(\int_0^t \frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\,dB^{\mathbb F}_s - \frac 12\int_0^t \Big(\frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\Big)^2 ds\Big). \]
Then Girsanov's theorem implies that if $\frac{\beta(\cdot)}{\alpha(\cdot)}$ satisfies Novikov's condition, i.e.
\[ \hat E\Big[\exp\Big(\frac 12\int_0^T \Big(\frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\Big)^2 ds\Big)\Big] < \infty, \tag{22} \]
then $q_T(\cdot)$ is a likelihood ratio which defines a new probability measure on $(\hat\Omega,\hat{\mathcal F})$ by
\[ \hat Q(A) = \hat E\big(q_T(\cdot)\mathbf 1_A(\cdot)\big),\quad A\in\hat{\mathcal F}, \]
under which $\hat B_t(\cdot) := B^{\mathbb F}_t - \int_0^t \frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\,ds$ is a Brownian motion. We now suppose that (22) is satisfied. Under the probability measure $\hat Q$ on $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F})$ we rewrite (19) to obtain the following RBSDE with standard parameters $(\xi,0,L)$ w.r.t. the Brownian motion $\hat B(\cdot)$:
\[ -d\hat Y_t(\cdot) = d\hat K_t(\cdot) - \hat Z_t(\cdot)\,d\hat B_t(\cdot),\quad 0\le t\le T, \]
\[ \hat Y_T(\cdot) = \xi,\qquad \hat Y_t(\cdot)\ge L_t,\quad 0\le t\le T,\qquad \int_0^T (\hat Y_t(\cdot)-L_t)\,d\hat K_t(\cdot) = 0. \tag{23} \]
Note that $\hat B(G) = B^{\mathbb G}$ by (20). We know from [9] that if

(i*) $\xi\in\hat L^2_{\hat Q}$, and (iii*) $L\in\hat S^2_{\hat Q}$,

then (23) has a unique solution $(\hat Y(\cdot),\hat Z(\cdot),\hat K(\cdot))\in\hat S^2_{\hat Q}\times\hat H^2_{\hat Q}\times\hat I^2_{\hat Q}$. Moreover, since $\alpha(\cdot)$ is strictly positive, we have
\[ d\alpha_t(\cdot) = \beta_t(\cdot)\,dB^{\mathbb F}_t = \frac{\beta_t(\cdot)}{\alpha_t(\cdot)}\,\alpha_t(\cdot)\,dB^{\mathbb F}_t. \]
Thus, Itô's formula gives
\[ \alpha(\cdot) = \exp\Big(\int_0^\cdot \frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\,dB^{\mathbb F}_s - \frac 12\int_0^\cdot \Big(\frac{\beta_s(\cdot)}{\alpha_s(\cdot)}\Big)^2 ds\Big) = q(\cdot), \]
and $\alpha(\cdot)$ acts as a likelihood ratio between $\hat P$ and $\hat Q$.

Remark 4.19.
From the definition of $\hat Q$, it is easily seen that assumptions (i*) and (iii*) are equivalent to (i') and (iii') for $\xi\alpha_T(\cdot)$ and $L\alpha(\cdot)$, provided $\alpha(\cdot)$ is bounded $\hat P$-a.e. Therefore we may state that an initial enlargement of a filtration in optimal stopping problems corresponds to a change of measure in a parametrized RBSDE on the product of the underlying probability space and the state space in which the additional information $G$ takes its values. See [23] for a complete discussion. Novikov's condition is satisfied, for example, if $\frac{\beta_t(\cdot)}{\alpha_t(\cdot)}$ is $\hat P$-a.e. bounded. This condition has been studied in [12]. But it is restrictive, and it will be seen below that it does not hold in simple examples.

Let us finally discuss conditions under which RBSDE (17) has a unique solution. Since we need to refer to these conditions later, we collect them in the following assumption.

Assumption 4.20. (1) $\xi\in L^2$; (2) $L\in S^2$; (3) $\alpha:\hat\Omega\times[0,T]\to\mathbb R_+$ is bounded $\hat P$-a.e.

Theorem 4.21.
Under Assumption 4.20, there exists a unique solution of RBSDE (17). Its first component coincides with the value function of an American contingent claim with payoff $L_t\alpha_t(\cdot)\mathbf 1_{[0,T[}(t) + \xi\alpha_T(\cdot)\mathbf 1_{\{T\}}(t)$, and $\tau^*(\cdot) = \inf\{s\in[t,T] : Y_s(\cdot) = L_s\alpha_s(\cdot)\}\wedge T$ is the optimal stopping time for the buyer. Furthermore, if Novikov's condition (22) is satisfied, then RBSDE (23) has a unique solution.

Proof. Under Assumption 4.20, the integrability conditions (i') and (iii') from Section 4.3 are fulfilled by $\xi(\cdot) = \xi\alpha_T(\cdot)$ and $L(\cdot) = L\alpha(\cdot)$. Thus by Theorem 4.15 and Remark 4.16 there exists a unique solution of RBSDE (17), and it coincides with the value of the corresponding optimal stopping problem on the product space. Existence and uniqueness of the solution of RBSDE (23) follow from Remark 4.19.

The following example illustrates that boundedness of $\frac{\beta_t}{\alpha_t}$ for $t > 0$ may easily fail, even though $\alpha_t$ is bounded.

Example 4.22.
Let $G = B_T + X$, where $B_T$ is the endpoint of a one-dimensional $\mathbb F$-Brownian motion with $B_0 = 0$, and $X$ is a random variable with centered normal distribution with variance $\epsilon > 0$ which is independent of $\mathcal F_T$. In this case the buyer has noisy information about $B_T$. Due to independence, $G$ has a normal law with mean zero and variance $T+\epsilon$. Therefore we have for all $t\in[0,T]$
\[ P(B_T + X\in du\,|\,\mathcal F_t) = P(B_T + X - B_t + B_t\in du\,|\,\mathcal F_t) = P(B_T + X - B_t\in du - y)\big|_{y=B_t} = \frac{1}{\sqrt{2\pi(T-t+\epsilon)}}\exp\Big(-\frac{(u-B_t)^2}{2(T-t+\epsilon)}\Big)\,du = \alpha_t(u)\,P(B_T + X\in du), \]
where
\[ \alpha_t(u) = \sqrt{\frac{T+\epsilon}{T-t+\epsilon}}\,\exp\Big(-\frac{(u-B_t)^2}{2(T-t+\epsilon)} + \frac{u^2}{2(T+\epsilon)}\Big),\quad u\in\mathbb R. \]
So here the conditional law of $G$ given $\mathcal F_t$ is absolutely continuous with respect to the law of $G$ for all $t\in[0,T]$. Note that $\alpha_0(u) = 1$ for all $u\in\mathbb R$, that $\alpha$ is continuous in $(t,u)\in(0,T]\times\mathbb R$, and that by $\epsilon > 0$ we have $\lim_{u\to\pm\infty}\alpha_t(u) = 0$, $P$-a.s. Therefore, for all $t\in[0,T]$, $\alpha_t(\cdot)$ is bounded $\hat P$-a.e. It is known from [22] that $\beta_t(\cdot)$ is the Malliavin trace of $\alpha_t(\cdot)$, so we have $\frac{\beta_t(\cdot)}{\alpha_t(\cdot)} = D_t\ln(\alpha_t(\cdot))$, $t\in[0,T]$. Therefore we obtain
\[ \frac{\beta_t(u)}{\alpha_t(u)} = \frac{u-B_t}{T-t+\epsilon}, \]
which is not bounded.

4.5 American contingent claims with asymmetric information and parametrized RBSDE

In this subsection we will rigorously establish the link between optimal solutions for American contingent claims for which the buyer has privileged information and solutions of RBSDE w.r.t. enlarged filtrations.
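Before doing so, note that the closed form for $\alpha_t(u)$ in Example 4.22 above is easy to sanity-check numerically. The following deterministic sketch (our own illustration, with arbitrarily chosen values of $t$, $B_t$ and $\epsilon$) verifies $\alpha_0\equiv 1$, that $\alpha_t(\cdot)$ integrates to one against the law of $G$, and the unbounded logarithmic derivative $\beta_t/\alpha_t$:

```python
import math

def alpha(t, u, b, T=1.0, eps=0.25):
    """Density of law(G | F_t) w.r.t. law(G), for G = B_T + X with
    X ~ N(0, eps) independent of the Brownian motion, given B_t = b."""
    v = T - t + eps
    return math.sqrt((T + eps) / v) * math.exp(-(u - b) ** 2 / (2 * v)
                                               + u ** 2 / (2 * (T + eps)))

T, eps, t, b = 1.0, 0.25, 0.4, 0.3

# alpha_0(u) = 1 for every u (B_0 = 0)
assert abs(alpha(0.0, 2.0, 0.0) - 1.0) < 1e-12

# alpha_t(.) integrates to 1 against the law of G = N(0, T + eps)
du, total, u = 1e-3, 0.0, -10.0
while u < 10.0:
    phi = math.exp(-u ** 2 / (2 * (T + eps))) / math.sqrt(2 * math.pi * (T + eps))
    total += alpha(t, u, b) * phi * du
    u += du
assert abs(total - 1.0) < 5e-3

# beta_t/alpha_t = d/db ln alpha_t = (u - B_t)/(T - t + eps): linear, unbounded in u
u0, h = 3.0, 1e-6
fd = (math.log(alpha(t, u0, b + h)) - math.log(alpha(t, u0, b - h))) / (2 * h)
assert abs(fd - (u0 - b) / (T - t + eps)) < 1e-6
```

The finite-difference check reproduces the linear growth of $\beta_t/\alpha_t$ in $u$, which is exactly why Novikov's condition fails in this example.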
Lemma 4.23.
Under the assumptions (i') and (iii') from Section 4.3 we have for $t\in[0,T]$
\[ V^G_t = \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal G_t\big] = \frac{Y_t(G)}{\alpha_t(G)} = \hat Y_t(G), \]
where $Y(\cdot)$ is the solution of the RBSDE (17) and $\hat Y(G)$ satisfies RBSDE (21). Furthermore, $\tau^*:\hat\Omega\to\mathbb R_+$ defined by
\[ \tau^*(G) = \inf\{s\in[t,T] : Y_s(G) = L_s\alpha_s(G)\}\wedge T = \inf\{s\in[t,T] : \hat Y_s(G) = L_s\}\wedge T \]
is the optimal stopping time for the buyer after time $t$.

Proof. Theorem 3.11 gives
\[ V^G_t = \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal G_t\big] = \frac{1}{\alpha_t(G)}\Big(\operatorname*{ess\,sup}_{\tau(\cdot)\in T_{t,T}(\hat{\mathbb F})}\hat E\big[R(\cdot,\tau(\cdot))\,|\,\hat{\mathcal F}_t\big]\Big)(G). \]
Remark 4.16 implies
\[ Y_t(\cdot) = \operatorname*{ess\,sup}_{\tau(\cdot)\in T_{t,T}(\hat{\mathbb F})}\hat E\big[R(\cdot,\tau(\cdot))\,|\,\hat{\mathcal F}_t\big], \]
and $\tau^*(\cdot) = \inf\{s\in[t,T] : Y_s(\cdot) = L_s\alpha_s(\cdot)\}\wedge T$ is the optimal stopping time. The proof is completed by recalling $\hat Y(G) = \frac{Y(G)}{\alpha(G)}$ from the definition in the previous section.

Corollary 4.24.
The previous lemma implies in particular that
\[ V^G_0 = \operatorname*{ess\,sup}_{\tau'\in T_{0,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal G_0\big] = Y_0(G) = \hat Y_0(G), \]
since $\alpha_0\equiv 1$. Therefore, the value of the American contingent claim with extra information is given by the initial solution of the parametrized RBSDE (17) evaluated at $G$.

Lemma 4.25.
Under assumptions (i') and (iii') from Section 4.3 we have for $t\in[0,T]$
\[ \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal F_t\big] = \int_{\mathbb R} Y_t(u)\,dP^G(u), \tag{24} \]
where $Y(\cdot)$ is the solution of the RBSDE (17) and $\tau^*(G)$ is the optimal stopping time for the buyer after time $t$.

Proof. The proof follows easily from Theorem 3.13 and Remark 4.16.

The following example exhibits a more explicit description of the value of an American call option with additional information.
Example 4.26.
Consider an American call option with payoff $R(t) = (S_t - K)^+$, where $K$ is the strike price. The stock price process $S$ satisfies for $t\in[0,T]$
\[ dS_t = \mu S_t\,dt + \sigma S_t\,dB_t, \]
where $\mu$ is the drift and $\sigma > 0$ the volatility. Suppose that $G$ is a random variable such that $\alpha$ is bounded $P\otimes P^G$-a.e. From Theorem 3.11, we have for $t\in[0,T]$
\[ V^G_t = \frac{1}{\alpha_t(G)}\Big(\operatorname*{ess\,sup}_{\tau(\cdot)\in T_{t,T}(\hat{\mathbb F})}\hat E\big[(S_{\tau(\cdot)} - K)^+\alpha_{\tau(\cdot)}(\cdot)\,|\,\hat{\mathcal F}_t\big]\Big)(G). \]
We define
\[ V_t(\cdot) := \operatorname*{ess\,sup}_{\tau(\cdot)\in T_{t,T}(\hat{\mathbb F})}\hat E\big[(S_{\tau(\cdot)} - K)^+\alpha_{\tau(\cdot)}(\cdot)\,|\,\hat{\mathcal F}_t\big],\quad t\in[0,T]. \]
From known results about the Snell envelope, $\tau^*(\cdot) = \inf\{s\in[t,T] : V_s(\cdot) = L_s\alpha_s(\cdot)\}\wedge T$ is optimal in the sense that
\[ V^G_t = \frac{1}{\alpha_t(G)}\Big(\hat E\big[(S_{\tau^*(\cdot)} - K)^+\alpha_{\tau^*(\cdot)}(\cdot)\,|\,\hat{\mathcal F}_t\big]\Big)(G). \]
Now from Proposition 3.8,
\[ V^G_t = \frac{1}{\alpha_t(G)}\,E\big[(S_{\tau^*(u)} - K)^+\alpha_{\tau^*(u)}(u)\,|\,\mathcal F_t\big]\big|_{u=G},\quad P\text{-a.s.} \]
The process $S$ is a semimartingale, so Tanaka's formula yields the following decomposition of $V^G$ for $t\in[0,T]$:
\[ \alpha_t(G)V^G_t = (S_0 - K)^+\alpha_t(G) + E\Big[\alpha_{\tau^*(u)}(u)\int_0^{\tau^*(u)}\mathbf 1_{\{S_s > K\}}\,dS_s\,\Big|\,\mathcal F_t\Big]\Big|_{u=G} + \frac 12\,E\big[\alpha_{\tau^*(u)}(u)\,l^K_{\tau^*(u)}(S)\,\big|\,\mathcal F_t\big]\big|_{u=G},\quad P\text{-a.s.}, \]
where $l^K(S)$ is the local time of $S$ at $K$. Since in particular $\alpha_0(G) = 1$ and $\mathcal F_0$ is trivial, we have
\[ V^G_0 = (S_0 - K)^+ + E\Big[\alpha_{\tau^*(u)}(u)\int_0^{\tau^*(u)}\mathbf 1_{\{S_s > K\}}\,dS_s\Big]\Big|_{u=G} + \frac 12\,E\big[\alpha_{\tau^*(u)}(u)\,l^K_{\tau^*(u)}(S)\big]\big|_{u=G},\quad P\text{-a.s.} \]
On the other hand, $S_t = S_0 e^{\sigma B_t + (\mu - \frac{\sigma^2}{2})t}$, $t\in[0,T]$. Therefore $L_t = (S_t - K)^+$, $\xi = (S_T - K)^+$, and Assumption 4.20 is satisfied, since $e^{\sigma B}$ has continuous paths and $E(e^{2\sigma B_t}) = e^{2\sigma^2 t} < \infty$ for each $t\in[0,T]$. Hence Lemma 4.23 provides a representation for the solution of RBSDE (21) with barrier $(S - K)^+$ and terminal value $(S_T - K)^+$, where $S$ is a geometric Brownian motion.

Remark 4.27.
We have $\hat Y(\cdot) = \frac{Y(\cdot)}{\alpha(\cdot)}$, so replacing $Y(\cdot)$ by $\hat Y(\cdot)\alpha(\cdot)$ in (24) gives
\[ \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal F_t\big] = \int_{\mathbb R}\hat Y_t(u)\,\alpha_t(u)\,dP^G(u) = E\big[\hat Y_t(G)\,|\,\mathcal F_t\big], \]
where $\hat Y(G)$ solves the RBSDE (21) in the initially enlarged filtration. The last equality is due to the definition of $\alpha(\cdot)$.

Remark 4.28. Under Assumption 4.20, RBSDE (17) has a unique solution with first component $Y(\cdot)$. Furthermore, RBSDE (21) has a unique solution whose first component coincides with $V^G$. On the other hand, from Lemma 4.23 we have $V^G = \hat Y(G)$, $P$-a.s. Thus $\hat Y(G)$ is the unique solution of RBSDE (21). However, square integrability of the other components of the solution remains open. They are not necessarily unique, being derived from the Doob-Meyer decomposition for continuous supermartingales, as shown in [9].

5 Cost of additional information

For American contingent claims, the buyer has to select a stopping time $\tau\in T_{0,T}$ at which he exercises his option in such a way that the expected payoff $R(\tau)$ is maximized. If he has privileged information, he has access to a larger set of exercise times, leading to a higher expected payoff. The value of the additional information can be interpreted as the price he should pay to obtain it. From a utility indifference point of view, this price should be defined as the difference between the maximal expected payoff the buyer receives with additional information and the maximal expected payoff without it. To investigate this value in our framework, we denote the cost of the extra information by CEI, and define it more formally as follows.

Definition 5.1.
\[ \mathrm{CEI}(t) := \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal G_t\big] - \operatorname*{ess\,sup}_{\tau\in T_{t,T}(\mathbb F)} E\big[R(\tau)\,|\,\mathcal F_t\big],\quad t\in[0,T], \]
and
\[ \mathrm{CEI} := \mathrm{CEI}(0) = \operatorname*{ess\,sup}_{\tau'\in T_{0,T}(\mathbb G)} E\big[R(\tau')\,|\,\sigma(G)\big] - \sup_{\tau\in T_{0,T}(\mathbb F)} E[R(\tau)]. \]
The last equation follows from the triviality of $\mathcal F_0$ and $\mathcal G_0 = \sigma(G)$ (see Remark 2.3). We call $\mathrm{CEI}(\cdot)$ the value function of the additional information. We have for $t\in[0,T]$
\[ \mathrm{CEI}(t) = \Big(\operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal G_t\big] - \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal F_t\big]\Big) + \Big(\operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal F_t\big] - \operatorname*{ess\,sup}_{\tau\in T_{t,T}(\mathbb F)} E\big[R(\tau)\,|\,\mathcal F_t\big]\Big). \]
The second expression is a non-negative random variable. We show that the expectation of the first expression is also non-negative, and thus $E[\mathrm{CEI}(t)]$ is non-negative. By the tower property of conditional expectation we have
\[ \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[R(\tau')\,|\,\mathcal F_t\big] = \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[E[R(\tau')\,|\,\mathcal G_t]\,\big|\,\mathcal F_t\big] \le \operatorname*{ess\,sup}_{\tau'\in T_{t,T}(\mathbb G)} E\big[V^G_t\,|\,\mathcal F_t\big] = E\big[V^G_t\,|\,\mathcal F_t\big],\quad P\text{-a.s.} \]
Therefore we obtain $E[\mathrm{CEI}(t)]\ge 0$ for $t\ge 0$. If we suppose again that $\mathbb F$ is a Brownian filtration, as in Section 4, we are able to link $\mathrm{CEI}(t)$ to RBSDE as follows.

Corollary 5.2.
Under Assumption 4.20, Lemma 4.23 and Remark 4.5 yield the equation
\[ \mathrm{CEI}(t) = \frac{Y_t(G)}{\alpha_t(G)} - Y_t = \hat Y_t(G) - Y_t,\quad t\in[0,T], \tag{25} \]
where $Y(\cdot)$ is the solution of (17), $\hat Y(G)$ the solution of (21), and $Y$ is the solution of the RBSDE
\[ -dY_t = dK_t - Z_t\,dB^{\mathbb F}_t,\quad 0\le t\le T, \]
\[ Y_T = \xi,\qquad Y_t\ge L_t,\quad 0\le t\le T,\qquad \int_0^T (Y_t - L_t)\,dK_t = 0. \tag{26} \]
Since in particular $\alpha_0(G) = 1$, we can express $\mathrm{CEI}$ as the difference of the initial values of the solutions of two RBSDE, namely
\[ \mathrm{CEI} = Y_0(G) - Y_0 = \hat Y_0(G) - Y_0. \tag{27} \]

Remark 5.3.
From the remarks preceding the above corollary, we conclude that E [ b Y t ( G )] ≥ E [ Y t ] for t ≥ . Inother words, the average of the solution of the initially enlarged RBSDE is bigger than the average of the solutionof the initial RBSDE. Let us briefly comment on
CEI ( T ) , the value of extra information at exercise time T from the perspective ofthe RBSDE. By definition we have CEI ( T ) := ess sup τ ′ ∈T T,T ( G ) E h R ( τ ′ ) |G T i − ess sup τ ∈T T,T ( F ) E [ R ( τ ) |F T ] = E [ R ( T ) |G T ] − E [ R ( T ) |F T ] = ξ − ξ = 0 . Looking at this value with the underlying RBSDE, we get (see 25)
$$
C^{EI}(T) = \frac{Y_T(G)}{\alpha_T(G)} - Y_T = \widehat{Y}_T(G) - Y_T.
$$
But $\frac{Y_T(G)}{\alpha_T(G)} = \frac{\xi \alpha_T(G)}{\alpha_T(G)} = \xi$, and $Y_T = \widehat{Y}_T(G) = \xi$, which confirms $C^{EI}(T) = 0$. This is what we expect, since additional information at the exercise time does not allow the buyer to do better by choosing a better exercise strategy. It would be interesting to find a more precise description of the price of the additional information. As it stands, it is given by the difference of the first components $Y$ of two solution processes of RBSDE with identical terminal conditions, drivers, and obstacles, but on two spaces of different complexity. We conjecture that $Y$ is an increasing function of the complexity of the spaces, but at the moment we cannot substantiate this claim.

5.2 A special case

We briefly discuss a simple case for which
$C^{EI}$ can be explicitly calculated. Assume that $\mathbb{F} = (\mathcal{F}_t)_{t \in [0,T]}$ is a standard Brownian filtration and that $G$ is independent of $\mathcal{F}_t$ for all $t \in [0,T]$. In this case we have, for $t \in [0,T]$ and $u \in \mathbb{R}$,
$$
\alpha_t(u) = \frac{dP^G_t(u,\cdot)}{dP^G(u)} = 1, \qquad P\text{-a.s.},
$$
so formula (27) yields $C^{EI} = 0$. This is because we face the RBSDE
$$
\begin{cases}
-dY_t(\cdot) = dK_t(\cdot) - Z_t(\cdot)\, dB^{\mathbb{F}}_t, & 0 \le t \le T,\\
Y_T(\cdot) = \xi \alpha_T(\cdot) = \xi, &\\
Y_t(\cdot) \ge L_t \alpha_t(\cdot) = L_t, & 0 \le t \le T,\\
\int_0^T \big(Y_t(\cdot) - L_t \alpha_t(\cdot)\big)\, dK_t(\cdot) = \int_0^T \big(Y_t(\cdot) - L_t\big)\, dK_t(\cdot) = 0.
\end{cases} \tag{28}
$$
By uniqueness of the solution of the RBSDE, $Y(\cdot) \equiv Y$. In addition, $V^{\mathbb{G}}$, the value of the American contingent claim with additional information, coincides with the value of the same American contingent claim without this information. This follows from Remark 4.24, which states $V^{\mathbb{G}} = Y(G)$, where $Y(\cdot)$ is the solution of (28), and from uniqueness of its solution, giving $Y(G) = Y$.
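To make the sign of the information value concrete, here is a toy discrete-time illustration (our own sketch, not part of the paper's Brownian framework: the two-step symmetric random walk, the put payoff $L_t = (1-X_t)^+$, and all function names are assumptions of this example). The buyer's extra information is $G = X_2$, revealed at time 0; the informed value is obtained by conditioning on $G$ and solving the optimal stopping problem on each bridge, the discrete analogue of working under the initially enlarged filtration.

```python
from itertools import product
from fractions import Fraction

# Payoff when exercising at level x: an American put with strike 1, L_t = (1 - X_t)^+.
def payoff(x):
    return Fraction(max(1 - x, 0))

# The four paths of a two-step symmetric random walk started at 0, each with probability 1/4.
paths = [(0, a, a + b) for a, b in product((1, -1), repeat=2)]

# Uninformed buyer: Snell envelope over the walk's own filtration,
# computed by backward induction on the recombining tree.
def uninformed_value(t=0, x=0):
    if t == 2:
        return payoff(x)
    cont = (uninformed_value(t + 1, x + 1) + uninformed_value(t + 1, x - 1)) / 2
    return max(payoff(x), cont)

# Informed buyer: knows G = X_2 already at time 0 (initial enlargement).
# Condition on G, solve the optimal stopping problem on each bridge, average over G.
def informed_value():
    total = Fraction(0)
    for g in {p[2] for p in paths}:
        bridge = [p for p in paths if p[2] == g]
        # At time 1 the pair (X_1, G) pins down the whole path.
        def v1(path):
            return max(payoff(path[1]), payoff(path[2]))
        cont0 = sum(v1(p) for p in bridge) / len(bridge)
        total += Fraction(len(bridge), 4) * max(payoff(0), cont0)
    return total

V_F = uninformed_value()      # value without the extra information: 5/4
V_G = informed_value()        # value with knowledge of X_2: 7/4
print(V_F, V_G, V_G - V_F)    # C^EI = V_G - V_F = 1/2 >= 0
```

In this example the extra information raises the buyer's value from $5/4$ to $7/4$, so $C^{EI} = 1/2 \ge 0$; if instead $G$ were independent of the walk, conditioning on it would not change the tree, and the two values would coincide, in line with $C^{EI} = 0$ in the special case above.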
Acknowledgements

We would like to thank the referees for their careful reading and helpful comments. This work is a part of the first author's PhD thesis at Sharif University of Technology under the supervision of Professor Bijan Z. Zangeneh. She wishes to thank her supervisor for his support, encouragement and guidance. Furthermore, she wants to thank Viktor Feunou, PhD candidate at Humboldt-Universität zu Berlin, for helpful discussions. The financial support from Humboldt-Universität zu Berlin is gratefully acknowledged. In particular, she wants to thank Professor Peter Imkeller for making this possible.
References

[1] Amendinger, J. (1999). Initial Enlargement of Filtrations and Additional Information in Financial Markets, PhD thesis, Technische Universität Berlin.
[2] Amendinger, J., Imkeller, P. and Schweizer, M. (1998). Additional logarithmic utility of an insider, Stochastic Process. Appl., 263–268.
[3] Ankirchner, S., Dereich, S. and Imkeller, P. (2006). The Shannon information of filtrations and the additional logarithmic utility of insiders, Ann. Probab., 743–778.
[4] Bismut, J.M. (1973). Conjugate convex functions in optimal stochastic control, J. Math. Anal. Appl., 384–404.
[5] Black, F., Scholes, M. (1973). The pricing of options and corporate liabilities, J. Polit. Econ., 637–654.
[6] Bouchaud, J.-P., Sornette, D. (1994). The Black–Scholes option pricing problem in mathematical finance: generalization and extensions for a large class of stochastic processes, J. Phys. I (France), 863–881.
[7] Callegaro, G., Jeanblanc, M. and Zargari, B. (2013). Carthaginian enlargement of filtrations, ESAIM Probab. Stat., 550–566.
[8] Duffie, D. (1988). Security Markets: Stochastic Models, Academic Press: Boston.
[9] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M.C. (1997). Reflected solutions of backward SDE's, and related obstacle problems for PDE's, Ann. Probab., 702–737.
[10] El Karoui, N., Quenez, M.C. (1995). Dynamic programming and pricing of contingent claims in an incomplete market, SIAM J. Control Optim., 29–66.
[11] Elliott, R.J., Geman, H. and Korkie, B.M. (1997). Portfolio optimization and contingent claim pricing with differential information, Stochastics Stochastics Rep., 185–203.
[12] Eyraud-Loisel, A. (2005). Backward stochastic differential equations with enlarged filtration: Option hedging of an insider trader in a financial market with jumps, Stochastic Process. Appl., 1745–1763.
[13] Föllmer, H., Schweizer, M. (1990). Hedging of contingent claims under incomplete information, in Applied Stochastic Analysis (eds. M.H.A. Davis and R.J. Elliott), Gordon and Breach: London.
[14] Föllmer, H., Sondermann, D. (1986). Hedging of non-redundant contingent claims, in W. Hildenbrand and A. Mas-Colell (eds.), Contributions to Mathematical Economics, 205–223.
[15] Grigorova, M., Imkeller, P., Offen, E. and Ouknine, Y. (2016). Reflected BSDEs when the obstacle is not right-continuous and optimal stopping, arXiv:1504.06094.
[16] Grorud, A., Pontier, M. (1998). Asymmetric information and incomplete market, Int. J. Theor. Appl. Finance, 285–302.
[17] Grorud, A., Pontier, M. (1998). Insider trading in a continuous time market model, Int. J. Theor. Appl. Finance, 331–347.
[18] Hamadène, S. (2002). Reflected BSDE with discontinuous barriers, Stochastics Stochastics Rep., 571–596.
[19] Hamadène, S., Lepeltier, J.P. (2000). Reflected BSDE's and mixed game problem, Stochastic Processes Appl., 177–188.
[20] Harrison, M., Kreps, D. (1979). Martingales and arbitrage in multiperiod securities markets, Int. J. Econ. Theory, 381–408.
[21] Harrison, M., Pliska, S.R. (1981). Martingales and stochastic integrals in the theory of continuous trading, Stochastic Processes Appl., 215–260.
[22] Imkeller, P. (2003). Malliavin's calculus in insider models: additional utility and free lunches, Math. Finance, 153–169.
[23] Imkeller, P., Perkowski, N. (2015). The existence of dominating local martingale measures, Finance Stoch., 685–717.
[24] Jacod, J. (1979). Calcul stochastique et problèmes de martingales, Lecture Notes in Mathematics 714, Springer-Verlag: Berlin.
[25] Jacod, J. (1985). Grossissement initial, hypothèse (H') et théorème de Girsanov, Lecture Notes in Mathematics 1118, Springer-Verlag: Berlin.
[26] Karatzas, I. (1989). Optimization problems in the theory of continuous trading, SIAM J. Control Optim., 1221–1259.
[27] Kharroubi, I., Lim, T. (2014). Progressive enlargement of filtrations and backward stochastic differential equations with jumps, J. Theoret. Probab., 683–724.
[28] Kobylanski, M., Quenez, M.C. (2012). Optimal stopping time problem in a general framework, Electron. J. Probab., 1–28.
[29] Lepeltier, J.P., Xu, M. (2005). Penalization method for reflected backward stochastic differential equations with one r.c.l.l. barrier, Statist. Probab. Lett., 58–66.
[30] Merton, R. (1973). Theory of rational option pricing, Bell J. Econ. Manage. Sci., 141–183.
[31] Merton, R. (1991). Continuous Time Finance, Basil Blackwell: Oxford.
[32] Müller, S. (1985). Arbitrage Pricing of Contingent Claims, Lecture Notes in Economics and Mathematical Systems 254, Springer-Verlag: Berlin.
[33] Neveu, J. (1975). Discrete-Parameter Martingales, North-Holland: Amsterdam.
[34] Øksendal, B. (2003). Stochastic Differential Equations: An Introduction with Applications, Springer-Verlag: Berlin.
[35] Pardoux, E., Peng, S. (1990). Adapted solution of a backward stochastic differential equation, Systems Control Lett., 55–61.
[36] Pikovsky, I., Karatzas, I. (1996). Anticipative portfolio optimization, Adv. in Appl. Probab., 1095–1122.
[37] Schäl, M. (1994). On quadratic cost criteria for option hedging, Math. Oper. Res., 121–131.
[38] Schweizer, M. (1995). Variance-optimal hedging in discrete time, Math. Oper. Res., 1–32.
[39] Stricker, C., Yor, M. (1978). Calcul stochastique dépendant d'un paramètre, Z. Wahrsch. Verw. Gebiete, 45.