Asymptotic Static Hedge via Symmetrization
Jirô Akahori∗ (Dept. of Mathematical Sciences, Ritsumeikan University, Japan; [email protected])
Flavia Barsotti† (Risk Methodologies, Group Financial Risks, UniCredit S.p.A., Italy; [email protected])
Yuri Imamura‡ (Dept. of Business Economics, Tokyo University of Science, Japan; [email protected])
November 15, 2018
Abstract
This paper is a continuation of [1], where the authors (i) showed that a payment at a random time, which we call timing risk, is decomposed into an integral of static positions of knock-in type barrier options, (ii) proposed an iteration of the static hedge of a timing risk, regarding the hedging error of a Bowie-Carr [5] type static hedge of a barrier option as itself a timing risk, and (iii) showed that the error converges to zero after infinitely many iterations under a condition on the integrability of a relevant function. Even though many diffusion models, including generic 1-dimensional ones, satisfy the required condition, a construction of the iterated static hedge applicable to any uniformly elliptic diffusion was postponed to the present paper because of its mathematical difficulty. We solve the problem here by relying on symmetrization, a technique first introduced in [11] and generalized in [2], and on the parametrix, a classical technique from perturbation theory for constructing a fundamental solution of a partial differential equation. Due to a lack of continuity in the diffusion coefficient, however, a careful study of the integrability of the relevant functions is required. The long line of proofs could itself be a contribution to parametrix analysis.

Keywords: static hedge, barrier option, parametrix, symmetrization
MSC 2010: primary 91G20; secondary 91G80
1 Introduction

The present paper is a continuation of our previous paper [1], and focuses on solving a mathematical difficulty by exploring various mathematical techniques.

Let us first recall the content of [1]. As the title says, that paper focuses on how timing risk, a payment at a random time (a stopping time, to be mathematically precise), is evaluated. It answers the question by showing how it can be statically hedged. The first contribution of [1] was to decompose the timing risk into an integral of static positions of knock-in type barrier options.

∗ The first author was supported by JSPS KAKENHI Grant Numbers 23330109, 24340022, 23654056 and 25285102.
† The views presented in this paper are solely those of the author and do not necessarily represent those of UniCredit S.p.A.
‡ The third author was supported by JSPS KAKENHI Grant Number 24840042.

As a consequence, the timing risk is hedged without error if an integral (an infinitesimal amount for each maturity) of Bowie-Carr type strategies is allowed. The integral of static positions is referred to as a Carr-Picron type hedging strategy in [1]. In a general case, the Bowie-Carr strategy, and therefore the Carr-Picron one, brings about a hedging error. Since the error is again a timing risk, it is decomposed into an integral of static positions of knock-in options, to which a Carr-Picron type strategy can again be applied. The second contribution of [1] is the claim that the error is dramatically reduced by repeating this procedure, and finally converges to zero.

The mathematics behind the above-mentioned results of [1] is the parametrix. The parametrix method is a classical way to construct a fundamental solution to a partial differential equation as a convergent series, called a heat kernel expansion (see e.g. [10]). Recently, the method has been successfully applied in finance and related fields (e.g. [9], [3], among others). The parametrix, unlike the Watanabe expansion in Malliavin calculus, does not require smoothness but only ellipticity of the diffusion coefficients.
It heavily depends on the integrability of the second-order derivatives of the approximating kernel, and this is obtained from the ellipticity and (Hölder) continuity of the coefficients. The conditions for the parametrix to work were postulated as assumptions in [1]. Even though many diffusion models, including generic 1-dimensional ones, satisfy the required condition, a construction of the iterated static hedge that is applicable to any uniformly elliptic diffusion was postponed to the present paper.

The contribution of the present paper is two-fold. Firstly, we propose a systematic way of constructing an exact static hedging strategy for (single) barrier options (instead of general timing risks, to avoid detailed economic discussions) under a general multi-dimensional diffusion setting, in contrast with the existing results based on price expansions such as [13] or [15]. Secondly, we give an example with discontinuous diffusion coefficients where the parametrix method can still give a heat kernel expansion, which is convergent if the discontinuity is "controllable" (see Theorem 3.17).

The present paper describes a methodological proposal, stating existence and convergence of asymptotic static hedging errors by leveraging both parametrix techniques and kernel symmetrization, a technique first introduced in [11] and generalized in [2]. This is done for a fairly large class of multi-dimensional stochastic asset dynamics, under the uniform ellipticity condition. First, second and higher order hedging errors are derived and their integral representations are reported. Existence and asymptotic convergence are then proved.

The static hedge considered here is often called a semi-static hedge. A semi-static hedge replicates knock-out/knock-in options by simply holding positions in plain vanilla options; this topic has been widely discussed and extensively studied since the paper [5]. After this seminal contribution, the related financial literature has developed in different directions of research.
One stream of studies has focused on extending the reflection principle (the key tool in the Black-Scholes setting) to a "weaker" symmetry property (see, for example, [6]) or to a more general setting (e.g. [12]). To give a concrete example, as an extreme case, the paper [7] obtained an exact semi-static hedging formula in a general one-dimensional diffusion environment by constructing an operator which maps the pay-off function (of the option to be hedged) to a function that admits an exact semi-static hedging formula. The approach has then been extended in [4] as a "weak reflection principle", which may also work for jump processes.
2 The framework of [1]

The aim of this Section is to recall the framework of asymptotic hedging error identification and expansion and the main theoretical results achieved in [1].

We first recall the strategy of semi-static hedging of barrier options. Let X be a diffusion process and let τ be the first exit time of X out of a domain D ⊂ R^d. We want to hedge the knock-out option by holding two plain options. Suppose that its pay-off is given by f(X_T) 1{τ>T}, where f is, for the moment, a bounded measurable function on R^d. The hedging strategy we will be working with is the following: a long position in the option whose pay-off is f(X_T) 1{X_T ∈ D}, and a short position in the one with pay-off f̂(X_T), where f̂ is a measurable function on R^d such that f̂ = 0 on D. Then:

• If X never exits D, the hedge obviously works.
• On the event {τ < T}, the hedger liquidates the portfolio at time τ. The cost is

e^{−r(T−τ)} E[ f(X_T) 1{X_T ∈ D} − f̂(X_T) | F_τ ].

If the latter were zero, we could say that the static hedge works perfectly; otherwise, it is understood as the error of the static hedge. The static hedge of the knock-in pay-off f(X_T) 1{τ≤T} can be treated in the same way. In [1], the pair consisting of the operator π, with π(f) = f·1_D − f̂, and a kernel p_t(x,y) is required to satisfy ∫_{R^d} π(f)(y) p_t(x,y) dy = 0 for every x ∈ ∂D (condition (5) of [1]), and an operator S_t is built from the associated error kernel as in (4) of [1].

Theorem 2.1 ([1]) Suppose that

∫₀^T ∫_{R^d} q_s(x,z) |(S_{T−s}f)(z)| dz ds < ∞ (6)

for each x ∈ R^d. Then the hedging error is decomposed into an integral of knock-in options:

E[πf(X_T) | F_{t∧τ}] − E[f(X_T) 1{τ>T} | F_{t∧τ}] = ∫₀^T E[ E[1{τ≤s} (S_{T−s}f)(X_s) | F_τ] | F_{t∧τ}] ds, (7)

with the knock-in counterpart given in (8).

Theorem 2.2 ([1]) Under the iterated integrability condition (9), for every n ∈ N the error of the n-times iterated static hedge of E[f(X_T)1{τ≥T} | F_{t∧τ}] (resp. E[f(X_T)1{τ<T} | F_{t∧τ}]) is again represented as an iterated integral of knock-in options, and it converges to zero as n → ∞.

3 Asymptotic static hedge via symmetrization

This Section deals with the static hedge problem by showing how to build asymptotics of the static hedge error by resorting to parametrix techniques and kernel symmetrization. The main theoretical results are presented under a more general setting than the one considered in [1].
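Before turning to the general setting, the Brownian special case of the strategy above can be illustrated numerically. At the exit time τ the process sits at the barrier, and the reflection principle makes the liquidation cost vanish. The following is our own Monte Carlo sketch (not code from [1]), with a hypothetical bounded call-spread pay-off f:

```python
import numpy as np

# Our illustration: in the driftless Brownian case the static hedge has zero
# liquidation cost at the barrier.  Given B_tau = K, the reflected variable
# 2K - B_T has the same law as B_T, so
#   E[f(B_T)1{B_T>K} - f(2K - B_T)1{B_T<=K} | B_tau = K] = 0.
rng = np.random.default_rng(0)
K, remaining_T, n = 1.0, 0.5, 400_000

def f(x):                               # hypothetical bounded pay-off (call spread)
    return np.clip(x - 0.8, 0.0, 1.0)

B_T = K + np.sqrt(remaining_T) * rng.standard_normal(n)   # B_T given B_tau = K
liquidation = (np.where(B_T > K, f(B_T), 0.0)
               - np.where(B_T <= K, f(2 * K - B_T), 0.0))
print(liquidation.mean())               # ~ 0 up to Monte Carlo error
```

Here f̂(x) = f(2K − x)1{x ≤ K} is the reflected short leg; the cancellation is exact only because the Brownian law is symmetric about the barrier, which is precisely the property that the symmetrization of Subsection 3.1 restores in the general case.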
The intermediate steps underlying the analysis are presented and discussed in separate Subsections. The introduction of semi-static hedges based on symmetrization under a fairly general class of multi-dimensional models is contained in Subsection 3.1. The assumptions on the underlying asset price process and their implications are treated in Subsection 3.2. The integral decomposition of the hedging error and the derivation of the first order hedging error (Theorem 3.9) are contained in Subsection 3.3. The result for the second order hedging error is given in Theorem 3.13, Subsection 3.4. Finally, Subsection 3.5 shows how to extend the basic ideas of the preceding Subsections to the identification of higher order hedging errors.

3.1 Semi-static hedges based on symmetrization

This Subsection introduces semi-static hedges based on symmetrization under a fairly general class of multi-dimensional models. A key element is the existence of a proper pair of the map f ↦ f̂ and the density p in (5). Let us start from an example. In [1], two specific cases are presented: the one-dimensional case, and the multi-dimensional case based on the put-call symmetry introduced in [2]. The one-dimensional case relies on the reflection principle of 1-dimensional Brownian motion to pick p and π, which are, respectively, the heat kernel of standard Brownian motion and the reflection with respect to the boundary K:

π(f)(x) = f(x) 1{x > K} − f(2K − x) 1{x ≤ K}. (10)

In the one-dimensional case, almost all diffusions can be smoothly transformed into Brownian motion with drift; since the knock-out region can always be characterized as an interval, the transformation just shifts it to a different interval.

In the multi-dimensional setting, the same does not apply. Indeed, it is not always true that a generic diffusion process can be smoothly transformed into a Brownian motion with drift; this holds only in some special cases.
Moreover, the knock-out/in region D need not always have the same shape, i.e. it cannot always be characterized as an interval, so we cannot leverage homeomorphic properties.

In this paper, we consider a multi-dimensional setting by focusing on a specific class of knock-out/in regions, namely those which are diffeomorphic to a hyper-halfspace. Let us introduce the following notation and setting. Define the region D as

D := { x ∈ R^d : ⟨x, γ⟩ > k },

for some γ ∈ R^d with |γ| = 1 and k ∈ R, and let θ be the reflection with respect to ∂D, defined as

θ(x) = x − 2⟨γ, x⟩γ + 2kγ = (I − 2γ⊗γ)x + 2kγ.

In [1], the cases where the region is a half line are studied; a similar approach can be extended to cover the cases with double boundaries. Observe also that, by the diffeomorphism, the diffusion matrix can take any form, so we do not assume any specific form of the diffusion/drift coefficients except for uniform ellipticity.

We define π with the same approach as in Equation (10), by considering its multi-dimensional version:

π(f)(x) = f(x) 1{x ∈ D} − f(θ(x)) 1{x ∉ D}. (11)

For the delta-approximating kernel, we rely on the symmetrization introduced in [1]. We suppose that the infinitesimal generator of X (the already transformed one) is given by

(1/2) A(x)·∇⊗² + b(x)·∇ ≡ (1/2) Σ_{i,j} a_{i,j}(x) ∂²/(∂x_i ∂x_j) + Σ_i b_i(x) ∂/∂x_i, (12)

where A and b are functions on R^d, d×d positive definite matrix valued and R^d valued, respectively. Let

p_t(x,y) := (2πt)^{−d/2} { det Ã(y) }^{−1/2} exp( − ⟨Ã(y)^{−1}(x−y), x−y⟩ / (2t) ), (13)

where

Ã(x) = A(x) for x ∈ D, Ã(x) = Ψ A(θ(x)) Ψ for x ∉ D, (14)

and Ψ = I − 2γ⊗γ. Observe that this is the symmetrization of A with respect to the reflection θ introduced in [2]. We can now state the following result linking the function p_t(x,y) and π(·) given, respectively, in (13) and (11).

Proposition 3.1 The function p_t(x,y) defined in (13) satisfies (5) with respect to the function π(·) defined in (11).
Proof: Since Ψ² = I and x = θ(x) for x ∈ ∂D, we have, for x ∈ ∂D,

p_t(x, θ(y)) = (2πt)^{−d/2} { det Ã(θ(y)) }^{−1/2} exp( − ⟨Ã(θ(y))^{−1}(x−θ(y)), x−θ(y)⟩ / (2t) )
= (2πt)^{−d/2} { det ΨÃ(y)Ψ }^{−1/2} exp( − ⟨Ã(y)^{−1}Ψ(x−θ(y)), Ψ(x−θ(y))⟩ / (2t) )
= (2πt)^{−d/2} { det Ã(y) }^{−1/2} exp( − ⟨Ã(y)^{−1}(x−y), x−y⟩ / (2t) ) = p_t(x,y),

where we used Ã(θ(y)) = ΨÃ(y)Ψ and Ψ(x − θ(y)) = θ(x) − y = x − y. Therefore,

∫_{R^d} π(f)(y) p_t(x,y) dy = ∫_{R^d} f(y) 1{y ∈ D} ( p_t(x,y) − p_t(x,θ(y)) ) dy = 0 (15)

for any bounded measurable f and x ∈ ∂D. □

Thus, π of (11) and p of (13) can be chosen as a specific example within the framework of [1]; it turns out, however, that the integrability conditions (6) or (9) may fail.

The formula (15) has the following economic meaning. The kernel p is a kind of fictitious transition probability of the underlying process. If it were the real one, the price at τ of the option with pay-off π(f) would be zero, and therefore the static hedge by the option with pay-off f(θ(x)) would work without error.

3.2 Underlying asset price dynamics

This Subsection describes the mathematical setting characterizing the assumptions on the underlying asset price dynamics. Specific assumptions on both parameters A and b are provided and discussed.

Assumption 3.2 There exist positive constants m and M such that

m|y|² ≤ ⟨A(x)y, y⟩ ≤ M|y|² for all x, y ∈ R^d, (16)

and the a_{ij} and b_j have derivatives of all orders, all of them bounded.

Notice that A and b are Lipschitz continuous under Assumption 3.2. In particular, setting

a_∞ := √d Σ_{i,j} max_k sup_{x∈R^d} |∂_k a_{i,j}(x)|,

we have

‖A(x) − A(y)‖ ≤ a_∞ |x − y|, (17)

where ‖M‖ ≡ (Tr M M*)^{1/2} for a matrix M. Moreover, Assumption 3.2 implies what follows (see e.g.
[10, Theorem 1.11, Theorem 1.15]) on the transition density of X. Under Assumption 3.2, the transition density q_t(x,y) associated with X,

q_t(x,y) = P(X_t ∈ dy | X_0 = x)/dy,

exists, is twice continuously differentiable in (x,y) and continuously differentiable in t. Moreover, there exist constants C_q > 0 and M₁ > M such that the transition density satisfies the following inequalities:

q_t(x,y) ≤ C_q t^{−d/2} exp( −|x−y|²/(M₁ t) ), (18)

|∇q_t(x,y)| ≤ C_q t^{−(d+1)/2} exp( −|x−y|²/(M₁ t) ), (19)

and

∂_s q_s(x,y) = (L_x q_s)(x,y) = (L*_y q_s)(x,y), (20)

where L_x is the infinitesimal generator of X (see (12)) acting on the variable x, and L*_y is the adjoint of L, acting on the variable y. The adjoint L*_y can be written in the following form:

L*_y = (1/2) ∇⊗²_y · A(y) − ∇_y · b(y)
≡ (1/2) Σ_{i,j} a_{i,j}(y) ∂²/(∂y_i ∂y_j) + Σ_i ( Σ_j ∂a_{ij}/∂y_j (y) − b_i(y) ) ∂/∂y_i + ( (1/2) Σ_{i,j} ∂²a_{ij}/(∂y_i ∂y_j) (y) − Σ_i ∂b_i/∂y_i (y) ).

Notice that we have

∫_{R^d} (L*_y q_s)(x,y) g(y) dy = ∫_{R^d} q_s(x,y) L_y g(y) dy (21)

for any test function g ∈ C₀^∞(R^d) (see e.g. [10]).

Let us consider the operator L^y_z defined as

L^y_z = (1/2) Ã(y) · ∇⊗²_z,

acting on the variable z. For (x,y) ∈ R^d × R^d, we then have

∂_t p_t(x,y) = (L^y_x p_t)(x,y). (22)

3.3 Integral decomposition of the hedging error

We shall establish the error formula corresponding to Theorem 2.1. Due to the lack of continuity of Ã, this requires extra effort. Recall that

h(t,x,y) = (L_x − ∂_t) p_t(x,y) = (L_x − L^y_x) p_t(x,y).

Lemma 3.3 For y ∈ R^d,

q_t(x,y) − p_t(x,y) = ∫₀^t ds ∫_{R^d} dz q_s(x,z) h(t−s, z, y). (23)

The equation (23) is the key identity of the parametrix theory (see e.g. [3]).
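As an aside, the symmetry p_t(x, θ(y)) = p_t(x, y) for x ∈ ∂D behind Proposition 3.1 can be checked numerically for the kernel (13) with the symmetrized matrix (14). The following sketch is ours, in d = 2, with a hypothetical smooth, uniformly elliptic coefficient A:

```python
import numpy as np

# Numerical check (ours) of Proposition 3.1's symmetry: with the kernel (13)
# built from the symmetrized matrix (14), p_t(x, theta(y)) = p_t(x, y)
# whenever x lies on the boundary of D.
d, k = 2, 0.3
gamma = np.array([1.0, 0.0])                     # D = {x : <x, gamma> > k}
Psi = np.eye(d) - 2.0 * np.outer(gamma, gamma)   # reflection matrix, Psi^2 = I
theta = lambda x: Psi @ x + 2.0 * k * gamma      # reflection across dD

def A(x):                                        # hypothetical elliptic coefficient
    return np.array([[1.0 + 0.2 * np.sin(x[1]), 0.1],
                     [0.1, 1.5 + 0.2 * np.cos(x[0])]])

def A_tilde(x):                                  # symmetrization (14)
    return A(x) if x @ gamma > k else Psi @ A(theta(x)) @ Psi

def p(t, x, y):                                  # frozen-coefficient Gaussian (13)
    Ay = A_tilde(y)
    z = x - y
    quad = z @ np.linalg.solve(Ay, z)
    return (2 * np.pi * t) ** (-d / 2) * np.linalg.det(Ay) ** -0.5 * np.exp(-quad / (2 * t))

x_bdry = np.array([k, 0.7])                      # a point on the boundary
y_in = np.array([1.1, -0.4])                     # a point inside D
print(p(0.25, x_bdry, y_in), p(0.25, x_bdry, theta(y_in)))   # the two agree
```

The agreement is exact (not merely approximate): Ψ(x − θ(y)) = x − y on ∂D, and the quadratic form and determinant are invariant under conjugation by the orthogonal matrix Ψ, which is the whole content of the proof above.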
To give a proof of Lemma 3.3 is somewhat difficult, since we have, explicitly,

h(t,z,y) = (1/2){A(z) − Ã(y)}·∇⊗² p_t(z,y) + b(z)·∇p_t(z,y)
= (1/2){A(z) − Ã(y)}·( t^{−2} {Ã(y)^{−1}(z−y)} ⊗ {Ã(y)^{−1}(z−y)} − t^{−1} Ã(y)^{−1} ) p_t(z,y) − t^{−1} b(z)·Ã(y)^{−1}(z−y) p_t(z,y)
= (2t²)^{−1} ⟨{A(z) − Ã(y)} Ã(y)^{−1}(z−y), Ã(y)^{−1}(z−y)⟩ p_t(z,y) − (2t)^{−1} Ã(y)^{−1} · ( {A(z) − Ã(y)} + 2 b(z) ⊗ (z−y) ) p_t(z,y). (24)

We recall that the integrability in (t,z) ∈ [0,T] × R^d of the terms coming from the second order derivative is normally retrieved from the continuity of A in the classical parametrix theory (see e.g. [10, Chapter 1, Section 4]). Here it becomes a delicate problem, since the symmetrized diffusion matrix Ã in most cases fails to be continuous at ∂D. To overcome this difficulty, we introduce a parameter that can be made as small as necessary. Set

δ := 2 sup_{x∈∂D} ‖[A(x), γ⊗γ]‖, (25)

where [·,·] denotes the commutator. The constant δ controls the discontinuity in the following sense.

Lemma 3.4 For x ∈ D and y ∈ D^c,

‖A(x) − Ã(y)‖ ≤ a_∞ |x−y| + δ. (26)

A proof of Lemma 3.4 is given in Appendix A.1. Thus, if δ = 0, we have the Lipschitz continuity of Ã and therefore the integrability of h. In that case we can establish the convergent expansion by standard theory (see [10], [3], and [1]). Without the continuity, the standard approach does not work. However, we have the following estimate, which is critical to obtain the result contained in Theorem 3.6.
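The parameter δ of (25) is computable in concrete models. Below is a small sketch of ours, estimating δ on a grid of boundary points for a hypothetical coefficient A, and showing that the commutator (and hence δ) vanishes for a diagonal A with γ = e₁, in which case Ã is continuous across ∂D:

```python
import numpy as np

# Sketch (ours) of the discontinuity parameter of (25):
#   delta = 2 sup_{x in dD} || [A(x), gamma (x) gamma] ||  (Frobenius norm).
k = 0.0
gamma = np.array([1.0, 0.0])
P = np.outer(gamma, gamma)              # the rank-one projection gamma (x) gamma

def A(x):                               # hypothetical coefficient with off-diagonal term
    c = 0.3 * np.tanh(x[1])
    return np.array([[1.0, c], [c, 2.0]])

def comm_norm(x):                       # || A(x) P - P A(x) ||_F
    C = A(x) @ P - P @ A(x)
    return np.linalg.norm(C)

# boundary points are x = (k, s); estimate the sup on a grid of s
delta = 2.0 * max(comm_norm(np.array([k, s])) for s in np.linspace(-5.0, 5.0, 201))
A_diag = np.diag([1.0, 2.0])
delta_diag = 2.0 * np.linalg.norm(A_diag @ P - P @ A_diag)
print(delta, delta_diag)                # delta ~ 0.85, delta_diag = 0.0
```

For this A one can verify by hand that [A(x), γ⊗γ] has Frobenius norm √2 |a₁₂(x)|, so δ is driven entirely by the off-diagonal entry on the boundary, consistently with the role of δ as a measure of how far A is from commuting with the reflection.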
Lemma 3.5 For x, y ∈ R^d,

|h(t,x,y)| ≤ C₁ t^{−1/2} p^M_t(x,y) + ( δ 1{x ∈ D} + 2√d M 1{x ∈ D^c} ) C₂ t^{−1} p^M_t(x,y) 1{y ∉ D}, (27)

where

C₁ := 2^{d/2} m^{−1−d/2} M^{(d+1)/2} ( 4 m^{−1} M K_{3/2} a_∞ + √d K_{1/2} a_∞ + 2√d K_{1/2} b_∞ ),
C₂ := 2^{d/2} m^{−1−d/2} M^{d/2} ( 2 m^{−1} M K₁ + 2^{−1}√d ), (28)

with δ and a_∞ as defined in (25) and (17), respectively,

b_∞ := max_{1≤i≤d} ‖b_i‖_∞, K_β := sup_{x≥0} x^β e^{−x} < ∞, (29)

and

p^M_t(x,y) := (4πMt)^{−d/2} e^{−|x−y|²/(4Mt)},

M being the same constant as the one appearing in (16) of Assumption 3.2.

Proof: See Appendix A.2.

Let

h⁰(t,x,y) := h(t,x,y) − h(t,x,θ(y)).

The following estimates are also essential in obtaining our economic results, so we state them separately as a Theorem.

Theorem 3.6 Under Assumption 3.2, we have the following inequalities.

(i) There exists a constant C such that, for t ∈ (0,T] and x ∈ R^d,

∫_D |h⁰(t,x,y)| dy ≤ ∫_{R^d} |h(t,x,y)| dy ≤ C ( t^{−1/2} + t^{−1} ( e^{−(k−⟨γ,x⟩)²/(4Mt)} 1{x ∈ D} + 1{x ∉ D} ) ).

(ii) For any M₂ > M₁ there exists a constant C, depending on T and M₂, such that, for s, t ∈ (0,T] and (x,y,z,w) ∈ R^d × R^d × R^d × R^d,

q_s(x,y) |h⁰(t,z,w)| ≤ C s^{−d/2} t^{−1} exp( −|x−y|²/(M₂ s) ) exp( −|z−w|²/(M₂ t) ).

In particular, these quantities are integrable in (z,y) ∈ R^d × R^d.

(iii) Further, there exists a constant C depending on T such that

| ∫_{R^d} q_s(x,z) h(t,z,y) dz | ≤ C s^{−1/2} t^{−1/2} (s+t)^{−d/2} exp( −|x−y|²/(M₂(t+s)) )

for any y ∈ R^d. In particular,

∫_{R^d} q_s(x,z) h(t−s,z,y) dz, and hence ∫_{R^d} q_s(x,z) h⁰(t−s,z,y) dz,

are integrable in (s,y) ∈ [0,t] × R^d for any t ∈ (0,T].

Proof: See Appendix A.3.

The point here is that the singularity t^{−1} in the estimate (i) is handled by integration by parts in (iii), using the integrability of (ii) and the Gaussian estimates (18) and (19) of q and ∇q.
Remark 3.7 We note that we do not have the integrability of (6) here, so we cannot apply Theorem 2.1.

The first assertion of Theorem 3.6 ensures that we can define an operator S_t on L^∞(D), for each t > 0, by

S_t f(x) = ∫_D h⁰(t,x,y) f(y) dy,

just as in (4).

Corollary 3.8 For each t > 0, S_t is an operator from L^∞(D) into L^∞(R^d).

Proof: It follows directly from (i) of Theorem 3.6. □

By leveraging Lemma 3.3, which is derived mathematically from Theorem 3.6, we can now state the hedging error formula (integral decomposition) under the proposed multi-dimensional setting, corresponding to the one provided in Theorem 2.1 of [1], as follows.

Theorem 3.9 Suppose that f is bounded. Under Assumption 3.2, the formulas (7) and (8) hold with the operator S of [1] replaced by the operator S defined above.

Proof: See Appendix A.5.

3.4 Second order semi-static hedges

As we have seen in the previous Subsection, the hedging error is represented by the integral with respect to s of knock-in options with pay-off S_{T−s}f(X_s). For each of them we construct the static hedge by π^⊥ S_{T−s}f(X_s) with infinitesimal amount ds. To be more precise, for the knock-in option with pay-off S_{T−s}f for each s, we adopt the Bowie-Carr type strategy using the option with pay-off π^⊥ S_{T−s}f; we construct a portfolio composed of options with pay-off

π^⊥ S_{T−s}f(X_s) = { S_{T−s}f(X_s) + S_{T−s}f(θ(X_s)) } 1{X_s ∉ D},

at the volume "e^{−r(T−s)} ds" for each s. Note that π^⊥ S_{T−s}f(X_s) may not be integrable in (s,ω) ∈ [0,T] × Ω, although it is in L¹(P) for each s since S_{T−s}f is bounded. Once it is conditioned, however, we retrieve the integrability.

Lemma 3.10 The random variable E[π^⊥ S_{T−s}f(X_s) | F_τ] is jointly integrable in (s,ω) ∈ [0,T] × Ω.

Proof: See Appendix A.6.

Let us consider the value of the "portfolio". Until the knock-in time τ, all the options whose maturity is before τ are cleared with pay-off zero.
At the knock-in time, the hedger sells all the options at the price

E[π^⊥ S_{T−s}f(X_s) | F_τ].

Thus, the value at time t of the strategy should be defined as

Π^{2,s}_t := e^{−r(T−t)} E[ E[π^⊥ S_{T−s}f(X_s) | F_τ] | F_{t∧τ}],

which, on {t < τ}, is equal to e^{−r(T−t)} E[π^⊥ S_{T−s}f(X_s) | F_t]. Since it is integrable in s ∈ [0,T], the total value at time t of the portfolios is given by

∫₀^T Π^{2,s}_t ds = e^{−r(T−t)} ∫₀^T E[π^⊥ S_{T−s}f(X_s) | F_{t∧τ}] ds. (31)

Remark 3.11 Lemma 3.10 ensures the change of the order of the integrals, giving another expression of the totality of the portfolio:

∫₀^T Π^{2,s}_t ds = e^{−r(T−t)} E[ ∫₀^T E[π^⊥ S_{T−s}f(X_s) | F_τ] ds | F_{t∧τ}].

In particular, discounted (multiplied by e^{−rt}), it is a martingale. This means that the portfolio is arbitrage-free; or, should we say, it is still within the classical arbitrage theory.

As we discussed in Section 2 around (2) and (3), the hedging error of the strategy holding π^⊥(·) for a knock-in option coincides with the one of the π(·) strategy for the corresponding knock-out option. So the error, evaluated at t for each maturity s, is given, in infinitesimal form, by Err^{2,s}_t ds, the error of the π(·) strategy for the knock-out option with pay-off S_{T−s}f and maturity s.

Lemma 3.12 Err^{2,s}_t is integrable in s ∈ [0,T], and

∫₀^T Err^{2,s}_t ds = e^{−r(T−t)} ∫₀^T ∫_u^T E[1{τ≤u} S_{s−u} S_{T−s} f(X_u) | F_{t∧τ}] ds du.
Proof: See Appendix A.7.

Combining (31) and Lemma 3.12, we have the following.

Theorem 3.13 For each t > 0, it holds that

e^{−r(T−t)} ( −E[f(X_T)1{τ>T} | F_{t∧τ}] + E[πf(X_T) | F_{t∧τ}] − ∫₀^T ds E[π^⊥ S_{T−s}f(X_s) | F_{t∧τ}] )
= e^{−r(T−t)} ∫₀^T ∫_u^T E[1{τ≤u} S_{s−u} S_{T−s} f(X_u) | F_{t∧τ}] ds du
( = e^{−r(T−t)} E[ ∫₀^T ∫_u^T E[1{τ≤u} S_{s−u} S_{T−s} f(X_u) | F_τ] ds du | F_{t∧τ}] ). (33)

In (33), the left-hand side is the value of the knock-out option in short position together with the static hedging positions of the first and second order. So the formula claims that the hedging error evaluated at time t equals the price of the doubly integrated knock-in options.

Proof: This has already been proved above. □

Remark 3.14 Notice that the proposed framework is weaker than the one studied in [1]; here we identify the second order hedge via two parameters, while in [1] we have a one-parameter family of hedges. The reason why we express it by a double integral is that we are missing the integrability needed to change the order of integration. The double integrability comes from (iii) of Theorem 3.6 with the aid of integration by parts.

3.5 Higher order semi-static hedges

This Subsection is devoted to the asymptotics of semi-static hedges of order higher than two. Let us consider for a moment the third order as an example. Equation (33) suggests that the third order semi-static hedge can be written in terms of the options with pay-off

π^⊥ S_{s−u} S_{T−s} f(X_u),

maturing at u ∈ (0,T], parameterized by s ∈ (u,T], with infinitesimal amount e^{−r(T−u)} ds du. Once the integrability of

E[π^⊥ S_{s−u} S_{T−s} f(X_u) | F_τ]

in (u,s) is established, we can say that the value of the hedging portfolio is given by

e^{−r(T−t)} ∫₀^T ∫_u^T E[π^⊥ S_{s−u} S_{T−s} f(X_u) | F_{t∧τ}] ds du,

which is equivalent to

e^{−r(T−t)} E[ ∫₀^T ∫_u^T E[π^⊥ S_{s−u} S_{T−s} f(X_u) | F_τ] ds du | F_{t∧τ}].
For each (u,s), the error Err^{u,s}_t should be defined as

Err^{u,s}_t := e^{−r(T−t)} E[ E[π S_{s−u} S_{T−s} f(X_s) | F_{τ∧s}] | F_t ].

Notice that, by showing the integrability of Err^{u,s}_t in (u,s), and by following Lemma 3.12, we can write

∫₀^T ∫₀^s Err^{u,s}_t du ds = e^{−r(T−t)} ∫₀^T ∫_u^T ∫_s^T E[1{τ≤u} S_{s−u} S_{v−s} S_{T−v} f(X_u) | F_{t∧τ}] dv ds du.

Based on the above observation, we can thus construct the n-th order static hedge and the corresponding error, aggregating the 1,...,n-th hedges, for any n ≥ 3. The following Theorem extends the results stated in Theorem 3.6 and plays a key role in the determination of higher order hedges.

Theorem 3.15 The following hold:

(i) For n ≥ 2, y_{n+1} ∈ R^d and 0 = u₀ < u₁ < ··· < u_n ≤ T,

∫_{D^n} Π_{i=1}^n |h⁰(u_i − u_{i−1}, y_{i+1}, y_i)| dy₁ ··· dy_n
≤ ( C₁ + 1{y_{n+1} ∈ D^c} 2√d M C₂ (u_n − u_{n−1})^{−1/2} + 1{y_{n+1} ∈ D} δ C₂ (u_n − u_{n−1})^{−1/2} e^{−(⟨y_{n+1},γ⟩−k)²/(4M(u_n − u_{n−1}))} )
× Π_{i=1}^n (u_i − u_{i−1})^{−1/2} Σ_{I ⊂ {1,···,n−1}} (δC₂)^{|I|} C₁^{|I^c|} Π_{j∈I} (u_{j+1} − u_j)^{−1/2},

where C₁ and C₂ are the constants given in (28), and M is the constant in (16).

(ii) For y_{n+1} ∈ D, u_n ∈ (0,T) and f ∈ L^∞(D), the corresponding iterated time integrals against f converge absolutely.
Proof: See Appendix A.8.

By Theorem 3.15 we can define operators S*ⁿ_{u_n}, for u_n ∈ [0,T] and n ≥ 2, on L^∞(D), iteratively by

S*¹_t := S_t, S*ⁿ_t f(x) := ∫₀^t ( S_{t−v} S*^{(n−1)}_v f )(x) dv, n ≥ 2.
Theorem 3.17 Under Assumption 3.2, we have, for n ≥ 2:

(i) the options for the n-th hedge, E[π^⊥ S_{s−u} S*^{(n−1)}_{T−s} f(X_u) | F_τ], are integrable in (s,u,ω) ∈ {(s,u) : 0 ≤ u ≤ s ≤ T} × Ω;

(ii) the corresponding error is E[1{τ≤u} S_{s−u} S*ⁿ_{T−s} f(X_u) | F_{t∧τ}], which is also integrable in (s,u,ω) ∈ {(s,u) : 0 ≤ u ≤ s ≤ T} × Ω;

(iii) as a consequence, we have, for each t,

e^{−r(T−t)} ( −E[f(X_T)1{τ>T} | F_{t∧τ}] + E[πf(X_T) | F_{t∧τ}] − ∫₀^T E[π^⊥ S_{T−s}f(X_s) | F_{t∧τ}] ds − Σ_{h=2}^n ∫₀^T ∫_u^T E[π^⊥ S_{s−u} S*^{(h−1)}_{T−s} f(X_u) | F_{t∧τ}] ds du )
= e^{−r(T−t)} ∫₀^T ∫_u^T E[1{τ≤u} S_{s−u} S*ⁿ_{T−s} f(X_u) | F_{t∧τ}] ds du; (35)

(iv) if δ is sufficiently small, the right-hand side of (35) converges to zero, uniformly in t, almost surely as n → ∞;

(v) if δ is sufficiently small, the series

Σ_{h=2}^n ∫₀^T ∫_u^T E[π^⊥ S_{s−u} S*^{(h−1)}_{T−s} f(X_u) | F_{t∧τ}] ds du

is absolutely convergent, uniformly in t, almost surely as n → ∞, and

E[f(X_T)1{τ>T} | F_{t∧τ}] = E[πf(X_T) | F_{t∧τ}] − ∫₀^T E[π^⊥ S_{T−s}f(X_s) | F_{t∧τ}] ds − ∫₀^T ∫_u^T Σ_{h=2}^∞ E[π^⊥ S_{s−u} S*^{(h−1)}_{T−s} f(X_u) | F_{t∧τ}] ds du.

Proof: See Appendix A.12.

Roughly speaking, the results (i)-(iii) are obtained by repeating the procedure used for Theorem 3.13. To get the convergence results (iv) and (v), extra effort is needed. The right-hand side of (35) basically gives the error estimate as a multiple integral, as in the Taylor expansion case. If, say, the integrand were bounded, the term would be dominated by (C)ⁿ/n! for some constant C, which would ensure the convergence; in our case, however, the delicate integrability issues appearing throughout this paper prevent such a simple estimate. Instead we work with a more precise estimate, with a reduction to a determinantal equation in Lemma A.1 and the hypergeometric estimate in Lemma A.4, in place of the standard exponential-type estimate.
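The factorial mechanism invoked above can be made concrete: the n-th order term integrates over the ordered times 0 ≤ u₁ ≤ ··· ≤ u_n ≤ T, a simplex of volume Tⁿ/n!, which is the factor a bounded integrand would pick up. A quick Monte Carlo sketch of ours of this volume:

```python
import math
import numpy as np

# Sketch (ours) of the factorial decay behind Taylor-like convergence:
# the simplex {0 <= u_1 <= ... <= u_n <= T} has volume T^n / n!, so a
# bounded integrand would contribute at most (C*T)^n / n! at order n.
rng = np.random.default_rng(1)
T, n, trials = 2.0, 5, 200_000

u = rng.uniform(0.0, T, size=(trials, n))
frac_ordered = np.mean(np.all(np.diff(u, axis=1) >= 0.0, axis=1))  # ~ 1/n!
mc_volume = frac_ordered * T ** n
print(mc_volume, T ** n / math.factorial(n))   # both close to 32/120
```

The point of (iv) and (v) above is precisely that the unbounded (singular) integrand spoils this naive Cⁿ/n! bound, which is why the determinantal and hypergeometric estimates of the Appendix are needed instead.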
In addition, a careful treatment of Gaussian type estimates is required to obtain (v).

4 Concluding remarks

In the context of static hedging, the present paper introduces a methodology to obtain asymptotic static hedge results for a fairly large class of multi-dimensional underlying asset dynamics. From a financial point of view, we consider the problem of an investor who wants to hedge a portfolio of barrier options. The present paper extends the existing literature on static hedging by discussing the existence of the asymptotic static hedging error and its convergence. Starting from the main results stated in [1], the paper extends the asymptotic static hedge error construction to a more general mathematical setting. Both parametrix techniques and kernel symmetrization are used to build, in a systematic way, exact static hedging strategies for barrier options.

References

[1] Akahori, J., Barsotti, F. and Imamura, Y. (2017) "The Value of Timing Risk", working paper. arXiv:1701.05695 [q-fin.PR]
[2] Akahori, J. and Imamura, Y. (2014) "On a symmetrization of diffusion processes", Quantitative Finance.
[3] Bally, V. and Kohatsu-Higa, A. (2015) "A probabilistic interpretation of the parametrix method", Ann. Appl. Probab., 25(6): 3095-3138.
[4] Bayraktar, E. and Nadtochiy, S. (2015) "Weak Reflection Principle for Lévy processes", Ann. Appl. Probab., 25(6): 3251-3294.
[5] Bowie, J. and Carr, P. (1994) "Static Simplicity", Risk, 7(8): 44-50.
[6] Carr, P. and Lee, R. (2009) "Put-Call Symmetry: Extensions and Applications", Mathematical Finance, 19(4): 523-560.
[7] Carr, P. and Nadtochiy, S. (2011) "Static Hedging under Time-Homogeneous Diffusions", SIAM Journal on Financial Mathematics, 2(1): 794-838.
[8] Carr, P. and Picron, J. (1999) "Static Hedging of Timing Risk", Journal of Derivatives.
[9] Corielli, F., Foschi, P. and Pascucci, A. (2010) "Parametrix approximation of diffusion transition densities", SIAM J. Financial Math., 1: 833-867.
[10] Friedman, A. (1964) Partial Differential Equations of Parabolic Type, Prentice-Hall; reprinted by Dover, 2008.
[11] Imamura, Y., Ishigaki, Y. and Okumura, T. (2014) "A numerical scheme based on semi-static hedging strategy", Monte Carlo Methods and Applications,
20(4): 223-235.
[12] Imamura, Y. and Takagi, K. (2013) "Semi-Static Hedging Based on a Generalized Reflection Principle on a Multi Dimensional Brownian Motion", Asia-Pacific Financial Markets.
[13] Kato, T., Takahashi, A. and Yamada, T. (2014) "A Semigroup Expansion for Pricing Barrier Options", International Journal of Stochastic Analysis, Volume 2014, Article ID 268086.
[14] Nalholm, M. and Poulsen, R. (2006) "Static Hedging of Barrier Options under General Asset Dynamics: Unification and Application", Journal of Derivatives, 13(4): 46-60.
[15] Shiraya, K., Takahashi, A. and Yamada, T. (2012) "Pricing Discrete Barrier Options under Stochastic Volatility", Asia-Pacific Financial Markets, 19(3): 205-232.

A Appendix

A.1 Proof of Lemma 3.4

Let us introduce x_D, defined as

x_D := ( (k − ⟨y,γ⟩) x + (⟨x,γ⟩ − k) y ) / ⟨x−y, γ⟩,

representing the intersection of the hyperplane ∂D with the straight line from x to y. Notice that k − ⟨y,γ⟩ ≥ 0 and ⟨x,γ⟩ − k > 0 for x ∈ D and y ∈ D^c. As a consequence, we can write

A(x) − Ã(y) = A(x) − Ψ A(θ(y)) Ψ
= A(x) − A(x_D) + A(x_D) − Ψ A(θ(x_D)) Ψ + Ψ A(θ(x_D)) Ψ − Ψ A(θ(y)) Ψ,

so that, using θ(x_D) = x_D,

‖A(x) − Ã(y)‖ ≤ ‖A(x) − A(x_D)‖ + ‖A(x_D) − Ψ A(x_D) Ψ‖ + ‖Ψ ( A(θ(x_D)) − A(θ(y)) ) Ψ‖.

Since Ψ = I − 2γ⊗γ is orthogonal with Ψ² = I, we have

‖A(x_D) − Ψ A(x_D) Ψ‖ = ‖Ψ ( Ψ A(x_D) − A(x_D) Ψ )‖ = 2 ‖[A(x_D), γ⊗γ]‖ ≤ δ

and

‖Ψ ( A(θ(x_D)) − A(θ(y)) ) Ψ‖ = ‖A(θ(x_D)) − A(θ(y))‖.

Moreover, since θ is an isometry,

‖A(x) − A(x_D)‖ + ‖A(θ(x_D)) − A(θ(y))‖ ≤ a_∞ ( |x − x_D| + |x_D − y| )
= a_∞ ( | (⟨x,γ⟩ − k)/⟨x−y,γ⟩ | |x−y| + | (k − ⟨y,γ⟩)/⟨x−y,γ⟩ | |x−y| ) = a_∞ |x−y|.

Thus, the result stated in inequality (26) follows. □

A.2 Proof of Lemma 3.5

Before entering the proof, we list below direct consequences of the inequalities (16) of Assumption 3.2. Write the eigenvalues of A(y) as λ₁(y), ···, λ_d(y).
Then m ≤ λᵢ(y) ≤ M for every i and y ∈ R^d, and therefore

m√d ≤ ‖A(y)‖ = ( Σᵢ λᵢ(y)² )^{1/2} ≤ M√d, m^d ≤ det A(y) = Πᵢ λᵢ(y) ≤ M^d.

Moreover, since the eigenvalues of A(y)^{−1} are λ₁(y)^{−1}, ···, λ_d(y)^{−1}, we have, for x ∈ R^d,

M^{−1}|x|² ≤ ⟨A(y)^{−1}x, x⟩ ≤ m^{−1}|x|², M^{−1}√d ≤ ‖A(y)^{−1}‖ ≤ m^{−1}√d, (36)

and

|A(y)^{−1}x| ≤ m^{−1}|x|. (37)

Since Ψ in (14) is an orthogonal matrix, the inequalities in (16), and hence the ones above, are valid for Ã as well.

Now, looking at the equation (24), we see that |h(t,x,y)| ≤ I₁ + I₂ + I₃, where

I₁ := (2t²)^{−1} ‖A(x) − Ã(y)‖ |Ã(y)^{−1}(x−y)|² p_t(x,y),
I₂ := (2t)^{−1} ‖A(x) − Ã(y)‖ ‖Ã(y)^{−1}‖ p_t(x,y),
I₃ := t^{−1} |b(x)·Ã(y)^{−1}(x−y)| p_t(x,y).

By the inequalities listed above, we have

p_t(x,y) = (2πt)^{−d/2} { det Ã(y) }^{−1/2} e^{−⟨Ã(y)^{−1}(x−y), x−y⟩/(2t)} ≤ (2πt)^{−d/2} m^{−d/2} e^{−|x−y|²/(2Mt)} = 2^{d/2} m^{−d/2} M^{d/2} p^M_t(x,y) e^{−|x−y|²/(4Mt)}, (38)

and

|b(x)·Ã(y)^{−1}(x−y)| ≤ |b(x)| |Ã(y)^{−1}(x−y)| ≤ √d b_∞ m^{−1} |x−y|. (39)

By (38) and (39) we obtain, for any y ∈ R^d,

I₃ ≤ 2^{d/2} √d b_∞ m^{−1−d/2} M^{d/2} t^{−1} |x−y| p^M_t(x,y) e^{−|x−y|²/(4Mt)}
≤ 2^{d/2+1} √d b_∞ m^{−1−d/2} M^{(d+1)/2} K_{1/2} t^{−1/2} p^M_t(x,y) =: C′₃ t^{−1/2} p^M_t(x,y), (40)

where we used t^{−1} |x−y| e^{−|x−y|²/(4Mt)} ≤ 2 M^{1/2} t^{−1/2} ( |x−y|²/(4Mt) )^{1/2} e^{−|x−y|²/(4Mt)} ≤ 2 M^{1/2} K_{1/2} t^{−1/2}.

To estimate I₁ and I₂, we first consider the case y ∈ D. Since Ã(y) = A(y) in that case, we can use (17), and by (37) and (38) we obtain

I₁ ≤ 2^{d/2−1} a_∞ m^{−2−d/2} M^{d/2} t^{−2} |x−y|³ p^M_t(x,y) e^{−|x−y|²/(4Mt)}
≤ 2^{d/2+2} a_∞ m^{−2−d/2} M^{(d+3)/2} K_{3/2} t^{−1/2} p^M_t(x,y) =: C′₁ t^{−1/2} p^M_t(x,y).
(41)

Similarly, with (36) in addition,
$$I_2 \le \frac{a_\infty\sqrt d}2\,m^{-1}\; t^{-1}|x-y|\; 2^{\frac d2}m^{-\frac d2}M^{\frac d2}\,p^{4M}_t(x,y)\,e^{-\frac{|x-y|^2}{4Mt}} \le 2^{\frac d2}a_\infty\sqrt d\,m^{-1-\frac d2}M^{\frac{d+1}2}K_1\; t^{-\frac12}\,p^{4M}_t(x,y) =: C_2'\,t^{-\frac12}\,p^{4M}_t(x,y). \tag{42}$$
Thus we have obtained that, for $y\in D$,
$$|h(t,x,y)| \le (C_1'+C_2'+C_3')\,t^{-\frac12}\,p^{4M}_t(x,y) =: C_1\,t^{-\frac12}\,p^{4M}_t(x,y). \tag{43}$$
Next, we consider the case where $y\in D^c$. As has been remarked already, $\tilde A$ is not continuous in general. We first consider the case $x\in D^c$, where we can only use, instead of (17),
$$\|A(x)-\tilde A(y)\| \le \|A(x)\| + \|\tilde A(y)\| \le 2M\sqrt d. \tag{44}$$
We need to modify the estimates of $I_1$ and $I_2$. By (44) instead of (17), but still with (36) and (38), we have
$$I_1 \le M\sqrt d\,m^{-2}\; t^{-2}|x-y|^2\; 2^{\frac d2}m^{-\frac d2}M^{\frac d2}\,p^{4M}_t(x,y)\,e^{-\frac{|x-y|^2}{4Mt}} \le 2^{\frac d2+2}\sqrt d\,m^{-2-\frac d2}M^{\frac d2+2}K_2\; t^{-1}\,p^{4M}_t(x,y) =: \bar C_1\,t^{-1}\,p^{4M}_t(x,y), \tag{45}$$
where $K_2 := \sup_{u\ge0}u^2e^{-u^2}$, and, with (36),
$$I_2 \le M d\,m^{-1}\; t^{-1}\; 2^{\frac d2}m^{-\frac d2}M^{\frac d2}\,p^{4M}_t(x,y)\,e^{-\frac{|x-y|^2}{4Mt}} \le 2^{\frac d2}d\,m^{-1-\frac d2}M^{\frac d2+1}\; t^{-1}\,p^{4M}_t(x,y) =: \bar C_2\,t^{-1}\,p^{4M}_t(x,y). \tag{46}$$
Combining (45) and (46) with (40), we obtain that, for $x,y\in D^c$,
$$|h(t,x,y)| \le C_3'\,t^{-\frac12}\,p^{4M}_t(x,y) + (\bar C_1+\bar C_2)\,t^{-1}\,p^{4M}_t(x,y) \le C_1\,t^{-\frac12}\,p^{4M}_t(x,y) + C_2\,t^{-1}\,p^{4M}_t(x,y), \tag{47}$$
where $C_2 := \bar C_1+\bar C_2$. Finally, we consider the case $x\in D$ and $y\in D^c$. We can then rely on (26), which combines the Lipschitz bound with the commutator term bounded by $\delta$; combining (41) and (45), we obtain
$$I_1 \le C_1'\,t^{-\frac12}\,p^{4M}_t(x,y) + \frac{\delta}{2M\sqrt d}\,\bar C_1\,t^{-1}\,p^{4M}_t(x,y),$$
and, by (42) and (46),
$$I_2 \le C_2'\,t^{-\frac12}\,p^{4M}_t(x,y) + \frac{\delta}{2M\sqrt d}\,\bar C_2\,t^{-1}\,p^{4M}_t(x,y).$$
Since we still have (40), we obtain
$$|h(t,x,y)| \le (C_1'+C_2'+C_3')\,t^{-\frac12}\,p^{4M}_t(x,y) + \frac{\delta}{2M\sqrt d}\,(\bar C_1+\bar C_2)\,t^{-1}\,p^{4M}_t(x,y) = C_1\,t^{-\frac12}\,p^{4M}_t(x,y) + \delta C_3\,t^{-1}\,p^{4M}_t(x,y), \tag{48}$$
where $C_3 := C_2/(2M\sqrt d)$. By putting (43), (48) and (47) together we have (27).
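The spectral inequalities (36) and (37) invoked repeatedly above are easy to check numerically. The following sketch is illustrative only (the dimension, the bounds $m$, $M$, the matrix and the vector are randomly generated and are not part of the proof): it builds a symmetric matrix with spectrum in $[m,M]$ and verifies both the quadratic-form bounds and the bound $|A^{-1}x|\le m^{-1}|x|$.

```python
import numpy as np

# Check of (36)-(37): if a symmetric A has eigenvalues in [m, M], then
#   M^{-1}|x|^2 <= x . A^{-1}x <= m^{-1}|x|^2   and   |A^{-1}x| <= m^{-1}|x|.
rng = np.random.default_rng(0)
d, m, M = 5, 0.5, 2.0

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
lam = rng.uniform(m, M, size=d)                    # admissible spectrum
A = Q @ np.diag(lam) @ Q.T                         # symmetric, eigenvalues in [m, M]

x = rng.standard_normal(d)
Ainv_x = np.linalg.solve(A, x)
quad = x @ Ainv_x                                  # the quadratic form x . A^{-1}x

ok_lower = quad >= x @ x / M - 1e-12
ok_upper = quad <= x @ x / m + 1e-12
ok_norm = np.linalg.norm(Ainv_x) <= np.linalg.norm(x) / m + 1e-12
```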
A.3 Proof of Theorem 3.6

A.3.1 Proof of (i) of Theorem 3.6

We first note that
$$\int_D \big( |h(t,x,y)| + |h(t,x,\theta(y))| \big)\,dy = \int_D |h(t,x,y)|\,dy + \int_{D^c} |h(t,x,y)|\,dy.$$
With $C := \max(C_1,\,C_2,\,\delta C_3)$, we then have, by (27),
$$\int_D \big( |h(t,x,y)| + |h(t,x,\theta(y))| \big)\,dy \le C\,t^{-\frac12}\int_{\mathbf R^d} p^{4M}_t(x,y)\,dy + C\,t^{-1}\int_{D^c} p^{4M}_t(x,y)\,dy = C\,t^{-\frac12} + C\,t^{-1}\int_{-\infty}^k \frac{1}{\sqrt{4\pi Mt}}\,e^{-\frac{(\langle x,\gamma\rangle-z)^2}{4Mt}}\,dz.$$
In the case $x\in D$, since $\langle x,\gamma\rangle-k>0$ and $k-z\ge0$, we have
$$(\langle x,\gamma\rangle-z)^2 = (\langle x,\gamma\rangle-k)^2 + (k-z)^2 + 2(\langle x,\gamma\rangle-k)(k-z) \ge (\langle x,\gamma\rangle-k)^2 + (k-z)^2.$$
Therefore,
$$\int_{-\infty}^k \frac{e^{-\frac{(\langle x,\gamma\rangle-z)^2}{4Mt}}}{\sqrt{4\pi Mt}}\,dz \le \mathbf 1_{\{x\in D\}}\,e^{-\frac{(\langle x,\gamma\rangle-k)^2}{4Mt}}\int_{-\infty}^k \frac{e^{-\frac{(k-z)^2}{4Mt}}}{\sqrt{4\pi Mt}}\,dz + \mathbf 1_{\{x\in D^c\}}\int_{-\infty}^\infty \frac{e^{-\frac{(\langle x,\gamma\rangle-z)^2}{4Mt}}}{\sqrt{4\pi Mt}}\,dz = \frac12\,\mathbf 1_{\{x\in D\}}\,e^{-\frac{(\langle x,\gamma\rangle-k)^2}{4Mt}} + \mathbf 1_{\{x\in D^c\}}.$$
This completes the proof.

A.3.2 Proof of (ii) of Theorem 3.6

It is a direct consequence of (18) and Lemma 3.5.

A.3.3 Proof of (iii) of Theorem 3.6

Let us recall that, for $y\in\mathbf R^d$,
$$\int_{\mathbf R^d} q_s(x,z)\,h(t-s,z,y)\,dz = \int_{\mathbf R^d} q_s(x,z)\,(L_z-L^y_z)\,p_{t-s}(z,y)\,dz = \int_{\mathbf R^d} q_s(x,z)\left( \frac12\{A(z)-\tilde A(y)\}\cdot\nabla^{\otimes2}_z p_{t-s}(z,y) + b(z)\cdot\nabla_z p_{t-s}(z,y) \right)dz.$$
Integrating by parts,
$$\begin{aligned} \int_{\mathbf R^d} q_s(x,z)\,\frac12\{A(z)-\tilde A(y)\}\cdot\nabla^{\otimes2}_z p_{t-s}(z,y)\,dz &= \int_{\mathbf R^d}\sum_{i,j=1}^d q_s(x,z)\,\frac12\{a_{i,j}(z)-\tilde a_{i,j}(y)\}\,\partial_{z_j}\partial_{z_i} p_{t-s}(z,y)\,dz \\ &= -\frac12\int_{\mathbf R^d}\sum_{i,j=1}^d \partial_{z_j}\big( q_s(x,z)\{a_{i,j}(z)-\tilde a_{i,j}(y)\} \big)\,\partial_{z_i} p_{t-s}(z,y)\,dz \\ &= -\frac12\int_{\mathbf R^d}\big( \{A(z)-\tilde A(y)\}\nabla_z q_s(x,z) + q_s(x,z)\,{}^t\nabla_z A(z) \big)\cdot\nabla_z p_{t-s}(z,y)\,dz. \end{aligned}$$
Therefore we obtain that
$$\begin{aligned} \left| \int_{\mathbf R^d} q_s(x,z)\,h(t-s,z,y)\,dz \right| &= \left| \int_{\mathbf R^d}\Big( -\frac12\{A(z)-\tilde A(y)\}\nabla_z q_s(x,z) - \frac12\,q_s(x,z)\,{}^t\nabla_z A(z) + b(z)\,q_s(x,z) \Big)\cdot\nabla_z p_{t-s}(z,y)\,dz \right| \\ &\le \int_{\mathbf R^d}\Big( \big|\{A(z)-\tilde A(y)\}\nabla_z q_s(x,z)\big| + \frac12\,\big|{}^t\nabla_z A(z)\big|\,q_s(x,z) + |b(z)|\,q_s(x,z) \Big)\,(t-s)^{-1}\,\big|\tilde A(y)^{-1}(z-y)\big|\,p_{t-s}(z,y)\,dz, \end{aligned}$$
since $\nabla_z p_{t-s}(z,y) = -(t-s)^{-1}\tilde A(y)^{-1}(z-y)\,p_{t-s}(z,y)$. By (16),
$$p_{t-s}(z,y) \le (2\pi)^{-\frac d2}m^{-\frac d2}(t-s)^{-\frac d2}\,e^{-\frac{|z-y|^2}{2M(t-s)}},$$
and since the constant $M_1$ in (18) and (19) is greater than $M$, we have
$$p_{t-s}(z,y) \le 2^{\frac d2}m^{-\frac d2}M_1^{\frac d2}\,p^{4M_1}_{t-s}(z,y)\,e^{-\frac{|z-y|^2}{4M_1(t-s)}}.$$
Therefore,
$$(t-s)^{-1}\big|\tilde A(y)^{-1}(z-y)\big|\,p_{t-s}(z,y) \le (t-s)^{-1}m^{-1}|z-y|\,p_{t-s}(z,y) \le 2^{\frac d2+1}m^{-1-\frac d2}M_1^{\frac{d+1}2}K_1\,(t-s)^{-\frac12}\,p^{4M_1}_{t-s}(z,y),$$
where we have again used $\frac{|z-y|}{\sqrt{4M_1(t-s)}}\,e^{-\frac{|z-y|^2}{4M_1(t-s)}} \le K_1$. On the other hand, since $\|A(z)-\tilde A(y)\| \le \|A(z)\|+\|\tilde A(y)\| \le 2M\sqrt d$, the inequality (19) implies that
$$\big|\{A(z)-\tilde A(y)\}\nabla_z q_s(x,z)\big| \le 2M\sqrt d\,|\nabla_z q_s(x,z)| \le 2M\sqrt d\,C_q\,s^{-\frac{d+1}2}\,e^{-\frac{|x-z|^2}{4M_1 s}},$$
and
$$\frac12\,\big|{}^t\nabla_z A(z)\big|\,q_s(x,z) \le \frac12\,q_s(x,z)\Big( \sum_j\Big| \sum_i \partial_{z_i}a_{i,j}(z) \Big|^2 \Big)^{\frac12} \le \frac{d^{\frac32}}2\max_{1\le i,j\le d}\|\partial_i a_{i,j}\|_\infty\; C_q\,s^{-\frac d2}\,e^{-\frac{|x-z|^2}{4M_1 s}} =: \tilde C_q\,s^{-\frac d2}\,e^{-\frac{|x-z|^2}{4M_1 s}}$$
by (18).
Also by (18), we see that
$$|b(z)|\,q_s(x,z) \le b_\infty\,C_q\,s^{-\frac d2}\,e^{-\frac{|x-z|^2}{4M_1 s}}.$$
Combining these altogether, we have that
$$\begin{aligned} \left| \int_{\mathbf R^d} q_s(x,z)\,h(t-s,z,y)\,dz \right| &\le 2^{\frac d2+1}m^{-1-\frac d2}M_1^{\frac{d+1}2}K_1\,(t-s)^{-\frac12}\int_{\mathbf R^d}\Big\{ 2M\sqrt d\,C_q\,s^{-\frac{d+1}2} + \big( \tilde C_q + b_\infty C_q \big)\,s^{-\frac d2} \Big\}\,e^{-\frac{|x-z|^2}{4M_1 s}}\;p^{4M_1}_{t-s}(z,y)\,dz \\ &\le C_4\,(t-s)^{-\frac12}\,\big( s^{-\frac12}+1 \big)\int_{\mathbf R^d} p^{4M_1}_s(x,z)\,p^{4M_1}_{t-s}(z,y)\,dz = C_4\,(t-s)^{-\frac12}\,\big( s^{-\frac12}+1 \big)\,p^{4M_1}_t(x,y), \end{aligned}$$
for a constant $C_4$ depending only on $d$, $m$, $M$, $M_1$, $C_q$, $\tilde C_q$ and $b_\infty$; the last equality is the Chapman-Kolmogorov identity for the Gaussian kernel $p^{4M_1}$. This completes the proof.

A.4 Proof of Lemma 3.3

We first notice that
$$\partial_s\{ q_s(x,z)\,p_{t-s}(z,y) \} = (L^*_z q_s)(x,z)\,p_{t-s}(z,y) - q_s(x,z)\,(L^y_z p_{t-s})(z,y)$$
by (20) and (22). Since
$$\lim_{s\downarrow0}\int_{\mathbf R^d} q_s(x,z)\,p_{t-s}(z,y)\,dz = p_t(x,y) \quad\text{and}\quad \lim_{s\uparrow t}\int_{\mathbf R^d} q_s(x,z)\,p_{t-s}(z,y)\,dz = q_t(x,y),$$
we have
$$\begin{aligned} q_t(x,y)-p_t(x,y) &= \lim_{\epsilon\downarrow0}\int_\epsilon^{t-\epsilon} ds \int_{\mathbf R^d} dz\,\{ (L^*_z q_s)(x,z)\,p_{t-s}(z,y) - q_s(x,z)\,(L^y_z p_{t-s})(z,y) \} \\ &= \lim_{\epsilon\downarrow0}\int_\epsilon^{t-\epsilon} ds \int_{\mathbf R^d} dz\; q_s(x,z)\,(L_z-L^y_z)\,p_{t-s}(z,y) = \lim_{\epsilon\downarrow0}\int_\epsilon^{t-\epsilon} ds \int_{\mathbf R^d} dz\; q_s(x,z)\,h(t-s,z,y) \\ &= \int_0^t ds \int_{\mathbf R^d} dz\; q_s(x,z)\,h(t-s,z,y). \end{aligned}$$
The second equality and the last equality follow from (21) and from the integrability implied by (iii) of Theorem 3.6, respectively.

A.5 Proof of Theorem 3.9

By leveraging on the optional sampling theorem, $E[\mathbf 1_{\{\tau\le\cdot\}}\,\cdots\,]$ …

A.6 Proof of Lemma 3.10

Since
$$E[\pi^\perp S_{T-s}f(X_s)\,|\,\mathcal F_\tau] = \mathbf 1_{\{\tau\le s\}}\int_{D^c} q_{s-\tau}(X_\tau,y)\,\{ S_{T-s}f(y)+S_{T-s}f(\theta(y)) \}\,dy + \mathbf 1_{\{\tau>s\}}\mathbf 1_{\{X_s\in D^c\}}\big( S_{T-s}f(X_s)+S_{T-s}f(\theta(X_s)) \big)$$
and since $\{\tau>s\}\cap\{X_s\in D^c\} = \emptyset$, the second term is zero.
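The last step of the proof of (iii) above rests on the Chapman-Kolmogorov identity for the Gaussian kernel. A quick numerical check in dimension one (the diffusivity $c$ and the points $x$, $y$, $s$, $t$ below are illustrative values, not quantities from the paper):

```python
import numpy as np

# Chapman-Kolmogorov for p_t(x, y) = (4 pi c t)^{-1/2} exp(-(x-y)^2 / (4 c t)):
#   int_R p_s(x, z) p_{t-s}(z, y) dz = p_t(x, y).
def p(t, x, y, c=1.5):
    return np.exp(-(x - y) ** 2 / (4 * c * t)) / np.sqrt(4 * np.pi * c * t)

x, y, s, t = -0.4, 1.1, 0.3, 1.0
z = np.linspace(-30.0, 30.0, 200001)
dz = z[1] - z[0]
lhs = np.sum(p(s, x, z) * p(t - s, z, y)) * dz   # integrand vanishes at the cutoffs
rhs = p(t, x, y)
```

The agreement is to quadrature accuracy; the identity itself is what lets the convolution of the two kernels collapse into a single kernel $p^{4M_1}_t(x,y)$ in the display above.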
Therefore we see that $|E[\pi^\perp S_{T-s}f(X_s)\,|\,\mathcal F_\tau]| \le \mathbf 1_{\{\tau\le s\}}\cdots$ …

Then, by a calculation similar to the one we did for $II'$ in the proof of Lemma 3.10, we obtain that
$$III \le C_1C_4\,\|f\|_\infty\,(u-\tau)^{-\frac12}(s-u)^{-\frac12}(T-s)^{-\frac12}\,(4\pi M_1)^{\frac d2}\,\big( 1+(T-\tau) \big),$$
where the factor $1+(T-\tau)$ bounds the contribution of the remaining normal integral $\int_{-\infty}^k e^{-\frac{(k-z)^2}{4M_1}\left( (s-\tau)^{-1}+(T-s)^{-1} \right)}\,dz$. Now we see that $\mathbf 1_{\{\tau\le u\}}\,E[S_{s-u}S_{T-s}f(X_u)\,|\,\mathcal F_\tau]$ is integrable in $(s,u,\omega)$ on $\{(s,u): 0\le u\le s\le T\}\times\Omega$, and the totality of the error is then obtained as
$$\int_0^T \mathrm{Err}_{s,t}\,ds = e^{-r(T-t)}\int_0^T ds\int_0^s du\; E[\mathbf 1_{\{\tau\le u\}}\,S_{s-u}S_{T-s}f(X_u)\,|\,\mathcal F_{t\wedge\tau}]. \tag{55}$$
By changing the order of the integrals in (55), we get (32).

A.8 Proof of Theorem 3.15

A.8.1 Estimates for the $n$-time "convolution" in space-variable of $h$

Lemma A.1 For $n\ge1$ and $y_{n+1}\in D$, we have
$$\begin{aligned} \int_{D^n}\prod_{i=1}^n |h(s_i,y_{i+1},y_i)|\,dy_i &\le \sum_{A\subset\{1,\cdots,n\}}\Big( \mathbf 1_{\{n\in A^c\}} + s_n^{-\frac12}\,e^{-\frac{(\langle y_{n+1},\gamma\rangle-k)^2}{4Ms_n}}\,\mathbf 1_{\{n\in A\}} \Big)\,(\delta C_3)^{|A|}\,C_1^{|A^c|}\prod_{j\in A\setminus\{n\}}(s_j+s_{j+1})^{-\frac12}\prod_{i=1}^n s_i^{-\frac12} \\ &= \Big( C_1\,s_n^{-\frac12} + \delta C_3\,s_n^{-1}\,e^{-\frac{(\langle y_{n+1},\gamma\rangle-k)^2}{4Ms_n}} \Big)\sum_{A\subset\{1,\cdots,n-1\}}(\delta C_3)^{|A|}\,C_1^{|A^c|}\prod_{j\in A}(s_j+s_{j+1})^{-\frac12}\prod_{i=1}^{n-1} s_i^{-\frac12}. \end{aligned}$$
Proof: We first note that, by Lemma 3.5,
$$|h(t,x,y)| + |h(t,x,\theta(y))| \le C_1\,t^{-\frac12}\,\{ p^{4M}_t(x,y)+p^{4M}_t(x,\theta(y)) \} + C_3\delta\,t^{-1}\,p^{4M}_t(x,\theta(y)).$$
Therefore, we have
$$\int_{D^n}\prod_{i=1}^n |h(s_i,y_{i+1},y_i)|\,dy_i \le \prod_{i=1}^n s_i^{-\frac12}\sum_{A\subset\{1,\cdots,n\}}(\delta C_3)^{|A|}\,C_1^{|A^c|}\int_{D^n}\prod_{i\in A} s_i^{-\frac12}\,p^{4M}_{s_i}(y_{i+1},\theta(y_i))\,dy_i\prod_{j\in A^c}\{ p^{4M}_{s_j}(y_{j+1},y_j)+p^{4M}_{s_j}(y_{j+1},\theta(y_j)) \}\,dy_j. \tag{56}$$
The integral on the right-hand side of (56) is reduced to a one-dimensional one.
In fact, the change of variables
$$(z_j, z^1_j, \cdots, z^{d-1}_j) = (\langle y_j,\gamma\rangle, \langle y_j,\gamma_1\rangle, \cdots, \langle y_j,\gamma_{d-1}\rangle) =: G(y_j),$$
where $\{\gamma_1,\cdots,\gamma_{d-1}\}$ is an orthonormal basis of the hyperplane orthogonal to $\gamma$, separates both the domain $D$ and the heat kernel $p^{4M}$ as $G(D) = [k,\infty)\times\mathbf R^{d-1}$,
$$p^{4M}_t(y_{j+1},y_j) = (4\pi Mt)^{-\frac12}\,e^{-(z_{j+1}-z_j)^2/4Mt}\;(4\pi Mt)^{-\frac{d-1}2}\,e^{-\sum_{l=1}^{d-1}(z^l_{j+1}-z^l_j)^2/4Mt},$$
and
$$p^{4M}_t(y_{j+1},\theta(y_j)) = (4\pi Mt)^{-\frac12}\,e^{-(z_{j+1}+z_j-2k)^2/4Mt}\;(4\pi Mt)^{-\frac{d-1}2}\,e^{-\sum_{l=1}^{d-1}(z^l_{j+1}-z^l_j)^2/4Mt}.$$
Therefore, for each $A\subset\{1,\cdots,n\}$,
$$\begin{aligned} &\int_{D^n}\prod_{i\in A} p^{4M}_{s_i}(y_{i+1},\theta(y_i))\,dy_i\prod_{j\in A^c}\{ p^{4M}_{s_j}(y_{j+1},y_j)+p^{4M}_{s_j}(y_{j+1},\theta(y_j)) \}\,dy_j \\ &\qquad = \int_{[k,\infty)^n}\prod_{i\in A}(4\pi Ms_i)^{-\frac12}\,e^{-(z_{i+1}+z_i-2k)^2/4Ms_i}\,dz_i\prod_{j\in A^c}(4\pi Ms_j)^{-\frac12}\,\{ e^{-(z_{j+1}-z_j)^2/4Ms_j}+e^{-(z_{j+1}+z_j-2k)^2/4Ms_j} \}\,dz_j. \end{aligned} \tag{57}$$
Since $z_i-k\ge0$, we have in particular
$$e^{-(z_{i+1}+z_i-2k)^2/4Ms_i} \le e^{-\{(z_{i+1}-k)^2+(z_i-k)^2\}/4Ms_i}.$$
Therefore, the integral of (57) is dominated by
$$\int_{[k,\infty)^n}\prod_{i\in A}(4\pi Ms_i)^{-\frac12}\,e^{-\{(z_{i+1}-k)^2+(z_i-k)^2\}/4Ms_i}\,dz_i\prod_{j\in A^c}(4\pi Ms_j)^{-\frac12}\,\{ e^{-(z_{j+1}-z_j)^2/4Ms_j}+e^{-(z_{j+1}+z_j-2k)^2/4Ms_j} \}\,dz_j =: I_A.$$
We can further reduce the integral $I_A$ as follows: by the shift $z_j\mapsto z_j+k$,
$$I_A = \int_{[0,\infty)^n}\prod_{i\in A}(4\pi Ms_i)^{-\frac12}\,e^{-\{z_{i+1}^2+z_i^2\}/4Ms_i}\,dz_i\prod_{j\in A^c}(4\pi Ms_j)^{-\frac12}\,\{ e^{-(z_{j+1}-z_j)^2/4Ms_j}+e^{-(z_{j+1}+z_j)^2/4Ms_j} \}\,dz_j,$$
and, for an even function $f$,
$$\int_0^\infty\{ e^{-(z_{j+1}-z_j)^2/4Ms_j}+e^{-(z_{j+1}+z_j)^2/4Ms_j} \}\,f(z_j)\,dz_j = \int_{-\infty}^\infty e^{-(z_{j+1}-z_j)^2/4Ms_j}\,f(z_j)\,dz_j.$$
These two facts imply that
$$I_A = \int_{[0,\infty)^{|A|}\times\mathbf R^{|A^c|}}\prod_{i\in A}(4\pi Ms_i)^{-\frac12}\,e^{-\{z_{i+1}^2+z_i^2\}/4Ms_i}\,dz_i\prod_{j\in A^c}(4\pi Ms_j)^{-\frac12}\,e^{-(z_{j+1}-z_j)^2/4Ms_j}\,dz_j.$$
Even further, since the integral is invariant under the reflections $z_i\mapsto-z_i$ for $i\in A$,
$$I_A = 2^{-|A|}\int_{\mathbf R^n}\prod_{i\in A}(4\pi Ms_i)^{-\frac12}\,e^{-\{z_{i+1}^2+z_i^2\}/4Ms_i}\,dz_i\prod_{j\in A^c}(4\pi Ms_j)^{-\frac12}\,e^{-(z_{j+1}-z_j)^2/4Ms_j}\,dz_j.$$
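The folding step used just above, namely that for an even $f$ the direct and reflected Gaussian kernels on the half line combine into a single kernel on the whole line, can be checked numerically. In the sketch below, $a$ plays the role of $4Ms_j$, $w$ the role of $z_{j+1}$, and the even test function is an arbitrary illustrative choice:

```python
import numpy as np

# For even f:  int_0^inf [e^{-(w-z)^2/a} + e^{-(w+z)^2/a}] f(z) dz
#            = int_{-inf}^inf e^{-(w-z)^2/a} f(z) dz.
def trap(vals, dz):
    # composite trapezoid rule on an equally spaced grid
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dz

a, w = 4 * 1.2 * 0.6, 0.8
f = lambda z: np.exp(-z ** 2) * np.cos(z)        # an even, integrable function

zh = np.linspace(0.0, 30.0, 300001)              # half line, step 1e-4
lhs = trap((np.exp(-(w - zh) ** 2 / a) + np.exp(-(w + zh) ** 2 / a)) * f(zh), zh[1] - zh[0])

zf = np.linspace(-30.0, 30.0, 600001)            # whole line, same step
rhs = trap(np.exp(-(w - zf) ** 2 / a) * f(zf), zf[1] - zf[0])
```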
We note that
$$\prod_{i\in A} e^{-\{z_{i+1}^2+z_i^2\}/4Ms_i}\prod_{j\in A^c} e^{-(z_{j+1}-z_j)^2/4Ms_j} = e^{-\frac1{4M}\langle H_A\vec z,\vec z\rangle}\,e^{-\frac{z_{n+1}^2}{4Ms_n}}\Big( \mathbf 1_{\{n\in A\}} + \mathbf 1_{\{n\in A^c\}}\,e^{\frac{z_n z_{n+1}}{2Ms_n}} \Big) = \begin{cases} e^{-\frac1{4M}\langle H_A\vec z,\vec z\rangle}\,e^{-\frac{z_{n+1}^2}{4Ms_n}} & n\in A \\ e^{-\frac1{4M}\langle H_A(\vec z-q),\,\vec z-q\rangle}\,e^{-\frac{z_{n+1}^2}{4Ms_n}}\,e^{\frac{\langle H_A q,q\rangle}{4M}} & n\in A^c, \end{cases}$$
where $\vec z = (z_1,\cdots,z_n)$, $H_A = (h_{ij})$ is the symmetric matrix given by
$$h_{ij} = h_{ji} = \begin{cases} s_1^{-1} & i=j=1 \\ s_{i-1}^{-1}+s_i^{-1} & i=j\ge2 \\ -s_i^{-1} & i\in A^c\setminus\{n\},\ j=i+1 \\ 0 & \text{otherwise}, \end{cases} \qquad q = H_A^{-1}\,{}^t(0,\cdots,0,z_{n+1}/s_n). \tag{58}$$
Hence, carrying out the Gaussian integration,
$$I_A = 2^{-|A|}\prod_{i=1}^n s_i^{-\frac12}\,(\det H_A)^{-\frac12}\,e^{-\frac{z_{n+1}^2}{4Ms_n}}\Big( \mathbf 1_{\{n\in A\}} + \mathbf 1_{\{n\in A^c\}}\,e^{\frac{\langle H_A q,q\rangle}{4M}} \Big).$$
The proof will be complete if we show
$$\langle H_A q,q\rangle \le \frac{z_{n+1}^2}{s_n} \tag{59}$$
and
$$\det H_A \ge \prod_{i\in A\setminus\{n\}} s_i^{-2}(s_i+s_{i+1})\prod_{j\in A^c\cup\{n\}} s_j^{-1}. \tag{60}$$
We first show (60). For a fixed $A$, we choose $i_1,\cdots,i_l$ and $j_1,\cdots,j_l$ in the following way:

Algorithm A.2
1. $i_1 = 1$ if $1\in A$; otherwise $j_1 = 1$.
2. If $i_1 = 1$, then define inductively, for $k\ge1$, $j_k = \min\{j : j\in A^c,\ i_k<j<n\}$ and $i_{k+1} = \min\{i : i\in A,\ j_k<i<n\}$.
3. If $j_1 = 1$, then $i_k = \min\{i : i\in A,\ j_k<i<n\}$ and $j_{k+1} = \min\{j : j\in A^c,\ i_k<j<n\}$.
We stop this algorithm when the set over which the minimum is taken becomes empty.

Let
$$H_{I_k} := \begin{cases} (h_{ij})_{i_k\le i,j\le j_k-1} & i_1=1,\ k=1 \\ (h_{ij})_{i_k+1\le i,j\le j_k-1} & i_1=1,\ k\ge2 \\ (h_{ij})_{i_k+1\le i,j\le j_{k+1}-1} & j_1=1, \end{cases} \qquad H_{J_k} := \begin{cases} (h_{ij})_{j_k\le i,j\le i_{k+1}} & i_1=1 \\ (h_{ij})_{j_k\le i,j\le i_k} & j_1=1. \end{cases}$$
Then $H_A = H_{I_1}\oplus H_{J_1}\oplus\cdots$ if $i_1=1$, and so on. The point is that in any case this gives a direct sum decomposition. Since the $H_{I_k}$, $k=1,2,\cdots$, are diagonal, we have
$$\det H_A = \prod_k \det H_{I_k}\prod_{k'}\det H_{J_{k'}}.$$
(61)

One can easily confirm that
$$\det H_{I_k} = \begin{cases} s_1^{-1} & i_1=1,\ j_1=2,\ k=1 \\ s_1^{-1}\prod_{i=i_k+1}^{j_k-1} s_i^{-1}s_{i-1}^{-1}(s_i+s_{i-1}) & i_1=1,\ j_1>2,\ k=1 \\ \prod_{i=i_k+1}^{j_k-1} s_i^{-1}s_{i-1}^{-1}(s_i+s_{i-1}) & i_1=1,\ k\ge2 \\ \prod_{i=i_k+1}^{j_{k+1}-1} s_i^{-1}s_{i-1}^{-1}(s_i+s_{i-1}) & j_1=1. \end{cases} \tag{62}$$
One can also prove that
$$\det H_{J_k} = \begin{cases} \prod_{j=j_k}^{i_k} s_j^{-1} = s_1^{-1}\cdots s_{i_k}^{-1} & j_1=1,\ k=1 \\ \prod_{j=j_k-1}^{i_k} s_j^{-1}\sum_{j=j_k-1}^{i_k} s_j & j_1=1,\ k\ge2 \\ \prod_{j=j_k-1}^{i_{k+1}} s_j^{-1}\sum_{j=j_k-1}^{i_{k+1}} s_j & i_1=1. \end{cases} \tag{63}$$
Here we only prove the case $i_1=1$ (so that $j_1>1$), by induction with respect to the size of the block. Since
$$\det H_J|_{i=n+1} = (s_n^{-1}+s_{n-1}^{-1})\,\det H_J|_{i=n} - s_{n-1}^{-2}\,\det H_J|_{i=n-1},$$
the inductive assumption yields
$$\begin{aligned} \det H_J|_{i=n+1} &= (s_n^{-1}+s_{n-1}^{-1})\prod_{j=j_1-1}^{n-1} s_j^{-1}\sum_{j=j_1-1}^{n-1} s_j - s_{n-1}^{-2}\prod_{j=j_1-1}^{n-2} s_j^{-1}\sum_{j=j_1-1}^{n-2} s_j \\ &= \prod_{j=j_1-1}^{n} s_j^{-1}\sum_{j=j_1-1}^{n-1} s_j + s_{n-1}^{-1}\prod_{j=j_1-1}^{n-1} s_j^{-1}\Big( \sum_{j=j_1-1}^{n-1} s_j - \sum_{j=j_1-1}^{n-2} s_j \Big) \\ &= \prod_{j=j_1-1}^{n} s_j^{-1}\sum_{j=j_1-1}^{n-1} s_j + \prod_{j=j_1-1}^{n-1} s_j^{-1} = \prod_{j=j_1-1}^{n} s_j^{-1}\Big( \sum_{j=j_1-1}^{n-1} s_j + s_n \Big), \end{aligned}$$
as desired. Other cases can be treated in the same way. By noting
$$\sum_{j=j_k-1}^{i_k} s_j \ge s_{i_k}+s_{i_k-1} \quad\text{and}\quad \sum_{j=j_l-1}^{n} s_j \ge s_n,$$
we see that (61), (62) and (63) prove (60).

Now we turn to a validation of (59). By the definition (58) of $q$, we notice that
$$\langle H_A q,q\rangle = \frac{z_{n+1}^2}{s_n^2}\,(H_A^{-1})_{nn} = \frac{z_{n+1}^2}{s_n^2}\,\frac{\det(\widetilde{H_A})_{nn}}{\det H_A},$$
where $(\widetilde{H_A})_{nn}$ is the $(n,n)$-cofactor of $H_A$. By the above observations on the determinants of $H_{I_k}$ and $H_{J_k}$, we now see that
$$\frac{\det(\widetilde{H_A})_{nn}}{\det H_A} = \frac{\det(\widetilde{H_{J_l}})_{nn}}{\det H_{J_l}},$$
where $J_l$ is such that $n\in J_l$. Then, by (63),
$$\frac{\det(\widetilde{H_{J_l}})_{nn}}{\det H_{J_l}} = s_n\,\frac{s_{j_l}+\cdots+s_{n-1}}{s_{j_l}+\cdots+s_n} \le s_n.$$
Hence we have (59).

Lemma A.3 For $n\ge1$ and $y_{n+1}\in D^c$,
$$\int_{D^n}\prod_{i=1}^n |h(s_i,y_{i+1},y_i)|\,dy_i \le \big( C_1\,s_n^{-\frac12} + C_2\,s_n^{-1} \big)\sum_{A\subset\{1,\cdots,n-1\}}(\delta C_3)^{|A|}\,C_1^{|A^c|}\prod_{j\in A}(s_j+s_{j+1})^{-\frac12}\prod_{i=1}^{n-1} s_i^{-\frac12},$$
where, as before, $C_2 = 2M\sqrt d\,C_3$.
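The closed form (63) for the tridiagonal blocks $H_{J_k}$ amounts to the following identity: an interior block with diagonal entries $s_{j-1}^{-1}+s_j^{-1}$, $j=a,\cdots,b$, and off-diagonal entries $-s_j^{-1}$ has determinant $(s_{a-1}+\cdots+s_b)/(s_{a-1}\cdots s_b)$. A numerical sketch (random positive $s_j$ and an illustrative block range, not data from the paper):

```python
import numpy as np

# det of the interior tridiagonal block over j = a..b equals
#   (s_{a-1} + ... + s_b) / (s_{a-1} * ... * s_b).
rng = np.random.default_rng(1)
s = rng.uniform(0.1, 2.0, size=8)                 # s_0, ..., s_7 > 0
a, b = 2, 6                                       # block involving s_{a-1}, ..., s_b

n = b - a + 1
H = np.zeros((n, n))
for r, j in enumerate(range(a, b + 1)):
    H[r, r] = 1.0 / s[j - 1] + 1.0 / s[j]         # diagonal s_{j-1}^{-1} + s_j^{-1}
    if r + 1 < n:
        H[r, r + 1] = H[r + 1, r] = -1.0 / s[j]   # off-diagonal -s_j^{-1}

det_formula = s[a - 1:b + 1].sum() / s[a - 1:b + 1].prod()
det_numeric = np.linalg.det(H)
```

The same two-term recursion used in the induction above is exactly the cofactor expansion of such a tridiagonal determinant along its last row.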
Proof: Let
$$g(y) := C_1 + \delta C_3\,s_{n-1}^{-\frac12}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Ms_{n-1}}}.$$
Then, by Lemma A.1, we have
$$\int_{D^{n-1}}\prod_{i=1}^{n-1}|h(s_i,y_{i+1},y_i)|\,dy_i \le s_{n-1}^{-\frac12}\,g(y_n)\sum_{A\subset\{1,\cdots,n-2\}}(\delta C_3)^{|A|}\,C_1^{|A^c|}\prod_{j\in A}(s_j+s_{j+1})^{-\frac12}\prod_{i=1}^{n-2} s_i^{-\frac12},$$
so that we need to show that
$$\int_D\big( |h(s_n,y_{n+1},y_n)|+|h(s_n,y_{n+1},\theta(y_n))| \big)\,|g(y_n)|\,dy_n = \int_D |h(s_n,y_{n+1},y_n)|\,|g(y_n)|\,dy_n + \int_{D^c}|h(s_n,y_{n+1},y_n)|\,|g(\theta(y_n))|\,dy_n$$
is dominated by
$$\big( C_1\,s_n^{-\frac12} + C_2\,s_n^{-1} \big)\Big( C_1 + \delta C_3\,(s_{n-1}+s_n)^{-\frac12} \Big).$$
By Lemma 3.5, we know that
$$\int_D |h(s_n,y_{n+1},y_n)|\,|g(y_n)|\,dy_n \le C_1^2\,s_n^{-\frac12}\int_D p^{4M}_{s_n}(y_{n+1},y_n)\,dy_n + \delta C_1C_3\,s_n^{-\frac12}\int_D p^{4M}_{s_n}(y_{n+1},y_n)\,s_{n-1}^{-\frac12}\,e^{-\frac{(\langle y_n,\gamma\rangle-k)^2}{4Ms_{n-1}}}\,dy_n =: IV_D$$
and
$$\begin{aligned} \int_{D^c}|h(s_n,y_{n+1},y_n)|\,|g(\theta(y_n))|\,dy_n &\le C_1^2\,s_n^{-\frac12}\int_{D^c} p^{4M}_{s_n}(y_{n+1},y_n)\,dy_n + \delta C_1C_3\,s_n^{-\frac12}\int_{D^c} p^{4M}_{s_n}(y_{n+1},y_n)\,s_{n-1}^{-\frac12}\,e^{-\frac{(\langle\theta(y_n),\gamma\rangle-k)^2}{4Ms_{n-1}}}\,dy_n \\ &\quad + \big( \delta\,\mathbf 1_{\{y_{n+1}\in D\}} + 2M\sqrt d\,\mathbf 1_{\{y_{n+1}\in D^c\}} \big)\,C_3\,s_n^{-1}\Big( C_1\int_{D^c} p^{4M}_{s_n}(y_{n+1},y_n)\,dy_n \\ &\hspace{8em} + \delta C_3\int_{D^c} p^{4M}_{s_n}(y_{n+1},y_n)\,s_{n-1}^{-\frac12}\,e^{-\frac{(\langle\theta(y_n),\gamma\rangle-k)^2}{4Ms_{n-1}}}\,dy_n \Big) =: IV_{D^c} + V. \end{aligned}$$
Since
$$\int_{D\ (\text{resp. }D^c)} p^{4M}_t(x,y)\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Ms}}\,dy = (4\pi Mt)^{-\frac12}\int_{(k,\infty)\ (\text{resp. }(-\infty,k])} e^{-\frac{(\langle x,\gamma\rangle-y)^2}{4Mt}}\,e^{-\frac{(y-k)^2}{4Ms}}\,dy \le s^{\frac12}\,(t+s)^{-\frac12}\,e^{-\frac{(\langle x,\gamma\rangle-k)^2}{4M(t+s)}}$$
and
$$\int_{D\ (\text{resp. }D^c)} p^{4M}_t(x,y)\,dy = (4\pi Mt)^{-\frac12}\int_{(k,\infty)\ (\text{resp. }(-\infty,k])} e^{-\frac{(\langle x,\gamma\rangle-y)^2}{4Mt}}\,dy \le \mathbf 1_{\{x\in D\ (\text{resp. }D^c)\}} + \frac12\,\mathbf 1_{\{x\in D^c\ (\text{resp. }D)\}}\,e^{-\frac{(\langle x,\gamma\rangle-k)^2}{4Mt}},$$
where the integration over the $d-1$ coordinates $\langle y,\gamma_i\rangle$, $i=1,\cdots,d-1$, has been carried out, we obtain
$$IV_D + IV_{D^c} \le C_1^2\,s_n^{-\frac12} + \delta C_1C_3\,s_n^{-\frac12}\,(s_{n-1}+s_n)^{-\frac12}$$
and
$$V \le \big( \delta\,\mathbf 1_{\{y_{n+1}\in D\}} + 2M\sqrt d\,\mathbf 1_{\{y_{n+1}\in D^c\}} \big)\,C_3\,s_n^{-1}\Big( C_1\big( \mathbf 1_{\{y_{n+1}\in D^c\}} + \mathbf 1_{\{y_{n+1}\in D\}}\,e^{-\frac{(\langle y_{n+1},\gamma\rangle-k)^2}{4Ms_n}} \big) + \delta C_3\,(s_{n-1}+s_n)^{-\frac12} \Big).$$
We need to work only on the case $y_{n+1}\in D^c$, where we now obtain
$$IV_D + IV_{D^c} + V \le C_1^2\,s_n^{-\frac12} + \delta C_1C_3\,s_n^{-\frac12}(s_{n-1}+s_n)^{-\frac12} + C_2\,s_n^{-1}\Big( C_1 + \delta C_3\,(s_{n-1}+s_n)^{-\frac12} \Big) = \big( C_1\,s_n^{-\frac12} + C_2\,s_n^{-1} \big)\Big( C_1 + \delta C_3\,(s_{n-1}+s_n)^{-\frac12} \Big),$$
as desired.

A.9 Proof of (i) of Theorem 3.15

Proof: Take $s_i = u_i - u_{i-1}$ for $i=1,\cdots,n$, both in Lemma A.1 and in Lemma A.3.

A.10 Proof of (ii) of Theorem 3.15

A.10.1 Estimates of the integral with respect to the time variables

In this section, we give an estimate for the integral, with respect to the time variables, of the bound given in Lemma A.1.

Lemma A.4 Let $n\ge2$. There exists a constant $C_6$ independent of $n$ such that, for any $A\subset\{1,2,\cdots,n-1\}$,
$$\int_0^{u_n}\cdots\int_0^{u_2}\prod_{j\in A}(u_{j+1}-u_{j-1})^{-\frac12}\prod_{i=1}^{n-1}(u_i-u_{i-1})^{-\frac12}\,du_1\cdots du_{n-1} \le \Gamma(\tfrac14)\,C_6^{|A|}\,\frac{\Gamma(\tfrac12)^{|A^c|}}{\Gamma(\frac{|A^c|}2)}\left( u_{n-1}^{\frac{|A^c|}2-\frac12}\,\mathbf 1_{\{n-1\in A^c\}} + u_{n-1}^{\frac{|A^c|}2-\frac14}\,(u_n-u_{n-1})^{-\frac12}\,\mathbf 1_{\{n-1\in A\}} \right), \tag{64}$$
where $u_0 = 0$.

Proof: Define operators $T_0$ and $T_1$, acting on functions on $\{(s,t)\in[0,T]^2 : s\le t\}$, by
$$T_0(f)(s,t) = \int_0^s (s-u)^{-\frac12}\,f(u,s)\,du \quad (\text{constant in } t)$$
and
$$T_1(f)(s,t) = \int_0^s (t-u)^{-\frac12}(s-u)^{-\frac12}\,f(u,s)\,du,$$
whenever they exist. We also define $\tau_1,\cdots,\tau_{n-1} : 2^{\{1,\cdots,n-1\}}\to\{0,1\}$ by
$$\tau_k(A) = \begin{cases} 0 & k\in A^c \\ 1 & k\in A \end{cases}$$
for $k=1,2,\cdots,n-1$, and set
$$f_A(s,t) = \big( s^{-\frac12}\,\tau_1(A) + (1-\tau_1(A)) \big)\,T_{\tau_1(A)}(g)(s,t), \qquad g(u,s) = u^{-\frac12}.$$
Then the integral on the left-hand side of (64) is written as
$$T_{\tau_{n-1}(A)}\circ\cdots\circ T_{\tau_2(A)}(f_A)(u_{n-1},u_n).$$
We note that, for
$$f(u,t) = (t-u)^{-\epsilon}\,u^{-\beta}, \tag{65}$$
where $\beta<1$ and $\epsilon\in[0,\tfrac14]$,
$$T_0(f)(s,t) = \int_0^s (s-u)^{-\frac12-\epsilon}\,u^{-\beta}\,du = s^{\frac12-\epsilon-\beta}\int_0^1 (1-v)^{-\frac12-\epsilon}\,v^{-\beta}\,dv = s^{\frac12-\epsilon-\beta}\,B\Big( \frac12-\epsilon,\ 1-\beta \Big).$$
Inductively, for $m\ge2$, we have
$$T_0^m(f)(s,t) = s^{\frac m2-\epsilon-\beta}\,B\Big( \frac12-\epsilon,\ 1-\beta \Big)\prod_{k=2}^m B\Big( \frac12,\ \frac{k-1}2+1-\epsilon-\beta \Big). \tag{66}$$
We claim that, for the same $f$ as in (65), there exists a constant $C_6$ such that
$$T_1(f)(s,t) \le C_6\,(t-s)^{-\frac12}\,s^{\frac12-\epsilon-\beta}. \tag{67}$$
Note that, by a repeated use of (67), we still have, for any $m$,
$$T_1^m(f)(s,t) \le C_6^m\,s^{\frac12-\epsilon-\beta}\,(t-s)^{-\frac12}. \tag{68}$$
In fact, by the generalized binomial formula,
$$T_1(f)(s,t) = \int_0^s (t-u)^{-\frac12}(s-u)^{-\frac12-\epsilon}\,u^{-\beta}\,du = t^{-\frac12}\int_0^s \Big( 1-\frac ut \Big)^{-\frac12}(s-u)^{-\frac12-\epsilon}\,u^{-\beta}\,du = t^{-\frac12}\int_0^s \sum_{k=0}^\infty \binom{-\frac12}{k}\Big( -\frac ut \Big)^k (s-u)^{-\frac12-\epsilon}\,u^{-\beta}\,du,$$
and, since the series inside the integral converges absolutely for $s<t$,
$$T_1(f)(s,t) = t^{-\frac12}\sum_{k=0}^\infty \binom{-\frac12}{k}(-t)^{-k}\int_0^s (s-u)^{-\frac12-\epsilon}\,u^{-\beta+k}\,du = t^{-\frac12}\,s^{\frac12-\epsilon-\beta}\sum_{k=0}^\infty \binom{-\frac12}{k}(-t)^{-k}s^k\,B\Big( \frac12-\epsilon,\ 1-\beta+k \Big).$$
By Lemma A.5 below (with the $k=0$ term absorbed into the constant),
$$T_1(f)(s,t) \le C_7\,t^{-\frac12}\,s^{\frac12-\epsilon-\beta}\sum_{k=0}^\infty \left| \binom{-\frac14}{k} \right|\Big( \frac st \Big)^k = C_7\,t^{-\frac12}\,s^{\frac12-\epsilon-\beta}\Big( 1-\frac st \Big)^{-\frac14} \le C_7\,s^{\frac12-\epsilon-\beta}\,(t-s)^{-\frac12},$$
as desired. Here we can take $C_6$ to be any constant greater than both $C_7$ and the Beta factors appearing in (66). We shall inductively apply (66) and (68) to obtain (64). Firstly, we have
$$f_A(s,t) \le (1-\tau_1(A))\,B\Big( \frac12,\frac12 \Big) + \tau_1(A)\,C_6\,s^{-\frac12}(t-s)^{-\frac12},$$
which in each case is of the form of (65), up to constants.
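The base case of (66) is the classical Euler integral $\int_0^s (s-u)^{-\frac12}u^{-\beta}\,du = s^{\frac12-\beta}B(\frac12,1-\beta)$. Below is a numerical verification for illustrative values of $s$ and $\beta$ (the substitution $u = s\sin^2\theta$ removes both endpoint singularities before quadrature):

```python
import math
import numpy as np

# int_0^s (s-u)^{-1/2} u^{-beta} du = s^{1/2-beta} B(1/2, 1-beta),
# computed via u = s sin^2(theta).
s_val, beta = 1.7, 0.25
theta = np.linspace(0.0, np.pi / 2, 200001)
dth = theta[1] - theta[0]
vals = 2.0 * np.sin(theta) ** (2 * (1 - beta) - 1)   # cosine exponent is 0 here
integral = s_val ** (0.5 - beta) * ((vals.sum() - 0.5 * (vals[0] + vals[-1])) * dth)

B = math.gamma(0.5) * math.gamma(1 - beta) / math.gamma(1.5 - beta)
closed_form = s_val ** (0.5 - beta) * B
```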
Observe that, for $m_1,\cdots,m_n\ge0$ and $l_1,\cdots,l_n>0$ (with the convention $l_0 = 0$), the composition
$$T_0^{l_n}\circ T_1^{m_n}\circ\cdots\circ T_0^{l_1}\circ T_1^{m_1}(f_A)(s,t)$$
is, by an iterated application of (66) and (68), bounded by a sum of terms, one for each of the two cases $1\in A^c$ and $1\in A$ in the bound for $f_A$, each of the form
$$C_6^{m_1+\cdots+m_n(+1)}\; s^{\frac{l_1+\cdots+l_n}2(-\frac12)}\;\prod_{i=1}^n B\Big( \frac12,\ \frac{l_1+\cdots+l_{i-1}}2+c_i \Big)\prod_{k=2}^{l_i} B\Big( \frac12,\ \frac{k-1+(l_1+\cdots+l_{i-1})}2+c_i' \Big), \tag{69}$$
where the "+1" in the exponent of $C_6$ and the "$-\frac12$" in the exponent of $s$ occur exactly in the case $1\in A$, and the shifts $c_i$, $c_i'$ are bounded and depend only on the case; here $T^0$ is understood as the identity map.
Since Beta functions are decreasing in each variable, each of the terms in (69) can be bounded further; telescoping the products of Beta functions into ratios of Gamma functions, one obtains a bound of the form
$$C_6^{m_1+\cdots+m_n(+1)}\; s^{\frac{l_1+\cdots+l_n}2(-\frac12)}\;\frac{\Gamma(\frac12)^{l_1+\cdots+l_n(+1)}}{\Gamma\big( \frac{l_1+\cdots+l_n}2+c \big)}, \tag{70}$$
with a shift $c$ again depending only on the case, which is consistent with the assertion of the lemma. The case where $l_n = 0$ is obtained by simply applying (68) to (70).

Lemma A.5 Suppose that $\epsilon\in[0,\frac14]$ and $\beta<1$. Then there exists a constant $C_7$ such that, for any $k\ge1$,
$$\left| \binom{-\frac12}{k} \right| B\Big( \frac12-\epsilon,\ 1-\beta+k \Big) \le C_7 \left| \binom{-\frac14}{k} \right|.$$
Proof: First let us observe that
$$\binom{-\frac12}{k}\binom{-\frac14}{k}^{-1} = \frac{(-1)^k\,\Gamma(\frac12+k)}{\Gamma(k+1)\,\Gamma(\frac12)}\left( \frac{(-1)^k\,\Gamma(\frac14+k)}{\Gamma(k+1)\,\Gamma(\frac14)} \right)^{-1} = \frac{\Gamma(\frac12+k)\,\Gamma(\frac14)}{\Gamma(\frac14+k)\,\Gamma(\frac12)} \sim \frac{\Gamma(\frac14)}{\Gamma(\frac12)}\,k^{\frac14} \quad (k\to\infty) \tag{71}$$
by Stirling's approximation
$$\Gamma(z) \sim \sqrt{\frac{2\pi}z}\Big( \frac ze \Big)^z \quad (z\to\infty).$$
On the other hand, since the Beta function is decreasing in each argument and $\frac12-\epsilon\ge\frac14$, $1-\beta+k\ge k$ for $k\ge1$,
$$B\Big( \frac12-\epsilon,\ 1-\beta+k \Big) \le B\Big( \frac14,\ k \Big) = \frac{\Gamma(\frac14)\,\Gamma(k)}{\Gamma(\frac14+k)} \sim \Gamma(\tfrac14)\,k^{-\frac14} \quad (k\to\infty). \tag{72}$$
By (71) and (72),
$$\left| \binom{-\frac12}{k} \right|\left| \binom{-\frac14}{k} \right|^{-1} B\Big( \frac12-\epsilon,\ 1-\beta+k \Big) \le \left| \binom{-\frac12}{k} \right|\left| \binom{-\frac14}{k} \right|^{-1} B\Big( \frac14,\ k \Big) =: g(k). \tag{73}$$
Since $g(k)\to\Gamma(\frac14)^2/\Gamma(\frac12)<\infty$ as $k\to\infty$, the left-most side of (73) is bounded by a constant $C_7$ which is independent of $\epsilon$, $\beta$ and $k$.

Lemma A.6 For $\xi\in(0,1]$, there exists a constant $C_\xi>0$ such that, for any $x>0$,
$$\Gamma(x)\,\xi^x \ge C_\xi.$$
Proof: By Stirling's approximation, there exists a constant $C'$ such that
$$\Gamma(x) \ge C'\,\sqrt{\frac{2\pi}x}\Big( \frac xe \Big)^x.$$
Then
$$\log\big( \Gamma(x)\,\xi^x \big) \ge \log C' + \frac12\log2\pi + \Big( x-\frac12 \Big)\log x - x + x\log\xi,$$
and so
$$\lim_{x\downarrow0}\log\big( \Gamma(x)\,\xi^x \big) \ge \text{(constant)} - \lim_{x\downarrow0}\frac12\log x = \infty \tag{74}$$
and
$$\lim_{x\to\infty}\log\big( \Gamma(x)\,\xi^x \big) \ge \text{(constant)} + \lim_{x\to\infty}\big( x(\log x+\log\xi-1) \big) = \infty. \tag{75}$$
By (74) and (75), together with the continuity and positivity of $\Gamma(x)\,\xi^x$ on compact subsets of $(0,\infty)$, we have the assertion.

A.10.2 Proof of (ii) of Theorem 3.15

Proof: By Theorem 3.15 (i) and Lemma A.4, we have that, for $y_{n+1}\in D$, $u_n\in(0,T)$ and $f\in L^\infty(D)$, the time-integrated bound of Lemma A.1 is dominated, for each $A\subset\{1,\cdots,n-1\}$, by a term of the form
$$\Big( C_1\,u_n^{-\frac12} + \delta C_3\,(4M)^{\frac38}K\,(\langle y_{n+1},\gamma\rangle-k)^{-\frac34}\,u_n^{-\frac18} \Big)\,(\delta C_3C_6)^{|A|}\,\big( C_1\Gamma(\tfrac12) \big)^{|A^c|}\,\frac{u_n^{\frac{|A^c|}2}}{\Gamma(\frac{|A^c|}2+\frac18)}\,\Gamma(\tfrac18),$$
where $K := \sup_{r>0}r^{\frac34}e^{-r^2}<\infty$ has been used to trade the Gaussian factor $e^{-\frac{(\langle y_{n+1},\gamma\rangle-k)^2}{4Mu_n}}$ for the power $(\langle y_{n+1},\gamma\rangle-k)^{-\frac34}$, and $B(\frac18,x) = \Gamma(\frac18)\Gamma(x)/\Gamma(x+\frac18)$ has been applied. Now, by Lemma A.6 with constant $C_\xi$, for arbitrary $\xi\in(0,1]$,
$$\frac{\big( C_1\Gamma(\frac12)\sqrt{u_n} \big)^{|A^c|}}{\Gamma(\frac{|A^c|}2+\frac18)} \le \frac1{C_\xi}\big( C_1\Gamma(\tfrac12)\sqrt{\xi u_n} \big)^{|A^c|},$$
and, summing over $A$,
$$\sum_{A\subset\{1,\cdots,n-1\}}(\delta C_3C_6)^{|A|}\big( C_1\Gamma(\tfrac12)\sqrt{\xi u_n} \big)^{|A^c|} = \sum_{k=0}^{n-1}\binom{n-1}k(\delta C_3C_6)^k\big( C_1\Gamma(\tfrac12)\sqrt{\xi u_n} \big)^{n-1-k} = \Big( \delta C_3C_6 + C_1\Gamma(\tfrac12)\sqrt{\xi u_n} \Big)^{n-1}.$$
Then, by taking $\xi = \delta^2$ (assuming, as we may, $\delta\le1$), so that the sum is bounded by $\delta^{n-1}\big( C_3C_6+C_1\Gamma(\frac12)\sqrt T \big)^{n-1}$, and by setting
$$C_9 := C_3C_6 + C_1\Gamma(\tfrac12)\sqrt T, \qquad C_{10} := \frac{\Gamma(\frac18)}{C_{\delta^2}}\max\big( C_1,\ \delta C_3\,(4M)^{\frac38}K \big),$$
we obtain (34).

A.11 Lemmas for Theorem 3.17

A.11.1 An Estimate for the integral of $(qh)$ times $S^*$

Lemma A.7 For any $x\in\partial D$ and $n\ge1$, it holds that
$$\int_D \left| \int_{\mathbf R^d} q_s(x,y)\,h(t,y,z)\,dy \right| |S^{*n}_u f(z)|\,dz \le \|f\|_\infty\,(C_9\delta)^{n-1}\,\frac{C_4C_{10}}2\; t^{-\frac12}\big( s^{-\frac12}+1 \big)\Big( u^{-\frac12} + \pi^{-\frac12}\,\Gamma(\tfrac18)\,u^{-\frac18}\,\big( 4M_1(s+t) \big)^{-\frac38} \Big).$$
Proof: By (iii) of Theorem 3.6 and (ii) of Theorem 3.15, we have that
$$\int_D \left| \int_{\mathbf R^d} q_s(x,y)\,h(t,y,z)\,dy \right| |S^{*n}_u f(z)|\,dz \le \int_D C_4\,t^{-\frac12}\big( s^{-\frac12}+1 \big)\,p^{4M_1}_{s+t}(x,z)\;\|f\|_\infty\,(C_9\delta)^{n-1}\,C_{10}\Big( u^{-\frac12} + u^{-\frac18}\,(\langle z,\gamma\rangle-k)^{-\frac34} \Big)\,dz.$$
Since $x\in\partial D$, so that $\langle x,\gamma\rangle = k$, the transverse coordinates integrate out and we have
$$\int_D p^{4M_1}_{s+t}(x,z)\,dz = \big( 4\pi M_1(s+t) \big)^{-\frac12}\int_k^\infty e^{-\frac{(k-w)^2}{4M_1(s+t)}}\,dw = \frac12$$
and
$$\int_D p^{4M_1}_{s+t}(x,z)\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz = \big( 4\pi M_1(s+t) \big)^{-\frac12}\int_k^\infty (w-k)^{-\frac34}\,e^{-\frac{(k-w)^2}{4M_1(s+t)}}\,dw = \frac12\,\pi^{-\frac12}\,\big( 4M_1(s+t) \big)^{-\frac38}\,\Gamma(\tfrac18),$$
where we used $\int_0^\infty v^{-\frac34}e^{-v^2}\,dv = \frac12\Gamma(\frac18)$. Plugging these into the bound above completes the proof.

A.11.2 An Estimate for the integral of $h$ times $S^*$

Lemma A.8 There exists a constant $C_{11}$, dependent on $\delta$, such that, for any $n\ge1$,
$$\int_D h(t,y,z)\,S^{*n}_u f(z)\,dz \le C_{11}\,\|f\|_\infty\,(C_9\delta)^{n-1}\left( u^{-\frac12}\Big( t^{-\frac12}+t^{-1}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}} \Big) + u^{-\frac18}\Big( t^{-\frac78}+\big( t^{-\frac78}+t^{-\frac{11}8} \big)\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}} \Big) \right) \tag{77}$$
for any $y\in D$.

Proof: It suffices to estimate
$$\int_D |h(t,y,z)|\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz,$$
since we have, by Theorem 3.15 (ii),
$$\int_D |h(t,y,z)|\,|S^{*n}_u f(z)|\,dz \le \|f\|_\infty\,(C_9\delta)^{n-1}\,C_{10}\left( u^{-\frac12}\int_D |h(t,y,z)|\,dz + u^{-\frac18}\int_D |h(t,y,z)|\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz \right). \tag{78}$$
By Lemma 3.5,
$$\int_D |h(t,y,z)|\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz \le C\left( t^{-\frac12}\int_D p^{4M}_t(y,z)\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz + \big( t^{-\frac12}+t^{-1} \big)\int_D p^{4M}_t(y,\theta(z))\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz \right), \tag{79}$$
where the constant $C$ is the one given in Theorem 3.6 (i). The first integral in (79) is estimated as follows:
$$\begin{aligned} \int_D p^{4M}_t(y,z)\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz &= \prod_{i=1}^{d-1}\left( \int_{\mathbf R}(4M\pi t)^{-\frac12}\,e^{-\frac{(\langle y,\gamma_i\rangle-w_i)^2}{4Mt}}\,dw_i \right)\int_k^\infty (4M\pi t)^{-\frac12}\,e^{-\frac{(w-\langle y,\gamma\rangle)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw \\ &= \int_k^{k+(4M\pi t)^{\frac12}}(4M\pi t)^{-\frac12}\,e^{-\frac{(w-\langle y,\gamma\rangle)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw + \int_{k+(4M\pi t)^{\frac12}}^\infty (4M\pi t)^{-\frac12}\,e^{-\frac{(w-\langle y,\gamma\rangle)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw \\ &\le (4M\pi t)^{-\frac12}\int_k^{k+(4M\pi t)^{\frac12}}(w-k)^{-\frac34}\,dw + (4M\pi t)^{-\frac38}\int_{k+(4M\pi t)^{\frac12}}^\infty (4M\pi t)^{-\frac12}\,e^{-\frac{(w-\langle y,\gamma\rangle)^2}{4Mt}}\,dw \\ &\le 5\,(4M\pi)^{-\frac38}\,t^{-\frac38}. \end{aligned}$$
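The Gamma-integral used just above, $\int_0^\infty v^{-\frac34}e^{-v^2}\,dv = \frac12\Gamma(\frac18)$, is easy to confirm numerically: the substitution $v = w^4$ turns it into the smooth integral $\int_0^\infty 4e^{-w^8}\,dw$, which standard quadrature handles without any endpoint care.

```python
import math
import numpy as np

# int_0^inf v^{-3/4} e^{-v^2} dv = (1/2) Gamma(1/8), via the substitution v = w^4:
#   dv = 4 w^3 dw,  v^{-3/4} = w^{-3},  so the integrand becomes 4 exp(-w^8).
w = np.linspace(0.0, 6.0, 600001)
vals = 4.0 * np.exp(-w ** 8)
lhs = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * (w[1] - w[0])   # trapezoid rule
rhs = 0.5 * math.gamma(0.125)
```

Equivalently, $\int_0^\infty 4e^{-w^8}\,dw = 4\,\Gamma(1+\frac18) = \frac12\Gamma(\frac18)$, which is the identity the check exercises.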
(80)

On the other hand,
$$\begin{aligned} \int_D p^{4M}_t(y,\theta(z))\,(\langle z,\gamma\rangle-k)^{-\frac34}\,dz &= \int_k^\infty (4M\pi t)^{-\frac12}\,e^{-\frac{(2k-w-\langle y,\gamma\rangle)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw \\ &= (4M\pi t)^{-\frac12}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}}\int_k^\infty e^{-\frac{(w-k)^2+2(w-k)(\langle y,\gamma\rangle-k)}{4Mt}}\,(w-k)^{-\frac34}\,dw \\ &\le (4M\pi t)^{-\frac12}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}}\int_k^\infty e^{-\frac{(w-k)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw, \end{aligned}$$
since $\langle y,\gamma\rangle-k$ is positive for $y\in D$. Thus the second integral in (79) is dominated by
$$(4M\pi t)^{-\frac12}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}}\int_k^\infty e^{-\frac{(w-k)^2}{4Mt}}\,(w-k)^{-\frac34}\,dw = 2^{-1}\pi^{-\frac12}\,(4Mt)^{-\frac38}\,\Gamma(\tfrac18)\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}}. \tag{81}$$
Combining (79), (80) and (81) with Theorem 3.6 (i), we obtain that
$$\text{(the right-hand side of (78))} \le C\,C_{10}\,\|f\|_\infty\,(C_9\delta)^{n-1}\left( u^{-\frac12}\Big( t^{-\frac12}+t^{-1}\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}} \Big) + u^{-\frac18}\Big( 5(4M\pi)^{-\frac38}\,t^{-\frac78} + 2^{-1}\pi^{-\frac12}(4M)^{-\frac38}\,\Gamma(\tfrac18)\,\big( t^{-\frac78}+t^{-\frac{11}8} \big)\,e^{-\frac{(\langle y,\gamma\rangle-k)^2}{4Mt}} \Big) \right).$$
Hence, by taking
$$C_{11} := C\,C_{10}\,\max\Big( 1,\ 5(4M\pi)^{-\frac38},\ 2^{-1}\pi^{-\frac12}(4M)^{-\frac38}\,\Gamma(\tfrac18) \Big),$$
we have (77).

A.11.3 An Estimate for the integral of $|qh|$ times $|h|\,|S^*|$

Lemma A.9 There exists a constant $C_{12}$ such that, for $x\in\partial D$, $s,t,u,v\in[0,T]$ and $n\ge1$,
$$\int_D \left| \int_{\mathbf R^d} q_s(x,y)\,h(t,y,z)\,dy \right|\left( \int_D |h(u,z,w)\,S^{*n}_v f(w)|\,dw \right) dz \le C_{12}\,\|f\|_\infty\,(C_9\delta)^{n-1}\; t^{-\frac12}\big( s^{-\frac12}+1 \big)\,(s+t+u)^{-\frac12}\,u^{-\frac78}\,v^{-\frac12}.$$
(82) Proof: By applying (iii) of Theorem 3.6 together with Lemma A.8, we see that theintegrand in the left-hand-side of (82) is dominated by2 C || f || ∞ ( C δ ) n − C s − t − ( s + t ) − d exp (cid:18) − | x − z | M ( s + t ) (cid:19) × (cid:18) v − ( u − + u − e − ( h z,γ i− k )24 Mu ) + v − ( u − + ( u − + u − ) e − ( h z,γ i− k )24 Mu ) (cid:19) , (83)since | x − θ ( z ) | = | x − z | . Since we know that Z D e − | x − z | M s + t ) e − ( h z,γ i− k )24 Mu dz = Z D e − P d − i =1 h x − z,γi i M s + t ) e − h x − z,γ i M s + t ) e − ( k −h γ,z i )24 Mu dz ≤ (4 M ( s + t )) d − Z ∞ k e − ( k − z )24 M (( s + t ) − + u − ) dz = 2 − (4 M ( s + t )) d − (4 M ( s + t ) u ) ( s + t + u ) − , { γ i : i = 1 , · · · d − } is an orthonormal basis of ( ∂D ) ⊥ , we have that the integral of (83)is dominated by C || f || ∞ ( C δ ) n − C s − t − (4 M ) d × (cid:16) u − v − (1 + ( s + t + u ) − ) + v − ( u − + ( s + t + u ) − ( u − + u − )) (cid:17) = C || f || ∞ ( C δ ) n − C s − t − (4 M ) d u − v − ( s + t + u ) − × ( u v (( s + t + u ) + 1) + ( s + t + u ) + ( u + 1)) . By taking C := C C (4 M ) d max s,t,u,v ∈ [0 ,T ] ( u v (( s + t + u ) + 1) + ( s + t + u ) + ( u + 1)) , we obtain the results stated in (82). A.12 Proof of Theorem 3.17 The proof is conducted by considering each point stated in the Theorem, from (i) to (v). Proof of (i) : this statement can be proven with a logic similar to the one underlying theproof of Lemma 3.10. Notice that: E [ π ⊥ S s − u S ∗ ( n − T − s f ( X u ) |F τ ]= 1 { τ ≤ u } Z D c q u − τ ( X τ , y ) { S s − u S ∗ ( n − T − s f ( y ) + S s − u S ∗ ( n − T − s f ( θ ( y )) } dy + 1 { τ>u } { X s ∈ D c } ( S s − u S ∗ ( n − T − s f ( X u ) + S s − u S ∗ ( n − T − s f ( θ ( X u )));then, since the second term is zero, as in the proof of Lemma 3.10 we can write | E [ π ⊥ S s − u S ∗ ( n − T − s f ( X u ) |F τ ] |≤ { τ
By Theorem 3.6 (ii) and Theorem 3.15 (ii), \(q_{u-\tau}(X_\tau,y)\,h(s-u,y,z)\,S^{*(n-1)}_{T-s}f(z)\) is integrable in \((y,z)\in\mathbb{R}^d\times D\), almost surely on \(\{\tau<u\}\). Therefore we can change the order of integration in \(VI_1\) to obtain
\[
VI_1\leq\int_D\Bigl|\int_{\mathbb{R}^d} q_{u-\tau}(X_\tau,y)\,h(s-u,y,z)\,dy\Bigr|\,\bigl|S^{*(n-1)}_{T-s}f(z)\bigr|\,dz.
\]
Hence \(VI_1\) is bounded by \(\|f\|_\infty(C\delta)^{n-1}\) times a constant, built from \((4M)^{d/2}\) and \(\Gamma(\tfrac18)\), multiplied by negative fractional powers of \(u-\tau\), \(s-u\), and \(s-\tau\); denote this bound by \(VI_1'\).

Let us now derive a bound for the terms \(VI_2\) and \(VI_3\). By considering (18) and Lemma A.8, we have
\[
VI_2\leq\int_D q_{u-\tau}(X_\tau,y)\,\Bigl|\int_D h(s-u,y,z)\,S^{*(n-1)}_{T-s}f(z)\,dz\Bigr|\,dy.
\]
Inserting the Gaussian bound for \(q_{u-\tau}\) and the bound of Lemma A.8, and integrating out the directions tangent to \(\partial D\), we arrive at a one-dimensional bound \(VI_2'\) in which negative fractional powers of \(T-s\), \(s-u\), and \(u-\tau\) multiply the Gaussian factors \(e^{-\frac{(y-k)^2}{4M_1(u-\tau)}}\) and \(e^{-\frac{(y-k)^2}{4M_2(s-u)}}\), integrated over \(y\in(k,\infty)\). Note that the integrand in \(VI_2'\) is invariant if \(y\) is replaced with \(\theta(y)\), and therefore \(VI_2'\) dominates \(VI_3\) as well. Since \(M_1<M_2\), we have
\[
\int_k^\infty e^{-\frac{(y-k)^2}{4M_1(u-\tau)}}\, e^{-\frac{(y-k)^2}{4M_2(s-u)}}\,dy
\leq\int_k^\infty e^{-\frac{(s-\tau)(y-k)^2}{4M_2(u-\tau)(s-u)}}\,dy
\leq (4M_2\pi)^{\frac12}\,(u-\tau)^{\frac12}(s-u)^{\frac12}(s-\tau)^{-\frac12}.
\]
Hence \(VI_2'\) is dominated by \(C\,\|f\|_\infty(C\delta)^{n-1}(4M_2\pi)^{\frac d2}\) times negative fractional powers of \(T-s\), \(s-u\), and \(s-\tau\); denote this bound by \(VI_2''\).
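The domination step relying on \(M_1<M_2\) amounts to the pointwise inequality \(e^{-\frac{x^2}{4M_1a}}\,e^{-\frac{x^2}{4M_2b}}\leq e^{-\frac{(a+b)x^2}{4M_2ab}}\), which holds because \(\frac1{M_1a}+\frac1{M_2b}\geq\frac1{M_2}\bigl(\frac1a+\frac1b\bigr)=\frac{a+b}{M_2ab}\). A small illustrative check, with hypothetical function names and sample values of \(a=u-\tau\), \(b=s-u\):

```python
import math

def product_factor(x, M1, M2, a, b):
    # the product of the two Gaussian factors appearing in VI2'
    return math.exp(-x * x / (4.0 * M1 * a)) * math.exp(-x * x / (4.0 * M2 * b))

def dominating_factor(x, M2, a, b):
    # single dominating Gaussian, valid whenever M1 <= M2, since
    # 1/(M1·a) + 1/(M2·b) >= (a+b)/(M2·a·b)
    return math.exp(-x * x * (a + b) / (4.0 * M2 * a * b))

def dominating_integral(M2, a, b):
    # ∫_0^∞ dominating_factor dx = (1/2)·√(4πM2·ab/(a+b))
    return 0.5 * math.sqrt(4.0 * math.pi * M2 * a * b / (a + b))
```

This is the source of the \((u-\tau)^{1/2}(s-u)^{1/2}(s-\tau)^{-1/2}\) factor in the display above.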
Notice that both bounds \(VI_1'\) and \(VI_2''\) are integrable in \((u,s)\in\{(s,u):0\leq u\leq s\leq T\}\) on \(\{\tau<u\}\): integrating the negative fractional powers of \(s-u\), \(s-\tau\), and \(T-s\) first in \(s\) and then in \(u\) yields an expression built from the Beta functions \(B(\tfrac38,\tfrac98)\) and \(B(\tfrac38,\tfrac58)\) and nonnegative powers of \(T-\tau\), which belongs to \(L^\infty(\Omega)\). This completes the proof of (i).

Proof of (ii) and (iii): these statements can be proved by recalling some techniques used for the proof of Lemma 3.12. Let us first apply (53) (contained in the proof of Theorem 3.9) to the error of the hedging strategy with \(E\bigl[\pi^{\perp}S_{s-u}S^{*(n-1)}_{T-s}f(X_u)\mid\mathcal{F}_{t\wedge\tau}\bigr]\), splitting it into the terms \(VII^{h,+}_t\) and \(VII^{h,-}_t\) according to the indicators \(1_{\{\tau\leq t\}}\) and \(1_{\{t<\tau\}}\).
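The integrability argument for (i) ultimately rests on Euler's identity \(B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)\) for the particular values \(B(\tfrac38,\tfrac98)\) and \(B(\tfrac38,\tfrac58)\). The sketch below checks it numerically; the function names are ours, and the endpoint substitutions \(t=u^8\), \(1-t=v^8\) are chosen only to make the quadrature of the singular integrand feasible.

```python
import math

HALF_POINT = 0.5 ** 0.125  # u-value corresponding to t = 1/2 under t = u^8

def _half(p, q, n=50_000):
    # ∫_0^c 8·u^{8p-1}·(1-u^8)^{q-1} du by the composite trapezoid rule,
    # where c = (1/2)^{1/8}; the integrand vanishes at u = 0 when p > 1/8
    c = HALF_POINT
    h = c / n
    f = lambda u: 8.0 * u ** (8.0 * p - 1.0) * (1.0 - u ** 8) ** (q - 1.0)
    total = 0.5 * (f(0.0) + f(c))
    for i in range(1, n):
        total += f(i * h)
    return total * h

def beta_numeric(a, b):
    # B(a,b) = ∫_0^1 t^{a-1}(1-t)^{b-1} dt, split at t = 1/2 with the
    # substitutions t = u^8 (left half) and 1-t = v^8 (right half) to tame
    # the endpoint singularities (valid here since a, b > 1/8)
    return _half(a, b) + _half(b, a)

def beta_gamma(a, b):
    # Euler's identity B(a,b) = Γ(a)Γ(b)/Γ(a+b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)
```

Since both Beta values are finite, the double time-integral in the proof is bounded uniformly in \(\tau\), which is exactly the \(L^\infty(\Omega)\) assertion.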
\(VII^{h,+}_t\) is bounded by \(1_{\{\tau\leq t\}}\,C\,(C_6\delta)^{h-1}\), and we see that
\[
VII^{h,-}_t = 1_{\{t<\tau\}}\Bigl|\int_0^T\!\!\int_u^T E\bigl[\pi^{\perp}S_{s-u}S^{*(h-1)}_{T-s}f(X_u)\mid\mathcal{F}_t\bigr]\,ds\,du\Bigr|
= 1_{\{t<\tau\}}\Bigl|\int_0^T\!\!\int_u^T E\Bigl[E\bigl[\pi^{\perp}S_{s-u}S^{*(h-1)}_{T-s}f(X_u)\mid\mathcal{F}_\tau\bigr]\Bigm|\mathcal{F}_t\Bigr]\,ds\,du\Bigr|
\]
\[
\leq 1_{\{t<\tau\}}\,E\Bigl[\int_0^T\!\!\int_u^T\bigl|E\bigl[\pi^{\perp}S_{s-u}S^{*(h-1)}_{T-s}f(X_u)\mid\mathcal{F}_\tau\bigr]\bigr|\,ds\,du\Bigm|\mathcal{F}_t\Bigr]
\leq 1_{\{t<\tau\}}\,C\,(C_6\delta)^{h-1}.
\]
Hence we have
\[
\sum_{h=2}^{n}\Bigl(\sup_{t\in[0,T]}VII^{h,+}_t+\sup_{t\in[0,T]}VII^{h,-}_t\Bigr)\leq C\sum_{h=2}^{n}(C_6\delta)^{h-1},
\]
converging almost surely as \(n\to\infty\) when \(\delta<1/C_6\).
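The final bound is a geometric series, so its convergence for \(\delta<1/C_6\) can be illustrated directly. The toy check below uses an arbitrary value standing in for the product \(C_6\delta\); the function names are ours.

```python
def hedge_error_partial_sum(c6_delta, n):
    # Σ_{h=2}^{n} (C₆δ)^{h-1}: the bound on the accumulated hedging error
    # after n iterations of the static hedge
    return sum(c6_delta ** (h - 1) for h in range(2, n + 1))

def hedge_error_limit(c6_delta):
    # geometric limit as n → ∞, finite exactly when C₆δ < 1, i.e. δ < 1/C₆
    return c6_delta / (1.0 - c6_delta)
```

For \(C_6\delta\) close to 1 the limit blows up, which is why the smallness condition on \(\delta\) cannot be dispensed with in this construction.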