A class of copulae associated with Brownian motion processes and their maxima
Michel Adès, Matthieu Dufour, Serge B. Provost, Marie-Claude Vachon
April 23, 2020
Michel Adès*, Matthieu Dufour, Serge B. Provost, Marie-Claude Vachon

Département de Mathématiques, Université du Québec à Montréal, Québec, Canada
Department of Statistical and Actuarial Sciences, The University of Western Ontario, London, Canada
* Corresponding author: [email protected]

Abstract

The main objective of this paper consists in creating a new class of copulae from various joint distributions occurring in connection with certain Brownian motion processes. We focus our attention on the distributions of univariate Brownian motions having a drift parameter and their maxima, and on correlated bivariate Brownian motions, considering the maximum value of one of them. The copulae generated therefrom and their associated density functions are explicitly given as well as graphically represented.
Keywords: Brownian motion, copulas, correlated Brownian processes, dependence, two-dimensional Brownian motion.

This section first presents useful background information on Brownian motion (BM). Then, copulae are defined and relevant related results are provided.

In 1918, the mathematician Norbert Wiener gave a rigorous formulation of Brownian motion and established its existence, which explains why the alternative name, Wiener process, is also in use. BM is utilized in various fields of scientific investigation such as economics, biology, communications theory, business administration, and quantitative finance. For instance, as pointed out by Chuang (1994), distributional results for BM can also be utilized for pricing contingent claims with barriers on price processes; Cao (2017) made use of correlated Brownian motions to solve an optimal investment-reinsurance problem.

Let $\{W_t\}_{t\ge 0}$ and $W_T$ represent the standard BM process and its terminal value, $M_t = \max_{0\le s\le t} W_s$ and $M(s,t) = \max\{W_u,\, s\le u\le t\}$. We shall consider the joint distributions of

1. $W_t$ and its maximum $M_t$,
2. $W_T$ and $M_t$,
3. $W_T$ and $M(s,t)$,

which have previously been studied by Harrison (1985), Chuang (1996) and Lee (2003), among others.

Some further related results are available in the statistical literature. For example, representations of the joint density function of a BM process and its minimum and maximum, which are given for instance in Borodin and Salminen (2002), were shown to be convergent by Choi and Roh (2013). Upper and lower bounds for the distribution of the maximum of a two-parameter BM process were obtained by Cabaña and Wschebor (1982). Vardar-Acar et al. (2013) provided explicit expressions for the correlation between the supremum and the infimum of a BM with drift. Kou and Zhong (2016) studied the first-passage times of two-dimensional BM processes.
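The quantities just defined are easy to simulate, which is convenient for checking the closed-form results derived later. The following sketch (plain NumPy; the grid size and time points are arbitrary choices, and the discretized maximum is only an approximation of the continuous one) draws standard BM paths and records $W_T$, $M_t$ and $M(s,t)$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, paths = 1.0, 1000, 5000
dW = rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
W = np.cumsum(dW, axis=1)                    # W at times T/n, 2T/n, ..., T

i_s, i_t = n // 4, 3 * n // 4                # grid indices for s = T/4, t = 3T/4
W_T = W[:, -1]
M_t = np.maximum(W[:, :i_t + 1].max(axis=1), 0.0)   # max over [0, t]; M_0 = 0
M_st = W[:, i_s:i_t + 1].max(axis=1)                # max over [s, t]

# Each maximum dominates the terminal value of the sub-path it is taken over,
# and the terminal value is positively correlated with the running maximum.
print(np.all(M_t >= W[:, i_t]), np.all(M_st >= W[:, i_t]))
print(np.corrcoef(W_T, M_t)[0, 1] > 0)
```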
Haugh (2004) explained how to generate correlated Brownian motions and pointed out some applications involving security pricing and portfolio evaluation.

We now review some basic definitions and theorems in connection with copulae. Additional results are available from several authors, including Cherubini et al. (2004, 2012), Denuit et al. (2005), Joe (1997), Nelsen (2006), and Sklar (1959). The main idea behind copulae is that the joint distribution of two or more random variables can be expressed in terms of their marginal distributions and a certain correlation structure. Copulae enable one to separate the effect due to the dependence between the variables from the contribution of each of the marginal variables. We focus on the two-dimensional case in this paper. In this framework, a copula function is a bivariate distribution defined on the unit square $I^2 = [0,1]^2$ with uniformly distributed margins. Formally, we have:

Definition 1.1.
A function $C : I^2 \mapsto I$ is a bivariate copula if it satisfies the following properties:

1. For every $y, w \in I$, $C(y,1) = y$ and $C(1,w) = w$; $C(y,0) = C(0,w) = 0$.
2. For every $y_1, y_2, w_1, w_2 \in I$ such that $y_1 \le y_2$ and $w_1 \le w_2$,
$$C(y_2,w_2) - C(y_2,w_1) - C(y_1,w_2) + C(y_1,w_1) \ge 0,$$
that is, the $C$-measure of any box whose vertices lie in $I^2$ is nonnegative. In particular, the last inequality implies that $C(y,w)$ is increasing in both variables.

Copulae are useful for capturing the dependence structure of random distributions with arbitrary marginals. This statement is clarified by Sklar's theorem, which is now cited for the bivariate case.

Theorem 1.1. Let $F(x_1,x_2)$ be the joint cumulative distribution function of random variables $X_1$ and $X_2$ having continuous marginal distributions $F_1(x_1)$ and $F_2(x_2)$. Then, there exists a unique bivariate copula $C : I^2 \mapsto I$ such that
$$F(x_1,x_2) = C\big(F_1(x_1), F_2(x_2)\big), \qquad (1)$$
where $C(\cdot,\cdot)$ is a joint distribution function with uniform marginals. Conversely, for any continuous distribution functions $F_1(x_1)$ and $F_2(x_2)$ and any copula $C$, the function $F$ defined in equation (1) is a joint distribution function with marginal distributions $F_1$ and $F_2$.

Sklar's theorem provides a scheme for constructing copulae. Indeed, the function
$$C(u_1,u_2) = F\big(F_1^{-}(u_1), F_2^{-}(u_2)\big) \qquad (2)$$
is a bivariate copula, where the quasi-inverse $F_i^{-}$, $i = 1,2$, is defined by
$$F_i^{-}(u) = \inf\{x \mid F_i(x) \ge u\} \quad \forall\, u \in (0,1). \qquad (3)$$

Much of the usefulness of copulae follows from the fact that they are invariant with respect to strictly increasing transformations. More formally, let $X_1$ and $X_2$ be two continuous random variables with associated copula $C$. Now, letting $\alpha$ and $\beta$ be two strictly increasing functions and denoting by $C_{\alpha,\beta}$ the copula generated by $\alpha(X_1)$ and $\beta(X_2)$, it can be shown that for all $(u_1,u_2) \in I^2$,
$$C_{\alpha,\beta}(u_1,u_2) = C(u_1,u_2). \qquad (4)$$

Finally, let us denote by $c(\cdot,\cdot)$ the density function corresponding to the copula $C(\cdot,\cdot)$, that is,
$$c(u_1,u_2) = \frac{\partial^2}{\partial u_1\,\partial u_2}\, C(u_1,u_2).$$
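The construction in equations (2) and (3) can be illustrated numerically by taking for $F$ a bivariate normal distribution function, so that the quasi-inverses are normal quantiles. The sketch below (assuming SciPy is available; the correlation value 0.5 is an arbitrary choice) builds the resulting copula and checks the boundary conditions of Definition 1.1.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u1, u2, rho):
    """Equation (2): C(u1, u2) = F(F1^-(u1), F2^-(u2)), where F is here the
    bivariate standard normal CDF with correlation rho, so F1 = F2 = Phi."""
    x1, x2 = norm.ppf(u1), norm.ppf(u2)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([x1, x2])

# Uniform margins (Definition 1.1): C(u, 1) = u and C(1, w) = w.
print(round(gaussian_copula(0.3, 1 - 1e-12, 0.5), 4))   # ≈ 0.3
# Independence: with rho = 0 the copula reduces to C(u, w) = u * w.
print(round(gaussian_copula(0.4, 0.7, 0.0), 4))          # ≈ 0.28
```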
The following relationship between the joint density $f(\cdot,\cdot)$ and the copula density $c(\cdot,\cdot)$ can easily be obtained from equation (1):
$$f(x_1,x_2) = f_1(x_1)\, f_2(x_2)\, c\big(F_1(x_1), F_2(x_2)\big), \qquad (5)$$
where $f_1(x_1)$ and $f_2(x_2)$ respectively denote the marginal density functions of $X_1$ and $X_2$. Thus, the copula density function can be expressed as follows:
$$c(u_1,u_2) = \frac{f\big(F_1^{-}(u_1), F_2^{-}(u_2)\big)}{f_1\big(F_1^{-}(u_1)\big)\, f_2\big(F_2^{-}(u_2)\big)}. \qquad (6)$$

Jaworski and Krzywda (2013) and Bosc (2012) determined the copulae corresponding to certain correlated Brownian motions. Lagerås (2010) provides an explicit representation of the copula associated with Brownian motion processes that are reflected at 0 and 1. Several recent articles point out the usefulness of correlated Brownian motions and promote the use of copulae generated therefrom in connection with various applications. For instance, Chen et al. (2016) point out that correlated Brownian motions and their associated copulae can be utilized in the case of correlated assets occurring in risk management, pairs trading and multi-asset derivatives pricing. Deschatre (2016a,b) proposes to make use of asymmetric copulae generated from a Brownian motion and its reflection to model and control the distribution of their difference, with applications to the energy market and the pricing of spread options.

This paper, which is principally based on the thesis of Vachon (2008), is organized as follows. Several joint distributions related to certain BM processes and their maxima are derived in the second section. The copulae associated with these joint distributions are then constructed in the third section.

As previously defined, $\{W_t\}_{t\ge 0}$ shall denote a standard BM and $M_t = \max_{0\le s\le t} W_s$, its maximum on the interval $[0,t]$.
It is well known (see, for instance, Etheridge (2002), Harrison (1990), Karlin and Taylor (1975), Revuz and Yor (2005), Rogers and Williams (2000)) that the joint distribution of $(W_t, M_t)$ and the marginal distribution of $M_t$ are respectively given by
$$P\{M_t \le a,\, W_t \le x\} = \begin{cases} \Phi\big(\frac{x}{\sqrt t}\big) - \Phi\big(\frac{x-2a}{\sqrt t}\big) & \text{if } x \le a \\ 2\Phi\big(\frac{a}{\sqrt t}\big) - 1 & \text{if } x > a \end{cases} \quad \forall\, t \in \mathbb{R}^+ \qquad (7)$$
and
$$P\{M_t \le a\} = 2\Phi\Big(\frac{a}{\sqrt t}\Big) - 1 \quad \forall\, t \in \mathbb{R}^+, \qquad (8)$$
where $\Phi(\cdot)$ is the standard normal distribution function.

The first proposition of this section provides the joint distribution of $\{W_t^{(\mu,\sigma)}\}_{t\ge 0}$, a BM with drift $\mu$ and variance $\sigma^2$, and $M_t^{(\mu,\sigma)}$, its maximum over the interval $0 \le s \le t$. This section conveniently provides detailed proofs of the distributional results stated in the propositions, whereupon the corresponding copulae will be derived in the next section.

Proposition 2.1. (Harrison (1990))
$$P\{W_t^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\} = \begin{cases} \Phi\big(\frac{x-\mu t}{\sigma\sqrt t}\big) - e^{2\mu y/\sigma^2}\, \Phi\big(\frac{x-2y-\mu t}{\sigma\sqrt t}\big) & \text{if } x \le y \\ \Phi\big(\frac{y-\mu t}{\sigma\sqrt t}\big) - e^{2\mu y/\sigma^2}\, \Phi\big(\frac{-y-\mu t}{\sigma\sqrt t}\big) & \text{if } x > y. \end{cases}$$

Proof. We first consider the case where $x \le y$. In light of equation (7), we have
$$P\{W_t \in \mathrm{d}x,\, M_t \le y\} = \frac{1}{\sqrt t}\left(\phi\Big(\frac{x}{\sqrt t}\Big) - \phi\Big(\frac{x-2y}{\sqrt t}\Big)\right)\mathrm{d}x,$$
where $\phi(\cdot)$ denotes the standard normal density function. Now define
$$Q(A) = \int_A L_t(\omega)\,\mathrm{d}P(\omega), \quad A \in \mathcal{F}_t,$$
where $L_t = e^{\mu W_t - \mu^2 t/2}$ is the Radon–Nikodym derivative of $Q$ with respect to $P$ and $\mathcal{F}_t = \sigma(\{W_s,\, 0\le s\le t\})$, for all $t \in \mathbb{R}^+$, is the smallest $\sigma$-algebra generated by the BM up to time $t$. It follows from Girsanov's theorem that $\{W_t\}_{t\ge 0}$ is a BM with drift $\mu$ under the new measure $Q$.
Therefore,
$$\begin{aligned} Q\{W_t \le x,\, M_t \le y\} &= \int_{\{W_t \le x,\, M_t \le y\}} L_t(\omega)\,\mathrm{d}P(\omega) = E_P\big[\mathbb{1}_{\{W_t \le x,\, M_t \le y\}}\, L_t\big] \\ &= \int_{-\infty}^{x} \frac{e^{\mu z - \mu^2 t/2}}{\sqrt t}\left(\phi\Big(\frac{z}{\sqrt t}\Big) - \phi\Big(\frac{z-2y}{\sqrt t}\Big)\right)\mathrm{d}z \\ &= \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{(z-\mu t)^2}{2t}}\,\mathrm{d}z - e^{2\mu y}\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{(z-(2y+\mu t))^2}{2t}}\,\mathrm{d}z \\ &= \Phi\Big(\frac{x-\mu t}{\sqrt t}\Big) - e^{2\mu y}\, \Phi\Big(\frac{x-2y-\mu t}{\sqrt t}\Big). \qquad \square \end{aligned}$$

Remark. Note that the marginal distribution of $M_t^{(\mu,\sigma)}$, which is given by
$$P\{M_t^{(\mu,\sigma)} \le y\} = \Phi\Big(\frac{y-\mu t}{\sigma\sqrt t}\Big) - e^{2\mu y/\sigma^2}\, \Phi\Big(\frac{-y-\mu t}{\sigma\sqrt t}\Big), \qquad (9)$$
can easily be derived from Proposition 2.1 since, for $x > y$, $\{M_t^{(\mu,\sigma)} \le y\} \subset \{W_t^{(\mu,\sigma)} \le x\}$.

One can generalize these last results by making use of the following properties of the multivariate normal distribution:
$$\Phi_2(z_1, z_2; \rho) = \Phi_2(z_2, z_1; \rho), \qquad (10)$$
$$\Phi(z_1) - \Phi_2(z_1, z_2; \rho) = \Phi_2(z_1, -z_2; -\rho), \qquad (11)$$
$$\Phi_3(z_1, z_2, z_3; \rho_{12}, \rho_{13}, \rho_{23}) = \Phi_3(z_2, z_1, z_3; \rho_{12}, \rho_{23}, \rho_{13}) = \Phi_3(z_3, z_2, z_1; \rho_{23}, \rho_{13}, \rho_{12}), \qquad (12)$$
$$\Phi_2(z_2, z_3; \rho_{23}) - \Phi_3(z_1, z_2, z_3; \rho_{12}, \rho_{13}, \rho_{23}) = \Phi_3(-z_1, z_2, z_3; -\rho_{12}, -\rho_{13}, \rho_{23}). \qquad (13)$$

Lemma 2.1.
Let $z_1, z_2, z_3$ be real constants and $0 \le \rho \le 1$. If $z_3 = -\rho z_1 + \sqrt{1-\rho^2}\, z_2$, then
$$\Phi_2(z_1, z_3; -\rho) + \Phi_2(-z_3, z_2; -\sqrt{1-\rho^2}) = \Phi(z_1)\,\Phi(z_2) \qquad (14)$$
and
$$\Phi_2(z_1, z_3; -\rho) + \Phi(-z_1)\,\Phi(z_2) = \Phi_2(z_2, z_3; \sqrt{1-\rho^2}). \qquad (15)$$

Proof.
Let $Z_1$ and $Z_2$ be two independent standard normal random variables, and $Z_3$ a random variable defined by $Z_3 = -\rho Z_1 + \sqrt{1-\rho^2}\, Z_2$. Note that $Z_3$ also has a standard normal distribution, and that the random vectors $(Z_1, Z_3)$ and $(-Z_3, Z_2)$ have bivariate normal distributions with correlation coefficients given by $-\rho$ and $-\sqrt{1-\rho^2}$, respectively. Then,
$$\begin{aligned} \Phi_2(z_1, z_3; -\rho) + \Phi_2(-z_3, z_2; -\sqrt{1-\rho^2}) &= P\{Z_1 \le z_1,\, Z_3 \le z_3\} + P\{-Z_3 \le -z_3,\, Z_2 \le z_2\} \\ &= P\{Z_1 \le z_1,\, Z_2 \le z_2,\, Z_3 \le z_3\} + P\{Z_1 \le z_1,\, Z_2 \ge z_2,\, Z_3 \le z_3\} \\ &\quad + P\{Z_1 \le z_1,\, Z_2 \le z_2,\, Z_3 \ge z_3\} + P\{Z_1 \ge z_1,\, Z_2 \le z_2,\, Z_3 \ge z_3\}. \end{aligned}$$
We now replace $Z_3$ by $-\rho Z_1 + \sqrt{1-\rho^2}\, Z_2$ and $z_3$ by $-\rho z_1 + \sqrt{1-\rho^2}\, z_2$. Since the events $\{Z_1 \le z_1,\, Z_2 \ge z_2,\, Z_3 \le z_3\}$ and $\{Z_1 \ge z_1,\, Z_2 \le z_2,\, Z_3 \ge z_3\}$ are clearly empty, we obtain
$$\Phi_2(z_1, z_3; -\rho) + \Phi_2(-z_3, z_2; -\sqrt{1-\rho^2}) = P\{Z_1 \le z_1,\, Z_2 \le z_2,\, Z_3 \le z_3\} + P\{Z_1 \le z_1,\, Z_2 \le z_2,\, Z_3 \ge z_3\} = P\{Z_1 \le z_1,\, Z_2 \le z_2\} = \Phi(z_1)\,\Phi(z_2).$$
It follows from equations (11) and (14) that
$$\begin{aligned} & \Phi_2(z_1, z_3; -\rho) + \Phi_2(-z_3, z_2; -\sqrt{1-\rho^2}) = \Phi(z_1)\,\Phi(z_2) \\ \Rightarrow\ & \Phi_2(z_1, z_3; -\rho) + \Phi(z_2) - \Phi_2(z_2, z_3; \sqrt{1-\rho^2}) = (1 - \Phi(-z_1))\,\Phi(z_2) \\ \Rightarrow\ & \Phi_2(z_1, z_3; -\rho) + \Phi(-z_1)\,\Phi(z_2) = \Phi_2(z_2, z_3; \sqrt{1-\rho^2}). \qquad \square \end{aligned}$$

The joint distributions that will be considered further involve integrals for which closed-form representations are given in the next proposition.

Proposition 2.2.
Let $a$, $h$, $\theta_i$, $i = 1,2,3$, $\delta_j$ and $\eta_j > 0$, $j = 0,1,2,3$, be constants, and let $R = [\rho_{ij}]_{i,j=1,2,3}$ be a correlation matrix. Then
$$\int_{-\infty}^{a} e^{hs}\, \Phi_3\Big(\frac{\delta_1+\theta_1 s}{\eta_1}, \frac{\delta_2+\theta_2 s}{\eta_2}, \frac{\delta_3+\theta_3 s}{\eta_3};\, R\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_4\Big(\frac{a-\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1}, \frac{\delta_2+\theta_2\delta_0^*}{\kappa_2}, \frac{\delta_3+\theta_3\delta_0^*}{\kappa_3};\, R^*\Big) \qquad (16)$$
and
$$\int_{a}^{+\infty} e^{hs}\, \Phi_3\Big(\frac{\delta_1+\theta_1 s}{\eta_1}, \frac{\delta_2+\theta_2 s}{\eta_2}, \frac{\delta_3+\theta_3 s}{\eta_3};\, R\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_4\Big(\frac{-a+\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1}, \frac{\delta_2+\theta_2\delta_0^*}{\kappa_2}, \frac{\delta_3+\theta_3\delta_0^*}{\kappa_3};\, R^{**}\Big), \qquad (17)$$
where $\delta_0^* = \delta_0 + h\eta_0^2$; $\kappa_i = \sqrt{\theta_i^2\eta_0^2 + \eta_i^2}$ for $i = 1,2,3$; $R^* = [\rho^*_{ij}]_{i,j=1,\ldots,4}$, with $\rho^*_{1,\,i+1} = -\theta_i\eta_0/\kappa_i$, $i = 1,2,3$, and $\rho^*_{i+1,\,j+1} = (\rho_{ij}\,\eta_i\eta_j + \theta_i\theta_j\eta_0^2)/(\kappa_i\kappa_j)$, $i \ne j$; and finally $R^{**} = [\rho^{**}_{ij}]_{i,j=1,\ldots,4}$, with $\rho^{**}_{1i} = -\rho^*_{1i}$, $i = 2,3,4$, and $\rho^{**}_{ij} = \rho^*_{ij}$, $i,j = 2,3,4$.

These results are established by making use of properties of the conditional multivariate normal distribution. Note that this proposition is related to a result appearing in Lee (2003), whose derivation relies on the Esscher transform.
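The structure of Proposition 2.2 can be checked numerically in its simplest configuration, namely with a single univariate $\Phi$ factor inside the integral, in which case a bivariate $\Phi_2$ appears on the right-hand side. The sketch below (SciPy assumed; the constants are arbitrary choices) compares direct quadrature with the corresponding closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, multivariate_normal

# Univariate special case of Proposition 2.2:
#   I = ∫_{-∞}^{a} e^{h s} Φ((d1 + th1*s)/e1) φ((s - d0)/e0) ds / e0
# equals
#   exp(h*d0 + h²e0²/2) Φ₂((a - d*)/e0, (d1 + th1*d*)/k1; -th1*e0/k1),
# with d* = d0 + h*e0² and k1 = sqrt(th1²e0² + e1²).
a, h, d0, d1, th1, e0, e1 = 0.5, 0.3, -0.2, 0.4, 0.7, 1.1, 0.9

lhs, _ = quad(lambda s: np.exp(h * s) * norm.cdf((d1 + th1 * s) / e1)
              * norm.pdf((s - d0) / e0) / e0, -np.inf, a)

dstar = d0 + h * e0**2
k1 = np.sqrt(th1**2 * e0**2 + e1**2)
rho = -th1 * e0 / k1
rhs = np.exp(h * d0 + h**2 * e0**2 / 2) * multivariate_normal(
    mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([(a - dstar) / e0,
                                                (d1 + th1 * dstar) / k1])
print(abs(lhs - rhs) < 1e-5)  # the quadrature matches the closed form
```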
Proof.
Let $X = (X_1, X_2, X_3, X_4)'$ be a normally distributed random vector such that $E[X_i] = \mu_i$, $\mathrm{Var}[X_i] = \sigma_i^2$, with correlation matrix $R^* = [\rho^*_{ij}]$, $i,j = 1,\ldots,4$. Then the conditional distribution of $(X_2, X_3, X_4)$ given $X_1 = x_1$ is a trivariate normal distribution (Anderson, 2003) with mean vector
$$\mu^{(1)} + \Sigma_{12}\Sigma_{22}^{-1}\big(x_1 - \mu_1\big) = \begin{pmatrix} \mu_2 + \frac{\sigma_2}{\sigma_1}\rho^*_{12}(x_1-\mu_1) \\ \mu_3 + \frac{\sigma_3}{\sigma_1}\rho^*_{13}(x_1-\mu_1) \\ \mu_4 + \frac{\sigma_4}{\sigma_1}\rho^*_{14}(x_1-\mu_1) \end{pmatrix}$$
and covariance matrix $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$, whose $(i,j)$th element is $\sigma_{i+1}\sigma_{j+1}\big(\rho^*_{i+1,\,j+1} - \rho^*_{1,\,i+1}\rho^*_{1,\,j+1}\big)$, $i,j = 1,2,3$. Accordingly,
$$\begin{aligned} \Phi_4\Big(\frac{x_1-\mu_1}{\sigma_1}, \ldots, \frac{x_4-\mu_4}{\sigma_4};\, R^*\Big) &= P\{X_1 \le x_1, \ldots, X_4 \le x_4\} = \int_{-\infty}^{x_1} P\{X_2 \le x_2,\, X_3 \le x_3,\, X_4 \le x_4 \mid X_1 = s\}\, P\{X_1 \in \mathrm{d}s\} \\ &= \int_{-\infty}^{x_1} \Phi_3\left(\frac{x_2 - \big(\mu_2 + \rho^*_{12}\frac{\sigma_2}{\sigma_1}(s-\mu_1)\big)}{\sigma_2\sqrt{1-(\rho^*_{12})^2}}, \ldots, \frac{x_4 - \big(\mu_4 + \rho^*_{14}\frac{\sigma_4}{\sigma_1}(s-\mu_1)\big)}{\sigma_4\sqrt{1-(\rho^*_{14})^2}};\, \tilde\rho_{23}, \tilde\rho_{24}, \tilde\rho_{34}\right) \phi\Big(\frac{s-\mu_1}{\sigma_1}\Big)\, \frac{\mathrm{d}s}{\sigma_1}, \qquad (18) \end{aligned}$$
where $\tilde\rho_{ij} = \big(\rho^*_{ij} - \rho^*_{1i}\rho^*_{1j}\big)\big/\big(\sqrt{1-(\rho^*_{1i})^2}\,\sqrt{1-(\rho^*_{1j})^2}\big)$.

Now letting $x_1 = a$, $x_{i+1} = \delta_i$, $\mu_1 = \delta_0$, $\mu_{i+1} = -\theta_i\delta_0$, $\sigma_1 = \eta_0$ and $\sigma_{i+1} = \kappa_i$ for $i = 1,2,3$, and replacing in equation (18) the elements of the matrix $R^*$ with their respective values, we obtain
$$\Phi_4\Big(\frac{a-\delta_0}{\eta_0}, \frac{\delta_1+\theta_1\delta_0}{\kappa_1}, \frac{\delta_2+\theta_2\delta_0}{\kappa_2}, \frac{\delta_3+\theta_3\delta_0}{\kappa_3};\, R^*\Big) = \int_{-\infty}^{a} \Phi_3\Big(\frac{\delta_1+\theta_1 s}{\eta_1}, \frac{\delta_2+\theta_2 s}{\eta_2}, \frac{\delta_3+\theta_3 s}{\eta_3};\, R\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0}.$$
This establishes equation (16) for $h = 0$. The case $h \ne 0$ follows from the last expression by completing the square in the exponent of $e^{hs}\,\phi\big(\frac{s-\delta_0}{\eta_0}\big)$, so that
$$e^{hs}\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big) = e^{h\delta_0 + h^2\eta_0^2/2}\, \phi\Big(\frac{s-\delta_0^*}{\eta_0}\Big).$$
Finally, the last result, that is, equation (17), is similarly obtained on noting that
$$\Phi_4\Big(\frac{-x_1+\mu_1}{\sigma_1}, \frac{x_2-\mu_2}{\sigma_2}, \frac{x_3-\mu_3}{\sigma_3}, \frac{x_4-\mu_4}{\sigma_4};\, R^{**}\Big) = P\{(-X_1) \le -x_1,\, X_2 \le x_2,\, X_3 \le x_3,\, X_4 \le x_4\} = P\{X_1 \ge x_1,\, X_2 \le x_2,\, X_3 \le x_3,\, X_4 \le x_4\}.$$
$\square$

Additionally, as $\delta_3 \to \infty$, we have
$$\int_{-\infty}^{a} e^{hs}\, \Phi_2\Big(\frac{\delta_1+\theta_1 s}{\eta_1}, \frac{\delta_2+\theta_2 s}{\eta_2};\, \rho_{12}\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_3\Big(\frac{a-\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1}, \frac{\delta_2+\theta_2\delta_0^*}{\kappa_2};\, \rho^*_{12}, \rho^*_{13}, \rho^*_{23}\Big) \qquad (19)$$
and
$$\int_{a}^{+\infty} e^{hs}\, \Phi_2\Big(\frac{\delta_1+\theta_1 s}{\eta_1}, \frac{\delta_2+\theta_2 s}{\eta_2};\, \rho_{12}\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_3\Big(\frac{-a+\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1}, \frac{\delta_2+\theta_2\delta_0^*}{\kappa_2};\, -\rho^*_{12}, -\rho^*_{13}, \rho^*_{23}\Big). \qquad (20)$$
Similarly, as $\delta_2 \to \infty$, it follows from equations (19) and (20) that
$$\int_{-\infty}^{a} e^{hs}\, \Phi\Big(\frac{\delta_1+\theta_1 s}{\eta_1}\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_2\Big(\frac{a-\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1};\, -\frac{\theta_1\eta_0}{\kappa_1}\Big) \qquad (21)$$
and
$$\int_{a}^{+\infty} e^{hs}\, \Phi\Big(\frac{\delta_1+\theta_1 s}{\eta_1}\Big)\, \phi\Big(\frac{s-\delta_0}{\eta_0}\Big)\, \frac{\mathrm{d}s}{\eta_0} = e^{h\delta_0 + h^2\eta_0^2/2}\, \Phi_2\Big(\frac{-a+\delta_0^*}{\eta_0}, \frac{\delta_1+\theta_1\delta_0^*}{\kappa_1};\, \frac{\theta_1\eta_0}{\kappa_1}\Big). \qquad (22)$$

These results enable one to establish the distribution of $\big(W_T^{(\mu,\sigma)}, M_t^{(\mu,\sigma)}\big)$ within the interval $0 < t \le T$, as specified in the next proposition.

Proposition 2.3. (Chuang (1996) and Lee (2003))
$$P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\} = \Phi_2\Big(\frac{x-\mu T}{\sigma\sqrt T}, \frac{y-\mu t}{\sigma\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_2\Big(\frac{x-2y-\mu T}{\sigma\sqrt T}, \frac{-y-\mu t}{\sigma\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big). \qquad (23)$$

Proof.
$$\begin{aligned} P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\} &= \int_{-\infty}^{+\infty} P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y \mid W_T^{(\mu,\sigma)} - W_t^{(\mu,\sigma)} = z\}\, P\{W_T^{(\mu,\sigma)} - W_t^{(\mu,\sigma)} \in \mathrm{d}z\} \\ &= \int_{-\infty}^{+\infty} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y \mid W_T^{(\mu,\sigma)} - W_t^{(\mu,\sigma)} = z\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} \\ &= \int_{-\infty}^{+\infty} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} \\ &= \int_{-\infty}^{x-y} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} \\ &\quad + \int_{x-y}^{+\infty} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}}. \qquad (24) \end{aligned}$$
By replacing in equation (24) the result of Proposition 2.1, we obtain, for the first part of the equation,
$$\int_{-\infty}^{x-y} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} = \Phi\Big(\frac{y-\mu t}{\sigma\sqrt t}\Big)\, \Phi\Big(\frac{x-y-\mu(T-t)}{\sigma\sqrt{T-t}}\Big) - e^{2\mu y/\sigma^2}\, \Phi\Big(\frac{-y-\mu t}{\sigma\sqrt t}\Big)\, \Phi\Big(\frac{x-y-\mu(T-t)}{\sigma\sqrt{T-t}}\Big); \qquad (25)$$
as for the second part,
$$\begin{aligned} \int_{x-y}^{+\infty} P\{W_t^{(\mu,\sigma)} \le x-z,\, M_t^{(\mu,\sigma)} \le y\}\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} &= \int_{x-y}^{+\infty} \Phi\Big(\frac{-z-(\mu t - x)}{\sigma\sqrt t}\Big)\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} \\ &\quad - e^{2\mu y/\sigma^2}\int_{x-y}^{+\infty} \Phi\Big(\frac{-z-(2y+\mu t - x)}{\sigma\sqrt t}\Big)\, \phi\Big(\frac{z-\mu(T-t)}{\sigma\sqrt{T-t}}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt{T-t}} \\ &= \Phi_2\Big(\frac{y-x+\mu(T-t)}{\sigma\sqrt{T-t}}, \frac{x-\mu T}{\sigma\sqrt T};\, -\sqrt{1-\tfrac{t}{T}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_2\Big(\frac{y-x+\mu(T-t)}{\sigma\sqrt{T-t}}, \frac{x-2y-\mu T}{\sigma\sqrt T};\, -\sqrt{1-\tfrac{t}{T}}\Big), \qquad (26) \end{aligned}$$
where the last equality follows from equation (22). On combining the last two results and applying Lemma 2.1, we obtain
$$P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\} = \Phi_2\Big(\frac{x-\mu T}{\sigma\sqrt T}, \frac{y-\mu t}{\sigma\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_2\Big(\frac{x-2y-\mu T}{\sigma\sqrt T}, \frac{-y-\mu t}{\sigma\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big). \qquad \square$$

Remark. As expected, when $t \to T$, $P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\} \to P\{W_T^{(\mu,\sigma)} \le x,\, M_T^{(\mu,\sigma)} \le y\}$.

Next, the joint distribution of $W_T^{(\mu,\sigma)}$ and $M^{(\mu,\sigma)}(s,t)$, where $M^{(\mu,\sigma)}(s,t) = \max_{s\le u\le t} W_u^{(\mu,\sigma)}$ and $0 < s < t \le T$, is considered.

Proposition 2.4.
(Lee (2003))
$$P\{W_T^{(\mu,\sigma)} \le x,\, M^{(\mu,\sigma)}(s,t) \le y\} = \Phi_3\Big(\frac{x-\mu T}{\sigma\sqrt T}, \frac{y-\mu t}{\sigma\sqrt t}, \frac{y-\mu s}{\sigma\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_3\Big(\frac{x-2y-\mu T}{\sigma\sqrt T}, \frac{-y-\mu t}{\sigma\sqrt t}, \frac{y+\mu s}{\sigma\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big) \qquad (27)$$
and
$$P\{M^{(\mu,\sigma)}(s,t) \le y\} = \Phi_2\Big(\frac{y-\mu t}{\sigma\sqrt t}, \frac{y-\mu s}{\sigma\sqrt s};\, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_2\Big(\frac{-y-\mu t}{\sigma\sqrt t}, \frac{y+\mu s}{\sigma\sqrt s};\, -\sqrt{\tfrac{s}{t}}\Big). \qquad (28)$$

Proof.
Let us first consider the joint distribution of a BM and its maximum on the interval $[s,t]$. In that case,
$$\begin{aligned} P\{W_T^{(\mu,\sigma)} \le x,\, M^{(\mu,\sigma)}(s,t) \le y\} &= \int_{-\infty}^{y} P\{W_T^{(\mu,\sigma)} \le x,\, M^{(\mu,\sigma)}(s,t) \le y \mid W_s^{(\mu,\sigma)} = z\}\, P\{W_s^{(\mu,\sigma)} \in \mathrm{d}z\} \\ &= \int_{-\infty}^{y} P\{W_T^{(\mu,\sigma)} - W_s^{(\mu,\sigma)} \le x-z,\, M^{(\mu,\sigma)}(s,t) - W_s^{(\mu,\sigma)} \le y-z \mid W_s^{(\mu,\sigma)} = z\}\, \phi\Big(\frac{z-\mu s}{\sigma\sqrt s}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt s} \\ &= \int_{-\infty}^{y} P\Big\{W_T^{(\mu,\sigma)} - W_s^{(\mu,\sigma)} \le x-z,\, \max_{s\le u\le t}\{W_u^{(\mu,\sigma)} - W_s^{(\mu,\sigma)}\} \le y-z\Big\}\, \phi\Big(\frac{z-\mu s}{\sigma\sqrt s}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt s} \\ &= \int_{-\infty}^{y} P\{W_{T-s}^{(\mu,\sigma)} \le x-z,\, M_{t-s}^{(\mu,\sigma)} \le y-z\}\, \phi\Big(\frac{z-\mu s}{\sigma\sqrt s}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt s}. \qquad (29) \end{aligned}$$
On applying the result of Proposition 2.3 to the first term in the integrand of (29), we obtain
$$\begin{aligned} P\{W_T^{(\mu,\sigma)} \le x,\, M^{(\mu,\sigma)}(s,t) \le y\} &= \int_{-\infty}^{y} \Phi_2\Big(\frac{x-z-\mu(T-s)}{\sigma\sqrt{T-s}}, \frac{y-z-\mu(t-s)}{\sigma\sqrt{t-s}};\, \sqrt{\tfrac{t-s}{T-s}}\Big)\, \phi\Big(\frac{z-\mu s}{\sigma\sqrt s}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt s} \\ &\quad - \int_{-\infty}^{y} e^{2\mu(y-z)/\sigma^2}\, \Phi_2\Big(\frac{x+z-2y-\mu(T-s)}{\sigma\sqrt{T-s}}, \frac{-y+z-\mu(t-s)}{\sigma\sqrt{t-s}};\, \sqrt{\tfrac{t-s}{T-s}}\Big)\, \phi\Big(\frac{z-\mu s}{\sigma\sqrt s}\Big)\, \frac{\mathrm{d}z}{\sigma\sqrt s} \\ &= \Phi_3\Big(\frac{x-\mu T}{\sigma\sqrt T}, \frac{y-\mu t}{\sigma\sqrt t}, \frac{y-\mu s}{\sigma\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y/\sigma^2}\, \Phi_3\Big(\frac{x-2y-\mu T}{\sigma\sqrt T}, \frac{-y-\mu t}{\sigma\sqrt t}, \frac{y+\mu s}{\sigma\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big), \end{aligned}$$
where the last equality follows from equation (19). Finally, we obtain result (28) by letting $x$ tend to $+\infty$ in equation (27). $\square$

Remark.
When $s \to 0$, $P\{W_T^{(\mu,\sigma)} \le x,\, M^{(\mu,\sigma)}(s,t) \le y\} \to P\{W_T^{(\mu,\sigma)} \le x,\, M_t^{(\mu,\sigma)} \le y\}$ and $P\{M^{(\mu,\sigma)}(s,t) \le y\} \to P\{M_t^{(\mu,\sigma)} \le y\}$.

Consider $W = (W_1, W_2)'$, a Brownian vector where $W_1$ and $W_2$ are two independent standard BM processes. On letting $\{B_{1t} = \sigma_1(\rho W_{1t} + \sqrt{1-\rho^2}\, W_{2t}) + \mu_1 t\}_{t\in\mathbb{R}^+}$ and $\{B_{2t} = \sigma_2 W_{1t} + \mu_2 t\}_{t\in\mathbb{R}^+}$, one can construct a correlated two-dimensional BM process. Then, it can easily be verified that $\{B_{it}\}_{t\in\mathbb{R}^+}$ is a $(\mu_i, \sigma_i)$-BM for $i = 1,2$, and that the correlation between $B_{1t}$ and $B_{2t}$ is equal to $\rho$. We say that $(B_1, B_2)'$ is a $(\boldsymbol{\mu}, \Sigma)$-BM with drift vector $\boldsymbol{\mu} = (\mu_1, \mu_2)'$ and covariance matrix
$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}.$$
The next proposition gives the joint distribution of $\big(B_{1T}, M_2(s,t)\big)$ for correlated BMs, where $M_2(s,t) = \max_{s\le u\le t} B_{2u}$ and $0 < s < t \le T$.

Proposition 2.5. (Lee (2004))
$$P\{B_{1T} \le x,\, M_2(s,t) \le y\} = \Phi_3\Big(\frac{x-\mu_1 T}{\sigma_1\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, \rho\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu_2 y/\sigma_2^2}\, \Phi_3\Big(\frac{x-2\rho\frac{\sigma_1}{\sigma_2}y-\mu_1 T}{\sigma_1\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, -\rho\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big). \qquad (30)$$

Proof.
Let $\{Z_t\}_{t\in\mathbb{R}^+}$ be a stochastic process defined by
$$Z_t = \frac{\sigma_2}{\sigma_1}\, B_{1t} - \rho\, B_{2t} \quad \forall\, t \in \mathbb{R}^+.$$
It follows from the construction of $B_1$ and $B_2$ that the process $Z$ is a BM independent of $B_2$, with drift and variance parameters given by $\big(\frac{\sigma_2}{\sigma_1}\mu_1 - \rho\mu_2\big)$ and $\sigma_2^2(1-\rho^2)$, respectively. Thus,
$$\begin{aligned} P\{B_{1T} \le x,\, M_2(s,t) \le y\} &= P\Big\{\frac{\sigma_1}{\sigma_2}\big(Z_T + \rho B_{2T}\big) \le x,\, M_2(s,t) \le y\Big\} \\ &= \int_{-\infty}^{+\infty} P\Big\{\rho B_{2T} \le \frac{\sigma_2}{\sigma_1}x - z,\, M_2(s,t) \le y \,\Big|\, Z_T = z\Big\}\, P\{Z_T \in \mathrm{d}z\} \\ &= \int_{-\infty}^{+\infty} P\Big\{\rho B_{2T} \le \frac{\sigma_2}{\sigma_1}x - z,\, M_2(s,t) \le y\Big\}\, \phi\Bigg(\frac{z - \big(\frac{\sigma_2}{\sigma_1}\mu_1 - \rho\mu_2\big)T}{\sigma_2\sqrt{(1-\rho^2)T}}\Bigg)\, \frac{\mathrm{d}z}{\sigma_2\sqrt{(1-\rho^2)T}}. \qquad (31) \end{aligned}$$
Define $z^* = \frac{\sigma_2}{\sigma_1}x - z$ and consider first the case where $\rho < 0$. The following probability has to be determined:
$$\begin{aligned} P\{\rho B_{2T} \le z^*,\, M_2(s,t) \le y\} &= P\Big\{B_{2T} \ge \frac{z^*}{\rho},\, M_2(s,t) \le y\Big\} = P\{M_2(s,t) \le y\} - P\Big\{B_{2T} \le \frac{z^*}{\rho},\, M_2(s,t) \le y\Big\} \\ &= \Big[\Phi_2\Big(\frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, \sqrt{\tfrac{s}{t}}\Big) - \Phi_3\Big(\frac{z^*/\rho-\mu_2 T}{\sigma_2\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big)\Big] \\ &\quad - e^{2\mu_2 y/\sigma_2^2}\Big[\Phi_2\Big(\frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, -\sqrt{\tfrac{s}{t}}\Big) - \Phi_3\Big(\frac{z^*/\rho-2y-\mu_2 T}{\sigma_2\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big)\Big], \end{aligned}$$
which, on applying equation (13), becomes
$$P\{\rho B_{2T} \le z^*,\, M_2(s,t) \le y\} = \Phi_3\Big(\frac{z^*-\rho\mu_2 T}{|\rho|\sigma_2\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, -\sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu_2 y/\sigma_2^2}\, \Phi_3\Big(\frac{z^*-\rho(2y+\mu_2 T)}{|\rho|\sigma_2\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, -\sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big). \qquad (32)$$
Similarly, when $\rho > 0$, we have
$$P\{\rho B_{2T} \le z^*,\, M_2(s,t) \le y\} = P\Big\{B_{2T} \le \frac{z^*}{\rho},\, M_2(s,t) \le y\Big\} = \Phi_3\Big(\frac{z^*-\rho\mu_2 T}{\rho\sigma_2\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu_2 y/\sigma_2^2}\, \Phi_3\Big(\frac{z^*-\rho(2y+\mu_2 T)}{\rho\sigma_2\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big). \qquad (33)$$
On combining equations (32) and (33), we obtain the probability formula
$$P\{\rho B_{2T} \le z^*,\, M_2(s,t) \le y\} = \Phi_3\Big(\frac{z^*-\rho\mu_2 T}{|\rho|\sigma_2\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, s(\rho)\sqrt{\tfrac{t}{T}}, s(\rho)\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu_2 y/\sigma_2^2}\, \Phi_3\Big(\frac{z^*-\rho(2y+\mu_2 T)}{|\rho|\sigma_2\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, s(\rho)\sqrt{\tfrac{t}{T}}, -s(\rho)\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big), \qquad (34)$$
where $s(\rho) = 1$ if $\rho > 0$ and $-1$ otherwise.

In light of equation (34), the result given in equation (31) can be written as follows:
$$\begin{aligned} P\{B_{1T} \le x,\, M_2(s,t) \le y\} &= \int_{-\infty}^{+\infty} \Phi_3\Big(\frac{z^*-\rho\mu_2 T}{|\rho|\sigma_2\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, s(\rho)\sqrt{\tfrac{t}{T}}, s(\rho)\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big)\, \phi\Bigg(\frac{z - \big(\frac{\sigma_2}{\sigma_1}\mu_1 - \rho\mu_2\big)T}{\sigma_2\sqrt{(1-\rho^2)T}}\Bigg)\, \frac{\mathrm{d}z}{\sigma_2\sqrt{(1-\rho^2)T}} \\ &\quad - e^{2\mu_2 y/\sigma_2^2}\int_{-\infty}^{+\infty} \Phi_3\Big(\frac{z^*-\rho(2y+\mu_2 T)}{|\rho|\sigma_2\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, s(\rho)\sqrt{\tfrac{t}{T}}, -s(\rho)\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big)\, \phi\Bigg(\frac{z - \big(\frac{\sigma_2}{\sigma_1}\mu_1 - \rho\mu_2\big)T}{\sigma_2\sqrt{(1-\rho^2)T}}\Bigg)\, \frac{\mathrm{d}z}{\sigma_2\sqrt{(1-\rho^2)T}} \\ &= \Phi_3\Big(\frac{x-\mu_1 T}{\sigma_1\sqrt T}, \frac{y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y-\mu_2 s}{\sigma_2\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, \rho\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu_2 y/\sigma_2^2}\, \Phi_3\Big(\frac{x-2\rho\frac{\sigma_1}{\sigma_2}y-\mu_1 T}{\sigma_1\sqrt T}, \frac{-y-\mu_2 t}{\sigma_2\sqrt t}, \frac{y+\mu_2 s}{\sigma_2\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, -\rho\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big), \end{aligned}$$
where the last equality follows from Proposition 2.2 on letting $a$ tend to $+\infty$. $\square$

Remark. When $\rho \to 0$, $P\{B_{1T} \le x,\, M_2(s,t) \le y\} \to P\{B_{1T} \le x\}\, P\{M_2(s,t) \le y\}$. Additionally, when $\mu_1 = \mu_2$, $\sigma_1 = \sigma_2$ and $\rho = 1$,
$$P\{B_{1T} \le x,\, M_2(s,t) \le y\} = P\{W_T^{(\mu_1,\sigma_1)} \le x,\, M^{(\mu_1,\sigma_1)}(s,t) \le y\}.$$
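The closed form of Proposition 2.5 can be compared against a direct simulation of the correlated pair of processes. The sketch below (SciPy assumed; all parameter values and the grid size are arbitrary choices, and the discretized maximum introduces a small bias) evaluates equation (30) and a Monte Carlo estimate side by side.

```python
import numpy as np
from scipy.stats import multivariate_normal

def Phi3(z, r12, r13, r23):
    """Trivariate standard normal CDF with the given pairwise correlations."""
    cov = [[1, r12, r13], [r12, 1, r23], [r13, r23, 1]]
    return multivariate_normal(mean=[0, 0, 0], cov=cov).cdf(z)

def prop25(x, y, s, t, T, mu1, mu2, s1, s2, rho):
    """Closed form (30): P{B1_T <= x, M2(s,t) <= y}."""
    rt, rs, rst = np.sqrt(t / T), np.sqrt(s / T), np.sqrt(s / t)
    a = Phi3([(x - mu1 * T) / (s1 * np.sqrt(T)), (y - mu2 * t) / (s2 * np.sqrt(t)),
              (y - mu2 * s) / (s2 * np.sqrt(s))], rho * rt, rho * rs, rst)
    b = Phi3([(x - 2 * rho * (s1 / s2) * y - mu1 * T) / (s1 * np.sqrt(T)),
              (-y - mu2 * t) / (s2 * np.sqrt(t)), (y + mu2 * s) / (s2 * np.sqrt(s))],
             rho * rt, -rho * rs, -rst)
    return a - np.exp(2 * mu2 * y / s2**2) * b

# Monte Carlo on a correlated pair of discretized paths.
s, t, T, mu1, mu2, s1, s2, rho = 0.25, 0.75, 1.0, 0.1, -0.2, 1.0, 1.2, 0.5
rng = np.random.default_rng(1)
n, paths = 500, 8000
dW1 = rng.normal(size=(paths, n)) * np.sqrt(T / n)
dW2 = rng.normal(size=(paths, n)) * np.sqrt(T / n)
B1 = np.cumsum(s1 * (rho * dW1 + np.sqrt(1 - rho**2) * dW2) + mu1 * T / n, axis=1)
B2 = np.cumsum(s2 * dW1 + mu2 * T / n, axis=1)
i0, i1 = int(s / T * n), int(t / T * n)
M2 = B2[:, i0:i1 + 1].max(axis=1)
x, y = 0.4, 0.9
mc = np.mean((B1[:, -1] <= x) & (M2 <= y))
print(round(prop25(x, y, s, t, T, mu1, mu2, s1, s2, rho), 2), round(mc, 2))
```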
In this section, several bivariate copulae are constructed from the joint distribution functions specified in the previous section. In light of the invariance properties of copulae, we consider a BM with $\sigma = 1$, since a $(\mu,\sigma)$-BM can be derived from a $\big(\frac{\mu}{\sigma}, 1\big)$-BM via a simple transformation (rescaling). As well, it follows from equations (7) and (8) that
$$F_{M_t}(a) = P\{M_t \le a\} = 2\Phi\Big(\frac{a}{\sqrt t}\Big) - 1,$$
$$F_{W_t, M_t}(x, a) = P\{W_t \le x,\, M_t \le a\} = \begin{cases} \Phi\big(\frac{x}{\sqrt t}\big) - \Phi\big(\frac{x-2a}{\sqrt t}\big) & \text{if } x \le a \\ 2\Phi\big(\frac{a}{\sqrt t}\big) - 1 & \text{if } x > a. \end{cases}$$
Let
$$F_{W_t}(x) = P\{W_t \le x\} = \Phi\Big(\frac{x}{\sqrt t}\Big)$$
be the marginal distribution of a standard BM. It follows from equation (2) that the copula $C_{W_t,M_t}(u,v)$ generated by a BM and its maximum is
$$C_{W_t,M_t}(u,v) = \begin{cases} u - \Phi\Big(\Phi^{-1}(u) - 2\Phi^{-1}\big(\frac{v+1}{2}\big)\Big) & \text{if } u \le \frac{v+1}{2} \\ v & \text{if } u > \frac{v+1}{2}, \end{cases} \qquad (35)$$
its associated density function being
$$c_{W_t,M_t}(u,v) = \frac{\partial^2}{\partial u\,\partial v}\, C_{W_t,M_t}(u,v) = \frac{\big[2\Phi^{-1}\big(\frac{v+1}{2}\big) - \Phi^{-1}(u)\big]\, \phi\big(2\Phi^{-1}\big(\frac{v+1}{2}\big) - \Phi^{-1}(u)\big)}{\phi\big(\Phi^{-1}\big(\frac{v+1}{2}\big)\big)\, \phi\big(\Phi^{-1}(u)\big)} \qquad (36)$$
whenever $u \le \frac{v+1}{2}$, and zero otherwise. This density is plotted in Figure 1.

Figure 1: Density of the copula generated by $W_t$ and $M_t$.

Since the copulae discussed in this paper involve variables that are not even interchangeable, they do not belong to the Archimedean class of copulae. Moreover, they clearly do not belong to the class of Gaussian copulae.
They actually constitute a new type of copulae whose distributions conglomerate in the neighborhood of the point $(1,1)$ and, to a lesser extent, near the origin, the corresponding copula density functions being equal to zero beyond a certain threshold that is specified by a relationship between the variables.

Let
$$F_{W_t,M_t}(x,y;\mu) = P\{W_t^{(\mu,1)} \le x,\, M_t^{(\mu,1)} \le y\} = \begin{cases} \Phi\big(\frac{x-\mu t}{\sqrt t}\big) - e^{2\mu y}\, \Phi\big(\frac{x-2y-\mu t}{\sqrt t}\big) & \text{if } x \le y \\ \Phi\big(\frac{y-\mu t}{\sqrt t}\big) - e^{2\mu y}\, \Phi\big(\frac{-y-\mu t}{\sqrt t}\big) & \text{if } x > y \end{cases}$$
and
$$F_{M_t}(y;\mu) = P\{M_t^{(\mu,1)} \le y\} = \Phi\Big(\frac{y-\mu t}{\sqrt t}\Big) - e^{2\mu y}\, \Phi\Big(\frac{-y-\mu t}{\sqrt t}\Big),$$
which are the distribution functions obtained in Proposition 2.1 and equation (9). Let
$$F_{W_t}(x;\mu) = P\{W_t^{(\mu,1)} \le x\} = \Phi\Big(\frac{x-\mu t}{\sqrt t}\Big)$$
be the distribution function of a $(\mu,1)$-BM. For $y > 0$, the density function of $M_t^{(\mu,1)}$ is
$$f_{M_t}(y;\mu) = \frac{1}{\sqrt t}\, \phi\Big(\frac{y-\mu t}{\sqrt t}\Big) - e^{2\mu y}\Big[2\mu\, \Phi\Big(\frac{-y-\mu t}{\sqrt t}\Big) - \frac{1}{\sqrt t}\, \phi\Big(\frac{-y-\mu t}{\sqrt t}\Big)\Big].$$
Therefore, the copula $C_{W_t,M_t}(u,v;\mu)$ generated by $W_t^{(\mu,1)}$ and $M_t^{(\mu,1)}$ is
$$C_{W_t,M_t}(u,v;\mu) = \begin{cases} u - e^{2\mu\zeta(v)}\, \Phi\Big(\Phi^{-1}(u) - \frac{2\zeta(v)}{\sqrt t}\Big) & \text{if } u \le \Phi\big(\frac{\zeta(v)-\mu t}{\sqrt t}\big) \\ v & \text{if } u > \Phi\big(\frac{\zeta(v)-\mu t}{\sqrt t}\big), \end{cases} \qquad (37)$$
and the corresponding density $c_{W_t,M_t}(u,v;\mu)$ is
$$c_{W_t,M_t}(u,v;\mu) = \frac{2\, e^{2\mu\zeta(v)}\, \phi\big(\Phi^{-1}(u) - \frac{2\zeta(v)}{\sqrt t}\big)}{f_{M_t}(\zeta(v);\mu)\, \phi\big(\Phi^{-1}(u)\big)}\, \Big[\frac{1}{\sqrt t}\Big(\frac{2\zeta(v)}{\sqrt t} - \Phi^{-1}(u)\Big) - \mu\Big] \qquad (38)$$
if $u \le \Phi\big(\frac{\zeta(v)-\mu t}{\sqrt t}\big)$, and zero otherwise, where $\zeta(v) = F_{M_t}^{-}(v;\mu)$.

This density function appears in Figure 2 for increasing values of $\mu$ (a negative value of $\mu$, $\mu = 0$ and $\mu = 10$, respectively). Clearly, the strength of the dependence increases with $\mu$; additionally, as $\mu \to 0$, $C_{W_t,M_t}(u,v;\mu) \to C_{W_t,M_t}(u,v)$.

Figure 2: Density functions of the copulae generated by $W_t^{(\mu,\sigma)}$ and $M_t^{(\mu,\sigma)}$ for increasing values of $\mu$.
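Both equations (35) and (37) lend themselves to a quick numerical validation: pushing simulated pairs through their marginal distribution functions should reproduce the copula. The sketch below (SciPy assumed; $\mu$, $t$ and the grid size are arbitrary choices, with the quasi-inverse computed by root-finding) implements the two copulae, checks the collapse of (37) to (35) as $\mu \to 0$, and compares (37) with a Monte Carlo estimate.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

t = 1.0

def C35(u, v):
    """Copula (35): standard BM and its maximum (the mu = 0 case)."""
    if u > (v + 1) / 2:
        return v
    return u - norm.cdf(norm.ppf(u) - 2 * norm.ppf((v + 1) / 2))

def F_M(y, mu):
    """Equation (9) with sigma = 1: CDF of the maximum of a drifted BM."""
    return (norm.cdf((y - mu * t) / np.sqrt(t))
            - np.exp(2 * mu * y) * norm.cdf((-y - mu * t) / np.sqrt(t)))

def C37(u, v, mu):
    """Copula (37); zeta(v) is the quasi-inverse of F_M, found numerically."""
    zeta = brentq(lambda y: F_M(y, mu) - v, 1e-12, 60.0)
    if u > norm.cdf((zeta - mu * t) / np.sqrt(t)):
        return v
    return u - np.exp(2 * mu * zeta) * norm.cdf(norm.ppf(u) - 2 * zeta / np.sqrt(t))

# (37) collapses to (35) as mu -> 0:
print(abs(C37(0.5, 0.7, 1e-9) - C35(0.5, 0.7)) < 1e-6)  # True

# Monte Carlo check of (37) for mu = 1 (the grid bias is small).
mu, n, paths = 1.0, 600, 10000
rng = np.random.default_rng(3)
W = np.cumsum(rng.normal(mu * t / n, np.sqrt(t / n), size=(paths, n)), axis=1)
M = np.maximum(W.max(axis=1), 0.0)
u, v = 0.5, 0.7
mc = np.mean((norm.cdf((W[:, -1] - mu * t) / np.sqrt(t)) <= u) & (F_M(M, mu) <= v))
print(round(C37(u, v, mu), 2), round(mc, 2))  # close agreement
```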
We know from Propositions 2.3 and 2.4 that
$$F_{W_T,M_t}(x,y;\mu) = P\{W_T^{(\mu,1)} \le x,\, M_t^{(\mu,1)} \le y\} = \Phi_2\Big(\frac{x-\mu T}{\sqrt T}, \frac{y-\mu t}{\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big) - e^{2\mu y}\, \Phi_2\Big(\frac{x-2y-\mu T}{\sqrt T}, \frac{-y-\mu t}{\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big),$$
$$F_{W_T,M(s,t)}(x,y;\mu) = P\{W_T^{(\mu,1)} \le x,\, M^{(\mu,1)}(s,t) \le y\} = \Phi_3\Big(\frac{x-\mu T}{\sqrt T}, \frac{y-\mu t}{\sqrt t}, \frac{y-\mu s}{\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y}\, \Phi_3\Big(\frac{x-2y-\mu T}{\sqrt T}, \frac{-y-\mu t}{\sqrt t}, \frac{y+\mu s}{\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big),$$
and
$$F_{M(s,t)}(y;\mu) = P\{M^{(\mu,1)}(s,t) \le y\} = \Phi_2\Big(\frac{y-\mu t}{\sqrt t}, \frac{y-\mu s}{\sqrt s};\, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y}\, \Phi_2\Big(\frac{-y-\mu t}{\sqrt t}, \frac{y+\mu s}{\sqrt s};\, -\sqrt{\tfrac{s}{t}}\Big).$$
The copula $C_{W_T,M_t}(u,v;\mu)$ (resp. $C_{W_T,M(s,t)}(u,v;\mu)$) describes the dependence structure induced by $W_T^{(\mu,1)}$ and its maximum value on the time interval $[0,t]$ (resp. $[s,t]$). Invoking (2), we obtain
$$C_{W_T,M_t}(u,v;\mu) = \Phi_2\Big(\Phi^{-1}(u), \frac{\zeta_1(v)-\mu t}{\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big) - e^{2\mu\zeta_1(v)}\, \Phi_2\Big(\Phi^{-1}(u) - \frac{2\zeta_1(v)}{\sqrt T}, \frac{-\zeta_1(v)-\mu t}{\sqrt t};\, \sqrt{\tfrac{t}{T}}\Big) \qquad (39)$$
and
$$C_{W_T,M(s,t)}(u,v;\mu) = \Phi_3\Big(\Phi^{-1}(u), \frac{\zeta_2(v)-\mu t}{\sqrt t}, \frac{\zeta_2(v)-\mu s}{\sqrt s};\, \sqrt{\tfrac{t}{T}}, \sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu\zeta_2(v)}\, \Phi_3\Big(\Phi^{-1}(u) - \frac{2\zeta_2(v)}{\sqrt T}, \frac{-\zeta_2(v)-\mu t}{\sqrt t}, \frac{\zeta_2(v)+\mu s}{\sqrt s};\, \sqrt{\tfrac{t}{T}}, -\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big), \qquad (40)$$
where $\zeta_1(v) = F_{M_t}^{-}(v;\mu)$ and $\zeta_2(v) = F_{M(s,t)}^{-}(v;\mu)$.

Finally, consider $(B_1, B_2)$, a $(\boldsymbol{\mu}^*, \Sigma)$-BM where $\boldsymbol{\mu}^* = (0, \mu)'$ and
$$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$
The first BM has a zero drift because of the invariance property of copulae.
Hence, from Proposition 2.5, we have
$$F_{B_{1T}, M_2(s,t)}(x,y;\mu,\rho) = P\{B_{1T} \le x,\, M_2(s,t) \le y\} = \Phi_3\Big(\frac{x}{\sqrt T}, \frac{y-\mu t}{\sqrt t}, \frac{y-\mu s}{\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, \rho\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu y}\, \Phi_3\Big(\frac{x-2\rho y}{\sqrt T}, \frac{-y-\mu t}{\sqrt t}, \frac{y+\mu s}{\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, -\rho\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big).$$
Let us now denote by
$$F_{B_{1T}}(x) = \Phi\Big(\frac{x}{\sqrt T}\Big)$$
the distribution function of the first BM and by $F_{M_2(s,t)}(y;\mu)$ the distribution function of $M_2(s,t)$, where $F_{M_2(s,t)}(y;\mu) = F_{M(s,t)}(y;\mu)$ for all $y > 0$. The bivariate copula $C_{B_{1T}, M_2(s,t)}(u,v;\mu,\rho)$ generated by $B_{1T}$ and $M_2(s,t)$ is then defined by
$$C_{B_{1T}, M_2(s,t)}(u,v;\mu,\rho) = \Phi_3\Big(\Phi^{-1}(u), \frac{\zeta_2(v)-\mu t}{\sqrt t}, \frac{\zeta_2(v)-\mu s}{\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, \rho\sqrt{\tfrac{s}{T}}, \sqrt{\tfrac{s}{t}}\Big) - e^{2\mu\zeta_2(v)}\, \Phi_3\Big(\Phi^{-1}(u) - \frac{2\rho\zeta_2(v)}{\sqrt T}, \frac{-\zeta_2(v)-\mu t}{\sqrt t}, \frac{\zeta_2(v)+\mu s}{\sqrt s};\, \rho\sqrt{\tfrac{t}{T}}, -\rho\sqrt{\tfrac{s}{T}}, -\sqrt{\tfrac{s}{t}}\Big), \qquad (41)$$
where $\zeta_2(v) = F_{M_2(s,t)}^{-}(v;\mu)$. This copula contains all the copulae considered in this section. Indeed, when $\rho$ tends to $1$, the copula specified by equation (41) converges to that generated by a BM with drift $\mu$ at time $T$ and its own maximum on the interval $[s,t]$, which is given in equation (40). From this result, we obtain the copula given in equation (39) by letting $s$ tend to $0$. Finally, as $\rho = 1$, $s \to 0$ and $t \to T$, the copula specified by equation (41) converges to that given in (37).

The copula $C_{B_{1T}, M_2(s,t)}(u,v;\mu,\rho)$ is plotted in Figure 3 for a negative, a null and a positive value of $\rho$.

Figure 3: Copulae generated by $B_{1T}$ and $M_2(s,t)$ for increasing values of $\rho$.

Note that when $\rho = 0$, $C_{B_{1T}, M_2(s,t)}(u,v;\mu,\rho) = C_I(u,v)$, where $C_I$ is the independence copula defined by $C_I(u,v) = uv$ for all $(u,v) \in I^2$.

The proposed copulae are applicable to certain bivariate data sets for which one of the variables involves maxima. Such observations occur, for instance, in hydrology, meteorology and financial modeling.
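As a final check, equation (41) can be implemented directly. The sketch below (SciPy assumed; the parameter values are arbitrary choices, and the quasi-inverse of $F_{M(s,t)}$ is computed by root-finding) verifies the reduction to the independence copula at $\rho = 0$ and the strengthening of the dependence for positive $\rho$.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

mu, s, t, T = 0.5, 0.25, 0.75, 1.0

def Phi2(z, r):
    return multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]]).cdf(z)

def Phi3(z, r12, r13, r23):
    cov = [[1, r12, r13], [r12, 1, r23], [r13, r23, 1]]
    return multivariate_normal(mean=[0, 0, 0], cov=cov).cdf(z)

def F_Mst(y):
    """Equation (28) with sigma = 1: CDF of M(s,t) for a drifted BM."""
    return (Phi2([(y - mu * t) / np.sqrt(t), (y - mu * s) / np.sqrt(s)], np.sqrt(s / t))
            - np.exp(2 * mu * y) * Phi2([(-y - mu * t) / np.sqrt(t),
                                         (y + mu * s) / np.sqrt(s)], -np.sqrt(s / t)))

def C41(u, v, rho):
    """Copula (41) generated by B1_T and M2(s,t)."""
    z = brentq(lambda y: F_Mst(y) - v, -20.0, 60.0)
    rt, rs, rst = np.sqrt(t / T), np.sqrt(s / T), np.sqrt(s / t)
    a = Phi3([norm.ppf(u), (z - mu * t) / np.sqrt(t), (z - mu * s) / np.sqrt(s)],
             rho * rt, rho * rs, rst)
    b = Phi3([norm.ppf(u) - 2 * rho * z / np.sqrt(T), (-z - mu * t) / np.sqrt(t),
              (z + mu * s) / np.sqrt(s)], rho * rt, -rho * rs, -rst)
    return a - np.exp(2 * mu * z) * b

# rho = 0 yields the independence copula C(u,v) = u*v, and a positive rho
# pushes probability mass toward the corner (1,1).
print(abs(C41(0.4, 0.6, 0.0) - 0.24) < 1e-3)        # True
print(C41(0.4, 0.6, 0.8) > C41(0.4, 0.6, 0.0))      # True
```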
Acknowledgements
The financial support of the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged by the first and third authors. Thanks are also due to Arthur Charpentier, Jean-François Plante and Bruno Rémillard for their comments on an initial draft of the paper.
ORCID
Serge B. Provost: 0000-0002-2024-0103
References

[1] Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, Third Edition. John Wiley & Sons, New York.
[2] Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion – Facts and Formulae, Second Edition. Birkhäuser, Basel.
[3] Bosc, D. (2012). Three essays on modeling the dependence between financial assets. Thesis, École Polytechnique X. https://pastel.archives-ouvertes.fr/pastel-00721674
[4] Cabaña, E. M. and Wschebor, M. (1982). The two-parameter Brownian bridge: Kolmogorov inequalities and upper and lower bounds for the distribution of the maximum. The Annals of Probability, 10, 289–302.
[5] Cao, Y. (2017). Optimal investment-reinsurance problem for an insurer with jump-diffusion risk process: correlated Brownian motions. Journal of Interdisciplinary Mathematics, 2, 497–511.
[6] Chen, T., Cheng, X. and Yang, J. (2019). Common decomposition of correlated Brownian motions and its financial applications. Quantitative Finance – Financial Mathematics, arXiv:1907.03295 [q-fin.MF], 47 pages.
[7] Cherubini, U., Gobbi, F., Mulinacci, S. and Romagnoli, S. (2012). Dynamic Copula Methods in Finance. John Wiley & Sons, New York.
[8] Cherubini, U., Luciano, E. and Vecchiato, W. (2004). Copula Methods in Finance. John Wiley & Sons, New York.
[9] Choi, B.-S. and Roh, J.-H. (2013). On the trivariate joint distribution of Brownian motion and its maximum and minimum. Statistics & Probability Letters, 83, 1046–1053.
[10] Chuang, C. S. (1996). Joint distribution of Brownian motion and its maximum, with a generalization to correlated Brownian motions and applications to barrier options. Statistics & Probability Letters, 28, 81–90.
[11] Denuit, M., Dhaene, J., Goovaerts, M. and Kaas, R. (2005). Actuarial Theory for Dependent Risks: Measures, Orders and Models. John Wiley & Sons, New York.
[12] Deschatre, T. (2016a). On the control of the difference between two Brownian motions: a dynamic copula approach. Dependence Modeling, 4, 141–160.
[13] Deschatre, T. (2016b). On the control of the difference between two Brownian motions: an application to energy market modeling. Dependence Modeling, 4, 161–183.
[14] Etheridge, A. (2002). A Course in Financial Calculus. Cambridge University Press, Cambridge.
[15] Harrison, M. (1990). Brownian Motion and Stochastic Flow Systems. Robert E. Krieger, Malabar, FL.
[16] Haugh, M. (2004). The Monte Carlo framework, examples from finance and generating correlated random variables. Monte Carlo Simulation: IEOR E4703, 1–10.
[17] Jaworski, P. and Krzywda, M. (2013). Coupling of Wiener processes by using copulas. Statistics & Probability Letters, 83(9), 2027–2033.
[18] Joe, H. (2001). Multivariate Models and Dependence Concepts. Chapman & Hall/CRC, Boca Raton, FL.
[19] Karlin, S. and Taylor, H. M. (1975). A First Course in Stochastic Processes, Second Edition. Academic Press, New York.
[20] Kou, S. and Zhong, H. (2016). First passage times of two-dimensional Brownian motion. Advances in Applied Probability, 48, 1045–1060.
[21] Lagerås, A. N. (2010). Copulas for Markovian dependence. Bernoulli, 16, 331–342.
[22] Lee, H. (2003). Pricing equity-indexed annuities with path-dependent options. Insurance: Mathematics and Economics, 33, 677–690.
[23] Lee, H. (2004). A joint distribution of two-dimensional Brownian motion with an application to an outside barrier option. Journal of the Korean Statistical Society, 33, 245–254.
[24] Nelsen, R. B. (2006). An Introduction to Copulas, Second Edition. Springer, New York.
[25] Revuz, D. and Yor, M. (2005). Continuous Martingales and Brownian Motion, Third Edition. Springer-Verlag, New York.
[26] Rogers, L. C. G. and Williams, D. (2000). Diffusions, Markov Processes and Martingales, Vol. 1: Foundations, Second Edition. Cambridge University Press, Cambridge.
[27] Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8, 229–231.
[28] Vachon, M.-C. (2008). Mouvement Brownien et Copules. Mémoire de maîtrise, Université du Québec à Montréal. https://archipel.uqam.ca/12790/
[29] Vardar-Acar, C., Zirbel, C. L. and Székely, G. (2013). On the correlation of the supremum and the infimum and of maximum gain and maximum loss of Brownian motion with drift.