Pricing Bitcoin Derivatives under Jump-Diffusion Models
PABLO OLIVARES, RYERSON UNIVERSITY
Abstract.
In recent years cryptocurrency trading has captured the attention of practitioners and academics. The volume of exchange with standard currencies has grown dramatically of late. This paper addresses the need for models describing the bitcoin-US dollar exchange dynamic and their use to evaluate European options having bitcoin as the underlying asset.

1. Introduction
In recent years cryptocurrency trading has captured the attention of practitioners and academics. The volume of exchange of the former with standard currencies has grown dramatically of late. Due to the special circumstances in which the mining of cryptocurrencies takes place, and to its lack of transparency, the dynamic of the exchange rate is characterized by high volatility and large random oscillations over time. This situation introduces an extra degree of difficulty into the modeling of exchange data.
On the other hand, there is an informal but emerging market for derivatives based on cryptocurrencies. Quotes for futures contracts have recently appeared on some web sites. The market for more complex derivatives is at an incipient stage and, to our knowledge, the pricing of the latter has not been analyzed.
This paper addresses the need to evaluate such contracts. To this end, we propose a model for the dynamic of the exchange rate based on a mean-reverting exponential Levy process with jump-diffusion log-returns. We study empirical properties of the probability laws of bitcoin-US dollar exchanges and their correlation, as well as parameter estimation from three different perspectives. Next, we study the pricing of European options, adapting the well-known Fast Fourier Transform (FFT) techniques for Levy processes established in Carr and Madan (1999) to this context.
The organization of the paper is the following. In section 2 we introduce the model, the risk-neutral setting, and compute the characteristic function of the log-returns of the exchange rate. In section 3 we specialize these results to the Merton (1976) and Kou (2002) jump-diffusion models.
Key words and phrases.
Bitcoin, Jump-diffusion, Mean-reverting, Esscher transform, FFT pricing method.

In section 4 we study the empirical behavior of bitcoin-US dollar exchange data and parameter estimation. Finally, in section 5 we outline the pricing method, while in section 6 we conclude.

2. Modeling the bitcoin-US dollar exchange dynamic
Let $(\Omega, \mathcal{A}, (\mathcal{F}_t)_{t\ge 0}, P)$ be a filtered probability space verifying the usual conditions. We denote by $Q$ an equivalent martingale measure (EMM), and by $E_Q$ and $\varphi_X$ respectively the expected value and the characteristic function of a random variable $X$ under $Q$. Furthermore, the function $l_V(u) = \frac{1}{t}\log\varphi_{V_t}(-iu)$ is the Laplace exponent of a Levy process $(V_t)_{t\ge 0}$ defined on the space above. The symbol $\hat f$ denotes the Fourier transform of a function $f$, while $D^k f(u)$ or $f^{(k)}$ denote its $k$-th derivative with respect to $u$. We set $Df := D^1 f$. For a random variable $X_t$ the expression $\tilde X_t = e^{-rt}X_t$ denotes its discounted value with respect to a constant interest rate $r > 0$.
Let $(S_t)_{t\ge 0}$ be the bitcoin-US dollar exchange rate process, also defined on the same filtered probability space, and $(Y_t)_{t\ge 0}$ its associated log-price process. They are related by:

(1)   $S_t = S_0 \exp(Y_t)$

For the latter we assume a mean-reverting dynamic under the historic measure $P$ given by:

(2)   $dY_t = \alpha(\mu - Y_t)\,dt + dV_t$

where $(V_t)_{t\ge 0}$ is a Levy process, to be specified later on, and $\mu$ and $\alpha$ are the mean-reverting level and rate respectively.
The following propositions provide well-known results about the characteristic function of the log-prices under the historic probability and under the EMM defined via an Esscher transform. See for example Eberlein and Raible (1999) and Gerber and Shiu (1994).
In order to select the EMM for pricing purposes we take an Esscher transform of the historic measure $P$. See Gerber and Shiu (1994) for a rationale in terms of a utility-maximization criterion.
For a stochastic process $(X_t)_{t\ge 0}$ we consider its Esscher transform:

(3)   $\frac{dQ^\theta_t}{dP_t} = \exp(\theta X_t - t\,l_X(\theta)), \quad 0 \le t \le T, \; \theta \in \mathbb{R}$

where $P_t$ and $Q^\theta_t$ are the respective restrictions of $P$ and $Q^\theta$ to the $\sigma$-algebra $\mathcal{F}_t$.

Proposition 1.
Let $(S_t)_{t\ge 0}$ be the process defined by equations (1) and (2). Let the process $(V_t)_{t\ge 0}$ have characteristic function $\varphi_{V_t}(u)$ and Laplace exponent $l_V(u)$ under the probability $P$. Denote by $\varphi^\theta_{V_t}$ and $l^\theta_V(u)$ respectively the characteristic function and Laplace exponent of the process under the probability $Q^\theta$ obtained by the Esscher transformation in equation (3). Then, the discounted price process $(\tilde S_t)_{t\ge 0}$ is a $Q^\theta$-martingale if for any $T > 0$ the parameter $\theta$ verifies:

(4)   $\int_0^T l^\theta_V(e^{-\alpha(T-s)})\,ds = rT - \mu(1 - e^{-\alpha T})$

Moreover:

(5)   $\varphi^\theta_{Y_t}(u) = \exp\left(i\mu(1 - e^{-\alpha t})u - t\,l_V(\theta) + I_t(u,\theta)\right)$

where:

$I_t(u,\theta) = \int_0^t l_V(\theta + iu e^{-\alpha(t-s)})\,ds$

Proof.
By the Ito lemma, the solution of equation (2) is:

(6)   $Y_t = \mu(1 - e^{-\alpha t}) + W_t$

where $W_t = \int_0^t e^{-\alpha(t-s)}\,dV_s$.
We recall the following result about a functional of a Levy process $(\xi_t)_{t\ge 0}$ and a measurable function $f$:

(7)   $E\left(\exp\left(i\int_0^t f(s)\,d\xi_s\right)\right) = \exp\left(\int_0^t l_\xi(if(s))\,ds\right)$

Applied to the process $(W_t)_{t\ge 0}$, its characteristic function under the probability $Q^\theta$ becomes:

(8)   $\varphi^\theta_{W_t}(u) = \exp\left(\int_0^t l^\theta_V(iu e^{-\alpha(t-s)})\,ds\right)$

By equation (3), combined with equations (6) and (8), the discounted process $(\tilde S_t)_{t\ge 0}$ is a $Q^\theta$-martingale if and only if for any $0 \le u < t$:

$E_{Q^\theta}(e^{W_t} \mid \mathcal{F}_u) = \exp(\mu(e^{-\alpha t} - e^{-\alpha u}) + r(t-u))\,e^{W_u}$
$\Leftrightarrow \varphi^\theta_{W_{t-u}}(-i) = \exp(\mu(e^{-\alpha t} - e^{-\alpha u}) + r(t-u))$
$\Leftrightarrow \int_0^{t-u} l^\theta_V(e^{-\alpha(t-s)})\,ds = \mu(e^{-\alpha t} - e^{-\alpha u}) + r(t-u)$

In particular, for $t = T$ and $u = 0$ we have the result in equation (4).
For the second part of the proposition we simplify the notation and write $Q^\theta := Q$. Next, notice that:

$\varphi^\theta_{V_t}(u) = E\left(e^{iuV_t}e^{\theta V_t - t l_V(\theta)}\right) = \frac{\varphi_{V_t}(u - i\theta)}{\varphi_{V_t}(-i\theta)}$

and $l^\theta_V(u) = l_V(u + \theta) - l_V(\theta)$. From equations (6) and (8):

$\varphi^\theta_{Y_t}(u) = \exp\left[i\mu(1 - e^{-\alpha t})u - t\,l_V(\theta)\right]\exp\left(\int_0^t l_V(\theta + iu e^{-\alpha(t-s)})\,ds\right)$  □

Remark 2.
Notice that the characteristic function under the probability $P$ is obtained from equation (5) by taking $\theta = 0$. To simplify we write $I_t(u) = I_t(u,0)$, $\varphi_{Y_t} = \varphi^0_{Y_t}$, $Q^0 = P$, etc.
Parametric estimation is based on the log-return series given by:

(9)   $X_{j\Delta} = \log\left(\frac{S_{(j+1)\Delta}}{S_{j\Delta}}\right) = Y_{(j+1)\Delta} - Y_{j\Delta}, \quad j = 1, 2, \ldots, n$

where $\Delta > 0$ is the time between consecutive observations.

Proposition 3.
Let the log-return series be defined by equation (9). For a model following equations (1) and (2), and under the Esscher transformation, the characteristic function of the $j$-th log-return $X_{j\Delta}$ is:

(10)   $\varphi^\theta_{X_{j\Delta}}(u) = \exp\left(C_1(u) + C_2(u,\theta) + C_3(u,\theta)\right)$

where:

$C_1(u) = iu\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})$
$C_2(u,\theta) = \int_{j\Delta}^{(j+1)\Delta} l^\theta_V(iu e^{-\alpha((j+1)\Delta - s)})\,ds$
$C_3(u,\theta) = \int_0^{j\Delta} l^\theta_V(iu(e^{-\alpha\Delta} - 1)e^{-\alpha(j\Delta - s)})\,ds$

Proof.
From equation (6) we have:

(11)   $X_{j\Delta} = \mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta}) + e^{-\alpha(j+1)\Delta}\int_{j\Delta}^{(j+1)\Delta} e^{\alpha s}\,dV_s + e^{-\alpha j\Delta}(e^{-\alpha\Delta} - 1)\int_0^{j\Delta} e^{\alpha s}\,dV_s$

Hence, noting that $(W_t)_{t\ge 0}$ has independent increments:

$\varphi^\theta_{X_{j\Delta}}(u) = \exp\left[iu\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})\right] E_Q\left[\exp\left(iu e^{-\alpha(j+1)\Delta}\int_{j\Delta}^{(j+1)\Delta} e^{\alpha s}\,dV_s\right)\right] E_Q\left[\exp\left(iu e^{-\alpha j\Delta}(e^{-\alpha\Delta} - 1)\int_0^{j\Delta} e^{\alpha s}\,dV_s\right)\right]$

The conclusion follows from equation (8). □

3. A jump-diffusion model for the bitcoin-US dollar exchange
We consider a jump-diffusion dynamic for the Levy noise $(V_t)_{t\ge 0}$ given by:

(12)   $V_t = \sigma B_t + Z_t$

where $(B_t)_{t\ge 0}$ is a Brownian motion and the process $(Z_t)_{t\ge 0}$ is a homogeneous compound Poisson process, independent of $(B_t)_{t\ge 0}$, such that:

(13)   $Z_t = \sum_{k=1}^{N_t} \xi_k$

The process $(N_t)_{t\ge 0}$ is a Poisson process with intensity $\lambda > 0$, while $(\xi_k)_{k\in\mathbb{N}}$ is a sequence of i.i.d. random variables with common characteristic function $\varphi_\xi$. Furthermore, we assume the existence of the moments of the jumps up to order $M$, i.e. $E(\xi^k) < +\infty$ for $k = 1, 2, \ldots, M$, and $\varphi_\xi \in L^1(\mathbb{R})$.

Remark 4.
This model includes the case of a single homogeneous compound Poisson process with Gaussian jumps, leading to the classical Merton model, see Merton (1976), or with double exponential jump sizes, see Kou (2002).
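As an illustration, the dynamic of equations (2), (12) and (13) can be simulated with a simple Euler scheme. The sketch below uses Gaussian jumps, as in the Merton case; all parameter values are hypothetical and chosen only for the example:

```python
import numpy as np

def simulate_exchange(S0=1.0, mu=0.0, alpha=5.0, sigma=0.8,
                      lam=20.0, mu_J=0.0, sigma_J=0.05,
                      T=1.0, n_steps=250, n_paths=10_000, seed=0):
    """Euler scheme for dY_t = alpha*(mu - Y_t) dt + dV_t, where
    V_t = sigma*B_t + compound Poisson with N(mu_J, sigma_J^2) jumps,
    and S_t = S0 * exp(Y_t)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = np.zeros(n_paths)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        # sum of n_jumps i.i.d. N(mu_J, sigma_J^2) jump sizes (0 when no jump)
        dZ = rng.normal(mu_J * n_jumps, sigma_J * np.sqrt(n_jumps))
        Y += alpha * (mu - Y) * dt + sigma * dB + dZ
    return S0 * np.exp(Y)
```

With $\mu = \mu_J = 0$ the log-exchange rate reverts to zero, so the sample mean of $\log S_T$ over many paths should be close to zero.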
Results in section 2 are easily adapted to this setting. Notice that for the model described by equations (12) and (13):

$l_V(\theta + iu e^{-\alpha(t-s)}) = \left(\tfrac{1}{2}\sigma^2\theta^2 - \lambda\right) + i\sigma^2\theta e^{-\alpha(t-s)}u - \tfrac{1}{2}\sigma^2 e^{-2\alpha(t-s)}u^2 + \lambda\varphi_\xi(-i\theta + u e^{-\alpha(t-s)})$

$I_t(u,\theta) = \left(\tfrac{1}{2}\sigma^2\theta^2 - \lambda\right)t + i\frac{\sigma^2\theta}{\alpha}(1 - e^{-\alpha t})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha t})u^2 + \lambda\int_0^t \varphi_\xi(-i\theta + u e^{-\alpha(t-s)})\,ds$

Therefore:

(14)   $\varphi^\theta_{Y_t}(u) = \exp\left[-\lambda\varphi_\xi(-i\theta)t + i\left(\mu + \frac{\sigma^2\theta}{\alpha}\right)(1 - e^{-\alpha t})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha t})u^2 + \lambda\int_0^t \varphi_\xi(-i\theta + u e^{-\alpha(t-s)})\,ds\right]$

Moreover:

$\int_0^T l^\theta_V(e^{-\alpha(T-s)})\,ds = \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha T}) + \frac{\sigma^2\theta}{\alpha}(1 - e^{-\alpha T}) + \lambda\int_0^T \varphi_\xi(-i(\theta + e^{-\alpha(T-s)}))\,ds - \lambda\varphi_\xi(-i\theta)T$

From proposition 1, equation (4), $\theta$ solves the equation:

$\lambda\int_0^T \varphi_\xi(-i(\theta + e^{-\alpha(T-s)}))\,ds = (\lambda\varphi_\xi(-i\theta) + r)T - \left(\mu + \frac{\sigma^2\theta}{\alpha}\right)(1 - e^{-\alpha T}) - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha T})$

Next, we compute the intermediate quantities:

$C_2(u,\theta) = \int_{j\Delta}^{(j+1)\Delta} l_V(\theta + iu e^{-\alpha((j+1)\Delta - s)})\,ds - l_V(\theta)\Delta = \left(\tfrac{1}{2}\sigma^2\theta^2 - \lambda - l_V(\theta)\right)\Delta + i\frac{\sigma^2\theta}{\alpha}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha\Delta})u^2 + \lambda\int_{j\Delta}^{(j+1)\Delta} \varphi_\xi(-i\theta + u e^{-\alpha((j+1)\Delta - s)})\,ds$

$C_3(u,\theta) = \int_0^{j\Delta} l_V(\theta + iu(e^{-\alpha\Delta} - 1)e^{-\alpha(j\Delta - s)})\,ds - l_V(\theta)j\Delta = \left(\tfrac{1}{2}\sigma^2\theta^2 - \lambda - l_V(\theta)\right)j\Delta - i\frac{\sigma^2\theta}{\alpha}(1 - e^{-\alpha j\Delta})(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}(1 - e^{-\alpha\Delta})^2(1 - e^{-2\alpha j\Delta})u^2 + \lambda\int_0^{j\Delta} \varphi_\xi(-i\theta + u(e^{-\alpha\Delta} - 1)e^{-\alpha(j\Delta - s)})\,ds$

Hence, from proposition 3 we have:

(15)   $\varphi^\theta_{X_{j\Delta}}(u) = \exp\left[\lambda(K_1(u,\theta) + K_2(u,\theta)) - \lambda\varphi_\xi(-i\theta)(j+1)\Delta + i\left(\mu + \frac{\sigma^2\theta}{\alpha}\right)e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\right]$

where:

$K_1(u,\theta) = \int_{j\Delta}^{(j+1)\Delta} \varphi_\xi(-i\theta + u e^{-\alpha((j+1)\Delta - s)})\,ds$
$K_2(u,\theta) = \int_0^{j\Delta} \varphi_\xi(-i\theta + u(e^{-\alpha\Delta} - 1)e^{-\alpha(j\Delta - s)})\,ds$
$E_{j,k}(\alpha) = (1 - e^{-k\alpha\Delta}) + (-1)^k(1 - e^{-\alpha\Delta})^k(1 - e^{-k\alpha j\Delta})$

In particular, for $\theta = 0$, writing $K_1(u) := K_1(u,0)$ and $K_2(u) := K_2(u,0)$:

(16)   $\varphi_{X_{j\Delta}}(u) = \exp\left[\lambda(K_1(u) + K_2(u)) - \lambda(j+1)\Delta + i\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\right]$
Example 5. Mean-reverting Black-Scholes model.
Although it is clear from the empirical analysis in section 4 below that a mean-reverting Black-Scholes model does not capture the dynamic of the bitcoin-US dollar exchange rate, we nonetheless consider it for comparison. To this end we set $V_t = \sigma B_t$. Hence:

$\varphi^\theta_{Y_t}(u) = \exp\left(i\left(\frac{\sigma^2\theta}{\alpha} + \mu\right)(1 - e^{-\alpha t})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha t})u^2\right)$

Therefore, the Gerber-Shiu parameter $\theta$ solves:

$rT - \left(\frac{\sigma^2\theta}{\alpha} + \mu\right)(1 - e^{-\alpha T}) - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha T}) = 0$

Hence:

$\theta = \frac{\alpha}{\sigma^2}\left(\frac{rT}{1 - e^{-\alpha T}} - \frac{\sigma^2}{4\alpha}(1 + e^{-\alpha T}) - \mu\right)$

and:

$\varphi^\theta_{X_{j\Delta}}(u) = \exp\left[i\left(\mu + \frac{\sigma^2\theta}{\alpha}\right)e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\right]$
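As a sanity check on the Black-Scholes special case, the characteristic function of $X_{j\Delta}$ under $P$ (the case $\theta = 0$) can be compared with the Gaussian characteristic function obtained directly from the covariance structure of the Ornstein-Uhlenbeck process. The parameter values below are hypothetical:

```python
import numpy as np

def cf_model(u, j, dt, mu, alpha, sigma):
    """CF of the log-return X_{j dt} in the mean-reverting Black-Scholes
    case under P (theta = 0), expressed through E_{j,2}(alpha)."""
    a = np.exp(-alpha * dt)
    E_j2 = (1 - a * a) + (1 - a) ** 2 * (1 - np.exp(-2 * alpha * j * dt))
    mean = mu * np.exp(-alpha * j * dt) * (1 - a)
    return np.exp(1j * mean * u - sigma ** 2 / (4 * alpha) * E_j2 * u ** 2)

def cf_direct(u, j, dt, mu, alpha, sigma):
    """Same CF from the OU covariances: Y_t is Gaussian with mean
    mu*(1 - e^{-alpha t}) and variance sigma^2*(1 - e^{-2 alpha t})/(2 alpha)."""
    t1, t2 = j * dt, (j + 1) * dt
    m = mu * (np.exp(-alpha * t1) - np.exp(-alpha * t2))
    v1 = sigma ** 2 * (1 - np.exp(-2 * alpha * t1)) / (2 * alpha)
    v2 = sigma ** 2 * (1 - np.exp(-2 * alpha * t2)) / (2 * alpha)
    cov = np.exp(-alpha * dt) * v1            # Cov(Y_{t1}, Y_{t2})
    var = v1 + v2 - 2 * cov
    return np.exp(1j * m * u - 0.5 * var * u ** 2)
```

The two expressions agree, confirming in particular that the variance of $X_{j\Delta}$ equals $\frac{\sigma^2}{2\alpha}E_{j,2}(\alpha)$.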
Example 6. Mean-reverting jump-diffusion model with Gaussian jumps.
We assume $\xi_k \sim N(\mu_J, \sigma_J^2)$. Then:

$\varphi_\xi(u) = \exp\left(i\mu_J u - \tfrac{1}{2}\sigma_J^2 u^2\right)$

and, for real $w$, $\varphi_\xi(-i\theta + w) = \varphi_\xi(-i\theta)\varphi_\xi(w)\exp(i\sigma_J^2\theta w)$. Hence, after the change of variable $y = e^{-\alpha(t-s)}$:

(17)   $\int_0^t \varphi_\xi(-i\theta + u e^{-\alpha(t-s)})\,ds = \varphi_\xi(-i\theta)\int_0^t \varphi_\xi(u e^{-\alpha(t-s)})\exp(i\sigma_J^2\theta u e^{-\alpha(t-s)})\,ds = \frac{\varphi_\xi(-i\theta)}{\alpha}A(u, i\sigma_J^2\theta u, t)$

where:

$A(u, v, t) = \int_{e^{-\alpha t}}^1 y^{-1}\varphi_\xi(uy)\exp(vy)\,dy$

Therefore, combining equations (14) and (17):

$\varphi^\theta_{Y_t}(u) = \exp\left[-\lambda\varphi_\xi(-i\theta)t + i\left(\frac{\sigma^2\theta}{\alpha} + \mu\right)(1 - e^{-\alpha t})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha t})u^2 + \frac{\lambda\varphi_\xi(-i\theta)}{\alpha}A(u, i\sigma_J^2\theta u, t)\right]$

Similar calculations lead to:

$\int_0^T \varphi_\xi(-i(\theta + e^{-\alpha(T-s)}))\,ds = \frac{\varphi_\xi(-i\theta)}{\alpha}A(-i, \sigma_J^2\theta, T)$

so the Gerber-Shiu coefficient $\theta_{GS}$ satisfies:

$\lambda\varphi_\xi(-i\theta)\left(\alpha T - A(-i, \sigma_J^2\theta, T)\right) - (\sigma^2\theta + \alpha\mu)(1 - e^{-\alpha T}) = \frac{\sigma^2}{4}(1 - e^{-2\alpha T}) - \alpha rT$

Finally, the characteristic function of the log-returns under the probability $Q^\theta$ is written:

$\varphi^\theta_{X_{j\Delta}}(u) = \exp\Big[\frac{\lambda\varphi_\xi(-i\theta)}{\alpha}\Big(A(u, i\sigma_J^2\theta u, (j+1)\Delta) - A(ue^{-\alpha\Delta}, i\sigma_J^2\theta u e^{-\alpha\Delta}, j\Delta) + A(u(e^{-\alpha\Delta} - 1), i\sigma_J^2\theta(e^{-\alpha\Delta} - 1)u, j\Delta)\Big) - \lambda\varphi_\xi(-i\theta)(j+1)\Delta + i\left(\mu + \frac{\sigma^2\theta}{\alpha}\right)e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\Big]$
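The reduction in equation (17) can be verified numerically for the Gaussian jump case; both sides are computed by quadrature below. The parameter values are hypothetical:

```python
import numpy as np
from scipy.integrate import quad

mu_J, sigma_J, alpha = 0.1, 0.3, 2.0

def phi_xi(z):
    # characteristic function of a N(mu_J, sigma_J^2) jump; complex z allowed
    return np.exp(1j * mu_J * z - 0.5 * sigma_J ** 2 * z ** 2)

def cquad(f, a, b):
    # quadrature of a complex-valued integrand, split into real/imaginary parts
    return quad(lambda s: f(s).real, a, b)[0] + 1j * quad(lambda s: f(s).imag, a, b)[0]

def lhs(u, theta, t):
    return cquad(lambda s: phi_xi(-1j * theta + u * np.exp(-alpha * (t - s))), 0.0, t)

def A(u, v, t):
    # A(u, v, t) = int_{e^{-alpha t}}^{1} y^{-1} phi_xi(u y) e^{v y} dy
    return cquad(lambda y: phi_xi(u * y) * np.exp(v * y) / y, np.exp(-alpha * t), 1.0)

def rhs(u, theta, t):
    return phi_xi(-1j * theta) / alpha * A(u, 1j * sigma_J ** 2 * theta * u, t)
```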
Example 7. Mean-reverting jump-diffusion model with double exponential jumps.
In the case of the Kou model, the common p.d.f. of the jump sizes is:

$f_\xi(x) = q\eta_1 e^{-\eta_1 x}\mathbf{1}_{\{x \ge 0\}} + (1-q)\eta_2 e^{\eta_2 x}\mathbf{1}_{\{x < 0\}}, \quad \eta_1 > 1, \; \eta_2 > 0$

where $q$ and $1-q$ represent the respective probabilities of upward and downward jumps. The characteristic function of the jumps is:

$\varphi_\xi(u) = \frac{q\eta_1}{\eta_1 - iu} + \frac{(1-q)\eta_2}{\eta_2 + iu}$

Hence, after the change of variable $y = e^{-\alpha(t-s)}$:

$\int_0^t \varphi_\xi(-i\theta + u e^{-\alpha(t-s)})\,ds = \frac{q\eta_1}{\alpha}\int_{e^{-\alpha t}}^1 \frac{dy}{y(\eta_1 - \theta - iuy)} + \frac{(1-q)\eta_2}{\alpha}\int_{e^{-\alpha t}}^1 \frac{dy}{y(\eta_2 + \theta + iuy)} = \frac{q\eta_1}{\alpha}A_1(u,\theta,t) + \frac{(1-q)\eta_2}{\alpha}A_2(u,\theta,t)$

where:

$A_1(u,\theta,t) = \int_{e^{-\alpha t}}^1 \frac{dy}{y(\eta_1 - \theta - iuy)}, \qquad A_2(u,\theta,t) = \int_{e^{-\alpha t}}^1 \frac{dy}{y(\eta_2 + \theta + iuy)}$

Then:

$\varphi^\theta_{Y_t}(u) = \exp\left[-\lambda\varphi_\xi(-i\theta)t + i\left(\frac{\sigma^2\theta}{\alpha} + \mu\right)(1 - e^{-\alpha t})u - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha t})u^2 + \frac{\lambda q\eta_1}{\alpha}A_1(u,\theta,t) + \frac{\lambda(1-q)\eta_2}{\alpha}A_2(u,\theta,t)\right]$

Similar calculations lead to:

$\int_0^T \varphi_\xi(-i(\theta + e^{-\alpha(T-s)}))\,ds = \frac{q\eta_1}{\alpha}A_1(-i,\theta,T) + \frac{(1-q)\eta_2}{\alpha}A_2(-i,\theta,T)$

Therefore, the Gerber-Shiu coefficient $\theta_{GS}$ verifies:

$\lambda\left(\frac{q\eta_1}{\alpha}A_1(-i,\theta,T) + \frac{(1-q)\eta_2}{\alpha}A_2(-i,\theta,T)\right) = (\lambda\varphi_\xi(-i\theta) + r)T - \left(\frac{\sigma^2\theta}{\alpha} + \mu\right)(1 - e^{-\alpha T}) - \frac{\sigma^2}{4\alpha}(1 - e^{-2\alpha T})$

Finally:

(18)   $\varphi^\theta_{X_{j\Delta}}(u) = \exp\Big[\frac{\lambda q\eta_1}{\alpha}\left[A_1(u,\theta,(j+1)\Delta) - A_1(ue^{-\alpha\Delta},\theta,j\Delta) + A_1(u(e^{-\alpha\Delta} - 1),\theta,j\Delta)\right] + \frac{\lambda(1-q)\eta_2}{\alpha}\left[A_2(u,\theta,(j+1)\Delta) - A_2(ue^{-\alpha\Delta},\theta,j\Delta) + A_2(u(e^{-\alpha\Delta} - 1),\theta,j\Delta)\right] - \lambda\varphi_\xi(-i\theta)(j+1)\Delta + i\left(\mu + \frac{\sigma^2\theta}{\alpha}\right)e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\Big]$
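The double exponential density and its characteristic function can be checked against each other by numerical integration; the values of $q$, $\eta_1$, $\eta_2$ below are hypothetical:

```python
import numpy as np
from scipy.integrate import quad

q, eta1, eta2 = 0.4, 10.0, 5.0

def kou_pdf(x):
    # f_xi(x) = q*eta1*exp(-eta1*x) for x >= 0, (1-q)*eta2*exp(eta2*x) for x < 0
    return q * eta1 * np.exp(-eta1 * x) if x >= 0 else (1 - q) * eta2 * np.exp(eta2 * x)

def kou_cf(u):
    # phi_xi(u) = q*eta1/(eta1 - iu) + (1-q)*eta2/(eta2 + iu)
    return q * eta1 / (eta1 - 1j * u) + (1 - q) * eta2 / (eta2 + 1j * u)

def cf_numeric(u):
    # Fourier transform of the density, split at the kink in zero
    re = quad(lambda x: np.cos(u * x) * kou_pdf(x), -np.inf, 0)[0] \
       + quad(lambda x: np.cos(u * x) * kou_pdf(x), 0, np.inf)[0]
    im = quad(lambda x: np.sin(u * x) * kou_pdf(x), -np.inf, 0)[0] \
       + quad(lambda x: np.sin(u * x) * kou_pdf(x), 0, np.inf)[0]
    return re + 1j * im
```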
4. Parameter calibration

Table 1. Average, volatility, skewness and kurtosis of the bitcoin-US dollar exchange rate and of its log-returns, January 2011-June 2018.

Figure 1.

Figure 2. Autocorrelation of log-returns and squared log-returns.

In figure 2 the autocorrelation of the log-returns (left) is shown; most values lie within the 95% confidence strip around zero. As is common in most financial series, the autocorrelation of the squared log-returns (right) is significant at most relevant lags. This provides an argument for non-Gaussianity, which is confirmed by a Kolmogorov-Smirnov test on the log-returns: the hypothesis of normality is rejected, with a p-value far below the 5% significance level.

Figure 3.
Left: empirical p.d.f. of the log-returns of the bitcoin-US dollar exchange rate, compared with a normal p.d.f. Right: empirical p.d.f. vs. stable and Student's t distributions.

The empirical p.d.f. of the log-returns is obtained using a non-parametric Gaussian kernel.
Parameter estimation results are shown in table 2 for a Student's t and a stable distribution. Between brackets are the 95% confidence intervals of the estimates, according to the Fisher information from a maximum likelihood approach. The estimated stability parameter $\alpha$ of the stable distribution and the degrees of freedom $\nu$ of a located and scaled Student's t distribution both indicate a strongly heavy-tailed distribution of the exchange log-returns: values this low point to extremely high volatility and tail thickness.
The results above are confirmed by a fit based on a generalized Pareto distribution, in which case the shape parameter plays the analogous role; a positive estimated shape indicates a heavy-tailed probability distribution. Data in excess of 0.05 have been considered.

param.       ν                     σ                     µ
t-student    1.35307               0.0174                0.0056
conf. int.   [1.23695, 1.4801]     [0.01626, 0.01867]    [0.00467, 0.00653]

param.       α                     β                     σ                    µ
stable       1.13346               0.00306               0.0157503            0.00564
conf. int.   [1.08219, 1.18473]    [-0.07689, 0.08300]   [0.01491, 0.0166]    [0.00465, 0.0066]

Table 2. Maximum likelihood estimates of the parameters of scaled Student's t and stable laws.

The parameters considered for each model are listed in table 3. They correspond, respectively, to the mean-reverting Black-Scholes, Merton and Kou models.
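The diagnostics above can be reproduced in a few lines. Since the original exchange series is not included here, the sketch below applies them to a synthetic heavy-tailed sample drawn from the fitted Student's t law of table 2 (df 1.35, scale 0.0174, location 0.0056); real bitcoin log-returns would additionally show the significant squared-return autocorrelation discussed above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for the log-return series
x = stats.t.rvs(1.35, loc=0.0056, scale=0.0174, size=8000, random_state=rng)

def acf(y, nlags):
    # sample autocorrelation function at lags 1..nlags
    y = y - y.mean()
    c0 = np.dot(y, y) / len(y)
    return np.array([np.dot(y[:-k], y[k:]) / (len(y) * c0) for k in range(1, nlags + 1)])

acf_returns = acf(x, 20)
acf_squared = acf(x ** 2, 20)

# Kolmogorov-Smirnov test of normality on the standardized series
ks_stat, p_value = stats.kstest((x - x.mean()) / x.std(), 'norm')

# maximum likelihood fit of a located and scaled Student's t law
df_hat, loc_hat, scale_hat = stats.t.fit(x)
```

For a heavy-tailed sample of this size the normality hypothesis is decisively rejected, and the fitted degrees of freedom land near the generating value.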
4.1. Method of Moments.
We match empirical and theoretical moments. Theoretical moments are obtained via the derivatives of the characteristic function of the log-returns in equation (23). Notice that we are estimating the parameters under the historic measure.

Model     Parameters
BSchMR    µ, α, σ
MeMR      µ, α, σ, µ_J, σ_J, λ
KouMR     µ, α, σ, η_1, η_2, q

Table 3.
Parameters in the different models.

Hence, we have:

$D^k K_1(0) = \varphi_\xi^{(k)}(0)\int_{j\Delta}^{(j+1)\Delta} e^{-k\alpha((j+1)\Delta - s)}\,ds = \varphi_\xi^{(k)}(0)\frac{1 - e^{-k\alpha\Delta}}{k\alpha} = i^k E(\xi^k)\frac{1 - e^{-k\alpha\Delta}}{k\alpha}$

$D^k K_2(0) = \varphi_\xi^{(k)}(0)(e^{-\alpha\Delta} - 1)^k\int_0^{j\Delta} e^{-k\alpha(j\Delta - s)}\,ds = (-1)^k i^k E(\xi^k)\frac{(1 - e^{-\alpha\Delta})^k(1 - e^{-k\alpha j\Delta})}{k\alpha}$

On the other hand, after defining:

$T(u) = -\lambda(j+1)\Delta + i\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2 + \lambda(K_1(u) + K_2(u))$

we get:

$DT(0) = i\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta}) + i\frac{\lambda E(\xi)}{\alpha}E_{j,1}(\alpha)$

$D^2 T(0) = -\frac{1}{2\alpha}\left(\lambda E(\xi^2) + \sigma^2\right)E_{j,2}(\alpha)$

$D^k T(0) = i^k\frac{\lambda E(\xi^k)}{k\alpha}E_{j,k}(\alpha), \quad k = 3, 4, \ldots$

The derivatives of the characteristic function can be computed recursively by:

$D^k \varphi_{X_{j\Delta}}(u) = \sum_{l=0}^{k-1}\binom{k-1}{l}D^{l+1}T(u)\,D^{k-l-1}\varphi_{X_{j\Delta}}(u)$

Details of the calculation of the first moments are presented in the appendix. Next, we define the empirical moments with respect to the origin in the natural way and match as many theoretical moments as needed.
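A minimal numerical illustration of the matching, restricted to the Black-Scholes special case ($\lambda = 0$) where the log-returns can be simulated exactly as a Gaussian AR(1): the first two theoretical moments, built from $E_{j,1}(\alpha)$ and $E_{j,2}(\alpha)$, are compared with the empirical moments of a simulated series. Parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, alpha, sigma, dt, n = 0.5, 5.0, 0.8, 1 / 250, 200_000

# exact simulation of Y_{j dt}: Gaussian AR(1) with Y_0 = 0
a = np.exp(-alpha * dt)
eps = rng.normal(0.0, sigma * np.sqrt((1 - a * a) / (2 * alpha)), n)
Y = np.zeros(n + 1)
for j in range(n):
    Y[j + 1] = mu * (1 - a) + a * Y[j] + eps[j]
X = np.diff(Y)                     # log-return series X_{j dt}

j = np.arange(n)
E_j1 = (1 - a) * np.exp(-alpha * j * dt)
E_j2 = (1 - a * a) + (1 - a) ** 2 * (1 - np.exp(-2 * alpha * j * dt))

m1_theory = mu * np.mean(E_j1)
m2_theory = sigma ** 2 / (2 * alpha) * np.mean(E_j2) + mu ** 2 * np.mean(E_j1 ** 2)
m1_emp, m2_emp = X.mean(), np.mean(X ** 2)
```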
Hence:

$\hat m_k = \frac{1}{n}\sum_{j=1}^n X_{j\Delta}^k = \frac{1}{n}\sum_{j=1}^n E(X_{j\Delta}^k), \quad k \in \mathbb{N}$

Writing $\bar f_j = \frac{1}{n}\sum_{j=1}^n f_j$, and denoting the cumulants of $X_{j\Delta}$ by $\kappa_{1,j} = \left(\mu + \frac{\lambda E(\xi)}{\alpha}\right)E_{j,1}(\alpha)$, $\kappa_{2,j} = \frac{\lambda E(\xi^2) + \sigma^2}{2\alpha}E_{j,2}(\alpha)$ and $\kappa_{k,j} = \frac{\lambda E(\xi^k)}{k\alpha}E_{j,k}(\alpha)$ for $k \ge 3$, this leads to the equations:

$\hat m_1 = \overline{\kappa_{1,j}} = \left(\mu + \frac{\lambda E(\xi)}{\alpha}\right)(1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}}$

$\hat m_2 = \overline{\kappa_{2,j} + \kappa_{1,j}^2} = \frac{\lambda E(\xi^2) + \sigma^2}{2\alpha}\overline{E_{j,2}(\alpha)} + \left(\mu + \frac{\lambda E(\xi)}{\alpha}\right)^2\overline{E_{j,1}(\alpha)^2}$

$\hat m_3 = \overline{\kappa_{3,j} + 3\kappa_{2,j}\kappa_{1,j} + \kappa_{1,j}^3}$

$\hat m_4 = \overline{\kappa_{4,j} + 4\kappa_{3,j}\kappa_{1,j} + 3\kappa_{2,j}^2 + 6\kappa_{2,j}\kappa_{1,j}^2 + \kappa_{1,j}^4}$

The averages of the $E_{j,k}(\alpha)$ and of their products are explicit. In particular:

$\overline{E_{j,1}(\alpha)} = (1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}}$
$\overline{E_{j,k}(\alpha)} = (1 - e^{-k\alpha\Delta}) + (-1)^k(1 - e^{-\alpha\Delta})^k\left(1 - \overline{e^{-k\alpha j\Delta}}\right)$
$E_{j,l}(\alpha)E_{j,k}(\alpha) = (1 - e^{-l\alpha\Delta})(1 - e^{-k\alpha\Delta}) + (-1)^k(1 - e^{-\alpha\Delta})^k(1 - e^{-l\alpha\Delta})(1 - e^{-k\alpha j\Delta}) + (-1)^l(1 - e^{-\alpha\Delta})^l(1 - e^{-k\alpha\Delta})(1 - e^{-l\alpha j\Delta}) + (-1)^{l+k}(1 - e^{-\alpha\Delta})^{l+k}(1 - e^{-l\alpha j\Delta})(1 - e^{-k\alpha j\Delta})$

and the averages over $j$ follow term by term. Higher moments can be computed in a similar way.
Example 8. MRBSch.
In the case of the BSchMR model notice that $K_1 = K_2 = 0$. The matching of the first three moments leads to the equations:

$\hat m_1 = \mu(1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}}$

$\hat m_2 = \frac{\sigma^2}{2\alpha}\overline{E_{j,2}(\alpha)} + \mu^2(1 - e^{-\alpha\Delta})^2\,\overline{e^{-2\alpha j\Delta}}$

$\hat m_3 = \frac{3\mu\sigma^2}{2\alpha}(1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}E_{j,2}(\alpha)} + \mu^3(1 - e^{-\alpha\Delta})^3\,\overline{e^{-3\alpha j\Delta}}$
Example 9. MRMe.
For the Merton model the moments of the jump sizes are $E(\xi) = \mu_J$, $E(\xi^2) = \mu_J^2 + \sigma_J^2$, $E(\xi^3) = \mu_J^3 + 3\mu_J\sigma_J^2$ and $E(\xi^4) = \mu_J^4 + 6\mu_J^2\sigma_J^2 + 3\sigma_J^4$. Substituting into the general moment equations gives, for the first two moments:

$\hat m_1 = \left(\mu + \frac{\lambda\mu_J}{\alpha}\right)(1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}}$

$\hat m_2 = \frac{\lambda(\mu_J^2 + \sigma_J^2) + \sigma^2}{2\alpha}\overline{E_{j,2}(\alpha)} + \left(\mu + \frac{\lambda\mu_J}{\alpha}\right)^2\overline{E_{j,1}(\alpha)^2}$

The equations for $\hat m_3$ and $\hat m_4$ follow from the general moment equations above with these jump moments.
Example 10. MRKou.
The moments of the jump sizes are:

$E(\xi^k) = k!\left(\frac{q}{\eta_1^k} + (-1)^k\frac{1-q}{\eta_2^k}\right)$

Hence, for the first two moments:

$\hat m_1 = \left(\mu + \frac{\lambda}{\alpha}\left(\frac{q}{\eta_1} - \frac{1-q}{\eta_2}\right)\right)(1 - e^{-\alpha\Delta})\,\overline{e^{-\alpha j\Delta}}$

$\hat m_2 = \frac{1}{2\alpha}\left(2\lambda\left(\frac{q}{\eta_1^2} + \frac{1-q}{\eta_2^2}\right) + \sigma^2\right)\overline{E_{j,2}(\alpha)} + \left(\mu + \frac{\lambda}{\alpha}\left(\frac{q}{\eta_1} - \frac{1-q}{\eta_2}\right)\right)^2\overline{E_{j,1}(\alpha)^2}$

with $\hat m_3$ and $\hat m_4$ obtained from the general moment equations above and the jump moments $E(\xi^3) = 6\left(\frac{q}{\eta_1^3} - \frac{1-q}{\eta_2^3}\right)$ and $E(\xi^4) = 24\left(\frac{q}{\eta_1^4} + \frac{1-q}{\eta_2^4}\right)$.
Estimation by Maximum Likelihood.
First, we find the p.d.f. of the random variables $X_{j\Delta}$. To this end we define the quantities $\gamma_t = \int_0^t e^{\alpha s}\,dB_s$ and $\nu_t = \int_0^t e^{\alpha s}\,dZ_s$.
From the jump-diffusion model given by equation (12) we can re-write equation (11) as $X_{j\Delta} = \beta_j + \eta_j$, where the independent random variables $\beta_j$ and $\eta_j$ are defined as:

$\beta_j = \mu(1 - e^{-\alpha\Delta})e^{-\alpha j\Delta} + \sigma e^{-\alpha(j+1)\Delta}(\gamma_{(j+1)\Delta} - \gamma_{j\Delta}) + \sigma e^{-\alpha j\Delta}(e^{-\alpha\Delta} - 1)\gamma_{j\Delta}$
$\eta_j = e^{-\alpha j\Delta}\left[e^{-\alpha\Delta}(\nu_{(j+1)\Delta} - \nu_{j\Delta}) + (e^{-\alpha\Delta} - 1)\nu_{j\Delta}\right]$

From equation (8), the characteristic functions of $\gamma_{j\Delta}$ and $\gamma_{(j+1)\Delta} - \gamma_{j\Delta}$ are respectively:

$\varphi_{\gamma_{j\Delta}}(u) = \exp\left(\int_0^{j\Delta} l_B(iue^{\alpha s})\,ds\right) = \exp\left(-\frac{1}{4\alpha}(e^{2\alpha j\Delta} - 1)u^2\right)$
$\varphi_{\gamma_{(j+1)\Delta} - \gamma_{j\Delta}}(u) = \exp\left(-\frac{1}{4\alpha}e^{2\alpha j\Delta}(e^{2\alpha\Delta} - 1)u^2\right)$

Therefore, we conclude that $\beta_j \sim N\left(\mu_{j,\beta}(\alpha), \sigma^2_{j,\beta}(\alpha)\right)$, where $\mu_{j,\beta}(\alpha) = \mu(1 - e^{-\alpha\Delta})e^{-\alpha j\Delta}$ and $\sigma^2_{j,\beta}(\alpha) = \frac{\sigma^2}{2\alpha}E_{j,2}(\alpha)$.
On the other hand, the characteristic functions of $\nu_{j\Delta}$ and $\nu_{(j+1)\Delta} - \nu_{j\Delta}$ are respectively:

$\varphi_{\nu_{j\Delta}}(u) = \exp\left(\int_0^{j\Delta} l_Z(iue^{\alpha s})\,ds\right) = \exp\left(\lambda\int_0^{j\Delta} \varphi_\xi(ue^{\alpha s})\,ds - \lambda j\Delta\right)$
$\varphi_{\nu_{(j+1)\Delta} - \nu_{j\Delta}}(u) = \exp\left(\lambda\int_{j\Delta}^{(j+1)\Delta} \varphi_\xi(ue^{\alpha s})\,ds - \lambda\Delta\right)$

Hence:

$\varphi_{\eta_j}(u) = \varphi_{\nu_{j\Delta}}\left((e^{-\alpha\Delta} - 1)e^{-\alpha j\Delta}u\right)\varphi_{\nu_{(j+1)\Delta} - \nu_{j\Delta}}\left(e^{-\alpha(j+1)\Delta}u\right) = \exp\left(\lambda(K_1(u) + K_2(u) - (j+1)\Delta)\right)$

Notice that the probability distributions of $\nu_{j\Delta}$ and $\nu_{(j+1)\Delta} - \nu_{j\Delta}$ have positive probability mass at zero. We write their p.d.f.'s as Radon-Nikodym derivatives with respect to a measure with positive mass at zero and diffuse everywhere else. We denote by $f_{\beta_j}(x;\theta)$, $f_{\eta_j}(x;\theta)$ and $f_{X_{j\Delta}}(x;\theta)$ respectively the p.d.f.'s of $\beta_j$, $\eta_j$ and $X_{j\Delta}$.
In order to emphasize the dependence, we let them depend on the unknown parameter $\theta$, which should not be confused with the Gerber-Shiu parameter of section 2. We let other relevant quantities depend on $\theta$ as well. Furthermore, we assume the condition:

(19)   $\int_{\mathbb{R}} \exp\left(\lambda\,\mathrm{Re}(K_1(u;\theta) + K_2(u;\theta))\right)du < +\infty$

in order to guarantee the existence of the p.d.f.'s of $\eta_j$ and of the log-return variables $X_{j\Delta}$. The p.d.f. of the random variable $\eta_j$ is computed via an inverse Fourier transform as:

(20)   $f_{\eta_j}(x;\theta) = \frac{1}{2\pi}e^{-\lambda(j+1)\Delta}\left(1 - \exp(-\lambda(j+1)\Delta)\right)\int_{\mathbb{R}} \exp\left(-iux + \lambda(K_1(u;\theta) + K_2(u;\theta))\right)du + \exp(-\lambda(j+1)\Delta)$

Notice that, by the independence of $\beta_j$ and $\eta_j$, we have $f_{X_{j\Delta}} = f_{\beta_j} \star f_{\eta_j}$, where $f \star g$ denotes the convolution product of the functions $f$ and $g$. Hence:

$f_{X_{j\Delta}}(x;\theta) = \int_{\mathbb{R}} f_{\beta_j}(x - y;\theta)f_{\eta_j}(y;\theta)\,dy = \int_{\mathbb{R}} f_{\eta_j}(x - \sigma_{j,\beta}(\alpha)z - \mu_{j,\beta}(\alpha))f_Z(z)\,dz$

after the change of variable $z = \frac{x - y - \mu_{j,\beta}(\alpha)}{\sigma_{j,\beta}(\alpha)}$, where $f_Z$ is the standard normal p.d.f. Substituting equation (20) into the last expression and applying Fubini's theorem:

(21)   $f_{X_{j\Delta}}(x;\theta) = \frac{1}{2\pi}e^{-\lambda(j+1)\Delta}\left(1 - \exp(-\lambda(j+1)\Delta)\right)J(x;\theta) + \exp(-\lambda(j+1)\Delta)$

where:

$J(x;\theta) = \int_{\mathbb{R}} A(x,u;\theta)\,du$
$A(x,u;\theta) = \exp\left[-iu(x - \mu_{j,\beta}(\alpha)) + \lambda(K_1(u;\theta) + K_2(u;\theta))\right]\exp\left(-\frac{\sigma^2_{j,\beta}(\alpha)}{2}u^2\right)$

We denote the vector of data by $x_\Delta = (x_\Delta, x_{2\Delta}, \ldots, x_{n\Delta})$.
The log-likelihood function, disregarding the terms not depending on the parameters, is:

(22)   $l(x_\Delta;\theta) = \sum_{j=1}^n \log f_{X_{j\Delta}}(x_{j\Delta};\theta) = -\lambda\Delta\frac{n(n+3)}{2} + \sum_{j=1}^n \log\left(1 - \exp(-\lambda(j+1)\Delta)\right) + \sum_{j=1}^n \log J(x_{j\Delta};\theta)$

The maximum likelihood estimator $\hat\theta_{MLE} := \mathrm{argmax}_\theta\, l(x_\Delta;\theta)$ solves the system:

$D_k l(x_\Delta;\theta) = 0, \quad k = 1, 2, \ldots, d$

where $D_k$ here denotes the derivative with respect to the parameter $\theta_k$, and the dimension $d$ depends on the specific model. Hence:

$D_k l(x_\Delta;\theta) = -\Delta\frac{n(n+3)}{2} + \Delta\sum_{j=1}^n \frac{(j+1)\exp(-\lambda(j+1)\Delta)}{1 - \exp(-\lambda(j+1)\Delta)} + \sum_{j=1}^n \frac{D_k J(x_{j\Delta};\theta)}{J(x_{j\Delta};\theta)} = 0, \quad \theta_k = \lambda$

$D_k l(x_\Delta;\theta) = \sum_{j=1}^n \frac{D_k J(x_{j\Delta};\theta)}{J(x_{j\Delta};\theta)} = 0, \quad \theta_k \ne \lambda$

Example 11.
Mean-reverting Black-Scholes model.
In this case $\lambda = 0$, $\xi_j = 0$ and $\theta = (\alpha, \mu, \sigma)$. Up to an additive constant:

$l(x_\Delta;\theta) = -n\log\sigma + \frac{n}{2}\log\alpha - \frac{1}{2}\sum_{j=1}^n \log E_{j,2}(\alpha) - \frac{\alpha}{\sigma^2}\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))^2}{E_{j,2}(\alpha)}$

Differentiating with respect to $\alpha$:

$\frac{\partial E_{j,2}(\alpha)}{\partial\alpha} = 2\Delta e^{-2\alpha\Delta} + 2\Delta e^{-\alpha\Delta}(1 - e^{-\alpha\Delta})(1 - e^{-2\alpha j\Delta}) + 2j\Delta(1 - e^{-\alpha\Delta})^2 e^{-2\alpha j\Delta}$

we obtain the system of score equations:

$\frac{\partial l}{\partial\sigma} = -\frac{n}{\sigma} + \frac{2\alpha}{\sigma^3}\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))^2}{E_{j,2}(\alpha)} = 0$

$\frac{\partial l}{\partial\mu} = \frac{2\alpha}{\sigma^2}(1 - e^{-\alpha\Delta})\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))e^{-\alpha j\Delta}}{E_{j,2}(\alpha)} = 0$

$\frac{\partial l}{\partial\alpha} = \frac{n}{2\alpha} - \frac{1}{2}\sum_{j=1}^n \frac{1}{E_{j,2}(\alpha)}\frac{\partial E_{j,2}(\alpha)}{\partial\alpha} - \frac{1}{\sigma^2}\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))^2}{E_{j,2}(\alpha)} + \frac{2\alpha}{\sigma^2}\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))}{E_{j,2}(\alpha)}\frac{\partial\mu_{j,\beta}(\alpha)}{\partial\alpha} + \frac{\alpha}{\sigma^2}\sum_{j=1}^n \frac{(x_{j\Delta} - \mu_{j,\beta}(\alpha))^2}{E_{j,2}(\alpha)^2}\frac{\partial E_{j,2}(\alpha)}{\partial\alpha} = 0$

where $\frac{\partial\mu_{j,\beta}(\alpha)}{\partial\alpha} = \mu\Delta e^{-\alpha j\Delta}\left(e^{-\alpha\Delta} - j(1 - e^{-\alpha\Delta})\right)$. Details are left to the appendix.
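The score equation $\partial l/\partial\sigma = 0$ gives $\hat\sigma$ in closed form once $\alpha$ and $\mu$ are fixed. The sketch below applies it to simulated data with $\alpha$ and $\mu$ held at their true, known values, a deliberate simplification; jointly maximizing over all three parameters would require a numerical optimizer. Parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, alpha, sigma, dt, n = 0.5, 5.0, 0.8, 1 / 250, 100_000

# simulate the mean-reverting Black-Scholes log-price exactly (Gaussian AR(1))
a = np.exp(-alpha * dt)
eps = rng.normal(0.0, sigma * np.sqrt((1 - a * a) / (2 * alpha)), n)
Y = np.zeros(n + 1)
for j in range(n):
    Y[j + 1] = mu * (1 - a) + a * Y[j] + eps[j]
X = np.diff(Y)

j = np.arange(n)
mean_j = mu * np.exp(-alpha * j * dt) * (1 - a)    # mu_{j,beta}(alpha)
E_j2 = (1 - a * a) + (1 - a) ** 2 * (1 - np.exp(-2 * alpha * j * dt))

# closed-form solution of dl/dsigma = 0 with alpha and mu known
sigma_hat = np.sqrt(2 * alpha * np.mean((X - mean_j) ** 2 / E_j2))
```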
Example 12.
MRMe.
The parameter vector is $\theta = (\alpha, \mu, \sigma, \lambda, \mu_J, \sigma_J)$, with:

$K_1(u;\theta) = \int_{j\Delta}^{(j+1)\Delta} \varphi_\xi(ue^{-\alpha((j+1)\Delta - s)})\,ds = \frac{1}{\alpha}\left[A(u, 0, (j+1)\Delta) - A(ue^{-\alpha\Delta}, 0, j\Delta)\right]$

$K_2(u;\theta) = \int_0^{j\Delta} \varphi_\xi(u(e^{-\alpha\Delta} - 1)e^{-\alpha(j\Delta - s)})\,ds = \frac{1}{\alpha}A(u(e^{-\alpha\Delta} - 1), 0, j\Delta)$

$J(x_{j\Delta};\theta) = \int_{\mathbb{R}} \exp\left(-iu(x_{j\Delta} - \mu_{j,\beta}(\alpha)) + \lambda(K_1(u;\theta) + K_2(u;\theta))\right)\exp\left(-\frac{\sigma^2_{j,\beta}(\alpha)}{2}u^2\right)du$

Hence equation (22) applies with these expressions. Condition (19) allows the interchange of derivative and integral, leading to:

$D_k J(x_{j\Delta};\theta) = \int_{\mathbb{R}} D_k\left[\exp\left(-iu(x_{j\Delta} - \mu_{j,\beta}(\alpha)) + \lambda(K_1(u;\theta) + K_2(u;\theta))\right)\exp\left(-\frac{\sigma^2_{j,\beta}(\alpha)}{2}u^2\right)\right]du$
Generalized Estimation using the Empirical Characteristic Function.
Empirical characteristic function methods have been studied in several papers; see Yu (2003) and the references therein for an account of this approach. The log-returns of the US dollar-bitcoin exchange rate under a mean-reverting Levy model fall within the framework of independent, although non-stationary, data.
We define the empirical characteristic function (ECF) as:

$\hat\varphi_{X_\Delta}(u) = \frac{1}{n}\sum_{j=1}^n \exp(iuX_{j\Delta})$

The characteristic function is re-written $\varphi_{X_{j\Delta}}(u) := \varphi_{X_{j\Delta}}(u;\theta)$ to emphasize the dependence on the unknown parameter. This parameter includes the mean-reverting, diffusion and jump parameters, namely $\theta = (\alpha, \mu, \sigma, \lambda, \theta_\xi) \in \mathbb{R}^d$, where $\theta_\xi$ are the parameters of the probability distribution of the jumps.
The estimating functions $f(X_{j\Delta};\theta): \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^{2L}$ and the estimating equations are defined as:

$h(u, X_{j\Delta};\theta) = \exp(iuX_{j\Delta}) - \varphi_{X_{j\Delta}}(u;\theta)$
$f(X_{j\Delta};\theta) = \left(\mathrm{Re}\,h(u_1, X_{j\Delta};\theta), \ldots, \mathrm{Re}\,h(u_L, X_{j\Delta};\theta), \mathrm{Im}\,h(u_1, X_{j\Delta};\theta), \ldots, \mathrm{Im}\,h(u_L, X_{j\Delta};\theta)\right)$
$\frac{1}{n}\sum_{j=1}^n f(X_{j\Delta};\theta) = 0$

where $u_k = -\eta + \delta k$, $k = 1, 2, \ldots, L$, is an equally spaced grid of mesh $\delta = \frac{2\eta}{L}$ on the interval $[-\eta, \eta]$ where the estimating functions are evaluated.
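A minimal sketch of ECF matching: since the paper's full model CF is involved, the toy below matches the ECF of an i.i.d. Gaussian sample (a stand-in for the log-return series, which in the model is independent but not identically distributed) against the Gaussian model CF on a fixed grid, with identity weighting instead of the optimal $\hat\Omega$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
true_sigma = 0.02
# i.i.d. Gaussian toy sample standing in for the log-return series
x = rng.normal(0.0, true_sigma, 20_000)

u = np.linspace(-60.0, 60.0, 21)                   # grid u_k on [-eta, eta]
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)     # empirical characteristic function

def objective(s):
    model = np.exp(-0.5 * s ** 2 * u ** 2)         # model CF for N(0, s^2)
    h = ecf - model                                # estimating functions h(u_k, .)
    return np.sum(h.real ** 2 + h.imag ** 2)       # identity weighting (Omega = I)

sigma_hat = minimize_scalar(objective, bounds=(0.001, 0.1), method='bounded').x
```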
See Feuerverger and McDunnough (1981) for a discussion of the optimal choice of the points $u_k$. The GMM estimator $\hat\theta_{GMM}$ is obtained as:

$\hat\theta_{GMM} = \mathrm{argmin}_\theta\left(\frac{1}{n}\sum_{j=1}^n f(X_{j\Delta};\theta)\right)\hat\Omega^{-1}\left(\frac{1}{n}\sum_{j=1}^n f(X_{j\Delta};\theta)\right)'$

where $\hat\Omega$ is a consistent estimator of the matrix:

$\Omega = \begin{pmatrix} \Omega_{RR} & \Omega_{RI} \\ \Omega_{IR} & \Omega_{II} \end{pmatrix}$

with components:

$(\Omega_{RR})_{jk} = \frac{1}{2}\left(\mathrm{Re}(\varphi_{X_{j\Delta}}(u_j + u_k;\theta)) + \mathrm{Re}(\varphi_{X_{j\Delta}}(u_j - u_k;\theta))\right) - \mathrm{Re}(\varphi_{X_{j\Delta}}(u_j;\theta))\,\mathrm{Re}(\varphi_{X_{j\Delta}}(u_k;\theta))$

$(\Omega_{RI})_{jk} = \frac{1}{2}\left(\mathrm{Im}(\varphi_{X_{j\Delta}}(u_j + u_k;\theta)) - \mathrm{Im}(\varphi_{X_{j\Delta}}(u_j - u_k;\theta))\right) - \mathrm{Re}(\varphi_{X_{j\Delta}}(u_j;\theta))\,\mathrm{Im}(\varphi_{X_{j\Delta}}(u_k;\theta))$

$(\Omega_{II})_{jk} = \frac{1}{2}\left(\mathrm{Re}(\varphi_{X_{j\Delta}}(u_j - u_k;\theta)) - \mathrm{Re}(\varphi_{X_{j\Delta}}(u_j + u_k;\theta))\right) - \mathrm{Im}(\varphi_{X_{j\Delta}}(u_j;\theta))\,\mathrm{Im}(\varphi_{X_{j\Delta}}(u_k;\theta))$

$\Omega_{IR} = \Omega_{RI}'$

A continuum choice of $u$, see Carrasco and Florens (2002), leads to the estimator $\theta_C$ verifying:

$\theta_C = \mathrm{argmin}_\theta \left\|\hat\varphi_{X_\Delta} - \varphi_{X_{j\Delta}}(\cdot;\theta)\right\|_W$

where $\|f\|^2_W = \int_{\mathbb{R}} |f(u)|^2 \exp(-u^2)\,du$ and:

(23)   $\varphi_{X_{j\Delta}}(u) = \exp\left[\lambda(K_1(u;\theta) + K_2(u;\theta)) - \lambda(j+1)\Delta + i\mu e^{-\alpha j\Delta}(1 - e^{-\alpha\Delta})u - \frac{\sigma^2}{4\alpha}E_{j,2}(\alpha)u^2\right]$

5. Pricing bitcoin options
We study the pricing of a European call option. Its payoff is given by:

$h = (S_T - K)_+ := \max(S_T - K, 0), \qquad H(y) = (e^y - K)_+$

We give the following basic result for the pricing of a European contract by Fourier inversion. It is adapted from Carr and Madan (1999) to these specific models. We introduce a damping factor $R$ for stability; see Raible (2001).

Proposition 13.
Consider a dynamic driven by equations (1), (2) and (12) under an EMM $Q^\theta$ obtained by an Esscher transformation, and a European call option with strike price $K$ and maturity $T$. Assume there exists a real value $R > 1$ such that $E_Q[e^{RV_t}] < +\infty$. Then, the price of the contract is given by:

(25)   $C := C(x_0) = \frac{1}{2\pi}e^{Rx_0 - rT}\int_{\mathbb{R}} e^{-ix^*x_0}\,\varphi^\theta_{Y_T}(-iR - x^*)\,\hat H_R(x^*)\,dx^*$

where $x_0 = \log S_0$, $k = \log K$ and:

$\hat H_R(x^*) = K e^{(ix^* - R)k}\left(\frac{1}{ix^* - R} - \frac{1}{ix^* - R + 1}\right)$

Proof.
We write $Y_T \sim Q^\theta_{Y_T}(dy)$, where $Q^\theta_{Y_T}$ is the probability distribution of $Y_T$ under the EMM $Q^\theta$. Denoting $H_R(y) = e^{-Ry}H(y) \in L^1(\mathbb{R})$ we have:

$C(x_0) = e^{-rT}E_{Q^\theta}[H(Y_T + x_0)] = e^{-rT}\int_{\mathbb{R}} H(y + x_0)\,Q^\theta_{Y_T}(dy) = e^{-rT}\int_{\mathbb{R}} e^{R(y + x_0)}H_R(y + x_0)\,Q^\theta_{Y_T}(dy)$
$= \frac{1}{2\pi}e^{Rx_0 - rT}\int_{\mathbb{R}} e^{Ry}\left[\int_{\mathbb{R}} e^{-i(y + x_0)x^*}\hat H_R(x^*)\,dx^*\right]Q^\theta_{Y_T}(dy)$
$= \frac{1}{2\pi}e^{Rx_0 - rT}\int_{\mathbb{R}} e^{-ix_0 x^*}\left[\int_{\mathbb{R}} e^{(R - ix^*)y}\,Q^\theta_{Y_T}(dy)\right]\hat H_R(x^*)\,dx^*$
$= \frac{1}{2\pi}e^{Rx_0 - rT}\int_{\mathbb{R}} e^{-ix^*x_0}\,\varphi^\theta_{Y_T}(-iR - x^*)\,\hat H_R(x^*)\,dx^*$

On the other hand:

$\hat H_R(x^*) = \int_{\mathbb{R}} e^{ix^*y}H_R(y)\,dy = \int_k^{+\infty} e^{(ix^* - R)y}(e^y - e^k)\,dy = -\frac{e^{(ix^* - R + 1)k}}{ix^* - R + 1} + e^k\frac{e^{(ix^* - R)k}}{ix^* - R}$  □

The integral in equation (25) is efficiently calculated by an FFT approach. To this end we define the grids:

$x^*_k = -M + \eta k, \quad k = 0, 1, \ldots, n - 1$
$x_{0,j} = x_{0,m} + \delta j, \quad j = 0, 1, \ldots, n - 1$

where $\eta = \frac{2M}{n}$ and $\delta = \frac{\pi}{M}$ are their corresponding mesh sizes, while $n$ is the number of points of both grids, typically a power of two. We apply the trapezoid rule after truncating the integral to the interval $[-M, M]$:

$C(x_{0,j}) \simeq \frac{1}{2\pi}e^{Rx_{0,j} - rT}\sum_{k=0}^{n-1} w_k\,e^{-ix_{0,j}x^*_k}\,\varphi^\theta_{Y_T}(-iR + M - \eta k)\,\hat H_R(-M + \eta k)\,\eta$
$= \frac{1}{2\pi}e^{Rx_{0,j} - rT}e^{iM(x_{0,m} + \delta j)}\sum_{k=0}^{n-1} w_k\,\varphi^\theta_{Y_T}(-iR + M - \eta k)\,\hat H_R(-M + \eta k)\,\eta\,e^{-ix_{0,m}\eta k}e^{-i\frac{2\pi}{n}jk}$
$= \frac{1}{2\pi}e^{Rx_{0,j} - rT}e^{iM(x_{0,m} + \delta j)}\,\mathrm{fft}(h_k)_j$

with:

$h_k = w_k\,\eta\,\varphi^\theta_{Y_T}(-iR + M - \eta k)\,\hat H_R(-M + \eta k)\,e^{-ix_{0,m}\eta k}$

and trapezoid weights $w_0 = w_{n-1} = \frac{1}{2}$, $w_k = 1$ otherwise. The expression $\mathrm{fft}(h_k)$ denotes the Fast Fourier Transform of the sequence $(h_k)$.
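To validate the damped-Fourier machinery independently of the mean-reverting model, the sketch below prices a call in the plain Black-Scholes setting, where $\varphi_{Y_T}$ is Gaussian and a closed-form benchmark exists, using the standard Carr-Madan representation with damping factor $a$ and a trapezoid discretization as above. All inputs are hypothetical example values:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed-form call price, used as the benchmark."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def call_fourier(S0, K, r, sigma, T, a=1.5, vmax=100.0, n=4096):
    """Carr-Madan: C(k) = e^{-a k}/pi * int_0^inf Re[e^{-ivk} psi(v)] dv,
    psi(v) = e^{-rT} phi(v - (a+1)i) / (a^2 + a - v^2 + i(2a+1)v),
    with phi the CF of log S_T under the risk-neutral measure."""
    k = np.log(K)
    v = np.linspace(1e-8, vmax, n)
    m = np.log(S0) + (r - 0.5 * sigma ** 2) * T
    phi = lambda z: np.exp(1j * z * m - 0.5 * sigma ** 2 * T * z ** 2)
    psi = np.exp(-r * T) * phi(v - (a + 1) * 1j) / (a * a + a - v * v + 1j * (2 * a + 1) * v)
    integrand = np.real(np.exp(-1j * v * k) * psi)
    dv = v[1] - v[0]
    integral = dv * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
    return np.exp(-a * k) / np.pi * integral
```

Pricing the mean-reverting jump-diffusion instead only requires swapping in $\varphi^\theta_{Y_T}$ from equation (14).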
6. Acknowledgments
This research has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
7. Conclusions