Finite Mixture Approximation of CARMA(p,q) Models
Lorenzo Mercuri, Andrea Perchiazzo, Edit Rroji

May 25, 2020
Abstract
In this paper we show how to approximate, by means of the Gauss-Laguerre quadrature, the transition density of a CARMA(p,q) model driven by a time changed Brownian Motion. We then provide an analytical formula for option prices when the log price follows a CARMA(p,q) model. We also propose an estimation procedure based on the approximated likelihood density.
1 Introduction

The aim of this paper is to provide a simple approximation procedure for the transition density of a Continuous Autoregressive Moving Average model driven by a Time Changed Brownian Motion. The Continuous Autoregressive Moving Average (CARMA hereafter) model with Gaussian transition density was first introduced in [12] as a continuous-time counterpart of the well known ARMA process defined in discrete time. Recently this model has gained significant attention in the literature due to the relaxation of the Gaussianity assumption. A Lévy CARMA model was proposed in [8], where the associated marginal distribution is allowed to be skewed and fat-tailed. These features increase the appeal of these processes, especially for modeling financial time series [see for example [10, 18] and references therein]. Indeed, in CARMA(p,q) models it is possible to work directly with market data without being forced to consider the equally spaced time grid required by discrete-time models such as the ARMA(p,q) model. The CARMA(p,q) process can be seen as a generalization of the Ornstein-Uhlenbeck (OU) process. The OU process is not sufficiently flexible for financial applications since its autocorrelation function decreases monotonically (as a negative exponential). In this context, the CARMA(p,q) model is useful as it is able to capture a more complex shape of the dependence structure, as discussed in [9]. Its nice statistical and mathematical properties make this class of continuous-time models very suitable for modeling commodities [24, 6], interest rates [3], mortality intensity [16], spot electricity prices [14] and temperature [7]. In order to apply the CARMA model to real data, for the evaluation of derivatives on commodities and/or of insurance contracts, it is necessary to know the transition density of the process. In the case of a CARMA(p,q) model where the driving noise is a Brownian motion, the transition density is Gaussian.
Therefore, an estimation procedure [see [27] for details] can be obtained directly by combining the Gaussian likelihood function with the Kalman Filter, while for the pricing of financial/insurance contracts we only have to compute the expected value of a transformation of a Gaussian random variable. We refer for instance to the pricing formula for options on futures derived in [25], where the log-spot price is a Gaussian CARMA(p,q) process. Similar results are obtained for interest rate derivatives [see [3] for details]. The main contribution of this paper is to propose a finite mixture of normals that approximates the transition density of a Time Changed Brownian Motion CARMA(p,q) process (TCBm-CARMA hereafter). This approximation increases the appeal of the CARMA model in practical applications since, as a finite mixture of normals, it has a level of computational complexity similar to that of the Gaussian CARMA, both for estimation on real data and for the evaluation of financial and insurance contracts. The choice of a Time Changed Brownian Motion (TCBm) as driving noise also increases the ability of the CARMA model to capture the statistical features of the data. In the case of the TCBm-CARMA, our results generalize in a straightforward manner the estimation procedure in [18] based on the Quasi-Gaussian Maximum Likelihood (QGMLE) contrast function [see [31, 23] and references therein for a complete discussion of the QGMLE procedure]. Indeed, we do not need a two-step procedure but are able to estimate the autoregressive, moving average and Lévy measure parameters at the same time. Pricing formulas of financial contracts are again simple convex linear combinations of Gaussian pricing formulas.
For instance, for options written on futures we obtain a convex linear combination of the pricing formulas in [25]. The construction of our approximated transition density for a TCBm-CARMA(p,q) model is based on two main components: the dyadic Riemann sum approximation of a stochastic integral [see [4] for a complete discussion] and the Gauss-Laguerre quadrature [see [1] for more details]. The main idea behind this approach is to approximate the distribution of the subordinator process at unit time with a discrete random variable whose realizations are the zeros of the Laguerre polynomial of a fixed order and whose probabilities are obtained using the Gauss-Laguerre quadrature. To our knowledge, the first authors to apply this approach, in two different contexts, were [21] for option pricing purposes and [19] for the estimation of the Variance Gamma distribution using the EM algorithm proposed by [11]. Recently, several authors have used Laguerre polynomials to derive approximated closed formulas for the pricing of financial contracts [see [29] and references therein] and insurance contracts [see [32] and references therein] for some specific exponential Lévy processes. A comparison of some numerical techniques, including the Gauss-Laguerre quadrature, for pricing derivatives under an exponential Variance Gamma process was presented in [2]. The paper is organized as follows. Section 2 reviews the Gauss-Laguerre approximation for a Normal Variance Mean Mixture random variable. In Section 3 we extend the Gauss-Laguerre approximation to the transition density of a TCBm-CARMA(p,q) model and propose an estimation method that maximizes the approximated likelihood function. In Section 4 we discuss how to apply our approximated density to the evaluation of a transformation of the exponential TCBm-CARMA(p,q) model. In particular, we derive specific formulas for the futures term structure and for option prices on futures.
Section 5 concludes the paper.
2 The Gauss-Laguerre Approximation of a Normal Variance Mean Mixture

First we recall the formal definition of a Normal Variance Mean Mixture discussed in [5]. A random variable Y is a Normal Variance Mean Mixture if:

Y = \mu + \theta \Lambda + \sigma \sqrt{\Lambda}\, Z,    (1)

where Z \sim N(0, 1) is independent of \Lambda, and \Lambda is a positive random variable with density f(u) defined as:

f(u) = e^{-\varphi_+ u}\, u^{\lambda - 1}\, L_\theta(u)\, \mathbb{1}_{\{u \geq 0\}},    (2)

with \varphi_+ \geq 0 and L_\theta(u): [0, +\infty) \to [0, +\infty) a slowly varying function, i.e.:

\lim_{u \to +\infty} \frac{L(\alpha u)}{L(u)} = 1.

In order to construct a discrete version of the random variable \Lambda, we use the Gauss-Laguerre quadrature. Let f(x) be a function with support [0, +\infty) such that

\int_0^{+\infty} f(x)\, e^{-x}\, dx < +\infty;
we then have the following approximation:

\int_0^{+\infty} f(x)\, e^{-x}\, dx \approx \sum_{i=1}^{m} w(k_i)\, f(k_i),    (3)

where k_i is the i-th root of the Laguerre polynomial L_m(x) and the weights w(k_i), i = 1, \ldots, m, are:

w(k_i) = \frac{k_i}{(m+1)^2 \left[L_{m+1}(k_i)\right]^2}.

We start from the moment generating function of the random variable \Lambda:

E\left[e^{c\Lambda}\right] = \int_0^{+\infty} e^{cu}\, e^{-\varphi_+ u}\, u^{\lambda - 1}\, L_\theta(u)\, du.    (4)

Posing k = \varphi_+ u in (4), we get:

E\left[e^{c\Lambda}\right] = \int_0^{+\infty} e^{-k}\, e^{c k/\varphi_+} \left(\frac{k}{\varphi_+}\right)^{\lambda - 1} L_\theta\left(\frac{k}{\varphi_+}\right) d\left(\frac{k}{\varphi_+}\right) = \int_0^{+\infty} e^{-k}\, e^{c k/\varphi_+}\, \frac{1}{k} \left(\frac{k}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k}{\varphi_+}\right) dk.

Applying the formula in (3), we have:

E\left[e^{c\Lambda}\right] \approx \sum_{i=1}^{m} e^{c k_i/\varphi_+}\, \frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right).

It is worth noting that

\sum_{i=1}^{m} \frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right) \approx \int_0^{+\infty} e^{-k}\, \frac{1}{k} \left(\frac{k}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k}{\varphi_+}\right) dk.

Using the substitution u = k/\varphi_+ we get:

\sum_{i=1}^{m} \frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right) \approx \int_0^{+\infty} e^{-\varphi_+ u}\, u^{\lambda - 1}\, L_\theta(u)\, du = 1,

therefore we have:

E\left[e^{c\Lambda}\right] \approx \frac{\sum_{i=1}^{m} e^{c k_i/\varphi_+}\, \frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right)}{\sum_{i=1}^{m} \frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right)}.    (5)

The right-hand side of equation (5) can be seen as the moment generating function of a positive random variable \Lambda_m with finite support, whose realizations are u_i = k_i/\varphi_+, i = 1, \ldots, m, with probabilities

P(u_i) = \frac{\frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right)}{\sum_{j=1}^{m} \frac{w(k_j)}{k_j} \left(\frac{k_j}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_j}{\varphi_+}\right)}.    (6)
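As an illustration of (5)-(6), the following sketch (in Python, with illustrative parameter values and hypothetical function names) builds the discrete random variable Λ_m for the Gamma mixing case, where L_θ is constant, and checks its moment generating function against the closed form (1 − c/β)^{−α}; `numpy.polynomial.laguerre.laggauss` returns the Laguerre nodes k_i and weights w(k_i):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def discretize_gamma(alpha, beta, m):
    """Discrete random variable Lambda_m of eqs. (5)-(6) for a Gamma(alpha, beta)
    mixing law: nodes u_i = k_i/beta, masses proportional to w(k_i)/k_i * (k_i/beta)^alpha."""
    k, w = laggauss(m)                 # Laguerre roots k_i and weights w(k_i)
    u = k / beta                       # realizations of Lambda_m
    p = (w / k) * (k / beta) ** alpha  # L_theta is constant for the Gamma case
    p /= p.sum()                       # normalization, as in eq. (5)
    return u, p

alpha, beta, m = 1.0, 1.0, 40
u, p = discretize_gamma(alpha, beta, m)

c = 0.5
mgf_approx = np.sum(np.exp(c * u) * p)     # mgf of Lambda_m
mgf_exact = (1.0 - c / beta) ** (-alpha)   # Gamma mgf, valid for c < beta
print(mgf_approx, mgf_exact)
```

With m = 40 nodes, as in Figure 1, the two moment generating functions agree to high accuracy for moderate values of c.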
The next step is to consider the sequence of random variables Y_m defined as:

Y_m = \mu + \theta \Lambda_m + \sigma \sqrt{\Lambda_m}\, Z,    (7)

with Z \sim N(0, 1) independent of \Lambda_m. For any m the density of Y_m is a finite mixture of normals of the following form:

f_{Y_m}(y) = \sum_{i=1}^{m} \phi\left(y;\, \mu + \theta u_i,\, \sigma^2 u_i\right) P(u_i),    (8)

where \phi(x; a, b) is a normal density at point x with mean a and variance b. Using the definition of \Lambda_m:

f_{Y_m}(y) = \sum_{i=1}^{m} \phi\left(y;\, \mu + \theta \frac{k_i}{\varphi_+},\, \sigma^2 \frac{k_i}{\varphi_+}\right) \frac{\frac{w(k_i)}{k_i} \left(\frac{k_i}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_i}{\varphi_+}\right)}{\sum_{j=1}^{m} \frac{w(k_j)}{k_j} \left(\frac{k_j}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k_j}{\varphi_+}\right)}.    (9)

Applying the Gauss-Laguerre quadrature we get:

f_{Y_m}(y) \xrightarrow{m \to +\infty} \int_0^{+\infty} \phi\left(y;\, \mu + \theta \frac{k}{\varphi_+},\, \sigma^2 \frac{k}{\varphi_+}\right) e^{-k}\, \frac{1}{k} \left(\frac{k}{\varphi_+}\right)^{\lambda} L_\theta\left(\frac{k}{\varphi_+}\right) dk.

Substituting u = k/\varphi_+, we have:

f_{Y_m}(y) \xrightarrow{m \to +\infty} \int_0^{+\infty} \phi\left(y;\, \mu + \theta u,\, \sigma^2 u\right) e^{-\varphi_+ u}\, u^{\lambda - 1}\, L_\theta(u)\, du.

The right-hand side is the density of the random variable in (1). Observe that the approximation discussed here can be applied to three widely used distributions: the Variance Gamma, the Normal Inverse Gaussian and the Generalized Hyperbolic. In all cases, the density of the mixing random variable belongs to the class defined in (2). Indeed, we obtain the density of a Gamma random variable with shape \alpha and rate \beta by posing:

\varphi_+ = \beta, \quad \lambda = \alpha, \quad L_{(\alpha,\beta)}(u) = \frac{\beta^\alpha}{\Gamma(\alpha)};

the density in (8) then approximates the density of a Variance Gamma random variable. The density of an Inverse Gaussian IG(a, b) can be obtained from (2) by posing:

\varphi_+ = \frac{b^2}{2}, \quad \lambda = -\frac{1}{2}, \quad L_{a,b}(u) = \frac{a}{\sqrt{2\pi}}\, e^{ab - \frac{a^2}{2u}}.

In this case we obtain an approximation of the Normal Inverse Gaussian density using (8). The Generalized Inverse Gaussian density with a > 0, b > 0, p \in \mathbb{R} is a special case of (2) when:

\varphi_+ = \frac{a}{2}, \quad \lambda = p, \quad L_{a,b,p}(u) = \frac{(a/b)^{p/2}}{2 K_p\left(\sqrt{ab}\right)}\, e^{-\frac{b}{2u}},

where K_p(x) is a modified Bessel function of the second kind.
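The finite normal mixture (8)-(9) is equally direct to implement. The sketch below (illustrative parameters; the function names are ours) approximates a Variance Gamma density and compares the mixture moment generating function with the closed-form VG expression e^{cμ}(1 − (cθ + c²σ²/2)/β)^{−α}:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def gamma_nodes(alpha, beta, m):
    # discrete mixing law of eq. (6) for a Gamma(alpha, beta) random variable
    k, w = laggauss(m)
    u = k / beta
    p = (w / k) * (k / beta) ** alpha
    p /= p.sum()
    return u, p

def mixture_density(y, mu, theta, sigma, u, p):
    # finite mixture (8): component i has mean mu + theta*u_i and variance sigma^2*u_i
    y = np.atleast_1d(y)[:, None]
    var = sigma ** 2 * u
    comp = np.exp(-(y - (mu + theta * u)) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return (comp * p).sum(axis=1)

mu, theta, sigma, alpha, beta, m = 0.0, -0.1, 0.2, 1.0, 1.0, 40
u, p = gamma_nodes(alpha, beta, m)
dens = mixture_density(0.0, mu, theta, sigma, u, p)[0]

# mgf of the mixture versus the closed-form Variance Gamma mgf
c = 0.5
mgf_mix = np.sum(np.exp(c * (mu + theta * u) + 0.5 * c ** 2 * sigma ** 2 * u) * p)
mgf_vg = np.exp(c * mu) * (1.0 - (c * theta + 0.5 * c ** 2 * sigma ** 2) / beta) ** (-alpha)
print(dens, mgf_mix, mgf_vg)
```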
Using (8) we approximate the density of a Generalized Hyperbolic distribution.

Figure 1 shows the behavior of the analytic and approximated moment generating functions for the Gamma, Variance Gamma, Inverse Gaussian and Normal Inverse Gaussian models. To generate the approximated moment generating function we use m = 40. In Appendix 6.1 we derive the Expectation Maximization algorithm for the approximated density in (9).
Figure 1: Comparison between the theoretical and approximated moment generating functions for a Γ(1,1), the corresponding symmetric Variance Gamma, an Inverse Gaussian and its associated symmetric Normal Inverse Gaussian centered in zero.
3 CARMA(p,q) Models Driven by a Time Changed Brownian Motion

In this section, we review the main features of Lévy CARMA(p,q) models. The CARMA model, first introduced by [12] as a continuous-time generalization of the Gaussian ARMA model, has recently seen rapid development in different areas due to the substitution of the Brownian Motion with a general Lévy process as driving noise [see [8] for a discussion of a CARMA model driven by a Lévy process with finite second-order moments]. The formal definition of a Lévy CARMA(p,q) model Y_t with p > q \geq 0 is:

Y_t = \mathbf{b}^\top X_t,    (10)

where X_t satisfies:

dX_t = A X_{t-}\, dt + \mathbf{e}\, dZ_t.    (11)

\{Z_t\}_{t \geq 0} is a Lévy process, and the p \times p matrix A is defined as:
A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_p & -a_{p-1} & -a_{p-2} & \cdots & -a_1 \end{bmatrix}.

The vectors \mathbf{e} and \mathbf{b} with dimension p \times 1 are \mathbf{e} = [0, 0, \ldots, 0, 1]^\top and \mathbf{b} = [b_0, b_1, \ldots, b_{p-1}]^\top with b_{q+1} = \ldots = b_{p-1} = 0. Given the initial point X_s, the solution of Eq. (11) is:

X_t = e^{A(t-s)} X_s + \int_s^t e^{A(t-u)}\, \mathbf{e}\, dZ_u, \quad \forall\, t > s,

where e^A = \sum_{h=0}^{+\infty} \frac{1}{h!} A^h.

We report in the following the scale property of a CARMA(p,q) process. This property introduces a constraint between the Lévy measure parameters and the moving average vector. Indeed, it is possible to introduce a new Lévy process L_t defined as:

L_t = \frac{1}{a} Z_t, \quad a > 0.

Defining the state process \tilde{X}_t = \frac{1}{a} X_t and the new moving average vector \tilde{\mathbf{b}} = a\, \mathbf{b}, the CARMA(p,q) process in (10) can be written equivalently as Y_t = \tilde{\mathbf{b}}^\top \tilde{X}_t, where \tilde{X}_t satisfies the stochastic differential equation:

d\tilde{X}_t = A \tilde{X}_{t-}\, dt + \mathbf{e}\, dL_t.

As reported in [10], under the assumption that all eigenvalues \lambda_1, \ldots, \lambda_p of the matrix A are distinct and have negative real part, we can write the CARMA(p,q) model as a sum of a finite number of continuous autoregressive models of order 1, i.e. CAR(1) models. Therefore:

Y_t = \mathbf{b}^\top e^{A(t-s)} X_s + \int \sum_{i=1}^{p} \left[\alpha(\lambda_i)\, e^{\lambda_i (t-u)}\right] I_{\{s \leq u \leq t\}}\, dZ_u,    (12)

with \alpha(\lambda_i) = \frac{b(\lambda_i)}{a'(\lambda_i)}, where a'(z) denotes the derivative of a(z) and where a(z) and b(z) are the polynomials:

a(z) = z^p + a_1 z^{p-1} + \ldots + a_p, \qquad b(z) = b_0 + b_1 z + \ldots + b_{p-1} z^{p-1}.

Under the additional requirement of the existence of a cumulant generating function for Z_1, the conditional moment generating function of a CARMA(p,q) model Y_t given the information at time s < t is:

E_s\left[e^{c Y_t}\right] = e^{c\, \mathbf{b}^\top e^{A(t-s)} X_s} \exp\left\{\int_s^t \kappa\left(c \sum_{i=1}^{p} \alpha(\lambda_i)\, e^{\lambda_i (t-u)}\right) du\right\},    (13)

with \kappa(c) = \ln E\left[e^{c Z_1}\right] < +\infty.
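The state-space objects above can be checked numerically. In the sketch below (a CARMA(2,1) with illustrative coefficients chosen so that the eigenvalues of A are −0.4 and −0.8), the kernel b^⊤ e^{Ah} e computed via the matrix exponential is compared with the CAR(1) decomposition (12), i.e. with Σ_i b(λ_i)/a′(λ_i) e^{λ_i h}:

```python
import numpy as np
from scipy.linalg import expm

def carma_matrices(a, b):
    """Companion matrix A, input vector e and moving average vector b
    of a CARMA(p,q) model, following (10)-(11)."""
    p = len(a)
    A = np.zeros((p, p))
    A[:-1, 1:] = np.eye(p - 1)
    A[-1, :] = -np.asarray(a, dtype=float)[::-1]   # last row: -a_p, ..., -a_1
    e = np.zeros(p)
    e[-1] = 1.0
    bvec = np.zeros(p)
    bvec[:len(b)] = b
    return A, e, bvec

a = [1.2, 0.32]   # a(z) = z^2 + 1.2 z + 0.32, roots -0.4 and -0.8
b = [1.0, 0.5]    # b(z) = 1 + 0.5 z, i.e. a CARMA(2,1)
A, e, bvec = carma_matrices(a, b)
lam = np.linalg.eigvals(A)   # eigenvalues = roots of a(z), both negative here

# kernel b' e^{Ah} e versus the decomposition (12): sum_i b(l_i)/a'(l_i) e^{l_i h}
h = 0.7
g_matrix = bvec @ expm(A * h) @ e
g_spectral = sum((1 + 0.5 * l) / (2 * l + 1.2) * np.exp(l * h) for l in lam).real
print(g_matrix, g_spectral)
```

At h = 0 both expressions reduce to b^⊤e = b_1 = 0.5, and they coincide for every h > 0.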
Once the state variable X_s is filtered from observable data, the result in (13) can be used, from a theoretical point of view, to compute the transition density from time s to time t by means of the Fourier transform, since the characteristic function is obtained from the moment generating function evaluated at iu. In order to get an estimate of the state variable from the observed data Y_{t_0}, Y_{t_1}, \ldots, Y_{t_i}, \ldots, it is possible to use the approach discussed in [10] and recently implemented in [18]. As a first step, the vector \hat{X}_{q,t} containing the first q components of X_t can be written in terms of Y_{t-} as follows:

d\hat{X}_{q,t} = B \hat{X}_{q,t-}\, dt + \mathbf{e}_q Y_{t-}\, dt,    (14)

where
B = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -b_0 & -b_1 & -b_2 & \cdots & -b_{q-1} \end{bmatrix}

and \mathbf{e}_q = [0, \ldots, 0, 1]^\top. The remaining p - q components of X_t are obtained from the higher-order derivatives of the first component X_{1,t} of the state vector with respect to time:

X_{j,t} = \frac{\partial^{j-1} X_{1,t}}{(\partial t)^{j-1}}, \quad j = q+1, \ldots, p.

Combining the approach in [10] with the result in (13), it is possible to introduce an estimation procedure for the Lévy CARMA(p,q) model based on the Maximum Likelihood method. This procedure requires the numerical evaluation of two integrals: the first in the definition of the moment generating function (13) and the second in the inversion formula of the characteristic function. In this section, we show that in the case of a Time Changed Brownian motion we can approximate the density using the Laguerre polynomials, overcoming the numerical integration problems that arise in the standard approach. We start by considering the Ornstein-Uhlenbeck case, which does not require the estimation of the state process, and then move to the general CARMA(p,q) model.
3.1 The Time Changed Brownian Ornstein-Uhlenbeck Process

Let (\Omega, \mathcal{F}, \mathbb{F}, P) be a filtered probability space, where \mathbb{F} = (\mathcal{F}_t)_{t \geq 0} is a filtration. The process Y_t is a Time Changed Brownian Ornstein-Uhlenbeck process (TCBm-OU hereafter) if it satisfies the following stochastic differential equation:

dY_t = -a Y_{t-}\, dt + dW_{\Lambda_t}, \quad Y_{t_0} = y_0,    (15)

where W_{\Lambda_t} is a Brownian Motion stopped by the subordinator process \Lambda_t. The solution of the SDE in (15) is:

Y_t = y_0\, e^{-a(t - t_0)} + \int_{t_0}^{t} e^{-a(t-u)}\, dW_{\Lambda_u}.

It is worth noting that the distribution at time 1 of the process W_{\Lambda_t} is a Normal Variance Mean Mixture centered in zero. Defining the \sigma-field \mathcal{G}_{t_0,t} = \sigma\left(\mathcal{F}_{t_0} \cup \sigma\left(\{\Lambda_u\}_{u \leq t}\right)\right) with t_0 < t, we have:

W_{\Lambda_t} - W_{\Lambda_{t_0}} \mid \mathcal{G}_{t_0,t} \sim N(0, \Lambda_t - \Lambda_{t_0}).

The \sigma-field \mathcal{G}_{t_0,t} is crucial for the construction of the approximated transition density of the TCBm-OU process.

Proposition 3.1.
Given the information associated to the \sigma-field \mathcal{G}_{t_0,t}, the conditional distribution of Y_t becomes:

Y_t \mid \mathcal{G}_{t_0,t} \sim N\left(y_0\, e^{-a(t - t_0)},\; \int_{t_0}^{t} e^{-2a(t-u)}\, d\Lambda_u\right).    (16)

In view of (16) and the iterated expected value, the transition density of the TCBm-OU process is determined by the conditional variance:

V_{t_0}^{t} = \int_{t_0}^{t} e^{-2a(t-u)}\, d\Lambda_u.    (17)

We can approximate the integral in (17) with a left Riemann sum on a dyadic grid as follows:

V_{t_0}^{t} \approx V_{t_0}^{t}(n) = \sum_{k=0}^{[2^n(t-t_0)]-1} e^{-2a(t - t_0 - k 2^{-n})} \left(\Lambda_{t_0+(k+1)2^{-n}} - \Lambda_{t_0+k 2^{-n}}\right).    (18)

The increments \Lambda_{t_0+(k+1)2^{-n}} - \Lambda_{t_0+k 2^{-n}} in (18) have a density of the form in (2). Therefore we can approximate their densities using the Laguerre polynomials. To this aim, we first introduce a discrete random variable U_k with realizations u_1, \ldots, u_m and probabilities P(u_1), \ldots, P(u_m) that approximates the k-th increment \Lambda_{t_0+(k+1)2^{-n}} - \Lambda_{t_0+k 2^{-n}}. The random variable V_{t_0}^{t}(n) can then be approximated by a new random variable V_{t_0}^{t}(n, m) defined through the dyadic Riemann sums: each realization has the form

\sum_{k=0}^{[2^n(t-t_0)]-1} e^{-2a(t - t_0 - k 2^{-n})}\, u_{i_k}, \quad (i_0, \ldots, i_{[2^n(t-t_0)]-1}) \in \{1, \ldots, m\}^{[2^n(t-t_0)]},

with probability \prod_{k=0}^{[2^n(t-t_0)]-1} P(u_{i_k}).    (19)

Observe that the random variable V_{t_0}^{t}(n, m) has m^{[2^n(t-t_0)]} realizations. Denoting by V_{t_0}^{t}(n, m, i) the i-th realization of the random variable V_{t_0}^{t}(n, m) and by P\left[V_{t_0}^{t}(n, m, i)\right] its probability, we obtain the following approximated density:

f_{Y_t \mid \mathcal{F}_{t_0}}(y) = \sum_{i=1}^{m^{[2^n(t-t_0)]}} \phi\left(y;\, y_0\, e^{-a(t-t_0)},\, V_{t_0}^{t}(n, m, i)\right) P\left[V_{t_0}^{t}(n, m, i)\right].    (20)
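The construction (18)-(20) can be sketched as follows for a Gamma subordinator, whose increment over a dyadic interval of length 2^{−n} is Gamma(α2^{−n}, β). The code below uses illustrative parameters and the plain Gauss-Laguerre quadrature for simplicity (for α2^{−n} < 1 the generalized quadrature discussed later would be preferable); it enumerates the m^{[2^n(t−t_0)]} realizations of V_{t_0}^t(n, m) and evaluates the mixture density (20) at one point:

```python
import numpy as np
from itertools import product
from numpy.polynomial.laguerre import laggauss

def discretize_gamma(shape, rate, m):
    # Gauss-Laguerre discretization, eq. (6); L_theta is constant for a Gamma law
    k, w = laggauss(m)
    u = k / rate
    p = (w / k) * (k / rate) ** shape
    p /= p.sum()
    return u, p

def V_realizations(a, t0, t, n, m, alpha, beta):
    """All realizations of V_{t0}^t(n, m), eq. (19), for a Gamma subordinator:
    left Riemann sum of int exp(-2a(t-u)) dLambda_u on a dyadic grid."""
    N = int(2 ** n * (t - t0))                    # number of dyadic subintervals
    dt = 2.0 ** (-n)
    u, p = discretize_gamma(alpha * dt, beta, m)  # discrete law of one increment
    w = np.exp(-2 * a * (t - t0 - np.arange(N) * dt))
    vals, probs = [], []
    for idx in product(range(m), repeat=N):       # the m**N combinations
        vals.append(sum(w[k] * u[i] for k, i in enumerate(idx)))
        probs.append(np.prod([p[i] for i in idx]))
    return np.array(vals), np.array(probs)

a, t0, t, n, m, alpha, beta = 0.25, 0.0, 1.0, 2, 2, 1.0, 1.0
V, P = V_realizations(a, t0, t, n, m, alpha, beta)

# mixture density (20) evaluated at y = 0, starting point y0 = 0.5
y0, y = 0.5, 0.0
mean = y0 * np.exp(-a * (t - t0))
dens = np.sum(np.exp(-(y - mean) ** 2 / (2 * V)) / np.sqrt(2 * np.pi * V) * P)
print(len(V), P.sum(), dens)
```

With n = 2 and m = 2 there are m^{2^n} = 16 realizations; the exponential growth in the number of subintervals is the reason small values of m are used in practice.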
To check the accuracy of this approximation, we compare the theoretical moment generating function of an Ornstein-Uhlenbeck process driven by a Variance Gamma noise, obtained through the result in [16], with the moment generating function of the finite mixture of normals with density (20). Figure 2 reports a graphical comparison of the theoretical and approximated moment generating functions of a VG-CAR(1) with t_0 = 0 and t = 1. The interval [t_0, t) has been divided into subintervals of length \Delta t = 2^{-4}; with m = 2 we get 65536 realizations of the random variable V_{t_0}^{t}(n, m).

The moment generating function of the TCBm-OU process follows from the iterated expected value:

E_{\mathcal{F}_{t_0}}\left[E\left[e^{c Y_t} \mid \mathcal{G}_{t_0,t}\right]\right] = e^{c y_0 e^{-a(t-t_0)}}\, E_{\mathcal{F}_{t_0}}\left[e^{\frac{c^2}{2} \int_{t_0}^{t} e^{-2a(t-u)}\, d\Lambda_u}\right] = e^{c y_0 e^{-a(t-t_0)} + \int_{t_0}^{t} \kappa_\Lambda\left(\frac{c^2}{2} e^{-2a(t-u)}\right) du},

where \kappa_\Lambda(u) = \ln E\left[e^{u \Lambda_1}\right]. This quantity can alternatively be obtained by applying the result in [13].
Figure 2: Comparison of the theoretical and approximated moment generating functions for a VG-CAR(1) model.

The result in (20) can be used to construct a Maximum Likelihood estimation procedure. In the following we perform a simulation and estimation study for the VG-CAR(1) model. As benchmark we use the Quasi-Gaussian Likelihood method, extended to SDEs driven by a standardized Lévy noise, introduced in [23]. We perform the following steps:

1. We simulate a sample path of a VG-CAR(1) model where a = 0.25, while the distribution at time 1 of the subordinator process is Γ(1, 1).
2. We sample the simulated trajectory on an equally spaced grid with t_0 = 0 and Δt = 1.
3. We estimate the parameters, using the data obtained in step 2, by maximizing the log-likelihood constructed using the Laguerre approximation.

In this section we review the literature on estimation methods for the CARMA(p,q) model driven by a Brownian Motion. As discussed in [27], there are two different approaches for the estimation of a Gaussian CARMA process. The first is based on the frequency-domain representation of the CARMA process: the estimated parameters are obtained by minimizing a distance between the theoretical f(\omega) and the empirical \hat{f}(\omega) spectral densities, for instance:

\operatorname*{argmin}_{a_1, \ldots, a_p,\, b_0, \ldots, b_q} \int_{-\infty}^{+\infty} \left\{ \ln\left[f(\omega)\right] + \frac{\hat{f}(\omega)}{f(\omega)} \right\} d\omega,
Figure 3: Sample path of a VG-OU process.

where

f(\omega) = \frac{b(i\omega)\, b(-i\omega)}{2\pi\, a(i\omega)\, a(-i\omega)}.

The alternative estimation approach is based on the time-domain representation of the CARMA process. In this case, the unobservable state process can be extrapolated using the Kalman filter, and we obtain the estimates of the model parameters by maximizing the log-likelihood function or by minimizing the least-squares error. A detailed description of the Kalman filter and of the construction of the Gaussian log-likelihood function can be found in [18].
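The spectral density and the discretized frequency-domain contrast can be written compactly. The sketch below (illustrative CARMA(2,1) coefficients, unit noise variance; function names are ours) verifies that f(ω) is positive and symmetric:

```python
import numpy as np

def carma_spectral_density(omega, a, b):
    """f(omega) = b(i w) b(-i w) / (2 pi a(i w) a(-i w)) for unit noise variance."""
    iw = 1j * np.asarray(omega)
    av = np.polyval([1.0] + list(a), iw)   # a(z) = z^p + a_1 z^{p-1} + ... + a_p
    bv = np.polyval(list(b)[::-1], iw)     # b(z) = b_0 + b_1 z + ...
    return np.abs(bv) ** 2 / (2 * np.pi * np.abs(av) ** 2)

def whittle_contrast(f_emp, omega, a, b):
    # discretized version of the frequency-domain objective above
    f = carma_spectral_density(omega, a, b)
    return np.sum(np.log(f) + f_emp / f)

omega = np.linspace(-10, 10, 401)
f = carma_spectral_density(omega, [1.2, 0.32], [1.0, 0.5])
val = whittle_contrast(f, omega, [1.2, 0.32], [1.0, 0.5])
print(f.min() > 0, np.allclose(f, f[::-1]), val)
```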
Here we discuss how to estimate the CARMA(p,q) model when the driving noise is a Time Changed Brownian Motion. In this case we propose two alternatives. The first approach combines the Kalman Filter with the approximated transition density of the CARMA(p,q) process, while the second uses the noise-recovery methodology together with the estimation method discussed for the Normal Variance Mean Mixture.
In order to obtain an approximated transition density for a CARMA(p,q) process, we first need to determine the conditional mean and the conditional variance of the state process X_t given the information contained in the \sigma-field \mathcal{G}_{t_0,t} and the state X_{t_0}, defined respectively as:

E\left[X_t \mid \mathcal{G}_{t_0,t}, X_{t_0}\right] = e^{A(t - t_0)} X_{t_0},

Var\left[X_t \mid \mathcal{G}_{t_0,t}, X_{t_0}\right] = \int_{t_0}^{t} e^{A(t-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(t-u)}\, d\Lambda_u.

Therefore the transition density of the CARMA(p,q) model Y_t given \mathcal{G}_{t_0,t} and X_{t_0} is:

Y_t \mid \left(\mathcal{G}_{t_0,t}, X_{t_0}\right) \sim N\left(\mathbf{b}^\top e^{A(t-t_0)} X_{t_0},\; \int_{t_0}^{t} \mathbf{b}^\top e^{A(t-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(t-u)}\, \mathbf{b}\, d\Lambda_u\right).

Defining the quantity V_{t_0}^{t} = \int_{t_0}^{t} \mathbf{b}^\top e^{A(t-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(t-u)}\, \mathbf{b}\, d\Lambda_u, the transition density of the CARMA(p,q) process Y_t given X_{t_0} can be written in the following form:

f_{Y_t \mid X_{t_0}}(y) = \int_0^{+\infty} \phi\left(y;\, \mathbf{b}^\top e^{A(t-t_0)} X_{t_0},\, v\right) g_{V_{t_0}^{t}}(v)\, dv,    (21)

where \phi(y; \mu, \sigma^2) is a normal density with mean \mu and variance \sigma^2, and g_{V_{t_0}^{t}}(v) is the density of V_{t_0}^{t}. As done in Section 3.1, we approximate the integral in V_{t_0}^{t} with a left Riemann sum, obtaining a discrete random variable V_{t_0}^{t}(n, m) whose realizations have the form

\sum_{k=0}^{[2^n(t-t_0)]-1} \mathbf{b}^\top e^{A(t - t_0 - k 2^{-n})}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(t - t_0 - k 2^{-n})}\, \mathbf{b}\; u_{i_k}, \quad (i_0, \ldots, i_{[2^n(t-t_0)]-1}) \in \{1, \ldots, m\}^{[2^n(t-t_0)]},

each with probability \prod_{k=0}^{[2^n(t-t_0)]-1} P(u_{i_k}).    (22)
Thus f_{Y_t \mid X_{t_0}}(y) can be approximated with the finite mixture density function \hat{f}_{Y_t \mid X_{t_0}}(y), which reads:

\hat{f}_{Y_t \mid X_{t_0}}(y) = \sum_{i=1}^{m^{[2^n(t-t_0)]}} \phi\left(y;\, \mathbf{b}^\top e^{A(t-t_0)} X_{t_0},\, V_{t_0}^{t}(n, m, i)\right) P\left[V_{t_0}^{t}(n, m, i)\right],    (23)

where V_{t_0}^{t}(n, m, i) denotes the i-th realization of the random variable V_{t_0}^{t}(n, m) in (22). For the approximated log-likelihood function \hat{L}(\theta) we need to infer the state process X_t. From the estimated process \hat{X}_t, we determine the optimal value of the parameter vector \theta by solving the following optimization problem:

\hat{\theta} = \operatorname*{argmax}_{\theta} \sum_{i=1}^{N} \ln\left[\hat{f}_{Y_{t_i} \mid \hat{X}_{t_{i-1}}}(y_{t_i})\right].

In this paper we consider two alternatives for the estimation of the state process X_t: the Kalman Filter and the filtering approach discussed in Section 3 and proposed in [10]. In the following table we compare the GQMLE approach discussed in [18] for a general Lévy CARMA(p,q) model with our approaches. The labels GL-HF and
GL-HFKF denote the Maximum Likelihood estimation methods based on our approximated transition density; the only difference between them is the method used for filtering the state process from the observable data. In the
GL-HF case, the estimated state process \{\hat{X}_t\}_{t \geq 0} is obtained using the dynamics in (14) [see [10] for more information], while in the GL-HFKF case the standard Kalman Filter is used.
4 Evaluation of Derivative Contracts

In this section we discuss, using the approximated transition density, how to evaluate the expected value of a transformation g(X_T), where X_T can be a Normal Variance Mean Mixture or a CARMA model with a Time Changed Brownian Motion driving noise. In the Normal Variance Mean Mixture case we also discuss the behaviour of the error term, while in the second case we analyze it through a comparison with Monte Carlo simulation. The results here can be applied to extend the option pricing formula for options on futures contracts proposed in [25] for the Gaussian CARMA model. This approach can also be used for the evaluation of the futures term structure.

4.1 Normal Variance Mean Mixture

Starting from the formal definition of a Normal Variance Mean Mixture in (1), we define the sequence of functions E[g(X_T^m) \mid \mathcal{F}] as follows:

E\left[g(X_T^m) \mid \mathcal{F}\right] = \sum_{i=1}^{m} E\left[g\left(\mu + \theta \Lambda_m + \sigma \sqrt{\Lambda_m}\, Z\right) \mid \mathcal{F}, \Lambda_m = u_i\right] P(u_i),    (24)

where \Lambda_m and P(u_i) are defined in (6). The quantity E\left[g\left(\mu + \theta \Lambda_m + \sigma \sqrt{\Lambda_m}\, Z\right) \mid \mathcal{F}, \Lambda_m = u_i\right] is the expectation of a transformation of a Gaussian random variable with mean \mu + \theta u_i and variance \sigma^2 u_i. The formulas proposed in this section can be applied to the evaluation of a contingent claim when the underlying is a transformation of a Time Changed Brownian Motion. In the next section we show a comparison of our approach with a Monte Carlo simulation when the log price is a Variance Gamma process and the function g is the final payoff of a European call option.

Figure 4 shows the behaviour of the European call option price for a varying number of quadrature points in the Gauss-Laguerre approximation. In this example the model parameters are r = 0, a negative skew parameter \theta, \alpha = 1, \beta = 1, underlying price S_0 = 1 and time to maturity T = 1.
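Formula (24) with g a call payoff reduces to a convex combination of lognormal (Black-Scholes type) expectations. The sketch below uses illustrative parameters and hypothetical function names, and takes the drift μ as given (no martingale correction is imposed):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.stats import norm

def gauss_call(S0, K, m, s):
    # E[(S0*exp(m + s*Z) - K)^+] for Z ~ N(0,1): a lognormal closed form
    d2 = (np.log(S0 / K) + m) / s
    d1 = d2 + s
    return S0 * np.exp(m + 0.5 * s ** 2) * norm.cdf(d1) - K * norm.cdf(d2)

def vg_call_laguerre(S0, K, r, T, mu, theta, sigma, alpha, beta, m_pts):
    """European call under an exponential VG log price via eq. (24): a convex
    combination of Gaussian pricing formulas, one per quadrature node u_i."""
    k, w = laggauss(m_pts)
    u = k / beta
    p = (w / k) * (k / beta) ** (alpha * T)   # subordinator at T: Gamma(alpha*T, beta)
    p /= p.sum()
    prices = [gauss_call(S0, K, mu * T + theta * ui, sigma * np.sqrt(ui)) for ui in u]
    return np.exp(-r * T) * np.dot(prices, p)

price = vg_call_laguerre(S0=1.0, K=1.0, r=0.0, T=1.0, mu=0.0,
                         theta=-0.1, sigma=0.2, alpha=1.0, beta=1.0, m_pts=40)
print(price)
```

Note that the Gamma shape parameter of the subordinator at maturity is αT, which is where the condition αT ≥ 1 mentioned below becomes relevant.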
Figure 4: Comparison of prices obtained using Monte Carlo simulation and the Laguerre option pricing formula (24) for an ATM European call option, for a varying number of quadrature points.

We also analyze the behaviour of the approximation for different strike levels in Figure 5 and for varying time to maturity in Figure 6. In the latter case it is important to satisfy the condition \alpha T \geq 1.
Figure 5: Comparison of prices obtained using Monte Carlo simulation and the Laguerre option pricing formula (24) for different levels of the strike price.
Figure 6: Comparison of prices obtained using Monte Carlo simulation and the Laguerre option pricing formula (24) for different times to maturity.

4.2 Time Changed CARMA Process
We discuss here how to extend the general result of Section 4.1 for the Time Changed Brownian Motion to the TCBm-CARMA process. The main idea is to use the approximation of V_t^T introduced in Equation (22). The general pricing formula for a final payoff g(Y_T) can be derived following the same steps as in the previous section. The resulting formula reads:

E\left[g(Y_T) \mid \mathcal{F}_t\right] = \sum_{k=1}^{m^{[2^n(T-t)]}} E\left[g(Y_T^{m,n}) \mid \mathcal{F}_t, V_t^T = V_t^T(m, n, k)\right] P\left(V_t^T(m, n, k)\right),    (25)

where Y_T^{m,n} \mid \mathcal{F}_t, V_t^T = V_t^T(m, n, k) \sim N\left(\mathbf{b}^\top e^{A(T-t)} X_t,\, V_t^T(m, n, k)\right). This result easily finds applications in different financial modeling topics, such as the construction of the futures term structure, option pricing, or bond pricing under the hypothesis that the dynamics of the underlying follows a Time Changed CARMA model.

In the filtered probability space we assume that an equivalent martingale measure Q \sim P exists. We also assume that the price S_t of the commodity asset follows an exponential TCBm-CARMA(p,q) model under the measure Q, defined as:

S_t = S_0\, e^{Y_t},

where Y_t is a CARMA(p,q) model as described in Section 3 and the driving noise is a Time Changed Brownian motion, i.e. L_t = W_{\Lambda_t}, where W_t is a Brownian Motion and \Lambda_t is an independent subordinator process with cumulant generating function k_\Lambda(u) defined as:

k_\Lambda(u) := \ln E\left[e^{u \Lambda_1}\right].

Arbitrage theory is based on the requirement that the price of a future equals the expected value of the price at maturity under the risk-neutral measure Q. Therefore, the log futures price with maturity T \geq t can be written as:

\ln F_t^T = \ln E_Q\left[S_T \mid \mathcal{F}_t\right].    (26)

Defining the \sigma-field \mathcal{G}_t^T = \sigma\left(\mathcal{F}_t \cup \sigma\left(\{\Lambda_u\}_{u \leq T}\right)\right), we have:

W_{\Lambda_T} - W_{\Lambda_t} \mid \mathcal{G}_t^T \sim N(0, \Lambda_T - \Lambda_t).
Using the iterated property of the conditional expected value, equation (26) can be rewritten as:

\ln F_t^T = \ln E_Q\left[E_Q\left(S_T \mid \mathcal{G}_t^T\right) \mid \mathcal{F}_t\right].    (27)

It is worth noticing that the random variable \ln S_T \mid \mathcal{G}_t^T is normally distributed. Since \ln S_T = \ln S_0 + Y_T, we have:

E_Q\left(S_T \mid \mathcal{G}_t^T\right) = \exp\left(\ln S_0 + E_Q\left[Y_T \mid \mathcal{G}_t^T\right] + \frac{1}{2} \mathrm{Var}_Q\left[Y_T \mid \mathcal{G}_t^T\right]\right).

Then:

\ln F_t^T = \ln E_Q\left[e^{\ln S_0}\, e^{E_Q\left[Y_T \mid \mathcal{G}_t^T\right] + \frac{1}{2} \mathrm{Var}_Q\left[Y_T \mid \mathcal{G}_t^T\right]} \mid \mathcal{F}_t\right],    (28)

and, rearranging:

\ln F_t^T = \ln S_0 + \ln E_Q\left[e^{E_Q\left[Y_T \mid \mathcal{G}_t^T\right] + \frac{1}{2} \mathrm{Var}_Q\left[Y_T \mid \mathcal{G}_t^T\right]} \mid \mathcal{F}_t\right].    (29)

At this stage, it is possible to introduce the conditional distribution of the CARMA(p,q) model Y_T driven by a Time Changed Brownian Motion given \mathcal{G}_t^T:

Y_T \mid \mathcal{G}_t^T \sim N\left(\mathbf{b}^\top e^{A(T-t)} X_t,\; \int_t^T \mathbf{b}^\top e^{A(T-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(T-u)}\, \mathbf{b}\, d\Lambda_u\right).

Given this result, we obtain:

\ln F_t^T = \ln S_0 + \ln\left[E_Q\left(e^{\mathbf{b}^\top e^{A(T-t)} X_t + \frac{1}{2} \int_t^T \mathbf{b}^\top e^{A(T-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(T-u)}\, \mathbf{b}\, d\Lambda_u} \mid \mathcal{F}_t\right)\right].    (30)

Simplifying:

\ln F_t^T = \ln S_0 + \mathbf{b}^\top e^{A(T-t)} X_t + \ln\left[E_Q\left(e^{\frac{1}{2} \int_t^T \mathbf{b}^\top e^{A(T-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(T-u)}\, \mathbf{b}\, d\Lambda_u} \mid \mathcal{F}_t\right)\right].    (31)

We use the following theorem, proposed in [13], to evaluate the expected value in (31).

Theorem 4.1.
Let \Lambda_t be a subordinator process with cumulant generating function k_\Lambda(u) and let f(u): [0, +\infty) \to \mathbb{C} be a complex left-continuous function such that |\mathrm{Re}(f)| \leq M; then:

E\left[\exp\left(\int_0^{+\infty} f(u)\, d\Lambda_u\right)\right] = \exp\left(\int_0^{+\infty} k_\Lambda(f(u))\, du\right).

Using the above theorem and the scaling property of the cumulant function, k_{\Lambda_A}(u) = A\, k_{\Lambda_1}(u), we obtain the final result:

\ln F_t^T = \ln S_0 + \mathbf{b}^\top e^{A(T-t)} X_t + \int_t^T k_\Lambda\left(\frac{1}{2}\, \mathbf{b}^\top e^{A(T-u)}\, \mathbf{e}\mathbf{e}^\top\, e^{A^\top(T-u)}\, \mathbf{b}\right) du.    (32)

The approximated transition density of the TCBm-CARMA(p,q) model makes it possible to evaluate the formula in (32) in an easy way. By applying the general result in (25), we get the following approximation:

\ln F_t^T(m, n) = \ln S_0 + \mathbf{b}^\top e^{A(T-t)} X_t + \ln \sum_{k=1}^{m^{[2^n(T-t)]}} e^{\frac{1}{2} V_t^T(m, n, k)}\, P\left(V_t^T(m, n, k)\right).

A numerical comparison of the approximated formula with the pricing results obtained through Monte Carlo simulation is reported below. The MC value is evaluated using 10,000 simulated trajectories of a symmetric VG-CARMA(2,1) model with autoregressive parameters a_1 and a_2, moving average parameters b_0 and b_1 = 1, and a Gamma subordinator process (\Lambda_t)_{t \geq 0} with shape parameter \alpha = 1 and scale parameter \beta = 1. The simulated trajectories are obtained using the Euler discretization scheme for a Lévy CARMA(p,q) model, as described in [18], on a regular grid over [0, T], where T is the maturity of the future. It is worth observing that, since \alpha T <
1, we can use the Generalized Gauss LaguerreQuadrature to avoid numerical issues that may arise due to the singularity at point 0.See Table 1 for the futures term structure and Figures 7-10 for an analysis based on the numberof points m used in the approximation. 15 Lag. MC Ub Lb l l l l l l l l l . . . . . Future Price in a VG−CARMA(2,1) with Maturity T = 1/12 log ( M ) F T Figure 7: Future price with maturity 1 month. l l l l l l l l l . . . . . Future Price in a VG−CARMA(2,1) with Maturity T = 2/12 log ( M ) F T Figure 8: Future price with maturity 2 months.16 l l l l l l l l l l . . . . . . Future Price in a VG−CARMA(2,1) with Maturity T = 3/12 log ( M ) F T Figure 9: Future price with maturity 3 months. l l l l l l l l l l l . . . . . Future Price in a VG−CARMA(2,1) with Maturity T = 4/12 log ( M ) F T Figure 10: Future Price with Maturity 4 Months.17 .2.2 Futures Option Pricing formula in a TCBm CARMA(p,q) model
Here we discuss how to modify our general result in order to extend the futures option pricing formula obtained in [25] for a Gaussian CARMA(p,q) model to the TCBm-CARMA(p,q) model. We do not consider the non-stationary factor $Z_t$ in equation (7) of [25]; we assume that the log price is simply a CARMA(p,q) model with Gaussian innovations. We highlight the fact that the extension to the ABM-CARMA(p,q) model proposed in [25] is also straightforward in our context. In the model of [25], the futures log price has the following form:
\[
\ln F(t,T) = \mathbf{b}^\top \mathcal{A}(t,T)\, X_t + \tfrac12\, \mathbf{b}^\top \mathcal{B}(t,T)\, \mathbf{b}
\]
where
\[
\mathcal{A}(t,T) = e^{A(T-t)}, \qquad \mathcal{B}(t,T) = \int_t^T e^{A(T-u)} \mathbf{e}\mathbf{e}^\top e^{A^\top(T-u)}\, du.
\]
If we want to evaluate a European call option on the futures price, we have to consider three points in time: the valuation date $t$; the maturity $T > t$ of the option contract; and the maturity $T_F > T$ of the underlying futures contract. The price of the call option at time $t$ can be obtained using no-arbitrage arguments as follows:
\[
C_t = e^{-r(T-t)}\, E^Q\left[ \left( F(T, T_F) - K \right)^+ \mid \mathcal{F}_t \right].
\]
If the state process $(X_t)_{t \ge 0}$ is driven by a Brownian motion, the price is analytic and reads as follows:
\[
C_t = e^{-r(T-t)} \left[ F(t, T_F)\, \Phi(d_1) - K\, \Phi(d_2) \right]
\]
where
\[
d_{1,2} = \frac{\ln\left( \frac{F(t,T_F)}{K} \right) \pm \frac12 \sigma^2(t,T,T_F)}{\sigma(t,T,T_F)}.
\]
The forward integrated variance is defined as:
\[
\sigma^2(t,T,T_F) = \mathbf{b}^\top \left[ \int_t^T e^{A(T_F-u)} \mathbf{e}\mathbf{e}^\top e^{A^\top(T_F-u)}\, du \right] \mathbf{b}.
\]
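The Gaussian formula above is straightforward to evaluate numerically. The sketch below (Python) computes the forward integrated variance by quadrature and the resulting Black-type call price; the CARMA(2,1) objects `A`, `b`, `e` and all market inputs are illustrative placeholders, not the parameter values used in the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad
from scipy.stats import norm

# Illustrative CARMA(2,1) state-space objects (hypothetical values).
A = np.array([[0.0, 1.0],
              [-0.5, -1.2]])    # companion matrix of the autoregressive polynomial
b = np.array([1.0, 0.5])        # moving-average vector
e = np.array([0.0, 1.0])        # last vector of the canonical basis

def forward_integrated_variance(t, T, TF):
    """sigma^2(t,T,T_F) = b' [ int_t^T e^{A(T_F-u)} e e' e^{A'(T_F-u)} du ] b."""
    integrand = lambda u: (b @ expm(A * (TF - u)) @ e) ** 2
    value, _ = quad(integrand, t, T)
    return value

def black_call_on_future(F, K, r, t, T, sigma2):
    """Black-type call on a futures price with integrated variance sigma2."""
    s = np.sqrt(sigma2)
    d1 = (np.log(F / K) + 0.5 * sigma2) / s
    d2 = d1 - s
    return np.exp(-r * (T - t)) * (F * norm.cdf(d1) - K * norm.cdf(d2))

sigma2 = forward_integrated_variance(t=0.0, T=1 / 12, TF=2 / 12)
price = black_call_on_future(F=1.0, K=1.0, r=0.01, t=0.0, T=1 / 12, sigma2=sigma2)
```

Note that the scalar $(\mathbf{b}^\top e^{A(T_F-u)} \mathbf{e})^2$ equals the quadratic form $\mathbf{b}^\top e^{A(T_F-u)} \mathbf{e}\mathbf{e}^\top e^{A^\top(T_F-u)} \mathbf{b}$, so a single matrix-vector product per quadrature node suffices.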
To extend this result to our setup we use the sigma-field $\mathcal{G}_t^{T_F}$; in the case of a TCBm-CARMA(p,q) model we have:
\[
C_t = e^{-r(T-t)}\, E\left[ E\left[ \left( F(T,T_F) - K \right)^+ \mid \mathcal{G}_t^{T_F} \right] \mid \mathcal{F}_t \right].
\]
The inner expectation given $\mathcal{G}_t^{T_F}$ is exactly the formula in [25] for a fixed value of the integrated variance:
\[
E\left[ \left( F(T,T_F) - K \right)^+ \mid \mathcal{G}_t^{T_F} \right] = E\left[ \left( F(T,T_F) - K \right)^+ \mid \mathcal{F}_t,\ \sigma^2(t,T,T_F) \right]
\]
where
\[
\sigma^2(t,T,T_F) \,\big|\, \mathcal{G}_t^{T_F} = \mathbf{b}^\top \left[ \int_t^T e^{A(T_F-u)} \mathbf{e}\mathbf{e}^\top e^{A^\top(T_F-u)}\, d\Lambda_u \right] \mathbf{b}.
\]
The conditional price becomes:
\[
E\left[ \left( F(T,T_F) - K \right)^+ \mid \mathcal{F}_t,\ \sigma^2(t,T,T_F) = \bar\sigma^2 \right] = F(t,T_F)\, \Phi\left( d_{1,\bar\sigma} \right) - K\, \Phi\left( d_{2,\bar\sigma} \right).
\]
We approximate the forward integrated variance with the random variable $\sigma^2_{m,n}(t,T,T_F)$, following the same approach as in (22). The generic $k$-th realization of $\sigma^2_{m,n}(t,T,T_F)$ has the form:
\[
\sigma^2_{m,n,k}(t,T,T_F) = \sum_{\bar k = 0}^{[2^n(T-t)]-1} \mathbf{b}^\top e^{A(T_F - t - \bar k 2^{-n})} \mathbf{e}\mathbf{e}^\top e^{A^\top(T_F - t - \bar k 2^{-n})} \mathbf{b}\; u_{\bar k} \tag{33}
\]
with probability
\[
P\left( \sigma^2_{m,n,k}(t,T,T_F) \right) = \prod_{i=1}^m P^{n_i}(u_i)
\]
where $n_i$ is the number of times the realization $u_i$ appears in the trajectory of the approximated subordinator, under the constraint $\sum_{i=1}^m n_i = [2^n(T-t)]$.
Now the pricing formula has the same representation as in (25) where, instead of the random variable $V_t^T(n,m)$, which can be seen as an approximation of the spot integrated variance, we have the Gauss-Laguerre approximation of the forward integrated variance, whose realizations are given in (33). The same result can be applied in a straightforward manner to the European put price when the underlying is a futures contract.
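Conditioning on the time change thus reduces TCBm pricing to averaging Gaussian prices over realizations of the forward integrated variance. A minimal Python sketch follows, where the realizations `sigma2_k` and probabilities `probs` are hypothetical stand-ins for $\sigma^2_{m,n,k}$ and $P(\sigma^2_{m,n,k})$; the actual trajectory construction of the approximated subordinator is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def black_call_forward(F, K, sigma2):
    # Inner (conditional) Black formula on the futures price, undiscounted.
    s = np.sqrt(sigma2)
    d1 = (np.log(F / K) + 0.5 * sigma2) / s
    d2 = d1 - s
    return F * norm.cdf(d1) - K * norm.cdf(d2)

def mixture_call(F, K, r, tau, sigma2_k, probs):
    """Outer expectation: average the conditional prices over the
    realizations of the forward integrated variance, then discount."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    conditional = np.array([black_call_forward(F, K, s2) for s2 in sigma2_k])
    return np.exp(-r * tau) * float(conditional @ probs)

# Hypothetical variance realizations and their probabilities.
sigma2_k = [0.01, 0.04, 0.09]
probs = [0.5, 0.3, 0.2]
price = mixture_call(F=1.0, K=0.95, r=0.01, tau=1 / 12,
                     sigma2_k=sigma2_k, probs=probs)
```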
Indeed, it is worth noticing that the construction proposed in this paper implies convergence in law; consequently, the convergence of the formulas in (25) for the TCBm-CARMA(p,q) model and in (24) for the time changed Brownian motion is ensured when the function $g$ is a bounded continuous function, while for a lower semi-continuous function bounded from below only a lower bound can be established. Therefore the convergence behaviour is clear in the case of put option prices and, to avoid issues due to this fact, we perform the following steps: we first use the Gauss-Laguerre approximation scheme for the put option price; we then obtain the corresponding call price using the put-call parity formula.

In the following tables and figures we report the comparison between the Gauss-Laguerre and MC prices for different call options, plotting the call price against the strike.

Figure 11: Call option price with maturity 1 month on a future with maturity 2 months; the corresponding prices (Gauss-Laguerre, MC, with upper and lower bounds UB and LB) are tabulated for $T$ = 1M and $T_F$ = 2M.

Figure 12: Call option price with maturity 2 months on a future with maturity 3 months; corresponding table for $T$ = 2M and $T_F$ = 3M.

Figure 13: Call option price with maturity 1 month on a future with maturity 3 months; corresponding table (Strike, Price-Laguerre, MC-mid, MC-lwb, MC-upb) for $T$ = 1 month and $T_F$ = 3 months.

We compute the price using the approximation procedure (Price-Laguerre) and compare it with Monte Carlo prices (MC-mid; MC-lwb is the 5% quantile of the MC simulations, while MC-upb is the 95% quantile).
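The two-step procedure described above can be sketched as follows: since the put payoff $(K - F)^+$ is bounded and continuous, the mixture approximation of the put converges, and the call is then recovered through put-call parity on futures, $C - P = e^{-r\tau}(F - K)$. Inputs below are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def black_put_forward(F, K, sigma2):
    # Conditional Black put on a futures price (bounded payoff, undiscounted).
    s = np.sqrt(sigma2)
    d1 = (np.log(F / K) + 0.5 * sigma2) / s
    d2 = d1 - s
    return K * norm.cdf(-d2) - F * norm.cdf(-d1)

def call_via_put_parity(F, K, r, tau, sigma2_k, probs):
    """Price the put as a mixture over integrated-variance realizations,
    then map it to the call price via put-call parity on futures."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    put = np.exp(-r * tau) * sum(p * black_put_forward(F, K, s2)
                                 for s2, p in zip(sigma2_k, probs))
    return put + np.exp(-r * tau) * (F - K)   # C = P + e^{-r tau}(F - K)

call = call_via_put_parity(F=1.0, K=1.05, r=0.01, tau=2 / 12,
                           sigma2_k=[0.02, 0.06], probs=[0.6, 0.4])
```

Since parity holds conditionally on each variance realization, the call obtained this way coincides with the direct mixture of conditional call prices; the put route is preferred only because its convergence is guaranteed by the bounded payoff.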
Conclusion
In this paper we propose an approximation procedure for the evaluation of the transition density of a TCBm-CARMA(p,q) process, which turns out to be a finite mixture of normals. Exploiting this structure, we obtain a simple estimation procedure and pricing formulas for financial contracts whose value depends only on the value of the underlying at maturity, modelled as an exponential TCBm-CARMA(p,q). A possible extension of our approximation methodology to the pricing of path-dependent contracts may be based on the result in [15] for the evaluation of the first passage time of a time changed Brownian motion. Indeed, the process $V_t^T$ has the same structure as a subordinator, while the TCBm-CARMA can be seen as a TCBm where the random time is the process $V_t^T$. This could also give us the possibility to extend our approach to the evaluation of the density function of the time-until-death variable, which is necessary for the evaluation of contracts with a minimum guaranteed death benefit.

References

[1] M. Abramowitz and I. Stegun.
Handbook of Mathematical Functions. Dover Publications Inc., New York, 1970.
[2] J.-P. Aguilar. Some pricing tools for the variance gamma model. International Journal of Theoretical and Applied Finance, 12 2020.
[3] A. Andresen, F. E. Benth, S. Koekebakker, and V. Zakamulin. The CARMA interest rate model. International Journal of Theoretical and Applied Finance, 17(02):1450008, 2014.
[4] S. Attal and J. M. Lindsay. Quantum Probability Communications, volume 11. World Scientific, 2003.
[5] O. Barndorff-Nielsen, J. Kent, and M. Sorensen. Normal variance-mean mixtures and z distributions. International Statistical Review / Revue Internationale de Statistique, 50(2):145–159, 1982.
[6] F. E. Benth, C. Klüppelberg, G. Müller, and L. Vos. Futures pricing in electricity markets based on stable CARMA spot models. Energy Economics, 44:392–406, 2014.
[7] F. E. Benth, J. Šaltytė Benth, and S. Koekebakker. Putting a price on temperature. Scandinavian Journal of Statistics, 34(4):746–767, 2007.
[8] P. J. Brockwell. Lévy-driven CARMA processes. Annals of the Institute of Statistical Mathematics, 53(1):113–124, 2001.
[9] P. J. Brockwell. Representations of continuous-time ARMA processes. Journal of Applied Probability, 41:375–382, 2004.
[10] P. J. Brockwell, R. A. Davis, and Y. Yang. Estimation for non-negative Lévy-driven CARMA processes. Journal of Business & Economic Statistics, 29(2):250–259, 2011.
[11] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[12] J. L. Doob. The elementary Gaussian processes. Annals of Mathematical Statistics, 15(3):229–282, 1944.
[13] E. Eberlein and S. Raible. Term structure models driven by general Lévy processes. Mathematical Finance, 9(1):31–53, 1999.
[14] I. García, C. Klüppelberg, and G. Müller. Estimation of stable CARMA models with an application to electricity spot prices. Statistical Modelling, 11(5):447–470, 2011.
[15] P. Hieber and M. Scherer. A note on first-passage times of continuously time-changed Brownian motion. Statistics & Probability Letters, 82(1):165–172, 2012.
[16] A. Hitaj, L. Mercuri, and E. Rroji. Lévy CARMA models for shocks in mortality. Decisions in Economics and Finance, 2019.
[17] S. Iacus, L. Mercuri, and E. Rroji. COGARCH(p,q): Simulation and inference with the yuima package. Journal of Statistical Software, 80(4):1–49, 2017.
[18] S. M. Iacus and L. Mercuri. Implementation of Lévy CARMA model in yuima package. Computational Statistics, 30(4):1111–1141, 2015.
[19] A. Loregian, L. Mercuri, and E. Rroji. Approximation of the variance gamma model with a finite mixture of normals. Statistics & Probability Letters, 82(2):217–224, 2012.
[20] D. Lubinsky. Geometric convergence of Lagrangian interpolation and numerical integration rules over unbounded contours and intervals. Journal of Approximation Theory, 39(4):338–360, 1983.
[21] D. B. Madan, M. Pistorius, and W. Schoutens. The valuation of structured products using Markov chain models. Quantitative Finance, 13(1):125–136, 2013.
[22] G. Mastroianni and G. Monegato. Error Estimates for Gauss-Laguerre and Gauss-Hermite Quadrature Formulas, pages 421–434. Birkhäuser Boston, Boston, MA, 1994.
[23] H. Masuda. Convergence of Gaussian quasi-likelihood random fields for ergodic Lévy driven SDE observed at high frequency. The Annals of Statistics, 41(3):1593–1641, 2013.
[24] D. Nualart and W. Schoutens. Chaotic and predictable representations for Lévy processes. Stochastic Processes and their Applications, 90(1):109–122, 2000.
[25] R. Paschke and M. Prokopczuk. Commodity derivatives valuation with autoregressive and moving average components in the price dynamics. Journal of Banking & Finance, 34(11):2742–2752, 2010.
[26] P. Rabinowitz. Gaussian integration in the presence of a singularity. SIAM Journal on Numerical Analysis, 4(2):191–201, 1967.
[27] H. Tómasson. Some computational aspects of Gaussian CARMA modelling. Statistics and Computing, 25(2):375–387, 2015.
[28] J. V. Uspensky. On the convergence of quadrature formulas related to an infinite interval. Transactions of the American Mathematical Society, 30(3):542–559, 1928.
[29] J. Van Belle, S. Vanduffel, and J. Yao. Closed-form approximations for spread options in Lévy markets. Applied Stochastic Models in Business and Industry, 35(3):732–746, 2019.
[30] S. Xiang. Asymptotics on Laguerre or Hermite polynomial expansions and their applications in Gauss quadrature. Journal of Mathematical Analysis and Applications, 393(2):434–444, 2012.
[31] N. Yoshida. Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Annals of the Institute of Statistical Mathematics, 63(3):431–479, 2011.
[32] Z. Zhang and Y. Yong. Valuing guaranteed equity-linked contracts by Laguerre series expansion. Journal of Computational and Applied Mathematics, 357:329–348, 2019.
Appendix
We derive the Expectation-Maximization algorithm for the approximated density in (9). As a first step we determine the complete-data log-likelihood function, defined as:
\[
\mathcal{L}^\star(\mu_0, \mu, \sigma, \varphi_+, \lambda, \theta) = \sum_{t=1}^T \ln\left[ \phi\left( y_t;\ \mu_0 + \mu U_t;\ \sigma^2 U_t \right) P(U_t, \varphi_+, \lambda, \theta) \right]
= \sum_{t=1}^T \sum_{i=1}^m D_{t,i} \ln\left[ \phi\left( y_t;\ \mu_0 + \mu u_i;\ \sigma^2 u_i \right) P(u_i, \varphi_+, \lambda, \theta) \right] \tag{34}
\]
where $D_{t,i}$ takes the value 1 when $U_t = u_i$ and 0 otherwise. Following the seminal work of [11], we perform the Expectation step (E-step henceforth), evaluating the conditional distribution of the variables $\{U_t\}_{t=1,\ldots,T}$ given the observed data. Applying Bayes' theorem we have:
\[
P\left( U_t = u_i \mid y_t, \Theta_{h-1} \right) = \frac{ \phi\left( y_t;\ \mu_{0,h-1} + \mu_{h-1} u_i;\ \sigma^2_{h-1} u_i \right) P(u_i, \varphi_{+,h-1}, \lambda_{h-1}, \theta_{h-1}) }{ \sum_{i=1}^m \phi\left( y_t;\ \mu_{0,h-1} + \mu_{h-1} u_i;\ \sigma^2_{h-1} u_i \right) P(u_i, \varphi_{+,h-1}, \lambda_{h-1}, \theta_{h-1}) }
\]
where $\Theta_{h-1} = \left( \mu_{0,h-1}, \mu_{h-1}, \sigma_{h-1}, \varphi_{+,h-1}, \lambda_{h-1}, \theta_{h-1} \right)$. The E-step consists of computing the conditional expectation of $\mathcal{L}^\star(\mu_0, \mu, \sigma, \varphi_+, \lambda, \theta)$ in the following way:
\[
E\left[ \mathcal{L}^\star(\mu_{0,h}, \mu_h, \sigma_h, \varphi_{+,h}, \lambda_h, \theta_h) \right] = \sum_{i=1}^m \sum_{t=1}^T \ln\left[ \phi\left( y_t;\ \mu_{0,h} + \mu_h u_i;\ \sigma^2_h u_i \right) P(u_i, \varphi_{+,h}, \lambda_h, \theta_h) \right] P\left( U_t = u_i \mid y_t, \Theta_{h-1} \right).
\]
Recalling that $u_i = k_i/\varphi_+$, we get:
\[
E\left[ \mathcal{L}^\star\left( \mu_{0,h}, \mu_h, \sigma_h, \varphi_{+,h}, \lambda_h, \theta_h \right) \right]
= \sum_{i=1}^m \sum_{t=1}^T \ln\left[ \phi\left( y_t;\ \mu_{0,h} + \mu_h \tfrac{k_i}{\varphi_{+,h}};\ \sigma^2_h \tfrac{k_i}{\varphi_{+,h}} \right) P\left( \tfrac{k_i}{\varphi_{+,h}}, \varphi_{+,h}, \lambda_h, \theta_h \right) \right] P\left( U_t = \tfrac{k_i}{\varphi_{+,h-1}} \mid y_t, \Theta_{h-1} \right)
\]
\[
= \sum_{i=1}^m \sum_{t=1}^T \ln\left[ \phi\left( y_t;\ \mu_{0,h} + \mu_h \tfrac{k_i}{\varphi_{+,h}};\ \sigma^2_h \tfrac{k_i}{\varphi_{+,h}} \right) \right] P\left( U_t = \tfrac{k_i}{\varphi_{+,h-1}} \mid y_t, \Theta_{h-1} \right)
+ \sum_{i=1}^m \sum_{t=1}^T \ln\left[ P\left( \tfrac{k_i}{\varphi_{+,h}}, \varphi_{+,h}, \lambda_h, \theta_h \right) \right] P\left( U_t = \tfrac{k_i}{\varphi_{+,h-1}} \mid y_t, \Theta_{h-1} \right) \tag{35}
\]
The Maximization step (M-step henceforth) is based on the maximization of the quantity in (35), i.e.:
\[
\left( \mu_{0,h}, \mu_h, \sigma_h, \varphi_{+,h}, \lambda_h, \theta_h \right) = \operatorname*{argmax}_{\mu_{0,h}, \mu_h, \sigma_h, \varphi_{+,h}, \lambda_h, \theta_h} E\left[ \mathcal{L}^\star(\mu_{0,h}, \mu_h, \sigma_h, \varphi_{+,h}, \lambda_h, \theta_h) \right]. \tag{36}
\]
Using the parametrization
\[
\begin{cases} \mu = \tilde\mu\, \varphi_+ \\ \sigma = \tilde\sigma\, \sqrt{\varphi_+} \end{cases}
\]
the problem in (36) separates into two maximizations:
\[
\operatorname*{argmax}_{\mu_{0,h},\, \tilde\mu_h,\, \tilde\sigma_h} H\left( \mu_{0,h}, \tilde\mu_h, \tilde\sigma_h \right) := \sum_{i=1}^m \sum_{t=1}^T \ln\left[ \phi\left( y_t;\ \mu_{0,h} + \tilde\mu_h k_i;\ \tilde\sigma^2_h k_i \right) \right] P\left( U_t = \tfrac{k_i}{\varphi_{+,h-1}} \mid y_t, \Theta_{h-1} \right) \tag{37}
\]
\[
\operatorname*{argmax}_{\varphi_{+,h},\, \lambda_h,\, \theta_h} H\left( \varphi_{+,h}, \lambda_h, \theta_h \right) := \sum_{i=1}^m \sum_{t=1}^T \ln\left[ P\left( \tfrac{k_i}{\varphi_{+,h}}, \varphi_{+,h}, \lambda_h, \theta_h \right) \right] P\left( U_t = \tfrac{k_i}{\varphi_{+,h-1}} \mid y_t, \Theta_{h-1} \right) \tag{38}
\]

In this section we review some results on the Gauss-Laguerre quadrature that are necessary to understand the behaviour of our approximation scheme. We refer to [26, 28, 1] for a complete discussion of this quadrature. Let $f(x)$ be a continuous function on the support $[0,+\infty)$, $2m$ times differentiable, such that the integral $\int_0^{+\infty} f(x)\, e^{-x}\, dx$ is finite.
Then we have:
\[
\int_0^{+\infty} e^{-x} f(x)\, dx = \sum_{i=1}^m \omega(u_i)\, f(u_i) + R_m
\]
where
\[
R_m = \frac{(m!)^2}{(2m)!}\, f^{(2m)}(\epsilon), \qquad \epsilon \in (0,+\infty).
\]
The generalized Gauss-Laguerre quadrature can be applied in the presence of a non-negligible singularity at $x = 0$. Following [26], let $f(x)$ be a non-negative continuous function, $2m$ times differentiable, such that $\omega(x) f(x)$ is monotonically non-increasing on $(0,+\infty)$, where $\omega(x) = x^\alpha e^{-x}$ with $\alpha > -1$, and $f(x) \le e^x x^{-(\alpha+1+\rho)}$ for some $\rho > 0$. The generalized Gauss-Laguerre quadrature then has the following form:
\[
\int_0^{+\infty} \omega(x) f(x)\, dx = \sum_{i=1}^m \omega(u_i)\, f(u_i) + R_m,
\]
with
\[
\omega(u_i) = \frac{\Gamma(m+\alpha+1)\, u_i}{m!\, (m+1)^2 \left[ L^\alpha_{m+1}(u_i) \right]^2}
\]
where $L^\alpha_m(x)$ is the generalized Laguerre polynomial. The residual term $R_m$ can be written as:
\[
R_m = \frac{m!\, \Gamma(m+\alpha+1)}{(2m)!}\, f^{(2m)}(\epsilon), \qquad \epsilon \in (0,+\infty).
\]
A standard example where it is necessary to use the generalized Gauss-Laguerre quadrature is the numerical evaluation of the moment generating function of a Gamma random variable with shape parameter $\alpha \in (0,1)$: the integrand has a non-negligible singularity at $x = 0$, the requirements described in this section can easily be checked, and the error term can be evaluated thanks to the smoothness of the exponential function. For the case $\alpha \ge 1$, the standard Gauss-Laguerre quadrature applies.

Error computation in the option pricing formula in the case of NVMM

It is worth noticing that the formula in (24) can be written as:
\[
\sum_{i=1}^m E\left[ g\left( \mu_0 + \theta \Lambda_m + \sqrt{\Lambda_m}\, Z \right) \mid \mathcal{F}_0,\ \Lambda_m = u_i \right] P(u_i) = \frac{A_m}{B_m}
\]
where
\[
A_m = \sum_{i=1}^m E\left[ g\left( \mu_0 + \theta \Lambda_m + \sqrt{\Lambda_m}\, Z \right) \mid \mathcal{F}_0,\ \Lambda_m = u_i \right] \frac{\omega(k_i)}{k_i} \left( \frac{k_i}{\varphi_+} \right)^{\lambda} L_\theta\left( \frac{k_i}{\varphi_+} \right), \qquad k_i = u_i\, \varphi_+,
\]
and
\[
B_m = \sum_{j=1}^m \frac{\omega(k_j)}{k_j} \left( \frac{k_j}{\varphi_+} \right)^{\lambda} L_\theta\left( \frac{k_j}{\varphi_+} \right).
\]
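The Gamma moment generating function example can be checked directly with `scipy.special.roots_genlaguerre`, which returns nodes and weights for the weight function $x^\alpha e^{-x}$. A sketch for shape $\alpha \in (0,1)$, using the closed form $E[e^{sX}] = (1-s)^{-\alpha}$ for a unit-scale Gamma variable and $s < 1$:

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma

def gamma_mgf_glq(s, shape, m=40):
    """Approximate E[exp(s*X)], X ~ Gamma(shape, scale=1), by generalized
    Gauss-Laguerre quadrature with weight x^{shape-1} e^{-x}; for shape in
    (0, 1) the integrand is singular at 0 and the weight absorbs it."""
    nodes, weights = roots_genlaguerre(m, shape - 1.0)
    return np.sum(weights * np.exp(s * nodes)) / gamma(shape)

approx = gamma_mgf_glq(s=0.25, shape=0.5)
exact = (1.0 - 0.25) ** (-0.5)
```

With the singular factor $x^{\alpha-1}$ moved into the quadrature weight, only the smooth exponential remains to be interpolated, which is why the error term above involves derivatives of the exponential alone.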
We analyze the term $A_m$ as $m \to +\infty$; by the Gauss-Laguerre quadrature we have:
\[
\lim_{m \to +\infty} A_m = \int_0^{+\infty} E\left[ g\left( \mu_0 + \theta \Lambda_T + \sqrt{\Lambda_T}\, Z \right) \mid \mathcal{F}_0,\ \Lambda_T = \tfrac{k}{\varphi_+} \right] \left( \frac{k}{\varphi_+} \right)^{\lambda} \frac{L_\theta(k/\varphi_+)}{k}\, dk, \tag{39}
\]
where the integral on the right-hand side is exactly the expectation of the function $g(Y_T)$, where $Y_T$ is a normal variance mean mixture (it is enough to solve the integral using the substitution $u = k/\varphi_+$). Denoting by $A$ the integral in (39), we have the following result, due to the standard Gauss-Laguerre quadrature:
\[
A = A_m + R_m(A_m)
\]
where the remainder term has the following form:
\[
R_m(A_m) = \frac{(m!)^2}{(2m)!}\, \frac{\partial^{2m}}{\partial \epsilon^{2m}} \left[ E\left[ g\left( \mu_0 + \theta \Lambda_T + \sqrt{\Lambda_T}\, Z \right) \mid \mathcal{F}_0,\ \Lambda_T = \tfrac{\epsilon}{\varphi_+} \right] \left( \frac{\epsilon}{\varphi_+} \right)^{\lambda} \frac{L_\theta(\epsilon/\varphi_+)}{\epsilon} \right], \qquad \epsilon \in (0,+\infty).
\]
A discussion of the behaviour of the remainder term $R_m(A_m)$ can be found in [20]. The author proved, under mild conditions, the geometric convergence of the Gauss-Laguerre quadrature for a function that can be written as a power series [see [22, 30] for a complete discussion and generalizations]. We now analyze the behaviour of the term $B_m$:
\[
\lim_{m \to +\infty} B_m = \int_0^{+\infty} \left( \frac{k}{\varphi_+} \right)^{\lambda} \frac{L_\theta(k/\varphi_+)}{k}\, dk. \tag{40}
\]
Using the substitution $u = k/\varphi_+$, the integral is equal to one, because the integrand is the density in (2). Denoting by $B$ the integral in (40), we have
\[
B = B_m + R_m(B_m).
\]
The remainder term $R_m(B_m)$ has the following form:
\[
R_m(B_m) = \frac{(m!)^2}{(2m)!}\, \frac{\partial^{2m}}{\partial \epsilon^{2m}} \left[ \left( \frac{\epsilon}{\varphi_+} \right)^{\lambda} \frac{L_\theta(\epsilon/\varphi_+)}{\epsilon} \right], \qquad \epsilon \in (0,+\infty).
\]
We are now able to establish the behaviour of the error term of our approximation approach for the normal variance mean mixture.
The result presented here holds when there is no singularity at $x = 0$; it can easily be generalized to the case of a singularity at $x = 0$ using the generalized Gauss-Laguerre quadrature. We define the error term $\mathcal{R}_m$ as:
\[
\mathcal{R}_m := E\left[ g(Y_T) \mid \mathcal{F}_0 \right] - E\left[ g(Y_T^m) \mid \mathcal{F}_0 \right]
= \frac{A_m + R_m(A_m)}{B_m + R_m(B_m)} - \frac{A_m}{B_m}
= \frac{A_m + R_m(A_m)}{B_m + R_m(B_m)} - \frac{A_m}{B_m + R_m(B_m)} + \frac{A_m}{B_m + R_m(B_m)} - \frac{A_m}{B_m}. \tag{41}
\]
Noting that $B_m + R_m(B_m) = 1$, we have
\[
\mathcal{R}_m = R_m(A_m) - \frac{A_m}{B_m}\, R_m(B_m).
\]
Therefore
\[
|\mathcal{R}_m| \le |R_m(A_m)| + \left| \frac{A_m}{B_m} \right| |R_m(B_m)|.
\]
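As a numerical sanity check of the ratio approximation $A_m/B_m$, the sketch below evaluates $E[\cos(Y_T)]$ for a normal variance-mean mixture with a Gamma mixing density, for which a closed form is available through the characteristic function. All parameter values are illustrative; the conditional expectation $E[\cos(\mu_0 + \theta k + \sqrt{k} Z)] = \cos(\mu_0 + \theta k)\, e^{-k/2}$ is exact.

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma as gamma_fn

# Normal variance-mean mixture Y = mu0 + theta*G + sqrt(G)*Z, with Z standard
# normal and G ~ Gamma(lam, rate=phi).  Illustrative parameters.
mu0, theta, lam, phi = 0.1, -0.2, 1.5, 2.0

def cond_mean_cos(k):
    # E[cos(mu0 + theta*k + sqrt(k)*Z)] = cos(mu0 + theta*k) * exp(-k/2)
    return np.cos(mu0 + theta * k) * np.exp(-k / 2.0)

def mixture_expectation(m=40):
    """Quadrature in x = phi*k against the weight x^{lam-1} e^{-x}; the sum
    A_m is normalized by B_m, the quadrature mass of the mixing density."""
    x, w = roots_genlaguerre(m, lam - 1.0)
    A_m = np.sum(w * cond_mean_cos(x / phi)) / gamma_fn(lam)
    B_m = np.sum(w) / gamma_fn(lam)
    return A_m / B_m

approx = mixture_expectation()
# Closed form: E[cos(Y)] = Re[ e^{i mu0} (1 - (i theta - 1/2)/phi)^{-lam} ]
exact = float(np.real(np.exp(1j * mu0) * (1 - (1j * theta - 0.5) / phi) ** (-lam)))
```

Because the quadrature rule is exact for constants, $B_m$ equals one up to floating-point error, mirroring the identity $B_m + R_m(B_m) = 1$ used above.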