Oscillating Gaussian Processes
PAULIINA ILMONEN, SOLEDAD TORRES, AND LAURI VIITASAARI

Abstract.
In this article we introduce and study oscillating Gaussian processes defined by X_t = α_+ Y_t 1{Y_t > 0} + α_- Y_t 1{Y_t < 0}, where α_+, α_- > 0 are free parameters and Y is either a stationary or a self-similar Gaussian process. We study the basic properties of X and we consider estimation of the model parameters. In particular, we show that the moment estimators converge in L^p and are, when suitably normalised, asymptotically normal.

Mathematics Subject Classifications (2010): 60G15 (primary), 60F05, 60F25, 62F10, 62F12
Keywords:
Gaussian processes, oscillating processes, stationarity, self-similarity, parameter estimation, central limit theorem
1. Introduction
During the past two decades, interest in the existence and uniqueness of solutions to stochastic differential equations driven by a fractional Brownian motion has been very intense, and there have been many advances in their theory and applications. In particular, strong solutions of the stochastic differential equation (SDE in short)

(1.1) X_t = X_0 + ∫_0^t b(s, X_s) ds + ∫_0^t σ(s, X_s) dB^H_s,

under usual conditions on the coefficients, such as Lipschitz continuity and linear growth, were developed by Nualart and Răşcanu [9], and have been considered by many authors; see [7] and the references therein.
1: Aalto University School of Science, Department of Mathematics and Systems Analysis, Finland.
2: Facultad de Ingeniería, CIMFAV, Universidad de Valparaíso, Casilla 123-V, 4059 Valparaíso, Chile.
3: Aalto University School of Business, Department of Information and Service Management, Finland (corresponding author).
E-mail addresses: [email protected], [email protected], [email protected].

Nevertheless, the case of SDEs with discontinuous coefficients has been less explored. Most of the studied stochastic differential equations driven by a fractional Brownian motion with discontinuous coefficients are those with a discontinuous drift coefficient (for H > 1/2). In that direction, the authors of [8] studied a drift that is Hölder continuous except on a finite number of points. Another class of discontinuity in SDEs driven by a fractional Brownian motion arises from adding a Poisson process to the equation. In [2], extending the results of [8], the authors proved the existence of a strong solution of this kind of SDE driven by a fractional Brownian motion and a Poisson point process. To the best of our knowledge, in the fractional Brownian motion framework, there is only one preliminary work that studies equations with a discontinuous diffusion coefficient, written by Garzón et al. [4]. There the authors proved the existence and uniqueness of solutions to the SDE driven by the fractional Brownian motion B^H with H > 1/2 given by

(1.2) X_t = X_0 + ∫_0^t σ(X_s) dB^H_s,  t ≥ 0,

where the function σ is given by

(1.3) σ(x) = α 1{x ≥ 0} + (1 − α) 1{x < 0},  α ∈ (0, 1).

The authors showed that the explicit solution to equation (1.2) is

(1.4) X_t = α B^H_t 1{B^H_t > 0} + (1 − α) B^H_t 1{B^H_t < 0},  t ≥ 0.

It is straightforward to see that the explicit existence and uniqueness of the solution to equation (1.2) holds also if α and 1 − α are replaced with α_+ and α_- satisfying 0 < α_- < α_+ (or 0 < α_+ < α_-, respectively).

One of the reasons why SDEs with a discontinuous diffusion coefficient are interesting is their relation to the skew Brownian motion. In the Brownian motion framework, the skew Brownian motion appeared as a natural generalization of the Brownian motion. The skew Brownian motion is a process that behaves like a Brownian motion except that the sign of each excursion is chosen using an independent Bernoulli random variable with parameter α ∈ (0, 1). For α = 1/2, the process corresponds to a Brownian motion. This process is a Markov process and a semimartingale. Moreover, it is a strong solution to a certain SDE with local time (see [5] for a survey).
Let

(1.5) X_t = x + B_t + (2α − 1) L_t(X),

where L_t(X) is the symmetric local time of X at 0. In the case of the Brownian motion, it follows from the Itô-Tanaka formula that equation (1.5) and equation (1.2) with σ(x) = α 1{x ≥ 0} + (1 − α) 1{x < 0} are equivalent. For a comprehensive survey on the skew Brownian motion, see the work by Lejay [5].

In the case of the fractional Brownian motion, the Tanaka type formulas are more complicated and no relations between the two types of equations are known to exist. The motivation for the authors in [4] to study equation (1.2) stemmed from this fact.

To the best of our knowledge, [6] is the only study that considers the inference of parameters related to an SDE with a discontinuous diffusion coefficient. The study considers the case of a discontinuous diffusion coefficient that can only attain two different values. More precisely, the authors of [6] studied the so-called oscillating Brownian motion that is a solution to the SDE

(1.6) X_t = x + ∫_0^t σ(X_s) dW_s,

where W is a standard Brownian motion and σ(x) = α_+ 1{x ≥ 0} + α_- 1{x < 0}, x ∈ R. The authors proposed two natural consistent estimators, which are variants of the integrated volatility estimator. Moreover, the stable convergence of the renormalised estimators towards a certain Gaussian mixture was proven. The estimators are given by

(1.7) α̂_+ = √( Σ_{k=1}^n (X_k − X_{k−1})² 1{X_k ≥ 0} / Σ_{k=1}^n 1{X_k ≥ 0} ),  α̂_- = √( Σ_{k=1}^n (X_k − X_{k−1})² 1{X_k ≤ 0} / Σ_{k=1}^n 1{X_k ≤ 0} ).

Note that when the paths are strictly positive or strictly negative, only one of the estimators can be computed.

Motivated by equation (1.4), we define the oscillating Gaussian process by

(1.8) X_t = α_+ Y_t 1{Y_t > 0} + α_- Y_t 1{Y_t < 0},  t ∈ T,

where α_+ and α_- are both strictly positive (or both strictly negative, respectively) constants.
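As a rough numerical illustration of the oscillating Brownian motion and of increment-based estimators in the spirit of (1.7), one may simulate the process at integer times and average squared increments separately on the positive and negative regions. Everything below is a sketch of ours, not code from the sources: the function names, the parameter values, and in particular the simplification of dropping increments that straddle a zero crossing are our own choices.

```python
import math
import random

def oscillating_bm(n, a_plus, a_minus, seed=1):
    """Simulate X_k = a_plus*B_k if B_k > 0 else a_minus*B_k, k = 0..n,
    for a standard Brownian motion B observed at integer times (cf. (1.4))."""
    rng = random.Random(seed)
    b, path = 0.0, [0.0]
    for _ in range(n):
        b += rng.gauss(0.0, 1.0)  # unit-step Brownian increment
        path.append(a_plus * b if b > 0 else a_minus * b)
    return path

def estimate(path):
    """Estimators in the spirit of (1.7): root mean squared increments,
    computed separately where the path keeps its sign.  Increments that
    straddle a zero crossing are dropped, a simplification of ours."""
    up = [(path[k] - path[k - 1]) ** 2
          for k in range(1, len(path)) if path[k] > 0 and path[k - 1] > 0]
    dn = [(path[k] - path[k - 1]) ** 2
          for k in range(1, len(path)) if path[k] < 0 and path[k - 1] < 0]
    return math.sqrt(sum(up) / len(up)), math.sqrt(sum(dn) / len(dn))

x = oscillating_bm(200_000, a_plus=2.0, a_minus=0.5)
ap, am = estimate(x)
```

On a long path the two outputs should be close to α_+ and α_-, although the accuracy is only of Monte Carlo order and depends on how much time the path spends on each side of zero.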
In addition to the above-mentioned links to SDEs and the skew Brownian motion, we note that (1.8) could be applied in various other modelling scenarios as well, making the oscillating Gaussian process an interesting object of study. For example, (1.8) can be viewed as a model for situations where the variance changes by regions. One of the main interests in this paper is the estimation of the model parameters α_+ and α_-. In order to be able to compute estimators for both parameters in all possible cases, we define estimators based on moments and study their asymptotic properties. Moreover, we show that our moment based approach can be applied under a large class of driving Gaussian processes Y in (1.8).
The rest of the paper is organised as follows. In Section 2, we introduce the oscillating Gaussian processes and study their basic properties such as moments, covariance structures, and continuity properties. Section 3 is devoted to model calibration. We begin by showing that the moment estimators are consistent and satisfy central limit theorems under suitable assumptions on the driving Gaussian process. On top of that, we also consider corresponding estimators based on discrete observations. In Subsection 3.3, we briefly discuss how the Lamperti transform can be used to study oscillating Gaussian processes driven by self-similar Gaussian noise, and as a particular example, we apply the method to the case of the bifractional Brownian motion. We end the paper with a short summary and a discussion of future prospects.
2. Oscillating Gaussian processes
Throughout this section we consider oscillating Gaussian processes X = (X_t)_{t≥0} defined by

(2.1) X_t = α_+ Y_t 1{Y_t > 0} + α_- Y_t 1{Y_t < 0},

where Y = (Y_t)_{t≥0} is a stationary Gaussian process and α_+ and α_- are positive parameters such that α_+ ≠ α_-. Note that α_+ and α_- describe the magnitude of the variations of X on the different regions. Our goal is to estimate the unknown parameters α_+ and α_-. In order to do this, we assume that E(Y_t²) = 1. Note that the general case E(Y_t²) = σ² can be written as

X_t = α_+ σ Ỹ_t 1{Ỹ_t > 0} + α_- σ Ỹ_t 1{Ỹ_t < 0},

where now E(Ỹ_t²) = 1. We also assume that the parameters α_+ and α_- are both strictly positive (or both strictly negative).

Remark. Note that we can extend our analysis in a straightforward manner to the case α_- < 0 < α_+ (or α_- > 0 > α_+) as well. The reason for this is that we defined X directly by (2.1) instead of restricting ourselves to the situation where X is a solution to the SDE (1.2), in which case the solution is known to exist and to be of the form (2.1) only for α_-, α_+ > 0. See also the remark following Lemma 2.3.

Definition 2.1 (Oscillating Gaussian process (OGP)). Let Y be a centered stationary Gaussian process with variance σ² = 1 and covariance function r(t), and let α_+, α_- > 0, α_+ ≠ α_-, be constants. We define the oscillating version X of Y by

(2.2) X_t = α_+ Y_t 1{Y_t > 0} + α_- Y_t 1{Y_t < 0}.

In the following lemmas we compute the moments and covariances of the OGP X defined in (2.2).
Lemma 2.2.
Let n ≥ 1 be an integer and t ≥ 0 arbitrary. Then

μ_n := E(X_t^n) = (2^{n/2−1} Γ((n+1)/2) / √π) (α_+^n + (−1)^n α_-^n),

independently of t.

Proof.
By the definition of the OGP, we have X_t^n = α_+^n Y_t^n 1{Y_t > 0} + α_-^n Y_t^n 1{Y_t < 0}. Since Y is a centered stationary Gaussian process with unit variance, we have

(2.3) E(Y_t^n 1{Y_t > 0}) = ∫_0^∞ (x^n / √(2π)) e^{−x²/2} dx = (1/2) ∫_{−∞}^∞ (|x|^n / √(2π)) e^{−x²/2} dx = (1/2) E|N|^n,

where N ∼ N(0, 1). Similarly,

(2.4) E(Y_t^n 1{Y_t < 0}) = (−1)^n (1/2) E|N|^n.

Now, the well known formula E|N|^n = 2^{n/2} Γ((n+1)/2) / √π for a standard normal variable implies the claim. □

The following lemma allows us to compute the parameters α_+ and α_- in terms of the moments.

Lemma 2.3.
Let t ≥ 0 be arbitrary. Then

α_+ = √(π/2) μ_1 + (1/2) √(4μ_2 − 2π μ_1²)

and

α_- = −√(π/2) μ_1 + (1/2) √(4μ_2 − 2π μ_1²).

Proof.
Since
Γ(1) = 1 and Γ(3/2) = √π / 2, Lemma 2.2 yields

μ_1 = (1/√(2π)) (α_+ − α_-) and μ_2 = (1/2) (α_+² + α_-²).

From the first equality we get α_+ = α_- + √(2π) μ_1. Plugging this into the second equality, some simple manipulations give

(2.5) α_-² + √(2π) μ_1 α_- + π μ_1² − μ_2 = 0.
Now

4μ_2 − 2π μ_1² = 2(α_+² + α_-²) − (α_+ − α_-)² = (α_+ + α_-)² > 0,

and since α_- > 0, solving the quadratic equation (2.5) we obtain the result. □

Remark. Note that in the proof of Lemma 2.3 we applied the assumption α_- > 0. In the case α_- < 0 < α_+, one has to choose the other solution to equation (2.5), yielding

α_- = −√(π/2) μ_1 − (1/2) √(4μ_2 − 2π μ_1²).

In the next lemma, we derive the covariance function of the process X. That allows us to obtain consistency for our estimators.

Lemma 2.4.
Let N_1 ∼ N(0, 1) and N_2 ∼ N(0, 1) be such that Cov(N_1, N_2) = a. Then

E(N_1^m N_2^n 1{N_1, N_2 > 0}) = 2^{(n+m)/2 − 2} π^{−1} (1 − a²)^{(n+m+1)/2} Σ_{r=0}^∞ ((2a)^r / r!) Γ((n+r+1)/2) Γ((m+r+1)/2).

Proof.
We have

E(N_1^m N_2^n 1{N_1, N_2 > 0}) = (1 / (2π √(1 − a²))) ∫_0^∞ ∫_0^∞ x^m y^n exp( −(x² + y² − 2axy) / (2(1 − a²)) ) dx dy.

The change of variables u = x/√(2(1 − a²)) and v = y/√(2(1 − a²)) gives

E(N_1^m N_2^n 1{N_1, N_2 > 0}) = 2^{(n+m)/2} π^{−1} (1 − a²)^{(n+m+1)/2} ∫_0^∞ ∫_0^∞ u^m v^n e^{−u² − v² + 2auv} du dv,

and using formula 3.5-5 in [10] we obtain

∫_0^∞ ∫_0^∞ u^m v^n e^{−u² − v² + 2auv} du dv = (1/4) Σ_{r=0}^∞ ((2a)^r / r!) Γ((n+r+1)/2) Γ((m+r+1)/2).

This proves the claim. □
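The series of Lemma 2.4 can be sanity-checked numerically in the special case m = n = 0, where the left-hand side is the orthant probability P(N_1 > 0, N_2 > 0) = 1/4 + arcsin(a)/(2π) for a bivariate standard normal vector. The sketch below is ours; the truncation level of the series is an arbitrary choice.

```python
import math

def orthant_series(a, terms=60):
    """Right-hand side of Lemma 2.4 with m = n = 0: a truncation of
    2^{-2} pi^{-1} (1-a^2)^{1/2} * sum_r (2a)^r / r! * Gamma((r+1)/2)^2."""
    s = sum((2 * a) ** r / math.factorial(r) * math.gamma((r + 1) / 2) ** 2
            for r in range(terms))
    return 2 ** (-2) / math.pi * math.sqrt(1 - a * a) * s

def orthant_exact(a):
    """Classical orthant probability P(N1 > 0, N2 > 0) when Cov(N1, N2) = a."""
    return 0.25 + math.asin(a) / (2 * math.pi)
```

For |a| bounded away from 1 the series converges geometrically, so a modest truncation already matches the closed form to high precision.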
In the sequel we apply standard Landau notation O ( · ) . Corollary 2.5.
Let N_1 ∼ N(0, 1) and N_2 ∼ N(0, 1) be such that Cov(N_1, N_2) = a, and let n ≥ 1 be an integer. Then

E(N_1^n N_2^n 1{N_1, N_2 > 0}) = 2^{n−2} π^{−1} Γ((n+1)/2)² + O(|a|)

and

E(N_1^n N_2^n 1{N_1 > 0, N_2 < 0}) = (−1)^n 2^{n−2} π^{−1} Γ((n+1)/2)² + O(|a|).
Proof.
It follows from Lemma 2.4 that

E(N_1^n N_2^n 1{N_1, N_2 > 0}) = 2^{n−2} π^{−1} Γ((n+1)/2)² (1 − a²)^{(2n+1)/2} + O(|a|).

Now, the first claim follows from the fact that (1 − a²)^{(2n+1)/2} = 1 + O(|a|). The second claim follows similarly, since

E(N_1^n N_2^n 1{N_1 > 0, N_2 < 0}) = (−1)^n E(N_1^n (−N_2)^n 1{N_1 > 0, −N_2 > 0})

and Cov(N_1, −N_2) = −a. □
Let X be the oscillating Gaussian process defined in (2.2). Then

Cov(X_t^n, X_s^n) = O(|r(t − s)|),

where r is the covariance function of Y.

Proof. We have

X_t^n X_s^n = α_+^{2n} Y_t^n Y_s^n 1{Y_t, Y_s > 0} + α_-^{2n} Y_t^n Y_s^n 1{Y_t, Y_s < 0} + α_+^n α_-^n ( Y_t^n Y_s^n 1{Y_t > 0, Y_s < 0} + Y_t^n Y_s^n 1{Y_t < 0, Y_s > 0} ).

Taking expectations and using Corollary 2.5 we get

E(X_t^n X_s^n) = 2^{n−2} π^{−1} Γ((n+1)/2)² ( α_+^{2n} + α_-^{2n} + 2(−1)^n α_+^n α_-^n ) + O(|r(t − s)|).

Since α_+^{2n} + α_-^{2n} + 2(−1)^n α_+^n α_-^n = (α_+^n + (−1)^n α_-^n)², Lemma 2.2 shows that the main term equals μ_n², which implies the claim. □
We end this section with the following result that ensures the path continuity of the OGP X.

Proposition 2.7.
Let X be the oscillating Gaussian process defined by (2.2). If Y has Hölder continuous paths of order γ ∈ (0, 1] almost surely, then so does X.

Proof. The result follows from the simple observations that

|Y_t 1{Y_t > 0} − Y_s 1{Y_s > 0}|
= Y_t 1{Y_t > 0 ≥ Y_s} + Y_s 1{Y_s > 0 ≥ Y_t} + |Y_t − Y_s| 1{Y_t, Y_s > 0}
≤ (Y_t − Y_s) 1{Y_t > 0 ≥ Y_s} + (Y_s − Y_t) 1{Y_s > 0 ≥ Y_t} + |Y_t − Y_s| 1{Y_t, Y_s > 0}
≤ |Y_t − Y_s| ( 1{Y_t > 0 ≥ Y_s} + 1{Y_s > 0 ≥ Y_t} + 1{Y_t, Y_s > 0} )
≤ |Y_t − Y_s|.
Similarly, |Y_t 1{Y_t < 0} − Y_s 1{Y_s < 0}| ≤ |Y_t − Y_s|, from which the claim follows. □
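The moment formula of Lemma 2.2 and the inversion of Lemma 2.3 can be checked against each other numerically: computing μ_1 and μ_2 from given parameters and inverting should return the parameters. The code below is a sketch of ours with arbitrarily chosen parameter values.

```python
import math

def mu(n, a_plus, a_minus):
    """mu_n of Lemma 2.2: the n-th moment of the OGP with E(Y_t^2) = 1."""
    return (2 ** (n / 2 - 1) * math.gamma((n + 1) / 2) / math.sqrt(math.pi)
            * (a_plus ** n + (-1) ** n * a_minus ** n))

def invert(mu1, mu2):
    """Lemma 2.3: recover (alpha_+, alpha_-) from the first two moments,
    assuming both parameters are strictly positive."""
    disc = 0.5 * math.sqrt(4 * mu2 - 2 * math.pi * mu1 ** 2)
    return (math.sqrt(math.pi / 2) * mu1 + disc,
            -math.sqrt(math.pi / 2) * mu1 + disc)

a_plus, a_minus = 2.0, 0.5
m1, m2 = mu(1, a_plus, a_minus), mu(2, a_plus, a_minus)
rec_plus, rec_minus = invert(m1, m2)
```

Note that, exactly as in the proof of Lemma 2.3, the discriminant term equals (α_+ + α_-)/2 and the √(π/2) μ_1 term equals (α_+ − α_-)/2, so the round trip is exact up to floating-point error.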
3. Model calibration
This section is devoted to the estimation of the unknown parameters α_+ and α_- by the method of moments. Following the ideas of Lemma 2.3, we define

(3.1) α̂_+(T) = √(π/2) μ̂_1(T) + (1/2) √( |4μ̂_2(T) − 2π μ̂_1(T)²| )

and

(3.2) α̂_-(T) = α̂_+(T) − √(2π) μ̂_1(T),

where μ̂_i(T), i = 1, 2, are the classical moment estimators defined by

(3.3) μ̂_i(T) = (1/T) ∫_0^T X_u^i du.

Remark. Note that here we have taken absolute values inside the square roots in order to obtain real valued estimates for real valued quantities. Since 4μ_2 − 2π μ_1² > 0, this does not affect the asymptotic properties of the estimators.

The following result gives us the consistency and can be viewed as one of our main theorems. The proof is postponed to Subsection 3.1.

Theorem 3.1.
Assume that |r(T)| → 0 as T → ∞. Then, for any p ≥ 1, we have

α̂_+(T) → α_+ and α̂_-(T) → α_- in L^p, as T → ∞.

In order to study the limiting distribution, we need some additional assumptions on the covariance function r.

Assumption 3.1.
Let r be the covariance function of Y. We assume that one of the following conditions holds:

(1) The covariance function r satisfies r ∈ L¹(R).
(2) We have lim_{t→∞} r(t) t = C < ∞.
(3) There exists H ∈ (1/2, 1) such that lim_{t→∞} r(t) t^{2−2H} = C < ∞.

Remark. The first condition in Assumption 3.1 corresponds to short-range dependence and the last condition corresponds to long-range dependence. The second condition corresponds to the border case, resulting in a logarithmic factor in our normalising sequence (see Theorem 3.3).

The following theorem gives the central limit theorem for the moment estimators.
Theorem 3.2.
Let μ̂_1(T) and μ̂_2(T) be defined by (3.3), and let μ̂(T) = (μ̂_1(T), μ̂_2(T)) and μ = (μ_1, μ_2). Then,

(1) if r satisfies condition (1) of Assumption 3.1,

√T (μ̂(T) − μ) → N(0, Σ_1) in law as T → ∞,

(2) if r satisfies condition (2) of Assumption 3.1,

√(T / log T) (μ̂(T) − μ) → N(0, Σ_2) in law as T → ∞, and

(3) if r satisfies condition (3) of Assumption 3.1,

T^{1−H} (μ̂(T) − μ) → N(0, Σ_3) in law as T → ∞,

where Σ_1, Σ_2, and Σ_3 are constant covariance matrices depending on α_+, α_-, and the covariance r.

Remark. Note that the covariance matrices Σ_i, i = 1, 2, 3, in Theorem 3.2 can be calculated explicitly in terms of the covariance r, α_+, and α_- by computing the chaos decompositions of the functions f_1(x) = α_+ x 1{x > 0} + α_- x 1{x < 0} and f_2(x) = α_+² x² 1{x > 0} + α_-² x² 1{x < 0}.

Remark. By replacing μ̂_n(T) with

μ̂_n(t, T) = (1/T) ∫_0^{tT} X_u^n du

and normalising accordingly, one can obtain functional versions of the above limit theorems. That is, in cases (1) and (2) of Theorem 3.2, we obtain convergence in law in the space of continuous functions towards σW_t, where W is a Brownian motion. In case (3), the limiting process is σB^H_t, where B^H is the fractional Brownian motion. Indeed, the last case follows from a classical result by Taqqu [12] and the first case from [3] and from the fact that all moments of X are finite. However, from a practical point of view, translating these results into functional versions of the estimators α̂_+(T) and α̂_-(T) is not feasible. Indeed, this follows from the fact that in the functional central limit theorem for μ̂(t, T) the normalisation (subtracting the true value) is done inside the integral, while for α̂_+(T) and α̂_-(T) this is done after integration.

Theorems 3.1 and 3.2 now give us the following limiting distributions for the estimators α̂_+(T) and α̂_-(T).

Theorem 3.3.
Let α̂_+(T) and α̂_-(T) be defined by (3.1) and (3.2), respectively, and let α̂(T) = (α̂_+(T), α̂_-(T)) and α = (α_+, α_-). Then,

(1) if r satisfies condition (1) of Assumption 3.1,

√T (α̂(T) − α) → N(0, Σ_A) in law,

(2) if r satisfies condition (2) of Assumption 3.1,

√(T / log T) (α̂(T) − α) → N(0, Σ_B) in law, and

(3) if r satisfies condition (3) of Assumption 3.1,

T^{1−H} (α̂(T) − α) → N(0, Σ_C) in law,

where Σ_A, Σ_B, and Σ_C are constant covariance matrices depending on α_+, α_-, and the covariance r.

Proof. The result follows from Theorems 3.1 and 3.2 together with a simple application of the multidimensional delta method. We leave the details to the reader. □
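The chaos decompositions of f_1 and f_2 mentioned above start with the first Hermite coefficients β_{1,1} = E[f_1(N)N] and β_{2,1} = E[f_2(N)N]. These can be evaluated numerically; the closed forms quoted in the comments below, β_{1,1} = (α_+ + α_-)/2 and β_{2,1} = √(2/π)(α_+² − α_-²), are our own computations via (2.3)-(2.4) and not taken verbatim from the text, and the quadrature grid is an arbitrary choice of this sketch.

```python
import math

def hermite_coeff1(f, lo=-10.0, hi=10.0, n=100_000):
    """First Hermite coefficient beta_1 = E[f(N) N], N ~ N(0, 1),
    computed with a plain midpoint rule against the Gaussian density."""
    h = (hi - lo) / n
    s = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        s += f(x) * x * math.exp(-x * x / 2) * h
    return s / math.sqrt(2 * math.pi)

ap, am = 2.0, 0.5
f1 = lambda x: ap * x if x > 0 else am * x          # X_t   = f1(Y_t)
f2 = lambda x: (ap if x > 0 else am) ** 2 * x * x   # X_t^2 = f2(Y_t)

b11 = hermite_coeff1(f1)   # should be close to (ap + am) / 2
b21 = hermite_coeff1(f2)   # should be close to sqrt(2/pi) * (ap**2 - am**2)
```

In particular b21 is nonzero whenever α_+ ≠ α_+ and α_-, α_+ > 0 differ, which is exactly the non-degeneracy used in the central limit theorems.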
Remark. As in the case of Theorem 3.2, the covariance matrices Σ_j, j = A, B, C, in Theorem 3.3 can be calculated explicitly. Indeed, by utilising the two-dimensional delta method, Σ_j, j = A, B, C, are linear transformations of the matrices Σ_i, i = 1, 2, 3, defined in Theorem 3.2.
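Before turning to the proofs, the consistency of (3.1)-(3.2) can be illustrated by a small simulation. Everything in the sketch below is an arbitrary choice of ours: the AR(1) discretisation of an Ornstein-Uhlenbeck-type stationary driver, the sample size, the parameter values, and the replacement of the time integrals in (3.3) by sample means.

```python
import math
import random

def simulate_ogp(n, rho, a_plus, a_minus, seed=3):
    """Stationary AR(1) driver Y with unit variance (a discrete analogue of
    an Ornstein-Uhlenbeck process), turned into the OGP X of (2.1)."""
    rng = random.Random(seed)
    y = rng.gauss(0.0, 1.0)
    c = math.sqrt(1 - rho * rho)
    xs = []
    for _ in range(n):
        y = rho * y + c * rng.gauss(0.0, 1.0)
        xs.append(a_plus * y if y > 0 else a_minus * y)
    return xs

def alpha_hats(xs):
    """Moment estimators (3.1)-(3.2), with the integrals in (3.3)
    approximated by sample means over the simulated grid."""
    m1 = sum(xs) / len(xs)
    m2 = sum(x * x for x in xs) / len(xs)
    disc = 0.5 * math.sqrt(abs(4 * m2 - 2 * math.pi * m1 * m1))
    a_plus_hat = math.sqrt(math.pi / 2) * m1 + disc
    return a_plus_hat, a_plus_hat - math.sqrt(2 * math.pi) * m1

xs = simulate_ogp(300_000, rho=0.9, a_plus=2.0, a_minus=0.5)
ap_hat, am_hat = alpha_hats(xs)
```

Since the AR(1) covariance decays geometrically, condition (1) of Assumption 3.1 is the relevant regime, and the estimates should be close to (2.0, 0.5) up to Monte Carlo error.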
3.1. Proofs.

We begin with the following version of the weak law of large numbers.
Proposition 3.4 (Law of large numbers). Let n ≥ 1 and suppose that |r(T)| → 0 as T → ∞. Then, for any p ≥ 1,

(1/T) ∫_0^T X_u^n du → (2^{n/2−1} Γ((n+1)/2) / √π) (α_+^n + (−1)^n α_-^n) in L^p as T → ∞.

Proof. In order to prove the claim, we have to show that

|| (1/T) ∫_0^T (X_u^n − μ_n) du ||_p → 0,

where ||·||_p denotes the L^p norm. We first observe that it suffices to prove convergence in probability. Indeed, for every p ≥ 1 and ε > 0, we have

sup_{T ≥ 1} || (1/T) ∫_0^T (X_u^n − μ_n) du ||_{p+ε} ≤ sup_{T ≥ 1} (1/T) ∫_0^T || X_u^n − μ_n ||_{p+ε} du ≤ C.

Thus, for every p, the quantity

| (1/T) ∫_0^T (X_u^n − μ_n) du |^p

is uniformly integrable. Now the result follows from the fact that uniform integrability and convergence in probability imply convergence in L¹, i.e.

E | (1/T) ∫_0^T (X_u^n − μ_n) du |^p → 0, as T → ∞.

Let us now prove the convergence in L², which then implies the convergence in probability. By Corollary 2.6, we have that

E | (1/T) ∫_0^T X_u^n du − μ_n |² = T^{−2} ∫_0^T ∫_0^T a(u, s) du ds,

where a(u, s) = O(|r(s − u)|). Writing

∫_{(u,s) ∈ [0,T]²} |r(u − s)| du ds = ∫_{(u,s) ∈ [0,T]², |u−s| ≥ √T} |r(u − s)| du ds + ∫_{(u,s) ∈ [0,T]², |u−s| < √T} |r(u − s)| du ds

and bounding |r| by sup_{|x| ≥ √T} |r(x)| on the first region and by r(0) on the second, we obtain

T^{−2} ∫_0^T ∫_0^T |r(u − s)| du ds ≤ sup_{|x| ≥ √T} |r(x)| + 2 r(0) T^{−1/2} → 0,

which completes the proof. □

Proof of Theorem 3.1. By Proposition 3.4, we have that

(μ̂_1(T), μ̂_2(T)) → (μ_1, μ_2)

in L^p as T → ∞. As sup_{T ≥ 1} || μ̂_1(T) ||_p < ∞ for all p ≥ 1, it follows from the Hölder inequality that, for any r > 0, we have

|| μ̂_1(T)² − μ_1² ||_p = || (μ̂_1(T) + μ_1)(μ̂_1(T) − μ_1) ||_p ≤ C || μ̂_1(T) − μ_1 ||_{p+r},

where C is a constant. Thus || μ̂_1(T)² − μ_1² ||_p → 0 as T → ∞.
Now, using |√a − √b| ≤ √|a − b| and the triangle inequality, we get

| √|4μ̂_2(T) − 2π μ̂_1(T)²| − √|4μ_2 − 2π μ_1²| | ≤ C √(|μ̂_2(T) − μ_2|) + C √(|μ̂_1(T)² − μ_1²|).

The claim now follows from the fact that, for any random variable Z and for any p ≥ 2, || √|Z| ||_p = || Z ||_{p/2}^{1/2}. □

We proceed now to the proof of Theorem 3.2. Before that we recall some preliminaries. Let N ∼ N(0, 1) and let f be a function such that E(f(N)²) < ∞. Then f admits the Hermite decomposition

f(x) = Σ_{k=0}^∞ β_k H_k(x),

where H_k, k = 0, 1, ..., are the Hermite polynomials. The index d = min{k ≥ 1 : β_k ≠ 0} is called the Hermite rank of f. For our purposes we need to consider the functions

f_i(x) = α_+^i x^i 1{x > 0} + α_-^i x^i 1{x < 0}, i = 1, 2.

The Hermite decompositions of f_1 and f_2 are denoted by

(3.4) f_1(x) = Σ_{k=0}^∞ β_{1,k} H_k(x)

and

(3.5) f_2(x) = Σ_{k=0}^∞ β_{2,k} H_k(x),

respectively.

Proof of Theorem 3.2. By the Cramér-Wold device, it suffices to prove that each linear combination

Z(y_1, y_2, T) := y_1 (μ̂_1(T) − μ_1) + y_2 (μ̂_2(T) − μ_2),

when properly normalised, converges towards a normal distribution. By using the representations (3.4) and (3.5), it follows that Z(y_1, y_2, T) has the representation

(3.6) Z(y_1, y_2, T) = (1/T) ∫_0^T Σ_{k=0}^∞ γ_k H_k(Y_t) dt,

where γ_k = y_1 β_{1,k} + y_2 β_{2,k}. Note also that we have E μ̂_i(T) = μ_i, i = 1, 2, and thus γ_0 = 0, i.e. Z(y_1, y_2, T) is a centered sequence. We begin with the first case, which is relatively easy. Indeed, suppose that condition (1) of Assumption 3.1 holds. Then, as r is integrable, the continuous version of the Breuer-Major theorem (see e.g. [3]) implies the claim directly. Under the other two conditions, we first note that the only contributing factor to the limiting distribution in (3.6) is

(1/T) ∫_0^T γ_1 H_1(Y_t) dt.
This follows from the fact that

E ( Σ_{k=2}^∞ γ_k ∫_0^T H_k(Y_t) dt )² ≤ C T ∫_0^T r(u)² du,

and clearly (1/(T log T)) · T ∫_0^T r(u)² du → 0 under condition (2) and T^{−2H} · T ∫_0^T r(u)² du → 0 under condition (3). Thus it suffices to prove that

[ y_1 β_{1,1} + y_2 β_{2,1} ] (l(T)/T) ∫_0^T Y_t dt

converges towards a normal distribution, where l(T) = √(T / log T) under condition (2) and l(T) = T^{1−H} under condition (3). Convergence of (l(T)/T) ∫_0^T Y_t dt follows from the fact that Y is Gaussian and the variance converges. Indeed, we have that

E ( ∫_0^T Y_t dt )² = ∫_0^T ∫_0^T E(Y_u Y_s) du ds = ∫_0^T ∫_0^T r(u − s) du ds = 2 ∫_0^T r(u)(T − u) du.

Under conditions (2) and (3) of Assumption 3.1 we obtain that, in both cases,

(l(T)/T)² · 2 ∫_0^T r(u)(T − u) du → C > 0.

Thus, it suffices to prove that β_{1,1} ≠ 0 or β_{2,1} ≠ 0. Recall that f_2(x) = α_+² x² 1{x > 0} + α_-² x² 1{x < 0}. Thus we have β_{2,1} = E[f_2(N) N], where N ∼ N(0, 1). Using (2.3) and (2.4) we get

β_{2,1} = √(2/π) (α_+² − α_-²).

Recalling that α_+ ≠ α_- concludes the proof. □

Remark. Note that the proof of Theorem 3.2 relied on the fact that α_+ ≠ α_-. If α_+ = α_- = α, then X_t = α|Y_t| and it follows that γ_1 = 0 and γ_2 ≠ 0. Then, under conditions (1) and (2), the limiting distribution is normal and the rate is √T. Under condition (3), the limiting distribution and the rate depend on the value of H. If H < 3/4, the limiting distribution is normal and the rate is √T. If H = 3/4, then the limiting distribution is still normal, but the rate is √(T / log T). For H > 3/4, the limiting distribution is the Rosenblatt distribution (multiplied by a constant) and the rate is T^{2−2H}.

3.2. Discrete observations. In practice, one does not observe the continuous path of X. Instead, one observes X on some discrete time points 0 ≤ t_0 < t_1 < ... < t_N = T_N < ∞. That is why, in practical applications, the integrals in (3.3) are approximated by discrete sums.
Thus the natural moment estimators μ̃_n(N) are defined by

(3.7) μ̃_n(N) = (1/T_N) Σ_{k=1}^N X_{t_{k−1}}^n Δt_k,

where Δt_k = t_k − t_{k−1}. The corresponding estimators α̃_+(N) and α̃_-(N) for the parameters α_+ and α_- are

(3.8) α̃_+(N) = √(π/2) μ̃_1(N) + (1/2) √( |4μ̃_2(N) − 2π μ̃_1(N)²| )

and

(3.9) α̃_-(N) = α̃_+(N) − √(2π) μ̃_1(N).

Let Δ_N = max_k Δt_k. In order to obtain consistency and asymptotic normality for the discretised versions, we have to assume that T_N → ∞ and, at the same time, that Δ_N → 0 in a suitable way. The following proposition studies the difference between μ̂_n(T_N) and μ̃_n(N).

Proposition 3.5. Denote the variogram of the stationary process Y by c(t), i.e.

c(t) = 2 [ r(0) − r(t) ],

where r is the covariance function. Then, for any n ≥ 1 and for any p ≥ 1, there exists a constant C = C(n, p, α_+, α_-) such that

|| μ̂_n(T_N) − μ̃_n(N) ||_p ≤ C sup_{0 ≤ t ≤ Δ_N} √(c(t)).

Proof. We have, by the Minkowski inequality, that

|| μ̂_n(T_N) − μ̃_n(N) ||_p = || (1/T_N) ∫_0^{T_N} X_u^n du − (1/T_N) Σ_{k=1}^N X_{t_{k−1}}^n Δt_k ||_p ≤ (1/T_N) Σ_{k=1}^N ∫_{t_{k−1}}^{t_k} || X_u^n − X_{t_{k−1}}^n ||_p du.

Using

x^n − y^n = (x − y) Σ_{j=0}^{n−1} x^j y^{n−1−j},

we get, for any s, u ≥ 0, that

|X_s^n − X_u^n| ≤ |X_s − X_u| Σ_{j=0}^{n−1} |X_s|^j |X_u|^{n−1−j}.

Thus, a repeated application of the Hölder inequality together with the fact that sup_{s ≥ 0} || X_s ||_p < ∞ implies that, for every q > p, we have

|| X_u^n − X_s^n ||_p ≤ C || X_u − X_s ||_q,

where C is a constant. Moreover, by the proof of Proposition 2.7, we have

|X_u − X_s| ≤ C |Y_u − Y_s|.

Since Y is Gaussian, hypercontractivity implies that

|| X_u − X_s ||_q ≤ C || Y_u − Y_s ||_2.

Now stationarity of Y gives

|| Y_u − Y_s ||_2 = √(c(u − s)).

Thus we observe

(1/T_N) Σ_{k=1}^N ∫_{t_{k−1}}^{t_k} || X_u^n − X_{t_{k−1}}^n ||_p du ≤ (C/T_N) Σ_{k=1}^N ∫_{t_{k−1}}^{t_k} √(c(u − t_{k−1})) du ≤ C sup_{0 ≤ t ≤ Δ_N} √(c(t)),

proving the claim. □

We can now easily deduce the following results for the asymptotic properties of the estimators α̃_+ and α̃_-.

Theorem 3.6. Let α̃_+(N) and α̃_-(N) be defined by (3.8) and (3.9), respectively. Suppose that r(T) → 0 as T → ∞ and that sup_{0 ≤ s ≤ t} c(s) → 0 as t → 0. If T_N → ∞ and Δ_N → 0 as N → ∞, then for any p ≥ 1,

α̃_+(N) → α_+ and α̃_-(N) → α_- in L^p.

Proof. Using the arguments of the proof of Theorem 3.1 together with Proposition 3.5, we deduce that

|| α̃_+(N) − α̂_+(T_N) ||_p → 0 and || α̃_-(N) − α̂_-(T_N) ||_p → 0.

Thus the claim follows from Theorem 3.1. □

Theorem 3.7. Let α̃_+(N) and α̃_-(N) be defined by (3.8) and (3.9), respectively, and let α̃(N) = (α̃_+(N), α̃_-(N)) and α = (α_+, α_-). Let Σ_A, Σ_B, and Σ_C be the same covariance matrices as in Theorem 3.3. Suppose further that sup_{0 ≤ s ≤ t} c(s) → 0 as t → 0, T_N → ∞, and Δ_N → 0 as N → ∞. Denote h(N) = sup_{0 ≤ s ≤ Δ_N} √(c(s)). Then,

(1) if r satisfies condition (1) of Assumption 3.1,

√(T_N) (α̃(N) − α) → N(0, Σ_A)

in law for every partition 0 ≤ t_0 < ... < t_N = T_N satisfying √(T_N) h(N) → 0,

(2) if r satisfies condition (2) of Assumption 3.1,

√(T_N / log T_N) (α̃(N) − α) → N(0, Σ_B)

in law for every partition 0 ≤ t_0 < ... < t_N = T_N satisfying √(T_N / log T_N) h(N) → 0, and

(3) if r satisfies condition (3) of Assumption 3.1,

T_N^{1−H} (α̃(N) − α) → N(0, Σ_C)

in law for every partition 0 ≤ t_0 < ... < t_N = T_N satisfying T_N^{1−H} h(N) → 0.

Proof. The additional conditions on the mesh together with Proposition 3.5 guarantee that

l(T_N) || α̃(N) − α̂(T_N) ||_p → 0,

where l(T_N) is the corresponding normalisation in each case. Thus the result follows directly from Theorem 3.3.
□

One natural way of choosing the observation points such that the above mentioned conditions are fulfilled is to choose N equidistant points with Δ_N = log N / N. Then Δ_N → 0 and T_N = N Δ_N = log N → ∞. If, in addition, Y is Hölder continuous of some order θ > 0, then also the rest of the requirements are satisfied. Indeed, it follows from [1, Theorem 1] that if Y is Hölder continuous of order θ > 0, then for any ε > 0 we have c(t) ≤ C t^{2(θ−ε)} for some constant C. Thus h(N) ≤ √C Δ_N^{θ−ε}, from which it is easy to see that, for ε < θ,

T_N^{1−H} h(N) ≤ √(T_N / log T_N) h(N) ≤ √(T_N) h(N) ≤ √C (log N)^{1/2 + θ − ε} N^{−(θ−ε)} → 0.

3.3. Self-similar driving processes. Self-similar processes form an interesting and applicable class of stochastic processes. In this subsection, we consider oscillating Gaussian processes driven by self-similar Gaussian processes Y. In other words, we consider processes of the type

X_t = α_+ Y_t 1{Y_t > 0} + α_- Y_t 1{Y_t < 0},

where Y is H-self-similar for some H > 0. That is, for every a > 0, the finite dimensional distributions of the processes (Y_{at})_{t≥0} and (a^H Y_t)_{t≥0} are the same. Throughout this subsection we assume that we have observed X_t on the interval [0, 1], and our aim is to estimate α_+ and α_-. The key ingredient is the Lamperti transform

(3.10) U_t = e^{−Ht} Y_{e^t}.

It is well known that U is a stationary process on (−∞, ∞). Moreover, for t ≥ 0, we define a process

X̃_t := e^{Ht} X_{e^{−t}} = α_+ U_{−t} 1{U_{−t} > 0} + α_- U_{−t} 1{U_{−t} < 0}.

Clearly, observing X on [0, 1] is equivalent to observing X̃_t for t ≥ 0. This leads to the "moment estimators" μ̂_i(T) defined by

(3.11) μ̂_i(T) = (1/T) ∫_{e^{−T}}^1 u^{−iH−1} X_u^i du.

The corresponding parameter estimators α̂_+(T) and α̂_-(T) are defined by plugging μ̂_1(T) and μ̂_2(T) into (3.1) and (3.2), respectively. Indeed, the change of variable u = e^{−t} gives

μ̂_i(T) = (1/T) ∫_0^T X̃_t^i dt.
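The Lamperti-based estimators above can be illustrated with Y = B a standard Brownian motion, which is 1/2-self-similar, so that H = 1/2 in (3.10)-(3.11). The sketch below is ours: the horizon T in Lamperti time, the step dt, the seeds, and the averaging over three independent paths are all arbitrary choices, and the estimates are only of Monte Carlo accuracy.

```python
import math
import random

def lamperti_estimates(T, dt, a_plus, a_minus, seed):
    """Estimate (alpha_+, alpha_-) from X_u = f1(B_u) observed on (0, 1],
    B a standard Brownian motion (H = 1/2), via the estimators (3.11),
    i.e. time averages of X~_t = e^{t/2} X_{e^{-t}} in Lamperti time."""
    rng = random.Random(seed)
    n = int(T / dt)
    # Brownian motion sampled at the increasing times u_j = e^{-(T - j dt)}.
    times = [math.exp(-(T - j * dt)) for j in range(n + 1)]
    b, prev, bs = 0.0, 0.0, []
    for u in times:
        b += rng.gauss(0.0, 1.0) * math.sqrt(u - prev)
        prev = u
        bs.append(b)
    m1 = m2 = 0.0
    for u, b in zip(times, bs):
        x = a_plus * b if b > 0 else a_minus * b  # X_u
        xt = x / math.sqrt(u)                     # e^{Ht} X_{e^{-t}}, H = 1/2
        m1 += xt * dt
        m2 += xt * xt * dt
    m1, m2 = m1 / T, m2 / T
    disc = 0.5 * math.sqrt(abs(4 * m2 - 2 * math.pi * m1 * m1))
    ap = math.sqrt(math.pi / 2) * m1 + disc
    return ap, ap - math.sqrt(2 * math.pi) * m1

reps = [lamperti_estimates(500.0, 0.02, 2.0, 0.5, seed) for seed in (1, 2, 3)]
ap_hat = sum(r[0] for r in reps) / 3
am_hat = sum(r[1] for r in reps) / 3
```

Here the Lamperti transform of the Brownian motion is an Ornstein-Uhlenbeck-type stationary process with exponentially decaying covariance, so this is the short-range regime of Theorem 3.3, and the averaged estimates should land loosely near (2.0, 0.5).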
Thus, studying the covariance function r of the stationary Gaussian process U given by (3.10) enables us to apply Theorems 3.1 and 3.3. We end this section with an interesting example. We consider bifractional Brownian motions that, among others, cover fractional Brownian motions and standard Brownian motions. Recall that a bifractional Brownian motion B^{H,K} with H ∈ (0, 1) and K ∈ (0, 1] such that HK ∈ (0, 1) is a centered Gaussian process with the covariance function

R(s, t) = 2^{−K} [ (t^{2H} + s^{2H})^K − |t − s|^{2HK} ].

It is known that B^{H,K} is HK-self-similar. Furthermore, one recovers the fractional Brownian motion by plugging in K = 1, from which the standard Brownian motion is recovered by further setting H = 1/2. Now the covariance function r of the Lamperti transform U_t = e^{−HKt} B^{H,K}_{e^t} has exponential decay (see [11]). Thus, we may apply item (1) of Theorem 3.3 to obtain that √T (α̂_+(T) − α_+) and √T (α̂_-(T) − α_-) are asymptotically normal. Similarly, discretising the integral in (3.11) and applying Theorems 3.6 and 3.7 allows us to consider parameter estimators based on discrete observations. We leave the details to the reader.

4. Discussion

In this paper we considered oscillating Gaussian processes and introduced moment based estimators for the model parameters. Moreover, we proved consistency and asymptotic normality of the estimators under natural assumptions on the driving Gaussian process. An interesting and natural extension of our approach would be to consider oscillating processes with several (more than two) parameters and corresponding regions. This would make the model class more flexible and adaptive. Another topic for future research would be to develop testing procedures for the model parameters.

Acknowledgements. S. Torres is partially supported by the Project Fondecyt N. 1171335. P. Ilmonen and L. Viitasaari wish to thank the Vilho, Yrjö and Kalle Väisälä foundation for financial support.

References

[1] E. Azmoodeh, T. Sottinen, L. Viitasaari, A. Yazigi (2014). Necessary and sufficient conditions for Hölder continuity of Gaussian processes. Statistics and Probability Letters, 94: 230-235.
[2] L. Bai, J. Ma (2015). Stochastic differential equations driven by fractional Brownian motion and Poisson point process. Bernoulli, 21(1): 303-334.
[3] S. Campese, I. Nourdin, D. Nualart (2018). Continuous Breuer-Major theorem: tightness and non-stationarity. Annals of Probability, to appear.
[4] J. Garzón, J.A. León, S. Torres (2017). Fractional stochastic differential equation with discontinuous diffusion. Stochastic Analysis and Applications, 35(6): 1113-1123.
[5] A. Lejay (2006). On the constructions of the skew Brownian motion. Probability Surveys, 3: 413-466.
[6] A. Lejay, P. Pigato (2018). Statistical estimation of the Oscillating Brownian Motion. Bernoulli, 24(4B): 3568-3602.
[7] Y. Mishura (2008). Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer-Verlag, Berlin.
[8] Y. Mishura, D. Nualart (2004). Weak solutions for stochastic differential equations with additive fractional noise. Statistics and Probability Letters, 70: 253-261.
[9] D. Nualart, A. Răşcanu (2002). Differential equations driven by fractional Brownian motion. Collectanea Mathematica, 53(1): 55-81.
[10] S.O. Rice (1944). Mathematical Analysis of Random Noise, Part III. Bell System Technical Journal, 23(3): 282-332.
[11] T. Sottinen, L. Viitasaari (2018). Parameter Estimation for the Langevin Equation with Stationary-Increment Gaussian Noise. Statistical Inference for Stochastic Processes, 21(3): 569-601.
[12] M. Taqqu (1975). Weak convergence to fractional Brownian motion and to the Rosenblatt process. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 31: 287-302.