Copulas Related to Manneville-Pomeau Processes
Sílvia R.C. Lopes and Guilherme Pumi
Mathematics Institute, Federal University of Rio Grande do Sul
This version: October 10th, 2011
Abstract
In this work we derive the copulas related to Manneville-Pomeau processes. We examine both the bidimensional and multidimensional cases and derive some properties of the related copulas. Computational issues, approximations and random variate generation problems are also addressed, and simple numerical experiments to test the approximations developed are also performed. In particular, we propose an approximation to the copula which we show to converge uniformly to the true copula. To illustrate the usefulness of the theory, we derive a fast procedure to estimate the underlying parameter in Manneville-Pomeau processes.
Keywords.
Copulas; Manneville-Pomeau Processes; Invariant Measures; Parametric Estimation.
The statistics of stochastic processes derived from dynamical systems has seen growing attention in the last decade or so (see Chazottes et al. (2005) and references therein). The relationship between copulas and areas such as ergodic theory and dynamical systems has also seen some development, especially in the last few years (see, for instance, Kolesárová et al. (2008)). In this work our aim is to contribute to the area by identifying and studying the copulas related to random vectors coming from the so-called Manneville-Pomeau processes, which are obtained by iterating the Manneville-Pomeau transformation on a specific chosen random variable (see Definitions 2.1 and 2.2). We cover both the bidimensional and the $n$-dimensional cases, which share a lot more in common than one could expect.

The copulas derived here depend on a probability measure which has no closed-form expression. In order to minimize this deficiency, we propose an approximation to the copula which we show to converge uniformly to the true copula. The copula also depends on several functions which have to be approximated as well, so the approximation depends on several intermediate steps. The results related to the convergence of the proposed approximation presented here are far more general than we need and actually allow one to change these intermediate approximations and still obtain the uniform convergence result for the approximated copula. We also address problems related to random variate generation from the copula and present the results of some simple numerical experiments in order to assess the stability and precision of the intermediate approximations. The usefulness of the theory is illustrated by a simple application to the problem of estimating the underlying parameter in Manneville-Pomeau processes.

The paper is organized as follows: in the next section, we briefly review some concepts and results on Manneville-Pomeau transformations and processes and on copulas.
Section 3 is devoted to determining the copulas related to any pair $(X_t, X_{t+h})$ from a Manneville-Pomeau process and to exploring some consequences. In Section 4, the multidimensional extensions are shown. In Section 5 an approximation to the copulas derived in Section 3 is proposed. This approximation, which is shown to converge uniformly to the true copula, is then applied to exploit some characteristics of the copulas related to Manneville-Pomeau processes through statistical and graphical analysis. Some computational and random variate generation problems are also addressed. In Section 6 we illustrate the usefulness of the theory by deriving a fast procedure to estimate the underlying parameter in Manneville-Pomeau processes. Conclusions are reserved to Section 7.
In this section we shall briefly review some basic results on Manneville-Pomeau transformations and related processes, as well as some concepts on copulas needed later. We start with the definition of the Manneville-Pomeau transformation.
Definition 2.1. The map $T_s : [0,1) \longrightarrow [0,1)$ given by
\[ T_s(x) = x + x^{1+s} \ (\mathrm{mod}\ 1), \qquad \text{for } s > 0, \]
is called the Manneville-Pomeau transformation (MP transformation, for short).

In what follows, $\lambda$ shall denote the Lebesgue measure in $I := [0,1]$ and the $k$-fold composition will be denoted, as usual, by $T_s^k = T_s \circ \cdots \circ T_s$. Figure 2.1 shows the plot of the MP transformation for four different values of $s$. The plots show the usual behavior of the MP transformations: for any $s$, they are piecewise increasing and differentiable functions in $I$. Furthermore, for any $s > 0$, the function $T_s^k$ will have exactly $2^k$ parts.

Figure 2.1: Plot of the Manneville-Pomeau transformation for four different values of $s$.

Pianigiani (1980) shows the existence of a $T_s$-invariant measure, absolutely continuous with respect to the Lebesgue measure in $I$, which will be denoted henceforth by $\mu_s$. However, the proof uses Perron-Frobenius operator theory and is, for practical purposes, non-constructive, so that an explicit form for a $T_s$-invariant measure is unknown. Nevertheless, this measure will be a Sinai-Bowen-Ruelle (SBR) measure in the sense that the weak convergence
\[ \frac{1}{n} \sum_{k=0}^{n-1} \delta_{T_s^k(x)}(A) \longrightarrow \mu_s(A) \tag{2.1} \]
holds for almost all $x \in I$ and all $\mu_s$-continuity sets $A$ (recall that a set $A$ is a $\mu$-continuity set if $\mu(\partial A) = 0$, where $\partial A$ denotes the boundary of $A$), where $\delta_a(\cdot)$ is the Dirac measure at $a$. As a dynamical system, the triple $(I, \mu_s, T_s)$ is exact (that is, $\lim_{k\to\infty} (\mu_s \circ T_s^k)(A) = 1$, for all measurable sets $A$ with $\mu_s(A) > 0$), which implies ergodicity and strong mixing. When $s < 1$, $\mu_s$ is a probability measure, while if $s \geq 1$, $\mu_s$ is no longer finite, but $\sigma$-finite (see Fisher and Lopes (2001)). Furthermore, it can be shown that $\mu_s$ has a positive, bounded, continuous Radon-Nikodym derivative $\mathrm{d}\mu_s = h_s(x)\,\mathrm{d}x$, a fact that will be useful later. For further details on the theory of MP transformations and related results, we refer to Pianigiani (1980), Young (1999), Maes et al. (2000) and Fisher and Lopes (2001). For applications, see Zebrowsky (2001), Olbermann et al. (2007) and Lopes and Lopes (1998).

The measure theoretical results applied here can be found, for instance, in Royden (1988). A good reference on weak convergence of probability measures is Billingsley (1999); for ergodic theoretical results, see Pollicott and Yuri (1998).

Definition 2.2.
Let $s \in (0,1)$ and let $U$ be a random variable distributed according to (the probability measure) $\mu_s$. Let $\varphi : [0,1) \longrightarrow \mathbb{R}$ be a function in $L(\mu_s)$. The stochastic process given by
\[ X_t = (\varphi \circ T_s^t)(U), \qquad \text{for all } t \in \mathbb{N}, \]
is called a Manneville-Pomeau process (or MP process, for short).

The MP process, as defined above, is stationary since $\mu_s$ is a $T_s$-invariant measure and $\mu_s \ll \lambda$. It is also ergodic since $\mu_s$ is ergodic for $T_s$. In turn, copulas are distribution functions whose marginals are uniformly distributed on $I$. The copula literature has grown enormously in the last decade, especially in terms of empirical applications, and copulas have become standard tools in financial data analysis (see Nelsen (2006) and references therein). The next theorem, known as Sklar's theorem, is the key result for copulas and elucidates the role played by them. See Schweizer and Sklar (2005) for a proof.

Theorem 2.1 (Sklar). Let $X_1, \cdots, X_n$ be random variables with marginals $F_1, \cdots, F_n$, respectively, and joint distribution function $H$. Then, there exists a copula $C$ such that
\[ H(x_1, \cdots, x_n) = C\big(F_1(x_1), \cdots, F_n(x_n)\big), \qquad \text{for all } (x_1, \cdots, x_n) \in \mathbb{R}^n. \]
If the $F_i$'s are continuous, then $C$ is unique. Otherwise, $C$ is uniquely determined on $\mathrm{Ran}(F_1) \times \cdots \times \mathrm{Ran}(F_n)$. The converse also holds. Furthermore,
\[ C(u_1, \cdots, u_n) = H\big(F_1^{(-1)}(u_1), \cdots, F_n^{(-1)}(u_n)\big), \qquad \text{for all } (u_1, \cdots, u_n) \in I^n, \]
where, for a function $F$, $F^{(-1)}$ denotes its pseudo-inverse, given by $F^{(-1)}(x) := \inf\big\{u : F(u) \geq x\big\}$.

The next theorem, whose proof can be found, for instance, throughout Nelsen (2006), shall prove very useful in what follows. Unless stated otherwise, the measure implicit in phrases like "almost sure", "almost everywhere" and so on will be the (appropriate) Lebesgue measure.
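For concreteness, the pseudo-inverse $F^{(-1)}$ appearing in Sklar's theorem is straightforward to approximate on a grid. The sketch below is an illustration of ours, not part of the paper; the helper `pseudo_inverse` and the test distribution $F(u) = u^2$ are assumptions made purely for illustration.

```python
import numpy as np

def pseudo_inverse(grid, F_vals, x):
    # F^(-1)(x) = inf{u : F(u) >= x}, approximated on a grid:
    # take the first grid point at which the (non-decreasing)
    # tabulated values F_vals reach level x
    idx = np.searchsorted(F_vals, x, side="left")
    return grid[min(idx, len(grid) - 1)]

# illustration with a known distribution function, F(u) = u**2 on [0, 1],
# whose pseudo-inverse is sqrt(x)
grid = np.linspace(0.0, 1.0, 100001)
F_vals = grid ** 2
```

For continuous, strictly increasing $F$, as is the case for the distribution function of $\mu_s$ considered below, the pseudo-inverse coincides with the ordinary inverse.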
Theorem 2.2. Let $X$ and $Y$ be continuous random variables with copula $C_{X,Y}$. If $f$ is an almost everywhere decreasing function, then
\[ C_{X, f(Y)}(u,v) = u - C_{X,Y}(u, 1-v). \]
Furthermore, if $f_1$ and $f_2$ are functions increasing almost everywhere, then
\[ C_{f_1(X), f_2(Y)}(u,v) = C_{X,Y}(u,v). \]

For an introduction to copulas, we refer the reader to Nelsen (2006). For more details and extensions to the multivariate case with emphasis on modeling and dependence concepts, see Joe (1997). The theory of copulas is also intimately related to the theory of probabilistic metric spaces; see Schweizer and Sklar (2005) for more details on this matter.
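Before moving on, the objects of Definitions 2.1 and 2.2 lend themselves to quick numerical experimentation. The following sketch is our illustration, not part of the paper ($s = 0.75$ and $k = 3$ are arbitrary choices): it implements $T_s$ and its $k$-fold composition, and counts the $2^k - 1$ interior discontinuities of $T_s^k$ along a fine grid, in line with the fact that $T_s^k$ has exactly $2^k$ increasing parts.

```python
import numpy as np

def T(x, s):
    # Manneville-Pomeau map: T_s(x) = x + x**(1 + s) (mod 1)
    return (x + x ** (1.0 + s)) % 1.0

def T_iter(x, k, s):
    # k-fold composition T_s^k = T_s o ... o T_s (works on numpy arrays too)
    for _ in range(k):
        x = T(x, s)
    return x

# T_s^k is increasing on each of its 2**k nodes (its derivative is >= 1),
# so the number of grid points where it decreases equals the number of its
# interior discontinuities, namely 2**k - 1
s, k = 0.75, 3
xs = np.linspace(0.0, 1.0, 200001)[:-1]
ys = T_iter(xs, k, s)
print(int(np.sum(np.diff(ys) < 0)))  # expected: 2**3 - 1 = 7
```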
In this section we shall investigate the bidimensional copulas associated to pairs of random variables coming from MP processes, which we shall call MP copulas. As we will see later, the multidimensional case is very similar to the bidimensional case, so we shall give special attention to the latter.

First, let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and let $\varphi \in L(\mu_s)$ be an almost everywhere increasing function. Throughout this section and in the rest of the paper, we shall treat $s \in (0,1)$ as a given fixed number. Let
\[ F(x) := P(U \leq x) = \mu_s\big([0,x]\big). \]
Since $\mu_s \ll \lambda$, $\mu_s$ is non-atomic and, therefore, $F$ will be (uniformly) continuous. The existence of a positive Radon-Nikodym density for $\mu_s$ also shows that $F$ will be increasing, so that its inverse is well defined. Let $F_t$ be the distribution function of $T_s^t(U)$, for all $t \in \mathbb{N}$. For $x \in I$, notice that
\[ F_t(x) := P\big(T_s^t(U) \leq x\big) = \mu_s\big((T_s^t)^{-1}([0,x])\big) = \mu_s\big([0,x]\big) = F(x), \tag{3.1} \]
since $\mu_s$ is a $T_s$-invariant measure.

In what follows, we shall need the solution of the inequality $T_s^t(X) \leq y$, $y \in (0,1)$, in $X$, for $X$ a random variable taking values in $I$. Now, since each of the $2^t$ parts of $T_s^t$ is one-to-one in its domain, the inverse of $T_s^t$ will also be piecewise continuous and each piece will also be a one-to-one function in its domain. Let $0 = a_{t,0} < a_{t,1} < \cdots < a_{t,2^t} = 1$ be the end points of the parts of $T_s^t$. We shall call each interval $[a_{t,k}, a_{t,k+1})$ a node of $T_s^t$, for $k = 0, \cdots, 2^t - 1$ and $t > 0$. The (piecewise) inverse of $T_s^t$ can be conveniently written as
\[ (T_s^t)^{-1} : I \longrightarrow I^{2^t}, \qquad y \longmapsto \big(T_{t,0}(y), \cdots, T_{t,2^t-1}(y)\big), \tag{3.2} \]
where $T_{t,k}(y)$ denotes the inverse of $T_s^t$ restricted to its $k$-th node, for all $k \in \{0, \cdots, 2^t-1\}$. Notice that both $T_{t,k}$ and $a_{t,k}$ depend on $s$ for each $k$, but since no confusion will arise, and for the sake of simplicity, we shall omit this dependence from the notation, as we shall do on several other occasions. Now, the solution of the inequality $T_s^t(X) \leq y$ in $X$ can be determined and is given by $X \in A_{t,0}(y) \cup \cdots \cup A_{t,2^t-1}(y)$, where
\[ A_{t,k}(y) = \big[a_{t,k}, T_{t,k}(y)\big], \tag{3.3} \]
which will be a proper closed subinterval of $[a_{t,k}, a_{t,k+1})$, for each $k = 0, \cdots, 2^t - 1$. Notice that $A_{t,k}(y)$ (whose dependence on $s$ was omitted from the notation) is just the inverse image of $[0,y]$ by the transformation $T_s^t$ restricted to the node $[a_{t,k}, a_{t,k+1})$. We can now use this result to prove the following useful lemma.

Lemma 3.1.
Let $X$ be a random variable taking values in $I$ and let $T_s$ be the MP transformation with parameter $s > 0$. Then, for any $t \in \mathbb{N}$ and $x \in I$,
\[ P\big(T_s^t(X) \leq x\big) = P\Big(X \in \bigcup_{k=0}^{2^t-1} A_{t,k}(x)\Big) = \sum_{k=0}^{2^t-1} P\big(X \in A_{t,k}(x)\big), \]
where the $A_{t,k}$'s are given by (3.3).

Proof: The result follows easily from what was just discussed and from the fact that the intervals $A_{t,k}$ are pairwise disjoint. $\square$

As for the copulas related to MP processes, in view of the stationarity of the MP process, the following result follows easily.
Proposition 3.1. Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and let $\varphi \in L(\mu_s)$ be an almost everywhere increasing function. Then, for any $t, h \in \mathbb{N}$,
\[ C_{X_t, X_{t+h}}(u,v) = C_{X_0, X_h}(u,v), \]
everywhere in $I^2$.

Proof: As a consequence of the stationarity of $\{X_t\}_{t \in \mathbb{N}}$, if we let the joint distribution of the pair $(X_p, X_q)$, for any $p, q \in \mathbb{N}$, $p \neq q$, be denoted by $\widetilde{H}_{p,q}(\cdot,\cdot)$, it follows that, for all $x, y \in (0,1)$, $t \in \mathbb{N}$ and $h \in \mathbb{N}^* := \mathbb{N} \setminus \{0\}$, $\widetilde{H}_{t,t+h}(x,y) = \widetilde{H}_{0,h}(x,y)$. Now, upon applying Sklar's Theorem and (3.1), it follows that
\[ C_{X_t, X_{t+h}}(u,v) = \widetilde{H}_{t,t+h}\big(F_t^{-1}(u), F_{t+h}^{-1}(v)\big) = \widetilde{H}_{0,h}\big(F_0^{-1}(u), F_h^{-1}(v)\big) = C_{X_0, X_h}(u,v), \]
for all $(u,v) \in I^2$. $\square$

Corollary 3.1.
Let $T_s$ be the MP transformation for some $s \in (0,1)$, let $\mu_s$ be a $T_s$-invariant probability measure and let $U$ be distributed as $\mu_s$. Then, for any $t, h \in \mathbb{N}$, $h \neq 0$,
\[ C_{T_s^t(U), T_s^{t+h}(U)}(u,v) = C_{U, T_s^h(U)}(u,v), \]
everywhere in $I^2$.

Proof: Immediate from Theorem 2.2 applied to Proposition 3.1. $\square$
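Although neither the end points $a_{t,k}$ nor the branch inverses $T_{t,k}$ have closed forms, both are easy to approximate numerically, since every function involved is monotone on the relevant interval. The sketch below is our illustration, not part of the paper (it assumes the map $T_s(x) = x + x^{1+s} \ (\mathrm{mod}\ 1)$ of Definition 2.1): it computes the nodes of $T_s^t$ by pulling the discontinuity of $T_s$ back through its two monotone branches, and evaluates $T_{t,k}$ by bisection.

```python
def _solve_increasing(g, y, lo, hi, tol=1e-13):
    # bisection for the root of g(x) = y on [lo, hi], with g increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def branch_point(s):
    # the discontinuity c of T_s, i.e. the root of c + c**(1+s) = 1
    return _solve_increasing(lambda x: x + x ** (1.0 + s), 1.0, 0.0, 1.0)

def preimages(y, s):
    # the two preimages of y under T_s, one on each monotone branch
    c = branch_point(s)
    left = _solve_increasing(lambda x: x + x ** (1.0 + s), y, 0.0, c)
    right = _solve_increasing(lambda x: x + x ** (1.0 + s) - 1.0, y, c, 1.0)
    return left, right

def nodes(s, t):
    # end points 0 = a_{t,0} < ... < a_{t,2**t} = 1 of the nodes of T_s^t;
    # the interior points are the discontinuities of T_s^t = T_s^(t-1) o T_s:
    # the discontinuity of T_s plus the T_s-preimages of the discontinuities
    # of T_s^(t-1)
    disc = [branch_point(s)]
    for _ in range(t - 1):
        disc = [branch_point(s)] + [x for y in disc for x in preimages(y, s)]
    return sorted([0.0] + disc + [1.0])

def branch_inverse(y, s, t, k):
    # T_{t,k}(y): inverse of T_s^t restricted to its k-th node, on which
    # T_s^t increases continuously from 0 towards 1
    a = nodes(s, t)
    def Tt(x):
        for _ in range(t):
            x = x + x ** (1.0 + s)
            if x >= 1.0:
                x -= 1.0
        return x
    return _solve_increasing(Tt, y, a[k], a[k + 1])
```

With these two ingredients and an approximation to $F$ (discussed in Section 5), the quantities entering (3.2) and (3.3) become computable.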
Now we turn our attention to determining the copula associated to any pair of random variables $(X_p, X_q)$, $p, q \in \mathbb{N}$, obtained from an MP process with $\varphi$ increasing almost everywhere. For the sake of simplicity, let us introduce the following functions: let $h$ be a positive integer and, for $k = 0, \cdots, 2^h - 1$, let $F_{h,k} : I \to \big[F(a_{h,k}), F(a_{h,k+1})\big]$ be given by
\[ F_{h,k}(x) := F\big(T_{h,k}\big(F^{-1}(x)\big)\big). \]
Notice that, for each $k$, $F_{h,k}(0) = F(a_{h,k})$ and $F_{h,k}(1) = F(a_{h,k+1})$, and that $F_{h,k}$ is a one-to-one, increasing and uniformly continuous function.

Proposition 3.2.
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$, let $\varphi \in L(\mu_s)$ be an almost everywhere increasing function and let $F$ be the distribution function of $U$. Then, for any $t, h \in \mathbb{N}$, $h \neq 0$, and $(u,v) \in I^2$,
\[ C_{X_t, X_{t+h}}(u,v) = \Big(\sum_{k=0}^{n-1} \big[F_{h,k}(v) - F(a_{h,k})\big]\Big)\,\delta_{\mathbb{N}^*}(n) + \min\big\{u, F_{h,n}(v)\big\} - F(a_{h,n}), \tag{3.4} \]
where $\delta_{\mathbb{N}^*}(x)$ equals 1 if $x \in \mathbb{N}^*$ and 0 otherwise, $\{a_{h,k}\}_{k=0}^{2^h}$ are the end points of the nodes of $T_s^h$ and $n := n(u;h) = \big\{k : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\} \in \{0, \cdots, 2^h - 1\}$.

Proof:
By Proposition 3.1 and Theorem 2.2, it suffices to derive the copula of the pair $\big(U, T_s^h(U)\big)$. So let again $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and $\varphi \in L(\mu_s)$ an almost everywhere increasing function, and let $H_{0,h}(\cdot,\cdot)$ denote the distribution function of the pair $\big(U, T_s^h(U)\big)$. Notice that
\begin{align*}
H_{0,h}(x,y) &= P\big(U \leq x,\, T_s^h(U) \leq y\big) = P\Big(U \leq x,\, U \in \bigcup_{k=0}^{2^h-1} A_{h,k}(y)\Big) \\
&= P\Big(U \in [0,x] \cap \bigcup_{k=0}^{2^h-1} A_{h,k}(y)\Big) = P\Big(U \in \bigcup_{k=0}^{2^h-1} \big[[0,x] \cap A_{h,k}(y)\big]\Big) \\
&= \sum_{k=0}^{2^h-1} P\big(U \in [0,x] \cap A_{h,k}(y)\big),
\end{align*}
for any $x, y \in (0,1)$. Let $n := n(x;h) = \big\{k : x \in [a_{h,k}, a_{h,k+1})\big\} \in \{0, \cdots, 2^h-1\}$ and assume for the moment that $n \geq 1$. Since $A_{h,k}(y) = \big[a_{h,k}, T_{h,k}(y)\big]$, it follows that
\begin{align*}
H_{0,h}(x,y) &= \sum_{k=0}^{n-1} P\big(U \in A_{h,k}(y)\big) + P\big(U \in A_{h,n}(y) \cap [a_{h,n}, x]\big) \\
&= \sum_{k=0}^{n-1} \mu_s\big(A_{h,k}(y)\big) + \mu_s\big(\big[a_{h,n}, T_{h,n}(y)\big] \cap [a_{h,n}, x]\big) \\
&= \sum_{k=0}^{n-1} \mu_s\big(\big[a_{h,k}, T_{h,k}(y)\big]\big) + \mu_s\big(\big[a_{h,n}, \min\{x, T_{h,n}(y)\}\big]\big),
\end{align*}
which can be written, since $F(x) = \mu_s([0,x])$ is increasing, as
\[ H_{0,h}(x,y) = \sum_{k=0}^{n-1} \big[F\big(T_{h,k}(y)\big) - F(a_{h,k})\big] + \min\big\{F(x), F\big(T_{h,n}(y)\big)\big\} - F(a_{h,n}). \]
If $n = 0$, the summation is absent from the formula and we have
\[ H_{0,h}(x,y) = \min\big\{F(x), F\big(T_{h,0}(y)\big)\big\} - F(a_{h,0}), \]
so that, in any case, we have
\[ H_{0,h}(x,y) = \Big(\sum_{k=0}^{n-1} \big[F\big(T_{h,k}(y)\big) - F(a_{h,k})\big]\Big)\,\delta_{\mathbb{N}^*}(n) + \min\big\{F(x), F\big(T_{h,n}(y)\big)\big\} - F(a_{h,n}). \]
Now, upon applying Sklar's Theorem, it follows that
\begin{align*}
C_{U, T_s^h(U)}(u,v) &= H_{0,h}\big(F^{-1}(u), F_h^{-1}(v)\big) = H_{0,h}\big(F^{-1}(u), F^{-1}(v)\big) \\
&= \Big(\sum_{k=0}^{n-1} \big[F_{h,k}(v) - F(a_{h,k})\big]\Big)\,\delta_{\mathbb{N}^*}(n) + \min\big\{u, F_{h,n}(v)\big\} - F(a_{h,n}),
\end{align*}
where $n := n(u;h) = n\big(F^{-1}(u); h\big) = \big\{k : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\}$. The result now follows from Proposition 3.1. $\square$

Remark 3.1.
Notice that the copula (3.4) can be expressed in terms of $\mu_s$ as
\[ C_{X_t, X_{t+h}}(u,v) = \Big(\sum_{k=0}^{n-1} \mu_s\Big(\big[a_{h,k}, T_{h,k}\big(F^{-1}(v)\big)\big]\Big)\Big)\,\delta_{\mathbb{N}^*}(n) + \mu_s\Big(\big[a_{h,n}, \min\big\{F^{-1}(u), T_{h,n}\big(F^{-1}(v)\big)\big\}\big]\Big), \tag{3.5} \]
which will prove useful in Section 5. Also, expression (3.5) is helpful if one desires to verify directly that the marginals of (3.4) are indeed uniform.

In the next proposition we address the case where $\varphi$ is an almost everywhere decreasing function. In view of Theorem 2.2, one could, at first glance, think that a result like $C_{X_0,X_h} = C_{X_t,X_{t+h}}$ would not hold anymore, but in fact it still does, as shown in the next proposition.

Proposition 3.3.
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$, let $\varphi \in L(\mu_s)$ be an almost everywhere decreasing function and let $F$ be the distribution function of $U$. Then $C_{X_0,X_h}(u,v) = C_{X_t,X_{t+h}}(u,v)$ everywhere in $I^2$ and, for any $t, h \in \mathbb{N}$, $h \neq 0$,
\[ C_{X_t, X_{t+h}}(u,v) = u + v - 1 + \Big(\sum_{k=0}^{n-1} \big[F_{h,k}(1-v) - F(a_{h,k})\big]\Big)\,\delta_{\mathbb{N}^*}(n) + \min\big\{1-u, F_{h,n}(1-v)\big\} - F(a_{h,n}), \tag{3.6} \]
for all $(u,v) \in I^2$, where $\{a_{h,k}\}_{k=0}^{2^h}$ are the end points of the nodes of $T_s^h$ and $n := n(u;h) = \big\{k : u \in \big(1 - F(a_{h,k+1}), 1 - F(a_{h,k})\big]\big\}$.

Proof: Since the inverse of an almost everywhere decreasing function is still decreasing almost everywhere and $X_t = \varphi\big(T_s^t(U)\big)$, upon applying Theorem 2.2 twice, it follows that
\begin{align*}
C_{T_s^t(U), T_s^{t+h}(U)}(u,v) &= C_{\varphi^{-1}(X_t), \varphi^{-1}(X_{t+h})}(u,v) = u - C_{X_t, \varphi^{-1}(X_{t+h})}(u, 1-v) \\
&= u - \big(1 - v - C_{X_t, X_{t+h}}(1-u, 1-v)\big),
\end{align*}
or, equivalently (changing $u$ by $1-u$ and $v$ by $1-v$),
\[ C_{X_t, X_{t+h}}(u,v) = u + v - 1 + C_{T_s^t(U), T_s^{t+h}(U)}(1-u, 1-v). \tag{3.7} \]
Now (3.6) follows upon applying Proposition 3.2 with the identity map and substituting equation (3.4) into (3.7). As for the equality $C_{X_0,X_h}(u,v) = C_{X_t,X_{t+h}}(u,v)$, Corollary 3.1 and Theorem 2.2 applied to (3.7) yield
\begin{align*}
C_{X_t, X_{t+h}}(u,v) &= u + v - 1 + C_{U, T_s^h(U)}(1-u, 1-v) \\
&= u + v - 1 + C_{\varphi^{-1}(\varphi(U)), \varphi^{-1}(\varphi(T_s^h(U)))}(1-u, 1-v) \\
&= C_{\varphi(U), \varphi(T_s^h(U))}(u,v) = C_{X_0,X_h}(u,v),
\end{align*}
everywhere in $I^2$, as desired. $\square$

Remark 3.2.
In view of the "stationarity" results of Propositions 3.1 and 3.3, a copula associated to a pair $(X_t, X_{t+h})$ from an MP process will be referred to as a lag $h$ MP copula.

The copulas in (3.4) and (3.6) are both singular, as can be readily verified. So the question that naturally arises is: for each $h$, what is the support of $C_{X_t,X_{t+h}}$? The question is addressed in the next proposition, which will be useful in Sections 5 and 6. For simplicity, for a given MP process and $h > 0$, let $\ell^+_{h,k}, \ell^-_{h,k} : \big[F(a_{h,k}), F(a_{h,k+1})\big) \to I$ be the functions defined by
\[ \ell^+_{h,k}(x) = \frac{x - F(a_{h,k})}{F(a_{h,k+1}) - F(a_{h,k})} \qquad \text{and} \qquad \ell^-_{h,k}(x) = \frac{F(a_{h,k+1}) - x}{F(a_{h,k+1}) - F(a_{h,k})}, \]
for all $k = 0, \cdots, 2^h - 1$. Notice that, for each $k$, $\ell^+_{h,k}$ is the linear function connecting the points $\big(F(a_{h,k}), 0\big)$ and $\big(F(a_{h,k+1}), 1\big)$, while $\ell^-_{h,k}$ connects the points $\big(F(a_{h,k}), 1\big)$ and $\big(F(a_{h,k+1}), 0\big)$.

Proposition 3.4.
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$, for $\varphi_1 \in L(\mu_s)$ an almost everywhere increasing function, and let $\{Y_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$, for $\varphi_2 \in L(\mu_s)$ an almost everywhere decreasing function. Also let $F$ be the distribution function of $U$. Then, for any $t, h \in \mathbb{N}$, $h > 0$,
\[ \mathrm{supp}\{C_{X_t,X_{t+h}}\} = \bigcup_{k=0}^{2^h-1} \big\{\big(u, \ell^+_{h,k}(u)\big) : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\} \tag{3.8} \]
and
\[ \mathrm{supp}\{C_{Y_t,Y_{t+h}}\} = \bigcup_{k=0}^{2^h-1} \big\{\big(u, \ell^-_{h,k}(u)\big) : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\}. \tag{3.9} \]

Proof: Let $R = [u_1, u_2] \times [v_1, v_2]$ be a rectangle in $I^2$ and let its $C_{X_t,X_{t+h}}$-volume be denoted by $V_{C_X}(R)$. Let $k \in \{0, \cdots, 2^h-1\}$ be fixed and suppose that $u_i \in \big[F(a_{h,k}), F(a_{h,k+1})\big]$. This implies that $n = k$ for all four terms in $V_{C_X}(R)$, hence the summands and constants in the copula cancel out, so that we have
\begin{align*}
V_{C_X}(R) &= \min\big\{u_2, F_{h,k}(v_2)\big\} + \min\big\{u_1, F_{h,k}(v_1)\big\} - \min\big\{u_1, F_{h,k}(v_2)\big\} - \min\big\{u_2, F_{h,k}(v_1)\big\} \\
&= V_M\big([u_1,u_2] \times [F_{h,k}(v_1), F_{h,k}(v_2)]\big),
\end{align*}
where $M(u,v) = \min\{u,v\}$ is the Fréchet upper bound copula, whose support is the main diagonal in $I^2$. Since $[u_1,u_2] \times [F_{h,k}(v_1), F_{h,k}(v_2)] \subset [F(a_{h,k}), F(a_{h,k+1})]^2$, it follows that $V_{C_X}(R) > 0$ if, and only if, $R \cap \big\{\big(u, \ell^+_{h,k}(u)\big) : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\} \neq \emptyset$.

Analogously, denoting the $C_{Y_t,Y_{t+h}}$-volume of $R$ by $V_{C_Y}(R)$, if $u_i \in \big[1 - F(a_{h,k+1}), 1 - F(a_{h,k})\big]$, we have
\begin{align*}
V_{C_Y}(R) &= \min\big\{1-u_1, F_{h,k}(1-v_1)\big\} + \min\big\{1-u_2, F_{h,k}(1-v_2)\big\} \\
&\quad - \min\big\{1-u_1, F_{h,k}(1-v_2)\big\} - \min\big\{1-u_2, F_{h,k}(1-v_1)\big\} \\
&= V_M\big([1-u_2, 1-u_1] \times [F_{h,k}(1-v_2), F_{h,k}(1-v_1)]\big). \tag{3.10}
\end{align*}
Since $[1-u_2, 1-u_1] \times [F_{h,k}(1-v_2), F_{h,k}(1-v_1)] \subset [F(a_{h,k}), F(a_{h,k+1})]^2$, $V_{C_Y}(R)$ is positive if, and only if, $R \cap \big\{\big(u, \ell^-_{h,k}(u)\big) : u \in \big[F(a_{h,k}), F(a_{h,k+1})\big)\big\} \neq \emptyset$ (notice the terms $1-v_i$ in expression (3.10), for $i = 1, 2$). The result now follows since
\[ I = \bigcup_{k=0}^{2^h-1} \big[F(a_{h,k}), F(a_{h,k+1})\big] = \bigcup_{k=0}^{2^h-1} \big[1 - F(a_{h,k+1}), 1 - F(a_{h,k})\big]. \ \square \]

Remark 3.3.
We end this section by noticing that, as an application of Propositions 3.1 and 3.3, together with the so-called copula version of Hoeffding's lemma (see Nelsen (2006)), we can show in a rather different way that an MP process is weakly stationary. Let $F_{X_t}$ be the distribution function of $X_t$ and notice that $F_{X_t}(x) = F_{X_0}(x)$, for all $t \in \mathbb{N}$, by the stationarity of $\{X_t\}_{t \in \mathbb{N}}$; since $C_{X_t,X_{t+h}}(u,v) = C_{X_0,X_h}(u,v)$, the result follows immediately.

In this section we are interested in extending the results from the previous section to the multidimensional case, that is, we are interested in deriving the copulas associated to $n$-dimensional vectors $(X_{t_1}, \cdots, X_{t_n})$, $t_1, \cdots, t_n \in \mathbb{N}$, coming from an MP process with $\varphi$ an increasing almost everywhere function. In view of Theorem 2.2, it suffices to derive the copula associated to the vector $\big(T_s^{t_1}(U), \cdots, T_s^{t_n}(U)\big)$. It turns out that there are more similarities between the bidimensional and multidimensional cases than one could expect. In fact, an expression very similar in form to (3.4) holds for the multidimensional case as well.

Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and let $\varphi \in L(\mu_s)$ be an almost everywhere increasing function. For the sake of simplicity, we shall use the following notation: for $a, b \in \mathbb{N}$, $a < b$, we shall write $x_{a:b} := (x_a, \cdots, x_b)$ and, for a function $f$, $f(x_{a:b}) := \big(f(x_a), \cdots, f(x_b)\big)$. Again we shall denote the distribution function of $U$ by $F$.

Theorem 4.1.
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$, with $\varphi \in L(\mu_s)$ an almost everywhere increasing function. Let $t_1, \cdots, t_n \in \mathbb{N}$ and set $h_i := t_i - t_1$. Then, for all $(u_1, \cdots, u_n) \in I^n$,
\[ C_{X_{t_1}, \cdots, X_{t_n}}(u_1, \cdots, u_n) = \Big(\sum_{k=0}^{n-1} F\Big(b_{h_n,k}\big(F^{-1}(u_{2:n})\big)\Big) - F(a_{h_n,k})\Big)\,\delta_{\mathbb{N}^*}(n) + \min\Big\{u_1, F\Big(b_{h_n,n}\big(F^{-1}(u_{2:n})\big)\Big)\Big\} - F(a_{h_n,n}), \tag{4.1} \]
where $n := n(u_1; h_n) = \big\{k : u_1 \in \big[F(a_{h_n,k}), F(a_{h_n,k+1})\big)\big\}$, $\{a_{h_n,k}\}_{k=0}^{2^{h_n}}$ are the end points of the nodes of $T_s^{h_n}$, for $i = 2, \cdots, n$ and $j = 0, \cdots, 2^{h_i}-1$, $T_{h_i,j}$ is given by (3.2) and, for a vector $(x_2, \cdots, x_n) \in I^{n-1}$,
\[ b_{h_n,k}(x_{2:n}) = \min_{i=2,\cdots,n} \big\{c_i(x_i; h_n, k)\big\}, \]
with
\[ c_i(x_i; h_n, k) = \begin{cases} a_{h_n,k}, & \text{if } B_i(x_i; h_n, k) = \emptyset; \\ B_i(x_i; h_n, k), & \text{otherwise}, \end{cases} \]
and
\[ B_i(x_i; h_n, k) = \min_{j=0,\cdots,2^{h_i}-1} \big\{T_{h_i,j}(x_i) : T_{h_i,j}(x_i) > a_{h_n,k} \text{ and } a_{h_i,j} < a_{h_n,k+1}\big\}. \]

Proof:
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and let $\varphi \in L(\mu_s)$ be an almost everywhere increasing function. Without loss of generality, we can assume that $0 < t_1 < \cdots < t_n$. In view of Theorem 2.2, it suffices to work with the vector $\big(T_s^{t_1}(U), \cdots, T_s^{t_n}(U)\big)$. Let $H_{t_1,\cdots,t_n}$ be the distribution function of $\big(T_s^{t_1}(U), \cdots, T_s^{t_n}(U)\big)$. Let $h_i = t_i - t_1$, for each $i = 1, \cdots, n$, and notice that $h_i > 0$ since $t_1 < t_i$, for all $i = 2, \cdots, n$. Let $(x_1, \cdots, x_n) \in (0,1)^n$ and, for the sake of simplicity, let $Y_t := T_s^t(U)$, so that we have
\begin{align*}
H_{t_1,\cdots,t_n}(x_1, \cdots, x_n) &= P\big(T_s^{t_1}(U) \leq x_1, \cdots, T_s^{t_n}(U) \leq x_n\big) \\
&= P\big(Y_{t_1} \leq x_1,\, T_s^{h_2}(Y_{t_1}) \leq x_2, \cdots, T_s^{h_n}(Y_{t_1}) \leq x_n\big) \\
&= P\Big(Y_{t_1} \in [0,x_1],\, Y_{t_1} \in \bigcup_{k=0}^{2^{h_2}-1} A_{h_2,k}(x_2), \cdots, Y_{t_1} \in \bigcup_{k=0}^{2^{h_n}-1} A_{h_n,k}(x_n)\Big) \\
&= P\Big(Y_{t_1} \in [0,x_1] \cap \bigcap_{i=2}^{n} \Big[\bigcup_{k=0}^{2^{h_i}-1} A_{h_i,k}(x_i)\Big]\Big) \\
&= P\Big(U \in \bigcap_{i=2}^{n} \bigcup_{k=0}^{2^{h_i}-1} \big[[0,x_1] \cap A_{h_i,k}(x_i)\big]\Big), \tag{4.2}
\end{align*}
where the $A_{h_i,k}$'s are given by (3.3) and the last equality is a consequence of the $T_s$-invariance of $\mu_s$. For $k = 0, \cdots, 2^{h_n}-1$, let
\[ \widetilde{A}_{h_n,k}(x_{2:n}) = A_{h_n,k}(x_n) \cap \bigcap_{i=2}^{n-1} \Big[\bigcup_{j=0}^{2^{h_i}-1} A_{h_i,j}(x_i)\Big]. \]
In order to simplify the notation, for $i = 2, \cdots, n$ and $k = 0, \cdots, 2^{h_n}-1$, let
\[ B_i(x_i; h_n, k) = \min_{j=0,\cdots,2^{h_i}-1} \big\{T_{h_i,j}(x_i) : T_{h_i,j}(x_i) > a_{h_n,k} \text{ and } a_{h_i,j} < a_{h_n,k+1}\big\}. \]
For each $k$ and $i$, $B_i(x_i; h_n, k)$ is either the smallest $T_{h_i,j}(x_i)$ which is greater than $a_{h_n,k}$ and such that the corresponding $A_{h_i,j}(x_i)$ has non-empty intersection with $A_{h_n,k}(x_n)$, or empty. Let
\[ c_i(x_i; h_n, k) = \begin{cases} a_{h_n,k}, & \text{if } B_i(x_i; h_n, k) = \emptyset; \\ B_i(x_i; h_n, k), & \text{otherwise}. \end{cases} \]
Then, for each $k = 0, \cdots, 2^{h_n}-1$, setting
\[ b_{h_n,k}(x_{2:n}) = \min_{i=2,\cdots,n} \big\{c_i(x_i; h_n, k)\big\}, \]
it follows that
\[ \widetilde{A}_{h_n,k}(x_{2:n}) = \big[a_{h_n,k}, b_{h_n,k}(x_{2:n})\big], \]
which is a closed subset of $[a_{h_n,k}, a_{h_n,k+1}]$. Also notice that, from the definition of $b_{h_n,k}(x_{2:n})$, we could have $\widetilde{A}_{h_n,k}(x_{2:n}) = \{a_{h_n,k}\}$, in which case we set $\widetilde{A}_{h_n,k}(x_{2:n}) = \emptyset$ (although from a measure-theoretical point of view, this correction makes no difference). Again we are omitting the dependence on $s$ from the notation in both $b_{h_n,k}$ and $\widetilde{A}_{h_n,k}$. Each $b_{h_n,k}(x_{2:n})$ above determines the smallest $T_{h_i,j}(x_i)$ that lies on the $k$-th node of $T_s^{h_n}$ (which has the smallest nodes among all the $T_s^{h_i}$'s), so that the $\widetilde{A}_{h_n,k}$'s are just the intersection of all the $A_{h_i,k}(x_i)$'s with end point in the $k$-th node of $T_s^{h_n}$. Also notice that the $\widetilde{A}_{h_n,k}$'s are pairwise disjoint. One can rewrite (4.2) as
\[ H_{t_1,\cdots,t_n}(x_1, \cdots, x_n) = P\Big(U \in \bigcup_{k=0}^{2^{h_n}-1} \big[\widetilde{A}_{h_n,k}(x_{2:n}) \cap [0,x_1]\big]\Big). \tag{4.3} \]
Now, let $n := n(x_1; h_n) = \big\{k : x_1 \in [a_{h_n,k}, a_{h_n,k+1})\big\} \in \{0, \cdots, 2^{h_n}-1\}$ and assume for the moment that $n \geq 1$. Then (4.3) becomes
\begin{align*}
H_{t_1,\cdots,t_n}(x_1, \cdots, x_n) &= \sum_{k=0}^{n-1} P\big(U \in \widetilde{A}_{h_n,k}(x_{2:n})\big) + P\big(U \in \widetilde{A}_{h_n,n}(x_{2:n}) \cap [a_{h_n,n}, x_1]\big) \\
&= \sum_{k=0}^{n-1} \mu_s\big([a_{h_n,k}, b_{h_n,k}(x_{2:n})]\big) + \mu_s\big([a_{h_n,n}, \min\{x_1, b_{h_n,n}(x_{2:n})\}]\big) \\
&= \sum_{k=0}^{n-1} \big[F\big(b_{h_n,k}(x_{2:n})\big) - F(a_{h_n,k})\big] + \min\big\{F(x_1), F(b_{h_n,n}(x_{2:n}))\big\} - F(a_{h_n,n}).
\end{align*}
If $n = 0$, then
\[ H_{t_1,\cdots,t_n}(x_1, \cdots, x_n) = \min\big\{F(x_1), F(b_{h_n,0}(x_{2:n}))\big\} - F(a_{h_n,0}). \]
In any case, we can write
\[ H_{t_1,\cdots,t_n}(x_1, \cdots, x_n) = \Big(\sum_{k=0}^{n-1} F\big(b_{h_n,k}(x_{2:n})\big) - F(a_{h_n,k})\Big)\,\delta_{\mathbb{N}^*}(n) + \min\big\{F(x_1), F(b_{h_n,n}(x_{2:n}))\big\} - F(a_{h_n,n}). \]
Recall that the distribution function of $T_s^t(U)$ is also $F$, by the $T_s$-invariance of $\mu_s$. Now, applying Sklar's Theorem, it follows that
\begin{align*}
C_{X_{t_1},\cdots,X_{t_n}}(u_1, \cdots, u_n) &= H_{t_1,\cdots,t_n}\big(F^{-1}(u_1), \cdots, F^{-1}(u_n)\big) \\
&= \Big(\sum_{k=0}^{n-1} F\Big(b_{h_n,k}\big(F^{-1}(u_{2:n})\big)\Big) - F(a_{h_n,k})\Big)\,\delta_{\mathbb{N}^*}(n) + \min\Big\{u_1, F\Big(b_{h_n,n}\big(F^{-1}(u_{2:n})\big)\Big)\Big\} - F(a_{h_n,n}),
\end{align*}
where $n := n\big(F^{-1}(u_1); h_n\big) = \big\{k : u_1 \in \big[F(a_{h_n,k}), F(a_{h_n,k+1})\big)\big\}$, which is the desired formula. $\square$

Remark 4.1.
Notice that the proof of Theorem 4.1, from equation (4.3) on, is exactly the same as the one in Proposition 3.2, with the obvious notational adaptations.

Now we turn our attention to the case where $\varphi$ is an almost everywhere decreasing function. In view of Theorem 2.2, one cannot expect a simple expression for the copula. What happens is that the copula in this case will be a sum of the lower dimensional copulas related to the iterations $T_s^k(U)$, as the next proposition shows.

Proposition 4.1.
Let $\{X_n\}_{n \in \mathbb{N}}$ be an MP process with parameter $s \in (0,1)$ and let $\varphi \in L(\mu_s)$ be an almost everywhere decreasing function. Let $t, h_1, \cdots, h_n \in \mathbb{N}$, $0 < h_1 < \cdots < h_n$, and set $Y_0 := U$ and $Y_k := T_s^{h_k}(U)$. Denote the copula associated to the random vector $(X_t, X_{t+h_1}, \cdots, X_{t+h_n})$ by $C_t$. Then the following relation holds:
\begin{align*}
C_t(u_0, \cdots, u_n) = 1 - (n+1) &+ \sum_{i=0}^{n} u_i + \sum_{i=0}^{n} \sum_{j=i+1}^{n} C_{Y_i,Y_j}(1-u_i, 1-u_j) + \cdots \\
&+ (-1)^{n} \sum_{k_0=0}^{n} \sum_{k_1=k_0+1}^{n} \cdots \sum_{k_{n-1}=k_{n-2}+1}^{n} C_{Y_{k_0},\cdots,Y_{k_{n-1}}}(1-u_{k_0}, \cdots, 1-u_{k_{n-1}}) \\
&+ (-1)^{n+1}\, C_{U,Y_1,\cdots,Y_n}(1-u_0, \cdots, 1-u_n), \tag{4.4}
\end{align*}
everywhere in $I^{n+1}$.

Proof: Let $t, h_1, \cdots, h_n \in \mathbb{N}$, $0 < h_1 < \cdots < h_n$, $t \neq 0$. Set $Y_0 := U$, $Y_k := T_s^{h_k}(U)$ and $y_k := \varphi^{-1}(x_k)$. We have
\begin{align*}
H_{X_0,X_{h_1},\cdots,X_{h_n}}(x_0, x_1, \cdots, x_n) &= P\big(U \geq y_0,\, Y_1 \geq y_1, \cdots, Y_n \geq y_n\big) \\
&= P\big(U \geq y_0 \,\big|\, Y_1 \geq y_1, \cdots, Y_n \geq y_n\big)\, P\big(Y_1 \geq y_1, \cdots, Y_n \geq y_n\big) \\
&= P\big(Y_1 \geq y_1, \cdots, Y_n \geq y_n\big) - P\big(U \leq y_0,\, Y_1 \geq y_1, \cdots, Y_n \geq y_n\big). \tag{4.5}
\end{align*}
Upon applying a long chain of conditioning arguments to both terms in (4.5), we arrive at
\begin{align*}
H_{X_0,X_{h_1},\cdots,X_{h_n}}(x_0, x_1, \cdots, x_n) = 1 &- \sum_{i=0}^{n} F(y_i) + \sum_{i=0}^{n} \sum_{j=i+1}^{n} H_{Y_i,Y_j}(y_i, y_j) + \cdots \\
&+ (-1)^{n} \sum_{k_0=0}^{n} \sum_{k_1=k_0+1}^{n} \cdots \sum_{k_{n-1}=k_{n-2}+1}^{n} H_{Y_{k_0},\cdots,Y_{k_{n-1}}}(y_{k_0}, \cdots, y_{k_{n-1}}) \\
&+ (-1)^{n+1} H_{U,Y_1,\cdots,Y_n}(y_0, \cdots, y_n). \tag{4.6}
\end{align*}
A simple calculation (using the $T_s$-invariance of $\mu_s$) shows that, for all $t \in \mathbb{N}^*$ and $x \in (0,1)$,
\[ F_{X_t}(x) = 1 - F\big(\varphi^{-1}(x)\big) \qquad \text{and} \qquad F_{X_t}^{-1}(x) = \varphi\big(F^{-1}(1-x)\big), \]
so that the result follows upon applying Sklar's Theorem to (4.6) (recall that $y_k = \varphi^{-1}(x_k)$). $\square$

Remark 4.2.
Notice that the copula in Proposition 4.1 can be explicitly calculated, since (4.4) is written as a sum of copulas of vectors containing U and T_s^t(U) for different t's, so that the desired formulas can be deduced from the copulas in Theorem 4.1.

The MP copulas derived in the last sections do not have readily computable formulas, especially because µ_s has no explicit expression and because even apparently simple tasks, like determining the discontinuity points of T_s^h or computing explicit formulas for the branches of T_s^h, can be highly complex. However, one can still study these copulas by using appropriate approximations to the functions appearing in the copula expression. Besides the invariant measure µ_s, computation of the bidimensional copulas discussed so far also involves the quantile function F^{-1}, the inverse of T_s^h and the end points {a_{h,k}}_{k=0}^{2^h} of the nodes of T_s^h.

In this section our goal is to derive simple approximations to these functions in order to obtain an approximation to the copula itself, which we shall prove to converge uniformly in its arguments to the true copula. The approximations presented here are simple ones, usually a linear interpolation based on a grid of values, but the technique and results we apply are stronger and cover a wide range of approximations; for instance, all results still hold if we use some type of spline interpolation instead of a linear one. This is so because the functions to be approximated are generally very smooth. We also evaluate the stability and performance of the approximations through simple numerical experiments.

Approximation to µ_s

We start with an approximation to µ_s. In this direction there are at least two ways to compute approximations to µ_s.
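Both routes ultimately reduce to iterating T_s. As a concrete illustration of the truncation idea described next, the sketch below (our own code; all function names are hypothetical) iterates the MP transformation T_s(x) = x + x^{1+s} (mod 1) and takes the fraction of orbit points falling in an interval:

```python
import numpy as np

def mp_orbit(s, x0, n):
    """Orbit (x0, T_s(x0), ..., T_s^{n-1}(x0)) of the Manneville-Pomeau
    map T_s(x) = x + x^(1+s) (mod 1)."""
    orbit = np.empty(n)
    x = x0
    for k in range(n):
        orbit[k] = x
        x = (x + x ** (1.0 + s)) % 1.0
    return orbit

def mu_n(orbit, a, b):
    """Empirical (Birkhoff) measure of [a, b]: the fraction of orbit
    points falling in the interval."""
    return float(np.mean((orbit >= a) & (orbit <= b)))

# estimated invariant measure of an interval, for s = 0.5
orbit = mp_orbit(s=0.5, x0=np.sqrt(2.0) % 1.0, n=100_000)
m = mu_n(orbit, 0.4, 0.6)
```

Stability with respect to the initial point can be checked by re-running with several different values of x0, as done in Figure 5.1 below.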
One way is by using the ideas and results outlined in Dellnitz and Junge (1999), which are based on a discretization of the Perron-Frobenius operator by means of a Galerkin-projection-type approximation, in order to compute the eigenvectors of the discretized operator corresponding to the eigenvalue 1. Although it can be used to approximate any SBR measure, the method is especially suited to approximate and study (almost) cyclical behavior of dynamical systems. However, its complexity makes an efficient implementation troublesome. A much simpler idea, which we shall adopt here, is to approximate the measure by truncating equation (2.1) at a reasonably large value of n. That is, we consider the approximating measure

  µ_n(A; s, x) = (1/n) Σ_{k=0}^{n−1} δ_{T_s^k(x)}(A),   (5.1)

which converges in a weak sense to µ_s as n tends to infinity, for almost all initial points x ∈ I and all µ_s-continuity sets A. The iterations of T_s are known to be unstable with respect to the initial point in the sense that, given a small ε > 0 and x ∈ (0,1), T_s^k(x) and T_s^k(x + ε) become far apart exponentially fast. The approximation (5.1), however, is quite stable with respect to the initial point x for large n. For instance, in Figure 5.1 we show the estimated measures µ_n([a,b]; s, x) of two fixed intervals with s = 0.5, for 50 different initial points x and three different truncation points n, the largest two being 1,000,000 and 3,000,000. All plots are in the same scale (within set) in order to make comparison possible. In Table 5.1 we show basic statistics related to Figure 5.1. Notice that, on average, the 1,000,000 and 3,000,000 iteration cases are very similar, and all cases are fairly stable with respect to the initial points (observe the scale).

Figure 5.1: Performance of the approximation (5.1) for the three truncation points (top, middle and bottom, respectively) and 50 different initial points, for s = 0.5; the left and right panels correspond to the two measured intervals.

Table 5.1: Summary statistics for the data presented in Figure 5.1.
  Set                      n_1                    n_2                    n_3
  first interval
    [min, max]   [0.12511, 0.13067]   [0.12431, 0.12901]   [0.12688, 0.12825]
    range              0.00556              0.00470              0.00137
    mean               0.12790              0.12775              0.12777
  second interval
    [min, max]   [0.15349, 0.16092]   [0.15326, 0.15944]   [0.15676, 0.15857]
    range              0.00743              0.00618              0.00181
    mean               0.15792              0.15771              0.15771
  (n_1 < n_2 < n_3; the two largest truncation points are 1,000,000 and 3,000,000 iterations.)

The next question is: how good is the approximation (5.1)? One way to test this is by testing whether the approximation is invariant under T_s. For given initial points, say x_1, …, x_k, and some interval [a,b], we calculate µ_n([a,b]; s, x_i) and µ_n(T_s^{-1}([a,b]); s, x_j). If the difference between the two quantities is small for different pairs (x_i, x_j), one can conclude that the approximation is reasonably good. In Table 5.2 we present the difference |µ_n([a,b]; s, x_i) − µ_n(T_s^{-1}([a,b]); s, x_j)| for 7 different initial points and 3 different sets [a,b]. The truncation point was taken to be 3,000,000 and s = 0.5. From Table 5.2 we conclude that the approximation is, to a very good degree, T_s-invariant. As expected, when x_i = x_j the differences are the smallest in all cases.

Table 5.2: Difference |µ_n([a,b]; s, x_i) − µ_n(T_s^{-1}([a,b]); s, x_j)| for different values of x and sets [a,b]. The truncation point was taken to be n = 3,000,000 and s = 0.5. The initial points x_1, …, x_7 are combinations of π and square roots of small integers, taken (mod 1).

In the remainder of this section we shall assume that s ∈ (0,
1) has been fixed, together with an initial point x ∈ (0,1) used to construct the approximation to µ_s. Since no confusion will arise, we shall drop s and x from the notation and write the approximation (5.1), based on a size-n iteration vector, simply as µ_n(·).

Approximating F^{-1} and the nodes of T_s^h

In order to approximate F^{-1}, one can use an empirical version based on the same iteration vector from which µ_n is derived. First we need to define an approximation to F, from which an approximation to F^{-1} will be derived. Let F̂_n be the empirical distribution based on a size-n iteration vector (x, T_s(x), …, T_s^{n−1}(x)) and let x_1, …, x_n be the jump points of F̂_n (by the choice of x, there are exactly n jump points). Consider the set L_n := {x_0 = 0, x_1, …, x_n, x_{n+1} = 1}. Given x ∈ I \ L_n, there exists a k ∈ {0, …, n} such that x ∈ (x_k, x_{k+1}). We define the approximate value of F(x), denoted by F_n(x), as the linear interpolation of x between the points (x_k, F̂_n(x_k)) and (x_{k+1}, F̂_n(x_{k+1})), that is, we set

  F_n(x) := [(F̂_n(x_{k+1}) − F̂_n(x_k)) / (x_{k+1} − x_k)] x + [F̂_n(x_k) x_{k+1} − F̂_n(x_{k+1}) x_k] / (x_{k+1} − x_k).   (5.2)

If x ∈ L_n, we simply define F_n(x) := F̂_n(x). Notice that, for each n, F_n : I → I is a one-to-one, increasing and uniformly continuous function, so that its inverse, F_n^{-1}, is well defined and is also one-to-one and uniformly continuous. In the next proposition, we show that F_n(x) → F(x) and F_n^{-1}(x) → F^{-1}(x), both limits being uniform in x.
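The construction of F_n and F_n^{-1} can be sketched as follows (our code, not the authors'; numpy's linear interpolation plays the role of (5.2), with the grids pinned at 0 and 1, and all helper names are ours):

```python
import numpy as np

def mp_orbit(s, x0, n):
    """Orbit of the MP map T_s(x) = x + x^(1+s) (mod 1)."""
    out = np.empty(n)
    x = x0
    for k in range(n):
        out[k] = x
        x = (x + x ** (1.0 + s)) % 1.0
    return out

def make_Fn(orbit):
    """Piecewise-linear F_n of (5.2) and its inverse, built from the
    jump points of the empirical distribution of the iteration vector."""
    xs = np.sort(orbit)
    n = xs.size
    Fhat = np.arange(1, n + 1) / n            # empirical CDF at the jump points
    gx = np.concatenate(([0.0], xs, [1.0]))   # grid pinned at 0 and 1
    gF = np.concatenate(([0.0], Fhat, [1.0]))
    Fn = lambda x: np.interp(x, gx, gF)
    Fn_inv = lambda u: np.interp(u, np.concatenate(([0.0], Fhat)),
                                 np.concatenate(([0.0], xs)))
    return Fn, Fn_inv

Fn, Fn_inv = make_Fn(mp_orbit(s=0.5, x0=np.sqrt(2.0) % 1.0, n=20_000))
```

As Proposition 5.1 below shows, both interpolants converge uniformly to F and F^{-1}.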
Proposition 5.1.
Let F̂_n be the empirical distribution based on an iteration vector (x, T_s(x), …, T_s^{n−1}(x)) and let x_1, …, x_n be the jump points of F̂_n. Let F_n be the approximation (5.2) based on {x_1, …, x_n} and F_n^{-1} be its inverse. Then,

  F_n(x) → F(x)  and  F_n^{-1}(x) → F^{-1}(x),

uniformly in x.

Proof:
By the Glivenko-Cantelli theorem, F̂_n(x) → F(x) uniformly in x ∈ [0,1], so that, given ε > 0, one can find n_0 := n_0(ε) > 0 such that, if n > n_0, then |F̂_n(x) − F(x)| < ε uniformly in x. Now, for x ∈ (0,1) (if x equals 0 or 1, the result is trivial), there exists a k ∈ {0, …, n} such that x ∈ [x_k, x_{k+1}). Hence, if n > n_0,

  |F_n(x) − F(x)| ≤ |F_n(x) − F̂_n(x)| + |F̂_n(x) − F(x)| < |F̂_n(x_{k+1}) − F̂_n(x_k)| + ε
    ≤ sup_{i=1,…,n−1} |F̂_n(x_{i+1}) − F̂_n(x_i)| + ε ≤ 1/n + ε,

uniformly in x. To show the convergence of the inverse, let y ∈ [0,1] and ε > 0. Each F_n^{-1} being uniformly continuous, one can find a δ := δ(ε) > 0 such that

  |x − y| < δ ⟹ |F_n^{-1}(x) − F_n^{-1}(y)| < ε.

Now, since F_n converges uniformly to F, there exists n_1 := n_1(ε) > 0 such that

  n > n_1 ⟹ |F_n(x) − F(x)| < δ, for all x ∈ I.

Also, since F is one-to-one, there exists v ∈ [0,1] such that y = F(v). Therefore, if n > n_1,

  |F_n^{-1}(y) − F^{-1}(y)| = |F_n^{-1}(F(v)) − v| = |F_n^{-1}(F(v)) − F_n^{-1}(F_n(v))| < ε,

and since n_1 is independent of y, the desired convergence follows. ∎

As for the end points {a_{h,k}}_{k=0}^{2^h} of the nodes of T_s^h, let {x_1, …, x_m} ⊂ (0,1), x_i ≠ x_j, and consider the set {T_s^h(x_1), …, T_s^h(x_m)}, for m sufficiently large (that is, at least large enough that {T_s^h(x_1), …, T_s^h(x_m)} reflects the 2^h − 1 discontinuities of T_s^h or, in other words, m ≥ 2^h). Note that a_{h,0} = 0 and a_{h,2^h} = 1, for any h. Let D = {i : T_s^h(x_i) > T_s^h(x_{i+1})} ⊂ {1, …, m−1}. The set D contains the indexes i for which the interval [x_i, x_{i+1}] contains a discontinuity of T_s^h. Let {d_j}_{j=1}^{2^h−1} denote the ordered elements of D, so that the interval [x_{d_j}, x_{d_j+1}] contains the j-th discontinuity of T_s^h. Now consider the function T*_{i,h;s} : [x_{d_i}, x_{d_i+1}] → [0,2] given by

  T*_{i,h}(x; s) := T_s^{h−1}(x) + (T_s^{h−1}(x))^{1+s},

and notice that we can write T_s^h(x) = T*_{i,h}(x; s) − δ_{[1,2]}(T*_{i,h}(x; s)). Since there is a discontinuity of T_s^h in the interval [x_{d_i}, x_{d_i+1}], we have T*_{i,h}(x_{d_i}; s) ≤ 1 ≤ T*_{i,h}(x_{d_i+1}; s), and since T*_{i,h} is continuous and increasing, there exists a point x* ∈ [x_{d_i}, x_{d_i+1}] such that T*_{i,h}(x*; s) = 1, which is precisely a_{h,i}. With this in mind, let a^m_{h,i} denote the approximation to a_{h,i} obtained from {x_1, …, x_m} by a linear interpolation between the points (x_{d_i}, T*_{i,h}(x_{d_i}; s)) and (x_{d_i+1}, T*_{i,h}(x_{d_i+1}; s)). That is, a^m_{h,i} is given by

  a^m_{h,i} = x_{d_i} + [(x_{d_i+1} − x_{d_i}) / (T*_{i,h}(x_{d_i+1}; s) − T*_{i,h}(x_{d_i}; s))] (1 − T*_{i,h}(x_{d_i}; s)),   (5.3)

for all d_i ∈ D. Clearly a^m_{h,i} → a_{h,i} as m → ∞, since |x_{d_i+1} − x_{d_i}| → 0 as m → ∞ and T*_{i,h} is continuous, for each i ∈ {1, …, 2^h − 1}.

The limits in m taken for an approximation are understood in terms of partitions: we start with a sufficiently large set of points, say I_m = {x_1, …, x_m}, and consider refinements of the form I_{m+1} = I_m ∪ {x_{m+1}}, …, I_{m+k} = I_{m+k−1} ∪ {x_{m+k}}. Suppose that R_m := R(I_m) is an approximation based on I_m. For a sequence of refinements {I_k}_{k=m+1}^{∞} we consider the sequence {R(I_k)}_{k=m+1}^{∞}. Whenever the last limit exists, we set lim_{m→∞} R_m = lim_{k→∞} R(I_k).
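A possible implementation of (5.3) on a uniform grid (our sketch; all names are hypothetical): compute T_s^{h−1} on the grid, form T*, flag the grid intervals on which T_s^h decreases, and solve the linear interpolant of T* at level 1.

```python
import numpy as np

def mp_step(x, s):
    """One application of T_s (vectorized)."""
    return (x + x ** (1.0 + s)) % 1.0

def node_estimates(s, h, m):
    """Estimates a^m_{h,i} of the interior nodes a_{h,1} < ... < a_{h,2^h - 1}
    of T_s^h, via the linear interpolation (5.3) on a uniform m-point grid."""
    x = np.linspace(1e-9, 1.0 - 1e-9, m)
    w = x.copy()
    for _ in range(h - 1):                    # w = T_s^{h-1}(x) on the grid
        w = mp_step(w, s)
    tstar = w + w ** (1.0 + s)                # T*_{.,h}(x; s), before the last mod
    th = tstar % 1.0                          # T_s^h on the grid
    drops = np.where(th[:-1] > th[1:])[0]     # intervals holding a discontinuity
    t0, t1 = tstar[drops], tstar[drops + 1]
    # linear interpolation of T* between x_d and x_{d+1}, solved at level 1
    return x[drops] + (x[drops + 1] - x[drops]) * (1.0 - t0) / (t1 - t0)

nodes1 = node_estimates(s=0.5, h=1, m=100_001)   # single interior node a_{1,1}
nodes2 = node_estimates(s=0.5, h=2, m=100_001)   # three interior nodes
```

For h = 1 the single node solves a + a^{1+s} = 1. On intervals containing a discontinuity inherited from T_s^{h−1} the bracketing assumption behind (5.3) does not hold, but the returned point still lies inside the (small) flagged interval, so the estimate remains accurate to roughly the grid spacing.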
Approximating T_{h,k}

Concerning the approximation of T_{h,k}, we shall use an argument based on an empirical inverse and linear interpolation, but we shall also need a doubling argument in order to improve the accuracy of the approximation near the discontinuities and to guarantee the uniform convergence of the approximation to its target. So let {x_1, …, x_m = 1} ⊂ I, with x_i < x_j for i < j, and consider the set {T_s^h(x_1), …, T_s^h(x_m)}, for m sufficiently large. Recall that, for y ∈ [0,1], the inverse image of y under T_s^h is a size-2^h vector, which we denoted by (T_{h,0}(y), …, T_{h,2^h−1}(y)). Let again D = {i : T_s^h(x_i) > T_s^h(x_{i+1})} ⊂ {1, …, m−1} and let {d_i}_{i=1}^{2^h−1} be the ordered points in D. Suppose that we know exactly, or have good estimates for, the nodes {a_{h,k}}_{k=0}^{2^h} of T_s^h (for instance, we could use {a^m_{h,k}}_{k=0}^{2^h}, as described before, based on the same set {x_1, …, x_m} considered here). For i = 0, …, 2^h − 1, set

  R_{h,i} = {x^{(1)}_{h,i}, …, x^{(p_i)}_{h,i}} := {a^m_{h,i}, x_{d_i+1}, …, x_{d_{i+1}}, a^m_{h,i+1}}

and

  I_{h,i} = {y^{(1)}_{h,i}, …, y^{(p_i)}_{h,i}} := {0, T_s^h(x_{d_i+1}), …, T_s^h(x_{d_{i+1}}), 1}.

Given y ∈ [0,1] and i = 0, …, 2^h − 1, there exists a y^{(k)}_{h,i} ∈ I_{h,i} such that y ∈ [y^{(k)}_{h,i}, y^{(k+1)}_{h,i}). We define the approximation T^m_{h,i}(y) of T_{h,i}(y) as the linear interpolation of y between the points (x^{(k)}_{h,i}, y^{(k)}_{h,i}) and (x^{(k+1)}_{h,i}, y^{(k+1)}_{h,i}). That is, for each i = 0, …, 2^h − 1,

  T^m_{h,i}(y) = x^{(k)}_{h,i} + [(x^{(k+1)}_{h,i} − x^{(k)}_{h,i}) / (y^{(k+1)}_{h,i} − y^{(k)}_{h,i})] (y − y^{(k)}_{h,i}).   (5.4)

Notice that if y equals 0 or 1, we have T^m_{h,i}(y) = T_{h,i}(y). Also, as the partition {x_1, …, x_m} increases, |x_{k+1} − x_k| → 0 as m → ∞, which clearly implies T^m_{h,i}(y) → T_{h,i}(y) as m → ∞, for each y ∈ [0,1] and i = 0, …, 2^h − 1. More is true: the convergence is actually uniform in y, as we show in the next proposition.

Proposition 5.2.
Let T^m_{h,k} be the approximation of T_{h,k} given by (5.4), based on a partition R_m. Then,

  T^m_{h,k}(y) → T_{h,k}(y), for each k = 0, …, 2^h − 1,

as m goes to infinity (that is, as the partition gets thinner). Moreover, the convergence is uniform in y ∈ [0,1].

Proof:
Given ε > 0, the uniform continuity of T_{h,k} implies the existence of a δ := δ(ε) > 0 such that

  |x − y| < δ ⟹ |T_{h,k}(x) − T_{h,k}(y)| < ε, for all x, y ∈ [0,1].

Take R = {x_1, …, x_{m_0} = 1} ⊂ I for a sufficiently large m_0 ∈ ℕ* such that

  sup_{i=1,…,m_0−1} |x_{i+1} − x_i| < δ.

For m > m_0, let R_m = {x*_1, …, x*_m} ⊃ R be a size-m refinement of R. Given y ∈ (0,1) and i = 0, …, 2^h − 1, let T^m_{h,i} be the approximation (5.4) based on R_m. By construction, and since y ∈ (0,1),

  T_{h,i}(x^{(k)}_{h,i}) ≤ T^m_{h,i}(y) < T_{h,i}(x^{(k+1)}_{h,i})  and  T_{h,i}(x^{(k)}_{h,i}) ≤ T_{h,i}(y) < T_{h,i}(x^{(k+1)}_{h,i}),

so that

  |T^m_{h,i}(y) − T_{h,i}(y)| ≤ |T_{h,i}(x^{(k+1)}_{h,i}) − T_{h,i}(x^{(k)}_{h,i})| ≤ sup_{j=1,…,m−1} |T_{h,i}(x_{j+1}) − T_{h,i}(x_j)| < ε,

for all y ∈ (0,1). For y ∈ {0,1}, by construction T_{h,i}(y) = T^m_{h,i}(y), so that the result follows uniformly for all y ∈ [0,1]. ∎

Approximating the lag h MP copula
With these approximations in hand, we can now define the approximation for the copula C_{X_t,X_{t+h}} when ϕ is almost everywhere increasing, given in Proposition 3.2 but in the form (3.5). For (u,v) ∈ I², n > 0 and m ≥ 2^h, we set

  C_{m,n}(u, v; h) = (Σ_{k=0}^{n*−1} µ_n([a^m_{h,k}, T^m_{h,k}(F_n^{-1}(v))])) δ_{ℕ*}(n*) + µ_n([a^m_{h,n*}, min{F_n^{-1}(u), T^m_{h,n*}(F_n^{-1}(v))}]),   (5.5)

where n* := n*(m,n) is the index k for which u ∈ [F_n(a^m_{h,k}), F_n(a^m_{h,k+1})); the indicator δ_{ℕ*}(n*) simply switches the first sum off when n* = 0. Since F_n converges uniformly to F and a^m_{h,k} converges to a_{h,k}, as m, n → ∞ the index n* converges to the corresponding index in (3.5). In the next theorem we establish the convergence of the approximation (5.5) to the true copula.

Theorem 5.1.
Let C_{m,n}(u,v;h) be given by (5.5). Then, for all (u,v) ∈ I², t ≥ 0 and h > 0,

  lim_{n→∞} lim_{m→∞} C_{m,n}(u,v;h) = lim_{m→∞} lim_{n→∞} C_{m,n}(u,v;h) = lim_{m,n→∞} C_{m,n}(u,v;h),

and the common limit is C_{X_t,X_{t+h}}(u,v) (given by (3.4)). Furthermore, the limits above are uniform in (u,v) ∈ I².

The proof of Theorem 5.1 is a consequence of the following stronger lemma.
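Before turning to the proof, we record a self-contained numerical sketch of (5.5) for h = 1 (our code, not the authors': for simplicity, the node a_{1,1} and the branch inverses T_{1,0}, T_{1,1} are obtained by bisection instead of the interpolations (5.3)-(5.4), and the raw empirical versions of F and F^{-1} are used):

```python
import numpy as np

def mp_orbit(s, x0, n):
    """Orbit of T_s(x) = x + x^(1+s) (mod 1)."""
    out = np.empty(n)
    x = x0
    for k in range(n):
        out[k] = x
        x = (x + x ** (1.0 + s)) % 1.0
    return out

def lag1_copula(s, n=100_000, x0=np.sqrt(2.0) % 1.0):
    """Approximate C_{X_t, X_{t+1}}(u, v) in the spirit of (5.5), h = 1."""
    orbit = np.sort(mp_orbit(s, x0, n))

    def Finv(u):                    # empirical quantile function
        return orbit[min(max(int(np.ceil(u * n)) - 1, 0), n - 1)]

    def mu(lo, hi):                 # empirical measure of [lo, hi]
        if hi < lo:
            return 0.0
        return (np.searchsorted(orbit, hi, side="right")
                - np.searchsorted(orbit, lo, side="left")) / n

    def solve(target, lo, hi):      # bisection: x + x^(1+s) = target on [lo, hi]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if mid + mid ** (1.0 + s) < target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    a = solve(1.0, 0.0, 1.0)                 # node: a + a^(1+s) = 1
    T10 = lambda y: solve(y, 0.0, a)         # inverse of the first branch
    T11 = lambda y: solve(1.0 + y, a, 1.0)   # inverse of the second branch

    def C(u, v):
        xu, yv = Finv(u), Finv(v)
        if xu < a:                  # n* = 0: only the last term of (5.5)
            return mu(0.0, min(xu, T10(yv)))
        return mu(0.0, T10(yv)) + mu(a, min(xu, T11(yv)))   # n* = 1
    return C

C = lag1_copula(s=0.5)
```

Up to the discretization error, the computed values stay within the Fréchet-Hoeffding bounds max{u + v − 1, 0} ≤ C(u,v) ≤ min{u, v}.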
Lemma 5.1.
Let {µ_n}_{n∈ℕ} be a sequence of probability measures defined on I such that µ_n converges weakly to µ. Let f_n : I → I be a sequence of continuous functions converging uniformly to a function f : I → I. Let {a_m}_{m∈ℕ} be a sequence of real numbers such that a_m ∈ [0,1] for all m and a_m → a. Also let g_m : [a_m, 1] → I be a sequence of continuous functions converging uniformly to a function g : I → I, and set S_{m,n}(v) := [a_m, g_m(f_n(v))] and S(v) := [a, g(f(v))]. Then,

  lim_{m→∞} lim_{n→∞} µ_n(S_{m,n}(v)) = lim_{n→∞} lim_{m→∞} µ_n(S_{m,n}(v)) = lim_{m,n→∞} µ_n(S_{m,n}(v)) = µ(S(v)),

uniformly in v ∈ I.

Proof:
For all m, n > 0 and v ∈ [0,1], let S_{m,n}(v) and S(v) be as in the statement, and let S_n(v) := [a, g(f_n(v))] and S_m(v) := [a_m, g_m(f(v))]. Notice that all sets just defined are µ-continuity sets, for all m, n and v. Since the convergence of f_n to f is uniform, we have

  lim_{m,n→∞} g_m(f_n(v)) = lim_{n→∞} lim_{m→∞} g_m(f_n(v)) = lim_{m→∞} lim_{n→∞} g_m(f_n(v)) = g(f(v)),

uniformly in v, so that both the iterated and the double limits exist and S_{m,n}(v) → S(v), for all v ∈ [0,1]. Moreover, δ_{S_{m,n}}(x) ≤ δ_I(x) uniformly in m, n and x, and since µ_n converges weakly to µ and I is a µ-continuity set, it follows that

  ∫ δ_I(x) dµ_n → ∫ δ_I(x) dµ.

Now, on the one hand, since S_{m,n}(v) → S_m(v) as n → ∞, for all v, and δ_{S_{m,n}} ≤ δ_I, by the Lebesgue convergence theorem it follows that

  µ_n(S_{m,n}(v)) = ∫ δ_{S_{m,n}}(x) dµ_n → ∫ δ_{S_m}(x) dµ,  as n → ∞,

and, since δ_{S_m} ≤ δ_I and ∫ δ_I dµ < ∞, by the Lebesgue dominated convergence theorem we conclude that

  ∫ δ_{S_m}(x) dµ → ∫ δ_S(x) dµ = µ(S(v)),  as m → ∞,

which shows that lim_{m→∞} lim_{n→∞} µ_n(S_{m,n}(v)) = µ(S(v)), the convergence holding uniformly in v. On the other hand, since δ_{S_{m,n}} ≤ δ_I and ∫ δ_I dµ_n < ∞, by the Lebesgue dominated convergence theorem it follows that

  µ_n(S_{m,n}(v)) = ∫ δ_{S_{m,n}}(x) dµ_n → ∫ δ_{S_n}(x) dµ_n,  as m → ∞,

and, since δ_{S_n} ≤ δ_I and ∫ δ_I dµ_n → ∫ δ_I dµ, by the Lebesgue convergence theorem we conclude that

  ∫ δ_{S_n}(x) dµ_n → ∫ δ_S(x) dµ = µ(S(v)),  as n → ∞,

that is, lim_{n→∞} lim_{m→∞} µ_n(S_{m,n}(v)) = µ(S(v)), which also holds uniformly in v.
Since the iterated limits are established, in order to finish the proof we need to show that the double limit exists and is equal to the iterated ones. Let ε > 0. Since µ ≪ λ, the Radon-Nikodym theorem implies the existence of a non-negative continuous function h, which is bounded since we are restricted to the interval I, such that, for any A ∈ B(I),

  µ(A) = ∫_A h(x) dλ ≤ M λ(A),

where M = sup_{x∈I} h(x) < ∞. Now, since a_m → a, one can find m_1 := m_1(ε) > 0 such that, if m > m_1,

  a_m ∈ K_1(ε) := [a − ε/(10M), a + ε/(10M)],  with  µ(K_1(ε)) ≤ M λ(K_1(ε)) = ε/5.

The uniform convergence of g_m to g implies the existence of m_2 := m_2(ε) > 0 such that, if m > m_2, |g_m(x) − g(x)| < ε/(20M) for all x ∈ I or, equivalently, taking x = f_n(v), if m > m_2,

  g_m(f_n(v)) ∈ [g(f_n(v)) − ε/(20M), g(f_n(v)) + ε/(20M)].

Now, the uniform continuity of g implies the existence of a δ := δ(ε) > 0 such that

  |x − f_n(v)| < δ ⟹ |g(x) − g(f_n(v))| < ε/(20M).

But since f_n converges to f uniformly, there exists n_1 := n_1(δ) > 0 such that

  n > n_1 ⟹ |f_n(v) − f(v)| < δ,

for all v, so that, taking x = f(v), for n > n_1 we have

  g(f_n(v)) ∈ [g(f(v)) − ε/(20M), g(f(v)) + ε/(20M)],

for all v ∈ I. Hence, setting K_2(ε) := [g(f(v)) − ε/(10M), g(f(v)) + ε/(10M)], if m > m_2 and n > n_1, it follows that

  g_m(f_n(v)) ∈ [g(f_n(v)) − ε/(20M), g(f_n(v)) + ε/(20M)] ⊆ K_2(ε),

for all v ∈ I. Also observe that µ(K_2(ε)) ≤ M λ(K_2(ε)) = ε/5. The convergence of µ_n to µ implies the existence of n_2 := n_2(ε) > 0 such that, if n > n_2 (each K_i(ε) being a µ-continuity set),

  |µ_n(K_i(ε)) − µ(K_i(ε))| < ε/5,  for i = 1, 2.

Also, if we set F_n(x) = µ_n([0,x]) and F(x) = µ([0,x]), then F is continuous (since µ ≪ λ) and F_n → F, so, by Pólya's theorem, there exists n_3 := n_3(ε) > 0 such that, if n > n_3,

  sup_{x∈I} |F_n(x) − F(x)| < ε/10.

Now notice that, if n > n_3,

  |µ_n(S(v)) − µ(S(v))| ≤ |F_n(g(f(v))) − F(g(f(v)))| + |F_n(a) − F(a)| ≤ 2 sup_{x∈I} |F_n(x) − F(x)| < ε/5,

for all v ∈ I. Observe further that, by construction, if m > max{m_1, m_2} and n > n_1, the symmetric difference of S_{m,n}(v) and S(v) is contained in K_1(ε) ∪ K_2(ε), for all v, so that, setting n_0 := n_0(ε) := max{m_1, m_2, n_1, n_2, n_3}, if m, n > n_0, we have

  |µ_n(S_{m,n}(v)) − µ(S(v))| ≤ |µ_n(S_{m,n}(v)) − µ_n(S(v))| + |µ_n(S(v)) − µ(S(v))|
    < µ_n(K_1(ε)) + µ_n(K_2(ε)) + ε/5
    ≤ |µ_n(K_1(ε)) − µ(K_1(ε))| + µ(K_1(ε)) + µ(K_2(ε)) + |µ(K_2(ε)) − µ_n(K_2(ε))| + ε/5
    < ε,

for all v, which implies the existence of the double limit, its equality with the iterated ones, and the desired uniform convergence. ∎

Proof of Theorem 5.1: First notice that, taking f_n = F_n^{-1}, g_m = T^m_{h,k} and a_m = a^m_{h,k}, it follows from Lemma 5.1 that

  µ_n([a^m_{h,k}, T^m_{h,k}(F_n^{-1}(v))]) → µ([a_{h,k}, T_{h,k}(F^{-1}(v))]),  as m, n → ∞,

uniformly in v, for each k = 0, …, n* −
1. It remains to show that

  lim_{m,n→∞} µ_n([a^m_{h,n*}, min{F_n^{-1}(u), T^m_{h,n*}(F_n^{-1}(v))}]) = µ([a_{h,n*}, min{F^{-1}(u), T_{h,n*}(F^{-1}(v))}]),

and that the iterated limits exist and are equal to the double limit. First, since we can write min{u, v} = (u + v − |u − v|)/2, it is routine to show that if f_n → f uniformly, with f_n and f uniformly continuous, and g_m → g uniformly, with g_m and g uniformly continuous, then min{f_n(u), g_m(f_n(v))} converges uniformly to min{f(u), g(f(v))} in n, m, u and v. So the problem reduces to showing that if a_m → a, g_{m,n}(u,v) is a sequence of functions such that g_{m,n}(u,v) → g(u,v) uniformly in u, v, n, m, with a_m ≤ g_{m,n}(u,v) for all u, v, n, m, and µ_n converges weakly to µ, then

  lim_{m,n→∞} µ_n([a_m, g_{m,n}(u,v)]) = µ([a, g(u,v)]),

uniformly in u and v, the double limit above being equal to the iterated limits. An argument similar to the one used in Lemma 5.1 establishes the existence and equality of the iterated limits in this case. As for the double limit, let M be as in the proof of Lemma 5.1. By the uniform convergence of g_{m,n}(u,v) to g(u,v), and since g_{m,n} and g are uniformly continuous for all m, n, there exists m_1 := m_1(ε) > 0, depending on ε only, such that, if m, n > m_1,

  g_{m,n}(u,v) ∈ K_2(ε) := [g(u,v) − ε/(10M), g(u,v) + ε/(10M)],

for all u and v, with µ(K_2(ε)) ≤ ε/5. The rest of the proof is carried out by mimicking the proof of Lemma 5.1 with the obvious adaptations. Identifying g_{m,n}(u,v), g(u,v), a_m and a with min{F_n^{-1}(u), T^m_{h,n*}(F_n^{-1}(v))}, min{F^{-1}(u), T_{h,n*}(F^{-1}(v))}, a^m_{h,n*} and a_{h,n*}, respectively, completes the proof. ∎

Remark 5.1.
Notice that neither the convergence proved in Lemma 5.1 nor the one in Theorem 5.1 is uniform in m and n.

As for the case when ϕ is almost everywhere decreasing, we observe that, in view of (3.7), the function

  C*_{m,n}(u, v; h) = u + v − 1 + C_{m,n}(1 − u, 1 − v; h)

is an approximation to the copula in (3.6). Clearly C*_{m,n} converges to the true copula as m and n tend to infinity (viewed either as an iterated or a double limit), and the convergence is uniform in (u,v).

Implementation and Random Variate Generation
The implementation of the approximations so far discussed is routine. All the approximations we mentioned can share the same iteration vector, which further improves the efficiency and precision of the task and greatly reduces the computational burden. In the top panel of Figure 5.2 we show three-dimensional plots of the lag 1 and lag 2 MP copulas for two values of s. The respective level plots are shown in the bottom panel of Figure 5.2. Notice the non-exchangeability of the copulas in all cases.

Obtaining random samples from an MP copula is a trivial task in view of Proposition 3.4. There we show that the support of an MP copula is the union of graphs of certain linear functions. The following algorithm can be used to generate a pair of variates from a bidimensional MP copula when ϕ is an almost everywhere increasing function.

1. Generate a uniform (0,1) variate u.
Figure 5.2:
From left to right: three-dimensional plots of the lag 1 MP copula for two values of s and of the lag 2 MP copula for the same parameters (top panel), with the respective level plots (bottom panel), obtained from approximation (5.5).
2. Let κ denote the index for which u ∈ [F(a_{h,κ}), F(a_{h,κ+1})] and set v = ℓ⁺_{h,κ}(u).

3. The desired pair is (u, v).

In practice the T_s-invariant probability measure is unknown and F has to be approximated. Furthermore, most of the time the nodes related to T_s^h, for h > 1 and s ∈ (0,1), cannot be analytically obtained. However, we can apply the approximations developed in this section, together with the algorithm above, to obtain approximate samples from MP copulas. In Figure 5.3 we show 500 approximate sample points from the lag 1 and lag 2 MP copulas for two values of s and ϕ an almost everywhere increasing function. Obvious modifications in the algorithm allow handling the case where ϕ is an almost everywhere decreasing function.

Figure 5.3:
Left to right: 500 approximate sample points from the lag 1 MP copula for two values of s, and from the lag 2 MP copula for the same parameters.

Remark 5.2.
For small values of the lag, the resemblance of the sample to a piecewise continuous function is very clear, but this is not always the case, as can be seen in Figure 5.4, where we show 500 approximate sample points of the lag 4, 5 and 7 MP copulas for s = 0.2. This is a general principle: for a fixed sample size, the higher the lag, the harder it is to distinguish the support of the copula based on the sample, since the number of branches of T_s^h grows as fast as 2^h. For instance, for h = 7 in Figure 5.4 it is difficult to say that the sample came from a singular copula at all.

Figure 5.4:
Left to right: 500 approximate sample points from the lag 4, 5 and 7 MP copulas for s = 0.2.

Estimation of the parameter s

In this section we apply the theory developed in Section 3 to the problem of estimating the parameter s in MP processes. This problem has been studied before in Olbermann et al. (2007), where the authors adapt and apply several estimation methods from the classical theory of long-range dependence to the problem of estimating the parameter s. In this section we propose an estimator for the parameter s based on the ideas developed in Section 3, which is both precise and fast.

The mathematical framework is as follows. Let s ∈ (0,1) and consider the associated MP process {X_n}_{n∈ℕ} for ϕ the identity map. Suppose we observe a realization x_1, …, x_N from X_n and our goal is to estimate the unknown parameter s. Let a := a(s) ∈ (1/2, (√5 − 1)/2) denote the discontinuity point of the MP transformation and notice that s and a are related by

  a + a^{1+s} = 1  ⟺  s = log(1 − a)/log(a) − 1.

Hence, the problem of estimating s is equivalent to the problem of estimating a.

To define the proposed estimator, we start by observing that Proposition 3.4 for h = 1 implies that the lag 1 MP copula's support is given by the graph of the piecewise linear function

  ℓ(x) := x / F(a),  if x ∈ [0, F(a)),   and   ℓ(x) := (x − F(a)) / (1 − F(a)),  if x ∈ [F(a), 1],

so that any (independent or correlated) sample from a lag 1 MP copula consists of points scattered along the lines defined by ℓ (see Figure 5.3). The discontinuity point of the function ℓ is precisely F(a). Let y_i = F(x_i), for i = 1, …, N, and consider the series {u_i := (y_i, y_{i+1})}_{i=1}^{N−1}. By Sklar's Theorem, {u_i}_{i=1}^{N−1} is a (correlated) sample from the lag 1 MP copula, so all points should lie on the graph of the function ℓ.

These considerations suggest the following procedure to obtain s, based on a path x_1, …, x_N of X_n, within a given accuracy ε > 0. We choose s_0 ∈ (0,1) as an initial guess for s and calculate ŷ_i = F_n(x_i; s_0), i = 1, …, N, where F_n is the approximation of F given in (5.2). Next we define {û_i := (ŷ_i, ŷ_{i+1})}_{i=1}^{N−1}, from which we estimate the slope of the two branches of the approximated sample from the lag 1 MP copula obtained in this way. The discontinuity point (and hence s) can then be easily calculated. In this manner we obtain an estimate s̃, which can be compared to s_0. If s_0 is close to the true value s, then the difference between s̃ and s_0 should be small. If not, we choose another starting value and repeat the operation until the desired accuracy is obtained. This leads to an optimization procedure to obtain s within a predefined accuracy.

To illustrate the procedure, Figure 6.1(a) shows a sample path of size N = 200 of an MP process, while Figure 6.1(b) shows the transformed path ŷ_i = F_n(x_i; s_0), i = 1, …, N. From {ŷ_i}_{i=1}^{N}, we construct the sequence {û_i}_{i=1}^{N−1}, where û_i = (ŷ_i, ŷ_{i+1}), for the correctly specified s and for the misspecified value s_0 = 0.
3. Figure 6.1(c) presents the graph of {u_i}_{i=1}^{N−1} obtained from the correct specification of s, while Figure 6.1(d) shows the graph of the misspecified one. In Figures 6.1(c) and 6.1(d), the solid lines represent the respective theoretical support of the copula given in Proposition 3.4. Some distortion in the points can be seen, due to the use of the approximation F_n instead of the theoretical F, especially at lower quantiles. From Figure 6.1(d) it is clear that the line obtained from the sequence {u_i}_{i=1}^{N−1} and the theoretical one for the chosen value of s, namely 0.3, do not match, while for the correctly specified one in Figure 6.1(c) they do.

Figure 6.1: (a) Sample path x_1, …, x_200 of an MP process with s = 0.·, starting at √·. (b) The transformed path y_i = F_n(x_i). Plot of u_i = (y_i, y_{i+1}) for the (c) correctly specified and (d) misspecified s. The solid lines correspond to the theoretical support of the respective lag 1 MP copula.

The procedure just outlined is, however, computationally expensive: for each s, calculating the approximation F_n with reasonable stability and accuracy requires the construction of an iteration vector of large size (see Figure 5.1 and Table 5.1). Such an optimization procedure can easily take hundreds of evaluations, depending on the desired accuracy, and hence can be a very time consuming task.

To overcome this difficulty, observe that little difference can be seen between Figures 6.1(a) and 6.1(b). In fact, since F is a smooth distribution function, an alternative is to apply the previous argument directly to the points v̂_i := (x_i, x_{i+1}), i = 1, …, N − 1. There will certainly be some distortion in the lines due to the absence of F, but we expect to be able to estimate the discontinuity point a based on the v_i by the same idea as before.

As an illustration, Figure 6.2 shows the plots of v_i = (x_i, x_{i+1}), i = 1, …, 199, for s ∈ {0.2, 0.4, 0.6, 0.8}, all starting at √·. The solid lines are the lines joining the points (0, 0) and (a, 1) and joining (a, 0) and (1, 1), where a denotes the correct discontinuity point of the respective MP transformation. From the graphs in Figure 6.2 we see the identification of the line based on the v_i with the correct line, especially in the second branch of the graph. This is so because a ∈ (1/2, (√5 − 1)/2), so that the second branch, being smaller, is less affected by the distortion due to the absence of F.

In order to assess the performance of the estimation procedure, we perform the following experiment. We randomly select 100 initial points in (0,
1) and for each initial point we generate a path (of size N = 200) of an MP process, for each value of s in the grid considered in Table 6.1. For each path, say x_1, …, x_200, we perform the proposed estimation procedure. (Tables with the initial values applied in our experiments and the complete simulation results are available upon request.) In order to estimate a, we applied two methods. The first one is a simple least squares fit to the points lying on the second branch of the plot of (x_i, x_{i+1}).

Figure 6.2: Plot of v_i = (x_i, x_{i+1}), i = 1, …, 199, from a sample path of an MP process with (a) s = 0.2, (b) s = 0.4, (c) s = 0.6 and (d) s = 0.8. The solid lines correspond to the lines joining the points (0, 0) and (a, 1) and joining (a, 0) and (1, 1), where a denotes the correct discontinuity point of the respective MP transformation.

The second method is the following: let (x_m, x_{m+1}) and (x_M, x_{M+1}) denote the points, among the ones lying on the second branch of {(x_i, x_{i+1})}_{i=1}^{N−1}, for which x_m is minimum and x_M is maximum. We define the estimator of a, say â, as

â = −B/A, where A := (x_{M+1} − x_{m+1})/(x_M − x_m) and B := x_{m+1} − A x_m.   (6.6)

For reference, in what follows we shall call this the min-max procedure. Geometrically, â is the inverse image of 0 by the linear function joining (x_m, x_{m+1}) and (x_M, x_{M+1}).

Figure 6.3: Plot of the estimated values for s ∈ {0.·, 0.·, 0.·} for 100 random initial points, using (a) the least squares procedure and (b) the min-max procedure. The dashed lines correspond to the correct value of s. Also shown are the histograms of the estimated values for s = 0.·.

Table 6.1 summarizes the experiment results by presenting the mean, the range, the standard deviation (st.d.) and the mean square error (mse) of the estimates. Figures 6.3(a) and 6.3(b) present graphically the results of both methods for s ∈ {0.·, 0.·, 0.·}, while Figures 6.3(c) and 6.3(d) present the histograms of the results for s = 0.·, illustrating the behavior of the estimates as s increases. The min-max procedure can be carried out even for time series of sample size as small as 20, as long as the second branch of {(x_i, x_{i+1})}_{i=1}^{N−1} contains at least 2 points, which does not always happen (for instance, for N = 110, a sample path of an MP process with s = 0.· starting at √· yields fewer than 2 such points). Also, the smaller and the larger x_m and x_M in (6.6) are, respectively, the better the estimation performance.
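The min-max procedure in (6.6) is simple enough to sketch in code. The sketch below assumes the standard form of the MP transformation, T_s(x) = x + x^(1+s) (mod 1); under this form a pair (x_i, x_{i+1}) lies on the second branch exactly when x_{i+1} < x_i, and the discontinuity point satisfies a + a^(1+s) = 1, so s can be recovered from â as s = log(1 − â)/log(â) − 1. The function names and the choice s = 0.6 are illustrative, not the paper's.

```python
import math

def mp_path(s, x0, N):
    """Orbit x_1, ..., x_N of the (assumed) standard MP map T_s(x) = x + x^(1+s) mod 1."""
    path, x = [], x0
    for _ in range(N):
        path.append(x)
        x = (x + x ** (1.0 + s)) % 1.0
    return path

def minmax_estimate(path):
    """Estimator a-hat of (6.6): the zero of the line through the extreme
    second-branch points (x_m, x_{m+1}) and (x_M, x_{M+1}).

    A pair lies on the second branch exactly when x_{i+1} < x_i, since
    there the map wraps around 1 while being increasing on each branch."""
    second = [(p, q) for p, q in zip(path[:-1], path[1:]) if q < p]
    if len(second) < 2:
        raise ValueError("need at least 2 points on the second branch")
    xm, ym = min(second)             # pair with minimal x_i
    xM, yM = max(second)             # pair with maximal x_i
    A = (yM - ym) / (xM - xm)        # slope of the joining line
    B = ym - A * xm                  # intercept
    return -B / A                    # inverse image of 0 by the line

def s_from_a(a_hat):
    """Solve a + a^(1+s) = 1 for s (valid for the assumed standard map only)."""
    return math.log(1.0 - a_hat) / math.log(a_hat) - 1.0

path = mp_path(s=0.6, x0=math.sqrt(2) / 2, N=200)
a_hat = minmax_estimate(path)
print(a_hat, s_from_a(a_hat))
```

Since each branch of the assumed map is convex, the chord through the two extreme second-branch points crosses zero at or above the true a, so â is biased upwards; the smaller x_m and the larger x_M, the smaller this bias, which matches the remark above on the estimation performance.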
Table 6.1: Summary statistics of the experiment results. Presented are the mean estimate (ŝ), the range, the standard deviation (st.d.) and the mean square error (mse) of the estimates. The min-max procedure is denoted by MM, while LS denotes the least squares procedure. Note: 0∗ means that the mse is smaller than 5 × 10^{−·}.

Conclusions

In this work we derive the copulas related to Manneville-Pomeau processes for almost everywhere monotonic functions ϕ. In the bidimensional case, we find that the copulas of any random pair (X_t, X_{t+h}) depend only on the lag h and are singular. The support of the copulas is derived as well.

As for the multidimensional case, when ϕ is increasing almost everywhere, the functional form of the copulas is very similar to the one derived in the bidimensional case. We conclude that the copulas of the vectors (X_{t_1}, …, X_{t_n}) and (U, T_s^{t_2−t_1}(U), …, T_s^{t_n−t_1}(U)) are the same. When ϕ is decreasing almost everywhere, we find that the copulas of an n-dimensional random vector from an MP process can be deduced from the ones derived for the increasing case.

The copulas derived here depend on the T_s-invariant measure µ_s, which has no explicit formula. For the bidimensional case, we propose an approximation to the copula which is shown to converge uniformly to the true copula. From this approximation, we are able to present plots of the copulas for different parameters and lags and to present a simple algorithm to generate approximate samples from the copulas. Some simple numerical calculations are presented to test the steps of the approximation.
To illustrate the usefulness of the theory, we derive a fast estimation procedure for the underlying parameter s in Manneville-Pomeau processes.

Acknowledgements
Sílvia R.C. Lopes' research was partially supported by CNPq-Brazil, by CAPES-Brazil, by INCT em Matemática and also by Pronex Probabilidade e Processos Estocásticos - E-26/170.008/2008 - APQ1. Guilherme Pumi was partially supported by CAPES/Fulbright Grant BEX 2910/06-3 and by CNPq-Brazil.
References
Billingsley, P. (1999). Convergence of Probability Measures. 2nd Edition. New York: Wiley.

Chazottes, J.-R.; Collet, P. and Schmitt, B. (2005). "Statistical Consequences of the Devroye Inequality for Processes. Applications to a Class of Non-Uniformly Hyperbolic Dynamical Systems". Nonlinearity, Vol. 18, 2341-2364. MR2165706

Dellnitz, M. and Junge, O. (1999). "On the Approximation of Complicated Dynamical Behavior". SIAM Journal on Numerical Analysis, Vol. 36, 491-515. MR1668207

Fisher, A.M. and Lopes, A. (2001). "Exact Bounds for the Polynomial Decay of Correlation, 1/f Noise and the CLT for the Equilibrium State of a Non-Hölder Potential". Nonlinearity, Vol. 14, 1071-1104. MR1862813

Joe, H. (1997). Multivariate Models and Dependence Concepts. Monographs on Statistics and Applied Probability, 73. London: Chapman & Hall. MR1462613

Kolesárová, A.; Mesiar, R. and Sempi, C. (2008). "Measure-Preserving Transformations, Copulæ and Compatibility". Mediterranean Journal of Mathematics, Vol. 5, 325-339. MR2465579

Lopes, A. and Lopes, S.R.C. (1998). "Parametric Estimation and Spectral Analysis of Piecewise Linear Maps of the Interval". Advances in Applied Probability, Vol. 30, 757-776. MR1663557

Maes, C.; Redig, F.; Takens, F.; Moffaert, A. and Verbitski, E. (2000). "Intermittency and Weak Gibbs States". Nonlinearity, Vol. 13, 1681-1698. MR1781814

Nelsen, R.B. (2006). An Introduction to Copulas. 2nd Edition. New York: Springer-Verlag. MR2197664

Olberman, B.P.; Lopes, S.R.C. and Lopes, A.O. (2007). "Parameter Estimation in Manneville-Pomeau Processes". Unpublished manuscript. arXiv:0707.1600.

Pianigiani, G. (1980). "First Return Map and Invariant Measures". Israel Journal of Mathematics, Vol. 35, 32-48. MR0576460

Pollicott, M. and Yuri, M. (1998). Dynamical Systems and Ergodic Theory. Cambridge: Cambridge University Press. MR1627681

Royden, H.L. (1988). Real Analysis. 3rd Edition. New York: Macmillan. MR1013117

Schweizer, B. and Sklar, A. (2005). Probabilistic Metric Spaces. Mineola: Dover Publications. MR0790314

Young, L.-S. (1999). "Recurrence Times and Rates of Mixing". Israel Journal of Mathematics, Vol. 110, 153-188. MR1750438

Zebrowski, J.J. (2001). "Intermittency in Human Heart Rate Variability". Acta Physica Polonica B, Vol. 32.