ON OPTIMAL EXACT SIMULATION OF MAX-STABLE AND RELATED RANDOM FIELDS ON A COMPACT SET
ZHIPENG LIU, JOSE BLANCHET, A.B. DIEKER, AND THOMAS MIKOSCH
Abstract.
We consider the random field M(t) = sup_{n≥1} {−log A_n + X_n(t)}, t ∈ T, for a set T ⊂ R^m, where (X_n) is an iid sequence of centered Gaussian random fields on T and 0 < A_1 < A_2 < ··· are the arrivals of a general renewal process on (0, ∞), independent of (X_n). In particular, a large class of max-stable random fields with Gumbel marginals have such a representation. Assume that one needs c(d) = c({t_1, ..., t_d}) function evaluations to sample X_n at d locations t_1, ..., t_d ∈ T. We provide an algorithm which, for any ε > 0, samples M(t_1), ..., M(t_d) with complexity o(c(d) d^ε) as measured in the L_p norm sense for any p ≥ 1. Moreover, if X_n has an a.s. converging series representation, then M can be a.s. approximated with error δ uniformly over T and with complexity O((δ/log(1/δ))^{−1/α}), where α relates to the Hölder continuity exponent of the process X_n (so, if X_n is Brownian motion, α = 1/2).

March 28, 2018

1. Introduction
Let X be a centered Gaussian random field on a set T ⊆ R^m, m ≥ 1, and let (X_n) be a sequence of independent and identically distributed copies of X. In addition, let (A_n) be a renewal sequence independent of (X_n). Under mild regularity conditions on X, we will provide an efficient Monte Carlo algorithm for sampling the field

(1.1)    M(t) = sup_{n≥1} {−log A_n + X_n(t) + μ(t)},  t ∈ T,

where μ : T → R is a bounded function. We will design and analyze an algorithm for the exact simulation of M(t_1), ..., M(t_d) for any choice of distinct locations t_1, ..., t_d ∈ T, and we will show that, in some sense, this algorithm is asymptotically optimal as d → ∞. The algorithm proposed here shaves off a factor of order (nearly) d from the running time of any of the existing exact sampling procedures. In particular, we will show that, under mild boundedness assumptions on X, it is as hard to sample (M(t_i))_{i=1,...,d} as it is to sample (X(t_i))_{i=1,...,d}. Therefore, at least from a simulation point of view, it is not more difficult to work with M than with X. More precisely, if it takes O(c(d)) units of computing time to sample X at d distinct locations t_1, ..., t_d ∈ T, then, for any given ε > 0, it takes o(c(d) d^ε) units to sample M at the same locations; see Theorem 2.2 for a precise formulation.

We illustrate this result by considering fractional Brownian motion X on T = [0, 1], for which c(d) = O(d log d) provided we sample at the dyadic points t_i = i 2^{−m} for i = 1, ..., 2^m = d (which we call dyadic points at level d). In the case of Brownian motion, one even has c(d) = O(d), corresponding to the simulation of d independent Gaussian random variables. Thus, in the case of fractional Brownian motion on [0, 1], we can sample M at the dyadic points at level d with complexity o(d^{1+ε}) for any ε > 0; see [3, Sec. XI.6].

(Support from NSF grant DMS-132055 and NSF grant CMMI-1538217 is gratefully acknowledged by J. Blanchet. Thomas Mikosch's research is partly supported by the Danish Research Council Grant DFF-4002-00435 "Large random matrices with heavy tails and dependence". A.B. Dieker gratefully acknowledges support from NSF grant CMMI-1252878.)
Moreover, if X has a series representation a.s. converging uniformly on T (such as the Lévy–Ciesielski representation for Brownian motion, see [28, Sec. 3.1]), we also propose an approximate simulation procedure for M with a user-defined (deterministic) bound on the error which holds with probability one uniformly throughout T. More precisely, for any δ > 0, the procedure that we present outputs an approximation M_δ to M such that

(1.2)    sup_{t∈T} |M(t) − M_δ(t)| ≤ δ  a.s.

The results concerning (1.2) are reported in Theorem 7.4. The method of designing a family (M_δ)_{δ>0} such that (1.2) holds is known as Tolerance Enforced Simulation (TES) or δ-strong simulation; see [9] and [27] for details. Note that a TES algorithm enforces a strong (almost sure) guarantee without knowledge of any specific set of sampling locations. This is a feature which distinguishes TES from more traditional algorithms in the broad literature on simulation of random fields and processes.

As will be explained later, the evaluation of M_δ(t) for fixed t takes O(1) units of computing time, while the construction of the process M_δ will often take O((δ/log(1/δ))^{−1/α}) units. The latter result holds under assumptions on the convergence of the series representation of X which, in particular, are satisfied for Brownian motion X. In the latter case, the proposed procedure achieves a complexity of order O(d) for the exact sampling of M on the dyadic points at level d (because the series truncated at level d is exact on the dyadic points at level d). Therefore, the exact sampling procedure based on Theorem 7.4 applied to the dyadic points at level d is optimal, because it takes O(d) computational cost to sample X at d dyadic points. Moreover, the convergence rate of the TES algorithm is also optimal in the Brownian case: in order to obtain a uniform error of order O(δ), one needs to discretize Brownian motion using a grid of size O((δ/log(1/δ))^{−2}); see [5].

Our results are mainly motivated by applications to the simulation of max-stable random fields. Indeed, if (A_i) is the arrival sequence of a unit rate Poisson process on (0, ∞), then M is a max-stable process in the sense of de Haan [21]. This means, in particular, that M(t) for any fixed t ∈ T has a Gumbel distribution, which is one of the max-stable distributions. The latter class of distributions consists of the non-degenerate limit distributions for the suitably centered and scaled partial maxima of an iid sequence; see for example [19]. The non-Gumbel max-stable processes with Fréchet or Weibull marginals are obtained from the representation (1.1) by suitable monotone transformations. We also mention that de Haan [21] already proved that max-stable processes with Gumbel marginals have representation (1.1), where X may have a rather general dependence structure not restricted to Gaussian X. However, the case of Gaussian X has attracted major attention. The case of Brownian X was treated in [15]; it is known as the Brown–Resnick process. In the paper [23], the case of a general Gaussian process X with stationary increments was treated, including the case of a Gaussian process X defined on a multidimensional set T, often referred to as the Smith model. It is used in environmental applications for modeling storm profiles; see for example [30]. General characterizations, including spectral representations and further properties, have been obtained as well; for example, see [29] and [23]. However, the explicit joint distribution of the max-stable process is in general not tractable. Because max-stable processes are generated as weak limits of maxima of iid random fields, max-stable models are particularly suited for modeling extremal events in spatio-temporal contexts.
These include a wide range of applications of environmental type, for example, extreme rainfall [20] and extreme temperature [32].

Recently, several exact sampling procedures for M have been proposed and studied in the literature. In [17], an elegant and easy-to-implement procedure was proposed for the case in which X has stationary increments. Such a procedure has a computational complexity at least of order O(c(d) d); see Proposition 4 in [18]. So, for example, if X is fractional Brownian motion, the procedure takes at least O(d² log d) units of computing time to produce M at the d dyadic points at level d in [0, 1]. Another exact simulation algorithm for M was recently proposed in [18]. It also has complexity O(c(d) d) (see Proposition 4 in [18]); thus the procedure in [18] takes O(d²) for fractional Brownian motion (neglecting the contribution of logarithmic factors). This method is based on the idea of simulating the extremal functions. It is completely different from the approach taken here. Additional work concentrates on max-stable processes which satisfy special characteristics. For example, [29] proposed an exact simulation algorithm for the moving maxima model under suitable uniformity conditions.

Another recent development is [26], where the authors discuss an exact sampling algorithm for max-stable fields using the so-called normalized spectral representation. If the normalized spectral functions can be sampled with cost c_NS(d), then the algorithm in [26] samples the max-stable field exactly with complexity O(c_NS(d)). However, for Gaussian-based max-stable fields, it is an open problem to devise exact sampling algorithms for the normalized spectral function, and it is unclear how c_NS(d) compares with the complexity c(d) of sampling X.

An important difference between our method and those in [17] and [18] is the following: both [17] and [18] take advantage of representations or structures which allow one to truncate the infinite max-convolution in (1.1) while preserving the simple Gaussian structure of the terms in the truncation. Because the simple structure of these terms is preserved, the number of terms in the truncation increases at least linearly in d. In contrast, we are able to truncate the number of terms in the infinite max-convolution uniformly in d. While the terms in the truncation have a slightly more complex structure (they are no longer iid Gaussian), they are still quite tractable from a simulation standpoint.

This paper is organized as follows: In Section 2 we present our main result, and in Section 3 we discuss our general strategy, based on milestone events or record-breakers. The record-breaking strategy is illustrated in Section 4 in the setting of random walks, which is needed in our context due to the presence of (A_n) in M. In Section 5 we apply the record-breaking strategy to the setting of maxima of Gaussian random vectors; this section describes the main algorithmic developments of the paper. A complexity analysis is performed in Section 6. We introduce and analyze a TES algorithm in Section 7. Finally, in Section 8, we conclude our paper with a series of empirical comparison results.

2. Main Result
This section provides a formal statement of the main result and its underlying assumptions. We assume that (A_n)_{n≥1} is a renewal sequence, as mentioned in the Introduction. In particular, A_0 = 0 and A_n = τ_1 + ··· + τ_n, n ≥ 1, where (τ_i) is an iid sequence of positive random variables, independent of (X_n).

We introduce the following technical assumptions applicable to (A_n):

A1) For any γ < E τ_1, there exists some θ_γ > 0 such that E[exp(θ_γ(γ − τ_1))] = 1.
A2) It is possible to sample step sizes under the nominal probability measure as well as under the exponentially tilted distribution E[exp(θ_γ(γ − τ_1)) I(τ_1 ∈ dt)].

We also introduce the following assumptions on the Gaussian field (X(t))_{t∈T}:

B1) E[X(t)] = 0.
B2) E[exp(p sup_{t∈T} X(t))] < ∞ for any p ≥ 1.
Remark 2.1.
By Borell's inequality [1, Thm. 2.1.1], if T is bounded, a sufficient condition for B2) is Var(X(s) − X(t)) ≤ c|s − t|^β for any s, t ∈ T and some c > 0, β > 0.

Define σ²(t) = Var(X(t)). Then, under B1) and B2),

    sup_{t∈T} σ²(t) = sup_{t∈T} E[X(t)²] ≤ E[sup_{t∈T} X(t)²] < ∞.

We also assume that sampling (X(t_i))_{i=1,...,d} costs c({t_1, ..., t_d}) ≥ d units of operations. In this paper, a single operation can be any single arithmetic operation, generating a uniform random variable, calculating a Gaussian cumulative probability function, comparing any two numbers, or retrieving a Gaussian quantile value. For simplicity in the notation, we shall simply write c(d) = c({t_1, ..., t_d}). The locations t_1, ..., t_d will be assumed given throughout our development. The following is our performance guarantee for our final algorithm, Algorithm M, presented in Section 6. A crucial part of the theorem is that the points t_1, ..., t_d for any d ≥ 1 may be arbitrary points of T.

Theorem 2.2.
Assume conditions A1), A2), and B1), B2). Then Algorithm M outputs M(t_1), ..., M(t_d) without any bias, and the total number R of operations in the execution of this algorithm satisfies E[R^p] = o(d^ε c(d)^p) for any p ≥ 1 and ε > 0.

3. Building Blocks For Our Algorithm
This section serves as a roadmap for the algorithmic elements behind our approach. We start with a few definitions:

    X̄_n = max_{i=1,...,d} X_n(t_i),   X̲_n = min_{i=1,...,d} X_n(t_i).

We shall use X̄ and X̲ to denote generic copies of X̄_n and X̲_n, respectively. We also set μ̄ = max_{i=1,...,d} μ(t_i) and μ̲ = min_{i=1,...,d} μ(t_i).

Our algorithm relies on three random times which are finite a.s. They depend on parameters a ∈ (0, 1), C ∈ R, and 0 < γ < E[A_1] to be chosen later.

(1) N_X = N_X(a, C): for all n > N_X,
    X̄_n ≤ a log n + C.
A straightforward Borel–Cantelli argument shows that N_X is finite.
(2) N_A = N_A(γ): for all n > N_A,
(3.1)    A_n ≥ γn.
(3) N_a = N_a(γ, a, C): for all n > N_a,
(3.2)    nγ ≥ A_1 n^a exp(C − X̲_1).

Applying the defining properties of these random times, we find that for n > N := max(N_A, N_X, N_a) and any t ∈ {t_1, ..., t_d},

    −log A_n + X_n(t) ≤ −log A_n + X̄_n ≤ −log A_n + a log n + C ≤ −log(nγ) + a log n + C ≤ −log A_1 + X̲_1 ≤ −log A_1 + X_1(t).
We conclude that, for t ∈ {t_1, ..., t_d},

(3.3)    sup_{n≥1} {−log A_n + X_n(t) + μ(t)} = max_{1≤n≤N} {−log A_n + X_n(t) + μ(t)},

and thus we can sample M(t_1), ..., M(t_d) with computational complexity of order N c(d), plus the overhead to identify N_A and N_X. From an algorithmic point of view, the key is the simulation of the random variables N_X, N_A, and N_a. If we know how to simulate these quantities, relation (3.3) indicates that we must be able to simulate the sequences (A_n) and (X_n) up to, and jointly with, N, which heavily depends on both sequences.
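Relation (3.3) reduces exact sampling to a finite maximum. As a small illustration, here is a hypothetical helper (our sketch, not part of the paper's algorithms) that performs the final evaluation, assuming the prefix (A_n), (X_n) up to N has already been simulated jointly with N — which is the actual difficulty addressed in the remainder of the paper:

```python
import numpy as np

def truncated_max(A, X, mu):
    """Evaluate (3.3): M(t_i) = max_{1<=n<=N} {-log A_n + X_n(t_i) + mu(t_i)}.

    A:  shape (N,)   arrivals A_1, ..., A_N
    X:  shape (N, d) field values X_n(t_i)
    mu: shape (d,)   drift mu(t_1), ..., mu(t_d)
    """
    A, X, mu = np.asarray(A), np.asarray(X), np.asarray(mu)
    return np.max(-np.log(A)[:, None] + X + mu[None, :], axis=0)
```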
Assumptions A1) and A2) can be removed without loss of generality. To see this,we first observe that for any r > τ i ( r ) = min ( τ i , r ) ≤ τ i and, therefore, A n ( r ) = τ ( r ) + · · · + τ n ( r ) ≤ A n . Moreover, we can select r > γ < E [ τ i ( r )] < E [ τ i ]. Hence we can use ( A n ( r )) n ≥ to find N A satisfying A n > A n ( r ) > γ n. Because 0 ≤ τ n ( r ) ≤ r , the moment generating function of τ n ( r ) exists on the whole real line. Byconvexity, one can always choose θ γ which satisfies E [exp( θ γ ( γ − τ ( r )))] = 1, as long as Var ( τ i ( r )) > τ i > r > τ i is deterministic, thestrategy can be implemented directly, that is, we can simply select N A deterministic. Once we find N A , we can recover ( A n ) n ≤ N A from ( A n ( r )) n ≤ N A by replacing τ n ( r ) with an independent sampleof τ n given τ n ≥ r , for any n ≤ N A such that τ n ( r ) = r , and keeping τ n ( r ) if it is less than r .Given our previous discussion, we might concentrate on how to sample from an exponentiallytilted distribution of a random variable with compact support, which may require evaluating themoment generating function in closed form. Sampling from an exponentially tilted distribution isstraightforward for random variables with finite support. So, the strategy can be implemented for (cid:98) τ i ( r ) ∆ (cid:99) / ∆ < τ i ( r ), where (cid:98)·(cid:99) is the round-down operator, picking ∆ > E [ (cid:98) τ i ( r ) ∆ (cid:99) / ∆] > γ . Once (cid:98) τ i ( r ) ∆ (cid:99) is sampled we can easily simulate τ i ( r ) using accep-tance/rejection. The details of this idea are explained in [13].4. Sampling a Random Walk up to a Last Passage Time
In this section, we discuss the simulation of the random time N_A jointly with the sequence (A_n)_{n≥1}. We lead this discussion in the context of a general random walk (S_n)_{n≥0} starting from the origin with negative drift; such a walk is eventually negative almost surely. We review an algorithm from [14] for finding a random time N_S such that S_n < 0 for all n > N_S. Our aim is to develop a sampling algorithm for (S_0, ..., S_{N_S+ℓ}) for any fixed ℓ ≥ 0. Our discussion here provides a simpler version of the algorithm in [14] and allows us to give a self-contained development of the whole procedure for sampling M(t_1), ..., M(t_d).

The algorithm is based on alternately sampling upcrossings and downcrossings of the level 0. We write ξ_0^+ = 0 and, for i ≥ 1, we recursively define

    ξ_i^− = inf{n ≥ ξ_{i−1}^+ : S_n < 0} if ξ_{i−1}^+ < ∞, and ξ_i^− = ∞ otherwise,

together with

    ξ_i^+ = inf{n ≥ ξ_i^− : S_n ≥ 0} if ξ_i^− < ∞, and ξ_i^+ = ∞ otherwise.

As usual, in these definitions the infimum of an empty set should be interpreted as ∞. Writing

    N_S = sup{ξ_n^− : ξ_n^− < ∞},

and keeping in mind that (S_n) starts at zero and has negative drift, we have by construction 0 ≤ N_S < ∞ almost surely, and S_n < 0 for n > N_S. The random variable N_S − 1 is the last element of the set {n ≥ 0 : S_n ≥ 0}.

We write P_x for the distribution of the random walk starting from x ∈ R, so that P = P_0. We assume the existence of Cramér's root, θ > 0, satisfying E[exp(θ S_1)] = 1. Also assume that we can sample a random walk starting from x under P_x^θ, which is defined with respect to P_x through an exponential change of measure: on the σ-field generated by S_1, ..., S_n we have

    dP_x/dP_x^θ = exp(−θ(S_n − x)).

Under P_x^θ, the random walk (S_n) has positive drift.

The rest of this section is organized as follows:
• In Section 4.1 we discuss sampling of downcrossing and upcrossing segments of the random walk.
• In Section 4.2 we explain how to sample beyond N_S.
• In Section 4.3 we present our full algorithm for sampling (S_0, ..., S_{N_S+ℓ}).

4.1. Downcrossings and upcrossings.
To introduce the algorithm, we first need the following definitions:

    τ_− = inf{n ≥ 1 : S_n < 0},   τ_+ = inf{n ≥ 1 : S_n ≥ 0}.

For x ≥ 0, it is immediate that we can sample a downcrossing segment S_1, ..., S_{τ_−} under P_x due to the negative drift, and we record this for later use in a pseudocode function. Throughout this paper, 'sample' in pseudocode stands for 'sample independently of anything that has been sampled already.'
Function SampleDowncrossing(x): Samples (S_1, ..., S_{τ_−}) under P_x for x ≥ 0
Step 1: Return sample of (S_1, ..., S_{τ_−}) under P_x
Step 2: EndFunction

Sampling an upcrossing segment is much more challenging because it is possible that τ_+ = ∞, so an algorithm needs to be able to detect this event within a finite amount of computing resources. For this reason, we understand sampling an upcrossing segment under P_x for x < 0 as a procedure that outputs (S_1, ..., S_{τ_+}) if τ_+ < ∞, and otherwise outputs 'degenerate.' Our algorithm is based on importance sampling and exponential tilting, techniques that are widely used for rare event simulation [3, p. 164]. Under Assumption A1), it is well known that E_x^θ[τ_+] < ∞; for instance, see [2, p. 231, Cor. 4.4]. In particular, the expected time to simulate (S_1, ..., S_{τ_+}) is finite under P_x^θ for any x < 0.

Proposition 4.1.
Let x < 0. Suppose there exists some θ > 0 with E[exp(θ S_1)] = 1. With U being a standard uniform random variable independent of (S_n) under P_x^θ, we have the following:
(1) The law of I(τ_+ < ∞) under P_x equals the law of I(U ≤ exp(−θ(S_{τ_+} − x))) under P_x^θ.
(2) The law of τ_+ given τ_+ < ∞ under P_x equals the law of τ_+ given U ≤ exp(−θ(S_{τ_+} − x)) under P_x^θ.
(3) For any k ≥ 1, the law of (S_1, ..., S_k) given τ_+ = k under P_x equals the law of (S_1, ..., S_k) given U ≤ exp(−θ(S_{τ_+} − x)) and τ_+ = k under P_x^θ.

Proof. For any integer k ≥ 1 and Borel sets B_1, ..., B_k, we have

    P_x(S_1 ∈ B_1, ..., S_k ∈ B_k, τ_+ = k) = E_x^θ[exp(−θ(S_k − x)) I(S_1 ∈ B_1, ..., S_k ∈ B_k, τ_+ = k)]
     = E_x^θ[I(U ≤ exp(−θ(S_{τ_+} − x))) I(S_1 ∈ B_1, ..., S_k ∈ B_k, τ_+ = k)].

All claims are elementary consequences of this identity, upon noting that τ_+ < ∞ under P_x^θ. □

This proposition immediately yields the following algorithm.
Function SampleUpcrossing(x): Samples (S_1, ..., S_{τ_+}) under P_x for x < 0
Step 1: S ← sample (S_1, ..., S_{τ_+}) under P_x^θ
Step 2: U ← sample a standard uniform random variable
Step 3: If U ≤ exp(−θ(S_{τ_+} − x))
Step 4: Return S
Step 5: Else
Step 6: Return 'degenerate'
Step 7: EndIf
Step 8: EndFunction
4.2. Beyond N_S. We next describe how to sample (S_1, ..., S_ℓ) from P_x conditionally on τ_+ = ∞ for x < 0. Because {τ_+ = ∞} is equivalent to {sup_{k≤ℓ} S_k < 0} ∩ {sup_{k>ℓ} S_k < 0} for ℓ ≥ 1, after sampling S_1, ..., S_ℓ, by the Markov property we can use SampleUpcrossing(S_ℓ) to verify whether or not sup_{k>ℓ} S_k < 0. This observation immediately yields an acceptance/rejection algorithm that achieves our goal.
Function SampleWithoutRecordS(x, ℓ): Samples (S_1, ..., S_ℓ) from P_x given τ_+ = ∞ for ℓ ≥ 1, x < 0
Step 1: Repeat
Step 2: S ← sample (S_1, ..., S_ℓ) under P_x
Step 3: Until sup_{1≤k≤ℓ} S_k < 0 and SampleUpcrossing(S_ℓ) is 'degenerate'
Step 4: Return S
Step 5: EndFunction
4.3. Sampling a random walk until a last passage time. We summarize our findings in this section in our full algorithm for sampling (S_0, ..., S_{N_S+ℓ}) under P given some ℓ ≥ 0. The validity of the algorithm is a direct consequence of the strong Markov property.
Algorithm S: Samples (S_0, ..., S_{N_S+ℓ}) under P for ℓ ≥ 0. We use S_end to denote the last element of S.
Step 1: S ← [0]
Step 2: Repeat
Step 3: DowncrossingSegment ← SampleDowncrossing(S_end)
Step 4: S ← [S, DowncrossingSegment]
Step 5: UpcrossingSegment ← SampleUpcrossing(S_end)
Step 6: If UpcrossingSegment is not 'degenerate'
Step 7: S ← [S, UpcrossingSegment]
Step 8: EndIf
Step 9: Until UpcrossingSegment is 'degenerate'
Step 10: If ℓ > 0
Step 11: S ← [S, SampleWithoutRecordS(S_end, ℓ)]
Step 12: EndIf
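Continuing the same toy model (reusing rng, mu, sigma, sample_downcrossing, and sample_upcrossing from the sketch in Section 4.1, with all assumptions as stated there), SampleWithoutRecordS and Algorithm S translate directly:

```python
def sample_without_record_s(x, ell):
    """(S_1, ..., S_ell) under P_x conditioned on tau_+ = inf, for x < 0."""
    while True:
        path, s = [], x
        for _ in range(ell):
            s += rng.normal(mu, sigma)
            path.append(s)
        # Accept if no upcrossing in the first ell steps nor afterwards.
        if max(path) < 0 and sample_upcrossing(path[-1]) == 'degenerate':
            return path

def algorithm_s(ell=0):
    """(S_0, ..., S_{N_S + ell}): alternate crossing segments until 'degenerate'."""
    S = [0.0]
    while True:
        S += sample_downcrossing(S[-1])
        up = sample_upcrossing(S[-1])
        if up == 'degenerate':
            break
        S += up
    if ell > 0:
        S += sample_without_record_s(S[-1], ell)
    return S
```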
5. Record-Breaker Technique for the Maximum of a Gaussian Field

After the excursion to random walks in Section 4 we return to the main theme of this paper. In particular, we stick to the notation and assumptions of Sections 1–3. Define η_0 = n_0 for some fixed n_0 to be specified later. Let (X_n)_{n≥1} be iid copies of X and define, for i ≥ 1, a sequence of record-breaking times (η_i) through

    η_i = inf{n > η_{i−1} : X̄_n > a log n + C} if η_{i−1} < ∞, and η_i = ∞ otherwise.

It is the aim of this section to develop a sampling algorithm for (X_1, ..., X_{N_X+ℓ}) for any fixed ℓ ≥ 0, where N_X = max{η_i : η_i < ∞}. Here and in what follows, we write X_i for a sample path at the given points t_1, ..., t_d ∈ T.

Section 5.1 first discusses an algorithm to sample (X_n) up to a single record. For this algorithm to work, n_0 needs to be large enough so that P(X̄ > a log n + C) is controlled for every n > n_0; the choice of n_0 is also discussed in Section 5.1. Section 5.2 describes how to sample (X_n) beyond the last record-breaking time. Section 5.3 presents our algorithm for sampling (X_1, ..., X_{N_X+ℓ}).

5.1. Breaking a single record.
We define, for n ≥ n_0,

    T_n = inf{k ≥ 1 : X̄_k > a log(n + k) + C}.

We describe an algorithm that outputs 'degenerate' if T_n = ∞ and (X_1, ..., X_{T_n}) if T_n < ∞. Ultimately, the strategy is based on acceptance/rejection. We will eventually sample T_n given T_n < ∞ using a suitable random variable K as a proxy, with probability mass function g_n, which we discuss later in this subsection. In order to apply this acceptance/rejection strategy, we need to introduce auxiliary sampling distributions.

Our algorithm makes use of a measure P^{(n)} that is designed to appropriately approximate the conditional distribution of X given X̄ > a log n + C, and which is defined through

    dP^{(n)}/dP (x) = [Σ_{i=1}^d I(x(t_i) > a log n + C)] / [Σ_{i=1}^d P(X(t_i) > a log n + C)].

For any index j ∈ {1, ..., d} and t ∈ {t_1, ..., t_d}, define w_j(t) = Cov(X(t), X(t_j))/Var(X(t_j)). Since X is centered Gaussian, X(t) − w_j(t)X(t_j) and X(t_j) are uncorrelated, hence independent. Now one readily verifies that the following algorithm outputs samples from P^{(n)}. Here and in what follows, Φ is the standard normal distribution function.

Function ConditionedSampleX(a, C, n): Samples X from P^{(n)}
Step 1: ν ← sample with probability mass function
        P(ν = j) = P(X(t_j) > a log n + C) / Σ_{i=1}^d P(X(t_i) > a log n + C)
Step 2: U ← sample a standard uniform random variable
Step 3: X(t_ν) ← σ(t_ν) Φ^{−1}(U + (1 − U)Φ((a log n + C)/σ(t_ν))), so that X(t_ν) > a log n + C
Step 4: Y ← sample of X under P
Step 5: Return Y − w_ν Y(t_ν) + w_ν X(t_ν)
Step 6: EndFunction
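A minimal Python sketch of ConditionedSampleX; the covariance matrix cov of (X(t_1), ..., X(t_d)) is an assumed input of our illustration (the paper only requires that X can be sampled at the d locations):

```python
import numpy as np
from scipy.stats import norm

def conditioned_sample_x(a, C, n, cov, rng):
    """Sample (X(t_1), ..., X(t_d)) from the tilted measure P^(n)."""
    d = cov.shape[0]
    sig = np.sqrt(np.diag(cov))
    level = a * np.log(n) + C
    tail = norm.sf(level / sig)                 # P(X(t_j) > a log n + C), j = 1..d
    nu = rng.choice(d, p=tail / tail.sum())     # Step 1: pick the conditioning index
    u = rng.uniform()                           # Steps 2-3: X(t_nu) | X(t_nu) > level
    x_nu = sig[nu] * norm.ppf(u + (1.0 - u) * norm.cdf(level / sig[nu]))
    y = rng.multivariate_normal(np.zeros(d), cov)   # Step 4: nominal field sample
    w = cov[:, nu] / cov[nu, nu]                # w_nu(t_i): regression coefficients
    return y - w * y[nu] + w * x_nu             # Step 5
```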
We are now ready to see how ConditionedSampleX is used to sample until the first record.

Function SampleSingleRecord(a, C, n): Samples (X_1, ..., X_{T_n}) for a ∈ (0, 1), C ∈ R, n ≥ n_0
Step 1: K ← sample from pmf g_n
Step 2: (X_1, ..., X_{K−1}) ← iid sample under P
Step 3: X_K ← ConditionedSampleX(a, C, n + K)
Step 4: U ← sample a standard uniform random variable
Step 5: If X̄_k ≤ a log(n + k) + C for k = 1, ..., K − 1 and U g_n(K) ≤ dP/dP^{(n+K)}(X_K)
Step 6: Return (X_1, ..., X_K)
Step 7: Else
Step 8: Return 'degenerate'
Step 9: EndIf
Step 10: EndFunction

The following proposition shows that SampleSingleRecord achieves the desired goal.
Proposition 5.1. Assume the condition

(5.1)    Σ_{i=1}^d P(X(t_i) > a log(n + k) + C) ≤ g_n(k) for k ≥ 1.

For n ≥ n_0, if (X̃_1, ..., X̃_{T̃}) has the distribution of the output of SampleSingleRecord conditioned on not being 'degenerate,' then we have
(1) the algorithm SampleSingleRecord returns 'degenerate' with probability P(T_n = ∞),
(2) the length T̃ has the same distribution as T_n given T_n < ∞, and
(3) the distribution of (X̃_1, ..., X̃_{T̃}) given T̃ = ℓ is the same as the distribution of (X_1, ..., X_ℓ) given T_n = ℓ.

Proof. Write A_m = {x ∈ R^d : max_i x_i > a log(n + m) + C} for m ≥ 1. For B_1 ⊂ A_1^c, ..., B_{k−1} ⊂ A_{k−1}^c and B_k ⊂ A_k, we have

    P(X̃_1 ∈ B_1, ..., X̃_{k−1} ∈ B_{k−1}, X̃_k ∈ B_k, T̃ = k)
     = P(K = k) P(X ∈ B_1) ··· P(X ∈ B_{k−1}) × P^{(n+k)}(U g_n(k) ≤ dP/dP^{(n+k)}(X), X ∈ B_k)
     = P(K = k) P(X ∈ B_1) ··· P(X ∈ B_{k−1}) × E^{(n+k)}[(1/g_n(k)) (dP/dP^{(n+k)})(X) I(X ∈ B_k)]
     = g_n(k) P(X ∈ B_1) ··· P(X ∈ B_{k−1}) P(X ∈ B_k)/g_n(k)
     = P(X_1 ∈ B_1, ..., X_k ∈ B_k, T_n = k),

and all claims follow from this identity. The second equality follows from the assumption, which implies that (dP/dP^{(n+k)})(x)/g_n(k) is bounded by 1 for all k ≥ 1 and x ∈ R^d. □

Choosing n_0 and the density g_n. We start with g_n, guided by (5.1) and the requirement that we need to sample from g_n. The random variable K is a proxy for the first-record epoch T_n, the distribution of which we can approximate with a union bound. This leads to the idea to use, for k ≥ 1,

(5.2)    g_n(k) = ∫_{k−1}^k φ((a log(n + s) + C)/σ̄) ds / ∫_0^∞ φ((a log(n + s) + C)/σ̄) ds,

where φ(·) is the density function of the standard normal distribution and σ̄² = max_{t∈T} Var(X(t)). The following lemma resolves the sampling question.

Lemma 5.2.
Let U be a uniform random variable on (0, 1). The quantity

    ⌈ exp{ σ̄²/a² − C/a + (σ̄/a) Φ̄^{−1}( U Φ̄((a log n + C)/σ̄ − σ̄/a) ) } − n ⌉

has probability mass function g_n, where ⌈·⌉ is the round-up operator, Φ̄ = 1 − Φ, and Φ̄^{−1} is the inverse of Φ̄.

Proof. Write f_n(U) for the expression inside the exponential operator. For k ≥ 1, we have

    P(⌈exp(f_n(U)) − n⌉ ≥ k) = P(f_n(U) > log(n + k − 1)) = Φ̄((a log(n + k − 1) + C)/σ̄ − σ̄/a) / Φ̄((a log n + C)/σ̄ − σ̄/a),

so it remains to show that this equals

    Σ_{m≥k} g_n(m) = ∫_{n+k−1}^∞ φ((a log x + C)/σ̄) dx / ∫_n^∞ φ((a log x + C)/σ̄) dx.

To see this, we note that, for y > 0,

(5.3)    ∫_y^∞ φ((a log x + C)/σ̄) dx = (1/√(2π)) ∫_{log y}^∞ exp(−(at + C)²/(2σ̄²) + t) dt
        = [ e^{−C/a} / (√(2π) φ(σ̄/a)/(σ̄/a)) ] × Φ̄((a log y + C)/σ̄ − σ̄/a) =: r(y),

and we thus obtain the claim. □

The next result shows that, for large enough n, the choice of g_n as in (5.2) ensures that (5.1) is satisfied. It also shows how P(T_n < ∞) for n ≥ n_0 can be controlled explicitly.

Proposition 5.3. If n_0 satisfies a log n_0 + C ≥ σ̄ and d r(n_0) ≤ δ for a given δ ∈ (0, 1), then, for n ≥ n_0, (5.1) is satisfied and SampleSingleRecord(a, C, n) returns 'degenerate' at least with probability 1 − δ.

Proof. Since Φ̄(x) ≤ φ(x) for x ≥ 1, and since r is nonincreasing so that d r(n) ≤ d r(n_0) ≤ δ, in view of (5.3) we have

(5.4)    Σ_{i=1}^d P(X(t_i) > a log(n + k) + C) ≤ d Φ̄((a log(n + k) + C)/σ̄)
        ≤ d φ((a log(n + k) + C)/σ̄)
        ≤ d ∫_{k−1}^k φ((a log(n + s) + C)/σ̄) ds
        = d [∫_0^∞ φ((a log(n + s) + C)/σ̄) ds] g_n(k) = d r(n) g_n(k) ≤ δ g_n(k).

This proves the first claim. Applying Proposition 5.1 and (5.4) for every k, the probability that SampleSingleRecord does not return 'degenerate' is bounded as follows:

    Σ_{k=1}^∞ P(T_n = k) ≤ Σ_{k=1}^∞ Σ_{i=1}^d P(X(t_i) > a log(n + k) + C) ≤ δ Σ_{k=1}^∞ g_n(k) = δ,

which proves the second claim. □
5.2. Beyond N_X. We next describe how to sample (X_1, ..., X_ℓ) conditionally on T_n = ∞. As in Section 4.2 we use an acceptance/rejection algorithm, but we have to modify the procedure slightly because we work with a sequence of iid random fields instead of a random walk.

Function SampleWithoutRecordX(n, ℓ): Samples (X_1, ..., X_ℓ) conditionally on T_n = ∞ for ℓ ≥ 1
Step 1: Repeat
Step 2: X ← sample (X_1, ..., X_ℓ) under P
Step 3: Until sup_{1≤k≤ℓ} [X̄_k − a log(n + k)] < C
Step 4: Return X
Step 5: EndFunction
5.3. The full algorithm. We summarize our findings in this section in our full algorithm for sampling (X_1, ..., X_{N_X+ℓ}) under P given some ℓ ≥ 0. The idea is to use SampleSingleRecord to generate the η_i from the beginning of this section. Starting from η_0 = n_0 satisfying the requirements in Proposition 5.3, we generate T_n where n is replaced by each of the subsequent η_i. As a result, we have P(η_i = ∞ | η_{i−1} < ∞) ≥ 1 − δ by Proposition 5.3. Thus, the number of records is bounded in probability by a geometric random variable with parameter 1 − δ.

Algorithm X: Samples (X_1, ..., X_{N_X+ℓ}) given a ∈ (0, 1), δ ∈ (0, 1), C ∈ R, σ̄ > 0, and ℓ ≥ 0. n_0 must satisfy the requirements in Proposition 5.3.
Step 1: X ← [ ], η ← n_0
Step 2: X ← sample (X_1, ..., X_η) under P
Step 3: Repeat
Step 4: segment ← SampleSingleRecord(a, C, η)
Step 5: If segment is not 'degenerate'
Step 6: X ← [X, segment]
Step 7: η ← length(X)
Step 8: EndIf
Step 9: Until segment is 'degenerate'
Step 10: If ℓ > 0
Step 11: X ← [X, SampleWithoutRecordX(η, ℓ)]
Step 12: EndIf
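Assembling the pieces from the sketches above (conditioned_sample_x, sample_K, and g, together with the assumed cov input), SampleSingleRecord and the record loop of Algorithm X can be sketched as follows:

```python
import numpy as np
from scipy.stats import norm

def sample_single_record(a, C, n, cov, sigbar, rng):
    """One acceptance/rejection round: returns (X_1, ..., X_K) or 'degenerate'."""
    d = cov.shape[0]
    sig = np.sqrt(np.diag(cov))
    K = sample_K(a, C, n, sigbar, rng)                        # Step 1
    X = [rng.multivariate_normal(np.zeros(d), cov) for _ in range(K - 1)]
    XK = conditioned_sample_x(a, C, n + K, cov, rng)          # Step 3
    levels = a * np.log(n + np.arange(1, K + 1)) + C
    no_early_record = all(x.max() <= lv for x, lv in zip(X, levels[:-1]))
    # dP/dP^(n+K)(x) = sum_i P(X(t_i) > level) / #{i : x(t_i) > level}
    lr = norm.sf(levels[-1] / sig).sum() / np.count_nonzero(XK > levels[-1])
    if no_early_record and rng.uniform() * g(K, a, C, n, sigbar) <= lr:
        return X + [XK]
    return 'degenerate'

def algorithm_x_records(a, C, n0, cov, sigbar, rng):
    """Steps 1-9 of Algorithm X: X_1, ..., X_{N_X} (the tail segment is omitted)."""
    d = cov.shape[0]
    X = [rng.multivariate_normal(np.zeros(d), cov) for _ in range(n0)]
    while True:
        seg = sample_single_record(a, C, len(X), cov, sigbar, rng)
        if seg == 'degenerate':
            return X
        X += seg
```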
6. Final Algorithm and Proof of Theorem 2.2

In this section, we give our final algorithm. We also provide the remaining arguments showing why the algorithm outputs exact samples, and we prove a bound on the computational complexity. Together these proofs establish Theorem 2.2. We start with a description of our final algorithm for sampling M, which exploits that for S_n = γn − A_n and N_A = N_S, we have S_n < 0, that is, A_n > γn, for n > N_A.

Algorithm M: Samples (M(t_1), ..., M(t_d)) given δ ∈ (0, 1), a ∈ (0, 1), 0 < γ < E[A_1], C ∈ R, and σ̄.
Step 1: Sample A_1, ..., A_{N_A} using Steps 1–9 from Algorithm S with S_n = γn − A_n.
Step 2: Sample X_1, ..., X_{N_X} using Steps 1–9 from Algorithm X.
Step 3: Calculate N_a with (3.2) and set N = max(N_A, N_X, N_a).
Step 4: If N > N_A
Step 5: Sample A_{N_A+1}, ..., A_N as in Steps 10–12 from Algorithm S with S_n = γn − A_n.
Step 6: EndIf
Step 7: If N > N_X
Step 8: Sample X_{N_X+1}, ..., X_N as in Steps 10–12 from Algorithm X.
Step 9: EndIf
Step 10: Return M(t_i) = max_{1≤n≤N} {−log A_n + X_n(t_i) + μ(t_i)} for i = 1, ..., d.

The pathwise construction in Section 3 implies that the output of Algorithm M is an exact sample of {M(t_1), ..., M(t_d)}. Thus it remains to study the running time of Algorithm M.
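Step 3 of Algorithm M is a closed-form computation: by (3.2), one may take N_a to be the smallest integer n with nγ ≥ A_1 n^a exp(C − X̲_1), which is (6.1) below. A one-line helper, with A_1 and X̲_1 assumed already sampled:

```python
import numpy as np

def compute_N_a(A1, X1_min, a, C, gamma):
    """Smallest n with n*gamma >= A_1 * n**a * exp(C - min_i X_1(t_i)); see (6.1)."""
    return int(np.ceil((A1 * np.exp(C - X1_min) / gamma) ** (1.0 / (1.0 - a))))
```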
6.1. Computational complexity. We next study the truncation point N in (3.3). Because the number of records is bounded in probability by a geometric random variable, it is clear that N < ∞ almost surely.

Our aim is to study the dependence of our algorithm on the dimension d. The only places where d enters the algorithm are in the definition of n_0 and the measure P^{(n)}. Sampling from the latter happens at most a geometric number of times with parameter 1 − δ, so the computational complexity is dominated by the choice of n_0.

For any ζ > 0, if d is large enough and if we ignore rounding, the following choice of n_0 = n_0(d),

    log(n_0(d)) = σ̄²/a² − C/a + (σ̄/a) √( (2 + ζ) log( d e^{−C/a} / (δ √(2π) φ(σ̄/a)/(σ̄/a)) ) ),

satisfies the assumption d r(n_0) ≤ δ of Proposition 5.3.

The following result will be needed for the proof of the second part of Theorem 2.2. Recall that K is a positive integer-valued random variable with probability mass function g_n.

Lemma 6.1.
For p ≥ 1, we have log(E[K^p]) = O(log n_0) as d → ∞.

Proof. Assume n ≥ n_0 sufficiently large. Then

    E[K^p] = Σ_{k=1}^∞ k^p g_n(k) ≤ ∫_0^∞ (s + n)^p φ((a log(n + s) + C)/σ̄) ds / ∫_0^∞ φ((a log(n + s) + C)/σ̄) ds
     = e^{p(p+2)σ̄²/(2a²) − Cp/a} Φ̄((a log n + C)/σ̄ − (p + 1)σ̄/a) / Φ̄((a log n + C)/σ̄ − σ̄/a).

Using the bounds Φ̄(x) ≤ φ(x)/x and Φ̄(x) ≥ x φ(x)/(x² + 1), the ratio of Gaussian tails is, for all n large enough, at most

    2 exp( −p²σ̄²/(2a²) + (pσ̄/a)((a log n + C)/σ̄ − σ̄/a) ),

so that, combining the exponents, E[K^p] ≤ 2 exp(p log n). Therefore log E[K^p] ≤ p log n + log 2, which is O(log n_0). □

Next we show that log E[N_X^p] = O(√(log d)). We have the decomposition

    N_X = n_0 + Σ_{i=1}^G K_i,

where the K_i are iid copies of K, G is the index of the last segment that is not 'degenerate', and the definition of n_0 implies log(n_0(d)) = O(√(log d)).
Proposition 5.3 shows that G is bounded almost surely by a geometric random variable G′ with parameter δ, where G′ can be taken independent of the sequence (K_i). Therefore, we have by Jensen's inequality

    E[N_X^p] ≤ E[(n_0 + Σ_{i=1}^{G′} K_i)^p] ≤ E[(n_0^p + Σ_{i=1}^{G′} K_i^p)(1 + G′)^{p−1}] = n_0^p E[(1 + G′)^{p−1}] + E[K^p] E[G′(1 + G′)^{p−1}].

Therefore we have shown that log E[N_X^p] = O(√(log d)), which means that E[N_X^p] increases more slowly than d^ε for any ε > 0. The random variables N_A and N_a do not depend on d, so we only need to show E[N_A^p] < ∞ and E[N_a^p] < ∞.

Recall that in Section 4 we sample the downcrossing segments of the random walk under the nominal distribution and the upcrossing segments under the exponentially tilted distribution. We denote by τ_i^− the length of the i-th downcrossing segment and by τ_i^+ the length of the i-th upcrossing segment. Therefore,

    N_A ≤ Σ_{i=1}^L (τ_i^− + τ_i^+),

where L is the first time that the upcrossing segment is 'degenerate'. Recall that τ_+ denotes the first upcrossing time of level 0. Because for any x ≤ 0,

    P_x(τ_+ = ∞) ≥ P_0(τ_+ = ∞) > 0,

L is a.s. bounded by a geometric random variable L′ with parameter q < 1. Without loss of generality (see Remark 3.1), A_n has step sizes bounded by r > 0, so that S_{τ_i^+} ≤ γ and S_{τ_i^−} ≥ γ − r. Thus, by Theorem 8.1 in [4], for any p ≥ 1 and ε > 0, there exists some constant V > 0 such that E[(τ_i^−)^{p(1+ε)}] < V and E[(τ_i^+)^{p(1+ε)}] < V. Again using Jensen's inequality, together with Hölder's inequality, we obtain

    E[N_A^p] ≤ E[(Σ_{i=1}^{L′}(τ_i^− + τ_i^+))^p] ≤ E[ (Σ_{i=1}^{L′}((τ_i^−)^p + (τ_i^+)^p)/(2L′)) (2L′)^p ]
     ≤ Σ_{i=1}^∞ E[((τ_i^−)^p + (τ_i^+)^p) I(L′ ≥ i)(2L′)^{p−1}]
     ≤ (2V)^{1/(1+ε)} ( E[(2L′)^{(p−1)(1+ε)/ε} L′] )^{ε/(1+ε)} < ∞.

The value of N_a is only required to satisfy (see (3.2))

(6.1)    N_a ≥ (A_1 exp(C − X̲_1)/γ)^{1/(1−a)},

where a ∈ (0, 1). For a ∈ (0, 1), using the independence of A_1 and X̲_1,

    E[N_a^p] ≤ (exp(C)/γ)^{p/(1−a)} E[A_1^{p/(1−a)}] E[exp(−p X̲_1/(1−a))] < ∞,

where the last expectation is finite by Assumption B2). When a = 1, with a proper choice of C, (3.2) always holds.

6.2. Choosing a, C, and γ. Although the values of a ∈ (0, 1), γ ∈ (0, E[A_1]), and C ∈ R do not affect the order of the computational complexity of our algorithm, we are still interested in discussing some guiding principles which can be used to choose those parameters for a reasonably good implementation.

First, note that among N_X, N_A, and N_a, only N_X increases to ∞ as the number d of sampled locations increases to ∞. (Although N_a also increases in d, it remains bounded since X̲_1 decreases to the infimum of X_1 over T.) Assuming that C has been fixed, we can see that N_X decreases pathwise as a increases; therefore we should try to choose a close to 1. On the other hand, as a ↗ 1, if A_1 exp(C − X̲_1) > γ, then N_a ↗ ∞. This analysis highlights a trade-off between the values of N_X and N_a with respect to the choice of a. Because E[N_X] is not explicitly tractable, we can reasonably balance the computational effort by equating n_0 with E[N_a]. In particular, we look for the largest value of a ∈ (0, 1) satisfying the following equation:

(6.2)    exp( σ̄²/a² − C/a + (σ̄/a) Φ̄^{−1}( δ √(2π) φ(σ̄/a) e^{C/a} / (d σ̄/a) ) ) = E[(A_1 exp(C − X̲_1)/γ)^{1/(1−a)}].

Note that the left-hand side converges to infinity as a ↘ 0, while the right-hand side converges to infinity as a ↗ 1. One can first estimate the right-hand side, for instance by Monte Carlo using samples of A_1 and X̲_1, and then search for the desired a numerically.

Another approach consists of selecting a = 1 and adjusting C so that (3.2) holds true for all n ≥ 1; to this end we choose C = X̲_1 + log(γ/A_1). The value of C is then random, but the algorithms can be modified accordingly, by changing the definition of n_0, which depends on C. However, the expected computational cost has the same order as in the case when C is deterministic.

Similarly, N_A increases pathwise as γ increases, while N_a decreases as γ increases. One could obtain the empirical average value of N_A via simulation, and choose γ accordingly such that N_A and N_a are balanced.

7. Tolerance Enforced Simulation
In this section we illustrate a general procedure which can be applied so that, for any given δ > 0, one obtains a process M_δ with the property that

    P( sup_{t∈T} |M(t) − M_δ(t)| ≤ δ ) = 1.

For ease of notation we focus on the case T = [0, 1], but the discussion applies to more general compact sets T, as long as one has an infinite series representation for X which satisfies certain regularity conditions.

A TES estimator can be used to easily obtain error bounds for sample-path functionals of the underlying field. For example, in the context of parametric catastrophe bonds, it is not uncommon to use the average extreme precipitation over a certain geographical region as the trigger; see [24]. This motivates estimating E[u(∫_T M(s) ds)] for some function u that is specified by the contract characteristics of the catastrophe bond. If u is Lipschitz continuous with Lipschitz constant 1, then one immediately obtains

    | E[u(∫_T M(s) ds)] − E[u(∫_T M_ε(s) ds)] | ≤ |T| ε.

The form of the TES estimator discussed in this section has the feature that ∫_T M_ε(s) ds can be evaluated in closed form. Thus, a TES estimator facilitates an error analysis that could otherwise be significantly more involved.

The technique presented in this section is not limited to Gaussian processes, and we do not make this assumption here. As a result, we do not use Assumptions B1) and B2) in this section, but we replace them with C1)–C4) below. However, Assumptions A1) and A2) on the renewal sequence (A_n) are in force throughout this section.

7.1. An infinite series representation.
We assume that (X_n(t))_{t∈T} can be expressed as an almost surely convergent series of basis functions with random weights. We illustrate the procedure with a particularly convenient family of basis functions.

First, let us write any m ≥ 1 as m = 2^j + k for j ≥ 0 and 0 ≤ k ≤ 2^j − 1, and note that there is only one way to write m in this form. We assume that there exists a sequence of basis functions (Λ_m(·))_{m≥0}, with support on [0, 1] (i.e., Λ_m(t) = 0 for t ∉ [0, 1]), such that |Λ_0(t)|, |Λ_1(t)| ≤ 1 for t ∈ [0, 1] and, for m ≥ 2,

    Λ_m(t) = Λ_1(2^j(t − k/2^j)).

In other words, for m ≥ 2, each Λ_m(·) is a wavelet with the shape of Λ_1(·), shrunk horizontally by a factor of 2^j and shifted to start at k/2^j.

We introduce normalizing constants λ_0 > 0 and λ_m = λ′ 2^{−jα} for m ≥ 1, where α ∈ (0, 1) and λ′ > 0. Finally, we assume that

    X_n(t) = Σ_{m=0}^∞ Z_{m,n} Λ_m(t) λ_m,

where the random variables (Z_{m,n})_{m≥0, n≥1} are iid. We shall use Z to denote a generic copy of the Z_{m,n}'s, and we shall impose suitable assumptions on the tail decay of Z. The parameter α relates to the Hölder continuity exponent of the process X_n. For example, if X_n is Brownian motion, α = 1/2. This interpretation of α will not be used in our development, but it helps to provide intuition which can be used to inform the construction of a model based on the basis functions that we consider. For more information on the connection to the Hölder properties implied by α, the reader should consult [9] and the references therein.

Throughout, we use the following total order among the pairs {(m, n) : m ≥ 0, n ≥ 1}. We say (m, n) < (m′, n′) if m + n < m′ + n′, and in case m + n = m′ + n′, we say that (m, n) is smaller than (m′, n′) if it precedes (m′, n′) in lexicographic order. In particular, we have

    (0, 1) < (0, 2) < (1, 1) < (0, 3) < (1, 2) < (2, 1) < ···.

We let θ(m, n) be the position of (m, n) in the total order. We also define η(·) : N → (N ∪ {0}) × N to be the inverse function of θ(·), and given θ ∈ N, we write η(θ) = (η_m(θ), η_n(θ)).
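For concreteness, a Python sketch of this basis and of the truncated series Σ_{m≤K} λ_m Z_m Λ_m(t), instantiated with the Brownian choices used in Section 7.3 below (Λ_0(t) = t, Λ_1 the tent function, α = 1/2, λ_0 = λ′ = 1); the function names and default arguments are ours:

```python
import numpy as np

def Lambda(m, t):
    """Basis functions: Lambda_0(t) = t on [0,1]; for m = 2^j + k >= 1,
    a tent of height 1 supported on [k/2^j, (k+1)/2^j]."""
    t = np.asarray(t, dtype=float)
    if m == 0:
        return np.where((t >= 0.0) & (t <= 1.0), t, 0.0)
    j = m.bit_length() - 1            # m = 2^j + k with j >= 0, 0 <= k < 2^j
    k = m - (1 << j)
    u = (2.0**j) * (t - k / 2.0**j)   # rescale the support to [0, 1]
    up = np.where((u >= 0.0) & (u <= 0.5), 2.0 * u, 0.0)
    down = np.where((u > 0.5) & (u <= 1.0), 2.0 * (1.0 - u), 0.0)
    return up + down

def X_trunc(t, Z, K, alpha=0.5, lam0=1.0, lamp=1.0):
    """X(t; K) = sum_{m<=K} lambda_m * Z_m * Lambda_m(t), lambda_m = lamp*2^(-j*alpha)."""
    out = lam0 * Z[0] * Lambda(0, t)
    for m in range(1, K + 1):
        j = m.bit_length() - 1
        out = out + lamp * 2.0**(-j * alpha) * Z[m] * Lambda(m, t)
    return out
```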
7.2. Building blocks for our algorithm. We now proceed to describe the construction of M_δ, which is adapted from a record-breaking technique introduced in [7]. An important building block of M_δ is the truncated series

    X_n(t; K) = Σ_{m≤K} λ_m Z_{m,n} Λ_m(t).

It is not required that X_n(·; K) agrees with the distribution of X on dyadic points, although this is the case in our primary example of Brownian motion. We abuse notation by re-using notation such as N_X and N_A throughout our discussion of TES, but the random variables are not the same as in the rest of the paper.

Our algorithm relies on three random times. We choose suitable positive functions a, ξ_1, ξ_2 and a positive constant γ; see Proposition 7.2 below for details.

(1) N_X: for k ≥ N_X and n ≥ 1,
(7.1)    sup_{t∈T} |X_n(t) − X_n(t; k)| ≤ ξ_1(k) + ξ_2(k) a(n),
and, for n ≥ N_X,
(7.2)    sup_{t∈T} |X_n(t)| ≤ (a(0)λ_0 + ξ_1(1)) + (λ_0 + ξ_2(1)) a(n).

(2) N_A = N_A(γ): for n ≥ N_A, A_n ≥ γn, and we sample N_A jointly with (A_1, ..., A_{N_A}) using Algorithm S in Section 4.

(3) N_ξ: for n ≥ N_ξ,
(7.3)    (a(0)λ_0 + ξ_1(1)) + (λ_0 + ξ_2(1)) a(n) − log(nγ) ≤ inf_{t∈[0,1]} X_1(t; N_X) − log(A_1) − ξ_1(N_X) − ξ_2(N_X) a(1).
We will choose a such that N_ξ < ∞ almost surely.

Setting N = max(N_X, N_A, N_ξ), we have, for t ∈ T and n ≥ N,

    −log(A_n) + X_n(t) ≤ −log A_n + (a(0)λ_0 + ξ_1(1)) + (λ_0 + ξ_2(1)) a(n)
     ≤ −log(nγ) + (a(0)λ_0 + ξ_1(1)) + (λ_0 + ξ_2(1)) a(n)
     ≤ −log(A_1) + inf_{t∈[0,1]} X_1(t; N_X) − ξ_1(N_X) − ξ_2(N_X) a(1)
     ≤ −log(A_1) + inf_{t∈[0,1]} X_1(t) ≤ −log(A_1) + X_1(t),

and therefore, for t ∈ T,

(7.4)    sup_{n≥1} {−log A_n + X_n(t) + μ(t)} = max_{1≤n≤N} {−log A_n + X_n(t) + μ(t)}.

If we select an integer K_δ ≥ N_X such that ξ_1(K_δ) + ξ_2(K_δ) a(N) ≤ δ, then

    M_δ(t) = max_{1≤n≤N} {−log A_n + X_n(t; K_δ) + μ(t)}

satisfies sup_{t∈T} |M(t) − M_δ(t)| ≤ δ.

It remains to explain how to simulate N_X jointly with (X_1, ..., X_N), and how to construct ξ_1 and ξ_2. For this, we use a variant of the record-breaking technique, but we first need to discuss our assumptions on the Z_{m,n}'s.

7.3. Assumptions on the Z_{m,n}'s and an example. We introduce some assumptions on the distribution of Z in order to use our record-breaking algorithm. We write F̄(·) for the right tail of the distribution of |Z|, that is, F̄(t) = P(|Z| > t) for t ≥ 0. Assume that we can find: a bounded and nonincreasing function H(·) on [0, ∞), an easy-to-evaluate eventually nonincreasing function Γ(·) on N, as well as some θ_0 > 0, b ∈ (0, 1], and ρ > 0 such that, with a(n) = ρ(log(n + 1))^b:

C1) For (m, n) satisfying θ(m, n) ≥ θ_0, we have F̄(a(m) + a(n)) ≤ H(a(m)) H(a(n)).
C2) We have Σ_{m=0}^∞ H(a(m)) < ∞.
C3) For r > θ_0, we have 1 > Γ(r) ≥ Σ_{(m,n): θ(m,n) > r} H(a(m)) H(a(n)).
C4) We have Σ_r r^ε Γ(r) < ∞ for some ε > 0.

We now verify these assumptions when X_n is Brownian motion. Similar constructions are possible for fractional Brownian motion (see [6]), but we do not work out the details here. First, Λ_0(t) = t I(t ∈ [0, 1]), Λ_1(t) = 2t I(t ∈ [0, 1/2]) + 2(1 − t) I(t ∈ (1/2, 1]), α = 1/2, and λ_0 = λ′ = 1; see [31]. Second, the Z_{m,n}'s are iid standard Gaussian random variables, and one can select H(t) = φ(t), the standard normal density, so that Assumption C1) holds for θ_0 = inf{θ : a(η_m(θ)) + a(η_n(θ)) ≥ 2√(2π)}; C2) is evident. Moreover, selecting b = 1/2 and any ρ with ρ² > 8 (which suffices for C3) and C4), as we verify next), we have

    Σ_{θ(m,n) ≥ r} H(a(m)) H(a(n)) = Σ_{θ(m,n) ≥ r} (1/(2π)) exp( −ρ²(log(m + 1) + log(n + 1))/2 )
     = Σ_{θ(m,n) ≥ r} (1/(2π)) ((m + 1)(n + 1))^{−ρ²/2} ≤ Σ_{θ(m,n) ≥ r} (1/(2π)) (m + n)^{−ρ²/2}.

The point (m, n) with θ(m, n) = r is one of the ℓ(r) points on the segment between (ℓ(r) − 1, 1) and (0, ℓ(r)), where ℓ(r) = ⌈√(2r + 1/4) − 1/2⌉. Since the diagonal m + n = k contains k such points, we therefore continue to bound as follows:

    Σ_{θ(m,n) ≥ r} (m + n)^{−ρ²/2} ≤ Σ_{k ≥ ℓ(r)} k^{1−ρ²/2} ≤ ∫_{ℓ(r)−1}^∞ x^{1−ρ²/2} dx = (ρ²/2 − 2)^{−1} (ℓ(r) − 1)^{2−ρ²/2}.

Thus, in the Brownian case we can define Γ(r) as (1/(2π)) times the right-hand side of the preceding display; since ℓ(r) grows like √(2r), Assumptions C3) and C4) indeed hold for such ρ.

Third, whenever X_n is standard Brownian motion, we have

    Σ_{m=0}^{2^r − 1} λ_m Z_{m,n} Λ_m(t) = X_n(t)

for every dyadic point t = j 2^{−r} with j = 0, 1, ..., 2^r. Therefore, once we fix any δ > 0 we obtain N, and we can continue sampling Z_{m,n} for m ≥ K_δ if needed, so that we can return

    M(t) = max_{1≤n≤N} { −log(A_n) + Σ_{m=0}^{2^r − 1} λ_m Z_{m,n} Λ_m(t) }.

Consequently, we conclude that, at least in the Brownian case, the procedure that we present here can be used to evaluate {M(j/2^r)}_{j=0}^d with d = 2^r exactly and with expected computational cost of order O(d · E[N]) = O(d) — because E[N] does not depend on d and is finite; see Theorem 7.4 below.

7.4. Breaking records for the Z_{m,n}'s. Define T_0 = 0 and, for k ≥ 1,

    T_k = inf{ θ(m, n) > T_{k−1} : |Z_{m,n}| > a(m) + a(n) }.

In this subsection, given some integer θ_0 ≥ 0, we develop a technique to sample the random set T = {T_k : T_k < ∞} ∩ {θ_0 + 1, ...} jointly with (Z_{m,n})_{m≥0, n≥1}. Indeed, given T, the Z_{m,n} are independent and have the following distributions. For θ(m, n) ≤ θ_0, Z_{m,n} has the nominal (unconditional) distribution. For θ(m, n) ∈ T, Z_{m,n} has the conditional distribution of Z given {|Z| > a(m) + a(n)}, and if θ(m, n) ∉ T with θ(m, n) > θ_0, Z_{m,n} has the conditional distribution of Z given {|Z| ≤ a(m) + a(n)}.

We first note that only finitely many T_k's are finite, so that we can once again apply a record-breaking technique, based on the record-breaking epochs T_k. Indeed, applying Assumption C1), we find that, up to finitely many terms with θ(m, n) < θ_0,

    Σ_{m,n} P(|Z_{m,n}| > a(m) + a(n)) ≤ Σ_{m,n} H(a(m)) H(a(n)) ≤ (Σ_m H(a(m)))² < ∞,

and the claim follows from the Borel–Cantelli lemma.

The function SampleRecordsZ given below, which is directly adapted from Algorithm 2w in [7], allows one to sequentially sample the elements in {T_k : T_k < ∞} jointly with the Z_{m,n}'s. The function SampleRecordsZ takes as input θ_0 satisfying Γ(θ_0) < 1.
Function SampleRecordsZ(θ_0): Samples the set T = {T_k : T_k < ∞} ∩ {θ_0 + 1, ...}
Step 1: Initialize G ← θ_0 and T ← [ ]
Step 2: u ← 1, d ← 0, V ← sample from U(0, 1)
Step 3: While u > V > d
Step 4: G ← G + 1
Step 5: d ← max(d, (1 − Γ(G)) × u)
Step 6: u ← P(|Z| ≤ a(η_m(G)) + a(η_n(G))) × u
Step 7: EndWhile
Step 8: If V ≥ u, then T ← [T, G] and go to Step 2.
Step 9: If V ≤ d, stop and return T.
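A direct Python transcription of SampleRecordsZ; here Gamma and p_no_record are assumed callables implementing Γ(·) and G ↦ P(|Z| ≤ a(η_m(G)) + a(η_n(G))) for the model at hand:

```python
def sample_records_z(theta0, Gamma, p_no_record, rng):
    """Sample T = {T_k : T_k < inf} ∩ {theta0+1, ...}; requires Gamma(theta0) < 1."""
    records, G = [], theta0
    while True:
        u, d = 1.0, 0.0
        V = rng.uniform()                    # Step 2
        while u > V > d:                     # Steps 3-7
            G += 1
            d = max(d, (1.0 - Gamma(G)) * u)
            u *= p_no_record(G)
        if V >= u:                           # Step 8: a record occurs at position G
            records.append(G)
        else:                                # Step 9: V <= d, no further records
            return records
```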
The next proposition establishes that the output of the function SampleRecordsZ has the desired distribution.

Proposition 7.1. The output from SampleRecordsZ(θ_0) is a sample of the set T = {T_k : T_k < ∞} ∩ {θ_0 + 1, ...}. Moreover, we have E[(max(0, sup T))^β] < ∞ for some β > 1.

Proof. For simplicity we assume throughout this proof that θ_0 = 0. For the first claim it suffices to show that SampleRecordsZ(0) returns T = {T_k : T_k < ∞} ∩ {1, 2, ...} without bias. We write T = T_1. In Steps 3 through 7 the algorithm iteratively constructs the sequences (u_j) and (d_j) given by

    u_j = u_{j−1} P(|Z| ≤ a(η_m(j)) + a(η_n(j))),   d_j = max(d_{j−1}, u_{j−1}(1 − Γ(j))),

with u_0 = 1 and d_0 = 0. It is evident that both sequences are monotone. Moreover, we have u_j = P(T > j) for j ≥ 1, so that lim_{j→∞} u_j = P(T = ∞). Similarly, because lim_{j→∞} Γ(j) = 0, we obtain lim_{j→∞} d_j = P(T = ∞).

Let n(V) be the number of times Step 3 is executed before either going to Step 8 or Step 9. It suffices to check that when Step 8 is executed, the element added to T has the law of T given T < ∞, and that Step 9 is executed with probability P(T = ∞). For the former, we note that, by definition of n(V) and because u_j ∈ (d_{j−1}, u_{j−1}), we have for j ≥ 1

    P(n(V) = j | V ≥ u_{n(V)}) = P(V ∈ (d_{j−1}, u_{j−1}), V ≥ u_j) / P(V ≥ u_{n(V)}) = P(V ∈ (u_j, u_{j−1})) / P(V ≥ u_{n(V)}) = (u_{j−1} − u_j) / (1 − lim_{k→∞} u_k),

which equals P(T = j | T < ∞), as desired. For the latter, we note that

    P(V ≤ d_{n(V)}) = 1 − P(V ≥ u_{n(V)}) = P(T = ∞).

In preparation for the proof of the second claim of the proposition, we bound the probability that the while loop requires more than k ≥ 1 iterations:

    P(n(V) > k) ≤ P(V ∈ (d_k, u_k)) ≤ P(V ∈ (d_k, u_{k−1})) = u_{k−1} − d_k = u_{k−1} − max{d_{k−1}, u_{k−1}(1 − Γ(k))} ≤ Γ(k).

As a consequence of the identity E[n(V)^β] = Σ_{k=0}^∞ ((k + 1)^β − k^β) P(n(V) > k) ≤ 1 + Σ_{k=1}^∞ ((k + 1)^β − k^β) Γ(k), we find that E[n(V)^β] < ∞ if Σ_k k^{β−1} Γ(k) < ∞.

We have a similar finite-moment bound for subsequent calls to the while loop. Write n_i(V_1, ..., V_i) for the number of iterations in the i-th execution of the while loop, where V_1, V_2, ... are the iid standard uniform random variables generated in subsequent calls to Step 2. Compared to the above argument for i = 1, this quantity only depends on V_1, ..., V_{i−1} through a random shift of Γ. Because Γ is eventually nonincreasing, there exists a constant c′ such that, for all i ≥ 1,

(7.5)    E[n_i(V_1, ..., V_i)^β | V_1, ..., V_{i−1}] ≤ c′ Σ_k k^{β−1} Γ(k).

To prove a bound on the moment of sup T, we first let Υ be the number of times we execute the while loop. We then note that, for any positive integer-valued random variable G and any β ≥ 1, by Jensen's inequality,

    max(0, sup T)^β = ( Σ_{i=1}^{Υ−1} n_i(V_1, ..., V_i) )^β = ( Σ_{i=1}^∞ n_i(V_1, ..., V_i) I(Υ > i) )^β ≤ Σ_{i=1}^∞ n_i(V_1, ..., V_i)^β I(Υ > i) P(G = i)^{1−β},

provided the right-hand side is finite almost surely. Because the event {Υ > i − 1} only depends on V_1, ..., V_{i−1}, we have by (7.5),

    E[n_i(V_1, ..., V_i)^β I(Υ > i)] ≤ E[n_i(V_1, ..., V_i)^β I(Υ > i − 1)] = E[ I(Υ > i − 1) E[n_i(V_1, ..., V_i)^β | V_1, ..., V_{i−1}] ]
     ≤ c′ (Σ_k k^{β−1} Γ(k)) P(Υ > i − 1) ≤ c′ (Σ_k k^{β−1} Γ(k)) P(T < ∞)^{i−1},

where we use the fact that Υ is stochastically dominated by a geometric random variable with success parameter P(T = ∞) > 0. Combining the preceding displays, we deduce that, for β ≥ 1,

    E[max(0, sup T)^β] ≤ c′ (Σ_k k^{β−1} Γ(k)) Σ_{i=1}^∞ P(T < ∞)^{i−1} P(G = i)^{1−β},

which is seen to be finite for some β > 1 under Assumption C4) by choosing G geometric with a suitably chosen success probability. □
We next write, for k ≥ X n ( t ) = X n ( t ; k ) + (cid:88) m>k λ m Z m,n Λ m ( t ) . and it is our objective to study the truncation error, i.e., the second term.The next proposition controls the truncation error in terms of functions ξ and ξ defined for r ≥ ξ ( r ) = λ (cid:48) (1 − − α ) − − α (cid:98) log ( r ) (cid:99) ,ξ ( r ) = ρ log ( e ) (cid:18) (cid:98) log ( r ) (cid:99) + 2 − α − − α + 2 (cid:19) ξ ( r ) . Note that ξ ( r ) , ξ ( r ) → r → ∞ . We also write N X = max { sup T , θ − } . If T is empty then sup T = −∞ and therefore N X = θ −
1; otherwise, if T is non-empty, thensup T ≥ θ and therefore N X ≥ θ . Proposition 7.2.
For all k ≥ N_X and n ≥ 1, we have (7.1); and for all n ≥ N_X, we have (7.2).

Proof. We observe that

    |X_n(t) − X_n(t; k)| ≤ Σ_{m>k} λ_m a(m) |Λ_m(t)| + a(n) Σ_{m>k} λ_m |Λ_m(t)|.

Indeed, if m > k ≥ N_X, then, because θ(m, n) ≥ m, we have from the definition of N_X that

    |λ_m Z_{m,n} Λ_m(t)| ≤ λ_m (a(m) + a(n)) |Λ_m(t)|.

We bound the second sum by noting that, for r ≥ 1, at each level j at most one of the wavelets Λ_{2^j}, ..., Λ_{2^{j+1}−1} is nonzero at any given t, so that

    sup_{t∈T} Σ_{m=r}^∞ λ_m |Λ_m(t)| ≤ sup_{t∈T} Σ_{j=⌊log₂(r)⌋}^∞ Σ_{k=0}^{2^j−1} λ′ 2^{−αj} |Λ_{2^j+k}(t)| ≤ Σ_{j=⌊log₂(r)⌋}^∞ λ′ 2^{−αj} = ξ_2(r).

We similarly bound the first sum, using the definition of a(·) and the fact that

    Σ_{j=k}^∞ j s^j = s^k ( k/(1 − s) + s/(1 − s)² ) for |s| < 1.

These bounds establish (7.1). Now we turn to the proof of (7.2). For n ≥ N_X,

    |X_n(t)| ≤ Σ_{m=0}^∞ |λ_m Z_{m,n} Λ_m(t)| ≤ (a(0) + a(n)) λ_0 + Σ_{m=1}^∞ λ_m (a(m) + a(n)) |Λ_m(t)|,

because θ(m, n) ≥ n ≥ N_X for each m when n ≥ N_X. The sum over m ≥ 1 is bounded by ξ_1(1) + ξ_2(1) a(n), as shown in the proof of (7.1). □
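A sketch of the truncation-error bookkeeping used in Steps 9 and 11 of Algorithm TES below: xi2 implements the geometric bound displayed above exactly, while the constants inside xi1 follow our reconstruction of the display and should be double-checked against the derivation:

```python
import numpy as np

def xi2(r, alpha=0.5, lam_p=1.0):
    """xi_2(r) = lam' * (1 - 2^{-alpha})^{-1} * 2^{-alpha*floor(log2 r)}."""
    return lam_p * 2.0**(-alpha * np.floor(np.log2(r))) / (1.0 - 2.0**(-alpha))

def xi1(r, alpha=0.5, lam_p=1.0, rho=3.0):
    """xi_1(r) as displayed above (reconstructed constants)."""
    j = np.floor(np.log2(r))
    return rho * np.log2(np.e) * (j + 2.0**(-alpha) / (1.0 - 2.0**(-alpha)) + 2.0) * xi2(r, alpha, lam_p)

def K_delta(delta, N, N_X=1, rho=3.0, b=0.5, alpha=0.5, lam_p=1.0):
    """Step 11 of Algorithm TES: smallest K >= N_X with xi1(K) + xi2(K)*a(N) <= delta."""
    a_N = rho * np.log(N + 1.0)**b
    K = max(N_X, 1)
    while xi1(K, alpha, lam_p, rho) + xi2(K, alpha, lam_p) * a_N > delta:
        K += 1
    return K
```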
7.6. Construction of M_δ. Now we are ready to provide the final algorithm for computing M_δ.

Algorithm TES: Samples M_δ given δ > 0.
Step 1: T ← SampleRecordsZ(θ_0)
Step 2: N_X ← max{sup T, θ_0 − 1}
Step 3: Sample Z_{m,n} from the nominal distribution if θ(m, n) ≤ θ_0
Step 4: For 0 ≤ m ≤ N_X with θ(m, 1) > θ_0
Step 5: If θ(m, 1) ∈ T: sample Z_{m,1} from the law of Z given {|Z| > a(m) + a(1)}
Step 6: Else: sample Z_{m,1} from the law of Z given {|Z| ≤ a(m) + a(1)}
Step 7: EndFor
Step 8: Sample A_1, ..., A_{N_A} using Steps 1–9 from Algorithm S with S_n = γn − A_n
Step 9: Compute N_ξ, the smallest n for which (7.3) holds, and let N ← max(N_X, N_A, N_ξ)
Step 10: Sample A_{N_A+1}, ..., A_N as in Steps 10–12 from Algorithm S with S_n = γn − A_n
Step 11: Compute the smallest K_δ ≥ N_X such that ξ_1(K_δ) + ξ_2(K_δ) a(N) ≤ δ
Step 12: For 2 ≤ n ≤ N, 0 ≤ m ≤ K_δ with θ(m, n) > θ_0, and also for n = 1, N_X < m ≤ K_δ with θ(m, n) > θ_0
Step 13: If θ(m, n) ∈ T: sample Z_{m,n} from the law of Z given {|Z| > a(m) + a(n)}
Step 14: Else: sample Z_{m,n} from the law of Z given {|Z| ≤ a(m) + a(n)}
Step 15: EndFor
Step 16: Return M_δ(t) = max_{1≤n≤N} {−log(A_n) + X_n(t; K_δ) + μ(t)}

7.7. Exponential moments of sup_{t∈[0,1]} |X(t)|. We need a bound on the exponential moments of sup_{t∈[0,1]} |X(t)| in order to analyze N_ξ. If X is Gaussian and continuous, then such a bound immediately follows from Borell's inequality [1, Thm. 2.1.1]. The following proposition establishes the existence of exponential moments in the generality of the present section.

Proposition 7.3.
7.7. Exponential moments of $\sup_{t \in [0,1]} |X(t)|$. We need a bound on the exponential moments of $\sup_{t \in [0,1]} |X(t)|$ in order to analyze $N_\xi$. If $X$ is Gaussian and continuous, then such a bound immediately follows from Borell's inequality [1, Thm. 2.1.1]. The following proposition establishes the existence of exponential moments in the generality of the present section.

Proposition 7.3. For any $p > 0$, we have
$$E \exp\Big( p \sup_{t \in [0,1]} |X(t)| \Big) < \infty.$$

Proof.
We first note that
$$\sup_{t \in [0,1]} |X_n(t)| \le \lambda_0 |Z_{0,n}| + \sum_{j=1}^{\infty} \lambda' 2^{-\alpha j} \max_{k=0,\ldots,2^j-1} |Z_{2^j+k,n}|.$$
It suffices to prove that the tail of the infinite sum in this expression is ultimately lighter than any exponential. A union bound leads to, for $y \ge 0$,
$$P\Big( \sum_{j=1}^{\infty} \lambda' 2^{-\alpha j} \max_{k=0,\ldots,2^j-1} |Z_{2^j+k,n}| > y \Big) \le \sum_{j=1}^{\infty} P\Big( \lambda' 2^{-\alpha j} \max_{k=0,\ldots,2^j-1} |Z_{2^j+k,n}| > (2^{\alpha/2} - 1)\,2^{-\alpha j/2}\, y \Big)$$
$$\le \sum_{j=1}^{\infty} P\Big( \max_{k=0,\ldots,2^j-1} |Z_{2^j+k,n}| > \frac{(2^{\alpha/2} - 1)\,2^{\alpha j/2}}{\lambda'}\, y \Big).$$
Assumptions C1) and C2) imply that $C' := E \exp\big( |Z/\rho|^{1/b} \big) < \infty$, and therefore, by Markov's inequality, for $t \ge 0$,
$$P\Big( \max_{k=0,\ldots,2^j-1} |Z_{2^j+k,n}| > 2^{\alpha j/2} t \Big) \le 2^j\, P\big( |Z| > 2^{\alpha j/2} t \big) \le C'\, 2^j\, e^{-(t\, 2^{\alpha j/2}/\rho)^{1/b}}.$$
Select some $t_0 > 0$ and $\kappa \in (1, 1/b)$ such that $(t\, 2^{\alpha j/2}/\rho)^{1/b} \ge j + t^{\kappa}$ for all $j \ge 1$ and $t \ge t_0$. Using this bound results in a tail estimate that is summable over $j$ and lighter than any exponential distribution. $\Box$
7.8. Complexity analysis. We conclude this section with the following result, which summarizes the performance guarantee of Algorithm TES. Higher-moment bounds on the computational cost are readily obtained using the same arguments and a stronger version of Assumption C4).
Theorem 7.4.
Assume that the conditions
A1), A2) and C1)–C4) are in force. Given $\delta \in (0,1)$, the output $(M_\delta(t))_{t \in T}$ of Algorithm TES satisfies
$$\sup_{t \in T} |M_\delta(t) - M(t)| \le \delta.$$
Moreover, we have
$$E[K_\delta] = O\big( (\delta/\log(1/\delta))^{-1/\alpha} \big),$$
where $\alpha$ is determined by the series representation of $X$. Finally, the total computational cost of running Algorithm TES has expectation at most $O\big( (\delta/\log(1/\delta))^{-1/\alpha} \big)$.

Proof. The first claim follows by construction, see Section 7.2. From Proposition 7.1 we have $E[N_X^\beta] < \infty$ for some $\beta > 1$.
In order to analyze $N_\xi$, we use Proposition 7.3. In fact, $N_\xi$ only has to be sufficiently large so that we have
$$\big( \lambda_0 + \xi_2(1) + \xi_1(N_X) \big)\,\rho\,(\log(n+1))^{b} < \tfrac{1}{2} \log n$$
and
$$-\tfrac{1}{2} \log n \le \inf_{t} X_1(t; N_X) - \log A_1 - a(0)\,\lambda_0 - \xi_2(1) - \xi_1(N_X) + \log \gamma$$
for any $n \ge N_\xi$. With simple calculations, it follows from Proposition 7.3 and Assumption A1) that $E[N_\xi^p] < \infty$ for every $p > 0$. We have argued in Section 6 that $E[N_A^p] < \infty$, so we conclude that $E[N^\beta] < \infty$. Finally, using the definitions of $\xi_1(r)$ and $\xi_2(r)$, we see that there is a constant $\kappa > 0$ such that
$$K_\delta = O\Bigg( \bigg[ \frac{\delta}{(\log N)^b + \kappa \log(1/\delta)} \bigg]^{-1/\alpha} \Bigg).$$
This leads to the bound on the first moment of $K_\delta$. The expected running time of the algorithm is of order $E[K_\delta \times N]$, which is finite because $E[N^\beta] < \infty$. The complexity bound follows. $\Box$
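Under the same assumed form of $a(\cdot)$, the rate in Theorem 7.4 can be checked numerically: for fixed $N$, computing $K_\delta$ (via the `K_delta` sketch above) across a range of tolerances and regressing $\log K_\delta$ on $\log(\delta/\log(1/\delta))$ should give a slope close to $-1/\alpha$, up to dyadic rounding. All parameter values below are purely illustrative.

```python
import numpy as np

lam_p, alpha, rho, b = 1.0, 0.5, 1.0, 0.5   # illustrative only
N, N_X = 100, 1
deltas = np.array([10.0 ** -e for e in range(1, 7)])
Ks = np.array([K_delta(d, N, N_X, lam_p, alpha, rho, b) for d in deltas])
x = np.log(deltas / np.log(1.0 / deltas))
slope = np.polyfit(x, np.log(Ks), 1)[0]
print(slope)  # roughly -1/alpha = -2 for alpha = 1/2
```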
8. Numerical Results

In this section we present simulation results that empirically validate
Algorithm M. We also compare numerically the computational cost of our record-breaking method (denoted RB in the tables and figures below) with the existing exact sampling algorithm of Dieker and Mikosch [17] (DM) and with the exact simulation algorithm based on extremal functions proposed in [18] (EF). We implemented all three algorithms in Matlab. For our algorithm, we chose the values of $a$ and $C$ according to the discussion in Section 6.2: we let $C = 0$ and then chose the largest $a \in (0,$
$1)$ such that (6.2) holds. We generated Brown-Resnick processes,
$$M(t) = \sup_{n \ge 1} \{ -\log A_n + X_n(t) - \sigma^2(t)/2 \},$$
on compact sets. If $X$ is a Brownian motion, it was shown in [15] that $M$ has stationary sample paths on $[0, \infty)$. Figure 1 shows a sample path of $M$ in this case. Figure 2 presents two samples of the Brown-Resnick random field on $[0,1]^2$ when $X$ is a Brownian sheet.
Figure 1.
The Brown-Resnick process on $[0,1]$ with Brownian motion input. The grid mesh is 0.001.
Figure 2.
The Brown-Resnick field on $[0,1]^2$ with Brownian sheet input. The grid mesh is 0.001.

We validated our implementation by checking the distribution of $\max(M(0.5), M(1))$, the maximum, at the two locations $t = 0.5$ and $t = 1$, of the Brown-Resnick process generated by our algorithm with standard Brownian motion input. According to the bivariate Hüsler-Reiss distribution of this process, $\max(M(0.5), M(1)) - \log(2\Phi(\sqrt{0.5}/2))$ follows a standard Gumbel distribution; Figure 3 shows the corresponding QQ-plot.
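The same check can be scripted for any implementation of the sampler. Below is a minimal Python sketch, where `samples` stands for draws of $\max(M(0.5), M(1))$ produced by the reader's own implementation (the paper's implementation is in Matlab and is not reproduced here); the Gumbel quantiles and the Hüsler-Reiss shift $\log(2\Phi(\sqrt{0.5}/2))$ are computed exactly as in the text.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def gumbel_qq(samples):
    """QQ-plot of shifted samples against the standard Gumbel distribution."""
    shift = np.log(2.0 * norm.cdf(np.sqrt(0.5) / 2.0))  # Husler-Reiss constant
    y = np.sort(np.asarray(samples)) - shift
    p = (np.arange(1, len(y) + 1) - 0.5) / len(y)       # plotting positions
    x = -np.log(-np.log(p))                             # standard Gumbel quantiles
    plt.plot(x, y, ".", markersize=3)
    plt.plot(x, x, "--", linewidth=1)                   # 45-degree reference line
    plt.xlabel("X Quantiles")
    plt.ylabel("Y Quantiles")
    plt.show()
```

If the implementation is correct, the points should scatter tightly around the 45-degree reference line, as in Figure 3.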
Figure 3. The QQ-plot of $\max(M(0.5), M(1)) - \log(2\Phi(\sqrt{0.5}/2))$ against the standard Gumbel distribution.

We next compare running times for sampling $M$ with fractional Brownian motion input. We recorded the average CPU time needed to generate a single sample, together with the 95% confidence interval for the mean based on 200 samples, for several grid sizes starting from $d = 1000$ and for Hurst parameters $H \in \{1/4, 1/2, 3/4\}$ of the fractional Brownian motion. The sample estimates and the 95% confidence intervals for the mean CPU time per sample are shown in Table 1. They illustrate that, as the number of grid points increases, the computational cost of our algorithm grows almost linearly, whereas the cost of the algorithm proposed in [17] grows quadratically. Because we use the circulant embedding method to generate the fractional Brownian vectors, at a cost of $O(d \log d)$ per vector, this almost-linear growth is consistent with expectations. It is worth noting that, for this method, the computational cost of generating a $d$-dimensional Gaussian vector is the same as that of generating a $2^{\lceil \log_2 d \rceil}$-dimensional one; however, this does not affect our comparison, because we used the method in both algorithms.
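The circulant embedding (Davies-Harte) step itself can be sketched as follows. This is the standard exact $O(d \log d)$ construction for fractional Gaussian noise, offered as an illustrative Python sketch rather than as the authors' Matlab code.

```python
import numpy as np

def fgn_circulant(n, H, rng=None):
    """Exact sample of n unit-spacing fractional Gaussian noise increments
    via circulant embedding (Davies-Harte); cost O(n log n)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n + 1)
    # Autocovariance of fractional Gaussian noise
    gamma = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
                   + np.abs(k - 1.0) ** (2 * H))
    row = np.concatenate([gamma, gamma[-2:0:-1]])  # circulant first row, length 2n
    lam = np.fft.fft(row).real
    if lam.min() < -1e-10:
        raise ValueError("circulant embedding is not nonnegative definite")
    lam = np.clip(lam, 0.0, None)
    m = 2 * n
    w = np.zeros(m, dtype=complex)
    w[0] = np.sqrt(lam[0]) * rng.standard_normal()
    w[n] = np.sqrt(lam[n]) * rng.standard_normal()
    half = np.arange(1, n)
    u, v = rng.standard_normal(n - 1), rng.standard_normal(n - 1)
    w[half] = np.sqrt(lam[half] / 2.0) * (u + 1j * v)
    w[m - half] = np.conj(w[half])                 # Hermitian symmetry => real output
    return np.fft.fft(w).real[:n] / np.sqrt(m)

def fbm(n, H, T=1.0, rng=None):
    """Fractional Brownian motion on the grid i*T/n, i = 1, ..., n."""
    return np.cumsum(fgn_circulant(n, H, rng)) * (T / n) ** H
```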
Table 1. Comparison of running times of our algorithm (RB) vs. [17] (DM) vs. [18] (EF): average cost per sample in seconds ($\pm$ half-width of the 95% confidence interval), for several grid sizes $d$ and Hurst parameters $H \in \{1/4, 1/2, 3/4\}$.

Next we compare the number of Gaussian vectors generated by our algorithm with the numbers generated by the algorithms of [17] and [18]. We generated samples of the Brown-Resnick process with fractional Brownian motion input and $H = 3/4$, using grid sizes starting from $d = 1000$; the counts are reported in Table 2 and plotted in Figure 4.

Table 2. Comparison of the number of Gaussian vectors generated by our algorithm (RB) vs. [17] (DM) vs. [18] (EF), $H = 3/4$.
Figure 4.
Comparison of the number of Gaussian vectors generated by our algorithm (RB) vs. [17] (DM) vs. [18] (EF), $H = 3/4$.

References

[1] Adler, R. J. and Taylor, J. E. (2007). Random Fields and Geometry. Springer, New York.
[2] Asmussen, S. (2003). Applied Probability and Queues, 2nd ed. Springer, New York.
[3] Asmussen, S. and Glynn, P. (2007). Stochastic Simulation: Algorithms and Analysis. Springer, New York.
[4] Gut, A. (2009). Stopped Random Walks. Springer, New York.
[5] Asmussen, S., Glynn, P. and Pitman, J. (1995). Discretization error in simulation of one-dimensional reflecting Brownian motion. The Annals of Applied Probability, 5(4), 875–896.
[6] Ayache, A. and Taqqu, M. S. (2003). Rate optimality of wavelet series approximations of fractional Brownian motion. The Journal of Fourier Analysis and Applications, 9, 451–471.
[7] Blanchet, J. and Chen, X. (2015). Steady-state simulation of reflected Brownian motion and related stochastic networks. The Annals of Applied Probability, 25(6), 3209–3250.
[8] Blanchet, J. and Chen, X. (2016). Perfect sampling of generalized Jackson networks. arXiv:1601.05499 [math]. http://arxiv.org/abs/1601.05499
[9] Blanchet, J., Chen, X. and Dong, J. (2014). ε-Strong simulation for multidimensional stochastic differential equations via rough path analysis. To appear in Annals of Applied Probability. http://arxiv.org/abs/1403.5722
[10] Blanchet, J. and Dong, J. (2012). Sampling point processes on stable unbounded regions and exact simulation of queues. In Proceedings of the 2012 Winter Simulation Conference (WSC) (pp. 1–12). IEEE.
[11] Blanchet, J. and Dong, J. (2015). Perfect sampling for infinite server and loss systems. Advances in Applied Probability, 47(3), 761–786.
[12] Blanchet, J., Dong, J. and Pei, Y. (2015). Perfect sampling of GI/GI/c queues. Submitted to Queueing Systems: Theory and Applications.
[13] Blanchet, J. and Wallwater, A. (2015). Exact sampling of stationary and time reversed queues. ACM Transactions on Modeling and Computer Simulation (TOMACS), 25(4), Article 26.
[14] Blanchet, J. and Sigman, K. (2011). On exact sampling of stochastic perpetuities. Journal of Applied Probability, 48A, 165–182.
[15] Brown, B. M. and Resnick, S. I. (1977). Extreme values of independent stochastic processes. Journal of Applied Probability, 14(4), 732–739.
[16] Buishand, T. A., de Haan, L. and Zhou, C. (2008). On spatial extremes: With application to a rainfall problem. The Annals of Applied Statistics, 2(2), 624–642.
[17] Dieker, A. B. and Mikosch, T. (2015). Exact simulation of Brown-Resnick random fields at a finite number of locations. Extremes, 18(2), 301–314.
[18] Dombry, C., Engelke, S. and Oesting, M. (2015). Exact simulation of max-stable processes. arXiv:1506.04430.
[19] Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer, New York.
[20] Haan, L. de and Zhou, C. (2008). On extreme value analysis of a spatial process. RevStat - Statistical Journal, 6(1), 71–81.
[21] Haan, L. de (1984). A spectral representation for max-stable processes. The Annals of Probability, 12(4), 1194–1204.
[22] Hoffman, Y. and Ribak, E. (1991). Constrained realizations of Gaussian fields - a simple algorithm. The Astrophysical Journal, 380, L5–L8.
[23] Kabluchko, Z., Schlather, M. and Haan, L. de (2009). Stationary max-stable fields associated to negative definite functions. The Annals of Probability, 37(5), 2042–2065.
[24] Kenealy, B. (2013, August 11). New York's MTA buys $200 million cat bond to avoid storm surge losses. Business Insurance.
[25] Bernoulli, 8(5), 669–696.
[26] Oesting, M., Schlather, M. and Zhou, C. (2017). Exact and fast simulation of max-stable processes on a compact set using the normalized spectral representation. To appear in Bernoulli.
[27] Pollock, M., Johansen, A. M. and Roberts, G. O. (2016). On the exact and ε-strong simulation of (jump) diffusions. Bernoulli, 22(2), 794–856.
[28] Schilling, R. L. and Partzsch, L. (2012). Brownian Motion. An Introduction to Stochastic Processes. De Gruyter, Berlin/Boston.
[29] Schlather, M. (2002). Models for stationary max-stable random fields. Extremes, 5(1), 33–44.
[30] Smith, R. L. (1990). Max-stable processes and spatial extremes. Unpublished manuscript, University of Surrey.
[31] Steele, J. M. (2001). Stochastic Calculus and Financial Applications. Springer, New York.
[32] Thibaud, E., Aalto, J., Cooley, D. S., Davison, A. C. and Heikkinen, J. (2015). Bayesian inference for the Brown-Resnick process, with an application to extreme low temperatures. arXiv:1506.07836.
[33] Vaart, A. van der and Wellner, J. A. (1996). Weak Convergence and Empirical Processes. With Applications to Statistics. Springer, New York.
Industrial Engineering and Operations Research Department, School of Engineering and Applied Sciences, Columbia University, 500 West 120th Street, New York, NY 10027, U.S.A.
E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]

Department of Mathematics, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen, Denmark
E-mail address: