Quasi-Monte Carlo Software
Sou-Cheng T. Choi, Fred J. Hickernell, R. Jagadeeswaran, Michael J. McCourt, Aleksei G. Sorokin
Abstract
Practitioners wishing to experience the efficiency gains from using low discrepancy sequences need correct, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required to approximate multivariate integrals or expectations of functions of vector random variables by QMC. We have combined these components in QMCPy, a Python open source library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
Sou-Cheng T. Choi
Department of Applied Mathematics, Illinois Institute of Technology, RE 220, 10 W. 32nd St., Chicago, IL 60616; and Kamakura Corporation, 2222 Kalakaua Ave, Suite 1400, Honolulu, HI 96815, e-mail: [email protected]

Fred J. Hickernell
Center for Interdisciplinary Scientific Computation and Department of Applied Mathematics, Illinois Institute of Technology, RE 220, 10 W. 32nd St., Chicago, IL 60616, e-mail: [email protected]

R. Jagadeeswaran
Department of Applied Mathematics, Illinois Institute of Technology, RE 220, 10 W. 32nd St., Chicago, IL 60616, e-mail: [email protected]; and Wi-Tronix LLC, 631 E Boughton Rd, Suite 240, Bolingbrook, IL 60440

Michael J. McCourt
SigOpt, an Intel company, 100 Bush St., Suite 1100, San Francisco, CA 94104, e-mail: [email protected]

Aleksei G. Sorokin
Department of Applied Mathematics, Illinois Institute of Technology, RE 220, 10 W. 32nd St., Chicago, IL 60616, e-mail: [email protected]
Quasi-Monte Carlo (QMC) methods promise great efficiency gains over independent and identically distributed (IID) Monte Carlo (MC) methods. In some cases, QMC achieves one hundredth of the error of IID MC in the same amount of time (see Figure 6). Often, these efficiency gains are obtained simply by replacing IID sampling with low discrepancy (LD) sampling, which is the heart of QMC.

As a practitioner, you might wish to test whether QMC would speed up your computation. You would like to easily access the best QMC algorithms available. As a theoretician or an algorithm developer, you might want to demonstrate your best ideas on various use cases to show their practical value. This tutorial points to some of the best QMC software available. Then we describe QMCPy [3], which is crafted to be a community-owned Python library that combines the best QMC algorithms and interesting use cases from various authors under a common user interface.

The model problem for QMC is approximating a multivariate integral,

$$\mu := \int_{\mathcal{T}} g(\mathbf{t}) \, \lambda(\mathbf{t}) \, \mathrm{d}\mathbf{t}, \tag{1}$$

where $g$ is the integrand and $\lambda$ is a non-negative weight. If $\lambda$ is a probability density function (PDF) for the random variable $\mathbf{T}$, then $\mu$ is the mean of $g(\mathbf{T})$. Regardless, we perform a suitable variable transformation to interpret this integral as the mean of a function of a multivariate, standard uniform random variable:

$$\mu = \mathbb{E}[f(\mathbf{X})] = \int_{[0,1)^d} f(\mathbf{x}) \, \mathrm{d}\mathbf{x}, \qquad \mathbf{X} \sim \mathcal{U}[0,1)^d. \tag{2}$$

QMC approximates the population mean, $\mu$, by a sample mean,

$$\hat{\mu} := \frac{1}{n} \sum_{i=0}^{n-1} f(\mathbf{X}_i), \qquad \mathbf{X}_0, \mathbf{X}_1, \ldots \overset{M}{\sim} \mathcal{U}[0,1)^d. \tag{3}$$

The choice of this sequence and the choice of $n$ to satisfy the prescribed error requirement,

$$|\mu - \hat{\mu}| \le \varepsilon \quad \text{absolutely or with high probability}, \tag{4}$$

are important decisions, which QMC software helps the user make. Here, the notation $\overset{M}{\sim}$ means that the sequence mimics the specified target distribution, but not necessarily in a probabilistic way. We use this notation in two forms: $\overset{\text{IID}}{\sim}$ and $\overset{\text{LD}}{\sim}$.

IID sequences must be random.
Page: 2 job: QMCSoftwareArticleNoRho macro: svmult.cls date/time: 17-Feb-2021/1:58

Fig. 1 IID points (left) contrasted with LD points (right). The LD points cover the square more evenly.

The position of any point is not influenced by any other, so clusters and gaps occur. A randomly chosen subsequence of an IID sequence is also IID. When we say that
$\mathbf{X}_0, \mathbf{X}_1, \ldots \overset{\text{IID}}{\sim} F$ for some distribution $F$, we mean that for any positive integer $n$, the multivariate probability distribution of $\mathbf{X}_0, \ldots, \mathbf{X}_{n-1}$ is the product of the marginals, specifically, $F_n(\mathbf{x}_0, \ldots, \mathbf{x}_{n-1}) = F(\mathbf{x}_0) \cdots F(\mathbf{x}_{n-1})$. When IID points are used to approximate $\mu$ by the sample mean, the root mean squared error is $O(n^{-1/2})$. Figure 1 displays IID uniform points, $\mathbf{X}_0^{\text{IID}}, \mathbf{X}_1^{\text{IID}}, \ldots \overset{\text{IID}}{\sim} \mathcal{U}[0,1)^2$, i.e., the target distribution is $F_{\text{unif}}(\mathbf{x}) = x_1 x_2$.

LD sequences may be deterministic or random, but each point is carefully coordinated with the others so that they fill the domain well. Subsequences of LD sequences are generally not LD. When we say that $\mathbf{X}_0, \mathbf{X}_1, \ldots \overset{\text{LD}}{\sim} \mathcal{U}[0,1)^d$, we mean that for any positive integer $n$, the empirical distribution of $\mathbf{X}_0, \ldots, \mathbf{X}_{n-1}$, denoted $F_{\{\mathbf{X}_i\}_{i=0}^{n-1}}$, approximates the uniform distribution, $F_{\text{unif}}$, well (relative to $n$). (The empirical distribution of a set assigns equal probability to each point.) The measure of the difference between the empirical distribution of a set of points and the uniform distribution is called a discrepancy and is denoted $D(\{\mathbf{X}_i\}_{i=0}^{n-1})$ [7, 12, 13, 30]. This is the origin of the term "low discrepancy" points or sequences. LD points by definition have a smaller discrepancy than IID points. Figure 1 contrasts IID uniform points with LD points, $\mathbf{X}_0^{\text{LD}}, \mathbf{X}_1^{\text{LD}}, \ldots \overset{\text{LD}}{\sim} \mathcal{U}[0,1)^2$, in this case linearly scrambled Sobol' points.

The error in using the sample mean to approximate the integral can be bounded according to the Koksma-Hlawka inequality and its extensions [7, 12, 13, 30] as the product of the discrepancy of the sampling sequence and the variation of the integrand, denoted $V(\cdot)$:
$$|\mu - \hat{\mu}| = \left| \int_{[0,1)^d} f(\mathbf{x}) \, \mathrm{d}\big(F_{\text{unif}} - F_{\{\mathbf{X}_i\}_{i=0}^{n-1}}\big)(\mathbf{x}) \right| \le D\big(\{\mathbf{X}_i\}_{i=0}^{n-1}\big) \, V(f). \tag{5}$$

The variation is a (semi-)norm of the integrand in a suitable Banach space. The discrepancy corresponds to the norm of the error functional for that Banach space. For typical Banach spaces, the discrepancy of LD points is $O(n^{-1+\epsilon})$, which is a higher convergence order than for IID points. For details, the reader is referred to the references. Here, we expect the reader to see in Figure 1 that the LD points cover the integration domain more evenly than IID points. LD sampling can be thought of as highly stratified sampling. In the examples below the reader will see the demonstrably smaller cubature errors arising from using LD points.

In the sections that follow, we first overview available QMC software. We next describe an architecture for good QMC software, i.e., what the key components are and how they should interact. We then describe how we have implemented this architecture in QMCPy. Finally, we summarize further directions that we hope QMCPy and other QMC software projects will take. Those interested in following or contributing to the development of QMCPy are urged to visit the GitHub repository at https://github.com/QMCSoftware/QMCSoftware.

We have endeavored to be as accurate as possible at the time of writing this article. We hope that progress in QMC software development will make this article somewhat obsolete in a good way in the next several years.

QMC software spans LD sequence generators, cubatures, and applications. We review the better known software collections, recognizing that some software overlaps multiple categories. Software focusing on generating high quality LD sequences and their generators includes
- BRODA: Sobol' sequences in C, MATLAB, and Excel [23],
- Burkhardt: various QMC software in C++, Fortran, MATLAB, and Python [1],
- LatNet Builder: generating vectors/matrices for lattices and digital nets [26, 27],
- MATLAB: Sobol' and Halton sequences, commercial [38],
- MPS: Magic Point Shop, lattices and Sobol' sequences [31],
- Owen: randomized Halton sequences in R [34],
- PyTorch: scrambled Sobol' sequences [35],
- QMC.jl: LD sequences in Julia [36], and
- qrng: Sobol', Halton, and Korobov sequences in R [18].

Software focusing on QMC cubatures and applications includes
- GAIL: automatic (Q)MC stopping criteria in MATLAB [2],
- ML(Q)MC: Multi-Level (Q)MC routines in C++, MATLAB, Python, and R [8],
- OpenTURNS: open source initiative for the Treatment of Uncertainties, Risks 'N Statistics in Python [32],
- QMC4PDE: QMC for elliptic PDEs with random diffusion coefficients [24],
- SSJ: Stochastic Simulation in Java [25], and
- UQLab: framework for Uncertainty Quantification in MATLAB [28].

The sections that follow describe QMCPy [3], which is our attempt to combine some of the best of the above software under a common user interface written in Python 3. The choice of language was determined by the desire to make QMC software accessible to a broad audience, especially the tech industry.
QMC cubature can be summarized as follows. We want to approximate the expectation, $\mu$, well by the sample mean, $\hat{\mu}$, where (1), (2), and (3) combine to give

$$\mu := \int_{\mathcal{T}} g(\mathbf{t}) \, \lambda(\mathbf{t}) \, \mathrm{d}\mathbf{t} = \mathbb{E}[f(\mathbf{X})] = \int_{[0,1)^d} f(\mathbf{x}) \, \mathrm{d}\mathbf{x} \approx \frac{1}{n} \sum_{i=0}^{n-1} f(\mathbf{X}_i) =: \hat{\mu}, \qquad \mathbf{X} \sim \mathcal{U}[0,1)^d, \quad \mathbf{X}_0, \mathbf{X}_1, \ldots \overset{M}{\sim} \mathcal{U}[0,1)^d. \tag{6}$$

Moreover, we want to satisfy the error requirement in (4). This requires four components, which we implement as QMCPy classes:

- DiscreteDistribution produces $\mathbf{X}_0, \mathbf{X}_1, \ldots$ mimicking $\mathcal{U}[0,1)^d$;
- TrueMeasure $\lambda(\mathbf{t})\,\mathrm{d}\mathbf{t}$ defines the original integral, e.g., Gaussian or Lebesgue;
- Integrand: $g$ defines the original integrand, and $f$ defines the transformed version that fits the DiscreteDistribution; and
- StoppingCriterion determines how large $n$ should be to ensure that $|\mu - \hat{\mu}| \le \varepsilon$ as in (4).

The software libraries referenced in Section 2 provide one or more of these components. QMCPy combines multiple examples of all these components under an object oriented framework. Each example is implemented as a concrete class that realizes the properties and methods required by the abstract class for that component. The following sections detail descriptions and specific examples of each component.

Thorough documentation of all QMCPy classes is available in [4]. Demonstrations of how QMCPy works are given in Google Colab notebooks [10, 11].
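The interplay of the four components can be sketched as a schematic class hierarchy. This is an illustration only, not QMCPy's actual class definitions (the real abstract classes have richer interfaces; see the documentation [4]):

```python
# Schematic of the four-component architecture described above.
# Illustrative only -- method names here are a simplification of QMCPy's API.
from abc import ABC, abstractmethod

class DiscreteDistribution(ABC):
    @abstractmethod
    def gen_samples(self, n):
        """Return n points mimicking U[0,1)^d."""

class TrueMeasure(ABC):
    @abstractmethod
    def transform(self, x):
        """Map U[0,1)^d points to the original domain T via Psi."""

class Integrand(ABC):
    @abstractmethod
    def f(self, x):
        """Transformed integrand combining g, lambda, and the Jacobian."""

class StoppingCriterion(ABC):
    @abstractmethod
    def integrate(self):
        """Increase n until |mu - mu_hat| <= epsilon, then return mu_hat."""
```

A concrete class (e.g., a lattice generator or a Sobol'-based stopping criterion) realizes one of these abstract components and can be swapped for another realization without changing the rest of the computation.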
The project may be installed from PyPI into your Python 3 environment via the command pip install qmcpy. In the code snippets that follow, we assume QMCPy has been imported alongside NumPy [9] via the following commands in a Python console:

>>> import qmcpy as qp
>>> import numpy as np
LD sequences typically mimic $\mathcal{U}[0,1)^d$. Good sequences mimicking other distributions are obtained by transformations as described in the next section. QMCPy implements extensible LD sequences, i.e., those that allow practitioners to obtain and use $\mathbf{X}_n, \mathbf{X}_{n+1}, \ldots$ without discarding $\mathbf{X}_0, \ldots, \mathbf{X}_{n-1}$. Halton sequences do not have preferred sample sizes $n$, but integration lattices and digital sequences prefer $n$ to be a power of 2. These latter two also have an elegant group structure, which we summarize in Table 1. The addition operator is $\oplus$. The unshifted sequence is $\mathbf{Z}_0, \mathbf{Z}_1, \ldots$ and the randomly shifted sequence is $\mathbf{X}_0, \mathbf{X}_1, \ldots$

Table 1
Properties of lattice and digital net sequences. Note that they share group properties but also have distinctives.

Define ...
- $\mathbf{Z}_1, \mathbf{Z}_2, \mathbf{Z}_4, \ldots \in [0,1)^d$ chosen well;
- $\mathbf{Z}_i := i_0 \mathbf{Z}_1 \oplus i_1 \mathbf{Z}_2 \oplus i_2 \mathbf{Z}_4 \oplus \cdots$ for $i = i_0 + i_1 2 + i_2 4 + \cdots$, $i_\ell \in \{0,1\}$;
- $\mathbf{X}_i := \mathbf{Z}_i \oplus \boldsymbol{\Delta}$, where $\boldsymbol{\Delta} \overset{\text{IID}}{\sim} \mathcal{U}[0,1)^d$.

Rank-1 Integration Lattices: $\mathbf{t} \oplus \mathbf{x} := (\mathbf{t} + \mathbf{x}) \bmod \mathbf{1}$; require $\mathbf{Z}_{2^m} \oplus \mathbf{Z}_{2^m} = \mathbf{Z}_{2^{m-1}}$ for all $m \in \mathbb{N}$.
Digital Nets: $\mathbf{t} \oplus \mathbf{x} :=$ binary digitwise addition, denoted $\oplus_{\text{dig}}$.

Then it follows that ...
- $P_m := \{\mathbf{Z}_0, \ldots, \mathbf{Z}_{2^m-1}\}$ satisfies $\mathbf{Z}_i \oplus \mathbf{Z}_j \in P_m$, and
- $P_{\boldsymbol{\Delta},m} := \{\mathbf{X}_0, \ldots, \mathbf{X}_{2^m-1}\}$ satisfies $\mathbf{X}_i \oplus \mathbf{X}_j \ominus \mathbf{X}_k \in P_{\boldsymbol{\Delta},m}$,
for all $i, j, k \in \{0, \ldots, 2^m - 1\}$ and all $m \in \mathbb{N}_0$.

We illustrate lattice and Sobol' sequences using QMCPy. First, we create an instance of a $d = 2$ dimensional Lattice object of the
DiscreteDistribution abstract class. Then we generate the first eight (non-randomized) points in this lattice.

>>> lattice = qp.Lattice(dimension=2, randomize=False)
>>> lattice.gen_samples(n=8)
ParameterWarning: Non-randomized lattice sequence includes the origin.
array([[0.   , 0.   ],
       [0.5  , 0.5  ],
       [0.25 , 0.75 ],
       [0.75 , 0.25 ],
       [0.125, 0.375],
       [0.625, 0.875],
       [0.375, 0.125],
       [0.875, 0.625]])
The first three generators for this lattice are $\mathbf{Z}_1 = (0.5, 0.5)$, $\mathbf{Z}_2 = (0.25, 0.75)$, and $\mathbf{Z}_4 = (0.125, 0.375)$. Note that $(\mathbf{Z}_1 + \mathbf{Z}_2) \bmod \mathbf{1} = (0.75, 0.25) = \mathbf{Z}_3$, as Table 1 specifies.

The random shift has been turned off above to illuminate the group structure. We normally include the randomization to ensure that there are no points on the boundary of $[0,1)^d$. Then, when points are transformed to mimic distributions such as the Gaussian, no LD points will be transformed to infinity. Turning off the randomization generates a warning when the gen_samples method is called.

Now, we generate Sobol' points using a similar process as we did for lattice points. Sobol' sequences are one of the most popular examples of digital sequences.

>>> sobol = qp.Sobol(2, randomize=False)
>>> sobol.gen_samples(8, warn=False)
array([[0.   , 0.   ],
       [0.5  , 0.5  ],
       [0.25 , 0.75 ],
       [0.75 , 0.25 ],
       [0.125, 0.625],
       [0.625, 0.125],
       [0.375, 0.375],
       [0.875, 0.875]])

Here,
$\mathbf{Z}_4$ differs from that for lattices, but more importantly, addition for digital sequences differs from that for lattices. Using digitwise addition for digital sequences, we can confirm that according to Table 1, $\mathbf{Z}_1 \oplus_{\text{dig}} \mathbf{Z}_2 = (0.5, 0.5) \oplus_{\text{dig}} (0.25, 0.75) = (0.1_2, 0.1_2) \oplus_{\text{dig}} (0.01_2, 0.11_2) = (0.11_2, 0.01_2) = (0.75, 0.25) = \mathbf{Z}_3$. By contrast, if we construct a digital sequence using the generators for the lattice above, with $\mathbf{Z}_2 = (0.25, 0.75)$ and $\mathbf{Z}_4 = (0.125, 0.375)$, then $\mathbf{Z}_6 = \mathbf{Z}_2 \oplus_{\text{dig}} \mathbf{Z}_4 = (0.25, 0.75) \oplus_{\text{dig}} (0.125, 0.375) = (0.375, 0.625)$, which differs from the $\mathbf{Z}_6 = (0.375, 0.125)$ of the lattice sequence $\mathbf{Z}_0, \mathbf{Z}_1, \mathbf{Z}_2, \ldots$

The examples of qp.Lattice and qp.Sobol illustrate how QMCPy LD generators share a common user interface. The dimension is specified when the instance is constructed, and the number of points is specified when the gen_samples method is called. Following Python practice, parameters can be input without specifying their names if they are input in the prescribed order. QMCPy also includes Halton sequences and IID sequences, again deferring details to the QMCPy documentation [4].

A crucial difference between IID generators and LD generators is reflected in the behavior of generating $n$ points. For an IID generator, asking for $n$ points repeatedly gives you different points each time because they are meant to be random and independent.

>>> iid = qp.IIDStdUniform(2); iid.gen_samples(1)
array([[0.17496884, 0.56612647]])
>>> iid.gen_samples(1)
array([[0.82314635, 0.26919205]])

Your output may look different depending on the seed used to generate these random numbers. On the other hand, for an LD generator, asking for $n$ points repeatedly gives you the same points each time because they are meant to be the first $n$ points of a specific LD sequence.

>>> lattice = qp.Lattice(2); lattice.gen_samples(1)
array([[0.15700854, 0.41080809]])
>>> lattice.gen_samples(1)
array([[0.15700854, 0.41080809]])

Here we allow the randomization so that the first point in the sequence is not the origin. To obtain the next $n$ points, one may specify the starting and ending indices of the sequence.

>>> lattice.gen_samples(2)
array([[0.15700854, 0.41080809],
       [0.65700854, 0.91080809]])
>>> lattice.gen_samples(n_min=1, n_max=2)
array([[0.65700854, 0.91080809]])

Figure 2 shows how increasing the number of lattice and Sobol' LD points through powers of two fills in the gaps in an even way.
The LD sequences implemented as DiscreteDistribution objects mimic the $\mathcal{U}[0,1)^d$ distribution. However, we may need sequences to mimic other
Fig. 2
Randomized lattice and Sobol' points mimicking a $\mathcal{U}[0,1)^2$ measure for $n = 64$, $128$, and $256$. Note how increasing the number of points evenly fills in the gaps between points.

distributions. This is implemented via variable transformations, $\boldsymbol{\Psi}$. In general, if
$\mathbf{X} \overset{M}{\sim} \mathcal{U}[0,1)^d$, then

$$\mathbf{T} = \boldsymbol{\Psi}(\mathbf{X}) := \mathbf{a} + (\mathbf{b} - \mathbf{a}) \odot \mathbf{X} \overset{M}{\sim} \mathcal{U}[\mathbf{a}, \mathbf{b}], \tag{7a}$$

$$\mathbf{T} = \boldsymbol{\Psi}(\mathbf{X}) := \mathbf{a} + A \boldsymbol{\Phi}^{-1}(\mathbf{X}) \overset{M}{\sim} \mathcal{N}(\mathbf{a}, \Sigma), \tag{7b}$$

where $\boldsymbol{\Phi}^{-1}(\mathbf{X}) := \big(\Phi^{-1}(X_1), \ldots, \Phi^{-1}(X_d)\big)^T$, $\Sigma = A A^T$, and $\odot$ denotes term-by-term (Hadamard) multiplication. Here, $\mathbf{a}$ and $\mathbf{b}$ are assumed to be finite, and $\Phi$ is the standard Gaussian distribution function. Again we use $\overset{M}{\sim}$ to denote mimicry, not necessarily in a probabilistic sense. Figure 3 displays LD sequences transformed as described above to mimic a uniform and a Gaussian distribution. The code to generate these points takes the following form for uniform points based on a Halton sequence:

>>> u = qp.Uniform(
...     sampler = qp.Halton(2),
...     lower_bound = [-2, 0],
...     upper_bound = [2, 4])
Fig. 3
Sobol' samples transformed to mimic a uniform $\mathcal{U}\!\left(\begin{bmatrix}-2\\0\end{bmatrix}, \begin{bmatrix}2\\4\end{bmatrix}\right)$ (left) and Gaussian $\mathcal{N}\!\left(\begin{bmatrix}3\\2\end{bmatrix}, \begin{bmatrix}9 & 5\\5 & 4\end{bmatrix}\right)$ (right) distribution.

>>> u.gen_samples(4)
array([[ 1.15995575,  3.5553664 ],
       [-0.84004425,  0.88869974],
       [ 0.15995575,  2.22203307],
       [-1.84004425,  3.99981085]])

whereas for Gaussian points based on a lattice sequence, we have:

>>> g = qp.Gaussian(qp.Lattice(2),
...     mean = [3, 2],
...     covariance = [[9, 5],
...                   [5, 4]])
>>> g.gen_samples(4)
array([[2.65147682, 1.86323299],
       [9.06342728, 3.60370135],
       [0.2084947 , 1.13289686],
       [4.91327026, 2.52208387]])

Here the covariance decomposition $\Sigma = A A^T$ is done using principal component analysis. The Cholesky decomposition is also available.

The Brownian motion distribution arises often in financial risk applications. Here the $d$ components of the variable $\mathbf{T}$ correspond to the discretized Brownian motion at times $\tau/d, 2\tau/d, \ldots, \tau$, where $\tau$ is the time horizon. The distribution is a special case of the Gaussian with covariance

$$\Sigma = (\tau/d)\big(\min(j,k)\big)_{j,k=1}^{d} \tag{8}$$

and mean $\mathbf{a}$, which corresponds to a drift coefficient times $(\tau/d)(1, 2, \ldots, d)^T$. The code for generating a Brownian motion is
Fig. 4
Sobol' samples transformed to mimic a 52-dimensional Brownian motion without drift (left) and with drift coefficient 2 (right).

>>> bm = qp.BrownianMotion(qp.Sobol(4), drift=2)
>>> bm.gen_samples(2)
array([[0.24724376, 1.04423332, 0.62553528, 0.87167138],
       [1.33604991, 1.92610875, 3.52091525, 4.34341092]])
Figure 4 displays a Brownian motion based on a Sobol' sequence with and without a drift.
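Concretely, (7b) with the covariance (8) reduces to cumulative sums of scaled inverse-CDF values, because the Cholesky factor of $(\tau/d)\min(j,k)$ is lower triangular with constant entries $\sqrt{\tau/d}$. A pure-Python sketch (illustrative only, not QMCPy's implementation; the function name is ours):

```python
import math
from statistics import NormalDist

def brownian_motion_point(x, tau=1.0, drift=0.0):
    """Map one point x in [0,1)^d to a discretized Brownian motion path at
    times tau/d, 2*tau/d, ..., tau, following (7b) with Sigma from (8).
    The Cholesky factor of (tau/d)*min(j,k) turns independent Gaussian
    increments into cumulative sums."""
    d = len(x)
    dt = tau / d
    inv_cdf = NormalDist().inv_cdf            # standard Gaussian Phi^{-1}
    increments = [math.sqrt(dt) * inv_cdf(xj) for xj in x]
    path, total = [], 0.0
    for j, inc in enumerate(increments, start=1):
        total += inc
        path.append(drift * dt * j + total)   # mean a = drift*(tau/d)*(1,...,d)^T
    return path

# hypothetical input point; in QMCPy it would come from an LD generator
print(brownian_motion_point([0.3, 0.6, 0.9, 0.5], drift=2.0))
```

Note that transforming the point $(0.5, \ldots, 0.5)$ gives the pure drift path, since $\Phi^{-1}(0.5) = 0$.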
Let us return to the integration problem in (1), which we must rewrite as (2). We choose a transformation of variables defined as $\mathbf{t} = \boldsymbol{\Psi}(\mathbf{x})$, where $\boldsymbol{\Psi} : [0,1)^d \to \mathcal{T}$. This leads to

$$\mu = \int_{\mathcal{T}} g(\mathbf{t}) \lambda(\mathbf{t}) \, \mathrm{d}\mathbf{t} = \int_{[0,1)^d} g\big(\boldsymbol{\Psi}(\mathbf{x})\big) \lambda\big(\boldsymbol{\Psi}(\mathbf{x})\big) \big|\boldsymbol{\Psi}'(\mathbf{x})\big| \, \mathrm{d}\mathbf{x} = \int_{[0,1)^d} f(\mathbf{x}) \, \mathrm{d}\mathbf{x}, \quad \text{where } f(\mathbf{x}) = g\big(\boldsymbol{\Psi}(\mathbf{x})\big) \lambda\big(\boldsymbol{\Psi}(\mathbf{x})\big) \big|\boldsymbol{\Psi}'(\mathbf{x})\big|, \tag{9}$$

and $\big|\boldsymbol{\Psi}'(\mathbf{x})\big| := |\partial \boldsymbol{\Psi}/\partial \mathbf{x}|$ represents the Jacobian of the variable transformation. The abstract class Integrand provides $f$ based on the user's input of $g$ and the TrueMeasure instance, which defines $\lambda$ and the transformation $\boldsymbol{\Psi}$. Different choices of $\boldsymbol{\Psi}$ lead to different $f$, which may give different rates of convergence of the cubature $\hat{\mu}$ to $\mu$. We illustrate the Integrand class via an example of Keister [22]:
$$\mu = \int_{\mathbb{R}^d} \cos(\|\mathbf{t}\|) \exp(-\mathbf{t}^T\mathbf{t}) \, \mathrm{d}\mathbf{t} = \int_{\mathbb{R}^d} \underbrace{\pi^{d/2} \cos(\|\mathbf{t}\|)}_{g(\mathbf{t})} \, \underbrace{\pi^{-d/2} \exp(-\mathbf{t}^T\mathbf{t})}_{\lambda(\mathbf{t})} \, \mathrm{d}\mathbf{t}. \tag{10}$$

Since $\lambda$ is the density for $\mathcal{N}(\mathbf{0}, \mathsf{I}/2)$, we may choose $\boldsymbol{\Psi}$ according to (7b) with $A = \sqrt{1/2}\,\mathsf{I}$, in which case $\lambda\big(\boldsymbol{\Psi}(\mathbf{x})\big)\big|\boldsymbol{\Psi}'(\mathbf{x})\big| = 1$, and so

$$\mu = \int_{[0,1)^d} \underbrace{\pi^{d/2} \cos\big(\|\boldsymbol{\Psi}(\mathbf{x})\|\big)}_{f(\mathbf{x})} \, \mathrm{d}\mathbf{x}, \qquad \boldsymbol{\Psi}(\mathbf{x}) := \sqrt{1/2}\,\boldsymbol{\Phi}^{-1}(\mathbf{x}).$$

The code below sets up an
Integrand instance using QMCPy's CustomFun wrapper to tie a user-defined function $g$ into the QMCPy framework. Then we evaluate the sample mean of $n = 1000$ $f$ values obtained by sampling at transformed Halton points.

>>> def my_Keister(t):
...     d = t.shape[1]
...     norm = np.sqrt((t**2).sum(1))
...     out = np.pi**(d/2) * np.cos(norm)
...     return out
>>> gauss = qp.Gaussian(qp.Halton(2), covariance=1/2)
>>> keister = qp.CustomFun(true_measure=gauss, g=my_Keister)
>>> x = keister.discrete_distrib.gen_samples(1000)
>>> y = keister.f(x)
>>> y.mean()
1.8068921077383129

We have no indication yet of how accurate our approximation is. That topic is treated in the next section. Figure 5 visualizes sampling on the original integrand, $g$, and sampling on the transformed integrand, $f$.

Another way to approximate the Keister integral in (10) is to write it as an integral with respect to the Lebesgue measure:

$$\mu = \int_{\mathbb{R}^d} \underbrace{\cos(\|\mathbf{t}\|) \exp(-\mathbf{t}^T\mathbf{t})}_{g(\mathbf{t})} \, \underbrace{1}_{\lambda(\mathbf{t})} \, \mathrm{d}\mathbf{t} = \int_{[0,1)^d} \underbrace{\cos\big(\|\boldsymbol{\Psi}(\mathbf{x})\|\big) \exp\big(-\boldsymbol{\Psi}^T(\mathbf{x})\boldsymbol{\Psi}(\mathbf{x})\big) \big|\boldsymbol{\Psi}'(\mathbf{x})\big|}_{f(\mathbf{x})} \, \mathrm{d}\mathbf{x},$$

where $\boldsymbol{\Psi}$ is any transformation from $[0,1)^d$ to $\mathbb{R}^d$. Now $\lambda$ is not a PDF. QMCPy can perform the cubature this way as well.

>>> def my_L_Keister(t):
...     norm_sq = (t**2).sum(1)
...     out = np.cos(np.sqrt(norm_sq)) * np.exp(-norm_sq)
...     return out
>>> lebesgue_gauss = qp.Lebesgue(qp.Gaussian(qp.Halton(2)))
>>> keister = qp.CustomFun(lebesgue_gauss, my_L_Keister)
>>> x = keister.discrete_distrib.gen_samples(1000)
>>> y = keister.f(x)
Fig. 5
Right: sampling the transformed integrand $f$ at Halton points $\mathbf{X}_i \overset{\text{LD}}{\sim} \mathcal{U}[0,1)^2$. Left: sampling the original integrand $g$ at $\mathbf{T}_i = \boldsymbol{\Psi}(\mathbf{X}_i) \overset{M}{\sim} \mathcal{N}(\mathbf{0}, \mathsf{I}/2)$, where $\boldsymbol{\Psi}$ is defined in (7b).

>>> y.mean()
1.8088133743667858
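Both approaches can be cross-checked outside QMCPy with plain IID sampling, using the standard library's $\Phi^{-1}$ (statistics.NormalDist().inv_cdf). This is an illustrative sketch with function names of our own choosing, not QMCPy code:

```python
import math
import random
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf  # standard Gaussian Phi^{-1}

def f_transformed(x):
    """f(x) = pi^(d/2) * cos(||Psi(x)||) with Psi(x) = sqrt(1/2)*Phi^{-1}(x)."""
    t = [math.sqrt(0.5) * inv_cdf(xj) for xj in x]
    return math.pi ** (len(x) / 2) * math.cos(math.sqrt(sum(tj * tj for tj in t)))

def f_lebesgue(x):
    """Lebesgue version: g(t) = cos(||t||)exp(-t.t) times the Jacobian of the
    standard Gaussian map Psi(x) = Phi^{-1}(x), i.e., one over its density."""
    t = [inv_cdf(xj) for xj in x]
    tt = sum(tj * tj for tj in t)
    jacobian = (2 * math.pi) ** (len(x) / 2) * math.exp(tt / 2)
    return math.cos(math.sqrt(tt)) * math.exp(-tt) * jacobian

random.seed(7)
n, d = 2 ** 14, 2
# strictly interior uniforms, so Phi^{-1} never sees 0 or 1
pts = [[(random.getrandbits(32) + 0.5) / 2 ** 32 for _ in range(d)] for _ in range(n)]
print(sum(map(f_transformed, pts)) / n)  # both estimates land near 1.808
print(sum(map(f_lebesgue, pts)) / n)
```

The two integrands differ pointwise, yet both average to the same $\mu$, which is exactly the freedom in choosing $\boldsymbol{\Psi}$ that the Integrand class exposes.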
The $\boldsymbol{\Psi}$ chosen when transforming uniform sequences on the unit cube to fill $\mathbb{R}^d$ is given by (7b) with $A = \mathsf{I}$.

In the examples above, one must input the correct $g$ into CustomFun along with the correct TrueMeasure $\lambda$ to define the integration problem. The Keister integrand included in the QMCPy library takes a more flexible approach to defining the integration problem $\mu$ in (10). Selecting a different sampler $\boldsymbol{\Psi}$ performs importance sampling, which leaves $\mu$ unchanged.

>>> keister = qp.Keister(qp.Halton(2))
>>> x = keister.discrete_distrib.gen_samples(1e4)
>>> keister.f(x).mean()
1.8082700897855786

>>> keister = qp.Keister(sampler=qp.Gaussian(qp.Halton(2)))
>>> x = keister.discrete_distrib.gen_samples(1e4)
>>> keister.f(x).mean()
1.807929542158969

In the first case above, the $\lambda$ in (9) corresponds to the Gaussian density with mean zero and variance $1/2$. The transformation, $\boldsymbol{\Psi}$, is chosen to make $\lambda\big(\boldsymbol{\Psi}(\mathbf{x})\big)\big|\boldsymbol{\Psi}'(\mathbf{x})\big| = 1$ and $f(\mathbf{x}) = g\big(\boldsymbol{\Psi}(\mathbf{x})\big)$. In the second case, we choose an importance sampling density $\lambda_{\text{IS}}$, corresponding to a standard Gaussian, and the variable transformation $\boldsymbol{\Psi}_{\text{IS}}$ makes $\lambda_{\text{IS}}\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)\big|\boldsymbol{\Psi}_{\text{IS}}'(\mathbf{x})\big| = 1$. Then

$$\mu = \int_{\mathcal{T}} g(\mathbf{t}) \lambda(\mathbf{t}) \, \mathrm{d}\mathbf{t} = \int_{\mathcal{T}} \frac{g(\mathbf{t}) \lambda(\mathbf{t})}{\lambda_{\text{IS}}(\mathbf{t})} \, \lambda_{\text{IS}}(\mathbf{t}) \, \mathrm{d}\mathbf{t} = \int_{[0,1)^d} \frac{g\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big) \lambda\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)}{\lambda_{\text{IS}}\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)} \, \lambda_{\text{IS}}\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big) \big|\boldsymbol{\Psi}_{\text{IS}}'(\mathbf{x})\big| \, \mathrm{d}\mathbf{x} = \int_{[0,1)^d} f_{\text{IS}}(\mathbf{x}) \, \mathrm{d}\mathbf{x}, \quad \text{where } f_{\text{IS}}(\mathbf{x}) = \frac{g\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big) \lambda\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)}{\lambda_{\text{IS}}\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)}. \tag{11}$$

Because LD samples mimic $\mathcal{U}[0,1)^d$, choosing a different sampler is equivalent to choosing a different variable transform. The
StoppingCriterion object determines the number of samples $n$ that are required for the sample mean approximation $\hat{\mu}$ to be within error tolerance $\varepsilon$ of the true mean $\mu$. Several QMC stopping criteria have been implemented in QMCPy, including replications, stopping criteria that track the decay of the Fourier complex exponential or Walsh coefficients of the integrand [15, 16, 21], and stopping criteria based on Bayesian credible intervals [19, 20].

Let us return to the Keister example from the previous section. After setting up a default Keister instance via a Sobol' DiscreteDistribution, we choose a StoppingCriterion object that matches the DiscreteDistribution and input our desired tolerance. Calling the integrate method returns the approximate integral plus some useful information about the computation.

>>> keister = qp.Keister(qp.Sobol(2))
>>> stopping = qp.CubQMCSobolG(keister, abs_tol=1e-3)
>>> solution_qmc, data_qmc = stopping.integrate()
>>> data_qmc
Fig. 6
Comparison of run times and sample sizes for computing the 5-dimensional Keister integral (10) using IID and LD lattice sequences for a variety of absolute error tolerances. The respective stopping criteria are qp.CubMCG [14] and qp.CubQMCLatticeG [21]. The LD sequences provide the desired answer much more efficiently.

    n_init          2^(10)
    n_max           2^(35)
LDTransformData (AccumulateData Object)
    n_total         2^(13)
    solution        1.808
    error_bound     6.37e-04
    time_integrate  0.009
The second output of the stopping criterion provides helpful diagnostic information. This computation requires $n = 2^{13}$ Sobol' points and 0.009 seconds to complete. The error bound is 0.000637. The error of IID MC is $O(n^{-1/2})$, which means that the time and sample size to obtain an absolute error tolerance of $\varepsilon$ are $O(\varepsilon^{-2})$. By contrast, the error of QMC using LD sequences is $O(n^{-1+\epsilon})$, which implies $O(\varepsilon^{-1-\epsilon})$ times and sample sizes. We see that QMC methods often require orders of magnitude fewer samples than MC methods to achieve the same error tolerance.

For another illustration of QMC cubature, we turn to pricing an Asian arithmetic mean call option. The payoff of this option is the positive difference between the time-averaged asset price and the strike price, $K$:

$$\text{payoff}(\mathbf{S}) = \max\!\Big(\frac{1}{2d}\sum_{j=1}^{d}(S_{j-1} + S_j) - K,\; 0\Big), \qquad \mathbf{S} = (S_0, \ldots, S_d).$$

Here $S_j$ denotes the asset price at time $\tau j/d$, and a trapezoidal rule is used for discrete approximation of the integral in time that defines the average. A basic model for asset prices is a geometric Brownian motion,

$$S_j(\mathbf{T}) = S_0 \exp\big((r - \sigma^2/2)\,\tau j/d + \sigma T_j\big), \quad j = 1, \ldots, d, \qquad \mathbf{T} = (T_1, \ldots, T_d) \sim \mathcal{N}(\mathbf{0}, \Sigma),$$

where $\Sigma$ is defined in (8), $r$ is the interest rate, $\sigma$ is the volatility, and $S_0$ is the initial asset price. The fair price of the option is then the expected value of the discounted payoff, namely,

$$\text{price} = \mu = \mathbb{E}[g(\mathbf{T})], \qquad \text{where } g(\mathbf{t}) = \text{payoff}\big(\mathbf{S}(\mathbf{t})\big) \exp(-r\tau).$$

The following code utilizes QMCPy's Asian option
Integrand object to approximate the Asian call option for a particular choice of the parameters.

>>> payoff = qp.AsianOption(
...     qp.Sobol(52),
...     start_price = 100,
...     strike_price = 120,
...     volatility = 0.5,
...     interest_rate = 0,
...     t_final = 1,
...     call_put = "call")
>>> qmc_stop_crit = qp.CubQMCSobolG(payoff, abs_tol=0.001)
>>> price, data = qmc_stop_crit.integrate()
>>> print(f"Option price = ${price:.3f} using {data.time_integrate:.3f} seconds, {data.n_total:.2e} samples")
Option price = $5.194 using 1.309 seconds, 1.31e+05 samples

Because this
Integrand object has the built-in Brownian motion
TrueMeasure, one need only provide the LD sampler.

Out-of-the-money option price calculations can be sped up by adding an upward drift to the Brownian motion. The upward drift produces more in-the-money paths and also reduces the variation or variance of the final integrand, $f$. This is a form of importance sampling. Using a Brownian motion without drift we get

>>> payoff = qp.AsianOption(qp.Sobol(52),
...     start_price = 100, strike_price = 200)
>>> qmc_stopper = qp.CubQMCSobolG(payoff, abs_tol=0.001)
>>> price, data = qmc_stopper.integrate()
>>> print(f"Option price = ${price:.4f} using {data.time_integrate:.3f} seconds, n = {data.n_total:.2e}")
Option price = $0.1755 using 1.375 seconds, n = 1.31e+05

Adding the upward drift gives us the answer faster:
>>> payoff = qp.AsianOption(
...     sampler = qp.BrownianMotion(qp.Sobol(52), drift=1),
...     start_price = 100, strike_price = 200)
>>> qmc_stopper = qp.CubQMCSobolG(payoff, abs_tol=0.001)
>>> price, data_drift = qmc_stopper.integrate()
>>> print(f"Option price = ${price:.4f} using {data_drift.time_integrate:.3f} seconds, n = {data_drift.n_total:.2e}")
>>> print(f"which requires {100*data_drift.time_integrate/data.time_integrate:.0f}% of the time and {100*data_drift.n_total/data.n_total:.0f}% of the n")
Option price = $0.1756 using 0.475 seconds, n = 1.64e+04
which requires 35% of the time and 12% of the n

The choice of a good drift is an art. The improvement in time is less than the improvement in $n$ because the integrand is more expensive to compute when the drift is employed. Referring to (11), in the case of no drift, the $\lambda$ corresponds to the density for the discrete Brownian motion, and the variable transformation $\boldsymbol{\Psi}$ is chosen so that $f(\mathbf{x}) = g\big(\boldsymbol{\Psi}(\mathbf{x})\big)$. However, in the case of a drift, the integrand becomes $f_{\text{IS}}(\mathbf{x}) = g\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big) \lambda\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big) / \lambda_{\text{IS}}\big(\boldsymbol{\Psi}_{\text{IS}}(\mathbf{x})\big)$, which requires more computation time per integrand value.

In this section, we look at the inner workings of QMCPy and point out features we hope will benefit the community of QMC researchers and practitioners. We also highlight important idiosyncrasies of QMC methods and how QMCPy addresses these challenges. For details, readers should refer to the QMCPy documentation [4].
LD sequences are the backbone of QMC methods. QMCPy provides generators that combine research from across the QMC community to enable advanced features and customization options.

Two popular LD sequences are integration lattices and digital nets, which we previously outlined in Table 1. These LD generators are comprised of two parts: the static generating vectors $\mathbf{Z}_1, \mathbf{Z}_2, \mathbf{Z}_4, \ldots \in [0,1)^d$ and the callable generator function. By default, QMCPy provides a number of high quality generating vectors for users to choose from. For instance, the default ordinary lattice vector was constructed by Cools, Kuo, and Nuyens [5] to support 3600 dimensions and $2^{20}$ samples. However, users who require more samples but fewer dimensions may switch to a generating vector constructed using LatNet Builder [26, 27] to support 750 dimensions and a greater number of samples. Moreover, the qp.Lattice and qp.DigitalNet objects allow users to input their own generating vectors to produce highly customized sequences. To find such vectors, we recommend using LatNet Builder's construction routines, as the results can be easily parsed into a QMCPy-compatible format.

Along with the selection of a generating vector, QMCPy's low discrepancy sequence routines expose a number of other customization parameters. For instance, the lattice generator extends the Magic Point Shop [31] to support either linear or natural ordering. Digital sequences permit either standard or Graycode ordering and may be randomized via a digital shift, optionally combined with a linear scrambling [29]. Halton sequences may be randomized via the routines of either Owen [34] or Hofert and Lemieux [18].

Although QMCPy's
DiscreteDistribution s have many of the same param-eters and methods, users should be careful when swapping IID sequenceswith LD sequences. While IID node-sets have no preferred sample size, LDsequences often require special sampling ranges to ensure optimal discrepancy.As mentioned earlier, digital nets and integration lattices show better evennessfor sample sizes that are powers of 2. On the other hand, the preferred samplesizes for d -dimensional Halton sequences are n = Q dj =1 p m j j where p j is the j th prime number and m j ∈ N for j = 1 , . . . , d . Due to the infrequency of suchvalues, Halton sequences are often regarded as not having a preferred samplesize.Users may also run into trouble when trying to generate too many points.Since QMCPy’s generators construct sequences in 32-bit precision, generatinggreater than 2 consecutive samples will cause the sequence to repeat. In thefuture, we plan to expand our generators to support optional 64-bit precisionat the cost of greater computational overhead.Another subtlety arises when transforming LD sequences to mimic differentdistributions. As mentioned earlier, unrandomized lattice and digital sequencesinclude the origin, making transformations such as (7b) produce infinite values.Some popular implementations of LD sequences drop the first point, whichis the origin in the absence of randomization. The rationale is to avoid thetransformation of the origin to infinity when mimicking a Gaussian or otherdistribution with an infinite sample space. Unfortunately, dropping the firstpoint destroys some of the nice properties of the first n = 2 m points of LDsequences, which can degrade the order of convergence for QMC cubature. Acareful discussion of this matter is given by [33]. Page: 18 job: QMCSoftwareArticleNoRho macro: svmult.cls date/time: 17-Feb-2021/1:58 uasi-Monte Carlo Software 19
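To make the two-part structure concrete (a static generating vector plus a small generator routine), here is a minimal plain-Python sketch of a randomly shifted rank-1 lattice. This is not QMCPy's implementation, and the generating vector and shift below are made up for illustration; note the power-of-2 sample size and that the shift moves the first point off the origin, which matters for the inverse-CDF transforms discussed above.

```python
def shifted_lattice(n, z, shift):
    """Rank-1 lattice: x_i = frac(i * z / n + shift), i = 0, ..., n-1.

    n should be a power of 2 to retain the evenness properties of the
    embedded lattice constructions discussed above.
    """
    d = len(z)
    return [[(i * z[j] / n + shift[j]) % 1.0 for j in range(d)]
            for i in range(n)]

# Hypothetical 2-dimensional generating vector and a fixed "random" shift
z = [1, 433461]
shift = [0.42, 0.17]

pts = shifted_lattice(2**10, z, shift)
unshifted = shifted_lattice(2**10, z, [0.0, 0.0])

# Without a shift the first point is the origin, which maps to -infinity
# under a Gaussian inverse CDF; the shift moves it into (0, 1)^2.
print(unshifted[0], pts[0])  # [0.0, 0.0] [0.42, 0.17]
```

A digital shift plays the analogous role for digital nets, XOR-ing random bits into each point rather than adding a vector modulo 1.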
The transformation $\Psi$ connects a DiscreteDistribution and a TrueMeasure. So far, we have assumed that the DiscreteDistribution mimics a $\mathcal{U}[0,1]^d$ distribution with PDF $\varrho(x) = 1$. However, it may be advantageous to utilize a DiscreteDistribution that mimics a different distribution. For instance, QMCPy includes a DiscreteDistribution corresponding to an IID standard Gaussian sequence. It may be transformed to mimic a general Gaussian TrueMeasure while avoiding the inverse CDF transform required for nodes mimicking $\mathcal{U}[0,1]^d$.

Suppose we have a DiscreteDistribution mimicking a density $\varrho$ supported on $\mathcal{X}$. Then, using the variable transformation $\Psi$,
$$\mu = \int_{\mathcal{T}} g(t)\,\lambda(t)\,\mathrm{d}t = \int_{\mathcal{X}} f(x)\,\varrho(x)\,\mathrm{d}x \qquad \text{for } f(x) = \frac{g(\Psi(x))\,\lambda(\Psi(x))\,|\Psi'(x)|}{\varrho(x)},$$
which generalizes (9).

QMCPy also includes support for composed variable transformations. Suppose that the variable transformation is a composition of several transformations, $\Psi = \widehat{\Psi}_L = \Psi_L \circ \Psi_{L-1} \circ \cdots \circ \Psi_1$, as in (11). Here, $\Psi_l : \mathcal{X}_{l-1} \to \mathcal{X}_l$, $\mathcal{X}_0 = \mathcal{X}$, and $\mathcal{X}_L = \mathcal{T}$, so that the transformations are compatible with the DiscreteDistribution and TrueMeasure. Let $\widehat{\Psi}_l = \Psi_l \circ \Psi_{l-1} \circ \cdots \circ \Psi_1$ denote the composition of the first $l$ transforms, and assume that $\widehat{\Psi}_0(x) = x$, the identity transform. Then we may write $\mu = \int_{\mathcal{X}} f(x)\,\varrho(x)\,\mathrm{d}x$ for
$$f(x) = \frac{g\bigl(\widehat{\Psi}_L(x)\bigr)\,\lambda\bigl(\widehat{\Psi}_L(x)\bigr)}{\varrho(x)} \prod_{l=1}^{L} \bigl|\Psi_l'\bigl(\widehat{\Psi}_{l-1}(x)\bigr)\bigr|.$$
It is often the case that $\Psi_l$ is chosen such that $\Psi_l(X)$ is stochastically equivalent to a random variable with density $\lambda_l$ on sample space $\mathcal{X}_l$ when $X$ is a random variable with density $\varrho_l$ on sample space $\mathcal{X}_{l-1}$. This implies $\varrho_l(x) = \lambda_l(\Psi_l(x))\,|\Psi_l'(x)|$, so that
$$f(x) = \frac{g\bigl(\widehat{\Psi}_L(x)\bigr)\,\lambda\bigl(\widehat{\Psi}_L(x)\bigr)}{\varrho(x)} \prod_{l=1}^{L} \frac{\varrho_l\bigl(\widehat{\Psi}_{l-1}(x)\bigr)}{\lambda_l\bigl(\widehat{\Psi}_l(x)\bigr)}.$$

For an example, we return to the Keister integral (10). The following code constructs three Keister instances: one without importance sampling, one importance sampled by a Gaussian distribution, and one importance sampled by the composition of a Gaussian distribution with a Kumaraswamy distribution. All Integrands use a Sobol' DiscreteDistribution, making $\varrho(x) = 1$ and $\mathcal{X} = [0,1]^d$. The TrueMeasure is $\mathcal{N}(0, \mathsf{I}/2)$, making $\lambda(t) = \pi^{-d/2} \exp(-t^T t)$ and $\mathcal{T} = \mathbb{R}^d$.
The table below displays the variable transformations and the measures for these three cases. In all cases $\varrho_1(x) = \cdots = \varrho_L(x) = 1$ because the $\Psi_l$ utilize inverse cumulative distributions.

Integrand    | L | λ_1        | Ψ_1    | λ_2     | Ψ_2  | f
K            | 1 | N(0, I/2)  | (7b)   | —       | —    | g(Ψ̂_1(·))
K_gauss      | 1 | N(0, 3I/4) | (7b)   | —       | —    | g(Ψ̂_1(·)) λ(Ψ̂_1(·)) / λ_1(Ψ̂_1(·))
K_gauss_kuma | 2 | Kum        | F^{-1} | N(0, I) | (7b) | g(Ψ̂_2(·)) λ(Ψ̂_2(·)) / [λ_1(Ψ̂_1(·)) λ_2(Ψ̂_2(·))]

Here Kum denotes the multivariate Kumaraswamy distribution with independent marginals, and $F^{-1}$ denotes the element-wise inverse cumulative distribution function. The following code evaluates the Keister integral (10) for $d = 1$ and error tolerance $\varepsilon = 5 \times 10^{-8}$. The timings for each of these different integrands are displayed.

>>> sobol = qp.Sobol(1)
>>> K = qp.Keister(sobol)
>>> K_gauss = qp.Keister(
...     qp.Gaussian(sobol, covariance=.75))
>>> K_gauss_kuma = qp.Keister(
...     qp.Gaussian(
...         qp.Kumaraswamy(sobol, a=.8, b=.8)))
>>> for this_K in [K, K_gauss, K_gauss_kuma]:
...     stopper = qp.CubQMCSobolG(this_K, abs_tol=5e-8)
...     sol, data = stopper.integrate()
...     print("Time: %.3f" % data.time_integrate)
Time: 2.023
Time: 0.350
Time: 0.071
Successful importance sampling places more points where the original integrand, $g$, varies more. Equivalently, successful importance sampling makes the transformed integrand, $f$, flatter. The shorter execution times correspond to flatter integrands, as illustrated in Figure 7. The above example uses $d = 1$ to facilitate the plot in Figure 7; however, the same example works for arbitrary dimensions.

Automatic Bayesian stopping criteria choose the $n$ needed to satisfy the error tolerance via a Bayesian credible interval for $\mu$ [19, 20]. These are also options in QMCPy.
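The change-of-measure identity behind this example can be sanity-checked outside QMCPy. The stdlib-only sketch below (a midpoint grid stands in for an LD sequence, and $d = 1$) evaluates the Keister integrand through the plain inverse-CDF transform and through Gaussian importance sampling with variance 3/4, and both recover the known value $\sqrt{\pi}\,e^{-1/4} \approx 1.3804$.

```python
from math import cos, exp, pi, sqrt
from statistics import NormalDist

# Keister integrand g and its true measure lambda = N(0, 1/2) in d = 1;
# the exact answer is sqrt(pi) * exp(-1/4).
g = lambda t: sqrt(pi) * cos(abs(t))
true_measure = NormalDist(0.0, sqrt(0.5))   # lambda
is_measure = NormalDist(0.0, sqrt(0.75))    # lambda_1, the IS choice above

n = 2**16
grid = [(i + 0.5) / n for i in range(n)]    # midpoint rule in place of an LD set

# Plain transform: f(x) = g(Psi(x)), Psi = inverse CDF of lambda
mu_plain = sum(g(true_measure.inv_cdf(x)) for x in grid) / n

# Importance sampled: f(x) = g(Psi_1(x)) * lambda(Psi_1(x)) / lambda_1(Psi_1(x))
def f_is(x):
    t = is_measure.inv_cdf(x)
    return g(t) * true_measure.pdf(t) / is_measure.pdf(t)

mu_is = sum(f_is(x) for x in grid) / n

exact = sqrt(pi) * exp(-0.25)
print(mu_plain, mu_is, exact)  # all three agree to several digits
```

The same reweighting extends to the composed Kumaraswamy-then-Gaussian case by multiplying the factors $\varrho_l / \lambda_l$ from the product formula of the previous section.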
Fig. 7 Keister functions with and without importance sampling (IS). Note that the Keister functions using importance sampling are generally less variable and therefore easier to integrate, as evidenced by the faster integration times.

QMCPy is ripe for growth and development in several areas. We hope that the QMC community will join us in making this a reality.

Multi-level (quasi-)Monte Carlo (ML(Q)MC) methods make possible the computation of expectations of functionals of stochastic differential equations and partial differential equations with random coefficients. Such use cases appear in quantitative finance and in geophysical applications. QMCPy's ML(Q)MC capability is rudimentary but under active development.

We hope to add a greater variety of use cases and are engaging collaborators to help. Sobol' indices, partial differential equations with random coefficients, expected improvement measures for Bayesian optimization, and multivariate probabilities are some of those on our radar.

Recently, some QMC experts have focused on developing LD generators for Python. Well-established packages such as SciPy [37] and PyTorch [35] are actively working to develop QMC modules that support numerous LD sequences and related functionalities. When these modules are released to the public, we plan to integrate the routines as optional backends for QMCPy's LD generator objects. Creating ties to these other packages will allow users to call their preferred generators from within the QMCPy framework. Moreover, as features in QMCPy become more common and prove their value, we will try to incorporate them into SciPy and other popular, general purpose packages.
We also plan to expand our library of digital net generating matrices. We wish to incorporate interlaced digital nets, polynomial lattices, and Niederreiter sequences, among others. By including high-quality defaults in QMCPy, we hope to make these sequences more readily available to the public.

Our DiscreteDistribution places equal weights on each support point, $X_i$. In the future, we might generalize this to unequal weights.

QMCPy already includes importance sampling, but the sampling distribution must be chosen a priori. We would like to see an automatic, adaptive choice.

Control variates can be useful for QMC, as well as for IID MC [17]. These should be incorporated into QMCPy in a seamless way.

We close with an invitation. Try QMCPy. If you find bugs or missing features, submit an issue at https://github.com/QMCSoftware/QMCSoftware/issues. If you wish to add your great algorithm or use case, submit a pull request to our GitHub repository at https://github.com/QMCSoftware/QMCSoftware/pulls. We hope that the community will embrace QMCPy.

Acknowledgements
The authors would like to thank the organizers for a wonderful MCQMC 2020. We also thank the referees for their many helpful suggestions. This work is supported in part by SigOpt and National Science Foundation grant DMS-1522687.
References
1. Burkardt, J.: Various software (2020). URL http://people.sc.fsu.edu/~jburkardt/
2. Choi, S.C.T., Ding, Y., Hickernell, F.J., Jiang, L., Jiménez Rugama, Ll.A., Li, D., Jagadeeswaran, R., Tong, X., Zhang, K., Zhang, Y., Zhou, X.: GAIL: Guaranteed Automatic Integration Library (versions 1.0–2.3.1). MATLAB software, http://gailgithub.github.io/GAIL_Dev/ (2020). DOI 10.5281/zenodo.4018190
3. Choi, S.C.T., Hickernell, F.J., Jagadeeswaran, R., McCourt, M., Sorokin, A.: QMCPy: A quasi-Monte Carlo Python library. https://qmcsoftware.github.io/QMCSoftware/ (2020). DOI 10.5281/zenodo.3964489
4. Choi, S.C.T., Hickernell, F.J., Jagadeeswaran, R., McCourt, M.J., Sorokin, A.: QMCPy documentation (2020). URL https://qmcpy.readthedocs.io/en/latest/
5. Cools, R., Kuo, F.Y., Nuyens, D.: Constructing embedded lattice rules for multivariate integration. SIAM Journal on Scientific Computing 28(6), 2162–2188 (2006). DOI 10.1137/06065074X
6. Cools, R., Nuyens, D. (eds.): Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, Springer Proceedings in Mathematics and Statistics, vol. 163. Springer-Verlag, Berlin (2016)
7. Dick, J., Kuo, F., Sloan, I.H.: High dimensional integration — the quasi-Monte Carlo way. Acta Numer. 22, 133–288 (2013). DOI 10.1017/S0962492913000044
8. Giles, M.: Multi-level (quasi-)Monte Carlo software (2020). URL https://people.maths.ox.ac.uk/gilesm/mlmc/
9. Harris, C.R., Millman, K.J., van der Walt, S.J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N.J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M.H., Brett, M., Haldane, A., del Río, J.F., Wiebe, M., Peterson, P., Gérard-Marchant, P., Sheppard, K., Reddy, T., Weckesser, W., Abbasi, H., Gohlke, C., Oliphant, T.E.: Array programming with NumPy. Nature 585(7825), 357–362 (2020). DOI 10.1038/s41586-020-2649-2
10. Hickernell, F.J., Sorokin, A.: Quasi-Monte Carlo (QMC) software in QMCPy Google Colaboratory notebook (2020). URL http://tinyurl.com/QMCPyTutorial
11. Hickernell, F.J., Sorokin, A.: Quasi-Monte Carlo (QMC) software in QMCPy Google Colaboratory notebook for MCQMC2020 article (2020). URL https://tinyurl.com/QMCPyArticle2021
12. Hickernell, F.J.: A generalized discrepancy and quadrature error bound. Math. Comp. 67, 299–322 (1998). DOI 10.1090/S0025-5718-98-00894-1
13. Hickernell, F.J.: Goodness-of-fit statistics, discrepancies and robust designs. Statist. Probab. Lett. 44, 73–78 (1999)
14. Hickernell, F.J., Jiang, L., Liu, Y., Owen, A.B.: Guaranteed conservative fixed width confidence intervals via Monte Carlo sampling. In: J. Dick, F.Y. Kuo, G.W. Peters, I.H. Sloan (eds.) Monte Carlo and Quasi-Monte Carlo Methods 2012, Springer Proceedings in Mathematics and Statistics, vol. 65, pp. 105–128. Springer-Verlag, Berlin (2013). DOI 10.1007/978-3-642-41095-6
15. Hickernell, F.J., Jiménez Rugama, Ll.A.: Reliable adaptive cubature using digital sequences. In: Cools and Nuyens [6], pp. 367–383. ArXiv:1410.8615 [math.NA]
16. Hickernell, F.J., Jiménez Rugama, Ll.A., Li, D.: Adaptive quasi-Monte Carlo methods for cubature. In: J. Dick, F.Y. Kuo, H. Woźniakowski (eds.) Contemporary Computational Mathematics — a celebration of the 80th birthday of Ian Sloan, pp. 597–619. Springer-Verlag (2018). DOI 10.1007/978-3-319-72456-0
17. Hickernell, F.J., Lemieux, C., Owen, A.B.: Control variates for quasi-Monte Carlo. Statist. Sci. 20, 1–31 (2005). DOI 10.1214/088342304000000468
18. Hofert, M., Lemieux, C.: qrng R package (2017). URL https://cran.r-project.org/web/packages/qrng/qrng.pdf
19. Jagadeeswaran, R., Hickernell, F.J.: Fast automatic Bayesian cubature using lattice sampling. Stat. Comput. 29, 1215–1229 (2019). DOI 10.1007/s11222-019-09895-9
20. Jagadeeswaran, R., Hickernell, F.J.: Fast automatic Bayesian cubature using Sobol' sampling (2021+)
21. Jiménez Rugama, Ll.A., Hickernell, F.J.: Adaptive multidimensional integration based on rank-1 lattices. In: Cools and Nuyens [6], pp. 407–422. ArXiv:1411.1966
22. Keister, B.D.: Multidimensional quadrature algorithms. Computers in Physics 10, 119–122 (1996)
29. Matoušek, J.: On the L2-discrepancy for anchored boxes. J. Complexity 14, 527–556 (1998)
30. Niederreiter, H.: Random Number Generation and Quasi-Monte Carlo Methods. CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia (1992)