Pierre L’Ecuyer
Université de Montréal
Publications
Featured research published by Pierre L’Ecuyer.
Archive | 2005
Pierre L’Ecuyer; Christiane Lemieux
We survey some of the recent developments on quasi-Monte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a high-dimensional integral. We review several QMC constructions and different randomizations that have been proposed to provide unbiased estimators and to permit error estimation. Randomizing QMC methods allows us to view them as variance reduction techniques. New and old results on this topic are used to explain how these methods can improve over the MC method in practice. We also discuss how this methodology can be coupled with clever transformations of the integrand in order to reduce the variance further. Additional topics included in this survey are the description of figures of merit used to measure the quality of the constructions underlying these methods, and other related techniques for multidimensional integration.
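The randomization idea surveyed here can be sketched concretely. Below is a minimal Python illustration (not from the paper) of a randomly shifted rank-1 lattice rule: the generating vector z = (1, 233) with n = 610 points is a classical Fibonacci lattice for d = 2, chosen purely for illustration. Each independent random shift gives an unbiased estimate of the integral, and the spread across shifts yields an error estimate.

```python
import numpy as np

def shifted_lattice_estimates(f, z, n, d, num_shifts, rng):
    """Randomly shifted rank-1 lattice rule: each random shift gives an
    unbiased estimate of the integral of f over [0,1]^d; the spread of
    the estimates across shifts provides an error estimate."""
    i = np.arange(n).reshape(-1, 1)
    base = (i * np.asarray(z)) % n / n      # deterministic lattice points
    estimates = []
    for _ in range(num_shifts):
        shift = rng.random(d)
        points = (base + shift) % 1.0       # randomly shifted copy of the point set
        estimates.append(f(points).mean())
    return np.array(estimates)

# Example: integrate f(x) = prod_j (1 + (x_j - 1/2)) over [0,1]^2; true value is 1.
rng = np.random.default_rng(42)
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
est = shifted_lattice_estimates(f, z=[1, 233], n=610, d=2, num_shifts=16, rng=rng)
print(est.mean(), est.std(ddof=1) / np.sqrt(16))  # estimate and its standard error
```

For this smooth product-form integrand the standard error is orders of magnitude below what 610 plain Monte Carlo points would give, which is the variance-reduction view described above.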
Finance and Stochastics | 2009
Pierre L’Ecuyer
We review the basic principles of quasi-Monte Carlo (QMC) methods, the randomizations that turn them into variance-reduction techniques, the integration error and variance bounds obtained in terms of QMC point set discrepancy and variation of the integrand, and the main classes of point set constructions: lattice rules, digital nets, and permutations in different bases. QMC methods are designed to estimate s-dimensional integrals, for moderate or large (perhaps infinite) values of s. In principle, any stochastic simulation whose purpose is to estimate an integral fits this framework, but the methods work better for certain types of integrals than others (e.g., if the integrand can be well approximated by a sum of low-dimensional smooth functions). Such QMC-friendly integrals are encountered frequently in computational finance and risk analysis. We summarize the theory, give examples, and provide computational results that illustrate the efficiency improvement achieved. This article is targeted mainly for those who already know Monte Carlo methods and their application in finance, and want an update of the state of the art on quasi-Monte Carlo methods.
Archive | 2004
Pierre L’Ecuyer
Lattice rules are quasi-Monte Carlo methods for estimating large-dimensional integrals over the unit hypercube. In this paper, after briefly reviewing key ideas of quasi-Monte Carlo methods, we give an overview of recent results, generalize some of them, and provide new results, for lattice rules defined in spaces of polynomials and of formal series with coefficients in the finite ring ℤb. Some of the results are proved only for the case where b is a prime (so ℤb is a finite field). We discuss basic properties, implementations, a randomized version, and quality criteria (i.e., measures of uniformity) for selecting the parameters. Two types of polynomial lattice rules are examined: dimensionwise lattices and resolutionwise lattices. These rules turn out to be special cases of digital net constructions, which we reinterpret as yet another type of lattice in a space of formal series. Our development underlines the connections between integration lattices and digital nets.
Annals of Operations Research | 2011
Pierre L’Ecuyer; Bruno Tuffin
We consider a class of Markov chain models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multicomponent systems in reliability settings. We are interested in the design of efficient importance sampling (IS) schemes to estimate the reliability of such systems by simulation. For these models, there is in fact a zero-variance IS scheme that can be written exactly in terms of a value function that gives the expected cost-to-go (the exact reliability, in our case) from any state of the chain. This IS scheme is impractical to implement exactly, but it can be approximated by approximating this value function. We examine how this can be effectively used to estimate the reliability of a highly-reliable multicomponent system with Markovian behavior. In our implementation, we start with a simple crude approximation of the value function, we use it in a first-order IS scheme to obtain a better approximation at a few selected states, then we interpolate in between and use this interpolation in our final (second-order) IS scheme. In numerical illustrations, our approach outperforms the popular IS heuristics previously proposed for this class of problems. We also perform an asymptotic analysis in which the HRMS model is parameterized in a standard way by a rarity parameter ε, so that the relative error (or relative variance) of the crude Monte Carlo estimator is unbounded when ε→0. We show that with our approximation, the IS estimator has bounded relative error (BRE) under very mild conditions, and vanishing relative error (VRE), which means that the relative error converges to 0 when ε→0, under slightly stronger conditions.
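The zero-variance idea can be illustrated on a toy chain. The sketch below (entirely illustrative; the chain, the crude approximation `v_hat`, and all parameters are not the HRMS models of the paper) estimates the probability that a gambler's-ruin chain with small up-probability eps hits N before 0. Transitions are sampled proportionally to p(x,y)·v_hat(y), where v_hat(x) = eps**(N−x) is a crude approximation of the true hitting probability, and the likelihood ratio corrects the bias; with the exact value function this scheme would have zero variance.

```python
import numpy as np

def is_hitting_prob(eps, n_runs, rng, N=3, start=1):
    """Importance sampling guided by an approximate value function:
    estimate P(hit N before 0) for a gambler's-ruin chain on {0,...,N}
    with up-probability eps.  Transitions are drawn proportionally to
    p(x,y) * v_hat(y); the likelihood ratio L keeps the estimator unbiased."""
    def v_hat(x):                        # crude approximation of the hitting prob
        if x <= 0:
            return 0.0
        if x >= N:
            return 1.0
        return eps ** (N - x)

    total = 0.0
    for _ in range(n_runs):
        x, L = start, 1.0                # current state and likelihood ratio
        while 0 < x < N:
            w_up = eps * v_hat(x + 1)
            w_dn = (1.0 - eps) * v_hat(x - 1)
            q_up = w_up / (w_up + w_dn)  # IS up-probability at state x
            if rng.random() < q_up:
                L *= eps / q_up
                x += 1
            else:
                L *= (1.0 - eps) / (1.0 - q_up)
                x -= 1
        total += L if x == N else 0.0
    return total / n_runs

rng = np.random.default_rng(7)
eps = 0.01
est = is_hitting_prob(eps, n_runs=20_000, rng=rng)
exact = (1 - (1 - eps) / eps) / (1 - ((1 - eps) / eps) ** 3)  # closed form
print(est, exact)
```

Here the exact answer is about 1.01e-4; crude Monte Carlo would need millions of runs to see the event at all, while the IS estimator concentrates tightly around the true value.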
Sixth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing | 2006
Pierre L’Ecuyer; Christian Lécot; Bruno Tuffin
We study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain with totally ordered (discrete or continuous) state space. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d+1)-dimensional low-discrepancy point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. The method can be used in particular to get a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial.
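A stripped-down sketch of the idea (with d = 1, a toy chain, and stratified points standing in for a general low-discrepancy set; none of this reproduces the paper's implementation): simulate n copies in parallel, sort them by state at each step so that the i-th sorted copy is matched with the i-th stratum, and advance each copy with a freshly randomized point.

```python
import numpy as np

def array_rqmc_mean(n, K, rng):
    """Array-RQMC sketch (d = 1 driving uniform per step) for the chain
    X_{k+1} = (X_k + U_k)/2, X_0 = 0.  At each step the n copies are
    sorted by state, and the i-th sorted copy is advanced with the
    randomly shifted stratified point (i + U)/n, re-randomized each step.
    The sort plays the role of the extra 'state' coordinate of the
    (d+1)-dimensional point set."""
    x = np.zeros(n)
    for _ in range(K):
        x.sort()                                # order the copies by state
        u = (np.arange(n) + rng.random()) / n   # one point per stratum, fresh shift
        x = (x + u) / 2.0                       # one chain transition per copy
    return x.mean()

rng = np.random.default_rng(1)
est = array_rqmc_mean(n=1024, K=10, rng=rng)
# True value: E[X_10] = 1/2 - 2**-11 = 0.49951171875
print(est)
```

Because each stratum receives exactly one point, the per-step estimator stays unbiased, and the sorting keeps the empirical state distribution close to the true one, which is where the variance reduction comes from.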
Mathematics and Computers in Simulation | 2017
Pierre L’Ecuyer; David Munger; Boris Oreshkin; Richard J. Simard
We examine the requirements and the available methods and software to provide (or imitate) uniform random numbers in parallel computing environments. In this context, for the great majority of applications, independent streams of random numbers are required, each being computed on a single processing element at a time. Sometimes, thousands or even millions of such streams are needed. We explain how they can be produced and managed. We devote particular attention to multiple streams for GPU devices.
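One way to obtain such independent streams in practice (shown here with NumPy's seed-spawning machinery, which is not the software discussed in the paper but follows the same design): derive all streams from a single root seed, so each processing element gets its own generator and the whole experiment remains reproducible.

```python
import numpy as np

# One root seed for the whole experiment; spawn() derives independent,
# non-overlapping child seeds, one per stream / processing element.
root = np.random.SeedSequence(20170501)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

# Each stream produces its own sequence of uniforms.
draws = [rng.random(3) for rng in streams]

# The split is reproducible: re-spawning from the same root seed
# regenerates exactly the same streams.
streams_again = [np.random.default_rng(s)
                 for s in np.random.SeedSequence(20170501).spawn(4)]
```

Spawning scales to thousands or millions of streams without the user having to pick or check the seeds by hand, which is the stream-management requirement described above.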
Archive | 1998
Pierre L’Ecuyer
We recall some requirements for “good” random number generators and argue that while the construction of generators and the choice of their parameters must be based on theory, a posteriori empirical testing is also important. We then give examples of tests failed by some popular generators and examples of generators passing these tests.
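A classical example of the structural defects such tests expose is RANDU, the once-popular LCG x_{k+1} = 65539·x_k mod 2^31. Because 65539 = 2^16 + 3, every triple of successive outputs satisfies the exact linear relation x_{k+2} = 6·x_{k+1} − 9·x_k (mod 2^31), so all triples fall on only 15 planes in the unit cube; the short check below verifies the relation directly.

```python
def randu(seed, n):
    """RANDU: x_{k+1} = 65539 * x_k mod 2**31 (a notoriously bad LCG)."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(1, 1000)

# Every successive triple satisfies x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2**31),
# the defect that makes RANDU fail three-dimensional uniformity tests.
violations = sum((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 != 0
                 for k in range(len(xs) - 2))
print(violations)  # 0: the relation holds for all triples
```

No amount of empirical luck can hide such a deterministic relation, which is why both theoretical analysis of the lattice structure and a posteriori statistical testing are advocated here.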
Archive | 2000
Christiane Lemieux; Pierre L’Ecuyer
We explore how lattice rules can reduce the variance of the estimators for simulation problems, in comparison with the Monte Carlo method. To do this, we compare these two methods on option valuation problems in finance, along with two types of (t, s)-sequences. We also look at the effect of combining variance reduction techniques with the preceding approaches. Our numerical results seem to indicate that lattice rules are less affected by the “curse of dimensionality” than the other types of quasi-Monte Carlo methods and provide more precise estimators than Monte Carlo does.
Archive | 2010
Héctor Cancela; Pierre L’Ecuyer; Matias David Lee; Gerardo Rubino; Bruno Tuffin
Many dependability analyses are performed using static models, that is, models where time is not an explicit variable. In these models, the system and its components are considered at a fixed point in time, and the word “static” means that the past or future behavior is not relevant for the analysis. Examples of such models are reliability diagrams, or fault trees. The main difficulty when evaluating the dependability of these systems is the combinatorial explosion associated with exact solution techniques. For large and complex models, one may turn to Monte Carlo methods, but these methods have to be modified or adapted in the presence of rare important events, which are commonplace in reliability and dependability systems. This chapter examines a recently proposed method designed to deal with the problem of estimating reliability metrics for highly dependable systems where the failure of the whole system is a rare event. We focus on the robustness properties of estimators. We also propose improvements to the original technique, including its combination with randomized quasi-Monte Carlo, for which we prove that the variance converges at a faster rate (asymptotically) than for standard Monte Carlo.
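The chapter's specific estimator is not reproduced here, but the baseline it improves on is easy to sketch: crude Monte Carlo on a tiny static model (a hypothetical three-component block diagram chosen for illustration). When the component failure probability q is small, system failure becomes a rare event and the relative error of this crude estimator blows up, which is exactly the regime the robust methods above target.

```python
import numpy as np

def crude_mc_unreliability(q, n, rng):
    """Crude Monte Carlo for a tiny static model: components 1 and 2 in
    parallel, in series with component 3.  The system fails if both
    parallel components fail or if component 3 fails; each component
    fails independently with probability q."""
    fail = rng.random((n, 3)) < q                       # component states
    system_fail = (fail[:, 0] & fail[:, 1]) | fail[:, 2]
    return system_fail.mean()

rng = np.random.default_rng(2010)
q = 0.3
est = crude_mc_unreliability(q, n=200_000, rng=rng)
exact = 1.0 - (1.0 - q * q) * (1.0 - q)   # = q + q**2 - q**3 = 0.363
print(est, exact)
```

With q = 0.3 the estimate is accurate, but as q → 0 the system-failure probability shrinks like q while the standard deviation shrinks only like sqrt(q), so the relative error of crude Monte Carlo is unbounded; bounded-relative-error estimators are what the chapter is after.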
European Journal of Operational Research | 2014
Nicolas Chapados; Marc Joliveau; Pierre L’Ecuyer; Louis-Martin Rousseau
In spite of its tremendous economic significance, the problem of sales staff schedule optimization for retail stores has received relatively scant attention. Current approaches typically attempt to minimize payroll costs by closely fitting a staffing curve derived from exogenous sales forecasts, oblivious to the ability of additional staff to (sometimes) positively impact sales. In contrast, this paper frames the retail scheduling problem in terms of operating profit maximization, explicitly recognizing the dual role of sales employees as sources of revenues as well as generators of operating costs. We introduce a flexible stochastic model of retail store sales, estimated from store-specific historical data, that can account for the impact of all known sales drivers, including the number of scheduled staff, and provide an accurate sales forecast at a high intra-day resolution. We also present solution techniques based on mixed-integer programming (MIP) and constraint programming (CP) to efficiently solve the complex mixed-integer non-linear (MINLP) scheduling problem with a profit-maximization objective. The proposed approach allows solving full weekly schedules to optimality, or near-optimality with a very small gap. In a case study with a medium-sized retail chain, this integrated forecasting–scheduling methodology yields significant projected net profit increases on the order of 2–3% compared to baseline schedules.
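The shift from cost minimization to profit maximization can be seen in a deliberately tiny toy model (illustrative only; the paper's MIP/CP formulations handle real scheduling constraints, shift structures, and a stochastic sales model). Here each period simply gets the staff level that maximizes forecast revenue minus wages.

```python
def best_staffing(revenue, wage, max_staff):
    """Toy profit-maximizing staffing: for each period, pick the staff
    level s maximizing revenue(period, s) - wage * s, instead of
    minimizing payroll against a fixed staffing curve.  Periods are
    treated as independent and unconstrained (unlike the real problem)."""
    schedule = []
    for period_rev in revenue:   # period_rev[s] = forecast revenue with s staff
        profits = [period_rev[s] - wage * s for s in range(max_staff + 1)]
        schedule.append(max(range(max_staff + 1), key=profits.__getitem__))
    return schedule

# Forecast revenue is concave in staff (diminishing returns per extra employee).
revenue = [[0, 50, 80, 95, 100],   # busy period
           [0, 30, 45, 52, 55]]    # quiet period
print(best_staffing(revenue, wage=20, max_staff=4))  # -> [2, 1]
```

Because extra staff can raise revenue, the profit-maximizing schedule staffs the busy period above what a pure cost-minimizing fit to a fixed demand curve would choose, which is the paper's central point.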