Ludolf E. Meester
Delft University of Technology
Publication
Featured research published by Ludolf E. Meester.
Probability in the Engineering and Informational Sciences | 1993
Ludolf E. Meester; J. George Shanthikumar
We define a notion of regularity ordering among stochastic processes called directionally convex (dcx) ordering and give examples of doubly stochastic Poisson and Markov renewal processes where such ordering is prevalent. Furthermore, we show that the class of segmented processes introduced by Chang, Chao, and Pinedo [3] provides a rich set of stochastic processes where the dcx ordering can be commonly encountered. When the input processes to a large class of queueing systems (single stage as well as networks) are dcx ordered, so are the processes associated with these queueing systems. For example, if the input processes to two tandem ·/M/c_1 → ·/M/c_2 → … → ·/M/c_m queueing systems are dcx ordered, so are the numbers of customers in the systems. The concept of directionally convex functions (Shaked and Shanthikumar [15]) and the notion of multivariate stochastic convexity (Chang, Chao, Pinedo, and Shanthikumar [4]) are employed in our analysis.
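The dcx order itself is multivariate, but its simplest consequence is one-dimensional: a doubly stochastic Poisson count with a more variable random rate is larger in the convex order, so convex functionals of the count increase. A minimal sketch in Python, with made-up rate distributions of equal mean (this illustrates the ordering idea only, not the paper's tandem-queue results):

import numpy as np

# Toy illustration of the convex-ordering effect behind dcx ordering:
# two doubly stochastic Poisson counts over [0, t], one with a
# deterministic rate and one with a more variable random rate of the
# same mean.  The rate values are arbitrary examples.
rng = np.random.default_rng(0)
t, n = 10.0, 200_000

lam_det = np.full(n, 1.0)                    # deterministic rate 1
lam_mix = rng.choice([0.5, 1.5], size=n)     # same mean, more variable

N_det = rng.poisson(lam_det * t)
N_mix = rng.poisson(lam_mix * t)

c = 12
for name, N in [("deterministic rate", N_det), ("mixed rate", N_mix)]:
    print(f"{name:18s} mean={N.mean():6.2f}  E[(N-c)+]={np.maximum(N - c, 0).mean():5.2f}")
# the convex functional E[(N-c)+] is visibly larger under the more
# variable rate, while the two means agree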
Advances in Applied Probability | 1990
Ludolf E. Meester; J. George Shanthikumar
We consider a tandem queueing system with m stages and finite intermediate buffer storage spaces. Each stage has a single server and the service times are independent and exponentially distributed. There is an unlimited supply of customers in front of the first stage. For this system we show that the number of customers departing from each of the m stages during the time interval [0, t], for any t ≥ 0, is strongly stochastically increasing and concave in the buffer storage capacities. Consequently, the throughput of this tandem queueing system is an increasing and concave function of the buffer storage capacities.
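The monotonicity and concavity of the throughput can be checked numerically on the smallest instance of such a line: a saturated two-stage exponential system with an intermediate buffer of size b, modelled as a birth-death chain on the number of items at the second stage. A sketch, with arbitrary example service rates and the simplest blocking convention:

import numpy as np

# Two-stage saturated line: stage 1 (rate mu1) always has input and is
# blocked only when the downstream buffer plus server slot is full;
# stage 2 (rate mu2) serves whenever it holds at least one item.  The
# state is the number of items at stage 2, a birth-death chain on
# {0, ..., b+1}; throughput is the steady-state departure rate mu2*P(busy).
def throughput(mu1, mu2, b):
    K = b + 1                                 # buffer spaces + server slot
    Q = np.zeros((K + 1, K + 1))
    for state in range(K + 1):
        if state < K:                         # stage 1 not blocked
            Q[state, state + 1] = mu1
        if state > 0:                         # stage 2 busy
            Q[state, state - 1] = mu2
        Q[state, state] = -Q[state].sum()
    A = np.vstack([Q.T, np.ones(K + 1)])      # solve pi Q = 0, sum(pi) = 1
    rhs = np.append(np.zeros(K + 1), 1.0)
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return mu2 * (1.0 - pi[0])

th = np.array([throughput(1.0, 1.2, b) for b in range(8)])
print(np.round(th, 4))           # increasing in the buffer size ...
print(np.round(np.diff(th), 4))  # ... with decreasing increments (concave)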
Mathematics of Operations Research | 1999
Ludolf E. Meester; J. George Shanthikumar
This paper presents a general theory of stochastic convexity. The notions of stochastic convexity formulated by Shaked and Shanthikumar (1988a, 1988b, 1990) are defined for general partially ordered spaces. All of the closure properties of the one-dimensional real theory are proved to be true in this general framework as well, and results concerning the temporal convexity of Markov chains are sharpened. Many proofs are based on new ideas, some of which also provide insightful alternatives for proofs in Shaked and Shanthikumar (1990). The general theory encompasses the largely one-dimensional stochastic convexity theory as known from these papers, and at the same time permits treatment of multivariate multiparameter families as well as more general random objects. Among others, it applies to real vector spaces and yields a theory of stochastic convexity for random vectors and stochastic processes. We illustrate this new scope with examples and applications from queueing theory, coverage processes, reliability and branching processes. We show that the virtual waiting time process of an NHPP-driven ·/G/1 queue is stochastically convex in the arrival intensity function, which explains the known adverse effect of fluctuating arrival rates; that the expected size of an i.i.d. union of random sets grows concavely; and that the expected utility of repairable items under imperfect repair policies is increasing and convex in the probabilities of successful repair.
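For reference, one of the one-dimensional notions that this general theory extends (recalled here from the standard Shaked–Shanthikumar framework as an assumed reference point; the exact hierarchy of definitions used in the paper may differ) is stochastic increasing convexity:

\[
  \{X(\theta) : \theta \in \Theta\} \ \text{is SICX} \iff
  \theta \;\mapsto\; \mathbb{E}\,\varphi\bigl(X(\theta)\bigr)
  \ \text{is increasing and convex on } \Theta
  \ \text{for every increasing convex } \varphi,
\]

with Θ a convex subset of the real line. The paper replaces the real line by a general partially ordered space, so that X(θ) may be a random vector or a stochastic process.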
Reliability Engineering & System Safety | 2011
M. Rajabalinejad; Ludolf E. Meester; P.H.A.J.M. van Gelder; J.K. Vrijling
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account the monotonicity that is widely present in such models. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which are updated dynamically in a coupled Monte Carlo simulation, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with a relative error smaller than 5%. At higher accuracy levels this factor increases, though the effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
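The core idea, exploiting monotonicity so that many samples can be classified without evaluating the model, can be sketched in a few lines of Python. This is only a minimal illustration with a made-up, trivially cheap limit-state function; the actual DB method additionally maintains explicit upper and lower bounds on the failure probability and updates them dynamically during the coupled simulation:

import numpy as np

# Minimal sketch of Monte Carlo with dynamic use of monotonicity,
# assuming a limit-state function g that is non-increasing in every
# input (failure when g(x) <= 0).  The toy g and the input model below
# are hypothetical and stand in for an expensive structural model.
def g(x):
    return 6.0 - x.sum()          # capacity minus total load

rng = np.random.default_rng(0)
n, dim = 5000, 2
samples = rng.exponential(1.0, size=(n, dim))

known_fail, known_safe = [], []   # evaluated failure / safe points
fails = evals = 0

for x in samples:
    if any((x >= f).all() for f in known_fail):
        fails += 1                # dominates a failed point -> must fail
    elif any((x <= s).all() for s in known_safe):
        pass                      # dominated by a safe point -> must be safe
    else:
        evals += 1                # only now is the expensive model called
        if g(x) <= 0:
            fails += 1
            known_fail.append(x)
        else:
            known_safe.append(x)

print(f"failure probability estimate: {fails / n:.4f}")
print(f"model evaluations: {evals} instead of {n}")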
Transplant International | 2002
Johan De Meester; Marijke Bogers; Hilde de Winter; J. Smits; Ludolf E. Meester; Michel Dekking; Freerk A. Lootsma; G. G. Persijn; Ferdinand Mühlbacher
ABO blood group matching policy between donor and recipient is a key element of organ allocation. Unequal distribution of the ABO blood groups in the population can lead to inequities in the distribution of organs to potential recipients. Furthermore, High Urgency (HU) liver transplant candidates might compromise the chances of transplantation for elective patients. To compare the influence of the various ABO blood group matching policies on the transplantation rate of HU patients and on the subsequent donor liver availability for elective patients, a simulation study was undertaken. The study shows that in the Eurotransplant liver allocation program, a restricted ABO-compatible matching policy for HU liver patients offers the highest probability of acquiring a liver transplant, for both High Urgency and elective patients, irrespective of their ABO blood group. A simulation study once again proved to be an elegant tool for objectively analysing various options in a complex organ allocation algorithm.
Concurrency and Computation: Practice and Experience | 2013
Michel Meulpolder; Ludolf E. Meester; Dick H. J. Epema
As peer-to-peer (P2P) file-sharing systems revolve around cooperation, the design of upload incentives has been one of the most important topics in P2P research for more than a decade. Several deployed systems, such as private BitTorrent communities, successfully manage to foster cooperation by banning peers when their sharing ratio becomes too low. Interestingly, recent measurements have shown that such systems tend to have an oversupply instead of the undersupply of bandwidth that designers have been obsessed with since the dawn of P2P. In such systems, the ‘selfish peer’ problem is finally solved, but a new problem has arisen: because peers have to keep up their sharing ratios, they now have to compete to upload. In this paper, we explore this new problem and show how even highly cooperative peers might in the end not survive the upload competition. On the basis of recent measurements of over half a million peers in private P2P communities, we propose and analyze several algorithms for uploader selection under oversupply. Our algorithms enable sustained sharing ratio enforcement and are easy to implement in both existing and new systems. Overall, we offer an important design consideration for the new generation of P2P systems in which selfishness is no longer an issue.
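To make the problem concrete, one plausible uploader-selection rule under oversupply (a purely hypothetical illustration, not necessarily one of the algorithms analysed in the paper) is to give upload slots to the candidates whose sharing ratio is lowest, so that ratio-starved peers get a chance to stay above the community's threshold:

# Hypothetical illustration: a downloader that needs k uploaders picks,
# among all willing candidates, those with the lowest sharing ratio.
# Peer records and numbers below are made up.
def select_uploaders(candidates, k):
    """candidates: list of (peer_id, bytes_uploaded, bytes_downloaded)."""
    def sharing_ratio(peer):
        _, up, down = peer
        return up / down if down > 0 else float("inf")
    return [peer[0] for peer in sorted(candidates, key=sharing_ratio)[:k]]

peers = [("a", 10e9, 40e9), ("b", 50e9, 20e9),
         ("c", 5e9, 30e9), ("d", 80e9, 60e9)]
print(select_uploaders(peers, 2))   # ['c', 'a'], the ratio-starved peers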
Stochastic Processes and their Applications | 1996
Gerard Hooghiemstra; Ludolf E. Meester
We consider extremal properties of Markov chains. Rootzen (1988) gives conditions for stationary, regenerative sequences so that the normalized process of level exceedances converges in distribution to a compound Poisson process. He also provides expressions for the extremal index and the compounding probabilities; in general it is not easy to evaluate these. We show how in a number of instances Markov chains can be coupled with two random walks which, in terms of extremal behaviour, bound the chain from above and below. Using a limiting argument it is shown that the lower bound converges to the upper one, yielding the extremal index and the compounding probabilities of the Markov chain. An FFT algorithm by Grubel (1991) for the stationary distribution of a G/G/1 queue is adapted for the extremal index; it yields approximate, but very accurate results. Compounding probabilities are calculated explicitly in a similar fashion. The technique is applied to the G/G/1 queue, G/M/c queues and ARCH processes, whose extremal behaviour de Haan et al. (1989) characterized using simulation.
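As an empirical point of comparison (this is a standard blocks estimator applied to simulated data, not the random-walk coupling or the adapted FFT algorithm of the paper), the extremal index of an ARCH(1) process can be estimated as follows; the ARCH parameters are arbitrary example values:

import numpy as np

# Simulate an ARCH(1) process X_t = sqrt(a0 + a1*X_{t-1}^2) * Z_t and
# apply the blocks estimator of the extremal index: the number of
# blocks containing an exceedance divided by the total number of
# exceedances of a high threshold.
rng = np.random.default_rng(1)
n, a0, a1 = 200_000, 1.0, 0.7
z = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = np.sqrt(a0 + a1 * x[t - 1] ** 2) * z[t]

u = np.quantile(x, 0.995)              # high threshold
r = 200                                # block length
k = n // r
blocks = x[: k * r].reshape(k, r)
theta_hat = (blocks > u).any(axis=1).sum() / (x[: k * r] > u).sum()
print(f"blocks estimate of the extremal index: {theta_hat:.2f}")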
Archive | 2000
F. M. Dekking; S. De Graaf; Ludolf E. Meester
The external nodes of a binary search tree are of two types: arm nodes, whose parents have degree 2, and foot nodes, whose parents have degree 1. We study the positioning of these two types on the tree. We prove that the conditional distribution of the insertion depth of a key, given that it is inserted in a foot node, equals the conditional distribution given that it is inserted in an arm node, shifted by 1. We further prove that the normalized path length of the arm nodes converges almost surely to 1/3 times the limit of the normalized path length of all external nodes.
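A small sketch of the objects involved, assuming (as the only two possibilities for the parent of an external node) that "arm" nodes are the two empty positions below a leaf of the search tree and "foot" nodes are the single empty position below a node with exactly one child; whether this matches the paper's degree convention exactly is an assumption here. Keys are inserted in random order, i.e. the random permutation model:

import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard (unbalanced) binary search tree insertion."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def external_depths(root):
    """Depths of external positions, split by parent type (arm/foot)."""
    arm, foot = [], []
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        children = [c for c in (node.left, node.right) if c is not None]
        missing = 2 - len(children)           # number of external children
        if missing == 2:                      # parent is a leaf -> arm nodes
            arm.extend([depth + 1, depth + 1])
        elif missing == 1:                    # one-child parent -> foot node
            foot.append(depth + 1)
        for child in children:
            stack.append((child, depth + 1))
    return arm, foot

random.seed(0)
keys = random.sample(range(100_000), 2000)    # random permutation model
root = None
for key in keys:
    root = insert(root, key)

arm, foot = external_depths(root)
print(len(arm), "arm nodes, total path length", sum(arm))
print(len(foot), "foot nodes, total path length", sum(foot))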
Advances in Applied Probability | 2003
F. M. Dekking; Ludolf E. Meester
This paper studies path lengths in random binary search trees under the random permutation model. It is known that the total path length, when properly normalized, converges almost surely to a nondegenerate random variable Z. The limit distribution is commonly referred to as the ‘quicksort distribution’. For the class 𝒜_m of finite binary trees with at most m nodes we partition the external nodes of the binary search tree according to the largest tree that each external node belongs to. Thus, the external path length is divided into parts, each part associated with a tree in 𝒜_m. We show that the vector of these path lengths, after normalization, converges almost surely to a constant vector times Z.
Journal of Derivatives | 2013
Jasper Anderluh; Ludolf E. Meester
Monte Carlo simulation is generally required when a derivative's payoff is path dependent. For many such instruments, the payoff depends on whether the price of the underlying reaches a stopping point before option expiration, such as the default time for a credit product, or a knock-out barrier. For more complicated products like “Parisian” options, the issue is not just penetrating a fixed price barrier, but how long the asset price stays beyond it. In this article, the authors show how valuation of these instruments can be sped up, sometimes by a remarkable amount, by modeling the (sequence of) hitting times and filling in the in-between prices as needed. An important tool in doing this is the authors' new technique for simulating paths backward from a future barrier crossing. One additional advantage of their method is that it is possible to use a broad range of simpler derivatives as control variates in the simulation. They provide pseudocode for the procedure in an Appendix.
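For context, a plain Monte Carlo pricer for a simple barrier product already shows where hitting-time information enters: on a coarse grid the barrier can be crossed between grid points, and the standard Brownian-bridge crossing probability accounts for that. The sketch below uses this textbook refinement under a geometric Brownian motion model with made-up parameters; it is not the authors' backward-simulation algorithm:

import numpy as np

# Down-and-out call under geometric Brownian motion, priced by Monte
# Carlo on a coarse time grid with a Brownian-bridge correction for
# barrier crossings between grid points.  All parameter values are
# illustrative.
rng = np.random.default_rng(42)
S0, K, B, r, sigma, T = 100.0, 100.0, 90.0, 0.03, 0.25, 1.0
n_paths, n_steps = 100_000, 50
dt = T / n_steps
log_B = np.log(B)

log_S = np.full(n_paths, np.log(S0))
alive = np.ones(n_paths, dtype=bool)

for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    log_S_next = log_S + (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    alive &= log_S_next > log_B               # knocked out at a grid point
    # probability that the bridge between grid points dipped below the barrier
    a = np.maximum(log_S - log_B, 0.0)
    b = np.maximum(log_S_next - log_B, 0.0)
    p_cross = np.exp(-2.0 * a * b / (sigma**2 * dt))
    alive &= rng.random(n_paths) > p_cross
    log_S = log_S_next

payoff = np.where(alive, np.maximum(np.exp(log_S) - K, 0.0), 0.0)
print(f"down-and-out call estimate: {np.exp(-r * T) * payoff.mean():.3f}")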