George S. Fishman
University of North Carolina at Chapel Hill
Publications
Featured research published by George S. Fishman.
Archive | 2001
George S. Fishman
This book describes the fundamentals of discrete-event simulation from the perspective of highly interactive PC and workstation environments. It focuses on modeling, programming, input-data preparation, output-data analysis, and presentation of results. It includes a detailed account of alternative modeling and programming methods and a description of the principal concepts of delay systems, the generic setting for most discrete-event simulations. Examples carried throughout the book illustrate how different concepts apply. The book is intended for an introductory course on discrete-event simulation for graduate students and advanced undergraduates in the mathematical and engineering sciences, particularly operations research, industrial engineering, operations management (business), computer science, telecommunications engineering, and transportation engineering. It will also serve as a useful reference for professionals in these fields who wish to broaden their knowledge of discrete-event simulation.
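As a flavor of the delay systems the book treats, the sketch below gives a minimal next-event simulation of a single-server queue in C. It is an illustrative sketch only, not code from the book; the arrival rate, service rate, and run length are arbitrary assumptions.

/* Minimal next-event simulation of a single-server (M/M/1) queue.
   Illustrative only; arrival rate, service rate, and run length are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define QUEUE_CAP 100000

static double expo(double rate) {            /* exponential variate via inversion */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return -log(u) / rate;
}

int main(void) {
    const double lambda = 0.8, mu = 1.0;     /* arrival and service rates (assumed) */
    const long max_departures = 100000;

    double clock = 0.0, next_arrival = expo(lambda), next_departure = INFINITY;
    double arrival_time[QUEUE_CAP];          /* FIFO queue of arrival epochs */
    long head = 0, tail = 0, in_system = 0, served = 0;
    double total_delay = 0.0;

    while (served < max_departures) {
        if (next_arrival <= next_departure) {             /* arrival event */
            clock = next_arrival;
            arrival_time[tail++ % QUEUE_CAP] = clock;
            if (in_system++ == 0)                         /* server idle: start service */
                next_departure = clock + expo(mu);
            next_arrival = clock + expo(lambda);
        } else {                                          /* departure event */
            clock = next_departure;
            total_delay += clock - arrival_time[head++ % QUEUE_CAP];
            served++;
            next_departure = (--in_system > 0) ? clock + expo(mu) : INFINITY;
        }
    }
    printf("average time in system: %.4f\n", total_delay / served);
    return 0;
}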
Siam Journal on Scientific and Statistical Computing | 1986
George S. Fishman; Louis R. Moore
This paper presents the results of an exhaustive search to find optimal full-period multipliers for the multiplicative congruential random number generator with prime modulus 2^{31} - 1. Here a multiplier is said to be optimal if the distance between adjacent parallel hyperplanes on which k-tuples lie does not exceed the minimal achievable distance by more than 25 percent for k = 2, \cdots, 6. This criterion is considerably more stringent than prevailing standards of acceptability and leads to a total of only 414 multipliers among the more than 534 million candidate multipliers. Section 1 reviews the basic properties of linear congruential generators and Section 2 describes worst-case performance measures. These include the maximal distance between adjacent parallel hyperplanes, the minimal number of parallel hyperplanes, the minimal distance between k-tuples, the lattice ratio and the discrepancy. Section 3 presents the five best multipliers and compares their performances with those of three commonly employed multipliers for all measures but the lattice test. Comparisons using packing measures in the space of k-tuples and in the dual space are also made. Section 4 presents the results of applying a battery of statistical tests to the best five to detect local departures from randomness. None were found. The Appendix contains a list of all optimal multipliers.
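For context, the sketch below shows the family of generators under study: a multiplicative congruential generator X_{n+1} = A X_n mod (2^{31} - 1), coded in C with Schrage's factorization so the arithmetic fits in 32-bit signed integers. The multiplier 16807 is chosen purely for illustration and is not claimed to be among the paper's optimal multipliers.

/* Multiplicative congruential generator X_{n+1} = A * X_n mod M, M = 2^31 - 1,
   using Schrage's factorization to avoid overflow in 32-bit signed arithmetic.
   The multiplier 16807 is an illustrative choice only. */
#include <stdio.h>

#define M 2147483647L   /* 2^31 - 1, a prime */
#define A 16807L        /* a full-period multiplier; illustrative choice */
#define Q (M / A)       /* 127773 */
#define R (M % A)       /* 2836   */

static long seed = 1;   /* any value in 1..M-1 */

long mcg_next(void) {
    long hi = seed / Q, lo = seed % Q;
    long t = A * lo - R * hi;
    seed = (t > 0) ? t : t + M;
    return seed;
}

double mcg_uniform(void) {          /* uniform variate on (0,1) */
    return mcg_next() / (double)M;
}

int main(void) {
    for (int i = 0; i < 5; i++)
        printf("%.8f\n", mcg_uniform());
    return 0;
}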
Operations Research | 1972
George S. Fishman
Operations Research | 1986
George S. Fishman
For an undirected network G = (V, E) whose arcs are subject to random failure, we present a relatively complete and comprehensive description of a general class of Monte Carlo sampling plans for estimating g = g(s, T), the probability that a specified node s is connected to all nodes in a node set T. We also provide procedures for implementing these plans. Each plan uses known lower and upper bounds [B, A] on g to produce an estimator of g whose variance, (A - g)(g - B)/K on K independent replications, is smaller than that obtained for crude Monte Carlo sampling (B = 0, A = 1). We describe worst-case bounds on sample sizes K, in terms of B and A, for meeting absolute and relative error criteria. We also give the worst-case bound on the amount of variance reduction that can be expected when compared with crude Monte Carlo sampling. Two plans are studied in detail for the case T = {t}. An example illustrates the variance reductions achievable with these plans. We also show how to assess the credibility that a specified error criterion for g is met as the Monte Carlo experiment progresses, and show how confidence intervals can be computed for g. Lastly, we summarize the steps needed to implement the proposed technique.
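A minimal point of comparison, assuming a small illustrative network: the C sketch below implements the crude Monte Carlo baseline (the B = 0, A = 1 case) for two-terminal connectedness, not the bounds-based plans the paper develops. The 6-node network, arc reliability, and replication count are arbitrary assumptions, not data from the paper.

/* Crude Monte Carlo estimate of s-t connectedness for a network with
   independently failing arcs -- the B = 0, A = 1 baseline the paper improves on.
   The 6-node network, arc reliability, and replication count are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NODES 6
#define ARCS  9

static const int tail_node[ARCS] = {0, 0, 1, 1, 2, 2, 3, 3, 4};
static const int head_node[ARCS] = {1, 2, 2, 3, 3, 4, 4, 5, 5};
static const double reliability  = 0.9;   /* each arc operates with this probability */

/* union-find to test whether s and t fall in the same component */
static int parent[NODES];
static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }
static void unite(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    const int s = 0, t = NODES - 1;
    const long K = 1000000;               /* independent replications */
    long connected = 0;

    srand(12345);
    for (long k = 0; k < K; k++) {
        for (int v = 0; v < NODES; v++) parent[v] = v;
        for (int e = 0; e < ARCS; e++)
            if ((double)rand() / RAND_MAX < reliability)   /* arc survives */
                unite(tail_node[e], head_node[e]);
        if (find(s) == find(t)) connected++;
    }
    double ghat = (double)connected / K;
    double se = sqrt(ghat * (1.0 - ghat) / K);   /* crude MC standard error */
    printf("estimate of g = %.5f  (std. error %.5f)\n", ghat, se);
    return 0;
}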
Simulation | 1968
George S. Fishman; Philip J. Kiviat
IEEE Transactions on Reliability | 1986
George S. Fishman
This paper describes and compares the performance of four alternative Monte Carlo sampling plans for estimating the probability that two nodes, s and t, are connected in an undirected network whose arcs fail randomly and independently. Models of this type are commonly used when computing the reliability of a system of randomly failing components. The first method, dagger sampling, relies for its advantage on inducing negative correlation between the outcomes of the replications in the sample. The second method, sequential destruction/construction, exploits permutation properties of arc failures and successes. The third method uses easily determined bounds on the reliability probability to gain an advantage. The fourth approach derives its advantage from enumerating all failure sets of the network. An example based on a 30-arc, 20-node network illustrates the variance-reducing features and the time and space needs of each sampling plan. Dagger sampling and sequential construction offer limited benefits when compared to the bounds and failure-sets methods. The failure-sets method performs best on the example for small failure probabilities. However, in general it offers no guarantee of a smaller variance than crude Monte Carlo and requires substantially more memory than the other methods. Moreover, this memory requirement can grow rapidly as the size of the network increases. By contrast, the bounds method guarantees a smaller variance than crude Monte Carlo sampling and has modest memory requirements.
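As a reference point for what all four sampling plans estimate, the sketch below computes the exact two-terminal reliability of the same illustrative 6-node network by complete state enumeration. This brute-force calculation is not one of the four plans compared in the paper and is feasible only for very small networks, which is precisely why Monte Carlo methods matter.

/* Exact two-terminal reliability of a small network by complete state
   enumeration.  A brute-force reference calculation, not one of the four
   sampling plans compared in the paper; feasible only for very small networks. */
#include <stdio.h>

#define NODES 6
#define ARCS  9

static const int tail_node[ARCS] = {0, 0, 1, 1, 2, 2, 3, 3, 4};
static const int head_node[ARCS] = {1, 2, 2, 3, 3, 4, 4, 5, 5};
static const double reliability  = 0.9;      /* illustrative arc reliability */

static int parent[NODES];
static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

int main(void) {
    const int s = 0, t = NODES - 1;
    double g = 0.0;

    for (unsigned long state = 0; state < (1UL << ARCS); state++) {
        double prob = 1.0;
        for (int v = 0; v < NODES; v++) parent[v] = v;
        for (int e = 0; e < ARCS; e++) {
            if (state & (1UL << e)) {                    /* arc e operates */
                prob *= reliability;
                parent[find(tail_node[e])] = find(head_node[e]);
            } else {
                prob *= 1.0 - reliability;               /* arc e fails */
            }
        }
        if (find(s) == find(t)) g += prob;               /* add P(state) if s-t connected */
    }
    printf("exact two-terminal reliability g = %.6f\n", g);
    return 0;
}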
Communications of The ACM | 1983
George S. Fishman; Baosheng D. Huang
Journal of the American Statistical Association | 1982
George S. Fishman; Louis R. Moore
This article presents the results of empirically testing 16 alternative multipliers for a multiplicative congruential random number generator with modulus 2^{31} - 1. Two of the multipliers are in common use, six are the best of 50 candidate multipliers according to the theoretical spectral and lattice tests, and eight are the worst, with regard to 2-tuples, among the 50. The test results raise serious doubts about several of the multipliers, including one in common use. The tests were also applied to a well-known theoretically poor generator, RANDU, and gave strong empirical evidence of its inadequacy. Since comparison of the results for the first eight multipliers with those for the eight worst multipliers failed to show any apparent gross differences, one may want to relax the currently employed stringent criteria for acceptable performance on the lattice and spectral tests.
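A single simple instance of the kind of empirical test such a study applies, assuming an illustrative multiplier, bin count, and sample size: the C sketch below runs a chi-square goodness-of-fit test for one-dimensional uniformity of a generator's output. The paper's battery of tests is far broader.

/* A single, simple empirical test of the sort applied in such studies:
   a chi-square goodness-of-fit test for one-dimensional uniformity of a
   multiplicative congruential generator.  The multiplier, sample size, and
   number of bins are illustrative assumptions. */
#include <stdio.h>

#define M 2147483647L
#define A 16807L            /* illustrative multiplier only */
#define Q (M / A)
#define R (M % A)
#define BINS 100
#define N    1000000L

static long seed = 1;
static double u(void) {                      /* next uniform (0,1) variate */
    long hi = seed / Q, lo = seed % Q;
    long t = A * lo - R * hi;
    seed = (t > 0) ? t : t + M;
    return seed / (double)M;
}

int main(void) {
    long count[BINS] = {0};
    for (long i = 0; i < N; i++) {
        int b = (int)(u() * BINS);
        if (b == BINS) b = BINS - 1;         /* guard against rounding to 1.0 */
        count[b]++;
    }
    double expected = (double)N / BINS, chi2 = 0.0;
    for (int b = 0; b < BINS; b++) {
        double d = count[b] - expected;
        chi2 += d * d / expected;
    }
    /* Under the null hypothesis of uniformity, chi2 is approximately
       chi-square distributed with BINS - 1 = 99 degrees of freedom. */
    printf("chi-square statistic = %.2f on %d degrees of freedom\n", chi2, BINS - 1);
    return 0;
}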
Communications of The ACM | 1976
George S. Fishman
Informs Journal on Computing | 1997
George S. Fishman; L. Stephen Yarberry
This article introduces the LBATCH and ABATCH rules for applying the batch means method to analyze output of Monte Carlo and, in particular, discrete-event simulation experiments. Sufficient conditions are given for these rules to produce strongly consistent estimators of the variance of the sample mean and asymptotically valid confidence intervals for the mean. The article studies the performance of these rules and two others suggested in the literature, comparing confidence interval coverage rates and mean half-lengths. The article also gives detailed algorithms for implementing the rules in O(t) time with O(\log_2 t) space. FORTRAN, C, and SIMSCRIPT II.5 implementations of the procedures are available by anonymous file transfer protocol.
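The C sketch below illustrates only the underlying batch means computation with a fixed number of batches, not the adaptive LBATCH and ABATCH rules themselves; the correlated data stream, batch count, and t quantile are illustrative assumptions.

/* Batch means with a fixed number of batches -- the basic computation that the
   LBATCH and ABATCH rules build on, not the rules themselves.  The AR(1)-style
   data stream, batch count, and t quantile are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define T 100000L          /* number of observations */
#define B 20               /* number of batches      */

static double next_obs(double *state) {      /* correlated stand-in for simulation output */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    *state = 0.9 * (*state) + (u - 0.5);     /* autoregressive recursion, mean 0 */
    return *state;
}

int main(void) {
    const long m = T / B;                    /* batch size */
    double state = 0.0, batch_mean[B], grand_mean = 0.0;

    srand(2024);
    for (int b = 0; b < B; b++) {
        double sum = 0.0;
        for (long i = 0; i < m; i++) sum += next_obs(&state);
        batch_mean[b] = sum / m;
        grand_mean += batch_mean[b] / B;
    }

    double s2 = 0.0;                         /* sample variance of the batch means */
    for (int b = 0; b < B; b++) {
        double d = batch_mean[b] - grand_mean;
        s2 += d * d / (B - 1);
    }
    double half = 2.093 * sqrt(s2 / B);      /* Student-t 0.975 quantile, 19 d.f. */
    printf("sample mean %.5f, 95%% c.i. half-length %.5f\n", grand_mean, half);
    return 0;
}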