Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bruce W. Schmeiser is active.

Publications


Featured research published by Bruce W. Schmeiser.


Operations Research | 1982

Batch Size Effects in the Analysis of Simulation Output

Bruce W. Schmeiser

Batching is a commonly used method for calculating confidence intervals on the mean of a sequence of correlated observations arising from a simulation experiment. Several recent papers have considered the effect of using batch sizes too small to satisfy assumptions of normality and/or independence, and the resulting incorrect probabilities of the confidence interval covering the mean. This paper quantifies the effects of using batch sizes larger than necessary to satisfy normality and independence assumptions. These effects include (1) correct probability of covering the mean, (2) an increase in expected half length, (3) an increase in the standard deviation and coefficient of variation of the half length, and (4) an increase in the probability of covering points not equal to the mean. For any sample size and independent and normal batch means, the results are (1) the effects of fewer than 10 batches are large and the effects of more than 30 batches are small, and (2) additional batches have lesser effects on ...
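The trade-off described above can be illustrated with a minimal batch-means confidence interval. This is a generic sketch, not the paper's analysis; the function name and the z-based interval are illustrative choices.

```python
import math
import random

def batch_means_ci(data, num_batches, z=1.96):
    """Confidence interval for the mean via nonoverlapping batch means.

    Splits a (possibly correlated) sequence into contiguous batches,
    treats the batch means as approximately i.i.d. normal, and forms
    a z-based interval around the grand mean.
    """
    b = len(data) // num_batches                       # batch size
    means = [sum(data[i * b:(i + 1) * b]) / b for i in range(num_batches)]
    grand = sum(means) / num_batches
    var = sum((m - grand) ** 2 for m in means) / (num_batches - 1)
    half = z * math.sqrt(var / num_batches)
    return grand - half, grand + half

# 30 batches sits at the upper end of the 10-to-30 range the abstract
# identifies as the region beyond which extra batches change little.
random.seed(1)
lo, hi = batch_means_ci([random.gauss(5, 1) for _ in range(3000)], 30)
```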


IIE Transactions | 1977

A Versatile Four Parameter Family of Probability Distributions Suitable for Simulation

Bruce W. Schmeiser; Stuart Jay Deutsch

Abstract Methods are well-known for generating random values from many common statistical distributions. These common distributions are sometimes used in simulation studies due to the lack of convenient methods of generating random values from distributions having more arbitrary shapes. A family of distributions is presented here which assumes many shapes, including those of the exponential, Bernoulli and uniform distributions. Any given first four moments may be obtained through manipulation of four parameters. The inverse cdf exists in closed form, allowing straightforward generation of random values given a source of U(0,1) values. Properties of the distribution and methods of parameter determination are developed.
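The closed-form inverse cdf is what makes generation straightforward. Below is a generic inversion sketch; the Schmeiser-Deutsch inverse cdf itself is not reproduced, so the unit-rate exponential stands in as a distribution with a known closed-form inverse.

```python
import math
import random

def inverse_cdf_sample(inv_cdf, n, rng):
    """Inversion sampling: X = F^{-1}(U) with U ~ U(0,1).

    Works for any distribution whose inverse cdf is available in
    closed form, which is the property the abstract highlights.
    """
    return [inv_cdf(rng.random()) for _ in range(n)]

# Stand-in example: unit-rate exponential, F^{-1}(u) = -ln(1 - u).
rng = random.Random(0)
xs = inverse_cdf_sample(lambda u: -math.log(1.0 - u), 10000, rng)
```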


Communications of the ACM | 1988

Binomial random variate generation

Voratas Kachitvichyanukul; Bruce W. Schmeiser

Existing binomial random-variate generators are surveyed, and a new generator designed for moderate and large means is developed. The new algorithm, BTPE, has fixed memory requirements and is faster than other such algorithms, whether a single variate or many variates are needed.
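For contrast with BTPE, whose envelope construction is beyond a short sketch, here is the simple cdf-inversion generator that specialized algorithms improve on. Its expected cost grows with n*p, which is why fixed-cost generators matter for large means.

```python
import random

def binomial_inversion(n, p, rng):
    """Binomial variate by sequential search of the cdf (inversion).

    Uses the recurrence P(X = x) = P(X = x-1) * (n-x+1)/x * p/(1-p).
    Exact, but expected work grows with n*p, unlike BTPE's fixed-cost
    acceptance/rejection approach.
    """
    u = rng.random()
    prob = (1.0 - p) ** n          # P(X = 0)
    x, cdf = 0, prob
    while u > cdf and x < n:
        x += 1
        prob *= (n - x + 1) / x * (p / (1.0 - p))
        cdf += prob
    return x

rng = random.Random(42)
draws = [binomial_inversion(20, 0.3, rng) for _ in range(5000)]
```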


The American Statistician | 2000

Survival Distributions Satisfying Benford's Law

Lawrence M. Leemis; Bruce W. Schmeiser; Diane L. Evans

Abstract Hill stated that “An interesting open problem is to determine which common distributions (or mixtures thereof) satisfy Benford's law …”. This article quantifies compliance with Benford's law for several popular survival distributions. The traditional analysis of Benford's law considers its applicability to datasets. This article switches the emphasis to probability distributions that obey Benford's law.
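Benford's law assigns the leading significant digit d probability log10(1 + 1/d). A minimal numerical check follows; the log-uniform stand-in below satisfies the law exactly and is not one of the survival distributions the article analyzes.

```python
import math
import random

def benford_prob(d):
    """Benford's law: P(leading significant digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant decimal digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# A variable whose log10 is uniform over a whole number of decades
# obeys Benford's law exactly; compare empirical and exact frequencies.
rng = random.Random(0)
samples = [10 ** rng.uniform(0, 3) for _ in range(20000)]
freq1 = sum(1 for s in samples if leading_digit(s) == 1) / len(samples)
```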


International Symposium on Neural Networks | 1994

Optimal linear combinations of neural networks: an overview

Sherif Hashem; Bruce W. Schmeiser; Yuehwern Yih

Neural-network-based modeling often involves trying multiple networks with different architectures and/or training parameters in order to achieve acceptable model accuracy. Typically, one of the trained NNs is chosen as best, while the rest are discarded. Hashem and Schmeiser (1992) propose using optimal linear combinations of a number of trained neural networks instead of using a single best network. In this paper, we discuss and extend the idea of optimal linear combinations of neural networks. Optimal linear combinations are constructed by forming weighted sums of the corresponding outputs of the networks. The combination-weights are selected to minimize the mean squared error with respect to the distribution of random inputs. Combining the trained networks may help integrate the knowledge acquired by the component networks and thus improve model accuracy. We investigate some issues concerning the estimation of the optimal combination-weights and the role of the optimal linear combination in improving model accuracy for both well-trained and poorly trained component networks. Experimental results based on simulated data are included. For our examples, the model accuracy resulting from using estimated optimal linear combinations is better than that of the best trained network and that of the simple averaging of the outputs of the component networks.
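In its simplest form, choosing combination-weights to minimize mean squared error reduces to least squares on the component outputs. The helper below is an illustrative sketch under that simplification; the paper's estimators also treat constant terms and constrained variants.

```python
def mse_optimal_weights(preds, target):
    """Least-squares combination weights for component model outputs.

    Solves min_w sum_r (sum_j w_j * preds[j][r] - target[r])^2 via the
    normal equations (Y^T Y) w = Y^T t, using Gaussian elimination
    with partial pivoting.
    """
    k, n = len(preds), len(target)
    A = [[sum(preds[i][r] * preds[j][r] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(preds[i][r] * target[r] for r in range(n)) for i in range(k)]
    for c in range(k):                          # forward elimination
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for cc in range(c, k):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):              # back substitution
        w[r] = (b[r] - sum(A[r][cc] * w[cc] for cc in range(r + 1, k))) / A[r][r]
    return w

# Two components: one reproduces the target, one is constant; the
# least-squares weights put all mass on the informative component.
target = [1.0, 2.0, 3.0, 4.0]
preds = [[1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0]]
w = mse_optimal_weights(preds, target)
```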


Journal of Computational and Graphical Statistics | 1993

Performance of the Gibbs, Hit-and-Run, and Metropolis Samplers

Ming-Hui Chen; Bruce W. Schmeiser

Abstract We consider the performance of three Monte Carlo Markov-chain samplers: the Gibbs sampler, which cycles through coordinate directions; the Hit-and-Run (H&R) sampler; and the Metropolis sampler, which moves with a probability that is a ratio of likelihoods. We obtain several analytical results. We provide a sufficient condition for geometric convergence on a bounded region S for the H&R sampler. For a general region S, we review the Schervish and Carlin sufficient geometric convergence condition for the Gibbs sampler. We show that for a multivariate normal distribution this Gibbs sufficient condition holds and for a bivariate normal distribution the Gibbs marginal sample paths are each an AR(1) process, and we obtain the standard errors of sample means and sample variances, which we later use to verify empirical Monte Carlo results. We empirically compare the Gibbs and H&R samplers on bivariate normal examples. For zero correlation, the Gibbs sampler provid...
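For the bivariate normal case the abstract analyzes, the Gibbs sampler has normal full conditionals and each marginal sample path is AR(1) with coefficient rho squared. A minimal sketch of that sampler:

```python
import math
import random

def gibbs_bivariate_normal(rho, n, rng):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Full conditionals: X1 | X2 = x2 ~ N(rho * x2, 1 - rho^2), and
    symmetrically for X2.  Alternating these draws makes each marginal
    sample path an AR(1) process with coefficient rho**2.
    """
    x1 = x2 = 0.0
    s = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n):
        x1 = rng.gauss(rho * x2, s)
        x2 = rng.gauss(rho * x1, s)
        out.append((x1, x2))
    return out

draws = gibbs_bivariate_normal(0.9, 20000, random.Random(0))
```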


Operations Research | 1982

Bivariate Gamma Random Vectors

Bruce W. Schmeiser; Ram Lal

A seven-parameter family of bivariate probability distributions is developed which allows for any gamma marginal distributions, any associated correlation (positive or negative), and a range of regression curves. The form of the family, which relies on the reproducibility property of the gamma distribution, is motivated by the search for tractable parameter estimation, general dependency structure, and straightforward computer sampling for simulation modeling. A modification with closed-form parameter estimation, but less general dependency structure, is also given. Finally, the use of these distributions in the form of first order autoregressive time series is discussed.
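The reproducibility property mentioned above (sums of independent gammas with a common scale are again gamma) underlies the classic shared-component construction sketched below. This simple special case yields only positive correlation, whereas the Schmeiser-Lal family is considerably broader.

```python
import random

def bivariate_gamma_shared(a1, a2, a0, n, rng):
    """Correlated gamma pair via a shared component: X = G1 + G0 and
    Y = G2 + G0 with independent G_i ~ Gamma(a_i, 1).

    Marginals are Gamma(a1 + a0) and Gamma(a2 + a0); the correlation
    is a0 / sqrt((a1 + a0) * (a2 + a0)), always positive.
    """
    pairs = []
    for _ in range(n):
        g0 = rng.gammavariate(a0, 1.0)
        pairs.append((rng.gammavariate(a1, 1.0) + g0,
                      rng.gammavariate(a2, 1.0) + g0))
    return pairs

# Shapes (1, 1, 2): marginal means 3, correlation 2/3.
pairs = bivariate_gamma_shared(1.0, 1.0, 2.0, 20000, random.Random(0))
```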


Operations Research | 1993

Variance of the sample mean: properties and graphs of quadratic-form estimators

Wheyming Tina Song; Bruce W. Schmeiser

Many commonly used estimators of the variance of the sample mean from a covariance-stationary process can be written as quadratic forms. We study the class of quadratic-form estimators algebraically and graphically, including five specific types of estimators, some from the literature and some that are new. Finite and asymptotic bias, variance, and covariance are derived and examined, with emphasis on developing intuition and insight by interpreting these properties graphically. The graphs depict the nonoptimal statistical behavior of some of the simulation literature estimators such as nonoverlapping batch means, as well as the better behavior of estimators obtained by overlapping batches.
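The quadratic-form representation is easy to verify numerically: the nonoverlapping-batch-means estimator below equals an explicit x^T Q x with Q built from batch membership. This is a sketch of one member of the class; the paper studies the whole class, including overlapping-batch estimators.

```python
import math

def nbm_variance(data, b):
    """Nonoverlapping-batch-means estimator of Var(sample mean)."""
    k = len(data) // b
    n = k * b
    grand = sum(data[:n]) / n
    means = [sum(data[i * b:(i + 1) * b]) / b for i in range(k)]
    return sum((m - grand) ** 2 for m in means) / (k * (k - 1))

def nbm_quadratic_form(data, b):
    """Same estimator written explicitly as the quadratic form x^T Q x,
    with Q[i][j] = c * (1[same batch] / b^2 - 1 / (n*b)), c = 1/(k(k-1))."""
    k = len(data) // b
    n = k * b
    c = 1.0 / (k * (k - 1))
    total = 0.0
    for i in range(n):
        for j in range(n):
            q = (1.0 / b ** 2 if i // b == j // b else 0.0) - 1.0 / (n * b)
            total += c * q * data[i] * data[j]
    return total

data = [math.sin(0.7 * i) for i in range(60)]
```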


Iie Transactions | 2001

Stochastic root finding via retrospective approximation

Huifen Chen; Bruce W. Schmeiser

Given a user-provided Monte Carlo simulation procedure to estimate a function at any specified point, the stochastic root-finding problem is to find the unique argument value to provide a specified function value. To solve such problems, we introduce the family of Retrospective Approximation (RA) algorithms. RA solves, with decreasing error, a sequence of sample-path equations that are based on increasing Monte Carlo sample sizes. Two variations are developed: IRA, in which each sample-path equation is generated independently of the others, and DRA, in which each equation is obtained by appending new random variates to the previous equation. We prove that such algorithms converge with probability one to the desired solution as the number of iterations grows, discuss implementation issues to obtain good performance in practice without tuning algorithm parameters, provide experimental results for an illustrative application, and argue that IRA dominates DRA in terms of the generalized mean squared error.
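The RA loop (solve a sample-path equation, then enlarge the sample) can be sketched on a toy problem. This is illustrative only: the bisection inner solver, the root averaging, and the doubling schedule are simplifications of the paper's IRA algorithm.

```python
import random

def retrospective_approximation(sim, target, rng, iters=6, m0=50):
    """IRA-flavored sketch: at each iteration draw a fresh independent
    sample of size m_k, solve the resulting deterministic sample-path
    equation by bisection, then double the sample size.  The paper's
    algorithms add error control and weighted averaging of roots."""
    roots = []
    m = m0
    for _ in range(iters):
        noises = [rng.gauss(0.0, 1.0) for _ in range(m)]

        def gbar(x, ns=noises, mm=m):
            # Sample-path function: the sample is fixed, so this is
            # deterministic in x and can be handed to a root finder.
            return sum(sim(x, e) for e in ns) / mm - target

        lo, hi = -10.0, 10.0
        for _ in range(60):                  # bisection on the sample path
            mid = (lo + hi) / 2.0
            if gbar(lo) * gbar(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append((lo + hi) / 2.0)
        m *= 2                               # larger sample, smaller error
    half = len(roots) // 2
    return sum(roots[half:]) / (len(roots) - half)

# Toy problem: g(x) = E[x + noise] = 2, so the root is x* = 2.
xstar = retrospective_approximation(lambda x, e: x + e, 2.0, random.Random(0))
```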


Journal of the American Statistical Association | 1980

Squeeze Methods for Generating Gamma Variates

Bruce W. Schmeiser; Ram Lal

Abstract Two algorithms are given for generating gamma distributed random variables. The algorithms, which are valid when the shape parameter is greater than one, use a uniform majorizing function for the body of the distribution and exponential majorizing functions for the tails. The algorithms are self-contained, requiring only U (0, 1) variates. Comparisons are made to four competitive algorithms in terms of marginal execution times, initialization time, and memory requirements. Marginal execution times are less than those of existing methods for all values of the shape parameter, as implemented here in FORTRAN.
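The squeeze idea is to wrap the exact acceptance test in a cheap bound so most accepts avoid the expensive function evaluation. The sketch below applies it to the standard normal with a Laplace envelope; it is not Schmeiser and Lal's gamma algorithm, whose majorizing functions are as described in the abstract.

```python
import math
import random

def normal_via_squeeze(rng):
    """Standard normal by rejection from a Laplace(0, 1) envelope, with
    a cheap squeeze pretest that accepts without calling exp() when
    possible.  Acceptance probability given x is exp(-(|x| - 1)^2 / 2).
    """
    while True:
        u = rng.random()
        # Laplace(0, 1) variate by inversion.
        x = -math.log(2.0 * (1.0 - u)) if u > 0.5 else math.log(2.0 * u)
        v = rng.random()
        t = (abs(x) - 1.0) ** 2 / 2.0
        if v <= 1.0 - t:           # squeeze: exp(-t) >= 1 - t, cheap accept
            return x
        if v <= math.exp(-t):      # exact (expensive) test
            return x

rng = random.Random(7)
xs = [normal_via_squeeze(rng) for _ in range(20000)]
```

Because exp(-t) >= 1 - t, any variate passing the pretest would also pass the exact test, so the squeeze changes cost but not the sampled distribution.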

Collaboration


Dive into Bruce W. Schmeiser's collaborations.

Top Co-Authors

Wheyming Tina Song (National Tsing Hua University)
Huifen Chen (Chung Yuan Christian University)
James R. Wilson (North Carolina State University)
David Goldsman (Georgia Institute of Technology)
Stuart Jay Deutsch (Georgia Institute of Technology)