E. Jack Chen
University of Cincinnati
Publications
Featured research published by E. Jack Chen.
European Journal of Operational Research | 2005
E. Jack Chen; W. David Kelton
Two-stage selection procedures have been widely studied and applied to determine appropriate sample sizes for selecting the best of k designs. However, standard “indifference-zone” procedures are derived under a statistically conservative least-favorable-configuration assumption. The enhanced two-stage selection (ETSS) procedure takes into account not only the variance of samples but also the differences between sample means when determining the sample sizes. This paper discusses an implementation of sequential ranking-and-selection procedures based on the ETSS procedure, to avoid relying too heavily on information obtained in just one stage. We show that the ratios of sample sizes needed to maximize the probability of correct selection are approximately maintained at all iterations. An experimental performance evaluation demonstrates the efficiency of our sequential procedures.
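The allocation idea lends itself to a short sketch. This is a minimal illustration, assuming a smaller-is-better objective and a supplied procedure constant h (e.g., from selection-procedure tables); the function name and the exact distance rule below are illustrative assumptions, not the paper's published formulas.

```python
import numpy as np

def etss_sample_sizes(first_stage, delta, h):
    """Sketch of an ETSS-style allocation (illustrative, not the paper's exact rule).

    first_stage : list of 1-D arrays, first-stage observations per design
    delta       : indifference-zone parameter (smallest difference that matters)
    h           : procedure constant (assumed supplied from the procedure's tables)
    """
    means = np.array([x.mean() for x in first_stage])
    stds = np.array([x.std(ddof=1) for x in first_stage])
    n0 = np.array([len(x) for x in first_stage])
    best = means.min()                                  # assume smaller is better
    # Unlike a pure indifference-zone rule, the distance of each design's sample
    # mean from the current best enters the allocation: designs that already look
    # clearly inferior receive fewer additional observations.
    d = np.maximum(delta, means - best)
    return np.maximum(n0, np.ceil((h * stds / d) ** 2).astype(int))
```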
Computers & Operations Research | 2008
E. Jack Chen; W. David Kelton
This paper discusses a unified approach for estimating, via a histogram, the steady-state distribution of a stochastic process observed by simulation. The quasi-independent (QI) procedure increases the simulation run length progressively until a certain number of essentially independent and identically distributed samples are obtained. It is known that order-statistics quantile estimators are asymptotically unbiased when the output sequences satisfy certain conditions. We compute sample quantiles at certain grid points and use Lagrange interpolation to estimate any p-quantile. Our quantile estimators satisfy a proportional-precision requirement in the first phase, and a relative- or absolute-precision requirement in the second phase. An experimental performance evaluation demonstrates the validity of using the QI procedure to estimate quantiles and construct a histogram to estimate the steady-state distribution.
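A minimal sketch of the grid-plus-interpolation step, assuming SciPy is available; the grid spacing and the polynomial degree are illustrative choices here, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import lagrange

def interpolated_quantile(sample, p, grid=None):
    """Estimate any p-quantile by Lagrange interpolation between sample
    quantiles computed at a few grid probabilities (illustrative sketch)."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 10)      # probability grid points (assumed)
    q_grid = np.quantile(sample, grid)          # order-statistic estimates at the grid
    idx = np.argsort(np.abs(grid - p))[:3]      # 3 grid points nearest p,
    return lagrange(grid[idx], q_grid[idx])(p)  # keeping the polynomial degree low
```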
European Journal of Operational Research | 2006
E. Jack Chen; W. David Kelton
This paper discusses two sequential procedures for constructing proportional half-width confidence intervals for a simulation estimator of a steady-state quantile, and tolerance intervals, for a stationary stochastic process having the (reasonable) property that the autocorrelation of the underlying process approaches zero with increasing lag. At each quantile to be estimated, the marginal cumulative distribution function must be absolutely continuous in some neighborhood of that quantile, with a positive, continuous probability density function there. These algorithms sequentially increase the simulation run length so that the quantile and tolerance-interval estimates satisfy pre-specified precision requirements. An experimental performance evaluation demonstrates the validity of these procedures.
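A rough sketch of the sequential stopping logic follows. The i.i.d. binomial-based confidence interval used below is a textbook stand-in for the paper's correlation-aware construction, and `simulate(n)` is an assumed hook that returns n further observations of the process.

```python
import numpy as np

def sequential_quantile(simulate, p, rel_prec=0.05, z=1.96, n0=1000, max_n=10**6):
    """Extend the run until the quantile CI half-width is within rel_prec of
    the point estimate (illustrative sketch; assumes 0 < p < 1)."""
    data = np.asarray(simulate(n0), dtype=float)
    while True:
        srt, n = np.sort(data), len(data)
        xq = srt[int(np.ceil(n * p)) - 1]         # order-statistic point estimate
        spread = z * np.sqrt(n * p * (1 - p))     # ~CI on the rank n*p (i.i.d. case)
        lo = srt[max(int(n * p - spread), 0)]
        hi = srt[min(int(n * p + spread), n - 1)]
        if (hi - lo) / 2 <= rel_prec * abs(xq) or n >= max_n:
            return xq, (lo, hi)
        data = np.append(data, simulate(n))       # double the run length
```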
Simulation Modelling Practice and Theory | 2003
E. Jack Chen; W. David Kelton
This paper discusses the implementation of a sequential procedure to determine the simulation run length and construct a confidence interval for the mean of a steady-state simulation. The quasi-independent (QI) procedure increases the simulation run length progressively until a certain number of essentially independent and identically distributed systematic samples are obtained. We estimate the variance of the sample mean through an empirical distribution (histogram). Several experimental performance evaluations demonstrate the validity of the QI procedure and the histogram approximation.
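The systematic-sampling loop at the heart of the QI idea can be sketched as below. A lag-1 autocorrelation band stands in here for a formal independence test, and `generate(m)` is an assumed hook returning the first m observations of one simulation run.

```python
import numpy as np

def quasi_independent_subsample(generate, n_required=4000, max_lag=1024):
    """Take every lag-th observation and double the lag (hence the run length)
    until the systematic subsample looks independent (illustrative sketch)."""
    lag = 1
    while True:
        series = np.asarray(generate(n_required * lag))   # extend the run
        sub = series[::lag][:n_required]                  # systematic subsample
        r1 = np.corrcoef(sub[:-1], sub[1:])[0, 1]         # lag-1 autocorrelation
        if abs(r1) < 2 / np.sqrt(len(sub)) or lag >= max_lag:
            return sub
        lag *= 2
```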
Simulation | 2007
E. Jack Chen; W. David Kelton
Batch means are sample means of subsets of consecutive observations from a simulation output sequence. Independent and normally distributed batch means are not only a requirement for constructing a confidence interval for the mean of the steady-state distribution of a stochastic process, but are also a prerequisite for other simulation procedures such as ranking and selection (R&S). We propose a procedure to generate approximately independent and normally distributed batch means, as determined by the von Neumann test of independence and the chi-square test of normality, and then to construct a confidence interval for the mean of a steady-state expected simulation response. It is our intention for the batch means to play the role of the independent and identically normally distributed observations that confidence intervals and the original versions of R&S procedures require. We perform an empirical study for several stochastic processes to evaluate the performance of the procedure and to investigate the problem of determining valid batch sizes.
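A minimal sketch of the batching step together with a von Neumann ratio test of independence, assuming SciPy; the normal approximation to the ratio's null distribution is a standard one (assumed here), and the chi-square normality check is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def batch_means(x, b):
    """Nonoverlapping batch means of batch size b (partial final batch dropped)."""
    n = (len(x) // b) * b
    return x[:n].reshape(-1, b).mean(axis=1)

def von_neumann_independent(y, alpha=0.05):
    """Accept independence if the von Neumann ratio (mean squared successive
    difference over the variance) is close enough to its null value of 2,
    using a standard normal approximation (assumed)."""
    n = len(y)
    ratio = np.sum(np.diff(y) ** 2) / np.sum((y - y.mean()) ** 2)
    z = (ratio - 2.0) / np.sqrt(4.0 * (n - 2) / (n ** 2 - 1))
    return abs(z) < norm.ppf(1 - alpha / 2)
```

In use, one would double the batch size until the resulting batch means pass this test (and a normality test), then build the usual t-based confidence interval from them.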
IIE Transactions | 2009
E. Jack Chen; W. David Kelton
A Quasi-Independent (QI) subsequence is a subset of time-series observations obtained by systematic sampling. Because the observations appear to be independent, as determined by runs tests, classical statistical techniques can be used on those observations directly. This paper discusses the implementation of a sequential procedure to determine the simulation run length needed to obtain a QI subsequence, and the batch size for constructing confidence intervals for an estimator of the steady-state mean of a stochastic process. The proposed QI procedure increases the simulation run length and batch size progressively until a certain number of essentially independent and identically distributed samples are obtained. The only (mild) assumption is that the correlations of the stochastic-process output sequence eventually die off as the lag increases. An experimental performance evaluation demonstrates the validity of the QI procedure.
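One common runs test, shown below, counts runs above and below the sample median (the Wald–Wolfowitz form); it is an illustrative stand-in, as the paper may use a different variant.

```python
import numpy as np
from scipy.stats import norm

def runs_test_independent(x, alpha=0.05):
    """Accept independence if the number of runs above/below the median is
    consistent with its null distribution (normal approximation; ties with
    the median are ignored in this sketch)."""
    signs = np.asarray(x) > np.median(x)
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / np.sqrt(var)
    return abs(z) < norm.ppf(1 - alpha / 2)
```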
Journal of Simulation | 2014
E. Jack Chen; W. David Kelton
This paper evaluates sequential procedures for estimating the steady-state density of a stochastic process, typically (though not necessarily) observed by simulation, with or without intra-process independence. The procedure computes sample densities at certain points and uses Lagrange interpolation to estimate the density f(x) for each user-specified x. The procedure sequentially determines the sample size by an intrinsic quasi-independent sequence and estimates the density by central finite differences. An experimental performance evaluation demonstrates the validity of using the procedure to estimate densities of steady-state stochastic processes.
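The central-finite-difference step can be sketched by differentiating the empirical CDF, f(x) ≈ (F(x+h) − F(x−h)) / (2h). Only that step is shown; the interpolation across grid points is omitted, and the bandwidth rule below is an illustrative assumption, not the paper's choice.

```python
import numpy as np

def density_cfd(sample, x, h=None):
    """Central-finite-difference density estimate from the empirical CDF
    (illustrative sketch)."""
    sample = np.sort(np.asarray(sample, dtype=float))
    if h is None:
        h = sample.std() * len(sample) ** (-1 / 5)   # crude bandwidth guess (assumed)
    F = lambda t: np.searchsorted(sample, t, side="right") / len(sample)
    return (F(x + h) - F(x - h)) / (2 * h)
```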
A Quarterly Journal of Operations Research | 2001
Pierre L'Ecuyer; Richard J. Simard; E. Jack Chen
Operations Research | 2002
Pierre L'Ecuyer; Richard J. Simard; E. Jack Chen; W. David Kelton
Winter Simulation Conference | 2000
E. Jack Chen; W. David Kelton