Alan M. Polansky
Northern Illinois University
Publications
Featured research published by Alan M. Polansky.
Journal of Quality Technology | 1998
Youn Min Chou; Alan M. Polansky; Robert L. Mason
Quality characteristics analyzed in statistical process control (SPC) often are required to be normally distributed. This is true in many types of control charts and acceptance sampling plans, as well as in process capability studies. If a characteristi..
Journal of Statistical Computation and Simulation | 2000
Alan M. Polansky; Edsel R. Baker
The use of a kernel estimator as a smooth estimator for a distribution function has been suggested by many authors. An expression for the bandwidth that minimizes the mean integrated squared error asymptotically has been available for some time. However, few practical data-based methods for estimating this bandwidth have been investigated. In this paper we propose a multistage plug-in type estimator for this optimal bandwidth and derive its asymptotic properties. In particular, we show that two stages are required for good asymptotic properties. This behavior is verified for finite samples using a simulation study.
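The kernel distribution function estimator that this paper studies can be sketched in Python as follows. This is a minimal illustration with a Gaussian kernel; the bandwidth shown is only a simple normal-reference rule at the MISE-optimal rate, not the paper's multistage plug-in estimator, and the function names are mine.

```python
import math

def kernel_cdf(data, x, h):
    """Kernel-smoothed estimate of the distribution function at x:
    the average of Phi((x - X_i) / h) over the sample, where Phi is
    the standard normal CDF (written via math.erf)."""
    return sum(0.5 * (1.0 + math.erf((x - xi) / (h * math.sqrt(2.0))))
               for xi in data) / len(data)

def reference_bandwidth(data):
    """Illustrative bandwidth of order n^(-1/3), the optimal rate for
    distribution function estimation (slower than the n^(-1/5) rate for
    density estimation). The constant and the paper's data-based
    multistage refinement are not reproduced here."""
    n = len(data)
    m = sum(data) / n
    sd = (sum((xi - m) ** 2 for xi in data) / (n - 1)) ** 0.5
    return sd * n ** (-1.0 / 3.0)
```

Unlike the empirical distribution function, this estimate is smooth in x, which is what makes bandwidth selection matter.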
Technometrics | 2001
Alan M. Polansky
The determination of the capability of a stable process with a multivariate quality characteristic using standard methods usually requires the assumption that the quality characteristic of interest follows a multivariate normal distribution. Unfortunately, multivariate normality is a difficult assumption to assess. Further, departures from this assumption can result in erroneous conclusions. In this article, I propose assessing the capability of a process using a nonparametric estimator. This estimator is based on a kernel estimate of an integral of a multivariate density. Bandwidth selection for this method is based on a smoothed bootstrap estimate of the mean squared error of the estimator. I also address the issue of constructing approximate confidence intervals. An example is presented that applies the proposed method to bivariate nonnormal process data. The performance of the resulting estimator is then compared to the sample proportion and a normal parametric estimate in a simulation study.
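A rough sketch of the multivariate idea, assuming a product Gaussian kernel, a common bandwidth per coordinate, and a rectangular specification region; the paper's smoothed-bootstrap bandwidth selection and confidence intervals are not reproduced, and the function names are mine.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kernel_conformance(data, lower, upper, h):
    """Nonparametric estimate of P(lower_j <= X_j <= upper_j for all j):
    the integral of a product-Gaussian kernel density estimate over the
    rectangular specification region. Each data point contributes the
    product, over coordinates, of the kernel mass inside the limits."""
    total = 0.0
    for x in data:
        p = 1.0
        for xj, lj, uj, hj in zip(x, lower, upper, h):
            p *= normal_cdf((uj - xj) / hj) - normal_cdf((lj - xj) / hj)
        total += p
    return total / len(data)
```

Because the estimate integrates a smooth density rather than counting points, it avoids the multivariate normality assumption entirely.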
Quality and Reliability Engineering International | 1998
Alan M. Polansky
The determination of the capability of a stable process using the standard process capability indices requires that the quality characteristic of interest be normally distributed. Departures from normality can result in erroneous conclusions when using these indices. In this paper we propose assessing the capability of a process using a nonparametric estimator. This estimator is based on kernel estimation of the distribution function. Bandwidth selection for this method can be based either on a normal reference distribution or on a nonparametric estimate. An example is presented that applies this proposed method to non-normal process data. The performance of the resulting estimator is then compared with the sample proportion and a normal-based estimate in a simulation study.
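The univariate comparison described above can be sketched as follows, assuming a Gaussian kernel and a user-supplied bandwidth (the paper's normal-reference and nonparametric bandwidth rules are not reproduced); the function returns both the kernel-smoothed estimate and the raw sample proportion it is compared against.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conformance_estimates(data, lsl, usl, h):
    """Return (kernel estimate, sample proportion) of P(LSL <= X <= USL).
    The kernel estimate is the difference of a kernel-smoothed distribution
    function evaluated at the upper and lower specification limits."""
    n = len(data)
    kern = sum(normal_cdf((usl - xi) / h) - normal_cdf((lsl - xi) / h)
               for xi in data) / n
    prop = sum(1 for xi in data if lsl <= xi <= usl) / n
    return kern, prop
```

The sample proportion is unbiased but coarse for small samples; the kernel estimate trades a little bias for smoothness.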
The American Statistician | 1999
Alan M. Polansky
Since its inception, a major use of the bootstrap methodology has been in the construction of approximate nonparametric confidence intervals. As evidenced by many spirited discussions over the past few years, the best way of constructing these intervals has not been resolved. In particular, empirical studies have shown that many of these intervals have disappointing finite sample coverage probabilities. The purpose of this article is to show that intervals based on percentiles of the bootstrap distribution have bounds on their finite sample coverage probabilities. Depending on the functional of interest and the distribution of the data, these bounds can be quite low. We argue that these bounds are valid even for methods that are asymptotically second-order accurate. These results are useful to researchers who are contemplating using this type of confidence interval when the sample size is small. These bounds are computed for several examples including the moments and quantiles of several distribu...
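To make the construction concrete, here is a minimal percentile-interval sketch (the function names are mine, not the article's). Note that both endpoints are order statistics of the resampled values, so they can never move beyond what the observed sample can produce; this structural fact is what underlies the finite-sample coverage bounds discussed in the article.

```python
import random
import statistics

def percentile_interval(data, stat, level=0.90, B=2000, seed=0):
    """Bootstrap percentile confidence interval: resample the data with
    replacement B times, recompute the statistic each time, and take the
    alpha/2 and 1 - alpha/2 empirical quantiles of the B values."""
    rng = random.Random(seed)
    boots = sorted(stat([rng.choice(data) for _ in data]) for _ in range(B))
    alpha = (1.0 - level) / 2.0
    return boots[int(alpha * B)], boots[int((1.0 - alpha) * B) - 1]
```

For example, a percentile interval for the mean is always contained in [min(data), max(data)], regardless of the nominal level.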
Journal of The Royal Statistical Society Series B-statistical Methodology | 1997
Alan M. Polansky; William R. Schucany
Some studies of the bootstrap have assessed the effect of smoothing the estimated distribution that is resampled, a process usually known as the smoothed bootstrap. Generally, the smoothed distribution for resampling is a kernel estimate and is often rescaled to retain certain characteristics of the empirical distribution. Typically the effect of such smoothing has been measured in terms of the mean-squared error of bootstrap point estimates. The reports of these previous investigations have not been encouraging about the efficacy of smoothing. In this paper the effect of resampling a kernel-smoothed distribution is evaluated through expansions for the coverage of bootstrap percentile confidence intervals. It is shown that, under the smooth function model, proper bandwidth selection can accomplish a first-order correction for the one-sided percentile method. With the objective of reducing the coverage error, the appropriate bandwidth for one-sided intervals converges at a rate of n^(-1/4), rather than the familiar n^(-1/5) for kernel density estimation. Applications of this same approach to bootstrap t and two-sided intervals yield optimal bandwidths of order n^(-1/2). These bandwidths depend on moments of the smooth function model and not on derivatives of the underlying density of the data. The relationship of this smoothing method to both the accelerated bias correction and the bootstrap t methods provides some insight into the connections between three quite distinct approximate confidence intervals.
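Resampling from a Gaussian kernel estimate has a simple simulation form, sketched below: pick a data point at random and perturb it with noise scaled by the bandwidth. This is only the resampling step; the paper's coverage expansions and bandwidth rates (n^(-1/4) for one-sided percentile intervals, n^(-1/2) for bootstrap t and two-sided intervals) are analytical results not reproduced here.

```python
import random

def smoothed_resample(data, h, rng):
    """One smoothed-bootstrap resample: drawing from a Gaussian kernel
    density estimate with bandwidth h is equivalent to choosing a data
    point uniformly and adding N(0, h^2) noise. With h = 0 this reduces
    to the ordinary (unsmoothed) bootstrap resample."""
    return [rng.choice(data) + h * rng.gauss(0.0, 1.0) for _ in data]
```

A smoothed-bootstrap procedure would apply this in place of ordinary resampling inside any bootstrap interval construction, with h chosen to target coverage error rather than density estimation error.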
Journal of Quality Technology | 1999
Alan M. Polansky; Youn Min Chou; Robert L. Mason
A FORTRAN program which selects the estimated best-fit Johnson transformation for a set of non-normal data is presented. The parameter estimates are computed using sample percentiles and the quality of the fit is assessed using the Shapiro-Wilk test sta..
Quality Engineering | 1998
Alan M. Polansky; Youn Min Chou; Robert L. Mason
Truncated data is a common occurrence in many industrial processes. This is particularly true in the situation of a supplier that has several customers, each with different specifications on the same product. Each customer will typically only observe a ..
Computational Statistics & Data Analysis | 2007
Alan M. Polansky
Markov chains provide a flexible model for dependent random variables with applications in such disciplines as physics, environmental science and economics. In the applied study of Markov chains, it may be of interest to assess whether the transition probability matrix changes during an observed realization of the process. If such changes occur, it would be of interest to estimate the transitions where the changes take place and the probability transition matrix before and after each change. For the case when the number of changes is known, standard likelihood theory is developed to address this problem. The bootstrap is used to aid in the computation of p-values. When the number of changes is unknown, the AIC and BIC measures are used for model selection. The proposed methods are studied empirically and are applied to example sets of data.
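As a rough sketch of the likelihood machinery for the known-number-of-changes case, restricted to a single change point (the bootstrap p-value computation and the AIC/BIC model selection for an unknown number of changes are omitted, and the function names are mine): fit maximum-likelihood transition matrices separately before and after each candidate change and keep the split with the largest total log-likelihood.

```python
import math

def transition_mle(chain):
    """Maximum-likelihood transition probabilities from one observed
    realization: transition counts normalized by row totals."""
    counts = {}
    for a, b in zip(chain, chain[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

def log_likelihood(chain, P):
    """Log-likelihood of the observed transitions under matrix P."""
    return sum(math.log(P[a][b]) for a, b in zip(chain, chain[1:]))

def best_single_change(chain):
    """Profile likelihood over a single change point: for each candidate
    transition k, fit separate matrices to the segments before and after
    it and return the split maximizing the total log-likelihood."""
    best_k, best_ll = None, -math.inf
    for k in range(2, len(chain) - 2):
        head, tail = chain[:k + 1], chain[k:]   # segments share state k
        ll = (log_likelihood(head, transition_mle(head))
              + log_likelihood(tail, transition_mle(tail)))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll
```

On a realization whose transition behavior changes abruptly, the profiled log-likelihood peaks at the true change, which is the intuition behind the likelihood-ratio tests developed in the paper.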
Archive | 2011
Alan M. Polansky
Contents:
1. Sequences of Real Numbers and Functions: Introduction; Sequences of Real Numbers; Sequences of Real Functions; The Taylor Expansion; Asymptotic Expansions; Inversion of Asymptotic Expansions
2. Random Variables and Characteristic Functions: Introduction; Probability Measures and Random Variables; Some Important Inequalities; Some Limit Theory for Events; Generating and Characteristic Functions
3. Convergence of Random Variables: Introduction; Convergence in Probability; Stronger Modes of Convergence; Convergence of Random Vectors; Continuous Mapping Theorems; Laws of Large Numbers; The Glivenko-Cantelli Theorem; Sample Moments; Sample Quantiles
4. Convergence of Distributions: Introduction; Weak Convergence of Random Variables; Weak Convergence of Random Vectors; The Central Limit Theorem; The Accuracy of the Normal Approximation; The Sample Moments; The Sample Quantiles
5. Convergence of Moments: Convergence in rth Mean; Uniform Integrability; Convergence of Moments
6. Central Limit Theorems: Introduction; Non-Identically Distributed Random Variables; Triangular Arrays; Transformed Random Variables
7. Asymptotic Expansions for Distributions: Approximating a Distribution; Edgeworth Expansions; The Cornish-Fisher Expansion; The Smooth Function Model; General Edgeworth and Cornish-Fisher Expansions; Studentized Statistics; Saddlepoint Expansions
8. Asymptotic Expansions for Random Variables: Approximating Random Variables; Stochastic Order Notation; The Delta Method; The Sample Moments
9. Differentiable Statistical Functionals: Introduction; Functional Parameters and Statistics; Differentiation of Statistical Functionals; Expansion Theory for Statistical Functionals; Asymptotic Distribution
10. Parametric Inference: Introduction; Point Estimation; Confidence Intervals; Statistical Hypothesis Tests; Observed Confidence Levels; Bayesian Estimation
11. Nonparametric Inference: Introduction; Unbiased Estimation and U-Statistics; Linear Rank Statistics; Pitman Asymptotic Relative Efficiency; Density Estimation; The Bootstrap
Appendix A: Useful Theorems and Notation
Appendix B: Using R for Experimentation
References
Exercises and Experiments appear at the end of each chapter.