
Publication


Featured research published by James J. Swain.


Naval Research Logistics | 1994

Tests for transient means in simulated time series

David Goldsman; Lee W. Schruben; James J. Swain

We present a family of tests to detect the presence of a transient mean in a simulation process. These tests compare variance estimators from different parts of a simulation run, and are based on the methods of batch means and standardized time series. Our tests can be viewed as natural generalizations of some previously published work. We also include a power analysis of the new tests, as well as some illustrative examples.
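
A minimal sketch of the underlying idea, not the paper's exact test statistics: variance estimators computed from different portions of a run are compared, and a strong initialization transient shows up as a large ratio. The run length, batch counts, and the F-based cutoff below are illustrative assumptions.

    # Compare batch-means variance estimators from the first and second halves
    # of a simulated run; a large ratio suggests a transient mean.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, b = 4000, 20                                    # run length, batches per half
    transient = 5.0 * np.exp(-np.arange(n) / 300.0)    # decaying initialization bias
    y = transient + rng.normal(size=n)                 # simulated output process

    def batch_means_var(x, n_batches):
        """Batch-means estimate of the variance parameter (batch size times
        the sample variance of the batch means)."""
        m = len(x) // n_batches
        means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
        return m * means.var(ddof=1)

    v_first = batch_means_var(y[: n // 2], b)
    v_second = batch_means_var(y[n // 2:], b)
    ratio = v_first / v_second
    crit = stats.f.ppf(0.95, b - 1, b - 1)             # rough F cutoff, assumed here
    print(f"ratio = {ratio:.2f}, 95% critical value = {crit:.2f}")
    print("transient suspected" if ratio > crit else "no evidence of a transient")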


Winter Simulation Conference | 1988

Input modeling with the Johnson system of distributions

David J. DeBrota; Stephen D. Roberts; James J. Swain; Robert S. Dittus; James R. Wilson; Sekhar Venkatraman

This paper provides an introduction to the Johnson translation system of probability distributions, and it describes methods for using the Johnson system to model input processes in simulation experiments. The fitting methods based on available data are incorporated into the public-domain software package FITTR1. To handle situations in which little or no data are available, we present a visual interactive method for subjective distribution fitting that has been implemented in the public-domain software package VISIFIT. We present several examples illustrating the use of FITTR1 and VISIFIT for simulation input modeling.
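
As a rough illustration of data-based Johnson fitting, the sketch below uses SciPy's built-in johnsonsu family as a stand-in for the FITTR1 procedures described above; the lognormal sample and the goodness-of-fit check are assumptions for the example only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.lognormal(mean=1.0, sigma=0.4, size=500)   # skewed "input" sample

    # Maximum-likelihood fit of an unbounded Johnson (S_U) distribution.
    a, b, loc, scale = stats.johnsonsu.fit(data)
    fitted = stats.johnsonsu(a, b, loc=loc, scale=scale)

    # Quick adequacy check, then use the fitted model as a simulation input source.
    ks = stats.kstest(data, fitted.cdf)
    print(f"shape=({a:.3f}, {b:.3f}) loc={loc:.3f} scale={scale:.3f} KS p={ks.pvalue:.3f}")
    draws = fitted.rvs(size=10, random_state=rng)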


Quality and Reliability Engineering International | 1999

A weighted variance capability index for general non‐normal processes

Hsin-Hung Wu; James J. Swain; Phillip A. Farrington; Sherri L. Messimer

Process capability indices are considered to be one of the important quality measurement tools for the continuous improvement of quality and productivity. The most commonly used indices assume that process data are normally distributed. However, many studies have pointed out that these normality-based indices are very sensitive to non-normal processes. We therefore propose a new process capability index that applies the weighted variance control charting method to non-normal processes, improving the measurement of process performance when the process data are non-normally distributed. The main idea of the weighted variance method is to divide a skewed or asymmetric distribution at its mean into two normal distributions that share the same mean but have different standard deviations. We provide an example, a distribution generated from the Johnson family of distributions, to demonstrate how the weighted variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and Johnson–Kotz–Pearn methods. This example shows that the weighted variance-based indices are more consistent than the other two methods in estimating process fallout for non-normal processes.
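
A minimal sketch of a weighted-variance capability calculation, assuming the common formulation in which the process standard deviation is split into upper and lower components according to P(X <= mean); this illustrates the idea and is not necessarily the exact index defined in the paper.

    import numpy as np

    def weighted_variance_cpk(x, lsl, usl):
        """Cpk-style index using weighted-variance standard deviations."""
        mu, sigma = x.mean(), x.std(ddof=1)
        p = np.mean(x <= mu)                        # P(X <= mean)
        sigma_u = sigma * np.sqrt(2.0 * p)          # spread used toward the upper limit
        sigma_l = sigma * np.sqrt(2.0 * (1.0 - p))  # spread used toward the lower limit
        return min((usl - mu) / (3.0 * sigma_u), (mu - lsl) / (3.0 * sigma_l))

    rng = np.random.default_rng(2)
    x = rng.gamma(shape=2.0, scale=1.0, size=1000)  # skewed process data (illustrative)
    print(f"weighted-variance Cpk = {weighted_variance_cpk(x, lsl=0.0, usl=8.0):.3f}")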


Winter Simulation Conference | 1989

Modeling Input Processes With Johnson Distributions

David J. DeBrota; Stephen D. Roberts; Robert S. Dittus; James R. Wilson; James J. Swain; Sekhar Venkatraman

This paper provides an introduction to the Johnson translation system of probability distributions, and it describes methods for using the Johnson system to model input processes in simulation experiments. For situations in which little or no sample information is available, we have developed a visual interactive method to estimate bounded Johnson distributions subjectively; we have implemented this technique in VISIFIT, a public-domain software package. For fitting all types of Johnson distributions based on sample data, we have implemented several new statistical-estimation methods as well as some standard techniques in FITTR1, another public-domain software package. We present several examples illustrating the use of VISIFIT and FITTR1 for simulation input modeling.
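
For the bounded (S_B) case estimated subjectively, a chosen parameter set can be turned into input variates through the Johnson translation itself. The sketch below assumes illustrative parameter values rather than actual VISIFIT output.

    import numpy as np

    xi, lam = 2.0, 6.0        # subjectively chosen lower bound and range (2 to 8)
    gamma, delta = 0.5, 1.2   # subjectively chosen shape parameters

    rng = np.random.default_rng(3)
    z = rng.standard_normal(100_000)
    # S_B translation Z = gamma + delta * ln((X - xi) / (xi + lam - X)), inverted:
    x = xi + lam / (1.0 + np.exp(-(z - gamma) / delta))

    print(f"min={x.min():.2f} max={x.max():.2f} mean={x.mean():.2f} median={np.median(x):.2f}")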


Operations Research | 1998

Stationary Policies in Multiechelon Inventory Systems with Deterministic Demand and Backlogging

Gamze Tokol; David Goldsman; Daniel H. Ockerman; James J. Swain; Fangruo Chen

A stationary policy conducts replenishment activities (the placement and fulfillment of orders) in a stationary fashion: under such a policy, each facility receives a constant, facility-specific batch at equal, facility-specific time intervals. Although the advantages of stationary policies are clear (smooth operations), they represent a restriction in policy selection. This paper investigates how costly this restriction can be. For two multiechelon systems (serial and distribution) with deterministic demand and backlogging, we show that stationary policies are 70%-effective. This bound is tight in the sense that an example exists where the bound is reached. On the other hand, the average effectiveness of stationary policies is very high. In a set of 1,000 randomly generated numerical examples, we observed that the average effectiveness was 99% and the standard deviation was 1.5%. The numerical examples also suggest that the performance of stationary policies deteriorates in systems where the setup cost decreases dramatically from an upstream stage to a downstream stage. Finally, a key building block of the above results is the existing lower bounds on the average costs of all feasible policies in these systems; we provide a simpler derivation of these bounds.
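
The sketch below illustrates the effectiveness calculation for a two-stage serial system with deterministic demand, assuming the textbook echelon-cost model and the standard echelon-EOQ lower bound; the cost parameters are illustrative and none of this is taken verbatim from the paper.

    import numpy as np

    d = 100.0                 # demand rate
    K = [500.0, 50.0]         # setup costs: stage 1 (upstream), stage 2 (downstream)
    h = [1.0, 2.0]            # echelon holding cost rates

    def stationary_cost(T2, k):
        """Average cost of a nested stationary policy: stage 2 orders every T2,
        stage 1 orders every k * T2 for a positive integer k."""
        T1 = k * T2
        return K[0] / T1 + h[0] * d * T1 / 2 + K[1] / T2 + h[1] * d * T2 / 2

    # Best stationary policy found by a coarse grid search.
    best = min(stationary_cost(T2, k)
               for T2 in np.linspace(0.05, 5.0, 2000) for k in range(1, 20))

    # Assumed lower bound on every feasible policy: sum of per-stage echelon EOQ costs.
    lower_bound = sum(np.sqrt(2 * Ki * hi * d) for Ki, hi in zip(K, h))
    print(f"best stationary cost = {best:.1f}, lower bound = {lower_bound:.1f}, "
          f"effectiveness >= {lower_bound / best:.3f}")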


Communications in Statistics - Theory and Methods | 1990

Effects of a single outlier on ARMA identification

Stuart Jay Deutsch; Jeery E. Richards; James J. Swain

Fox (1972), Box and Tiao (1975), and Abraham and Box (1979) have proposed methods for detecting outliers in time series whose ARMA form is known (or identified). We show that the existence of a single aberrant observation, innovation, or intervention causes an ARMA model to be misidentified when unadjusted autocorrelation (ACF) and partial autocorrelation estimates are used. The magnitude, location, and type of outlier, and in some cases the ARMA parameters, affect the identification outcome. We use variance inflation, signal-to-noise ratios, and ACF critical values to determine an ARMA model's susceptibility to misidentification. Numerical and simulation examples suggest how to use the outlier detection methods iteratively in practice.
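
A minimal sketch of the distortion, assuming statsmodels is available: one additive outlier in an AR(1) series visibly shrinks the sample ACF and PACF values an analyst would use for identification. The series length, AR parameter, and outlier size are illustrative.

    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    rng = np.random.default_rng(4)
    n, phi = 300, 0.7
    e = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):                  # simulate an AR(1) process
        y[t] = phi * y[t - 1] + e[t]

    y_out = y.copy()
    y_out[150] += 15.0                     # inject a single additive outlier

    print("ACF  lags 1-3, clean  :", np.round(acf(y, nlags=3)[1:], 3))
    print("ACF  lags 1-3, outlier:", np.round(acf(y_out, nlags=3)[1:], 3))
    print("PACF lags 1-3, clean  :", np.round(pacf(y, nlags=3)[1:], 3))
    print("PACF lags 1-3, outlier:", np.round(pacf(y_out, nlags=3)[1:], 3))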


Winter Simulation Conference | 1994

Designing simulation experiments for evaluating manufacturing systems

James J. Swain; Phillip A. Farrington

Simulation experiments can benefit from proper planning and design, which can often increase the precision of estimates and strengthen confidence in conclusions drawn from the simulations. While simulation experiments are broadly similar to any statistical experiment, there are a number of differences. In particular, it is often possible to exploit the control of random numbers used to drive the simulation model. To illustrate the methodology described, four examples drawn from manufacturing are used.
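
One such opportunity is the use of common random numbers when comparing system configurations. The sketch below uses a simple single-server queue (Lindley recursion) as an assumed example; it is not one of the paper's manufacturing cases.

    import numpy as np

    def avg_wait(service_rate, arrivals, services):
        """Average waiting time from the Lindley recursion for a single-server queue."""
        wait, total = 0.0, 0.0
        for a, s in zip(arrivals, services):
            wait = max(0.0, wait + s / service_rate - a)
            total += wait
        return total / len(arrivals)

    rng = np.random.default_rng(5)
    n_reps, n_cust = 50, 2000
    diffs_crn, diffs_ind = [], []
    for _ in range(n_reps):
        arr = rng.exponential(1.0, n_cust)     # interarrival times (shared stream)
        svc = rng.exponential(0.8, n_cust)     # service requirements (shared stream)
        diffs_crn.append(avg_wait(1.0, arr, svc) - avg_wait(1.1, arr, svc))
        arr2 = rng.exponential(1.0, n_cust)    # fresh streams for the independent case
        svc2 = rng.exponential(0.8, n_cust)
        diffs_ind.append(avg_wait(1.0, arr, svc) - avg_wait(1.1, arr2, svc2))

    print(f"std of estimated difference - CRN: {np.std(diffs_crn):.4f}, "
          f"independent: {np.std(diffs_ind):.4f}")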


Quality and Reliability Engineering International | 2009

Lower confidence limits for process capability indices Cp and Cpk when data are autocorrelated

Cynthia R. Lovelace; James J. Swain; Hisham Zeinelabdin; Jatinder N. D. Gupta

Many organizations use a single estimate of Cp and/or Cpk for process benchmarking, without considering the sampling variability of the estimators and how it affects the probability of meeting minimum index requirements. Lower confidence limits have previously been determined for the Cp and Cpk indices under the standard assumption of independent data, based on the sampling distributions of the index estimators. In this paper, lower 100(1-α)% confidence limits for Cp and Cpk are developed for autocorrelated processes. Simulation was used to generate the empirical sampling distribution of each estimator for various combinations of sample size (n), autoregressive parameter (ϕ), true index value (Cp or Cpk), and confidence level. In addition, the minimum values of the estimators required to meet quality requirements with 100(1-α)% certainty were determined from these empirical sampling distributions. These tables may be used by practitioners to set minimum capability requirements for index estimators, rather than true values, in the autocorrelated case. The implications of these results for practitioners are discussed.
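
A minimal sketch of the simulation approach described above: generate many AR(1) samples with a known true Cp, form the empirical sampling distribution of the estimator, and read off a lower percentile. The sample size, ϕ, and specification limits are illustrative assumptions, not values from the paper's tables.

    import numpy as np

    rng = np.random.default_rng(6)
    n, phi, reps = 100, 0.5, 5000
    lsl, usl = -3.0, 3.0                       # true Cp = (usl - lsl) / (6 * 1.0) = 1.0

    cp_hats = np.empty(reps)
    for r in range(reps):
        x = np.empty(n)
        x[0] = rng.normal()                    # start in the stationary distribution
        e = rng.normal(scale=np.sqrt(1 - phi**2), size=n)
        for t in range(1, n):                  # AR(1) with marginal variance 1
            x[t] = phi * x[t - 1] + e[t]
        cp_hats[r] = (usl - lsl) / (6 * x.std(ddof=1))

    print(f"empirical 5th percentile of Cp-hat: {np.quantile(cp_hats, 0.05):.3f} "
          f"(true Cp = 1.0)")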


Quality Engineering | 2009

Process Capability Analysis Methodologies for Zero-Bound, Non-Normal Process Data

Cynthia R. Lovelace; James J. Swain

The original Japanese process capability indices and Shewhart quality control charts (1939) were designed for use with independent, normally distributed data. When tracking inherently non-normal processes that tend to exhibit multiplicative rather than additive error variation, the options for statistical process monitoring and capability estimation are more limited. In particular, for zero-bound process variables such as flatness or parallelism, the normality of the process data is significantly distorted as the process improves and approaches its desired level of zero. In this article, we propose a process capability index estimation methodology for Cp and Cpk for the case of non-normal, zero-bound process data using the delta distribution, a variant of the lognormal distribution. This approach utilizes quantile estimates derived from a proposed modification of lognormal quality control charts (originally introduced by Morrison, 1958, and Ferrell, 1958), thus allowing statistical control to be tracked and achieved before index estimation. When process data are skewed, these process control and capability estimation techniques are superior to those that rely on normality assumptions; when the skewed data are also zero-bound, they provide additional benefits over traditional quantile transform techniques.
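
The sketch below illustrates a quantile-based capability estimate for zero-bound, skewed data, using an ordinary lognormal fit as a simplified stand-in for the paper's delta-distribution approach (which also accommodates exact zeros); the data and specification limit are invented for the example.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    flatness = rng.lognormal(mean=-3.0, sigma=0.5, size=200)   # zero-bound measurements

    shape, loc, scale = stats.lognorm.fit(flatness, floc=0.0)  # force lower bound at 0
    dist = stats.lognorm(shape, loc=loc, scale=scale)

    usl = 0.15                                                  # one-sided upper spec
    q50, q99865 = dist.ppf([0.5, 0.99865])
    cpu = (usl - q50) / (q99865 - q50)          # quantile analogue of the upper index
    print(f"quantile-based Cpu = {cpu:.3f}")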


Journal of Spacecraft and Rockets | 2008

Evaluating Technology Projections and Weight Prediction Method Uncertainty of Future Launch Vehicles

Alan Wilhite; Sampson Gholston; Phillip A. Farrington; James J. Swain

A process was developed for determining the impact of technology performance assumptions and weight prediction method uncertainty. Weight and performance uncertainties were defined for components from the historical weight-estimating relationships that are typically used during the concept definition phase. A systems analysis model was developed that sizes vehicle geometry, propellant, and component weights to meet mission requirements. The uncertainties and the systems analysis model were integrated with a Monte Carlo simulation to determine the resulting uncertainty in system weight. These uncertainties were integrated into the analyses of single-stage and two-stage reusable launch concepts to demonstrate the influence of technology uncertainty on concepts having different gross weight sensitivities to component weight changes. Finally, this process was extended as a model for measuring the progress of technology development programs.
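
A minimal sketch of the Monte Carlo step, with invented components and uncertainty ranges standing in for the historical weight-estimating relationships: each component weight receives an independent multiplicative error factor, and percentiles of the resulting gross-weight distribution summarize the uncertainty.

    import numpy as np

    rng = np.random.default_rng(8)
    nominal = {"structure": 12_000, "propulsion": 8_000, "thermal": 3_000, "avionics": 1_500}
    rel_sigma = {"structure": 0.15, "propulsion": 0.10, "thermal": 0.25, "avionics": 0.20}

    reps = 20_000
    gross = np.zeros(reps)
    for name, w in nominal.items():
        # Apply an independent multiplicative uncertainty factor per component.
        gross += w * rng.normal(loc=1.0, scale=rel_sigma[name], size=reps)

    p5, p50, p95 = np.percentile(gross, [5, 50, 95])
    print(f"gross weight: 5th={p5:,.0f}  median={p50:,.0f}  95th={p95:,.0f}")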

Collaboration


Dive into James J. Swain's collaborations.

Top Co-Authors

Phillip A. Farrington - University of Alabama in Huntsville
James R. Wilson - North Carolina State University
David Goldsman - Georgia Institute of Technology
Nitin S. Sharma - University of Alabama in Huntsville
Sherri L. Messimer - University of Alabama in Huntsville
Bernard J. Schroer - University of Alabama in Huntsville
Gregory A. Harris - University of Alabama in Huntsville
Cynthia R. Lovelace - University of Alabama in Huntsville