
Publication


Featured research published by Daniel W. Apley.


IIE Transactions | 1999

The GLRT for statistical process control of autocorrelated processes

Daniel W. Apley; Jianjun Shi

This paper presents an on-line Statistical Process Control (SPC) technique, based on a Generalized Likelihood Ratio Test (GLRT), for detecting and estimating mean shifts in autocorrelated processes that follow a normally distributed Autoregressive Integrated Moving Average (ARIMA) model. The GLRT is applied to the uncorrelated residuals of the appropriate time-series model. The performance of the GLRT is compared to two other commonly applied residual-based tests: a Shewhart individuals chart and a CUSUM test. A wide range of ARIMA models is considered, with the conclusion that the best residual-based test to use depends on the particular ARIMA model used to describe the autocorrelation. For many models, the GLRT performance is far superior to either a CUSUM or Shewhart test, while for others the difference is negligible or the CUSUM test performs slightly better. Simple, intuitive guidelines are provided for determining which residual-based test to use. Additional advantages of the GLRT are that it directly provides estimates of the magnitude and time of occurrence of the mean shift, and can be used to distinguish different types of faults, e.g., a sustained mean shift versus a temporary spike.
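A minimal sketch of the residual-based GLRT idea described above (illustrative only; the window length, known sigma, and function names are assumptions, not the paper's implementation):

```python
import numpy as np

def glrt_mean_shift(residuals, sigma=1.0, window=20):
    """At each candidate onset time tau in a trailing window, compute the
    likelihood-ratio statistic for a sustained mean shift beginning at tau
    in otherwise iid N(0, sigma^2) residuals. Returns the maximum statistic
    together with the implied estimates of shift time and magnitude."""
    e = np.asarray(residuals, dtype=float)
    t = len(e)
    best_stat, best_tau, best_mu = 0.0, None, 0.0
    for tau in range(max(0, t - window), t):
        n = t - tau
        mu_hat = e[tau:].mean()              # MLE of the shift magnitude
        stat = n * mu_hat**2 / sigma**2      # 2 * log-likelihood ratio
        if stat > best_stat:
            best_stat, best_tau, best_mu = stat, tau, mu_hat
    return best_stat, best_tau, best_mu

# A 2-sigma sustained shift injected at index 80 of 100 residuals
rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, 100)
e[80:] += 2.0
stat, tau, mu = glrt_mean_shift(e)
```

Note how the maximizing tau and mu_hat double as the estimated time and magnitude of the shift, one of the advantages the abstract highlights.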


Journal of Manufacturing Science and Engineering, Transactions of the ASME | 1998

Diagnosis of Multiple Fixture Faults in Panel Assembly

Daniel W. Apley; Jianjun Shi

This paper presents a modeling procedure and diagnostic algorithm for fixture-related faults in panel assembly. From geometric information about the panel and fixture, a fixture fault model can be constructed off-line. Combining the fault model with in-line panel dimensional measurements, the algorithm is capable of detecting and classifying multiple fixture faults. The algorithm, which relies heavily on the fault model, is based on least squares estimation. Consequently, the test is of relatively simple form and is easily implemented and analyzed. Experimental results of applying the algorithm to an autobody assembly process are provided.
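The least-squares structure can be sketched as follows (the fault-pattern matrix, the noiseless data, and the threshold are illustrative assumptions, not the paper's model):

```python
import numpy as np

def diagnose_fixture_faults(A, y, threshold=1.0):
    """Least-squares fault estimation: measurements follow y = A @ f + noise,
    where each column of A is a fault pattern constructed off-line from panel
    and fixture geometry. Components of the estimate f_hat whose magnitude
    exceeds the threshold are flagged as active faults."""
    f_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    flagged = np.flatnonzero(np.abs(f_hat) > threshold)
    return f_hat, flagged

# Two hypothetical fault patterns over three measurement points;
# only fault 1 is active (magnitude 5), with no measurement noise.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = A @ np.array([0.0, 5.0])
f_hat, flagged = diagnose_fixture_faults(A, y)
```

Because the estimator is ordinary least squares, both the detection test and its statistical analysis stay simple, which is the point the abstract makes.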


Journal of Mechanical Design | 2006

Understanding the effects of model uncertainty in robust design with computer experiments

Daniel W. Apley; Jun Liu; Wei Chen

The use of computer experiments and surrogate approximations (metamodels) introduces a source of uncertainty in simulation-based design that we term model interpolation uncertainty. Most existing approaches for treating interpolation uncertainty in computer experiments have been developed for deterministic optimization and are not applicable to design under uncertainty, in which randomness is present in noise and/or design variables. Because the random noise and/or design variables are also inputs to the metamodel, the effects of metamodel interpolation uncertainty are not nearly as transparent as in deterministic optimization. In this work, a methodology is developed within a Bayesian framework for quantifying the impact of interpolation uncertainty on the robust design objective, under consideration of uncertain noise variables. By viewing the true response surface as a realization of a random process, as is common in kriging and other Bayesian analyses of computer experiments, we derive a closed-form analytical expression for a Bayesian prediction interval on the robust design objective function. This provides a simple, intuitively appealing tool for distinguishing the best design alternative and conducting more efficient computer experiments. We illustrate the proposed methodology with two robust design examples: a simple container design and an automotive engine piston design with more nonlinear response behavior and mixed continuous-discrete design variables.
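The paper's closed-form interval is not reproduced here, but the underlying idea can be illustrated with a Monte Carlo stand-in: given a Gaussian-process posterior for the response over noise-variable values, a simplified robust objective (here just the expectation over the noise variables; the paper's objective also involves variability) inherits a posterior distribution, and its quantiles form an interval reflecting interpolation uncertainty. All names and numbers below are illustrative.

```python
import numpy as np

def robust_objective_interval(mu, cov, weights, n_draws=2000, alpha=0.05, seed=0):
    """Given the GP posterior over f(x, z) on a grid of noise-variable values z
    (posterior mean vector `mu`, covariance `cov`), draw realizations of the
    response, evaluate the simplified robust objective g(x) = weights @ f for
    each draw, and return a (1 - alpha) credible interval on g."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mu, cov, size=n_draws)  # GP realizations on the z-grid
    g = draws @ weights                                     # robust objective per draw
    return np.quantile(g, [alpha / 2, 1 - alpha / 2])

# Toy posterior on a 3-point z-grid with equal quadrature weights
mu = np.array([1.0, 2.0, 3.0])
cov = 0.04 * np.eye(3)
lo, hi = robust_objective_interval(mu, cov, np.ones(3) / 3)
```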


Technometrics | 2001

A Factor-Analysis Method for Diagnosing Variability in Multivariate Manufacturing Processes

Daniel W. Apley; Jianjun Shi

In many modern manufacturing processes, large quantities of multivariate process-measurement data are available through automated in-process sensing. This article presents a statistical technique for extracting and interpreting information from the data for the purpose of diagnosing root causes of process variability. The method is related to principal components analysis and factor analysis but makes more explicit use of a model describing the relationship between process faults and process variability. Statistical properties of the diagnostic method are discussed, and illustrative examples from autobody assembly are provided.
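A rough PCA-style sketch of the eigendecomposition idea behind such diagnosis, under the common linear model y = C v + w with cov(y) = C C^T + sigma2 * I (function names and the noise-variance input are assumptions, not the paper's algorithm):

```python
import numpy as np

def estimate_variation_patterns(Y, n_sources, sigma2=0.0):
    """Rows of Y are multivariate measurements. Under y = C v + w with
    cov(y) = C C^T + sigma2 * I, the columns of C can be estimated (up to
    rotation) from the leading eigenpairs of the sample covariance, scaling
    each eigenvector by sqrt(eigenvalue - sigma2)."""
    S = np.cov(Y, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1][:n_sources]
    scale = np.sqrt(np.maximum(vals[order] - sigma2, 0.0))
    return vecs[:, order] * scale

# Simulated data with a single variation source acting along pattern c
rng = np.random.default_rng(1)
c = np.array([1.0, 2.0, 0.0, -1.0])
v = rng.normal(0.0, 1.0, (500, 1))
Y = v @ c[None, :] + 0.1 * rng.normal(size=(500, 4))
C_hat = estimate_variation_patterns(Y, 1, sigma2=0.01)
cos = abs(C_hat[:, 0] @ c) / (np.linalg.norm(C_hat[:, 0]) * np.linalg.norm(c))
```

The recovered column aligns closely with the true pattern c, which is what makes the estimated patterns interpretable as root-cause signatures.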


Journal of Mechanical Design | 2012

Quantification of Model Uncertainty: Calibration, Model Discrepancy, and Identifiability

Paul D. Arendt; Daniel W. Apley; Wei Chen

To use predictive models in engineering design of physical systems, one should first quantify the model uncertainty via model updating techniques employing both simulation and experimental data. While calibration is often used to tune unknown calibration parameters of a computer model, the addition of a discrepancy function has been used to capture model discrepancy due to underlying missing physics, numerical approximations, and other inaccuracies of the computer model that would exist even if all calibration parameters are known. One of the main challenges in model updating is the difficulty in distinguishing between the effects of calibration parameters versus model discrepancy. We illustrate this identifiability problem with several examples, explain the mechanisms behind it, and attempt to shed light on when a system may or may not be identifiable. In some instances, identifiability is achievable under mild assumptions, whereas in other instances, it is virtually impossible. In a companion paper, we demonstrate that using multiple responses, each of which depends on a common set of calibration parameters, can substantially enhance identifiability.


Journal of Quality Technology | 2002

The Autoregressive T2 Chart for Monitoring Univariate Autocorrelated Processes

Daniel W. Apley; Fugee Tsung

In this paper we investigate the autoregressive T2 control chart for statistical process control of autocorrelated processes. The method involves monitoring, using Hotelling's T2 statistic, a vector formed from a moving window of observations of the univariate autocorrelated process. It is shown that the T2 statistic can be decomposed into the sum of the squares of the residual errors for various order autoregressive time series models fit to the process data. Guidelines for designing the autoregressive T2 chart are presented, and its performance is compared to that of residual-based CUSUM and Shewhart individual control charts. The autoregressive T2 chart has a number of characteristics, including some level of robustness with respect to modeling errors, that make it an attractive alternative to residual-based control charts for autocorrelated processes.
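The moving-window construction can be sketched as follows (the window length and in-control covariance are illustrative inputs, and control limits are omitted):

```python
import numpy as np

def autoregressive_t2(x, window, Sigma):
    """Slide a length-`window` vector over the univariate series x and
    compute Hotelling's T2 = v^T Sigma^{-1} v for each window, where Sigma
    is the in-control covariance matrix of a window vector implied by the
    fitted time-series model."""
    Sinv = np.linalg.inv(Sigma)
    t2 = []
    for i in range(window - 1, len(x)):
        v = x[i - window + 1 : i + 1]
        t2.append(float(v @ Sinv @ v))
    return np.array(t2)

# For an iid N(0,1) in-control process, Sigma is the identity and each
# T2 value is chi-squared with `window` degrees of freedom (mean = window).
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 2000)
t2 = autoregressive_t2(x, window=3, Sigma=np.eye(3))
```

For an autocorrelated process, Sigma would instead come from the fitted ARMA model, which is where the decomposition into autoregressive residual sums of squares arises.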


Journal of Computational and Graphical Statistics | 2015

Local Gaussian Process Approximation for Large Computer Experiments

Robert B. Gramacy; Daniel W. Apley

We provide a new approach to approximate emulation of large computer experiments. By focusing expressly on desirable properties of the predictive equations, we derive a family of local sequential design schemes that dynamically define the support of a Gaussian process predictor based on a local subset of the data. We further derive expressions for fast sequential updating of all needed quantities as the local designs are built up iteratively. Then we show how independent application of our local design strategy across the elements of a vast predictive grid facilitates a trivially parallel implementation. The end result is a global predictor able to take advantage of modern multicore architectures, providing a nonstationary modeling feature as a bonus. We demonstrate our method on two examples using designs with thousands of data points, and compare to the method of compactly supported covariances. Supplementary materials for this article are available online.
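As a toy illustration of the "local subset" idea (plain nearest neighbors stand in for the paper's sequential design criterion; the kernel, length-scale, and nugget are assumptions):

```python
import numpy as np

def local_gp_predict(X, y, xstar, n_local=15, ls=1.0, nugget=1e-4):
    """Predict at xstar using a zero-mean Gaussian process (squared-
    exponential kernel) fit only to the n_local training points nearest
    to xstar, rather than to all of X."""
    X = np.atleast_2d(X)
    xstar = np.atleast_1d(xstar)
    near = np.argsort(np.linalg.norm(X - xstar, axis=1))[:n_local]
    Xl, yl = X[near], y[near]

    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * ls**2))

    K = k(Xl, Xl) + nugget * np.eye(n_local)
    ks = k(Xl, xstar[None, :])[:, 0]
    return ks @ np.linalg.solve(K, yl)   # GP posterior mean

# Emulate sin(x) from 40 design points; predict at x = 1.3
Xd = np.linspace(0.0, 2 * np.pi, 40)[:, None]
yd = np.sin(Xd[:, 0])
pred = local_gp_predict(Xd, yd, np.array([1.3]))
```

Because each prediction location uses only its own small local design, predictions across a large grid are independent of one another, which is what makes the trivially parallel implementation possible.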


IIE Transactions | 2008

Adaptive CUSUM procedures with EWMA-based shift estimators

Wei Jiang; Lianjie Shu; Daniel W. Apley

Adaptive Cumulative SUM (ACUSUM) charts have recently been proposed for providing good overall detection over a range of mean shift sizes. The basic idea of the ACUSUM chart is to first adaptively update the reference value based on an Exponentially Weighted Moving Average (EWMA) estimate of the shift and then to assign a weight to it using a certain type of weighting function. A linear weighting function is proposed that is motivated by likelihood ratio testing concepts and that achieves superior detection performance. Moreover, in view of the EWMA estimate's lower efficiency in tracking relatively large mean shifts, a generalized EWMA estimate is proposed as an alternative. A comparison of run length performance between the proposed ACUSUM scheme and other control charts is shown to be favorable to the former.
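A minimal one-sided sketch of the adaptive idea (the EWMA weight, the minimum reference shift, and the clipping rule here are illustrative choices, not the paper's weighting function):

```python
import numpy as np

def adaptive_cusum(x, lam=0.1, delta_min=1.0):
    """One-sided upper CUSUM whose reference value k is updated adaptively:
    an EWMA of the observations serves as a running estimate of the current
    mean shift, and k is set to half that estimate (floored at delta_min/2)."""
    c, delta_hat, stats = 0.0, 0.0, []
    for xi in x:
        delta_hat = (1.0 - lam) * delta_hat + lam * xi   # EWMA shift estimate
        k = max(delta_min, abs(delta_hat)) / 2.0         # adaptive reference value
        c = max(0.0, c + xi - k)
        stats.append(c)
    return np.array(stats)

# In-control N(0,1) for 100 observations, then a +1 sigma mean shift
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
stats = adaptive_cusum(x)
```

Tying k to an estimate of the current shift is what lets a single chart cover both small and large shifts, instead of fixing k for one target shift size as in a standard CUSUM.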


Technometrics | 2003

Design of Exponentially Weighted Moving Average Control Charts for Autocorrelated Processes With Model Uncertainty

Daniel W. Apley; Hyun Cheol Lee

Residual-based control charts are popular methods for statistical process control of autocorrelated processes. To implement these methods, a time series model of the process is required. In practice, the model must be estimated from data, and model estimation errors can cause the actual in-control average run length to differ substantially from the desired value. This article develops a method for designing residual-based exponentially weighted moving average (EWMA) charts under consideration of the uncertainty in the estimated model parameters. The resulting EWMA control limits are widened by an amount that depends on a number of factors, including the level of model uncertainty.
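The basic residual-based EWMA chart might look like this sketch, with a simple widening factor standing in for the paper's model-uncertainty adjustment (whose actual formula is not reproduced here):

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0, widen=1.0, sigma=1.0):
    """Residual-based EWMA chart: z_t = (1 - lam) z_{t-1} + lam * e_t, with
    steady-state limits +/- widen * L * sigma * sqrt(lam / (2 - lam)).
    Setting widen > 1 mimics inflating the limits to allow for uncertainty
    in the estimated time-series model."""
    z = np.empty(len(residuals))
    acc = 0.0
    for i, ei in enumerate(residuals):
        acc = (1.0 - lam) * acc + lam * ei
        z[i] = acc
    limit = widen * L * sigma * np.sqrt(lam / (2.0 - lam))
    return z, limit, np.flatnonzero(np.abs(z) > limit)

# In-control residuals, then a 3-sigma sustained shift at index 100
rng = np.random.default_rng(4)
e = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 50)])
z, limit, alarms = ewma_chart(e)
```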


Technometrics | 2003

Identifying Spatial Variation Patterns in Multivariate Manufacturing Processes: A Blind Separation Approach

Daniel W. Apley; Ho Young Lee

Large sets of multivariate measurement data are now routinely available through automated in-process measurement in many manufacturing industries. These data typically contain valuable information regarding the nature of each major source of process variability. In this article we assume that each variation source causes a distinct spatial variation pattern in the measurement data. The model that we use to represent the variation patterns is of identical structure to one widely used in the so-called “blind source separation” problem that arises in many sensor-array signal processing applications. We argue that methods developed for blind source separation can be used to identify spatial variation patterns in manufacturing data. We also discuss basic blind source separation concepts and their applicability to diagnosing manufacturing variation.
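As one concrete stand-in for the sensor-array methods the article adapts, here is the classic AMUSE blind source separation algorithm (whitening followed by eigendecomposition of a lagged covariance); it recovers sources up to order, sign, and scale when their autocorrelations at the chosen lag differ. The sinusoidal "sources" and mixing matrix below are illustrative.

```python
import numpy as np

def amuse(X, lag=1):
    """Blind source separation via AMUSE: whiten the channels, then
    diagonalize the symmetrized lag-`lag` covariance of the whitened data.
    X has shape (n_channels, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    R0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(R0)
    W = (E / np.sqrt(d)).T                                # whitening matrix
    Z = W @ X
    R1 = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    R1 = 0.5 * (R1 + R1.T)                                # symmetrize
    _, U = np.linalg.eigh(R1)
    return U.T @ Z                                        # sources, up to order/sign/scale

# Two "variation sources" observed through an unknown mixing matrix
t = np.arange(2000)
S_true = np.vstack([np.sin(0.05 * t), np.sin(0.5 * t + 1.0)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S_hat = amuse(A @ S_true)
corr = np.abs(np.corrcoef(np.vstack([S_hat, S_true]))[:2, 2:])
```

In the manufacturing setting, the recovered mixing columns play the role of the spatial variation patterns, one per major variability source.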

Collaboration


Dive into Daniel W. Apley's collaborations.

Top Co-Authors

Wei Chen
Northwestern University

Jianjun Shi
Georgia Institute of Technology

Zhen Jiang
Northwestern University

Yu Ding
University of Texas at Austin

Shishi Chen
Beijing Institute of Technology