Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Scott D. Grimshaw is active.

Publication


Featured research published by Scott D. Grimshaw.


Journal of Quality Technology | 1997

Control Charts for Quantile Function Values

Scott D. Grimshaw; Frank B. Alt

A control chart is proposed which monitors the conformance of a sample to an in-control distribution using the quantile, or inverse cumulative distribution, function. This method permits detecting changes in distributional shape that may go undetected…
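The abstract is truncated, but the core idea of comparing sample quantiles against an in-control quantile function can be sketched. The following is a minimal illustration, not the authors' chart: the in-control distribution, subgroup size, probability grid, and simulation-based limits are all assumptions made for the example.

```python
# A minimal sketch (not the paper's procedure) of monitoring sample quantiles
# against an in-control quantile function. The in-control model, subgroup
# size, and probability grid are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
in_control = stats.norm(loc=10.0, scale=2.0)    # assumed in-control distribution
p_grid = np.array([0.1, 0.25, 0.5, 0.75, 0.9])  # quantiles to monitor
n = 50                                          # subgroup size

# Simulate the in-control sampling distribution of each sample quantile to
# get pointwise limits (0.1% / 99.9% chosen to keep false alarms rare).
sims = np.quantile(in_control.rvs((5000, n), random_state=rng), p_grid, axis=1)
lcl, ucl = np.quantile(sims, [0.001, 0.999], axis=1)

def check_subgroup(x):
    """Flag a subgroup if any monitored sample quantile leaves its limits."""
    q = np.quantile(x, p_grid)
    return np.any((q < lcl) | (q > ucl))

sample = in_control.rvs(n, random_state=rng)
print(check_subgroup(sample))  # expected False for in-control data
```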


Technometrics | 2001

Eliciting Factor Importance in a Designed Experiment

Scott D. Grimshaw; Bruce Jay Collings; Wayne A. Larsen; Carolyn R. Hurt

Recently, there has been great interest in the Bayes model for analyzing confounded designs. This model suggests that only a few of the main effects and interactions are “active” and estimates the posterior probability that a given factor is active. This article proposes using pairwise comparisons to elicit an expert's opinion and form a well-defined, coherent prior. The prior probability that a factor is active is modeled as a “preference” in the Bradley–Terry linear model for pairwise comparisons. This article provides suggested schedules that minimize the number of comparisons offered to the expert, derived by expressing a comparison schedule as a graph theory problem. Examples demonstrate that an expert's knowledge can be obtained to adequate precision for the Bayes analysis of screening designs by asking a few simple questions.
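As a rough illustration of the Bradley–Terry piece of this approach, the sketch below fits preference scores from hypothetical pairwise comparisons using the standard minorization-maximization iteration; the comparison counts are made up, and the paper's mapping from preferences to prior activity probabilities is not reproduced here.

```python
# A minimal sketch, not the paper's implementation: Bradley-Terry "preference"
# scores fitted from an expert's pairwise comparisons of factors via the
# standard MM iteration. The comparison data below are invented.
import numpy as np

# wins[i, j] = times factor i was judged more likely active than factor j
wins = np.array([
    [0, 3, 4],
    [1, 0, 3],
    [0, 1, 0],
], dtype=float)

k = wins.shape[0]
games = wins + wins.T  # total comparisons between each pair
pi = np.ones(k)        # initial preference scores

for _ in range(200):
    # MM update: pi_i <- (wins by i) / sum_j games_ij / (pi_i + pi_j)
    denom = np.array([
        sum(games[i, j] / (pi[i] + pi[j]) for j in range(k) if j != i)
        for i in range(k)
    ])
    pi = wins.sum(axis=1) / denom
    pi /= pi.sum()  # normalize for identifiability

# Relative preference scores; the paper maps these to prior probabilities
# that each factor is active.
print(pi)
```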


Communications in Statistics - Simulation and Computation | 2005

Estimating Hazard Functions for Discrete Lifetimes

Scott D. Grimshaw; James B. McDonald; Grant Richard McQueen; Steven Thorley

Frequently in inference, the observed data are modeled as a sample from a continuous probability model, implying the observed data are precisely measured. Usually, the actual data available to the investigator are discrete: either because they are rounded, meaning the exact measurement is within an interval defined by some small measurement unit related to the precision of the measuring device, or because the data are truly discrete, meaning the time periods until the event of interest are countable instead of continuous. This article is motivated by the common practice of testing for duration dependence (nonconstant hazard function) in economic and financial data using the continuous Weibull distribution when the data are discrete. A simulation study shows that biased parameter estimates and distorted hypothesis tests result when the degree of discretization is severe. When observations are rounded, as in measuring the time between stock trades, it is proper to treat them as interval-censored. When observations are discrete, as in measuring the length of stock runs, a discrete hazard function must be specified. Both cases are examined in simulation studies and demonstrated on financial data.
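To make the interval-censoring idea concrete, here is a minimal sketch (not the paper's code) of a Weibull fit that treats each rounded duration as lying in an interval of width delta. The simulated data, measurement unit, and starting values are illustrative assumptions.

```python
# A minimal sketch, assuming rounded durations: each observation t is known
# only to lie in [t, t + delta). The interval-censored Weibull log-likelihood
# replaces the density f(t) with F(t + delta) - F(t). Data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
delta = 1.0  # measurement unit (e.g., durations recorded to whole units)
t = np.floor(weibull_min.rvs(c=0.8, scale=5.0, size=500, random_state=rng)
             / delta) * delta

def neg_loglik(params):
    shape, scale = np.exp(params)  # log-parameterization keeps both positive
    lo = weibull_min.cdf(t, c=shape, scale=scale)
    hi = weibull_min.cdf(t + delta, c=shape, scale=scale)
    return -np.sum(np.log(hi - lo + 1e-12))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
# A shape estimate below 1 suggests a decreasing (nonconstant) hazard.
print(shape_hat, scale_hat)
```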


Computational Statistics & Data Analysis | 1998

A quantitative method for identifying active contrasts in unreplicated factorial designs based on the half-normal plot

John S. Lawson; Scott D. Grimshaw; Jason Burt

Normal or half-normal plots are often used to judge the significance of effect contrasts in unreplicated factorial or fractional factorial designs. Many quantitative procedures have also been proposed in recent literature to reduce the subjectivity of the graphical methods. In this paper we describe a new method for judging the significance of effects that is both quantitative and graphical. The method consists of fitting a simple least-squares line and prediction limits to the half-normal probability plot. The rationale for the statistic comes from recent papers by Lenth and Loh. This new method is a blend of Lenth's and Loh's methods. It has the computational simplicity of Lenth's method with the graphical interpretation and increased power of Loh's method. We describe the calculation of this new statistic, which is a supplement to the half-normal plot. We present a table of critical values for the statistic and a simulation study describing the power properties of the test. Finally, we present an example of the new technique using a data set from Box and Meyer. The new statistic developed in this paper is a powerful quantitative tool for reducing the subjectivity of the half-normal plot in assessing the significance of effects in unreplicated designs. It can be computed simply with commands available in standard statistical programs such as SAS, MINITAB, or S-PLUS, and can be used graphically or numerically for more accuracy.
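A minimal sketch in the spirit of the method, though not the paper's exact statistic or tabulated critical values: fit a least-squares line through the lower portion of the half-normal plot and flag contrasts that fall far above it. The contrast values and the 2x cutoff below are invented for illustration.

```python
# Illustrative variant of a quantitative half-normal plot test: the smallest
# contrasts are presumed inert, a no-intercept line fitted through them
# estimates the contrast standard error, and large departures are flagged.
import numpy as np
from scipy.stats import norm

contrasts = np.array([-0.3, 0.5, 8.2, -0.7, 0.2, 6.1, -0.4, 1.0,
                      0.6, -0.1, 0.8, -0.5, 0.3, 0.9, -0.2])  # made-up data
m = len(contrasts)
abs_sorted = np.sort(np.abs(contrasts))
# Half-normal plotting positions for ordered absolute contrasts
q = norm.ppf(0.5 + 0.5 * (np.arange(1, m + 1) - 0.5) / m)

# Fit a no-intercept least-squares line using the smallest ~half of the
# contrasts; its slope estimates the contrast standard error.
half = m // 2
slope = np.sum(q[:half] * abs_sorted[:half]) / np.sum(q[:half] ** 2)

# Flag contrasts well above the fitted line (the 2x factor stands in for the
# paper's tabulated prediction limits).
active = abs_sorted > 2.0 * slope * q
print(abs_sorted[active])  # contrasts judged active
```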


Public Finance Review | 2010

The Effect of Local Option Sales Taxes on Local Sales

Gary C. Cornia; Scott D. Grimshaw; Ray D. Nelson; Lawrence C. Walters

Because retail sales taxes generate substantial revenue for many local governments, public officials contemplating differential local option tax rates must carefully assess the potential impacts of such decisions on purchasing decisions. The authors use a unique pooled time series to examine these impacts and apply a methodology that permits an analysis of the effects on purchasing decisions of sales tax rate differences across numerous consumer goods. The results indicate that the response to sales tax rate differences depends on the general characteristics of the goods being purchased. A unique variable that controls for the distance to the next significant alternative for making a purchase also provides key insights. The observed significance for this variable and its interaction with tax rates has significant public policy implications.


The American Statistician | 2015

A Framework for Infusing Authentic Data Experiences Within Statistics Courses

Scott D. Grimshaw

Working with complex data is one of the important updates to the 2014 ASA Curriculum Guidelines for Undergraduate Programs in Statistical Science. Infusing “authentic data experiences” within courses allows students opportunities to learn and practice data skills as they prepare a dataset for analysis. While more modest in scope than a senior-level culminating experience, authentic data experiences provide an opportunity to demonstrate connections between data skills and statistical skills. The result is more practice of data skills for undergraduate statisticians.


Journal of Quality Technology | 2013

Spatial Control Charts for the Mean

Scott D. Grimshaw; Natalie J. Blades; Michael Miles

Developments in metrology provide the opportunity to improve process monitoring by obtaining many measurements on each sampled unit. Increasing the number of measurements may increase the sensitivity of control charts to detection of flaws in local regions; however, the correlation between spatially proximal measurements may introduce redundancy and inefficiency in the test. This paper extends multivariate statistical process control to spatial-data monitoring by recognizing the spatial correlation between multiple measurements on the same item and replacing the sample covariance matrix with a parameterized covariance based on the semivariogram. The properties of this control chart for the mean of a spatial process are explored with simulated data and the method is illustrated with an example using ultrasonic technology to obtain nondestructive measurements of bottle thickness.
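A minimal sketch of the underlying statistic, assuming an exponential covariance model with known parameters (the paper estimates them via the semivariogram): the sample covariance matrix in a Hotelling-type chart is replaced by a covariance built from distances between measurement sites. The locations, sill, and range below are illustrative.

```python
# Sketch of a spatial T^2-style statistic: covariance between measurements on
# the same item comes from a parameterized spatial model rather than the
# sample covariance matrix. All parameter values are assumptions.
import numpy as np

sites = np.linspace(0.0, 10.0, 20)           # measurement locations on an item
h = np.abs(sites[:, None] - sites[None, :])  # pairwise distances

sigma2, rho = 1.0, 2.0           # sill and range (assumed known or pre-fitted)
cov = sigma2 * np.exp(-h / rho)  # exponential covariance model

mu0 = np.zeros(len(sites))       # in-control mean profile
rng = np.random.default_rng(1)
x = rng.multivariate_normal(mu0, cov)  # one sampled unit's measurements

# Hotelling-style statistic with the spatially parameterized covariance.
d = x - mu0
t2 = d @ np.linalg.solve(cov, d)
# With known covariance, compare against a chi-square(20) upper control limit.
print(t2)
```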


Quality Engineering | 1999

Control Limits for Group Charts

Scott D. Grimshaw; G. Rex Bryce; David Meade

An application of group charts to a multiple-filling-head bottling machine at the BYU Creamery motivated an investigation of the in-control average run length (ARL). It is shown that the ARL of an in-control process with k streams is approximately 370/k. This…
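The 370/k approximation follows from simple arithmetic: with 3-sigma limits each stream raises a false alarm with probability about 0.0027 per sample, so k independent streams signal with probability about 1 - (1 - 0.0027)^k, roughly 0.0027k for small k. A quick check (not code from the paper):

```python
# Verify ARL ~= 370/k: per-sample false-alarm probability for one stream under
# 3-sigma limits is p ~= 0.0027, so k streams give ARL = 1 / (1 - (1 - p)^k).
from scipy.stats import norm

p = 2 * norm.sf(3.0)  # two-sided 3-sigma tail probability, ~0.0027
for k in (1, 4, 8, 16):
    arl_exact = 1.0 / (1.0 - (1.0 - p) ** k)
    print(k, round(arl_exact, 1), round(370.4 / k, 1))  # exact vs. 370/k
```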


Journal of Quality Technology | 2016

Nonlinear Profile Monitoring for Oven-Temperature Data

Willis A. Jensen; Scott D. Grimshaw; Ben Espen

Problem: The monitoring of a key input, temperature, in a manufacturing process produces large amounts of data. It is difficult to determine an appropriate control-chart methodology that allows the chart user to determine when there are problems with this step of the manufacturing process. Current approaches for process monitoring involve the output data gathered after the process has been completed. It would be preferable to establish process monitoring on the process inputs. However, this is challenging when there is a large amount of process input data. Current phase I monitoring of the process inputs involves individual control charts on selected data from the temperature profiles, representing features chosen by expert judgment. This approach does not use all the data, nor does it account for the potential correlation among the selected data.

Approach: We propose the use of a nonlinear model for the profiles, thereby reducing each profile to a smaller set of parameter estimates. For this nonlinear-model data reduction approach, the parameter estimates and residual variability can then be used in the appropriate monitoring procedure. We show that a control chart based on the classical covariance-matrix estimate fails to detect large, significant process changes, while the successive-differences covariance matrix performs better. The statistic based on the successive differences is modified to account for the correlation between profiles. We illustrate both the phase I and phase II analyses for these data.

Results: The proposed data reduction approach and monitoring procedure make use of all the available data and detect important process shifts, and the interpretation of the nonlinear model parameters facilitates root-cause investigation. This parametric approach can be easily automated using existing statistical software and results in a smaller number of control charts, which is a manageable way to determine the current state of the process. We highlight some issues raised by this particular dataset that have not been adequately addressed in the profile-monitoring literature.
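As a sketch of the successive-differences piece (the general technique the abstract names, not the authors' full procedure), the phase I T^2 below uses the estimator S_D = V'V / (2(m-1)), where V stacks the differences between consecutive profiles' parameter estimates. The data and dimensions are made up.

```python
# Phase I T^2 on per-profile parameter estimates using the successive-
# differences covariance estimator, which resists contamination from step
# changes better than the pooled sample covariance. Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
m, p = 30, 3                      # 30 profiles, 3 fitted parameters each
params = rng.normal(size=(m, p))  # stand-in for nonlinear-model estimates

diffs = np.diff(params, axis=0)          # differences of consecutive profiles
s_d = diffs.T @ diffs / (2.0 * (m - 1))  # successive-differences estimator
center = params.mean(axis=0)

# T^2 value for each profile relative to the overall center.
dev = params - center
t2 = np.einsum("ij,jk,ik->i", dev, np.linalg.inv(s_d), dev)
print(t2.round(2))  # large values point to profiles worth investigating
```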


The American Statistician | 2001

Statistics in Preschool

Sterling C. Hilton; Scott D. Grimshaw; Genan Anderson

Statistics education has become established in the elementary school curriculum. Because the principles of statistics underlie many basic learning concepts, it is not surprising to discover statistics principles in the preschool curriculum as well. This article describes how statistical tools and concepts are included in the Brigham Young University (BYU) Child and Family Studies Laboratory preschool curriculum. At BYU, children study topics as long as they are interested, and teachers use projects to create a rich learning environment. This article describes how statistical projects—such as the “Question of the Day,” survey work, and experiments—are used to teach young children to pose questions, make operational definitions, summarize data, understand variation, gather data, construct bar charts, and apply the scientific method. BYU teachers also use statistical projects to teach many other important preschool skills.

Collaboration


Dive into Scott D. Grimshaw's collaborations.

Top Co-Authors

Michael Miles, Brigham Young University
David Meade, Advanced Micro Devices