
Publication


Featured research published by Laura J. Freeman.


Journal of Quality Technology | 2010

Reliability Data Analysis for Life Test Experiments with Subsampling

Laura J. Freeman; G. Geoffrey Vining

Product reliability is an important characteristic for all manufacturers, engineers, and consumers. Industrial statisticians have been planning experiments for years to improve product quality and reliability. The standard analysis techniques currently in use for reliability data assume a completely randomized design. However, analysis methodologies for experimental designs more complex than the completely randomized design have not been a focus of the reliability field. We provide a new, yet simple, analysis technique for reliability data from designed experiments containing subsampling. The technique is illustrated on a popular reliability data set, and we discuss the implications of using previous analysis methods versus our new approach.


Quality Engineering | 2010

The Evaluation of Median-Rank Regression and Maximum Likelihood Estimation Techniques for a Two-Parameter Weibull Distribution

Denisa Olteanu; Laura J. Freeman

ABSTRACT Practitioners frequently model failure times in reliability analysis via the Weibull distribution. Often risk managers must make decisions after only a few failures. Thus, an important question is how to estimate the parameters of this distribution for extremely small sample sizes and/or highly censored data. This study evaluates two methods: maximum likelihood estimation (MLE) and median-rank regression (MRR). Asymptotically, we know that MLE has superior properties; however, this study seeks to evaluate these two methods for small numbers of failures and high degrees of censoring, where one cannot depend on the asymptotic properties of maximum likelihood estimation. This research is the direct result of a practitioner's question at Pratt & Whitney, where both methods of estimation are in use. We evaluate the two estimation methods for extremely high censoring cases, focusing on censoring greater than 99%. Such extreme levels of censoring present certain difficulties. For example, the estimated parameters based on MLE follow extremely skewed distributions, even under a log transformation. This article compares the two estimation methods based on parameter estimation and ability to predict future failures. We provide recommendations on which method to use based on sample size, the parameter values, and the degree of censoring present in the data.
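
To make the comparison concrete, here is a minimal sketch (not taken from the article) that contrasts the two estimators on one simulated, heavily censored Weibull sample. The sample size, true parameters, censoring level, and use of Benard's median-rank approximation are illustrative assumptions; the article's study covers far more extreme censoring and many replicates.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(7)

# Simulate a small sample from a Weibull(shape=2, scale=100), then censor heavily.
shape_true, scale_true, n = 2.0, 100.0, 30
t = scale_true * rng.weibull(shape_true, size=n)
censor_time = np.quantile(t, 0.2)            # Type I censoring: only ~20% of units fail
failed = t <= censor_time
obs = np.minimum(t, censor_time)

# --- Maximum likelihood estimation (censoring enters through the likelihood) ---
def neg_loglik(params):
    beta, eta = params                       # beta = shape, eta = scale
    if beta <= 0 or eta <= 0:
        return np.inf
    z = obs / eta
    ll_fail = np.log(beta / eta) + (beta - 1) * np.log(z[failed]) - z[failed] ** beta
    ll_cens = -z[~failed] ** beta            # log survival for censored units
    return -(ll_fail.sum() + ll_cens.sum())

beta_mle, eta_mle = optimize.minimize(neg_loglik, x0=[1.0, np.median(obs)],
                                      method="Nelder-Mead").x

# --- Median-rank regression (uses only the observed failure times) ---
# All censored times here exceed every failure time, so plain ranks suffice.
t_fail = np.sort(obs[failed])
ranks = np.arange(1, t_fail.size + 1)
median_rank = (ranks - 0.3) / (n + 0.4)      # Benard's approximation
x = np.log(t_fail)                           # fit y = beta * x - beta * ln(eta)
y = np.log(-np.log(1.0 - median_rank))
slope, intercept = np.polyfit(x, y, 1)
beta_mrr, eta_mrr = slope, np.exp(-intercept / slope)

print(f"MLE: shape={beta_mle:.2f}, scale={eta_mle:.1f}")
print(f"MRR: shape={beta_mrr:.2f}, scale={eta_mrr:.1f}")
```

Repeating this over many simulated samples, at much higher censoring levels, is how one would compare bias and failure-prediction performance; the sketch shows only a single replicate.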


Quality Engineering | 2013

A Tutorial on the Planning of Experiments

Laura J. Freeman; Anne G. Ryan; Jennifer L.K. Kensler; Rebecca M. Dickinson; G. Geoffrey Vining

ABSTRACT This tutorial outlines the basic procedures for planning experiments within the context of the scientific method. Too often quality practitioners fail to appreciate how subject-matter expertise must interact with statistical expertise to generate efficient and effective experimental programs. This tutorial guides the quality practitioner through the basic steps, demonstrated by extensive past experience, that consistently lead to successful results. This tutorial makes extensive use of flowcharts to illustrate the basic process. Two case studies summarize the applications of the methodology.


Quality and Reliability Engineering International | 2013

Reliability Data Analysis for Life Test Designed Experiments with Sub‐Sampling

Laura J. Freeman; G. Geoffrey Vining

The ability to model lifetime data from life test experiments is of paramount importance to all manufacturers, engineers, and consumers. The Weibull distribution is commonly used to model the data from life tests. Standard Weibull analysis assumes a completely randomized design. However, not all life test experiments come from completely randomized designs. Experiments involving sub-sampling require a method for properly modeling the data. We provide a Weibull nonlinear mixed model (NLMM) methodology for incorporating random effects in the analysis. We apply this methodology to a reliability life test on glass capacitors. We compare the NLMM methodology to other available methods for incorporating random effects in reliability analysis. A simulation study reveals that the method proposed in this paper is robust to both model misspecification and increasing levels of variance on the random effect.
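
The computational core of such a model is integrating the random effect out of the likelihood. The sketch below is an illustrative reconstruction, not the authors' code: it evaluates the marginal log-likelihood of one test stand's subsampled failure times by Gauss-Hermite quadrature, assuming the random effect shifts the Weibull scale on the log scale. A full fit would sum this quantity over stands and maximize it over all parameters.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def weibull_logpdf(t, beta, eta):
    """Log density of a Weibull with shape beta and scale eta."""
    z = t / eta
    return np.log(beta / eta) + (beta - 1.0) * np.log(z) - z ** beta

def marginal_loglik_one_stand(t, beta, mu, sigma_u, n_quad=20):
    """Marginal log-likelihood of the subsampled failure times t on one test stand,
    integrating a normal random effect out of the Weibull scale:
        t_j | u ~ Weibull(shape=beta, scale=exp(mu + u)),   u ~ N(0, sigma_u^2).
    """
    nodes, weights = hermgauss(n_quad)          # rule for integrals against exp(-x^2)
    u = np.sqrt(2.0) * sigma_u * nodes          # change of variables to N(0, sigma_u^2)
    cond_ll = np.array([weibull_logpdf(t, beta, np.exp(mu + uk)).sum() for uk in u])
    m = cond_ll.max()                           # log-sum-exp for numerical stability
    return m + np.log(np.sum(weights * np.exp(cond_ll - m))) - 0.5 * np.log(np.pi)

# Illustrative data: eight items subsampled from a single test stand.
times = np.array([55.0, 62.0, 70.0, 74.0, 81.0, 90.0, 104.0, 120.0])
print(marginal_loglik_one_stand(times, beta=2.0, mu=np.log(85.0), sigma_u=0.3))
```

A completely randomized analysis would instead treat every item as independent, which understates the uncertainty whenever items sharing a test stand are more alike than items on different stands.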


Quality Engineering | 2014

A Practitioner's Guide to Analyzing Reliability Experiments with Random Blocks and Subsampling

Jennifer L.K. Kensler; Laura J. Freeman; G. Geoffrey Vining

ABSTRACT Reliability experiments provide important information regarding the life of a product, including how various factors affect product life. Current analyses of reliability data usually assume a completely randomized design. However, reliability experiments frequently contain subsampling, which represents a restriction on randomization. A typical experiment involves applying treatments to test stands, with several items placed on each test stand. In addition, raw materials used in experiments are often produced in batches, leading to a design involving blocks. This article proposes a method using Weibull regression for analyzing reliability experiments with random blocks and subsampling. An illustration of the method is provided.


Journal of Quality Technology | 2015

Analysis of Reliability Experiments with Random Blocks and Subsampling

Jennifer L.K. Kensler; Laura J. Freeman; G. Geoffrey Vining

Reliability experiments provide important information regarding the life of an item, including how various factors affect item life. Current analyses of reliability data usually assume a completely randomized design. However, reliability experiments are frequently executed with restrictions on randomization. Common restrictions on randomization include subsampling and blocking. This paper proposes a nonlinear mixed-model analysis for reliability experiments with random blocks and subsampling. A simulation study compares this nonlinear mixed model analysis with a two-stage analysis and traditional reliability analyses.


Quality Engineering | 2011

A Cautionary Tale: Small Sample Size Concerns for Grouped Lifetime Data

Laura J. Freeman

ABSTRACT Often, lifetime data come from experiments where failure times are grouped. The Weibull distribution is a popular distribution for modeling failure times. Maximum likelihood estimation (MLE) has outstanding large-sample properties for the Weibull distribution. This article evaluates small-sample properties of MLEs for grouped data. We evaluate sample size requirements for the MLE's asymptotic properties to take effect. We compare type I and type II censoring for pooled experiments and conclude that bias for the shape parameter estimate can be alarmingly high, especially for type II censored data. Finally, we investigate the benefits of the pooled analysis.
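
As a rough illustration of the kind of bias check the article performs, the simulation below repeatedly draws small Weibull samples, groups the failure times into inspection intervals with right censoring after the last inspection, fits the grouped-data MLE, and reports the average shape estimate. The sample size, inspection schedule, and true parameters are arbitrary choices for illustration, not the article's settings.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(11)
shape_true, scale_true = 2.0, 100.0
inspections = np.array([25.0, 50.0, 75.0, 100.0])   # failures observed only at inspections
n, n_sim = 20, 500
edges = np.concatenate(([0.0], inspections))

def weibull_cdf(t, beta, eta):
    return 1.0 - np.exp(-(t / eta) ** beta)

def fit_grouped(counts, n_censored):
    """Weibull MLE from grouped (interval) failure counts with right censoring
    at the final inspection time."""
    def nll(params):
        beta, eta = params
        if beta <= 0 or eta <= 0:
            return np.inf
        p = np.diff(weibull_cdf(edges, beta, eta))            # interval probabilities
        surv = 1.0 - weibull_cdf(inspections[-1], beta, eta)  # still running at the end
        p, surv = np.clip(p, 1e-300, None), max(surv, 1e-300)
        return -(np.sum(counts * np.log(p)) + n_censored * np.log(surv))
    return optimize.minimize(nll, x0=[1.0, 80.0], method="Nelder-Mead").x

shape_hats = []
for _ in range(n_sim):
    t = scale_true * rng.weibull(shape_true, size=n)
    counts = np.histogram(t, bins=edges)[0]
    shape_hats.append(fit_grouped(counts, n - counts.sum())[0])

shape_hats = np.array(shape_hats)
print(f"true shape = {shape_true}, mean grouped-data MLE = {shape_hats.mean():.2f} "
      f"(relative bias {100 * (shape_hats.mean() / shape_true - 1):+.1f}%)")
```

The same harness, run with a type II stopping rule (testing until a fixed number of failures) instead of a fixed inspection window, is how one would examine the article's observation that shape-parameter bias grows as censoring becomes more severe.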


Journal of Quality Technology | 2015

Statistical Methods for Combining Information: Stryker Family of Vehicles Reliability Case Study

Stefan H. Steiner; Rebecca M. Dickinson; Laura J. Freeman; Bruce A. Simpson; Alyson G. Wilson

Problem: Reliability is an essential element in assessing the operational suitability of Department of Defense weapon systems. Reliability takes a prominent role in both the design and analysis of operational tests. In the current era of reduced budgets and increased reliability requirements, it is challenging to verify reliability requirements in a single test. Furthermore, all available data should be considered in order to ensure evaluations provide the most appropriate analysis of the system's reliability. Approach: This paper describes the benefits of using parametric statistical models to combine information across multiple testing events. Both frequentist and Bayesian inference techniques are employed, and they are compared and contrasted to illustrate different statistical methods for combining information. We apply these methods to data collected during the developmental and operational test phases for the Stryker family of vehicles. Results: We show that, when we combine the available information across two test phases for the Stryker family of vehicles, reliability estimates are more accurate and precise than those reported previously using traditional methods that use only operational test data in their reliability assessments.
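
The article's models are richer (parametric reliability models fit across multiple test events, with both frequentist and Bayesian machinery), but the basic Bayesian mechanism of carrying information forward can be shown with a conjugate exponential-gamma sketch: developmental-test (DT) failure data shape a prior that is then updated with operational-test (OT) data. The failure counts, test hours, and exponential-lifetime assumption below are purely illustrative and are not the Stryker data.

```python
from scipy import stats

# Illustrative failure data (not from the Stryker study).
dt_hours, dt_failures = 5200.0, 8      # developmental test: total hours, failure count
ot_hours, ot_failures = 1800.0, 2      # operational test

# Exponential lifetimes with failure rate lam; a gamma(a, b) prior on lam is conjugate.
# Start from a weak prior, update with DT data, then update again with OT data.
a0, b0 = 0.5, 1.0                      # vague gamma prior (shape, rate)
a_dt, b_dt = a0 + dt_failures, b0 + dt_hours
a_ot, b_ot = a_dt + ot_failures, b_dt + ot_hours

post = stats.gamma(a_ot, scale=1.0 / b_ot)     # posterior for the failure rate
mtbf_mean = b_ot / (a_ot - 1.0)                # posterior mean of 1/lam (valid for a_ot > 1)
lam_hi, lam_lo = post.ppf([0.975, 0.025])      # rate quantiles; invert to bound the MTBF
print(f"combined-data MTBF: mean ~ {mtbf_mean:.0f} h, "
      f"95% interval ~ ({1/lam_hi:.0f}, {1/lam_lo:.0f}) h")

# For comparison, an OT-only point estimate throws away the DT information entirely.
print(f"OT-only MTBF point estimate: {ot_hours / ot_failures:.0f} h")
```

The frequentist analogue in the paper pools the data through a shared parametric model rather than a prior; either way, the combined analysis reflects both test phases instead of the operational test alone.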


Archive | 2015

An Overview of Designing Experiments for Reliability Data

G. Geoffrey Vining; Laura J. Freeman; Jennifer L.K. Kensler

The reliability of products and processes will become increasingly important in the near future. One definition of reliability is “quality over time.” Customers will increasingly base their purchasing decisions on how long they can expect their products and processes to deliver high-quality results. As a result, there will be increasing demands for manufacturers to design appropriate experiments to improve reliability. This paper begins with a review of the current practice for planning reliability experiments. It then reviews some recent work that properly accounts for the experimental protocol. A basic issue is that most reliability engineers have little training in planning experiments, while most experimental design experts have little background in reliability data.


Quality and Reliability Engineering International | 2016

Regularization for Continuously Observed Ordinal Response Variables with Piecewise-constant Functional Covariates

Matthew Avery; Mark Orndorff; Timothy J. Robinson; Laura J. Freeman

This paper investigates regularization for continuously observed covariates that resemble step functions. The motivating examples come from operational test data from a recent US Department of Defense test of the Shadow Tactical Unmanned Aircraft system. The response variable, quality of video provided by the Shadow to friendly ground units, was measured on an ordinal scale continuously over time. Functional covariates, altitude and distance, can be well approximated by step functions. Two approaches for regularizing these covariates are considered, including a thinning approach commonly used within the Department of Defense to address autocorrelated time series data, and a novel ‘smoothing’ approach, which first approximates the covariates as step functions and then treats each ‘step’ as a uniquely observed data point. Datasets resulting from both approaches are fit using a mixed-model cumulative logistic regression, and we compare their results. While the thinning approach identifies altitude as having a significant impact on video quality, the smoothing approach finds no evidence of an effect. This difference is attributable to the larger effective sample size produced by thinning. System characteristics make it unlikely that video quality would degrade at higher altitudes, suggesting that the thinning approach has produced a Type I error. By accounting for the functional characteristics of the covariates, the novel smoothing approach has produced a more accurate characterization of the Shadow's ability to provide full motion video to supported units.
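
Here is a small sketch of the two data-reduction ideas the abstract contrasts, applied to a generic, densely sampled covariate that is roughly piecewise constant: thinning keeps every k-th record, while the ‘smoothing’ approach collapses each run of (nearly) constant values to a single record. The column names, step tolerance, and simulated altitude profile are illustrative assumptions; the article's downstream model, a mixed-model cumulative logistic regression, is not reproduced here.

```python
import numpy as np
import pandas as pd

# Illustrative flight record: altitude sampled once per second, roughly piecewise constant.
rng = np.random.default_rng(3)
levels, durations = [8000, 10000, 12000, 10000], [150, 200, 100, 150]
altitude = np.repeat(levels, durations) + rng.normal(0, 15, sum(durations))
df = pd.DataFrame({"t": np.arange(altitude.size), "altitude": altitude})

# --- Thinning: keep every k-th record (effective sample size stays large) ---
thinned = df.iloc[::30]

# --- "Smoothing": snap altitude to the nearest step, keep one record per run of equal steps ---
step = 500.0                                           # tolerance that defines a "step"
df["alt_step"] = (df["altitude"] / step).round() * step
run_id = (df["alt_step"] != df["alt_step"].shift()).cumsum()
smoothed = (df.groupby(run_id)
              .agg(t_start=("t", "min"), t_end=("t", "max"), altitude=("alt_step", "first")))

print(len(df), "raw records ->", len(thinned), "after thinning,", len(smoothed), "after smoothing")
```

Each collapsed row then enters the regression as one observation, so the effective sample size reflects the number of distinct flight conditions rather than the sensor's sampling rate, which is the mechanism behind the differing significance conclusions described above.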

Collaboration


Dive into Laura J. Freeman's collaboration.

Top Co-Authors

Alyson G. Wilson

North Carolina State University
