Publication


Featured research published by James O. Westgard.


Clinical Chemistry | 2008

Use and Interpretation of Common Statistical Tests in Method Comparison Studies

James O. Westgard

We have studied the usefulness of common statistical tests as applied to method comparison studies. We simulated different types of errors in test sets of data to determine the sensitivity of different statistical parameters. Least-squares parameters (slope of the least-squares line, its y-intercept, and the standard error of estimate in the y direction) provide specific estimates of proportional, constant, and random errors, but comparison data must be presented graphically to detect limitations caused by nonlinearity and errant points. t-test parameters (bias, standard deviation of differences) provide estimates of constant and random errors, but only when proportional error is absent. Least-squares analysis can estimate proportional error and should be considered a prerequisite to t-test analysis. The correlation coefficient (r) is sensitive only to random error, but is not easily interpreted. Values for r, t, and F are not useful in making decisions on the acceptability of performance. These decisions should be judgments on the errors that are tolerable. Statistical tests can be applied in a manner that provides specific estimates of these errors.
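
The statistical parameters discussed above reduce to standard calculations on paired results. The following is a minimal sketch in Python (not code from the paper); the paired measurements and variable names are made up for illustration.

```python
import numpy as np

# Hypothetical paired results: x = comparison method, y = test method.
x = np.array([1.2, 2.5, 3.1, 4.8, 5.0, 6.7, 7.9, 9.4])
y = np.array([1.3, 2.4, 3.3, 5.0, 5.1, 6.9, 8.2, 9.8])
n = len(x)

# Least-squares line y = a + b*x: the slope estimates proportional error,
# the intercept estimates constant error.
b, a = np.polyfit(x, y, 1)

# Standard error of estimate in the y direction: random error about the line.
residuals = y - (a + b * x)
s_yx = np.sqrt(np.sum(residuals**2) / (n - 2))

# Paired t-test quantities: mean difference (bias) and SD of the differences.
d = y - x
bias = d.mean()
sd_d = d.std(ddof=1)

# Correlation coefficient.
r = np.corrcoef(x, y)[0, 1]

print(f"slope={b:.3f}  intercept={a:.3f}  s_y.x={s_yx:.3f}")
print(f"bias={bias:.3f}  SD of differences={sd_d:.3f}  r={r:.4f}")
```

As the abstract notes, these summary numbers should always be accompanied by a scatter plot of the paired results so that nonlinearity and errant points are not missed.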


American Journal of Clinical Pathology | 2006

The Quality of Laboratory Testing Today: An Assessment of σ Metrics for Analytic Quality Using Performance Data From Proficiency Testing Surveys and the CLIA Criteria for Acceptable Performance

James O. Westgard; Sten A. Westgard

To assess the analytic quality of laboratory testing in the United States, we obtained proficiency testing survey results from several national programs that comply with Clinical Laboratory Improvement Amendments (CLIA) regulations. We studied regulated tests (cholesterol, glucose, calcium, fibrinogen, and prothrombin time) and nonregulated tests (international normalized ratio [INR], glycohemoglobin, and prostate-specific antigen [PSA]). Quality was assessed on the sigma scale with a benchmark for minimum process performance of 3 sigma and a goal for world-class quality of 6 sigma. Based on the CLIA criteria for acceptable performance in proficiency testing (allowable total errors [TEa]), the national quality of cholesterol testing (TEa = 10%) was estimated at 2.9 to 3.0 sigma; glucose (TEa = 10%), 2.9 to 3.3; calcium (TEa = 1.0 mg/dL), 2.8 to 3.0; prothrombin time (TEa = 15%), 1.8; INR (TEa = 20%), 2.4 to 3.5; fibrinogen (TEa = 20%), 1.8 to 3.2; glycohemoglobin (TEa = 10%), 1.9 to 2.6; and PSA (TEa = 10%), 1.2 to 1.8. Improving the analytic quality of laboratory tests requires better measurement performance and more intensive quality control monitoring than the CLIA minimum of 2 levels per day.
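
Sigma values of this kind are conventionally computed as Sigma = (TEa - |bias|) / CV, with all terms expressed in percent. A minimal sketch, using illustrative bias and CV values that are not taken from the surveys:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Conventional Sigma-metric: the margin left after bias, in units of the CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative only: a cholesterol method with TEa = 10%, a 1.0% bias, and a
# 3.0% CV would sit at 3.0 sigma, the minimum-process-performance benchmark.
print(sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=3.0))  # -> 3.0
```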


Annals of Clinical Biochemistry | 2016

Quality control review: implementing a scientifically based quality control system

James O. Westgard; Sten A. Westgard

This review focuses on statistical quality control in the context of a quality management system. It describes the use of a ‘Sigma-metric’ for validating the performance of a new examination procedure, developing a total quality control strategy, selecting a statistical quality control procedure and monitoring ongoing quality on the sigma scale. Acceptable method performance is a prerequisite to the design and implementation of statistical quality control procedures. Statistical quality control can only monitor performance and, when properly designed, alert analysts to the presence of additional errors that occur because of unstable performance. A new statistical quality control planning tool, called ‘Westgard Sigma Rules,’ provides a simple and quick way to select control rules and the number of control measurements needed to detect medically important errors. The concept of a quality control plan is described, along with alternative adaptations of a total quality control plan and a risk-based individualized quality control plan. Finally, the ongoing monitoring of analytic performance and test quality is discussed, including determination of measurement uncertainty from statistical quality control data collected under intermediate precision conditions and bias determined from proficiency testing/external quality assessment surveys. A new graphical tool, called the Sigma Quality Assessment Chart, is recommended for demonstrating the quality of current examination procedures on the sigma scale.
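
As a rough illustration of how a Sigma-metric can drive the choice of control rules and number of control measurements, the sketch below maps sigma bands to QC strategies. The thresholds and rule sets only approximate the published ‘Westgard Sigma Rules’ guidance; the review itself is the authoritative source, and the function name and run details here are assumptions.

```python
def suggest_qc_strategy(sigma):
    """Illustrative mapping from Sigma-metric to a statistical QC design.

    Higher sigma allows simpler rules and fewer control measurements (N);
    lower sigma calls for multirule QC and more controls. The thresholds
    are approximate, not the review's exact table.
    """
    if sigma >= 6:
        return "1:3s rule, N=2"
    if sigma >= 5:
        return "1:3s / 2:2s / R:4s multirule, N=2"
    if sigma >= 4:
        return "1:3s / 2:2s / R:4s / 4:1s multirule, N=4"
    return "Full multirule (including 8:x), N=6 to 8, and more frequent QC"

print(suggest_qc_strategy(5.3))  # -> "1:3s / 2:2s / R:4s multirule, N=2"
```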


Clinical Chemistry and Laboratory Medicine | 2016

Useful measures and models for analytical quality management in medical laboratories

James O. Westgard

The 2014 Milan Conference “Defining analytical performance goals 15 years after the Stockholm Conference” initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, and comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences between the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory, and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.
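
The contrast drawn above between TAE and MU can be made concrete with the usual first-order expressions: TAE folds bias and imprecision into a single interval around a reported result, while expanded MU multiplies a combined standard uncertainty by a coverage factor and assumes known bias has been corrected. The snippet below is a sketch of those textbook forms, not formulas taken from this opinion piece; the 1.65 and 2.0 factors are the commonly used defaults.

```python
def total_analytical_error(bias_pct, cv_pct, z=1.65):
    """Common TAE estimate: worst-case bias plus z times the imprecision."""
    return abs(bias_pct) + z * cv_pct

def expanded_uncertainty(u_combined, k=2.0):
    """Expanded MU: coverage factor k times the combined standard uncertainty."""
    return k * u_combined

print(total_analytical_error(bias_pct=1.0, cv_pct=2.0))  # -> 4.3
print(expanded_uncertainty(u_combined=2.0))              # -> 4.0
```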


Critical Reviews in Clinical Laboratory Sciences | 1981

Precision and Accuracy: Concepts and Assessment by Method Evaluation Testing

James O. Westgard; John A. Lott

Achieving precision and accuracy in routine clinical analyses is a complex task, requiring the identification, estimation, and elimination of sources of analytical error. This review first considers concepts of precision and accuracy, including discussions of the meaning of measurement process, analytical method, state of statistical control, precision, imprecision, accuracy, inaccuracy, systematic error, overall or total error, true value, traceability, and compatibility. These concepts provide the basis upon which the performance of analytical methods can be evaluated. The second part of the review considers how precision and accuracy are assessed by the use of method evaluation experiments. The approach emphasizes the development of an evaluation protocol based on the analytical characteristics that represent the performance of the method. This includes discussions of the familiarization period; testing analytic range and linearity; testing precision by a replication experiment; testing accuracy by recovery, interference, and comparison-of-methods experiments; the selection of a comparative analytical method; the statistical analysis of method comparison data, including its interpretation; and collaborative testing.
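
Two of the experiments named above come down to simple arithmetic. The sketch below shows the usual calculations for a recovery experiment and an interference experiment; the concentrations are made up, and the function names are assumptions for illustration.

```python
def percent_recovery(baseline, spiked, amount_added):
    """Recovery experiment: percent of the added analyte that is measured back."""
    return 100.0 * (spiked - baseline) / amount_added

def interference_bias(with_interferent, without_interferent):
    """Interference experiment: shift in the measured analyte caused by the added interferent."""
    return with_interferent - without_interferent

# Hypothetical glucose results in mg/dL:
print(percent_recovery(baseline=100.0, spiked=148.0, amount_added=50.0))      # -> 96.0 (%)
print(interference_bias(with_interferent=104.0, without_interferent=100.0))   # -> 4.0 (mg/dL)
```

Recoveries near 100% indicate little proportional error; a nonzero interference result quantifies a constant error attributable to the interferent.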


Scandinavian Journal of Clinical & Laboratory Investigation | 1999

The need for a system of quality standards for modern quality management

James O. Westgard

The management of analytical quality depends on the careful evaluation of the imprecision (uncertainty) and inaccuracy (trueness) of laboratory methods and the application of statistical quality control procedures to detect medically important analytical errors that may occur during routine analysis. A system of quality standards is recommended to incorporate different types of requirements, such as clinical outcome criteria, analytical outcome criteria, and analytical performance criteria. For practical applications, all need to be translated into operating specifications for the imprecision, inaccuracy, control rules, and number of control measurements that are necessary to assure analytical quality during routine production of test results.


Clinical Chemistry and Laboratory Medicine | 2015

Assessing quality on the Sigma scale from proficiency testing and external quality assessment surveys

James O. Westgard; Sten A. Westgard

Background: There is a need to assess the quality being achieved for laboratory examinations that are being utilized to support evidence-based clinical guidelines. Application of Six Sigma concepts and metrics can provide an objective assessment of the current analytical quality of different examination procedures. Methods: A “Sigma Proficiency Assessment Chart” can be constructed for data obtained from proficiency testing and external quality assessment surveys to evaluate the observed imprecision and bias of method subgroups and determine quality on the Sigma scale. Results: Data for hemoglobin A1c (HbA1c) from a 2014 survey by the College of American Pathologists (CAP) demonstrate that approximately two-thirds of the examination subgroups provide only two-Sigma quality when evaluated against the CAP requirement of an allowable total error of 6.0%. The weighted averages were 1.46 Sigma for a survey sample with an assigned value of 6.49% Hb (average bias 2.31%, CV 2.87%), 1.45 Sigma at 6.97% Hb (average bias 2.29%, CV 2.81%), and 1.75 Sigma at 9.65% Hb (average bias 1.55%, CV 2.71%). Maximum biases for examination subgroups were 5.7%, 5.8%, and 4.1%, respectively. Conclusions: Assessment of quality on the Sigma scale provides evidence of the analytical performance that is being achieved relative to requirements for intended use and should be useful for identifying and prioritizing improvements that are needed in the analytical quality of laboratory examinations. In spite of global and national standardization programs, bias is still a critical limitation of current HbA1c examination procedures.
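
A chart of the kind described above plots each method subgroup's observed CV against its observed bias, with limit lines showing where a given sigma level is achieved (|bias| = TEa - sigma * CV, the construction used in Westgard's method-decision-style charts; the paper's exact chart may differ in detail). A minimal matplotlib sketch using the three survey-level averages reported above:

```python
import numpy as np
import matplotlib.pyplot as plt

TEA = 6.0  # CAP allowable total error for HbA1c, in percent
cv = np.linspace(0.01, 4.0, 200)

fig, ax = plt.subplots()
# Sigma limit lines: a point below a line achieves at least that sigma.
for sigma in (2, 3, 4, 5, 6):
    ax.plot(cv, np.clip(TEA - sigma * cv, 0, None), label=f"{sigma} sigma")

# Survey-level averages from the abstract: (CV %, |bias| %).
averages = [(2.87, 2.31), (2.81, 2.29), (2.71, 1.55)]
ax.scatter([c for c, _ in averages], [b for _, b in averages], color="black")

ax.set_xlabel("Observed imprecision, CV (%)")
ax.set_ylabel("Observed |bias| (%)")
ax.set_title("Sigma assessment sketch for HbA1c (TEa = 6.0%)")
ax.legend()
plt.show()
```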


Clinics in Laboratory Medicine | 2013

Perspectives on Quality Control, Risk Management, and Analytical Quality Management

James O. Westgard

Quality control (QC) practices are changing in US laboratories as the Centers for Medicare and Medicaid Services adopts individualized QC plans as a new option for compliance with the Clinical Laboratory Improvement Amendments regulations. The Joint Commission provides general guidance for applying risk management in health care organizations. The EP23A (Evaluation Protocol 23A) document from the Clinical and Laboratory Standards Institute provides specific guidance on the use of risk management for developing analytical QC plans. Medical laboratories should integrate risk management tools with existing quality management techniques and activities to provide an overall plan for analytical quality management.


Computer Methods and Programs in Biomedicine | 1997

QC Validator® 2.0: a computer program for automatic selection of statistical QC procedures for applications in healthcare laboratories

James O. Westgard; Bernard Stein; Sten A. Westgard; Robert Kennedy

A computer program has been developed to help healthcare laboratories select statistical control rules and numbers of control measurements that will assure the quality required by clinical decision interval criteria or analytical total error criteria. The program, QC Validator 2.0, runs on IBM-compatible personal computers operating under Windows. (QC Validator and OPSpecs are registered trademarks of Westgard Quality Corporation, which has applied for a patent for this automatic QC selection process; Windows is a registered trademark of Microsoft Corporation.) The user enters information about the method imprecision, inaccuracy, and expected frequency of errors, defines the quality required in terms of a medically important change (clinical decision interval) or an analytical allowable total error, then initiates automatic selection by indicating the number of control materials that are to be analyzed (1, 2, or 3). The program returns with a chart of operating specifications (OPSpecs chart) that displays the selected control rules and numbers of control measurements. The automatic QC selection process is based on user-editable criteria for the types of control rules that can be implemented by the laboratory, the total numbers of control measurements that are practical, the maximum levels of false rejection that can be tolerated, and the minimum levels of error detection that are acceptable for detecting medically important systematic or random errors.
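
In outline, the selection logic screens candidate QC procedures against the false-rejection and error-detection criteria at the method's critical systematic error, commonly computed as (TEa - |bias|) / CV - 1.65. The sketch below mimics that flow; the candidate table and its probabilities are hypothetical, and the real program derives them from power functions and OPSpecs charts, which this sketch does not reproduce.

```python
def critical_systematic_error(tea_pct, bias_pct, cv_pct, z=1.65):
    """Systematic shift (in SD units) that must be detected to keep errors within TEa."""
    return (tea_pct - abs(bias_pct)) / cv_pct - z

# Hypothetical candidates with assumed probabilities of false rejection (pfr)
# and of detecting the critical systematic error (ped).
CANDIDATES = [
    {"rule": "1:3s, N=2",                "pfr": 0.01, "ped": 0.55},
    {"rule": "1:3s/2:2s/R:4s, N=2",      "pfr": 0.02, "ped": 0.75},
    {"rule": "1:3s/2:2s/R:4s/4:1s, N=4", "pfr": 0.03, "ped": 0.92},
]

def select_qc(max_pfr=0.05, min_ped=0.90):
    """Keep only candidates meeting the false-rejection and error-detection criteria."""
    return [c for c in CANDIDATES if c["pfr"] <= max_pfr and c["ped"] >= min_ped]

print(critical_systematic_error(tea_pct=10.0, bias_pct=1.0, cv_pct=2.0))  # -> 2.85
print(select_qc())  # -> the N=4 multirule in this made-up table
```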


Clinica Chimica Acta | 2001

Electronic quality control, the total testing process, and the total quality control system

James O. Westgard

Traditional statistical quality control (QC) using matrix controls is often difficult to implement in point-of-care settings. Alternative QC procedures, such as electronic QC, have been developed by many manufacturers and approved for use by regulatory and accreditation organizations. Electronic QC usually involves the substitution of an electrical signal for the signal that would normally be generated by a sensor responding to an analyte in a specimen; sometimes an artificial nonliquid sample is substituted to cause the sensor to generate an electrical signal. The usefulness of electronic QC can be assessed by identifying the steps in the total testing process that are being monitored. An example is provided for blood gas measurements to illustrate the steps that can be monitored by different types of QC procedures and materials. The need to monitor all of the steps generally requires a combination of procedures and materials, or a total QC system that includes electronic QC, matrix controls, and even real patient specimens. Electronic QC is an essential part of the total QC system, but is not by itself sufficient.

Collaboration


Top co-authors of James O. Westgard and their affiliations:

R. Neill Carey, University of Wisconsin-Madison
Donald H. Feldbruegge, University of Wisconsin-Madison
Pedro Encarnação, Catholic University of Portugal
George S. Cembrowski, University of Wisconsin-Madison
Arthur A. Eggert, University of Wisconsin-Madison
Daniel F.I. Kurtycz, University of Wisconsin-Madison
David J. Hassemer, University of Wisconsin-Madison
Donald A. Wiebe, University of Wisconsin-Madison