Publication


Featured research published by Stephen L. R. Ellison.


Pure and Applied Chemistry | 2006

The International Harmonized Protocol for the proficiency testing of analytical chemistry laboratories (IUPAC Technical Report)

Michael Thompson; Stephen L. R. Ellison; Roger Wood

The international standardizing organizations AOAC International, ISO, and IUPAC cooperated to produce the International Harmonized Protocol for the Proficiency Testing of (Chemical) Analytical Laboratories. The Working Group that produced the protocol agreed to revise it in the light of recent developments and the experience gained since it was first published. This revision has been prepared and agreed upon in the light of comments received following open consultation.
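The protocol's central performance statistic is the z-score, z = (x - x_pt) / sigma_pt, where x_pt is the assigned value and sigma_pt is the standard deviation for proficiency assessment. The Python sketch below illustrates that scoring step only; the participant results, assigned value and sigma_pt are invented for illustration.

def z_score(x, assigned_value, sigma_pt):
    """Proficiency-testing z-score for a participant result x."""
    return (x - assigned_value) / sigma_pt

results = [10.2, 9.6, 11.9, 8.1]   # participant results, e.g. mg/kg (invented)
x_pt, sigma_pt = 10.0, 0.8         # assigned value and SD for proficiency assessment (invented)

for x in results:
    z = z_score(x, x_pt, sigma_pt)
    # Conventional interpretation: |z| <= 2 satisfactory, 2 < |z| < 3 questionable, |z| >= 3 unsatisfactory
    flag = "satisfactory" if abs(z) <= 2 else ("questionable" if abs(z) < 3 else "unsatisfactory")
    print(f"x = {x:5.1f}  z = {z:+5.2f}  ({flag})")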


Analyst | 1999

Measurement uncertainty: Approaches to the evaluation of uncertainties associated with recovery

Vicki J. Barwick; Stephen L. R. Ellison

A number of approaches for evaluating recovery and its contribution to uncertainty budgets for analytical methods are considered in detail. The recovery, R, for a particular sample is considered as comprising three elements, Rm, Rs and Rrep. These relate to the recovery for the method; the effect of sample matrix and/or analyte concentration on recovery; and how well the behaviour of spiked samples represents that of test samples. The uncertainty associated with R, u(R), will have contributions from u(Rm), u(Rs) and u(Rrep). The evaluation of these components depends on the method scope and the availability, or otherwise, of representative certified reference materials. Procedures for evaluating these parameters are considered and illustrated with worked examples. Techniques discussed include the use of certified reference materials and spiking studies, and the use of extraction profiling to predict recoveries. All the approaches discussed evaluate the recovery and its uncertainty for the analytical method as a whole. It is concluded that this is a useful approach as it reduces the amount of experimental work required. In addition, most of the required data are frequently available from method validation studies.
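As a rough illustration of how such components might be combined, the sketch below assumes the three recovery terms act multiplicatively, so that their relative standard uncertainties add in quadrature; the function name and all numerical values are invented and are not taken from the paper.

import math

def combined_recovery_uncertainty(Rm, u_Rm, Rs, u_Rs, Rrep, u_Rrep):
    """Combine method, matrix and representativeness recovery terms,
    assuming R = Rm * Rs * Rrep so relative uncertainties add in quadrature."""
    R = Rm * Rs * Rrep
    rel = math.sqrt((u_Rm / Rm) ** 2 + (u_Rs / Rs) ** 2 + (u_Rrep / Rrep) ** 2)
    return R, R * rel

# Invented example: 92 % method recovery, unit matrix and representativeness factors
R, u_R = combined_recovery_uncertainty(0.92, 0.03, 1.00, 0.02, 1.00, 0.04)
print(f"R = {R:.3f} +/- {u_R:.3f}")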


Analyst | 1998

Using validation data for ISO measurement uncertainty estimation. Part 1. Principles of an approach using cause and effect analysis

Stephen L. R. Ellison

A strategy for reconciling the information requirements of formal measurement uncertainty estimation principles with data generated from classical analytical method validation studies is described in detail. The approach involves a detailed analysis of influence factors on the analytical results, employing cause and effect analysis, followed by a formal reconciliation stage. The methodology is shown to be consistent with the principles outlined in the ISO Guide to the Expression of Uncertainty in Measurement (GUM), given representative data. Any relevant data may be used, including those obtained from classical validation studies. The relationship between classical validation studies and ISO GUM uncertainty estimation is discussed briefly; it is concluded that the two methodologies are equivalent, subject to additional allowance for terms held constant during validation experiments.
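The reconciliation stage ultimately feeds the standard GUM combination of the surviving branches of the cause-and-effect diagram, u_c(y)^2 = sum of (c_i u(x_i))^2 for uncorrelated inputs. A minimal sketch of that final combination step follows; the branch names, sensitivity coefficients and uncertainties are invented for illustration.

import math

# Each entry: (sensitivity coefficient c_i, standard uncertainty u(x_i)); values invented.
branches = {
    "precision (run-to-run)": (1.0, 0.012),
    "recovery/bias":          (1.0, 0.020),
    "calibration standard":   (1.0, 0.005),
    "sample volume":          (0.8, 0.004),
}

# GUM law of propagation of uncertainty for uncorrelated inputs
u_c = math.sqrt(sum((c * u) ** 2 for c, u in branches.values()))
print(f"combined standard uncertainty u_c = {u_c:.4f}")
print(f"expanded uncertainty (k = 2)      = {2 * u_c:.4f}")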


Analyst | 2002

A decision theory approach to fitness for purpose in analytical measurement.

Tom Fearn; Sheila Fisher; Michael Thompson; Stephen L. R. Ellison

The choice of an analytical procedure and the determination of an appropriate sampling strategy are here treated as a decision theory problem in which sampling and analytical costs are balanced against possible end-user losses due to measurement error. Measurement error is taken here to include both sampling and analytical variances, but systematic errors are not considered. The theory is developed in detail for the case exemplified by a simple accept or reject decision following an analytical measurement on a batch of material, and useful approximate formulae are given for this case. Two worked examples are given, one involving a batch production process and the other a land reclamation site.
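The flavour of the approach can be sketched numerically: the expected end-user loss from a wrong accept/reject decision at a specification limit is traded off against the cost of tighter measurement. The cost model below is invented purely for illustration and is not the paper's own formulation.

from statistics import NormalDist

def expected_cost(sigma, true_value=9.5, limit=10.0, loss_if_wrong=1000.0, cost_unit=50.0):
    """Invented illustrative model: analytical cost rises as 1/sigma^2, and a wrong
    accept/reject decision at the specification limit costs loss_if_wrong."""
    # Batch assumed genuinely acceptable (true_value < limit); a wrong decision
    # occurs if the measured value falls above the limit.
    p_wrong = 1.0 - NormalDist(true_value, sigma).cdf(limit)
    analytical_cost = cost_unit / sigma ** 2   # tighter measurement costs more
    return analytical_cost + loss_if_wrong * p_wrong

# Scan candidate measurement standard deviations for the cheapest, fit-for-purpose choice
candidates = [0.1 * k for k in range(1, 11)]
best = min(candidates, key=expected_cost)
print(f"approximate optimum sigma = {best:.1f} (total cost {expected_cost(best):.1f})")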


Accreditation and Quality Assurance | 2000

The evaluation of measurement uncertainty from method validation studies

Vicki J. Barwick; Stephen L. R. Ellison; Mark J.Q. Rafferty; Rattanjit S. Gill

A protocol has been developed illustrating the link between validation experiments and measurement uncertainty evaluation. The application of the protocol is illustrated with reference to a method for the determination of three markers (CI solvent red 24, quinizarin and CI solvent yellow 124) in fuel oil samples. The method requires the extraction of the markers from the sample matrix by solid phase extraction followed by quantification by high performance liquid chromatography (HPLC) with diode array detection. The uncertainties for the determination of the markers were evaluated using data from precision and trueness studies using representative sample matrices spiked at a range of concentrations, and from ruggedness studies of the extraction and HPLC stages.


Analyst | 1998

Perspective: Quantifying uncertainty in qualitative analysis

Stephen L. R. Ellison; Soumi L. Gregory

The feasibility of adopting a consistent approach to the expression of uncertainties relating to identification is discussed. It is argued that qualitative analysis can be viewed as a classification problem, that it is at least as important as quantitative analysis and that inferences drawn from qualitative tests should take relevant uncertainties into account. A brief review of systems of reasoning under uncertainty is presented, and it is concluded that Bayes’ theorem provides the most suitable framework, allowing separate items of evidence to be combined and implicitly accounting for both false positive and false negative probabilities in a single parameter. The chemical significance and practical evaluation of relevant probabilities are considered, and the applications and reporting of ‘identification certainty’ figures are discussed.
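A minimal numerical illustration of the Bayesian framework described: for a single positive qualitative result, the posterior probability of identity follows from the prior probability, the true positive rate and the false positive rate. All of the rates below are invented.

def posterior_identity(prior, true_positive_rate, false_positive_rate):
    """Bayes' theorem for a single positive qualitative test result."""
    evidence = prior * true_positive_rate + (1 - prior) * false_positive_rate
    return prior * true_positive_rate / evidence

# Invented rates: analyte present in 10 % of such samples; the test detects it
# 95 % of the time and gives a false positive 2 % of the time.
p = posterior_identity(prior=0.10, true_positive_rate=0.95, false_positive_rate=0.02)
print(f"P(analyte present | positive result) = {p:.2f}")   # about 0.84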


Analyst | 2006

Reporting measurement uncertainty and coverage intervals near natural limits

Simon Cowen; Stephen L. R. Ellison

Different methods of treating data which lie close to a natural limit of the feasible range, such as zero or 100% mass or mole fraction, are discussed and recommendations made concerning the most appropriate. The methods considered include discarding observations beyond the limit, shifting observations to the limit, truncation of a classical confidence interval based on Student's t (coupled with shifting the result to the limit if outside the feasible range), truncation and renormalisation of an assumed normal distribution, and the maximum density interval of a Bayesian estimate based on a normal measurement distribution and a uniform prior within the feasible range. Based on consideration of bias and simulation to assess coverage, it is recommended that for most purposes, a confidence interval near a natural limit should be constructed by first calculating the usual confidence interval based on Student's t, then truncating the out-of-range portion to leave an asymmetric interval and adjusting the reported value to within the resulting interval if required. It is suggested that the original standard uncertainty is retained for uncertainty propagation purposes.
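The recommended treatment is straightforward to sketch: form the usual Student's t interval, truncate the portion outside the feasible range, move the reported value inside the truncated interval if necessary, and retain the original standard uncertainty for propagation. The replicate data below are invented and a lower natural limit of zero is assumed.

import math
import statistics

# Invented replicate results for a mass fraction that cannot be negative (limit = 0)
replicates = [0.004, -0.002, 0.001, 0.003, -0.001]
n = len(replicates)
mean = statistics.mean(replicates)
u = statistics.stdev(replicates) / math.sqrt(n)   # standard uncertainty of the mean

t_crit = 2.776   # two-sided 95 % Student's t for n - 1 = 4 degrees of freedom
low, high = mean - t_crit * u, mean + t_crit * u

# Truncate the interval at the natural limit and move the reported value inside it if needed
low = max(low, 0.0)
reported = min(max(mean, low), high)
print(f"reported value = {reported:.4f}, 95 % interval = [{low:.4f}, {high:.4f}]")
print(f"standard uncertainty retained for propagation: u = {u:.4f}")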


Analytical Communications | 1998

Estimating measurement uncertainty using a cause and effect and reconciliation approach. Part 2. Measurement uncertainty estimates compared with collaborative trial expectation

Vicki J. Barwick; Stephen L. R. Ellison

Measurement uncertainty estimates are presented for ten analytical methods, covering over 45 combinations of analyte, matrix and concentration. The uncertainty estimates were produced using a cause and effect approach published previously. Techniques include gas and liquid chromatography, elemental analysis by graphite furnace atomic absorption spectrometry, inductively coupled plasma-mass spectrometry, inductively coupled plasma-optical emission spectrometry, and titrimetry. A range of different types of data is used, including quality control data and method validation studies. The major contributions to the uncertainty for each method and a brief description of their evaluation are given. In most cases, a combination of an experimental estimate of overall precision and the uncertainty associated with overall bias, measured as recovery, across representative sample types and analyte concentrations contributes the majority of the uncertainty, although in several cases it was essential to consider additional factors. The uncertainty estimates are also compared with collaborative trial results where available, and with reproducibility estimates obtained from the Horwitz function.
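For context, the Horwitz function referred to predicts between-laboratory reproducibility from concentration alone; in a common form the predicted relative standard deviation is RSD_R(%) = 2 c^(-0.1505), with c expressed as a dimensionless mass fraction. A minimal sketch with invented concentration levels:

def horwitz_rsd_percent(c):
    """Predicted reproducibility RSD (%) for a dimensionless mass fraction c,
    using the common form 2 * c**(-0.1505)."""
    return 2.0 * c ** (-0.1505)

# Invented levels: 1 %, 1 mg/kg and 1 ug/kg expressed as mass fractions
for c in (1e-2, 1e-6, 1e-9):
    print(f"c = {c:.0e}  predicted RSD_R = {horwitz_rsd_percent(c):.1f} %")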


Accreditation and Quality Assurance | 2012

Erratum to: Causes of error in analytical chemistry: results of a web-based survey of proficiency testing participants

Stephen L. R. Ellison; William A. Hardcastle

Results of a voluntary-response survey of respondent-identified causes of unacceptable results in nine proficiency testing schemes are reported. The PT schemes were predominantly environment and food analysis schemes. 111 respondents reported 230 identified causes of error. Sample preparation (16 % of causes reported), Equipment failures (13 %), ‘Human error’ (13 %) and Calibration (10 %) were the top four general causes of poor analytical results. Among sample preparation errors, sample extraction or recovery problems were the most important causes reported. Most calibration errors were related to errors in calculation and dilution and not in availability or quality of calibration materials. No failures were attributed to failures in commercial software; software-related problems were largely associated with user input errors. Corrective actions were generally specific to the particular problem identified. Review of all reported causes indicated that about 44 % could be attributed to simple operator errors.


Accreditation and Quality Assurance | 2013

House-of-security approach to measurement in analytical chemistry: quantification of human error using expert judgments

Ilya Kuselman; Elena Kardash; Emil Bashkansky; Francesca R. Pennecchi; Stephen L. R. Ellison; Karen Ginsbury; Malka Epstein; Aleš Fajgelj; Yury Karpov

A new technique for quantification of human errors in chemical analysis using expert judgments is described. This technique is based on the house-of-security approach developed recently in the field of safety and security for prevention of terrorist and criminal attacks against an organization. The following relative quantification parameters (expressed in %) are proposed in the technique: (a) likelihood score of human error in a chemical analytical measurement/testing method, (b) severity score of human error for reliability of the test results, (c) importance score of a component of a laboratory quality system, and (d) effectiveness score of the quality system as a whole in preventing/blocking human error. As an example, 34 scenarios of human error in pH measurement of groundwater are discussed and quantified.

Collaboration


Dive into Stephen L. R. Ellison's collaboration with his top co-authors.

Bertil Magnusson

SP Technical Research Institute of Sweden

Ilya Kuselman

National Physical Laboratory

Aleš Fajgelj

International Atomic Energy Agency