DEGREES OF EQUIVALENCE IN A KEY COMPARISON
Thang H. L., Nguyen D. D.
Vietnam Metrology Institute, Address: 8 Hoang Quoc Viet, Hanoi, Vietnam
Abstract:
Abstract: In an interlaboratory key comparison, the data analysis procedure proposed and recommended by the CIPM [1, 2, 3] introduces the degrees of equivalence of the measurement standards of the participating laboratories, and between each pair of laboratories, but does not give a corresponding clear and plausible measurement model. The authors of [4] offered possible measurement models for a given comparison and selected a suitable one after rigorously analyzing the expectation values of these degrees of equivalence; the systematic laboratory-effects model was identified there as the right one. Those models were all based on the assumption that a single true value exists. In 2008, however, a new version of the International Vocabulary of Metrology (VIM) [7] was issued, in which the true value of a given measurement standard should now be understood as a set of true values following a given statistical distribution. Applying this view of the true values of a measurement standard in combination with the steps of [4], measurement models have been developed and the degrees of equivalence analyzed. The results show that, even under the new definition, the systematic laboratory-effects model remains the reasonable one for a given key comparison.

I. Introduction
In reference [2], the concept of degrees of equivalence between laboratories is stated as one of the important criteria in the Mutual Recognition Arrangement (MRA) between National Metrology Institutes (NMIs). Degrees of equivalence are defined in [1] as follows:
Degree of equivalence of a measurement standard: the degree to which the value of a measurement standard is consistent with the key comparison reference value. It is expressed quantitatively by the deviation from the key comparison reference value and the uncertainty of this deviation; mathematically, d_i = x_i - x_K and u^2(d_i) = u^2(x_i) - u^2(x_K). The degree of equivalence between two measurement standards is expressed as the difference between their respective deviations from the key comparison reference value and the uncertainty of this difference: d_ij = x_i - x_j and u^2(d_ij) = u^2(x_i) + u^2(x_j) [3]. To illuminate the statistical nature of these quantities, measurement models for a key comparison were offered and analyzed in [4]. In those models, a given measurement standard is assumed to have only one true value. Actually, as discussed in [7], a more general view is that for a given measurement standard there exists a set of true values, which we then assume to follow a given statistical distribution.
Email: [email protected]
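The deviations and uncertainties defined above can be illustrated with a short numerical sketch. The laboratory values and uncertainties below, and the use of a weighted mean as the key comparison reference value, are illustrative assumptions, not data from any actual comparison:

```python
import math

# Hypothetical measured values x_i and standard uncertainties u(x_i) of three labs
x = [10.012, 10.005, 10.020]
u = [0.004, 0.006, 0.005]

# Weighted-mean key comparison reference value and its uncertainty
w = [1.0 / ui**2 for ui in u]
x_K = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
u_K = 1.0 / math.sqrt(sum(w))

# Degree of equivalence with the reference value:
# d_i = x_i - x_K,  u^2(d_i) = u^2(x_i) - u^2(x_K)
d = [xi - x_K for xi in x]
u_d = [math.sqrt(ui**2 - u_K**2) for ui in u]

# Pairwise degree of equivalence between labs 1 and 2:
# d_12 = x_1 - x_2,  u^2(d_12) = u^2(x_1) + u^2(x_2)
d_12 = x[0] - x[1]
u_d12 = math.sqrt(u[0]**2 + u[1]**2)

print("x_K =", x_K, "u(x_K) =", u_K)
print("d_i =", d, "u(d_i) =", u_d)
print("d_12 =", d_12, "u(d_12) =", u_d12)
```

Note that with a weighted-mean reference value the weighted deviations sum to zero, and u(x_K) is smaller than every u(x_i), so the variance difference defining u^2(d_i) is always non-negative.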
II. Mathematical modeling
Consider a key comparison in which the measurand has a set of true values Y_i, i = 1 to N (N being the number of participants), following a single stable distribution over the duration of the comparison. The expectation and variance of Y_i are E(Y_i) = Y and V(Y_i) = s^2(Y_i). Let X_1, X_2, ..., X_N and x_1, x_2, ..., x_N denote the expectation values and the measured values of the measurand reported by the i-th laboratory; each measured value has a reliable measurement uncertainty u(x_i). Define b_1 = (X_1 - Y), b_2 = (X_2 - Y), ..., b_N = (X_N - Y). The b_i are not always zero, because of unrecognized errors during the measurement, but all measured values of a given laboratory should still share the same expectation value. Measurement models with different assumptions are now developed and analyzed.

1. No laboratory effect
The measurement equation is:
x_i = Y_i + e_i (1)
The equation for the expectation values is E(x_i) = X_i = Y. Here b_i = 0, meaning that the participating laboratory makes no errors in the measurement, or that all errors were recognized and corrected. The corresponding variance equation is:
V(x_i) = V(Y_i) + V(e_i), or V(x_i) = s^2(Y_i) + u^2(e_i) (2)

2. Random laboratory effect
The measurement equation is:
x_i = Y_i + b_i + e_i (3)
The expectation equation:
E(x_i) = E(Y_i) + E(b_i) + E(e_i), or E(x_i) = Y (4)
where b_i is assumed to follow a distribution with zero expectation. The variance equation:
V(x_i) = V(Y_i) + V(b_i) + V(e_i), or V(x_i) = s^2(Y_i) + s^2(b_i) + u^2(e_i) (5)

3. Systematic laboratory effect
The measurement equation is:
x_i = Y_i + b_i + e_i (6)
where b_i is now a constant. The expectation and variance equations:
E(x_i) = E(Y_i) + E(b_i) + E(e_i) (7)
V(x_i) = V(Y_i) + V(b_i) + V(e_i) (8)
or
E(x_i) = Y + b_i (9)
V(x_i) = s^2(Y_i) + u^2(e_i) (10)

III. Key reference values
1. No laboratory effect
The key reference value:
x_K = (Σ_i x_i/(s^2(Y_i) + u^2(e_i))) / (Σ_i 1/(s^2(Y_i) + u^2(e_i))),
u(x_K) = 1/√(Σ_i 1/(s^2(Y_i) + u^2(e_i))) (11)

2. Random laboratory effect
The key reference value:
x_K = (Σ_i x_i/(s^2(Y_i) + s^2(b_i) + u^2(e_i))) / (Σ_i 1/(s^2(Y_i) + s^2(b_i) + u^2(e_i))),
u(x_K) = 1/√(Σ_i 1/(s^2(Y_i) + s^2(b_i) + u^2(e_i))) (12)

3. Systematic laboratory effect
The key reference value:
x_K = (Σ_i x_i/(s^2(Y_i) + u^2(e_i))) / (Σ_i 1/(s^2(Y_i) + u^2(e_i))),
u(x_K) = 1/√(Σ_i 1/(s^2(Y_i) + u^2(e_i))) (13)

IV. Degrees of equivalence
1. No laboratory effect
Measurement models of any two participating laboratories:
x_i = Y_i + e_i and x_j = Y_j + e_j (14)
Deviation of the measured values of the two laboratories:
d_ij = x_i - x_j = Y_i - Y_j + e_i - e_j (15)
Deviation of a measured value from the key reference value:
d_i = x_i - x_K = Y_i + e_i - (Σ_j x_j/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j))) (16)
The expectation values:
E(d_ij) = E(Y_i) - E(Y_j) + E(e_i) - E(e_j) = 0,
E(d_i) = E(x_i) - E(x_K)
= E(Y_i) + E(e_i) - (Σ_j E(x_j)/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
= E(Y_i) + E(e_i) - (Σ_j E(Y_j + e_j)/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
= E(Y_i) - (Σ_j E(Y_j)/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
= E(Y_i) - E(Y_j) = 0 (17)

2. Random laboratory effect
Measurement models of any two participating laboratories:
x_i = Y_i + b_i + e_i and x_j = Y_j + b_j + e_j (18)
Deviation of the measured values of the two laboratories:
d_ij = x_i - x_j = Y_i - Y_j + b_i - b_j + e_i - e_j (19)
Deviation of a measured value from the key reference value:
d_i = x_i - x_K = Y_i + b_i + e_i - (Σ_j x_j/(s^2(Y_j) + s^2(b_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + s^2(b_j) + u^2(e_j))) (20)
The expectation values:
E(d_i) = E(Y_i) + E(b_i) + E(e_i) - (Σ_j E(x_j)/(s^2(Y_j) + s^2(b_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + s^2(b_j) + u^2(e_j)))
= E(Y_i) + E(b_i) + E(e_i) - E(Y_j) = 0, and E(d_ij) = 0 (21)

3. Systematic laboratory effect
Measurement models of any two participating laboratories:
x_i = Y_i + b_i + e_i and x_j = Y_j + b_j + e_j (22)
Deviation of the measured values of the two laboratories:
d_ij = x_i - x_j = Y_i - Y_j + b_i - b_j + e_i - e_j (23)
Deviation of a measured value from the key reference value:
d_i = x_i - x_K = Y_i + b_i + e_i - (Σ_j x_j/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j))) (24)
The expectation values:
E(d_i) = E(Y_i) + E(b_i) + E(e_i) - (Σ_j E(Y_j + b_j + e_j)/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
= E(Y_i) + E(b_i) - E(Y_j) - (Σ_j b_j/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
= b_i - (Σ_j b_j/(s^2(Y_j) + u^2(e_j))) / (Σ_j 1/(s^2(Y_j) + u^2(e_j)))
and E(d_ij) = E(Y_i) - E(Y_j) + E(b_i) - E(b_j) + E(e_i) - E(e_j) = b_i - b_j (25)

V. Discussion
The approach in this report accepts the assumption that the measurement standard (the artifact in a key comparison) has a set of true values, distributed according to a common probability density function, instead of a single unique true value. The corresponding degrees of equivalence, that is, the deviations and their measurement uncertainties, are then analyzed. If a participating laboratory contributes no error to the measurement, or contributes only an error that is random in nature, then, as seen in equations (17) and (21), the expectations of the deviations are always zero. This would imply that the laboratories in question are always equivalent, which is not a reasonable conclusion; these two should therefore not be regarded as good models for a key comparison. In contrast, if a participating laboratory contributes a systematic error to the measurement, the expectations of the deviations are not necessarily zero, as seen in equation (25). The systematic errors b_i and b_j committed by the laboratories, together with their uncertainties, then decide whether the laboratories are equivalent or not, and this model can be regarded as a good description of the measurement process. It is worth noting that this conclusion coincides with the one in [4].

VI. Conclusion
In this report, the degrees of equivalence are considered in three different models. The explicit deviations of each pair of laboratories, and of each laboratory from the key reference value, are derived. The expectations of the deviations, and hence the degrees of equivalence, are analyzed for each model under the assumption of multiple true values. The results support the systematic laboratory-error model as the accepted one, in agreement with the conclusion of [4].
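The expectations derived in equation (25) can also be checked numerically. The Monte Carlo sketch below simulates the systematic laboratory-effects model x_i = Y_i + b_i + e_i; the biases b_i and all dispersion parameters are illustrative assumptions, not data from this report:

```python
import random

random.seed(0)

# Assumed (illustrative) parameters of the systematic laboratory-effects model
b = [0.03, -0.01, 0.02]            # fixed systematic effects b_i of three labs
s_Y = 0.005                        # dispersion s(Y_i) of the true values
u_e = [0.004, 0.006, 0.005]        # standard uncertainties u(e_i)
Y = 10.0                           # common expectation of the true values

# Weights 1 / (s^2(Y_i) + u^2(e_i)), as in equation (13)
w = [1.0 / (s_Y**2 + ue**2) for ue in u_e]

n_runs = 100_000
sum_d1 = 0.0                       # accumulates d_1 = x_1 - x_K
sum_d12 = 0.0                      # accumulates d_12 = x_1 - x_2
for _ in range(n_runs):
    x = [Y + random.gauss(0.0, s_Y) + bi + random.gauss(0.0, ue)
         for bi, ue in zip(b, u_e)]
    x_K = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    sum_d1 += x[0] - x_K
    sum_d12 += x[0] - x[1]

# Expectations predicted by equation (25)
exp_d1 = b[0] - sum(wi * bi for wi, bi in zip(w, b)) / sum(w)
exp_d12 = b[0] - b[1]              # = 0.04 exactly

print(sum_d1 / n_runs, "vs", exp_d1)
print(sum_d12 / n_runs, "vs", exp_d12)
```

The simulated means agree with b_i - b_j and with b_i - (Σ_j w_j b_j)/(Σ_j w_j), confirming that only the systematic-effect model yields non-zero expected deviations.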
Acknowledgement: Dr. Nguyen Duc Dung and Dr. Tran Bao contributed to the discussion and the minutes of the report.
References
[1] International Committee for Weights and Measures (CIPM): Mutual recognition of national measurement standards and of calibration and measurement certificates issued by national metrology institutes.
[2] CIPM: Guidelines for CIPM key comparisons.
[3] M. G. Cox: The evaluation of key comparison data, Metrologia 39, pp 589-595, 2002.
[4] R. N. Kacker, R. U. Datla, and A. C. Parr, National Institute of Standards and Technology, Gaithersburg, MD 20899-0001 USA.