Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Antonio Possolo is active.

Publication


Featured research published by Antonio Possolo.


Metrologia | 2007

Assessment of measurement uncertainty via observation equations

Antonio Possolo; Blaza Toman

According to the Guide to the Expression of Uncertainty in Measurement (GUM) (1995, Geneva, Switzerland: International Organization for Standardization (ISO)), the uncertainty in an estimate of the value of a measurand is assessed by propagating the uncertainty in estimates of values of input quantities, based on a measurement equation that expresses the former value as a known function of the latter values. However, in measurement situations where some of the input quantities in turn depend on the measurand, this approach is circuitous and ultimately impracticable. An alternative approach starts from the observation equation, which relates the experimental data to the measurand: this allows a uniform treatment of the most diverse metrological problems, and, once it is used in the context of Bayesian inference, also facilitates the exploitation of any information that may pre-exist about the measurand, alongside the information that fresh experimental data provide about it. The wide applicability of the observation equation approach is illustrated with detailed examples concerning the lifetime of mechanical parts, the measurement of mass, the calibration of a non-linear model in biochemistry and the estimation of a consensus value for arsenic concentration in a sample measured by multiple laboratories.
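
To make the observation-equation idea concrete, here is a minimal sketch (in Python, not the authors' code) of the simplest such model for a consensus value: each laboratory result is the measurand plus Gaussian error with the laboratory's reported standard uncertainty, combined with a vague normal prior. The data, prior settings and variable names are invented for illustration, and the sketch ignores between-laboratory effects that a full treatment would include.

```python
# Minimal observation-equation sketch: x_i = mu + e_i, with e_i ~ N(0, u_i^2),
# and a vague normal prior on the measurand mu. Data are hypothetical.
import numpy as np

x = np.array([0.506, 0.512, 0.498, 0.515])   # lab results (hypothetical), e.g. mg/kg
u = np.array([0.006, 0.004, 0.007, 0.005])   # standard uncertainties reported by the labs

# Vague prior mu ~ N(m0, s0^2); with normal likelihoods the posterior is normal,
# with precision equal to the prior precision plus the sum of the data precisions.
m0, s0 = 0.0, 1.0e3
post_prec = 1.0 / s0**2 + np.sum(1.0 / u**2)
post_mean = (m0 / s0**2 + np.sum(x / u**2)) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

print(f"posterior mean = {post_mean:.4f}, posterior sd = {post_sd:.4f}")
print(f"95% credible interval: [{post_mean - 1.96*post_sd:.4f}, {post_mean + 1.96*post_sd:.4f}]")
```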


Analytical Chemistry | 2015

Post hoc interlaboratory comparison of single particle ICP-MS size measurements of NIST gold nanoparticle reference materials.

Antonio R. Montoro Bustos; Elijah J. Petersen; Antonio Possolo; Michael R. Winchester

Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
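
As an illustration of the distributional point made at the end of the abstract, the following sketch fits a single Gaussian and a two-component Gaussian mixture to simulated particle diameters and compares them by BIC. It uses scikit-learn's GaussianMixture; the data are invented and are not the NIST reference-material measurements.

```python
# Compare a single Gaussian against a two-component Gaussian mixture for a set of
# simulated particle diameters, using BIC for model selection.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical spICP-MS diameters (nm): a main population plus a small second mode.
d = np.concatenate([rng.normal(56.0, 2.5, 900), rng.normal(48.0, 2.0, 100)])
X = d.reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"{k} component(s): BIC = {gm.bic(X):.1f}, means = {gm.means_.ravel().round(1)}")
# The model with the lower BIC is preferred; here the two-component mixture should win.
```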


Metrologia | 2010

Copulas for uncertainty analysis

Antonio Possolo

Applying the Monte Carlo method for propagation of measurement uncertainty described in the Supplement 1 to the Guide to the Expression of Uncertainty in Measurement (GUM), when the input quantities are correlated, involves the specification of a joint probability distribution for these quantities. This applies equally whether the output quantity is a scalar or a vector. In practice, however, all that typically is available are probability distributions for the individual input quantities (their marginal distributions) and estimates of the correlations between them. Even though there are infinitely many joint distributions that are consistent with given marginal distributions and correlations, a method is needed to manufacture a particular one that may reasonably be used in practice. This paper explains how copulas may be used to this effect, illustrates their use in examples, including example H.2 from the GUM, discusses the choice of copula and provides an algorithm to delineate minimum volume coverage regions for vectorial measurands.
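
The construction the abstract describes can be sketched in a few lines: draw correlated standard normal variates, push them through the normal CDF to obtain correlated uniforms, and then apply the inverse CDFs of the stated marginals (a Gaussian copula). The marginals, correlation value and toy measurement equation below are assumptions for illustration and do not reproduce example H.2.

```python
# Gaussian-copula sketch: manufacture a joint sample for two correlated input
# quantities with specified marginals, then propagate through a toy model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100_000
rho = 0.7                                   # copula correlation (illustrative)
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Sample from a bivariate standard normal with the chosen correlation.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
# 2) Map each coordinate to (0, 1) through the standard normal CDF.
u = stats.norm.cdf(z)
# 3) Apply the inverse CDFs of the desired marginals.
x1 = stats.norm.ppf(u[:, 0], loc=10.0, scale=0.2)        # normal marginal
x2 = stats.uniform.ppf(u[:, 1], loc=4.9, scale=0.2)      # rectangular marginal on [4.9, 5.1]

y = x1 / x2                                 # toy measurement equation
print(f"sample correlation of inputs: {np.corrcoef(x1, x2)[0, 1]:.3f}")
print(f"y estimate: {y.mean():.4f}, standard uncertainty: {y.std(ddof=1):.4f}")
```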


Metrologia | 2009

Contribution to a conversation about the Supplement 1 to the GUM

Antonio Possolo; Blaza Toman; Tyler Estler

A recent contribution to this journal describes a particular situation of uncertainty assessment where probabilistically symmetric coverage intervals, produced in accordance with the Supplement 1 to the GUM, cover the measurand with frequency much smaller than their nominal coverage probability, in a long sequence of simulated repetitions of the measurement process and corresponding uncertainty assessment, in each of which the value of the measurand is assumed known. These findings motivate our contribution to the ongoing, accelerating discussion of that Supplement, which began years before its publication, and has only gained momentum with the recent release of its final form. We begin by suggesting that the coverage intervals whose frequentist performance has been found to be poor, indeed are coverage intervals for a quantity different from the measurand that is the focus of attention, and we offer a fresh viewpoint wherefrom to appreciate the situation. Next, we point out that the Monte Carlo method for uncertainty propagation that is the core contribution of Supplement 1 is valid under very general conditions, indeed much more general than the conditions Supplement 1 lists as sufficient for its valid application. To produce a coverage interval according to the Supplement 1 involves two steps: generating a sample of values of the measurand via a Monte Carlo procedure and then summarizing this sample into a coverage interval. Although the Supplement favours probabilistically symmetric intervals, it also states explicitly that other prescriptions are tenable. With this in mind, finally we explain that the choice of summarization can be interpreted as reflecting a priori beliefs about the measurand. In the particular case under consideration in the motivating contribution, where the measurand is known to be non-negative, the choice of either a probabilistically symmetric interval or an interval whose left endpoint is 0 would correspond to two quite different prior beliefs. Considering this lesson, we suggest that, in all cases, the best course of action is to adopt a Bayesian approach that naturally reveals all participating assumptions and beliefs openly, and makes all of them accessible to examination and criticism, as befits every scientific procedure.
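
The two-step procedure mentioned above, and the effect of the choice of summarization, can be sketched as follows for a hypothetical non-negative measurand: the same Monte Carlo sample is summarized once as a probabilistically symmetric 95% interval and once as an interval whose left endpoint is 0. The sample itself is simulated and is not the example from the motivating contribution.

```python
# Two summarizations of one Monte Carlo sample of a non-negative measurand.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical Monte Carlo sample: a small non-negative measurand dominated by noise.
y = np.abs(rng.normal(loc=0.05, scale=1.0, size=200_000))

# Summarization 1: probabilistically symmetric interval (2.5th and 97.5th percentiles).
sym = np.percentile(y, [2.5, 97.5])
# Summarization 2: interval with left endpoint 0 (0 to the 95th percentile).
onesided = (0.0, np.percentile(y, 95.0))

print("probabilistically symmetric 95% interval:", sym.round(3))
print("interval anchored at zero:               ", np.round(onesided, 3))
```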


Statistics in Medicine | 2017

Bayesian estimation in random effects meta-analysis using a non-informative prior.

Olha Bodnar; Alfred Link; Barbora Arendacká; Antonio Possolo; Clemens Elster

Pooling information from multiple, independent studies (meta-analysis) adds great value to medical research. Random effects models are widely used for this purpose. However, there are many different ways of estimating model parameters, and the choice of estimation procedure may be influential upon the conclusions of the meta-analysis. In this paper, we describe a recently proposed Bayesian estimation procedure and compare it with a profile likelihood method and with the DerSimonian-Laird and Mandel-Paule estimators including the Knapp-Hartung correction. The Bayesian procedure uses a non-informative prior for the overall mean and the between-study standard deviation that is determined by the Berger and Bernardo reference prior principle. The comparison of these procedures focuses on the frequentist properties of interval estimates for the overall mean. The results of our simulation study reveal that the Bayesian approach is a promising alternative producing more accurate interval estimates than those three conventional procedures for meta-analysis. The Bayesian procedure is also illustrated using three examples of meta-analysis involving real data.
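
For context, here is a minimal sketch of the DerSimonian-Laird estimator, one of the conventional comparators named in the abstract; the Bayesian reference-prior procedure itself is more involved and is not reproduced here. The study estimates and within-study variances are invented.

```python
# DerSimonian-Laird random effects estimate for a small invented meta-analysis.
import numpy as np

y = np.array([0.32, 0.18, 0.45, 0.26, 0.38])        # study-level estimates (hypothetical)
v = np.array([0.010, 0.020, 0.015, 0.008, 0.025])   # within-study variances (hypothetical)

w = 1.0 / v                                          # fixed-effect weights
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)                     # Cochran's Q statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                              # random-effects weights
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, overall mean = {mu_re:.3f} +/- {se_re:.3f}")
```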


Metrologia | 2014

Statistical models and computation to evaluate measurement uncertainty

Antonio Possolo

In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
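
As a small illustration of item (i), the sketch below fits a straight-line calibration function by least squares and propagates the parameter covariance to a predicted value; the calibration data, the prediction point and the use of numpy's polyfit are assumptions for illustration, not the paper's examples or code.

```python
# Straight-line calibration with uncertainty propagated from the fitted parameters.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # reference values of the standards
y = np.array([0.02, 1.03, 1.98, 4.05, 7.93])      # instrument indications (invented)

p, cov = np.polyfit(x, y, deg=1, cov=True)        # p = [slope, intercept], cov = 2x2 covariance
x0 = 5.0                                          # point at which a prediction is needed
g = np.array([x0, 1.0])                           # gradient of slope*x0 + intercept w.r.t. p
y0 = g @ p
u_y0 = np.sqrt(g @ cov @ g)                       # standard uncertainty from the fit alone
print(f"predicted indication at x0={x0}: {y0:.3f} +/- {u_y0:.3f}")
```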


Journal of Contaminant Hydrology | 2012

Use of statistical tools to evaluate the reductive dechlorination of high levels of TCE in microcosm studies

Mark Harkness; Angela Fisher; Michael D. Lee; E. Erin Mack; Jo Ann Payne; Sandra Dworatzek; Jeff Roberts; Carolyn M. Acheson; Ronald Herrmann; Antonio Possolo

A large, multi-laboratory microcosm study was performed to select amendments for supporting reductive dechlorination of high levels of trichloroethylene (TCE) found at an industrial site in the United Kingdom (UK) containing dense non-aqueous phase liquid (DNAPL) TCE. The study was designed as a fractional factorial experiment involving 177 bottles distributed between four industrial laboratories and was used to assess the impact of six electron donors, bioaugmentation, addition of supplemental nutrients, and two TCE levels (0.57 and 1.90 mM or 75 and 250 mg/L in the aqueous phase) on TCE dechlorination. Performance was assessed based on the concentration changes of TCE and reductive dechlorination degradation products. The chemical data was evaluated using analysis of variance (ANOVA) and survival analysis techniques to determine both main effects and important interactions for all the experimental variables during the 203-day study. The statistically based design and analysis provided powerful tools that aided decision-making for field application of this technology. The analysis showed that emulsified vegetable oil (EVO), lactate, and methanol were the most effective electron donors, promoting rapid and complete dechlorination of TCE to ethene. Bioaugmentation and nutrient addition also had a statistically significant positive impact on TCE dechlorination. In addition, the microbial community was measured using phospholipid fatty acid analysis (PLFA) for quantification of total biomass and characterization of the community structure and quantitative polymerase chain reaction (qPCR) for enumeration of Dehalococcoides organisms (Dhc) and the vinyl chloride reductase (vcrA) gene. The highest increase in levels of total biomass and Dhc was observed in the EVO microcosms, which correlated well with the dechlorination results.
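
Below is a minimal sketch of the kind of factorial ANOVA described above, applied to a small invented data set with electron donor and bioaugmentation as factors; it is not the study's 177-bottle fractional factorial design, and the survival-analysis portion is not shown. It assumes pandas and statsmodels are available.

```python
# Two-factor ANOVA (donor x bioaugmentation) on simulated dechlorination extents.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
donor_effect = {"EVO": 0.9, "lactate": 0.8, "methanol": 0.7, "none": 0.1}
rows = []
for donor, base in donor_effect.items():
    for bioaug in ("yes", "no"):
        for _ in range(3):                        # triplicate bottles per cell (invented)
            mean = base + (0.15 if bioaug == "yes" else 0.0)
            rows.append({"donor": donor, "bioaug": bioaug,
                         "dechlor": mean + rng.normal(0.0, 0.05)})
df = pd.DataFrame(rows)

model = smf.ols("dechlor ~ C(donor) * C(bioaug)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))            # main effects and interaction
```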


Medical Physics | 2017

Standardizing CT lung density measure across scanner manufacturers

Huaiyu H. Chen-Mayer; Matthew K. Fuld; Bernice Hoppel; Philip F. Judy; Jered Sieren; Junfeng Guo; David A. Lynch; Antonio Possolo; Sean B. Fain

Purpose: Computed Tomography (CT) imaging of the lung, reported in Hounsfield Units (HU), can be parameterized as a quantitative image biomarker for the diagnosis and monitoring of lung density changes due to emphysema, a type of chronic obstructive pulmonary disease (COPD). CT lung density metrics are global measurements based on lung CT number histograms, and are typically a quantity specifying either the percentage of voxels with CT numbers below a threshold, or a single CT number below which a fixed relative lung volume, nth percentile, falls. To reduce variability in the density metrics specified by CT attenuation, the Quantitative Imaging Biomarkers Alliance (QIBA) Lung Density Committee has organized efforts to conduct phantom studies in a variety of scanner models to establish a baseline for assessing the variations in patient studies that can be attributed to scanner calibration and measurement uncertainty. Methods: Data were obtained from a phantom study on CT scanners from four manufacturers with several protocols at various tube potential (kVp) and exposure settings. Free from biological variation, these phantom studies provide an assessment of the accuracy and precision of the density metrics across platforms solely due to machine calibration and uncertainty of the reference materials. The phantom used in this study has three foam density references in the lung density region, which, after calibration against a suite of Standard Reference Materials (SRM) foams with certified physical density, establish an HU-electron density relationship for each machine-protocol. We devised a 5-step calibration procedure combined with a simplified physical model that enabled the standardization of the CT numbers reported across a total of 22 scanner-protocol settings to a single energy (chosen at 80 keV). A standard deviation was calculated for overall CT numbers for each density, as well as by scanner and other variables, as a measure of the variability, before and after the standardization. In addition, a linear mixed-effects model was used to assess the heterogeneity across scanners, and the 95% confidence interval of the mean CT number was evaluated before and after the standardization. Results: We show that after applying the standardization procedures to the phantom data, the instrumental reproducibility of the CT density measurement of the reference foams improved by more than 65%, as measured by the standard deviation of the overall mean CT number. Using the lung foam that did not participate in the calibration as a test case, a mixed-effects model analysis shows that the 95% confidence intervals are [−862.0 HU, −851.3 HU] before standardization, and [−859.0 HU, −853.7 HU] after standardization to 80 keV. This is in general agreement with the expected CT number value at 80 keV of −855.9 HU with 95% CI of [−857.4 HU, −854.5 HU] based on the calibration and the uncertainty in the SRM certified density. Conclusions: This study provides a quantitative assessment of the variations expected in CT lung density measures attributed to non-biological sources such as scanner calibration and scanner x-ray spectrum and filtration. By removing scanner-protocol dependence from the measured CT numbers, higher accuracy and reproducibility of quantitative CT measures were attainable. The standardization procedures developed in this study may be explored for possible application in CT lung density clinical data.
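
The heterogeneity assessment mentioned in the Methods can be sketched as a random-intercept model with scanner as the grouping factor, fitted here to invented CT numbers for a single reference foam; the scanner labels, HU values and use of statsmodels' MixedLM are assumptions for illustration only.

```python
# Random-intercept model: CT number for one foam, with scanner as the random grouping factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for i in range(8):                                 # eight hypothetical scanners
    scanner_shift = rng.normal(0.0, 3.0)           # between-scanner effect (HU)
    for _ in range(3):                             # repeated protocol settings per scanner
        rows.append({"scanner": f"scanner_{i}",
                     "ct": -856.0 + scanner_shift + rng.normal(0.0, 1.0)})
df = pd.DataFrame(rows)

res = smf.mixedlm("ct ~ 1", df, groups=df["scanner"]).fit()
mean_hu = res.params["Intercept"]
half = 1.96 * res.bse["Intercept"]
print(f"mean CT number: {mean_hu:.1f} HU, approximate 95% CI: [{mean_hu - half:.1f}, {mean_hu + half:.1f}] HU")
```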


Metrologia | 2014

Evaluating the uncertainty of input quantities in measurement models

Antonio Possolo; Clemens Elster

The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in uncertainty propagation exercises. In this we deviate markedly and emphatically from the GUM Supplement 1, which gives pride of place to the Principle of Maximum Entropy as a means to assign probability distributions to input quantities.
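
As one small illustration of the maximum-likelihood and robust methods the abstract mentions (the paper's own R examples are in its supplementary material), the sketch below fits a location-scale Student's t distribution to repeated indications that include an outlier; the data are invented.

```python
# Robust evaluation of an input quantity: maximum-likelihood fit of a Student's t
# location-scale model to repeated indications with one outlier.
import numpy as np
from scipy import stats

x = np.array([10.02, 9.98, 10.01, 10.03, 9.99, 10.00, 10.45])   # last value is an outlier

nu, loc, scale = stats.t.fit(x)                 # MLE of degrees of freedom, location, scale
print(f"robust estimate of the input quantity: {loc:.3f} (t scale {scale:.3f}, nu {nu:.1f})")
print(f"ordinary mean for comparison:          {x.mean():.3f}")
```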


Computational Statistics & Data Analysis | 2011

Laplace random effects models for interlaboratory studies

Andrew L. Rukhin; Antonio Possolo

A model is introduced for measurements obtained in collaborative interlaboratory studies, comprising measurement errors and random laboratory effects that have Laplace distributions, possibly with heterogeneous, laboratory-specific variances. Estimators are suggested for the common median and for its standard deviation. We provide predictors of the laboratory effects, and of their pairwise differences, along with the standard errors of these predictors. Explicit formulas are given for all estimators, whose sampling performance is assessed in a Monte Carlo simulation study.
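
A minimal sketch in the spirit of the model, though not the paper's explicit estimators: under a Laplace model the maximum-likelihood location estimate is the sample median, so simulated interlaboratory results with Laplace laboratory effects and heterogeneous Laplace errors are summarized by their median, with a bootstrap standard error. All numerical settings are invented.

```python
# Simulate interlaboratory data with Laplace effects and errors; summarize by the median.
import numpy as np

rng = np.random.default_rng(6)
mu_true = 100.0
lab_scale = np.array([0.5, 0.8, 0.4, 1.2, 0.6, 0.9])      # heterogeneous lab scales
lab_effect = rng.laplace(0.0, 0.7, size=lab_scale.size)    # Laplace random laboratory effects
x = mu_true + lab_effect + rng.laplace(0.0, lab_scale)     # one result per laboratory

est = np.median(x)
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True)) for _ in range(5000)])
print(f"consensus estimate (median): {est:.2f}, bootstrap standard error: {boot.std(ddof=1):.2f}")
```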

Collaboration


Dive into Antonio Possolo's collaborations.

Top Co-Authors

Blaza Toman (National Institute of Standards and Technology)
Thomas Lafarge (National Institute of Standards and Technology)
Andrew L. Rukhin (National Institute of Standards and Technology)
George C. Rhoderick (National Institute of Standards and Technology)
Johanna E. Camara (National Institute of Standards and Technology)
Jon R. Pratt (National Institute of Standards and Technology)
Lane C. Sander (National Institute of Standards and Technology)
Michael E. Kelley (Florida Institute of Technology)
Michael R. Winchester (National Institute of Standards and Technology)