
Publication


Featured research published by Michael S. Williams.


Risk Analysis | 2011

Framework for Microbial Food‐Safety Risk Assessments Amenable to Bayesian Modeling

Michael S. Williams; Eric D. Ebel; David Vose

Regulatory agencies often perform microbial risk assessments to evaluate the change in the number of human illnesses as a result of a new policy that reduces the level of contamination in the food supply. These agencies generally have regulatory authority over the production and retail sectors of the farm-to-table continuum. Any predicted change in contamination that results from a new policy regulating production practices occurs many steps prior to consumption of the product. This study proposes a framework for conducting microbial food-safety risk assessments; this framework can be used to quantitatively assess the annual effects of national regulatory policies. Advantages of the framework are that estimates of human illnesses are consistent with national disease surveillance data (which are usually summarized on an annual basis) and that some of the modeling steps that occur between production and consumption can be collapsed or eliminated. The framework leads to probabilistic models that include uncertainty and variability in critical input parameters; these models can be solved using a number of different Bayesian methods. The Bayesian synthesis method performs well for this application and generates posterior distributions of parameters that are relevant to assessing the effect of implementing a new policy. An example, based on Campylobacter and chicken, estimates the annual number of illnesses avoided by a hypothetical policy; this output could be used to assess the economic benefits of a new policy. Empirical validation of the policy effect is also examined by estimating the annual change in the numbers of illnesses observed via disease surveillance systems.
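
As an illustration of the kind of probabilistic model the framework leads to, the following is a minimal Monte Carlo sketch that propagates uncertainty from a policy-induced contamination reduction to annual illnesses avoided. It is not the Bayesian synthesis method used in the paper, and every distribution and number in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertainty about annual baseline illnesses, anchored (hypothetically)
# to national surveillance totals: lognormal with median 1e6 cases.
baseline = rng.lognormal(mean=np.log(1e6), sigma=0.3, size=n)

# Uncertainty about the fractional reduction in contamination achieved
# by the policy (hypothetical Beta distribution with mean 0.2).
reduction = rng.beta(8, 32, size=n)

# Simplifying assumption: illnesses scale proportionally with contamination,
# collapsing the steps between production and consumption.
avoided = baseline * reduction

lo, hi = np.percentile(avoided, [2.5, 97.5])
print(f"mean illnesses avoided: {avoided.mean():,.0f}")
print(f"95% interval: ({lo:,.0f}, {hi:,.0f})")
```

The interval on `avoided` is the sort of output that could feed an economic benefit assessment, as the abstract describes.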


International Journal of Food Microbiology | 2015

Temporal patterns of Campylobacter contamination on chicken and their relationship to campylobacteriosis cases in the United States

Michael S. Williams; Neal J. Golden; Eric D. Ebel; Emily T. Crarey; Heather Tate

The proportions of Campylobacter-contaminated food and water samples collected by different surveillance systems often exhibit seasonal patterns. The incidence of foodborne campylobacteriosis also tends to exhibit strong seasonal patterns. Of the various product classes, the occurrence of Campylobacter contamination can be high on raw poultry products, and chicken is often thought to be one of the leading food vehicles for campylobacteriosis. Two different federal agencies in the United States collected samples of raw chicken products and tested them for the presence of Campylobacter. During the same time period, a consortium of federal and state agencies operated a nationwide surveillance system to monitor cases of campylobacteriosis in the United States. This study uses a common modeling approach to estimate trends and seasonal patterns in both the proportion of raw chicken product samples that test positive for Campylobacter and cases of campylobacteriosis. The results generally support the hypothesis of a weak seasonal increase in the proportion of Campylobacter-positive chicken samples in the summer months, though the number of Campylobacter on test-positive samples is slightly lower during this time period. In contrast, campylobacteriosis cases exhibit a strong seasonal pattern that generally precedes increases in contaminated raw chicken. These results suggest that while contaminated chicken products may be responsible for a substantial number of campylobacteriosis cases, they are most likely not the primary driver of the seasonal pattern in human illness.
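
Seasonal-pattern estimation of this kind can be illustrated with a simple harmonic regression on monthly positive proportions. This is a generic stand-in, not the paper's model, and the monthly values below are invented for the example (a weak summer peak in the Campylobacter-positive proportion).

```python
import numpy as np

# Hypothetical monthly Campylobacter-positive proportions (Jan..Dec),
# with a weak summer peak as described in the abstract.
months = np.arange(12)
p_obs = np.array([0.30, 0.29, 0.31, 0.33, 0.35, 0.38,
                  0.40, 0.39, 0.36, 0.33, 0.31, 0.30])

# Harmonic regression: p ~ a + b*sin(2*pi*t/12) + c*cos(2*pi*t/12).
X = np.column_stack([np.ones(12),
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
coef, *_ = np.linalg.lstsq(X, p_obs, rcond=None)
fitted = X @ coef

# Amplitude of the seasonal component and the month where the fit peaks.
amplitude = np.hypot(coef[1], coef[2])
peak_month = months[np.argmax(fitted)]
print(f"seasonal amplitude: {amplitude:.3f}, peak month index: {peak_month}")
```

Fitting the same harmonic form to both the contamination series and the illness series, as the paper does, lets the timing of their seasonal peaks be compared directly.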


International Journal of Food Microbiology | 2013

Characterizing uncertainty when evaluating risk management metrics: Risk assessment modeling of Listeria monocytogenes contamination in ready-to-eat deli meats

Daniel L. Gallagher; Eric D. Ebel; Owen Gallagher; David LaBARRE; Michael S. Williams; Neal J. Golden; Régis Pouillot; Kerry L. Dearfield; Janell Kause

This report illustrates how the uncertainty about food safety metrics may influence the selection of a performance objective (PO). To accomplish this goal, we developed a model concerning Listeria monocytogenes in ready-to-eat (RTE) deli meats. This application used a second-order Monte Carlo model that simulates L. monocytogenes concentrations through a series of steps: the food-processing establishment, transport, retail, the consumer's home, and consumption. The model accounted for growth-inhibitor use and retail cross-contamination, and applied an FAO/WHO dose-response model for evaluating the probability of illness. An appropriate level of protection (ALOP) risk metric was selected as the average risk of illness per serving across all consumed servings per annum, and the model was used to solve for the corresponding PO risk metric as the maximum allowable L. monocytogenes concentration (cfu/g) at the processing establishment where regulatory monitoring would occur. Given uncertainty about model inputs, an uncertainty distribution of the PO was estimated. Additionally, we considered how RTE deli meats contaminated at levels above the PO would be handled by the industry using three alternative approaches. Points on the PO distribution represent the probability that, if the industry complies with a particular PO, the resulting risk-per-serving is less than or equal to the target ALOP. For example, assuming (1) a target ALOP of -6.41 log10 risk of illness per serving, (2) industry concentrations above the PO that are re-distributed throughout the remaining concentration distribution and (3) no dose-response uncertainty, establishment POs of -4.98 and -4.39 log10 cfu/g would be required for 90% and 75% confidence that the target ALOP is met, respectively.
The PO concentrations from this example scenario are more stringent than the current typical monitoring level of an absence in 25 g (i.e., -1.40 log10 cfu/g) or a stricter criterion of absence in 125 g (i.e., -2.1 log10 cfu/g). This example, and others, demonstrates that a PO for L. monocytogenes would be far below any current monitoring capabilities. Furthermore, this work highlights the demands placed on risk managers and risk assessors when applying uncertain risk models to the current risk metric framework.
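
The monitoring levels quoted above convert to log10 cfu/g detection limits with a one-line calculation: "absence in X g" implies at most 1 cfu per X g.

```python
import math

def absence_to_log10_cfu_per_g(sample_mass_g: float) -> float:
    """Detection limit implied by 'absence in X g': at most 1 cfu per X g."""
    return math.log10(1.0 / sample_mass_g)

print(absence_to_log10_cfu_per_g(25))   # absence in 25 g
print(absence_to_log10_cfu_per_g(125))  # absence in 125 g
```

These reproduce the -1.40 and roughly -2.1 log10 cfu/g values in the abstract, both well above the -4.39 to -4.98 log10 cfu/g POs the model requires.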


International Journal of Food Microbiology | 2012

Methods for fitting a parametric probability distribution to most probable number data

Michael S. Williams; Eric D. Ebel

Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood, and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganisms per milliliter or whether the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution.
The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method.
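
The dilution-count likelihood behind the MPN technique can be sketched as follows, assuming Poisson-distributed organisms per tube. The design (three dilutions, three tubes each) and the positive counts are hypothetical, and this grid-search MLE only illustrates how a single MPN value arises from tube counts; it is not the authors' code.

```python
import numpy as np

# Hypothetical three-dilution MPN design: per-tube inoculum volume (mL),
# tubes per dilution, and tubes testing positive at each dilution.
volumes = np.array([1.0, 0.1, 0.01])
n_tubes = np.array([3, 3, 3])
positives = np.array([3, 1, 0])

def neg_log_lik(log_c):
    """Negative log-likelihood of concentration exp(log_c) organisms/mL,
    assuming Poisson-distributed organisms in each tube."""
    c = np.exp(log_c)
    p = 1.0 - np.exp(-c * volumes)            # P(tube positive) per dilution
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(positives * np.log(p) +
                   (n_tubes - positives) * np.log(1.0 - p))

# Crude grid search for the maximum-likelihood concentration.
grid = np.linspace(np.log(1e-3), np.log(1e3), 20001)
mpn = np.exp(grid[np.argmin([neg_log_lik(g) for g in grid])])
print(f"MPN estimate: {mpn:.2f} organisms/mL")
```

The paper's Bayesian latent variable method works with these tube counts directly, rather than with the resulting real-valued MPN estimates.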


International Journal of Food Microbiology | 2013

Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method

Michael S. Williams; Yong Cao; Eric D. Ebel

Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided.
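
The censoring problem described above can be illustrated with a small simulation: most samples fall below a detection limit, yet a censored-likelihood fit still recovers the lognormal parameters. This sketch uses a grid-search MLE rather than the paper's Markov chain Monte Carlo method, and all values are simulated.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Simulate true contamination on the log10 scale and censor at a
# detection limit so that most samples are non-detects (~84% here).
mu_true, sigma_true = -1.0, 1.0
limit = 0.0                        # log10 detection limit
x = rng.normal(mu_true, sigma_true, size=1000)
observed = x[x > limit]            # enumerated samples
n_cens = np.sum(x <= limit)        # non-detects, recorded only as counts

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def log_lik(mu, sigma):
    # Censored observations contribute P(X <= limit); enumerated ones
    # contribute the normal density (constants dropped).
    cens = n_cens * np.log(norm_cdf((limit - mu) / sigma))
    dens = -0.5 * ((observed - mu) / sigma) ** 2 - np.log(sigma)
    return cens + dens.sum()

# Crude grid-search MLE over (mu, sigma).
mus = np.linspace(-3.0, 1.0, 201)
sigmas = np.linspace(0.3, 2.5, 221)
ll = np.array([[log_lik(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"MLE: mu={mus[i]:.2f}, sigma={sigmas[j]:.2f} (truth: -1.0, 1.0)")
```

With around 1000 samples and only ~16% enumerated, the fit is still usable, which is consistent with the paper's finding that sample sizes in excess of 500 can suffice.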


Journal of Food Protection | 2018

Adoption of Neutralizing Buffered Peptone Water Coincides with Changes in Apparent Prevalence of Salmonella and Campylobacter of Broiler Rinse Samples

Michael S. Williams; Eric D. Ebel; Stephanie A. Hretz; Neal J. Golden

Buffered peptone water is the rinsate commonly used for chicken rinse sampling. A new formulation of buffered peptone water was developed to address concerns about the transfer of antimicrobials, used during poultry slaughter and processing, into the rinsate. This new formulation contains additives to neutralize the antimicrobials, and this neutralizing buffered peptone water replaced the original formulation for all chicken carcass and chicken part sampling programs run by the Food Safety and Inspection Service beginning in July 2016. Our goal was to determine whether the change in rinsate resulted in significant differences in the observed proportion of positive chicken rinse samples for both Salmonella and Campylobacter. This assessment compared sampling results for the 12-month periods before and after implementation. The proportion of carcass samples that tested positive for Salmonella increased from approximately 0.02 to almost 0.06. Concurrently, the proportion of chicken part samples that tested positive for Campylobacter decreased from 0.15 to 0.04. There were no significant differences associated with neutralizing buffered peptone water for the other two product-pathogen pairs. Further analysis of the effect of the new rinsate on corporations that operate multiple establishments demonstrated that changes in the percent-positive rates differed across corporations, with some corporations unaffected, while others saw all of their establishments move from passing to failing the performance standard, or vice versa. The results validated earlier concerns that antimicrobial contamination of rinse samples was causing false-negative Salmonella testing results for chicken carcasses. The results also indicate that additional development work may still be required before the rinsate is sufficiently robust for its use in Campylobacter testing.
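
A before/after comparison of positive proportions like the one above can be checked with a standard two-proportion z-test. The sample sizes below are hypothetical; only the proportions (0.02 before, 0.06 after) come from the abstract.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sample z-test for a difference in proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p2 - p1) / se

# Hypothetical counts consistent with the reported Salmonella carcass
# proportions before and after the rinsate change.
p1, p2, z = two_prop_z(80, 4000, 240, 4000)
print(f"before={p1:.3f}, after={p2:.3f}, z={z:.1f}")
```

At anything like national sampling volumes, a shift from 0.02 to 0.06 is overwhelmingly significant, which is why the change was attributed to the rinsate rather than to sampling noise.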


Epidemiology and Infection | 2018

Temporal patterns in principal Salmonella serotypes in the USA; 1996–2014

M. R. Powell; S. M. Crim; Robert M. Hoekstra; Michael S. Williams; Weidong Gu

Analysing temporal patterns in foodborne illness is important to designing and implementing effective food safety measures. The reported incidence of illness due to Salmonella in the U.S. Foodborne Diseases Active Surveillance Network (FoodNet) sites has exhibited no declining trend since 1996; however, there have been significant annual trends among principal Salmonella serotypes, which may exhibit complex seasonal patterns. Data from the original FoodNet sites and penalised cubic B-spline regression are used to estimate temporal patterns in the reported incidence of illness for the top three Salmonella serotypes during 1996-2014. Our results include 95% confidence bands around the estimated annual and monthly curves for each serotype. The results show that Salmonella serotype Typhimurium exhibits a statistically significant declining annual trend and seasonality (P < 0.001) marked by peaks in late summer and early winter. Serotype Enteritidis exhibits a significant annual trend with a higher incidence in later years and seasonality (P < 0.001) marked by a peak in late summer. Serotype Newport exhibits no significant annual trend with significant seasonality (P < 0.001) marked by a peak in late summer.
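
Penalised spline regression of the kind used here can be sketched in a few lines: a spline basis plus a ridge-type roughness penalty. The basis (truncated powers rather than B-splines), the penalty, and the simulated incidence series are all simplifications for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical annual incidence (cases per 100,000) with a declining
# trend, standing in for a serotype such as Typhimurium.
years = np.arange(1996, 2015)
x = (years - 1996) / 18.0                  # rescale to [0, 1] for stability
y = 3.5 - 1.5 * x + rng.normal(0, 0.15, size=x.size)

# Cubic truncated-power basis with interior knots: a simple numpy
# stand-in for a cubic B-spline basis.
knots = np.linspace(0.1, 0.9, 9)
X = np.column_stack([x**p for p in range(4)] +
                    [np.clip(x - k, 0.0, None) ** 3 for k in knots])

# Ridge penalty on the knot coefficients plays the role of the
# roughness penalty; lam controls smoothness.
lam = 1.0
P = np.eye(X.shape[1])
P[:4, :4] = 0.0                            # polynomial part unpenalised
beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
fitted = X @ beta

print(f"estimated change, 1996 to 2014: {fitted[-1] - fitted[0]:+.2f}")
```

The paper additionally reports 95% confidence bands around such curves, which this sketch omits.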


Epidemiology and Infection | 2016

Time valuation of historical outbreak attribution data

Eric D. Ebel; Michael S. Williams; Neal J. Golden; Wayne Schlosser; C. Travis

Human illness attribution is recognized as an important metric for prioritizing and informing food-safety decisions and for monitoring progress towards long-term food-safety goals. Inferences regarding the proportion of illnesses attributed to a specific commodity class are often based on analyses of datasets describing the number of outbreaks in a given year or combination of years. In many countries, the total number of pathogen-related outbreaks reported nationwide for an implicated food source is often fewer than 50 instances in a given year and the number of years for which data are available can be fewer than 10. Therefore, a high degree of uncertainty is associated with the estimated fraction of pathogen-related outbreaks attributed to a general food commodity. Although it is possible to make inferences using only data from the most recent year, this type of estimation strategy ignores the data collected in previous years. Thus, a strong argument exists for an estimator that could borrow strength from data collected in the previous years by combining the current data with the data from previous years. While many estimators exist for combining multiple years of data, most either require more data than is currently available or lack an objective and biologically plausible theoretical basis. This study introduces an estimation strategy that progressively reduces the influence of data collected in past years in accordance with the degree of departure from a Poisson process. The methodology is applied to the estimation of the attribution fraction for Salmonella and Escherichia coli O157:H7 for common food commodities and the estimates are compared against two alternative estimators.
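
The idea of borrowing strength from past years while discounting them can be illustrated with fixed exponential down-weighting of annual outbreak counts. The paper's estimator instead adapts the weights to the degree of departure from a Poisson process, which this sketch does not attempt; all counts and the half-life are hypothetical.

```python
import numpy as np

# Hypothetical annual counts: outbreaks attributed to one commodity and
# total outbreaks for the pathogen, most recent year last.
attributed = np.array([4, 2, 5, 3, 6, 4, 7])
total = np.array([30, 25, 33, 28, 35, 31, 36])

# Fixed exponential down-weighting of past years (weight 1 for the
# current year, halving every `half_life` years).
half_life = 3.0
age = np.arange(len(total))[::-1]          # 0 = most recent year
w = 0.5 ** (age / half_life)

attribution_fraction = np.sum(w * attributed) / np.sum(w * total)
current_year_only = attributed[-1] / total[-1]
print(f"weighted: {attribution_fraction:.3f}, "
      f"current year only: {current_year_only:.3f}")
```

The weighted estimate is steadier than the current-year-only fraction, which is the practical motivation for combining years at all when annual outbreak counts are this small.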


Food Control | 2012

Methods for fitting the Poisson-lognormal distribution to microbial testing data

Michael S. Williams; Eric D. Ebel


Food Control | 2014

Temporal patterns in the occurrence of Salmonella in raw meat and poultry products and their relationship to human illnesses in the United States

Michael S. Williams; Eric D. Ebel; Neal J. Golden; Wayne Schlosser

Collaboration


Dive into Michael S. Williams's collaborations.

Top Co-Authors

David Vose, Food and Drug Administration
Emily T. Crarey, Food and Drug Administration
Heather Tate, Food and Drug Administration
Robert M. Hoekstra, Centers for Disease Control and Prevention
Régis Pouillot, Center for Food Safety and Applied Nutrition
S. M. Crim, Centers for Disease Control and Prevention
Weidong Gu, Centers for Disease Control and Prevention