Joseph V. Rodricks
Business International Corporation
Publication
Featured research published by Joseph V. Rodricks.
Risk Analysis | 2010
Eileen Abt; Joseph V. Rodricks; Jonathan I. Levy; Lauren Zeise; Thomas A. Burke
At the request of the U.S. Environmental Protection Agency (EPA), the National Research Council (NRC) recently completed a major report, Science and Decisions: Advancing Risk Assessment, that is intended to strengthen the scientific basis, credibility, and effectiveness of risk assessment practices and subsequent risk management decisions. The report describes the challenges faced by risk assessment and the need to consider improvements in both the technical analyses of risk assessments (i.e., the development and use of scientific information to improve risk characterization) and the utility of risk assessments (i.e., making assessments more relevant and useful for risk management decisions). The report tackles a number of topics relating to improvements in the process, including the design and framing of risk assessments, uncertainty and variability characterization, selection and use of defaults, unification of cancer and noncancer dose-response assessment, cumulative risk assessment, and the need to increase EPA's capacity to address these improvements. This article describes and summarizes the NRC report, with an eye toward its implications for risk assessment practices at EPA.
Critical Reviews in Toxicology | 2010
Joseph V. Rodricks; James A. Swenberg; Joseph F. Borzelleca; Robert R. Maronpot; Annette M. Shipp
Triclosan (2,4,4′-trichloro-2′-hydroxy-diphenyl ether) is an antibacterial compound that has been used in consumer products for about 40 years. The tolerability and safety of triclosan have been evaluated in human volunteers with little indication of toxicity or sensitization. Although information in humans from chronic usage of personal care products is not available, triclosan has been extensively studied in laboratory animals. When evaluated in chronic oncogenicity studies in mice, rats, and hamsters, treatment-related tumors were found only in the liver of male and female mice. Application of the Human Relevance Framework suggested that these tumors arose by way of peroxisome proliferator-activated receptor α (PPARα) activation, a mode of action not considered to be relevant to humans. Consequently, a Benchmark Dose (BMDL10) of 47 mg/kg/day was developed based on kidney toxicity in the hamster. Estimates of the amount of intake from the use of representative personal care products for men, women, and children were derived in two ways: (1) using known or assumed triclosan levels in various consumer products and assumed usage patterns (product-based estimates); and (2) using upper bound measured urinary triclosan levels from human volunteers (biomonitoring-based estimates) using data from the Centers for Disease Control and Prevention. For the product-based estimates, the margins of safety (MOS) for the combined estimates of intake from the use of all triclosan-containing products considered were approximately 1000, 730, and 630 for men, women, and children, respectively. The MOS values calculated from the biomonitoring-based intake estimates were 5200, 6700, and 11,750 for men, women, and children, respectively. Based on these results, exposure to triclosan in consumer products is not expected to cause adverse health effects in children or adults who use these products as intended.
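For context on the margin-of-safety figures above: an MOS is simply the ratio of the point of departure (here the BMDL10 of 47 mg/kg/day) to an estimated daily intake. The short Python sketch below illustrates that arithmetic only; the intake values in it are hypothetical placeholders, not the study's actual product-based or biomonitoring-based estimates.

```python
# Illustrative margin-of-safety (MOS) arithmetic; values are not the study's own estimates.
# MOS = point of departure (BMDL10) / estimated daily intake, both in mg/kg/day.

BMDL10 = 47.0  # mg/kg/day, benchmark dose for kidney toxicity in the hamster (from the abstract)

def margin_of_safety(estimated_intake: float, point_of_departure: float = BMDL10) -> float:
    """Return the margin of safety for a given intake estimate (mg/kg/day)."""
    return point_of_departure / estimated_intake

# Hypothetical intake estimates (mg/kg/day), chosen only to show the calculation.
hypothetical_intakes = {"men": 0.047, "women": 0.064, "children": 0.075}

for group, intake in hypothetical_intakes.items():
    print(f"{group}: MOS = {margin_of_safety(intake):.0f}")
```

Larger MOS values indicate a wider gap between the dose associated with toxicity in animals and the estimated human exposure, which is why the lower biomonitoring-based intake estimates yield the larger margins reported above.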
International Journal of Toxicology | 2007
Joseph V. Rodricks
Quantitative approaches to evaluating the risks of chemical toxicity entered the lives of toxicologists in the mid-1970s, and the continuing interaction of toxicology and risk assessment has been of benefit to both disciplines. I will summarize the origins of the interaction, the reasons for it, and the difficult course it has followed. In doing so, I will set the stage for a discussion of how the type of thinking that informs risk-based decision-making provides important benefits to the continuing development of the science of toxicology. There will continue to be societal pressure for the development of reliable knowledge about the public health importance of the enormous variety of chemical exposures we all incur, from conception to death. Risk assessment is the framework used to organize and convey that knowledge. Toxicology is the principal discipline used to give scientific substance to that framework. Social acceptance of every manifestation of the modern chemical age requires high assurance that the public health is not threatened, and that assurance depends upon continued improvements in these two mutually dependent disciplines.
Critical Reviews in Food Science and Nutrition | 2009
A. Catharine Ross; Robert M. Russell; Sanford A. Miller; Ian C. Munro; Joseph V. Rodricks; Elizabeth A. Yetley; Elizabeth Julien
The methodology used to establish tolerable upper intake levels (UL) for nutrients borrows heavily from risk assessment methods used by toxicologists. Empirical data are used to identify intake levels associated with adverse effects, and Uncertainty Factors (UF) are applied to establish ULs, which in turn inform public health decisions and standards. Use of UFs reflects lack of knowledge regarding the biological events that underlie response to the intake of a given nutrient, and also regarding the sources of variability in that response. In this paper, the Key Events Dose-Response Framework (KEDRF) is used to systematically consider the major biological steps that lead from the intake of preformed vitamin A to excess systemic levels, and subsequently to increased risk of adverse effects. Each step is examined with regard to factors that influence whether there is progression toward the adverse effect of concern. The role of homeostatic mechanisms is discussed, along with the types of research needed to improve understanding of dose-response for vitamin A. This initial analysis illustrates the potential of the KEDRF as a useful analytical tool for integrating current knowledge regarding dose-response, generating questions that will focus future research efforts, and clarifying how improved knowledge and data could be used to reduce reliance on UFs.
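Since the UL methodology described above amounts to dividing an observed effect level by one or more uncertainty factors, a minimal sketch of that arithmetic may help; the NOAEL and the individual UFs below are hypothetical placeholders standing in for the empirical data and judgments a DRI committee would actually supply.

```python
# Minimal sketch of tolerable upper intake level (UL) derivation:
#   UL = NOAEL (or LOAEL) / product of uncertainty factors (UFs)
# All numeric values are hypothetical placeholders, not established DRI values.

from math import prod

noael_mg_per_day = 3000.0  # hypothetical no-observed-adverse-effect level

uncertainty_factors = {
    "interindividual_variability": 10.0,  # hypothetical UF covering sensitive subgroups
    "database_limitations": 2.0,          # hypothetical UF covering gaps in the evidence
}

ul_mg_per_day = noael_mg_per_day / prod(uncertainty_factors.values())
print(f"UL = {ul_mg_per_day:.0f} mg/day")  # 3000 / (10 * 2) = 150 mg/day
```

The KEDRF analysis in the paper is aimed at replacing part of this blunt division with mechanistic, step-specific knowledge, thereby reducing reliance on the UFs.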
Critical Reviews in Toxicology | 2013
P. Robinan Gentry; Joseph V. Rodricks; Duncan Turnbull; Annette Bachand; Cynthia Van Landingham; Annette M. Shipp; Richard J. Albertini; Richard D. Irons
Abstract A recent study (Zhang et al., 2010) provided results, attributed to aneuploidy in circulating stem cells, that have been characterized as providing potential support for proposed mechanisms by which formaldehyde could affect bone marrow. A critical review of the study, as well as a reanalysis of the underlying data, was performed, and the results of this reanalysis suggested that factors other than formaldehyde exposure may have contributed to the reported effects. In addition, although the authors stated in their paper that “all scorable metaphase spreads on each slide were analyzed, and a minimum of 150 cells per subject was scored,” this protocol was not followed. In fact, the protocol to evaluate the presence of monosomy 7 or trisomy 8 was followed for three or fewer samples in exposed workers and six or fewer samples in non-exposed workers. In addition, the assays used (CFU-GM) do not actually measure the proposed events in primitive cells involved in the development of acute myeloid leukemia. Evaluation of these data indicates that the aneuploidy measured could not have arisen in vivo, but rather arose during in vitro culture. The results of our critical review and reanalysis of the data, in combination with recent toxicological and mechanistic studies, do not support a mechanism for a causal association between formaldehyde exposure and myeloid or lymphoid malignancies.
The American Journal of Clinical Nutrition | 2017
Elizabeth A. Yetley; Amanda J. MacFarlane; Linda S. Greene-Finestone; Cutberto Garza; Jamy Ard; Stephanie A. Atkinson; Dennis M. Bier; Alicia L. Carriquiry; William R. Harlan; Dale Hattis; Janet C. King; Daniel Krewski; Deborah L. O’Connor; Ross L. Prentice; Joseph V. Rodricks; George A. Wells
Dietary Reference Intakes (DRIs) are used in Canada and the United States in planning and assessing diets of apparently healthy individuals and population groups. The approaches used to establish DRIs on the basis of classical nutrient deficiencies and/or toxicities have worked well. However, it has proved to be more challenging to base DRI values on chronic disease endpoints; deviations from the traditional framework were often required, and in some cases, DRI values were not established for intakes that affected chronic disease outcomes despite evidence that supported a relation. The increasing proportions of elderly citizens, the growing prevalence of chronic diseases, and the persistently high prevalence of overweight and obesity, which predispose to chronic disease, highlight the importance of understanding the impact of nutrition on chronic disease prevention and control. A multidisciplinary working group sponsored by the Canadian and US government DRI steering committees met from November 2014 to April 2016 to identify options for addressing key scientific challenges encountered in the use of chronic disease endpoints to establish reference values. The working group focused on 3 key questions: 1) What are the important evidentiary challenges for selecting and using chronic disease endpoints in future DRI reviews, 2) what intake-response models can future DRI committees consider when using chronic disease endpoints, and 3) what are the arguments for and against continuing to include chronic disease endpoints in future DRI reviews? This report outlines the range of options identified by the working group for answering these key questions, as well as the strengths and weaknesses of each option.
The Open Epidemiology Journal | 2011
Keeve E. Nachman; Mary A. Fox; Mary C. Sheehan; Thomas A. Burke; Joseph V. Rodricks; Tracey J. Woodruff
The field of environmental public health is at an important crossroad. Our current biomonitoring efforts document widespread exposure to a host of chemicals for which toxicity information is lacking. At the same time, advances in the fields of genomics, proteomics, metabolomics, genetics and epigenetics are yielding volumes of data at a rapid pace. Our ability to detect chemicals in biological and environmental media has far outpaced our ability to interpret their health relevance, and as a result, the environmental risk paradigm, in its current state, is antiquated and ill-equipped to make the best use of these new data. In light of new scientific developments and the pressing need to characterize the public health burdens of chemicals, it is imperative to reinvigorate the use of environmental epidemiology in chemical risk assessment. Two case studies of chemical assessments from the Environmental Protection Agency Integrated Risk Information System database are presented to illustrate opportunities where epidemiologic data could have been used in place of experimental animal data in dose-response assessment, or where different approaches, techniques, or studies could have been employed to better utilize existing epidemiologic evidence. Based on the case studies and what can be learned from recent scientific advances and improved approaches to utilizing human data for dose-response estimation, recommendations are provided for the disciplines of epidemiology and risk assessment for enhancing the role of epidemiologic data in hazard identification and dose-response assessment.
Journal of Nutrition | 2003
Joseph V. Rodricks
Risk assessment is a well-established framework for organizing and evaluating diverse, and sometimes conflicting, information to assess the likelihood that agents in the environment may harm human health under known or expected conditions of exposure. Risk assessments are used by regulatory and public health officials to guide judgments and actions regarding the need for risk reduction and the appropriate means to achieve it. These judgments and actions are called risk management, and are guided by law, historical precedent, and public health, economic, and social concerns. Those in the nutrition community who have been called upon to make recommendations regarding adequate nutrient intakes have long been engaged in the practice of risk assessment. That is, they have assessed the harmful health effects of inadequate intakes and defined intakes likely to avoid such harm. During the past decade attention has turned to the potential health risks of excessive nutrient and nutritional supplement intakes. A recent study released by a committee of the Institute of Medicine illustrates the difficulties in deriving risk-based upper levels of intake for nutrients and nutritional supplements. Amino acids were the subject of extensive discussions by this committee, but in no case was an upper level of intake recommended. It is clear that the extent of scientific investigation of the harmful effects of amino acids has been highly uneven, and that significant questions remain regarding the appropriate methodologies to study such effects.
Human and Ecological Risk Assessment | 2014
Joseph V. Rodricks
ABSTRACT The publication in 1962 of Rachel Carson's Silent Spring marks the mid-point in a century that saw, in its first half, the emergence of public health concerns related to human exposures to chemicals, and, in its second half, the emergence of public policies to deal with those concerns. Those policies made it imperative that the scientific community come to grips with the problem of identifying exposure levels not likely to cause harm. This problem was not significantly discussed within the scientific community until the 1950s, and well-described methods for practical solutions to it did not appear until the 1970s. An important report from the National Academy of Sciences, published in 1983 (Risk Assessment in the Federal Government), provided an analysis of these emerging methods, and recommended a useful framework for the assessment and management of risk. This framework remains central to public health and regulatory decision-making. A high-level perspective is offered on events leading to and following the 1983 report. The article describes early thinking about chemical toxicity and the scientific path that thinking followed through the 20th century, and to the present.
Toxicological Sciences | 2013
Joseph V. Rodricks; Jonathan I. Levy
In 2009, the National Research Council (NRC) released the latest in a series of advisory reports on human health risk assessment, titled Science and Decisions: Advancing Risk Assessment. This wide-ranging report made a number of recommendations related to risk assessment practice at the U.S. Environmental Protection Agency that could both influence and be influenced by evolving toxicological practice. In particular, Science and Decisions emphasized the scientific and operational necessity of a new approach for dose-response modeling; addressed the recurring challenge of defaults in risk assessment and the question of when research results can be used in place of defaults; and reinforced the value of cumulative risk assessment, which would require enhanced understanding of the joint influence of chemical and nonchemical stressors on health outcomes. The objective of this article is to summarize key messages from Science and Decisions, both as a stand-alone report and in comparison with another recent NRC report, Toxicity Testing in the 21st Century: A Vision and a Strategy. Although these reports have many conclusions in common and reinforce similar themes, there are important differences that merit careful consideration, such as the move away from apical endpoints in Toxicity Testing and the emphasis on benefit-cost analyses and related decision tools in Science and Decisions that would be strengthened by quantification of apical endpoints. Moving risk assessment forward will require toxicologists to wrestle with the implications of Science and Decisions from a toxicological perspective.