Jeremy M. Gernand
Pennsylvania State University
Publications
Featured research published by Jeremy M. Gernand.
Risk Analysis | 2014
Jeremy M. Gernand; Elizabeth A. Casman
This article presents a regression-tree-based meta-analysis of rodent pulmonary toxicity studies of uncoated, nonfunctionalized carbon nanotube (CNT) exposure. The resulting analysis provides quantitative estimates of the contribution of CNT attributes (impurities, physical dimensions, and aggregation) to pulmonary toxicity indicators in bronchoalveolar lavage fluid: neutrophil and macrophage counts, and lactate dehydrogenase and total protein concentrations. The method employs classification and regression tree (CART) models, techniques that are relatively insensitive to data defects that impair other types of regression analysis: high dimensionality, nonlinearity, correlated variables, and significant quantities of missing values. Three types of analysis are presented: the regression tree (RT), the random forest (RF), and a random-forest-based dose-response model. The RT shows the best single model supported by all the data and typically contains a small number of variables. The RF shows how much variance reduction is associated with every variable in the data set. The dose-response model is used to isolate the effects of CNT attributes from the CNT dose, showing the shift in the dose-response relationship caused by the attribute across the measured range of CNT doses. The CNT attributes that contributed the most to pulmonary toxicity were metallic impurities (cobalt significantly increased observed toxicity, while other impurities had mixed effects), CNT length (negatively correlated with most toxicity indicators), CNT diameter (significantly positively associated with toxicity), and aggregate size (negatively correlated with cell damage indicators and positively correlated with immune response indicators). Increasing CNT N2-BET specific surface area decreased toxicity indicators.
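The regression-tree step of such a meta-analysis can be illustrated in a few lines. The sketch below assumes a hypothetical pooled CSV of rodent study records (cnt_pulmonary_studies.csv) with illustrative column names; it is not the paper's actual dataset or code.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical pooled dataset: one row per rodent exposure group.
df = pd.read_csv("cnt_pulmonary_studies.csv")
features = ["dose_mg_kg", "length_um", "diameter_nm", "cobalt_pct",
            "aggregate_size_um", "bet_area_m2_g"]

# Decision trees in recent scikit-learn (>= 1.3) tolerate NaN in the
# features, which suits meta-analysis records with unreported attributes.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10, random_state=0)
tree.fit(df[features], df["neutrophil_count"])

# Print the fitted splits to see which attributes drive the predictions.
print(export_text(tree, feature_names=features))
```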
Risk Analysis | 2016
Vicki Stone; Helinor Johnston; Dominique Claire Balharry; Jeremy M. Gernand; Mary Gulumian
The development of alternative testing strategies (ATS) for hazard assessment of new and emerging materials is high on the agenda of scientists, funders, and regulators. The relatively large number of nanomaterials on the market and under development means that an increasing emphasis will be placed on the use of reliable, predictive ATS when assessing their safety. We have provided recommendations as to how ATS development for assessment of nanomaterial hazard may be accelerated. Predefined search terms were used to identify the quantity and distribution of peer-reviewed publications for nanomaterial hazard assessment following inhalation, ingestion, or dermal absorption. A summary of knowledge gaps relating to nanomaterial hazard is provided to identify future research priorities and areas in which a rich data set might exist to allow ATS identification. Consultation with stakeholders (e.g., academia, industry, regulators) was critical to ensure that current expert opinion was reflected. The gap analysis revealed an abundance of studies that assessed the local and systemic impacts of inhaled particles, and so ATS are available for immediate use. Development of ATS for assessment of the dermal toxicity of chemicals is already relatively advanced, and these models should be applied to nanomaterials as relatively few studies have assessed the dermal toxicity of nanomaterials to date. Limited studies have investigated the local and systemic impacts of ingested nanomaterials. If the recommendations for research prioritization proposed are adopted, it is envisioned that a comprehensive battery of ATS can be developed to support the risk assessment process for nanomaterials. Some alternative models are available for immediate implementation, while others require more developmental work to become widely adopted. Case studies are included that can be used to inform the selection of alternative models and end points when assessing the pathogenicity of fibers and mode of action of nanomaterial toxicity.
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering | 2016
Jeremy M. Gernand; Elizabeth A. Casman
Due to their unique physicochemical properties, nanomaterials have the potential to interact with living organisms in novel ways. Nanomaterial variants are too numerous to be screened for toxicity individually by traditional animal testing. Existing data on the toxicity of inhaled nanomaterials in animal models are sparse in comparison to the number of potential factors that may affect toxicity. This paper presents meta-analysis-based risk models developed with the machine-learning technique random forests (RFs) to determine the relative contribution of different physical and chemical attributes to observed toxicity. The findings from this analysis indicate that carbon nanotube (CNT) impurities explain at most 30% of the variance in pulmonary toxicity as measured by polymorphonuclear neutrophil (PMN) count. Titanium dioxide nanoparticle size and aggregation affected the observed toxic response by less than 10%. Differences in observed effects for a group of metal oxide nanoparticles associated with differences in Gibbs free energy accounted for only 4% of the total variance in lactate dehydrogenase (LDH) concentrations.
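A minimal sketch of how a random forest apportions variance among individual attributes, reusing the same hypothetical file and column names as the earlier sketch (nothing here reproduces the paper's actual data): out-of-bag R^2 gives the total variance explained, and permutation importance approximates each attribute's share.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

df = pd.read_csv("cnt_pulmonary_studies.csv").dropna()
features = ["dose_mg_kg", "length_um", "diameter_nm", "cobalt_pct"]
X, y = df[features], df["neutrophil_count"]

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"Out-of-bag R^2 (total variance explained): {rf.oob_score_:.2f}")

# Permutation importance: the drop in score when one attribute is
# shuffled, a proxy for that attribute's contribution to the variance.
result = permutation_importance(rf, X, y, n_repeats=30, random_state=0)
for name, mean_imp in zip(features, result.importances_mean):
    print(f"{name}: {mean_imp:.3f}")
```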
ASME 2013 International Mechanical Engineering Congress and Exposition | 2013
Jeremy M. Gernand; Elizabeth A. Casman
Due to their size and unique chemical properties, nanomaterials have the potential to interact with living organisms in novel ways, leading to a spectrum of negative consequences. Though nanotechnology is a relatively new materials science, nanomaterial variants are already becoming too numerous to be screened for toxicity individually by traditional, expensive animal testing. As with conventional pollutants, the resulting backlog of untested new materials means that interim industry and regulatory risk management measures may be mismatched to the actual risk. The ability to minimize toxicity risk from a nanomaterial during the product or system design phase would simplify the risk assessment process and contribute to increased worker and consumer safety.

Some attempts to address this problem have been made, primarily analyzing data from in vitro experiments, which are of limited predictive value for the effects on whole organisms. The existing data on the toxicity of inhaled nanomaterials in animal models are sparse in comparison to the number of potential factors that may contribute to or aggravate nanomaterial toxicity, limiting the power of conventional statistical analysis to detect property/toxicity relationships. This situation is exacerbated by the fact that exhaustive chemical and physical characterization of all nanomaterial attributes in these studies is rare, due to resource or equipment constraints and dissimilar investigator priorities.

This paper presents risk assessment models developed through a meta-analysis of in vivo rodent-inhalation nanomaterial toxicity studies. We apply machine learning techniques, including regression trees and the related ensemble method, random forests, to determine the relative contribution of different physical and chemical attributes to observed toxicity. These methods permit the use of data records with missing information without substituting presumed values and can reveal complex data relationships even in nonlinear or conditional contexts.

Based on this analysis, we present a predictive risk model for the severity of inhaled nanomaterial toxicity given a set of nanomaterial attributes. This model reveals the anticipated change in the expected toxic response to choices of nanomaterial design (such as physical dimensions or chemical makeup). The methodology is intended to aid nanomaterial designers in identifying attributes that contribute to toxicity, giving them the opportunity to substitute safer variants while continuing to meet functional objectives.

Findings from this analysis indicate that carbon nanotube (CNT) impurities explain at most 30% of the variance in pulmonary toxicity as measured by polymorphonuclear neutrophil (PMN) count. Titanium dioxide nanoparticle size and aggregation affected the observed toxic response by less than ±10%. Differences in observed effects for a group of metal oxide nanoparticles associated with differences in Gibbs free energy accounted for only 4% of the total variance in lactate dehydrogenase (LDH) concentrations. Other chemical descriptors of metal oxides were unimportant.
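The "anticipated change in the expected toxic response" to a design choice can be sketched by predicting across the measured dose range with one attribute toggled while the others are held at their medians. As before, the file and column names are hypothetical placeholders, not the paper's model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("cnt_pulmonary_studies.csv").dropna()
features = ["dose_mg_kg", "cobalt_pct", "length_um", "diameter_nm"]
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(df[features], df["neutrophil_count"])

# Predict over the measured dose range with cobalt content set high vs.
# zero, other attributes held at their medians; the gap between the two
# curves is the dose-response shift attributable to the impurity.
doses = np.linspace(df["dose_mg_kg"].min(), df["dose_mg_kg"].max(), 50)
grid = pd.DataFrame({f: df[f].median() for f in features}, index=range(50))
grid["dose_mg_kg"] = doses

shift = rf.predict(grid.assign(cobalt_pct=5.0)) - rf.predict(grid.assign(cobalt_pct=0.0))
print(f"Mean predicted PMN shift from 5% cobalt: {shift.mean():.1f}")
```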
Heat Transfer Engineering | 2009
Jeremy M. Gernand; Yildiz Bayazitoglu
A spiral microchannel methanol reformer has been developed to provide power, in conjunction with a micro fuel cell, for a portable, low-power device. The design is optimized for low pumping power and rapid operation as well as thermal efficiency, overall size, and complete generation of the available hydrogen. An iterative, implicit, finite-element solution code, which locates the boundaries between liquid, two-phase, and gaseous flow, provides a complete solution of the fluid flow and heat transfer properties throughout the device. The solution employs experimentally verified microchannel fluid dynamics relations to produce accurate results. Based on this analysis, the proposed microreformer design will have an overall maximum energy efficiency of 70%.
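The phase-boundary location at the heart of such a solver can be illustrated with a far simpler explicit 1-D energy balance: march along the channel, accumulate enthalpy, and record where it crosses the saturation enthalpies. All property values and geometry below are illustrative placeholders, not the paper's implicit finite-element model.

```python
# Illustrative constants (assumed, not from the paper):
h_f, h_g = 250e3, 1350e3   # saturated-liquid / saturated-vapor enthalpy, J/kg
m_dot = 1e-6               # mass flow rate, kg/s
q_wall = 5.0               # wall heat input per unit channel length, W/m
dx, length = 1e-3, 0.3     # marching step and channel length, m

h = 80e3                   # inlet specific enthalpy, J/kg
x_pos, boundaries = 0.0, {}
while x_pos < length:
    h += q_wall * dx / m_dot              # energy balance over one segment
    x_pos += dx
    if "two-phase" not in boundaries and h >= h_f:
        boundaries["two-phase"] = x_pos   # onset of boiling
    if "vapor" not in boundaries and h >= h_g:
        boundaries["vapor"] = x_pos       # complete vaporization
print(boundaries)
```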
Journal of The Air & Waste Management Association | 2018
Zoya Banan; Jeremy M. Gernand
Shale gas has become an important strategic energy source with considerable potential economic benefits and the potential to reduce greenhouse gas emissions insofar as it displaces coal use. However, environmental health risks remain from emissions generated by exploration and production activities. In the United States, states and localities have set different minimum setback policies to reduce the health risks corresponding to emissions from these locations, but it is unclear whether these policies are sufficient. This study uses a Gaussian plume model to evaluate the probability that exposures exceed EPA concentration limits for PM2.5 at various locations around a generic wellsite in the Marcellus shale region. A set of meteorological data monitored at ten different stations across the Marcellus shale gas region in Pennsylvania during 2015 serves as input to this model. Results indicate that even though the current setback distance policy in Pennsylvania (500 ft, or 152.4 m) might be effective in some cases, exposure limit exceedance occurs frequently at this distance when emission rates are higher than average and/or more wells are present per wellpad. Setback distances should be at least 736 m to ensure compliance with the daily average PM2.5 concentration limit, and should be a function of the number of wells to comply with the annual average PM2.5 exposure standard. Implications: Marcellus Shale gas development is known to be a significant source of criteria pollutants, and studies show that the current setback distance in Pennsylvania is not adequate to protect residents from exposures exceeding the established limits. Even a setback distance adequate to meet the annual exposure limit may not be adequate to meet the daily limit. The probability of exceeding the annual limit increases with the number of wells per site. We use a probabilistic dispersion model to provide a technical basis for selecting appropriate setback distances.
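The core of such an analysis is the ground-level Gaussian plume equation with ground reflection. The sketch below uses Briggs-style power-law dispersion coefficients for roughly neutral (class D) conditions; the emission rate, wind speed, and release height are illustrative assumptions, not the study's fitted inputs.

```python
import numpy as np

def plume_conc(x, y, Q, u, H):
    """Ground-level (z = 0) concentration, g/m^3, at downwind x and crosswind y (m)."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # assumed class-D horizontal spread
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)   # assumed class-D vertical spread
    return (Q / (np.pi * sigma_y * sigma_z * u)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

# Example: 0.1 g/s PM2.5 source, 3 m/s wind, 5 m release height,
# receptor 736 m directly downwind.
c = plume_conc(736.0, 0.0, Q=0.1, u=3.0, H=5.0)
print(f"{c * 1e6:.1f} ug/m^3")
```

In a probabilistic setback analysis, this kernel would be evaluated over the full observed distribution of wind speeds and stability classes to estimate exceedance frequencies at each distance.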
Nature Nanotechnology | 2016
Elizabeth A. Casman; Jeremy M. Gernand
Meta-analysis of the literature on quantum dot toxicity using a machine-learning tool helps reveal hidden relationships between material properties and toxicity.
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering | 2016
Jeremy M. Gernand
The safety of mining in the United States has improved significantly over the past few decades, although mining remains one of the more dangerous occupations. Following the Sago mine disaster in January 2006, federal legislation (the Mine Improvement and New Emergency Response (MINER) Act of 2006) tightened regulations and sought to strengthen the authority and safety inspection practices of the Mine Safety and Health Administration (MSHA). While penalties and inspection frequency have increased, understanding of which types of inspection findings are most indicative of serious future incidents remains limited. The most effective safety management and oversight would require a thorough understanding of which infractions or safety inspection findings most reliably signal future serious personnel injuries. Given the large number of potentially unique inspection findings, varied mine characteristics, and types of specific safety incidents, this question involves a large number of potentially relevant input parameters. New regulations rely on increasing the frequency and severity of infraction penalties to encourage mining operations to improve worker safety, but without knowledge of which specific infractions may truly signal a dangerous work environment. This paper addresses the question: what types of inspection findings are most indicative of serious future incidents for specific types of mining operations? The analysis utilizes publicly available MSHA databases of cited infractions and reportable incidents. These inspection results are used to train machine learning classification and regression tree (CART) and random forest (RF) models that divide mines into peer groups based on their recent infractions and other defining characteristics, with the aim of predicting whether a fatal or serious disabling injury is more likely to occur in the following 12-month period. With these characteristics identified, additional scrutiny can be directed at those mining operations at greatest risk of a worker fatality or disabling injury in the near future. Increased oversight and attention on the mines where workers are at greatest risk may reduce the likelihood of worker deaths and injuries more effectively than increased penalties and inspection frequency alone.
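The prediction step can be sketched as a standard random-forest classification, assuming a hypothetical flat file that joins per-mine citation counts to an indicator of a fatal or disabling injury in the following 12 months; the file and column names are illustrative, not MSHA's actual schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("msha_mine_years.csv")
features = ["employee_hours", "ventilation_citations", "roof_control_citations",
            "electrical_citations", "prior_injury_rate"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["serious_injury_next_year"], test_size=0.25,
    stratify=df["serious_injury_next_year"], random_state=0)

# class_weight="balanced" compensates for the rarity of serious injuries.
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)

prob = clf.predict_proba(X_test)[:, 1]   # risk score per mine-year
print(f"AUC: {roc_auc_score(y_test, prob):.2f}")
```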
Volume 14: Emerging Technologies; Safety Engineering and Risk Analysis; Materials: Genetics to Structures | 2015
Jason C. York; Jeremy M. Gernand
The potential benefits of a safety program are generally only realized after an incident has occurred. Resource allocation in an organization's safety program must therefore balance costs against often-unrealized benefits. Management can be wary of allocating additional resources to a safety program because it is difficult to estimate the return on investment, especially since the returns are a set of negative outcomes that never manifest.

One way that safety professionals can estimate potential return on investment is to forecast how the organization's incident rate would respond to different resource allocation strategies, and what the expected incident rate would have been without intervention. Safety professionals often trend the performance of their organization's safety program by benchmarking incident rates against other organizations. Previous studies have employed different statistical forecasting methods to predict how incident rates will react to changes in resource allocation.

This paper analyzes the performance of four statistical forecasting methods employed in previous resource allocation studies, alongside a fifth method never before used for incident rate prediction, to determine which provides the highest forecast accuracy. Identifying the most accurate forecasting method reduces uncertainty about which method a safety professional should use for incident rate prediction. Incident data from the Mine Safety and Health Administration (MSHA) Part 50 were used to forecast both short- and long-term incident rates. The forecasting methods were evaluated against one another for accuracy, bias, and complexity-adjusted goodness-of-fit.

The evaluation indicates that the double exponential smoothing method provides the most accurate incident rate predictions. Analysis of forecast bias showed that the error for the double exponential smoothing method was unbiased, within the acceptable range for the tracking signal, and achieved prediction accuracy above 70%. The results of this observational study indicate that double exponential smoothing should be the method of choice for incident rate prediction. Consistent use of the same forecasting methodology among safety professionals, as part of their safety programs' resource allocation processes, will allow for more consistent benchmarking of incident rate predictions.
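Holt's variant of double exponential smoothing, the method the study found most accurate, fits in a dozen lines; the smoothing constants and the quarterly incident-rate series below are made-up illustrations.

```python
def holt_forecast(series, alpha=0.3, beta=0.1, horizon=4):
    """Double (Holt) exponential smoothing: track level and trend,
    then extrapolate the trend for `horizon` future periods."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (k + 1) * trend for k in range(horizon)]

# Quarterly incident rates per 200,000 employee-hours (illustrative numbers).
rates = [4.1, 3.9, 4.0, 3.6, 3.5, 3.4, 3.2, 3.3]
print(holt_forecast(rates))
```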
Volume 14: Emerging Technologies; Safety Engineering and Risk Analysis; Materials: Genetics to Structures | 2015
Jeremy M. Gernand
Currently in the United States, agencies responsible for regulating worker or public exposure to dust set rules based on a few general categories determined by gross particle size, like PM10 (particles < 10 μm) and PM2.5 (particles < 2.5 μm), and the total mass of certain specific compounds (e.g., 3.5 mg/m3 of carbon black). Environmental health researchers, however, have begun to focus on a new category of ultrafine particles (PM0.1; particles < 100 nm) as more indicative of actual health risks in people. The emerging field of nanotoxicology, meanwhile, is providing new insights into how and why certain particles cause damage in the lungs by investigating the effects of exposing animals to very well characterized engineered nanomaterials.

Based on this recent research, the National Institute for Occupational Safety and Health (NIOSH) has issued new recommended exposure limits (RELs) for carbon nanotubes (CNTs) and titanium dioxide nanoparticles that are 2–3 orders of magnitude more stringent than RELs for larger particles of the same or similar substances. It remains unclear at present how stringent future regulations may be for engineered and inadvertently created nanoparticles or ultrafine dusts. Nor is it clear whether verification methods to demonstrate compliance with these rules could or should be devised to differentiate between engineered and inadvertently created nanoparticles.

This study presents a review of the history of dust regulation in the United States, how emerging data on the health risks of ultrafine particles and engineered nanoparticles are changing our understanding of the risks of inhaled dust, and how future rulemaking regarding these and similar particulate materials may unfold.

The review shows the extent to which rules on dust have become more stringent over time, specifically in the cases of diesel emissions and silica exposure, and indicates that, based on past experience of the time delay between research on new hazards and regulatory intervention, new rules on worker exposure to ultrafine dusts or engineered nanomaterials may be expected in the United States within 5–10 years. Current research suggests there will be several challenges to compliance with these rules, depending on the structure of the final rule and the development of detection technologies, although research on ultrafine dust control technologies appears to indicate that, once rulemaking begins, there may be no serious feasibility limits to controlling these exposures. Based on ongoing exposure studies, the industries most likely to be affected by a new rule on ultrafine dusts not specific to engineered nanomaterials include transportation, mining, paper and wood products, construction, and manufacturing.