Publications


Featured research published by Joseph J. DeGeorge.


Regulatory Toxicology and Pharmacology | 2017

Current nonclinical testing paradigms in support of safe clinical trials: An IQ Consortium DruSafe perspective

Lynne D. Butler; Peggy Guzzie-Peck; James Hartke; Matthew S. Bogdanffy; Yvonne Will; Dolores Diaz; Elisabeth Mortimer-Cassen; Mazin Derzi; Nigel Greene; Joseph J. DeGeorge

The transition from nonclinical to First-in-Human (FIH) testing is one of the most challenging steps in drug development. In response to serious outcomes in a recent Phase 1 trial (sponsored by Bial), IQ Consortium/DruSafe member companies reviewed their nonclinical approach to progress small molecules safely to FIH trials. As a common practice, safety evaluation begins with target selection and continues through iterative in silico and in vitro screening to identify molecules with increased probability of acceptable in vivo safety profiles. High attrition routinely occurs during this phase. In vivo exploratory and pivotal FIH-enabling toxicity studies are then conducted to identify molecules with a favorable benefit-risk profile for humans. The recent serious incident has reemphasized the importance of nonclinical testing plans that are customized to the target, the molecule, and the intended clinical plan. Despite the challenges and inherent risks of transitioning from nonclinical to clinical testing, Phase 1 studies have a remarkably good safety record. Given the rapid scientific evolution of safety evaluation, testing paradigms and regulatory guidance must evolve with emerging science. The authors posit that the practices described herein, together with science-based risk assessment and management, support safe FIH trials while advancing development of important new medicines.

Highlights:
- Transition from nonclinical phases to early clinical trials carries inherent risk.
- Current nonclinical paradigms have supported safe conduct of early clinical trials.
- High uncertainty may warrant a more conservative approach to the transition.
- This paper describes nonclinical approaches to assess and mitigate clinical risk.


Methods in Molecular Biology | 2008

Applications of Toxicogenomics to Nonclinical Drug Development: Regulatory Science Considerations

Frank D. Sistare; Joseph J. DeGeorge

Scientists in the pharmaceutical industry have ready access to samples from animal toxicology studies carefully designed to test the safety characteristics of a steady pipeline of agents advancing toward clinical testing. Applications of toxicogenomics to the evaluation of compounds could best be realized if this promising technology could be implemented in these studies fully anchored in the traditional study end points currently used to characterize phenotypic outcome and to support the safe conduct of clinical testing. Regulatory authorities worldwide have declared their support for toxicogenomics and related technological tools to positively impact drug development, and guidance has been published. However, application of exploratory omics technologies to compounds undergoing safety testing remains inhibited by two core implications and ambiguities around data submission responsibilities: (1) constraints arising from continual literature surveillance and data reanalysis burdens, under the shadow of looming reporting requirements to regulatory authorities as gene expression end points loosely linked to safety gain attention in the published literature, and (2) ambiguities in the interpretation of validation stature among exploratory, probable valid, and known valid safety biomarkers. A proposal is offered to address these regulatory implementation barriers and open access for exploring this technology in prospective drug development animal toxicology studies.


Toxicologic Pathology | 1995

Food and Drug Administration Viewpoints on Toxicokinetics: The View from Review

Joseph J. DeGeorge

The importance of drug kinetics for interpretation of toxicity findings and for cross-species toxicity assessment has been long recognized. Recently, an international effort was initiated to standardize guidance on the kinetic data to be collected in conjunction with toxicity studies. The guidance addresses the kinetic data to be included in studies on carcinogenicity, reproduction toxicity, genotoxicity, and single- and repeat-dose toxicity. In various stages of development or implementation, the guidance is intentionally nondetailed regarding the specific kinetic assessments to be performed. This is to allow flexibility in study design and ensures that scientific judgment is used to determine the appropriate kinetic endpoints to achieve study- and drug-specific goals. Some examples of how kinetics have been used at the Food and Drug Administration in review of toxicity studies submitted in drug applications are presented. The examples discussed demonstrate successful and unsuccessful integration of kinetics into study design and interpretation and highlight the impact on the drug development program from a regulatory perspective.


Expert Opinion on Drug Metabolism & Toxicology | 2016

Toxicogenomics in drug development: a match made in heaven?

Chunhua Qin; Keith Q. Tanis; Alexei Podtelezhnikov; Warren E. Glaab; Frank D. Sistare; Joseph J. DeGeorge

Compound toxicity accounts for approximately half of all drug failures during development. Currently accepted preclinical studies for drug safety evaluation are time, resource, and animal intensive, with often limited clinical predictivity. It is thus highly desirable to develop more efficient and predictive tools for early detection and assessment of potential compound liabilities. The emergence of genomics technologies over the last two decades promised to provide a solution. The premise of toxicogenomics (TGx) is straightforward: compounds with similar toxicity mechanisms and outcomes should perturb the transcriptome similarly, and these perturbations could be used as more efficient and/or more predictive biomarkers of downstream toxicity outcome. This concept was reinforced by a number of pioneering studies demonstrating, for example, strong correlations between histopathology, clinical chemistry, and gene expression when different hepatocellular injuries were induced by chemical agents, as reviewed in [1,2]. With such early advances, TGx was poised for earlier detection of a vast variety of drug-related outcomes, covering histopathologies across various organs, carcinogenicity, reproductive toxicity, etc., while deciphering mechanisms of action to create a more predictive and resource-sparing battery of tests for hazard identification, risk assessment, toxicity monitoring, and problem-solving across the drug development pipeline. This paradigm shift was anticipated to liberate the pharmaceutical and chemical industries from the current burden of toxicity liabilities, by enabling faster development of clinically safer compounds while reducing cost, infrastructure, and animal requirements.[1-3] TGx and drug discovery/development was expected to be a match made in heaven.


Regulatory Toxicology and Pharmacology | 2016

Role of chronic toxicology studies in revealing new toxicities

Alema Galijatovic-Idrizbegovic; Judith E. Miller; Wendy D. Cornell; James A. Butler; Gordon K. Wollenberg; Frank D. Sistare; Joseph J. DeGeorge

Chronic (>3 months) preclinical toxicology studies are conducted to support the safe conduct of clinical trials exceeding 3 months in duration. We have conducted a review of 32 chronic toxicology studies in non-rodents (22 studies in dogs and 10 in non-human primates) and 27 chronic toxicology studies in rats dosed with Merck compounds to determine the frequency at which additional target organ toxicities are observed in chronic toxicology studies as compared to subchronic studies of 3 months in duration. Our review shows that the majority of findings are observed in the subchronic studies: no additional target organs were observed in 24 of the chronic non-rodent studies and 21 of the chronic rodent studies. However, 6 studies in non-rodents and 6 studies in rodents yielded new findings that were not seen in studies of 3 months' duration or shorter. For 3 compounds the new safety findings contributed to termination of clinical development plans. Although the incidence of compound termination associated with chronic toxicology study observations is low (~10%), the observations made in these studies can be important for evaluating human safety risk.


Archive | 2013

The International Conference on Harmonisation: History of Safety Guidelines

Jan Willem van der Laan; Joseph J. DeGeorge

The International Conference on Harmonisation started in 1989. An overview is given of the milestones and history of its progress, focused on the safety topics.


Archive | 2013

Toward More Scientific Relevance in Carcinogenicity Testing

Jan Willem van der Laan; Joseph J. DeGeorge; Frank D. Sistare; Jonathan G. Moggs

Carcinogenicity testing was chosen as one of the topics wherein harmonization could lead to more efficient guidance for the pharmaceutical industry without compromising human safety. An important difference in dose-selection strategy was the "toxicological" approach of the US FDA versus the "clinical dose margin" approach of the EU CPMP and the Japanese MHLW. The dose-selection guidance describes several acceptable approaches, including a new approach based on a 25-fold AUC ratio.


Archive | 2013

Global Approach in Safety Testing

Jan Willem van der Laan; Joseph J. DeGeorge

Outline of a toxicokinetic study report (from Chapter 7, "Toxicokinetics: A Guidance for Assessing Systemic Exposure in Toxicology Studies"):

- Abstract
- Abbreviations
- Methods: reference standards and matrix preparation; study sample preparation and analysis; analytical conditions; acceptance criteria; overview of experimental design; toxicokinetic analysis, including calculation and statistical methods
- Results: overview of analysis; study sample data; calibration data; QC sample data; repeat analyses; incurred sample reanalysis (ISR), if applicable; toxicokinetic parameters for the parent compound with descriptive statistics; toxicokinetic parameters for metabolites with descriptive statistics, if applicable; additional statistical analyses, if applicable
- Discussion (keep brief and related to data contained in the report)
- Archiving of data
- References
- Tables: summary of runs; study sample data; calibration curve parameters; QC sample data; sample reanalysis data; ISR data, if applicable; individual toxicokinetic parameters of the parent compound and metabolites, where applicable, with descriptive statistics for each dose and sampling day; additional statistical analysis (e.g., linearity of kinetics), if applicable; individual plasma concentrations with descriptive statistics (parent compound and metabolites, if applicable) for each dose and sampling day
- Figures: calibration curves; representative chromatograms, including blanks; individual plasma profiles of the parent compound and metabolites, where applicable, for each dose, combining first, mid- and last dose where available; average plasma profiles with variability of the parent compound and metabolites, where applicable, combining first, mid- and last dose where available; linearity of kinetics

Failure to take plasma protein binding into account could result in significant errors in the safety margins, especially for compounds showing high plasma protein binding.
In some instances where the in vitro toxicity has been measured, such as specific receptor binding (e.g. cardiac 5-HT2B) or ion channel binding (e.g. hERG), the ratio of the toxic ED50 or ED20 to the unbound highest therapeutic plasma level at steady state can be used as another assessment of the safety margin. However, the interpretation of such data is confounded by protein binding and tissue uptake, which cannot easily be unravelled; nonetheless, this approach may provide additional information to support safe dosing decisions. The extent of the desired safety margin will depend somewhat on the therapeutic area for the drug, the risk-benefit ratio, and the type of toxicity. Mortality weighs more heavily, for example, than liver hypertrophy. As a general rule of thumb, margins greater than 30- to 50-fold may be considered high; a tenfold margin is usually the minimum, with margins below tenfold deemed acceptable for life-threatening diseases. For compounds treating life-threatening diseases for which there are very few, if any, alternatives, the safety margin tends to be lower; for this category, a tenfold safety margin or less may be acceptable. In practice, the FIH starting dose is based on the maximum recommended starting dose (MRSD) and not necessarily on toxicokinetics, whilst the maximum dose in early human studies depends on the worst-case parameter at the lowest NOAEL, independent of whether it is Cmax or AUC, bound or unbound; this maximum can be exceeded if humans tolerate the drug better than the animals used in safety studies, in which case 'safety margins' may be less than 1.
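The unbound-margin calculation described above can be sketched as a short Python function. This is a minimal illustration of the arithmetic, not the chapter's method; the function name and the example numbers are hypothetical.

```python
def unbound_safety_margin(toxic_ed50, cmax_total, fraction_unbound):
    """Safety margin as the ratio of an in vitro toxic potency (e.g. an
    ED50 from a receptor or hERG assay) to the unbound therapeutic plasma
    concentration at steady state. Both concentrations must share one
    unit (e.g. micromolar)."""
    # Correct the total plasma level for plasma protein binding.
    cmax_unbound = cmax_total * fraction_unbound
    return toxic_ed50 / cmax_unbound

# Hypothetical highly bound drug: toxic ED50 of 10 uM, total Cmax of
# 2 uM, 5% unbound in plasma. Using the total concentration suggests
# only a 5-fold margin; the unbound concentration (0.1 uM) gives a
# 100-fold margin.
naive_margin = 10.0 / 2.0
margin = unbound_safety_margin(10.0, 2.0, 0.05)
```

This illustrates the point made above: for highly protein-bound compounds, neglecting the unbound fraction can shift the apparent margin by more than an order of magnitude in either direction.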


ILAR Journal | 2016

Scientific Knowledge and Technology, Animal Experimentation, and Pharmaceutical Development

Lewis Kinter; Joseph J. DeGeorge

Human discovery of pharmacologically active substances is arguably the oldest of the biomedical sciences with origins >3500 years ago. Since ancient times, four major transformations have dramatically impacted pharmaceutical development, each driven by advances in scientific knowledge, technology, and/or regulation: (1) anesthesia, analgesia, and antisepsis; (2) medicinal chemistry; (3) regulatory toxicology; and (4) targeted drug discovery. Animal experimentation in pharmaceutical development is a modern phenomenon dating from the 20th century and enabling several of the four transformations. While each transformation resulted in more effective and/or safer pharmaceuticals, overall attrition, cycle time, cost, numbers of animals used, and low probability of success for new products remain concerns, and pharmaceutical development remains a very high risk business proposition. In this manuscript we review pharmaceutical development since ancient times, describe its coevolution with animal experimentation, and attempt to predict the characteristics of future transformations.


Toxicologic Pathology | 2015

Regulatory Forum Commentary: Counterpoint: Dose Selection for Tg.rasH2 Mouse Carcinogenicity Studies

Jarig Darbes; Frank D. Sistare; Joseph J. DeGeorge

High-dose selection for 6-month carcinogenicity studies of pharmaceutical candidates in Tg.rasH2-transgenic mice currently primarily relies on (1) estimation of a maximum tolerated dose (MTD) from the results of a 1-month range-finding study, (2) determination of the maximum dose administrable to the animals (maximum feasible dose [MFD]), (3) demonstration of a plateau in systemic exposure, and (4) use of a limit dose of 1,500 mg/kg/day for products with human daily doses not exceeding 500 mg. Eleven 6-month Tg.rasH2 carcinogenicity studies and their corresponding 1-month range-finding studies conducted at Merck were reviewed. High doses were set by estimation of the MTD in 6, by plateau of exposure in 3, and by MFD in 2 cases. For 4 of 6 studies where MTD was used for high-dose selection, the 1-month study accurately predicted the 6-month study tolerability whereas in the remaining 2 studies the high doses showed poorer tolerability than expected. The use of 3 or more drug-treated dose levels proved useful to ensure that a study would successfully and unambiguously demonstrate that a drug candidate was adequately evaluated for carcinogenicity at a minimally toxic high dose level, especially when the high dose may be found to exceed the MTD.
