Patrick B. Ryan
Janssen Pharmaceutica
Publications
Featured research published by Patrick B. Ryan.
Annals of Internal Medicine | 2010
Paul E. Stang; Patrick B. Ryan; Judith A. Racoosin; J. Marc Overhage; Abraham G. Hartzema; Christian G. Reich; Emily Welebob; Thomas Scarnecchia; Janet Woodcock
The U.S. Food and Drug Administration (FDA) Amendments Act of 2007 mandated that the FDA develop a system for using automated health care data to identify risks of marketed drugs and other medical products. The Observational Medical Outcomes Partnership is a public-private partnership among the FDA, academia, data owners, and the pharmaceutical industry that is responding to the need to advance the science of active medical product safety surveillance by using existing observational databases. The Observational Medical Outcomes Partnership's transparent, open innovation approach is designed to systematically and empirically study critical governance, data resource, and methodological issues and their interrelationships in establishing a viable national program of active drug safety surveillance by using observational data. This article describes the governance structure, data-access model, methods-testing approach, and technology development of this effort, as well as the work that has been initiated.
Clinical Pharmacology & Therapeutics | 2012
Rave Harpaz; William DuMouchel; Nigam H. Shah; David Madigan; Patrick B. Ryan; Carol Friedman
An important goal of the health system is to identify new adverse drug events (ADEs) in the postapproval period. Data‐mining methods that can transform data into meaningful knowledge to inform patient safety have proven essential for this purpose. New opportunities have emerged to harness data sources that have not been used within the traditional framework. This article provides an overview of recent methodological innovations and data sources used to support ADE discovery and analysis.
Statistics in Medicine | 2012
Patrick B. Ryan; David Madigan; Paul E. Stang; J. Marc Overhage; Judith A. Racoosin; Abraham G. Hartzema
BACKGROUND Expanded availability of observational healthcare data (both administrative claims and electronic health records) has prompted the development of statistical methods for identifying adverse events associated with medical products, but the operating characteristics of these methods when applied to real-world data are unknown. METHODS We studied the performance of eight analytic methods for estimating the strength of association (relative risk, RR) and its associated standard error for 53 drug-outcome pairs, both positive and negative controls. The methods were applied to a network of ten observational healthcare databases, comprising over 130 million lives. Performance measures included sensitivity, specificity, and positive predictive value of methods at RR thresholds achieving statistical significance of p < 0.05 or p < 0.001 and with absolute threshold RR > 1.5, as well as threshold-free measures such as area under the receiver operating characteristic curve (AUC). RESULTS Although no specific method demonstrated superior performance, the aggregate results provide a benchmark and baseline expectation for risk identification method performance. At traditional levels of statistical significance (RR > 1, p < 0.05), all methods have a false positive rate >18%, with positive predictive value <38%. The best predictive model, high-dimensional propensity score, achieved an AUC = 0.77. At 50% sensitivity, false positive rate ranged from 16% to 30%. At 10% false positive rate, sensitivity of the methods ranged from 9% to 33%. CONCLUSIONS Systematic processes for risk identification can provide useful information to supplement an overall safety assessment, but assessment of method performance suggests a substantial chance of identifying false positive associations.
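A minimal sketch of the performance measures named in this abstract, assuming a handful of invented drug-outcome test cases with ground-truth labels. It is not the study's code; the decision rule (RR > 1 with p < 0.05) is just one of the thresholds mentioned above, and the values are illustrative.

```python
# Sketch: sensitivity, specificity, PPV, and AUC for hypothetical risk-identification output.
from itertools import product

cases = [  # (rr_estimate, p_value, is_positive_control) -- illustrative values only
    (2.10, 0.001, 1), (1.05, 0.600, 0), (1.80, 0.030, 1),
    (0.95, 0.700, 0), (1.40, 0.040, 0), (3.00, 0.0005, 1),
]

def classify(rr, p, rr_threshold=1.0, alpha=0.05):
    """Flag a pair as a signal if RR exceeds the threshold and p is below alpha."""
    return rr > rr_threshold and p < alpha

tp = sum(1 for rr, p, y in cases if classify(rr, p) and y == 1)
fp = sum(1 for rr, p, y in cases if classify(rr, p) and y == 0)
fn = sum(1 for rr, p, y in cases if not classify(rr, p) and y == 1)
tn = sum(1 for rr, p, y in cases if not classify(rr, p) and y == 0)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)

# Threshold-free discrimination: AUC as the probability that a randomly chosen
# positive control gets a higher RR estimate than a randomly chosen negative control.
pos = [rr for rr, _, y in cases if y == 1]
neg = [rr for rr, _, y in cases if y == 0]
auc = sum((a > b) + 0.5 * (a == b) for a, b in product(pos, neg)) / (len(pos) * len(neg))

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f} auc={auc:.2f}")
```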
Clinical Pharmacology & Therapeutics | 2013
Rave Harpaz; William DuMouchel; Paea LePendu; Anna Bauer-Mehren; Patrick B. Ryan; Nigam H. Shah
Signal‐detection algorithms (SDAs) are recognized as vital tools in pharmacovigilance. However, their performance characteristics are generally unknown. By leveraging a unique gold standard recently made public by the Observational Medical Outcomes Partnership (OMOP) and by conducting a unique systematic evaluation, we provide new insights into the diagnostic potential and characteristics of SDAs that are routinely applied to the US Food and Drug Administration (FDA) Adverse Event Reporting System (AERS). We find that SDAs can attain reasonable predictive accuracy in signaling adverse events. Two performance classes emerge, indicating that the class of approaches that address confounding and masking effects benefits safety surveillance. Our study shows that not all events are equally detectable, suggesting that specific events might be monitored more effectively using other data sources. We provide performance guidelines for several operating scenarios to inform the trade‐off between sensitivity and specificity for specific use cases. We also propose an approach and demonstrate its application in identifying optimal signaling thresholds, given specific misclassification tolerances.
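The trade-off between sensitivity and specificity described above can be made concrete with a small sketch: given hypothetical signal scores for positive and negative controls, pick the lowest threshold whose false-positive rate stays within a stated misclassification tolerance. The scores, labels, and tolerance are invented; this is not the paper's procedure.

```python
# Sketch: threshold selection under a false-positive-rate tolerance (hypothetical data).
scored = [  # (signal_score, is_positive_control) -- illustrative values only
    (4.2, 1), (3.1, 1), (2.5, 0), (2.2, 1), (1.8, 0), (1.1, 0), (0.9, 1), (0.4, 0),
]

def pick_threshold(scored, max_fpr=0.25):
    """Return the lowest threshold whose false-positive rate does not exceed max_fpr."""
    neg = [s for s, y in scored if y == 0]
    pos = [s for s, y in scored if y == 1]
    best = None
    for t in sorted({s for s, _ in scored}, reverse=True):
        fpr = sum(s >= t for s in neg) / len(neg)
        if fpr <= max_fpr:
            sens = sum(s >= t for s in pos) / len(pos)
            best = (t, fpr, sens)  # keep updating: lower thresholds admit more signals
    return best

threshold, fpr, sensitivity = pick_threshold(scored, max_fpr=0.25)
print(f"threshold={threshold} fpr={fpr:.2f} sensitivity={sensitivity:.2f}")
```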
American Journal of Epidemiology | 2013
David Madigan; Patrick B. Ryan; Martijn J. Schuemie; Paul E. Stang; J. Marc Overhage; Abraham G. Hartzema; Marc A. Suchard; William DuMouchel; Jesse A. Berlin
Clinical studies that use observational databases to evaluate the effects of medical products have become commonplace. Such studies begin by selecting a particular database, a decision that published papers invariably report but do not discuss. Studies of the same issue in different databases, however, can and do generate different results, sometimes with strikingly different clinical implications. In this paper, we systematically study heterogeneity among databases, holding other study methods constant, by exploring relative risk estimates for 53 drug-outcome pairs and 2 widely used study designs (cohort studies and self-controlled case series) across 10 observational databases. When holding the study design constant, our analysis shows that estimated relative risks range from a statistically significant decreased risk to a statistically significant increased risk for 11 of 53 (21%) drug-outcome pairs under the cohort design and 19 of 53 (36%) under the self-controlled case series design. This exceeds the proportion of pairs that were consistent across databases in both direction and statistical significance, which was 9 of 53 (17%) for cohort studies and 5 of 53 (9%) for self-controlled case series. Our findings show that clinical studies that use observational databases can be sensitive to the choice of database. More attention is needed to how the choice of data source may affect results.
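An illustrative sketch of the consistency check described above, assuming invented relative-risk estimates and 95% confidence intervals from three hypothetical databases. The classification rule (CI entirely below or above 1) is a standard reading of "statistically significant decrease/increase", not code from the study.

```python
# Sketch: do per-database results for one drug-outcome pair agree in direction and significance?
estimates = {  # database name -> (RR, 95% CI lower, 95% CI upper); values are invented
    "db_A": (0.72, 0.60, 0.86),
    "db_B": (1.05, 0.90, 1.22),
    "db_C": (1.64, 1.31, 2.05),
}

def classify(rr, lo, hi):
    if hi < 1.0:
        return "significant decrease"
    if lo > 1.0:
        return "significant increase"
    return "not significant"

labels = {db: classify(*est) for db, est in estimates.items()}
conflicting = ("significant decrease" in labels.values()
               and "significant increase" in labels.values())
consistent = len(set(labels.values())) == 1

print(labels)
print("conflicting directions:", conflicting, "| fully consistent:", consistent)
```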
Statistical Methods in Medical Research | 2013
Ivan Zorych; David Madigan; Patrick B. Ryan; Andrew Bate
Data mining disproportionality methods (PRR, ROR, EBGM, IC, etc.) are commonly used to identify drug safety signals in spontaneous reporting system (SRS) databases. Newer data sources such as longitudinal observational databases (LODs) provide time-stamped patient-level information and overcome some of the SRS limitations, such as the absence of a denominator (the total number of patients exposed to a drug) and limited temporal information. Application of disproportionality methods to LODs has not been widely explored, and the scale of LOD data poses an interesting computational challenge: larger health claims databases contain information on more than 50 million patients, and each patient may have records spanning up to 10 years. In this article we systematically explore the application of commonly used disproportionality methods to simulated and real LOD data.
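For orientation, a short sketch of the two simplest disproportionality statistics listed above (PRR and ROR), computed from an assumed 2x2 spontaneous-report contingency table; the counts are invented.

```python
# Sketch: proportional reporting ratio and reporting odds ratio from a 2x2 table.
# a = reports with the drug and the event, b = drug without the event,
# c = event without the drug, d = neither.

def prr(a, b, c, d):
    """Proportional reporting ratio."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio."""
    return (a * d) / (b * c)

a, b, c, d = 30, 970, 200, 98_800  # illustrative counts
print(f"PRR={prr(a, b, c, d):.2f}  ROR={ror(a, b, c, d):.2f}")
```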
Drug Safety | 2013
Patrick B. Ryan; Martijn J. Schuemie; Emily Welebob; Jon D. Duke; Sarah Valentine; Abraham G. Hartzema
Background: Methodological research to evaluate the performance of methods requires a benchmark to serve as a referent comparison. In drug safety, the performance of analyses of spontaneous adverse event reporting databases and observational healthcare data, such as administrative claims and electronic health records, has been limited by the lack of such standards.
Objectives: To establish a reference set of test cases containing both positive and negative controls, which can serve as the basis for methodological research evaluating method performance in identifying drug safety issues.
Research Design: A systematic literature review and natural language processing of structured product labeling were performed to identify evidence supporting the classification of drugs as either positive or negative controls for four outcomes: acute liver injury, acute kidney injury, acute myocardial infarction, and upper gastrointestinal bleeding.
Results: Three hundred and ninety-nine test cases, comprising 165 positive controls and 234 negative controls, were identified across the four outcomes. The majority of positive controls for acute kidney injury and upper gastrointestinal bleeding were supported by randomized clinical trial evidence, while the majority of positive controls for acute liver injury and acute myocardial infarction were supported only by published case reports. Literature estimates for the positive controls show substantial variability, which limits the ability to establish a reference set with known effect sizes.
Conclusions: A reference set of test cases can be established to facilitate methodological research in drug safety. Creating a sufficient sample of drug-outcome pairs with a binary classification of having no effect (negative controls) or an increased effect (positive controls) is possible and can enable estimation of predictive accuracy through discrimination. Since the magnitude of the positive effects cannot be reliably obtained and the quality of evidence may vary across outcomes, assumptions are required to use the test cases in real data for purposes of measuring bias, mean squared error, or coverage probability.
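A hypothetical sketch of how such a reference set might be represented, with each drug-outcome test case carrying a binary ground-truth label; the example entries are illustrative, not drawn from the published set.

```python
# Sketch: reference-set records and a tally of controls per outcome (illustrative entries only).
from collections import Counter

reference_set = [  # (drug, outcome, is_positive_control)
    ("isoniazid", "acute liver injury", 1),
    ("loratadine", "acute liver injury", 0),
    ("ibuprofen", "upper gastrointestinal bleeding", 1),
    ("metformin", "acute kidney injury", 0),
]

counts = Counter((outcome, "positive" if y else "negative")
                 for _, outcome, y in reference_set)
for (outcome, label), n in sorted(counts.items()):
    print(f"{outcome}: {n} {label} control(s)")
```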
Studies in health technology and informatics | 2015
George Hripcsak; Jon D. Duke; Nigam H. Shah; Christian G. Reich; Vojtech Huser; Martijn J. Schuemie; Marc A. Suchard; Rae Woong Park; Ian C. K. Wong; Peter R. Rijnbeek; Johan van der Lei; Nicole L. Pratt; G. Niklas Norén; Yu Chuan Li; Paul E. Stang; David Madigan; Patrick B. Ryan
The vision of creating accessible, reliable clinical evidence by accessing the clinical experience of hundreds of millions of patients across the globe is a reality. Observational Health Data Sciences and Informatics (OHDSI) has built on learnings from the Observational Medical Outcomes Partnership to turn methods research and insights into a suite of applications and exploration tools that move the field closer to the ultimate goal of generating evidence about all aspects of healthcare to serve the needs of patients, clinicians and all other decision-makers around the world.
Statistics in Medicine | 2014
Martijn J. Schuemie; Patrick B. Ryan; William DuMouchel; Marc A. Suchard; David Madigan
Often the literature makes assertions of medical product effects on the basis of ‘p < 0.05’. The underlying premise is that at this threshold, there is only a 5% probability that the observed effect would be seen by chance when in reality there is no effect. In observational studies, much more than in randomized trials, bias and confounding may undermine this premise. To test this premise, we selected three exemplar drug safety studies from the literature, representing a case–control, a cohort, and a self-controlled case series design. We attempted to replicate these studies as closely as we could for the drugs studied in the original articles. Next, we applied the same three designs to sets of negative controls: drugs that are not believed to cause the outcome of interest. We observed how often p < 0.05 when the null hypothesis is true, and we fitted distributions to the effect estimates. Using these distributions, we computed calibrated p-values that reflect the probability of observing the effect estimate under the null hypothesis, taking both random and systematic error into account. An automated analysis of the scientific literature was performed to evaluate the potential impact of such a calibration. Our experiment provides evidence that the majority of observational studies would declare statistical significance when no effect is present. Empirical calibration was found to reduce spurious results to the desired 5% level. Applying these adjustments to the literature suggests that at least 54% of findings with p < 0.05 are not actually statistically significant and should be reevaluated.
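A deliberately simplified sketch of the calibration idea: fit an empirical null distribution to log relative-risk estimates from negative controls, then compute a calibrated p-value for a new estimate against it. The normal approximation and the neglect of per-estimate standard errors are simplifying assumptions made here; the paper's method is richer, and the numbers are invented.

```python
# Simplified sketch of empirical p-value calibration (assumptions noted in the text above).
import math
from statistics import mean, stdev

# Log relative-risk estimates observed for negative controls (invented values).
negative_control_log_rr = [0.10, -0.05, 0.22, 0.35, 0.02, 0.18, -0.12, 0.28]

mu = mean(negative_control_log_rr)     # systematic error (bias) of the design
sigma = stdev(negative_control_log_rr) # spread of the empirical null

def calibrated_p_value(log_rr):
    """Two-sided p-value of an estimate under the fitted empirical null."""
    z = (log_rr - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

new_estimate = math.log(1.5)  # a nominally 'significant' RR of 1.5
print(f"calibrated p = {calibrated_p_value(new_estimate):.3f}")
```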
Proceedings of the National Academy of Sciences of the United States of America | 2016
George Hripcsak; Patrick B. Ryan; Jon D. Duke; Nigam H. Shah; Rae Woong Park; Vojtech Huser; Marc A. Suchard; Martijn J. Schuemie; Frank J. DeFalco; Adler J. Perotte; Juan M. Banda; Christian G. Reich; Lisa M. Schilling; Michael E. Matheny; Daniella Meeker; Nicole L. Pratt; David Madigan
Observational research promises to complement experimental research by providing large, diverse populations that would be infeasible for an experiment. Observational research can test its own clinical hypotheses, and observational studies also can contribute to the design of experiments and inform the generalizability of experimental research. Understanding the diversity of populations and the variance in care is one component. In this study, the Observational Health Data Sciences and Informatics (OHDSI) collaboration created an international data network with 11 data sources from four countries, including electronic health records and administrative claims data on 250 million patients. All data were mapped to common data standards, patient privacy was maintained by using a distributed model, and results were aggregated centrally. Treatment pathways were elucidated for type 2 diabetes mellitus, hypertension, and depression. The pathways revealed that the world is moving toward more consistent therapy over time across diseases and across locations, but significant heterogeneity remains among sources, pointing to challenges in generalizing clinical trial results. Diabetes favored a single first-line medication, metformin, to a much greater extent than hypertension or depression. About 10% of diabetes and depression patients and almost 25% of hypertension patients followed a treatment pathway that was unique within the cohort. Aside from factors such as sample size and underlying population (academic medical center versus general population), electronic health records data and administrative claims data revealed similar results. Large-scale international observational research is feasible.
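A small sketch of the pathway idea described above, assuming invented drug-start records: each patient's pathway is the ordered sequence of distinct medications, and the share of patients whose pathway is unique within the cohort is then tallied. This is illustrative, not the OHDSI analysis code.

```python
# Sketch: per-patient treatment pathways and the share of patients on a unique pathway.
from collections import Counter

drug_records = [  # (patient_id, start_date, drug) -- illustrative data only
    (1, "2012-01-05", "metformin"), (1, "2013-03-10", "sulfonylurea"),
    (2, "2012-02-01", "metformin"),
    (3, "2012-04-12", "sulfonylurea"), (3, "2012-09-01", "metformin"),
    (4, "2012-05-20", "metformin"), (4, "2012-05-20", "metformin"),
]

pathways = {}
for pid, date, drug in sorted(drug_records, key=lambda r: (r[0], r[1])):
    seq = pathways.setdefault(pid, [])
    if not seq or seq[-1] != drug:  # collapse consecutive repeats of the same drug
        seq.append(drug)

pathway_counts = Counter(tuple(seq) for seq in pathways.values())
unique_share = sum(1 for seq in pathways.values()
                   if pathway_counts[tuple(seq)] == 1) / len(pathways)

print(pathway_counts)
print(f"patients on a unique pathway: {unique_share:.0%}")
```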