
Publication


Featured research published by Manfred Hauben.


Drug Safety | 2005

Perspectives on the Use of Data Mining in Pharmacovigilance

June S. Almenoff; Joseph M. Tonning; A. Lawrence Gould; Ana Szarfman; Manfred Hauben; Rita Ouellet-Hellstrom; Robert Ball; Ken Hornbuckle; Louisa Walsh; Chuen Yee; Susan Sacks; Nancy Yuen; Vaishali Patadia; Michael Blum; Mike Johnston; Charles Gerrits; Harry Seifert; Karol LaCroix

In the last 5 years, regulatory agencies and drug monitoring centres have been developing computerised data-mining methods to better identify reporting relationships in spontaneous reporting databases that could signal possible adverse drug reactions. At present, there are no guidelines or standards for the use of these methods in routine pharmacovigilance. In 2003, a group of statisticians, pharmacoepidemiologists and pharmacovigilance professionals from the pharmaceutical industry and the US FDA formed the Pharmaceutical Research and Manufacturers of America-FDA Collaborative Working Group on Safety Evaluation Tools to review best practices for the use of these methods. In this paper, we provide an overview of: (i) the statistical and operational attributes of several currently used methods and their strengths and limitations; (ii) information about the characteristics of various postmarketing safety databases with which these tools can be deployed; (iii) analytical considerations for using safety data-mining methods and interpreting the results; and (iv) points to consider in integration of safety data mining with traditional pharmacovigilance methods. Perspectives from both the FDA and the industry are provided. Data mining is a potentially useful adjunct to traditional pharmacovigilance methods. The results of data mining should be viewed as hypothesis generating and should be evaluated in the context of other relevant data. The availability of a publicly accessible global safety database, which is updated on a frequent basis, would further enhance detection and communication about safety issues.


Expert Opinion on Drug Safety | 2005

The role of data mining in pharmacovigilance.

Manfred Hauben; David Madigan; Charles M. Gerrits; Louisa Walsh; Eugène P. van Puijenbroek

A principal concern of pharmacovigilance is the timely detection of adverse drug reactions that are novel by virtue of their clinical nature, severity and/or frequency. The cornerstone of this process is the scientific acumen of the pharmacovigilance domain expert. There is understandably an interest in developing database screening tools to assist human reviewers in identifying associations worthy of further investigation (i.e., signals) embedded within a database consisting largely of background ‘noise’ containing reports of no substantial public health significance. Data mining algorithms are, therefore, being developed, tested and/or used by health authorities, pharmaceutical companies and academic researchers. After a focused review of postapproval drug safety signal detection, the authors explain how the currently used algorithms work and address key questions related to their validation, comparative performance, deployment in naturalistic pharmacovigilance settings, limitations and potential for misuse. Suggestions for further research and development are offered.


Drug Safety | 2003

Quantitative methods in pharmacovigilance: Focus on signal detection

Manfred Hauben; Xiaofeng Zhou

Pharmacovigilance serves to detect previously unrecognised adverse events associated with the use of medicines. The simplest method for detecting signals of such events is crude inspection of lists of spontaneously reported drug-event combinations. Quantitative and automated numerator-based methods such as Bayesian data mining can supplement or supplant these methods. The theoretical basis and limitations of these methods should be understood by drug safety professionals, and automated methods should not be automatically accepted. Published evaluations of these techniques are mainly limited to large regulatory databases, and performance characteristics may differ in smaller safety databases of drug developers. Head-to-head comparisons of the major techniques have not been published. Regardless of previous statistical training, pharmacovigilance practitioners should understand how these methods work. The mathematical basis of these techniques should not obscure the numerous confounders and biases inherent in the data. This article seeks to make automated signal detection methods transparent to drug safety professionals of various backgrounds. This is accomplished by first providing a brief overview of the evolution of signal detection followed by a series of sections devoted to the methods with the greatest utilisation and evidentiary support: proportional reporting ratios, the Bayesian Confidence Propagation Neural Network and empirical Bayes screening. Sophisticated yet intuitive explanations are provided for each method, supported by figures in which the underlying statistical concepts are explored. Finally, the strengths, limitations, pitfalls and outstanding unresolved issues are discussed. Pharmacovigilance specialists should not be intimidated by the mathematics. Understanding the theoretical basis of these methods should enhance the effective assessment and possible implementation of these techniques by drug safety professionals.
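The proportional reporting ratio discussed in this abstract is simple enough to sketch directly. A minimal sketch, using hypothetical 2×2 table counts chosen only to illustrate the calculation:

```python
# Proportional reporting ratio (PRR) from the standard 2x2 contingency
# table of spontaneous reports (counts below are hypothetical):
#
#                      event of interest   all other events
#   drug of interest          a                   b
#   all other drugs           c                   d
def prr(a, b, c, d):
    """PRR = rate of the event among reports for the drug of interest,
    divided by its rate among reports for all other drugs."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 reports of the event for the drug, 380 other
# reports for the drug, 100 reports of the event for all other drugs,
# and 9500 remaining reports.
print(round(prr(20, 380, 100, 9500), 2))  # → 4.8
```

A PRR of 4.8 means the event is reported about five times as often, proportionally, for this drug as for the rest of the database; as the surrounding abstracts stress, this is a screening statistic, not evidence of causality.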


Drug Discovery Today | 2009

Decision support methods for the detection of adverse events in post-marketing data

Manfred Hauben; Andrew Bate

Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.


Drug Safety | 2009

Defining ‘Signal’ and its Subtypes in Pharmacovigilance Based on a Systematic Review of Previous Definitions

Manfred Hauben; Jeffrey Aronson

Having surveyed the etymology and previous definitions of the pharmacovigilance term ‘signal’, we propose a definition that embraces all the surveyed ideas, reflects real-world pharmacovigilance processes, and accommodates signals of both harmful and beneficial effects. The essential definitional features of a pharmacovigilance signal are (i) that it is based on one or more reports of an association between an intervention or interventions and an event or set of related events (e.g. a syndrome), including any type of evidence (clinical or experimental); (ii) that it represents an association that is new and important and has not been previously investigated and refuted; (iii) that it incites to action (verification and remedial action); (iv) that it does not encompass intervention-event associations that are not related to causality or risk with a specified degree of likelihood and scientific plausibility. Based on these features, we propose this definition of a signal of suspected causality: “information that arises from one or multiple sources (including observations and experiments), which suggests a new potentially causal association, or a new aspect of a known association, between an intervention and an event or set of related events, either adverse or beneficial, which would command regulatory, societal or clinical attention, and is judged to be of sufficient likelihood to justify verificatory and, when necessary, remedial actions.” This defines an unverified signal; we have also defined terms (indeterminate, verified, and refuted signals) that qualify it in relation to verification. This definition and its accompanying flowchart should inform decision making in considering benefits and harms caused by pharmacological and nonpharmacological interventions.


Drug Safety | 2005

Data Mining in Pharmacovigilance

Manfred Hauben; Vaishali K. Patadia; Charles M. Gerrits; Louisa Walsh; Lester Reich

Data mining is receiving considerable attention as a tool for pharmacovigilance and is generating many perspectives on its uses. This paper presents four concepts that have appeared in various professional venues and represent potential sources of misunderstanding and/or entail extended discussions: (i) data mining algorithms are unvalidated; (ii) data mining algorithms allow data miners to objectively screen spontaneous report data; (iii) mathematically more complex Bayesian algorithms are superior to frequentist algorithms; and (iv) data mining algorithms are not just for hypothesis generation. Key points for a balanced perspective are that: (i) validation exercises have been done but lack a gold standard for comparison and are complicated by numerous nuances and pitfalls in the deployment of data mining algorithms. Their performance is likely to be highly situation dependent; (ii) the subjective nature of data mining is often underappreciated; (iii) simpler data mining models can be supplemented with ‘clinical shrinkage’, preserving sensitivity; and (iv) applications of data mining beyond hypothesis generation are risky, given the limitations of the data. These extended applications tend to ‘creep’, not pounce, into the public domain, leading to potential overconfidence in their results. Most importantly, in the enthusiasm generated by the promise of data mining tools, users must keep in mind the limitations of the data and the importance of clinical judgment and context, regardless of statistical arithmetic. In conclusion, we agree that contemporary data mining algorithms are promising additions to the pharmacovigilance toolkit, but the level of verification required should be commensurate with the nature and extent of the claimed applications.


Drug Safety | 2004

Safety-Related Drug-Labelling Changes: Findings from Two Data Mining Algorithms

Manfred Hauben; Lester Reich

Introduction: With increasing volumes of postmarketing safety surveillance data, data mining algorithms (DMAs) have been developed to search large spontaneous reporting system (SRS) databases for disproportional statistical dependencies between drugs and events. A crucial question is the proper deployment of such techniques within the universe of methods historically used for signal detection. One question of interest is comparative performance of algorithms based on simple forms of disproportionality analysis versus those incorporating Bayesian modelling. A potential benefit of Bayesian methods is a reduced volume of signals, including false-positive signals. Objective: To compare performance of two well described DMAs (proportional reporting ratios [PRRs] and an empirical Bayesian algorithm known as multi-item gamma Poisson shrinker [MGPS]) using commonly recommended thresholds on a diverse data set of adverse events that triggered drug labelling changes. Methods: PRRs and MGPS were retrospectively applied to a diverse sample of drug-event combinations (DECs) identified on a government Internet site for a 7-month period. Metrics for this comparative analysis included the number and proportion of these DECs that generated signals of disproportionate reporting with PRRs, MGPS, both or neither method, differential timing of signal generation between the two methods, and clinical nature of events that generated signals with only one, both or neither method. Results: There were 136 relevant DECs that triggered safety-related labelling changes for 39 drugs during a 7-month period. PRRs generated a signal of disproportionate reporting with almost twice as many DECs as MGPS (77 vs 40). No DECs were flagged by MGPS only. PRRs highlighted DECs in advance of MGPS (1–15 years) and a label change (1–30 years). For 59 DECs, there was no signal with either DMA.
DECs generating signals of disproportionate reporting with only PRRs were both medically serious and non-serious. Discussion/conclusion: In most instances in which a DEC generated a signal of disproportionate reporting with both DMAs (almost twice as many with PRRs), the signal was generated using PRRs in advance of MGPS. No medically important events were signalled only by MGPS. It is likely that the incremental utility of DMAs are highly situation-dependent. It is clear, however, that the volume of signals generated by itself is an inadequate criterion for comparison and that clinical nature of signalled events and differential timing of signals needs to be considered. Accepting commonly recommended threshold criteria for DMAs examined in this study as universal benchmarks for signal detection is not justified.
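The "commonly recommended thresholds" for PRRs referred to in this abstract are often taken, in the pharmacovigilance literature, to be PRR ≥ 2, chi-squared ≥ 4 and at least three reports of the drug-event combination. A minimal sketch of that rule of thumb, with hypothetical counts (the specific thresholds are an assumption of this example, not a value stated in the paper):

```python
# Flag a signal of disproportionate reporting (SDR) using a common
# frequentist rule of thumb: PRR >= 2, chi-squared >= 4, and at least
# 3 reports of the drug-event combination. Cells of the usual 2x2 table:
# a = reports of the event for the drug, b = other reports for the drug,
# c = reports of the event for all other drugs, d = the remaining reports.
def sdr_flag(a, b, c, d):
    prr = (a / (a + b)) / (c / (c + d))
    n = a + b + c + d
    # Pearson chi-squared for a 2x2 table, without continuity correction
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return prr >= 2 and chi2 >= 4 and a >= 3

print(sdr_flag(20, 380, 100, 9500))  # hypothetical counts: flagged
print(sdr_flag(2, 398, 100, 9500))   # fewer than 3 reports: not flagged
```

As the abstract's conclusion cautions, accepting any such threshold triplet as a universal benchmark is not justified; performance is situation dependent.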


Drug Safety | 2012

Gold Standards in Pharmacovigilance

Manfred Hauben; Jeffrey Aronson

Anecdotal reports of adverse drug reactions are generally regarded as being of poor evidential quality. This is especially relevant for postmarketing drug safety surveillance, which relies heavily on spontaneous anecdotal reports. The numerous limitations of spontaneous reports cannot be overemphasised, but there is another side to the story: these datasets also contain anecdotal reports that can be considered to describe definitive adverse reactions, without the need for further formal verification. We have previously defined four categories of such adverse reactions: (i) extracellular or intracellular tissue deposition of the drug or a metabolite; (ii) a specific anatomical location or pattern of injury; (iii) physiological dysfunction or direct tissue damage demonstrable by physicochemical testing; and (iv) infection, as a result of the administration of an infective agent as the therapeutic substance or because of demonstrable contamination. In this article, we discuss the implications of these definitive (‘between-the-eyes’) adverse effects for pharmacovigilance.


Annals of Pharmacotherapy | 2003

A brief primer on automated signal detection.

Manfred Hauben

BACKGROUND: Statistical techniques have traditionally been underused in spontaneous reporting systems used for postmarketing surveillance of adverse drug events. Regulatory agencies, pharmaceutical companies, and drug monitoring centers have recently devoted considerable efforts to develop and implement computer-assisted automated signal detection methodologies that employ statistical theory to enhance screening efforts of expert clinical reviewers. OBJECTIVE: To provide a concise state-of-the-art review of the most commonly used automated signal detection procedures, including the underlying statistical concepts, performance characteristics, outstanding limitations, and issues to be resolved. DATA SOURCES: Primary articles were identified by MEDLINE search (1965–December 2002) and through secondary sources. STUDY SELECTION AND DATA EXTRACTION: All of the articles identified from the data sources were evaluated and all information deemed relevant was included in this review. DATA SYNTHESIS: Commonly used methods of automated signal detection are self-contained and involve screening large databases of spontaneous adverse event reports in search of interestingly large disproportionalities or dependencies between significant variables, usually single drug–event pairs, based on an underlying model of statistical independence. The models vary according to the underlying model of statistical independence and whether additional mathematical modeling using Bayesian analysis is applied to the crude measures of disproportionality. There are many potential advantages and disadvantages of these methods, as well as significant unresolved issues related to the application of these techniques, including the lack of comprehensive head-to-head comparisons in a single large transnational database, the lack of prospective evaluations, and the lack of a gold standard for signal detection.
CONCLUSIONS: Current methods of automated signal detection are nonclinical and only highlight deviations from independence without explaining whether these deviations are due to a causal linkage or numerous potential confounders. They therefore cannot replace expert clinical reviewers, but can help them to focus attention when confronted with the difficult task of screening huge numbers of drug–event combinations for potential signals. Important questions remain to be answered about the performance characteristics of these methods. Pharmacovigilance professionals should take the time to learn the underlying mathematical concepts in order to critically evaluate accumulating experience pertaining to the relative performance characteristics of these methods that are incompletely defined.
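The "underlying model of statistical independence" described above reduces to a simple expected count: if a drug and an event were reported independently, the expected number of reports mentioning both is the product of their marginal totals divided by the database size, and the crude disproportionality measure is observed over expected. Bayesian approaches such as MGPS then shrink this ratio toward 1 for sparse counts. A sketch with hypothetical counts:

```python
# Expected count for a drug-event pair under the independence model:
#   E = (reports mentioning the drug * reports mentioning the event) / N
# The crude disproportionality measure is then O/E (observed over expected).
def observed_over_expected(observed, drug_total, event_total, n_reports):
    expected = drug_total * event_total / n_reports
    return observed / expected

# Hypothetical: 20 observed reports of the pair; the drug appears in
# 400 reports, the event in 120, out of 10,000 reports overall, so
# roughly 4.8 reports would be expected by chance alone.
print(round(observed_over_expected(20, 400, 120, 10_000), 2))  # → 4.17
```

An O/E well above 1 only says the pair is reported more often than chance co-occurrence would predict; as the conclusion stresses, it cannot distinguish causal linkage from confounding.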


Drug Safety | 2009

An Evaluation of Three Signal-Detection Algorithms Using a Highly Inclusive Reference Event Database

Alan M. Hochberg; Manfred Hauben; Ronald K. Pearson; Donald J. O’Hara; Stephanie J. Reisinger; David I. Goldsmith; A. Lawrence Gould; David Madigan

Background: Pharmacovigilance data-mining algorithms (DMAs) are known to generate significant numbers of false-positive signals of disproportionate reporting (SDRs), using various standards to define the terms ‘true positive’ and ‘false positive’. Objective: To construct a highly inclusive reference event database of reported adverse events for a limited set of drugs, and to utilize that database to evaluate three DMAs for their overall yield of scientifically supported adverse drug effects, with an emphasis on ascertaining false-positive rates as defined by matching to the database, and to assess the overlap among SDRs detected by various DMAs. Methods: A sample of 35 drugs approved by the US FDA between 2000 and 2004 was selected, including three drugs added to cover therapeutic categories not included in the original sample. We compiled a reference event database of adverse event information for these drugs from historical and current US prescribing information, from peer-reviewed literature covering 1999 through March 2006, from regulatory actions announced by the FDA and from adverse event listings in the British National Formulary. Every adverse event mentioned in these sources was entered into the database, even those with minimal evidence for causality. To provide some selectivity regarding causality, each entry was assigned a level of evidence based on the source of the information, using rules developed by the authors. Using the FDA adverse event reporting system data for 2002 through 2005, SDRs were identified for each drug using three DMAs: an urn-model based algorithm, the Gamma Poisson Shrinker (GPS) and proportional reporting ratio (PRR), using previously published signalling thresholds. The absolute number and fraction of SDRs matching the reference event database at each level of evidence was determined for each report source and the data-mining method.
Overlap of the SDR lists among the various methods and report sources was tabulated as well. Results: The GPS algorithm had the lowest overall yield of SDRs (763), with the highest fraction of events matching the reference event database (89 SDRs, 11.7%), excluding events described in the prescribing information at the time of drug approval. The urn model yielded more SDRs (1562), with a non-significantly lower fraction matching (175 SDRs, 11.2%). PRR detected still more SDRs (3616), but with a lower fraction matching (296 SDRs, 8.2%). In terms of overlap of SDRs among algorithms, PRR uniquely detected the highest number of SDRs (2231, with 144, or 6.5%, matching), followed by the urn model (212, with 26, or 12.3%, matching) and then GPS (0 SDRs uniquely detected). Conclusions: The three DMAs studied offer significantly different tradeoffs between the number of SDRs detected and the degree to which those SDRs are supported by external evidence. Those differences may reflect choices of detection thresholds as well as features of the algorithms themselves. For all three algorithms, there is a substantial fraction of SDRs for which no external supporting evidence can be found, even when a highly inclusive search for such evidence is conducted.
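The matching fractions quoted in the Results above follow directly from the reported counts, which makes the sensitivity/precision trade-off between the three algorithms easy to see side by side:

```python
# Fraction of SDRs matching the reference event database, per algorithm,
# computed from the (matched, total) counts reported in the Results.
sdr_counts = {
    "GPS": (89, 763),
    "urn model": (175, 1562),
    "PRR": (296, 3616),
}
for method, (matched, total) in sdr_counts.items():
    print(f"{method}: {matched}/{total} = {100 * matched / total:.1f}%")
# GPS: 89/763 = 11.7%
# urn model: 175/1562 = 11.2%
# PRR: 296/3616 = 8.2%
```

PRR finds the most supported SDRs in absolute terms (296) but at the lowest precision, while GPS is the most selective; this is the trade-off the Conclusions describe.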

Collaboration


Dive into Manfred Hauben's collaboration.

Top Co-Authors


Charles M. Gerrits

Takeda Pharmaceutical Company


Muhammad Younus

Michigan State University
