Publication


Featured research published by Danielle Whicher.


Annals of Internal Medicine | 2009

Rethinking Randomized Clinical Trials for Comparative Effectiveness Research: The Need for Transformational Change

Bryan R. Luce; Judith M. Kramer; Steven N. Goodman; Jason T. Connor; Sean Tunis; Danielle Whicher; J. Sanford Schwartz

While advances in medical science have led to continued improvements in medical care and health outcomes, evidence of the comparative effectiveness of alternative management options remains inadequate for informed medical care and health policy decision making. The result is frequently suboptimal and inefficient care as well as unsustainable costs. To enhance or at least maintain quality of care as health reform and cost containment occur, better evidence of comparative clinical and cost-effectiveness is required (1). The American Recovery and Reinvestment Act of 2009 allocated a $1.1 billion down payment to support comparative effectiveness research (CER) (2). Although comparative effectiveness can be informed by synthesis of existing clinical information (systematic reviews, meta-analysis, and decision modeling) and analysis of observational data (administrative claims, electronic medical records, registries and other clinical cohorts, and case-control studies), randomized clinical trials (RCTs) are the most rigorous method of generating comparative effectiveness evidence and will necessarily occupy a central role in an expanded national CER agenda. However, as currently designed and conducted, many RCTs are ill suited to meet the evidentiary needs implicit in the IOM definition of CER: comparison of effective interventions among patients in typical patient care settings, with decisions tailored to individual patient needs (3). Without major changes in how we conceive, design, conduct, and analyze RCTs, the nation risks spending large sums of money inefficiently to answer the wrong questions, or the right questions too late. This article addresses several fundamental limitations of traditional RCTs for meeting CER objectives and offers 3 potentially transformational approaches to enhance their operational efficiency, analytical efficiency, and generalizability for CER.

Enhancing Structural and Operational Efficiency

As currently conducted, RCTs are inefficient and have become more complex, time consuming, and expensive. More than 90% of industry-sponsored clinical trials experience delayed enrollment (4). In a study comparing 28 industry-sponsored trials started between 1999 and 2002 with 29 trials started between 2003 and 2006, the time from protocol approval to database lock increased by a median of 70% (4). Several organizations have sought to streamline study start-up. In response to an analysis in Cancer and Leukemia Group B that found a median of 580 days from concept approval to phase 3 study activation (5), the National Cancer Institute established an operational efficiency working group to reduce study activation time by at least 50%, increase the proportion of studies reaching accrual targets, and improve timely study completion (6). The National Institutes of Health's Clinical and Translational Science Award recipients are documenting study start-up metrics as a first step to fostering improvements (7). The National Cancer Institute, the CEO Roundtable, Cancer Centers, and Cooperative Groups developed standard terms for clinical trial agreements as a starting point for negotiations between study sponsors and clinical sites (8). The Institute of Medicine's Drug Forum also commissioned development of a template clinical research agreement (9). Through its Critical Path Program, the U.S. Food and Drug Administration (FDA) established the Clinical Trials Transformation Initiative (CTTI), a public-private partnership whose goal is to improve the quality and efficiency of clinical trials (10). The CTTI is hosted by Duke University and has broad representation from more than 50 member organizations, including academia, government, industry, clinical investigators, and patient advocates (11). The CTTI works by generating empirical data on how clinical trials are currently conducted and how they may be improved. Initial priorities for study include design principles, data quality and quantity (including monitoring), study start-up, and adverse event reporting.

One of CTTI's projects is addressing site monitoring, an area that has been estimated to absorb 25% to 30% of phase 3 trial costs (12) and for which there is widespread agreement that improved efficiency is needed. The CTTI is determining the current range of monitoring practices for RCTs used by the National Institutes of Health, academic institutions, and industry; assessing the quality objectives of monitoring; and determining the performance of various monitoring practices in meeting quality objectives. This project will provide criteria to help sponsors select the most appropriate monitoring methods for a trial, thereby improving quality while optimizing resources. Collectively, these efforts are generating empirical evidence and developing the mechanisms to improve clinical trial efficiency. In conjunction with other improvements, including those described below, the resulting changes in clinical trial practices will increase the feasibility of mounting the scale and scope of RCTs required to evaluate the comparative effectiveness of medical care.

Analytical Efficiency: The Potential Role of Bayesian and Adaptive Approaches

The traditional frequentist school has provided a solid foundation for medical statistics. But the artificial division of results into significant and nonsignificant is better suited for one-time dichotomous decisions, such as regulatory approval, and is not the best model for comparing interventions as evidence accumulates over time, as occurs in a dynamic medical care system. With traditional trials and analytical methods, it is difficult to make optimal use of relevant existing, ancillary, or new evidence as it arises during a trial, and thus such methods often are not well suited to facilitate clinical and policy decision making. Furthermore, real-world CER can be noisier than a standard RCT. Standard statistical techniques require increased sample sizes, in part because of the resulting additional variability and in part when trials compare several active treatments whose effectiveness differs by relatively small amounts. Designs that use features that change or adapt in response to information generated during the trial can be more efficient than standard approaches. Although many standard RCTs are adaptive in limited ways (for example, those with interim monitoring and stopping rules), the frequentist paradigm inhibits adaptation because of the requirement to prespecify all possible study outcomes, which in turn requires some rigidity in design. The Bayesian approach, using formal, probabilistic statements of uncertainty based on the combination of all sources of information both from within and outside a study, prespecifies how information from various sources will be combined and how the design will change while controlling the probability of false-positive and false-negative conclusions (13). Bayesian and adaptive analytical approaches can reduce the sample size, time, and cost required to obtain decision-relevant information by incorporating existing high-quality external evidence (such as information from pivotal trials, systematic reviews, models, and rigorously conducted observational studies) into CER trial design and drawing on observed within-trial end point relationships. If new interventions become available, adaptive RCT designs can allow these interventions to be added and less effective ones dropped without restarting the trial; therefore, at any given time, the trial is comparing the alternatives most relevant to current clinical practice.

This dynamic learning adaptive feature (analogous to the Institute of Medicine Evidence-Based Medicine Roundtable's learning health care system [14]) improves both the timeliness and clinical relevance of trial results. The following example shows how this model operates. A standard comparative effectiveness trial design of 4 alternative strategies for HIV infection treatment starts with the hypothesis of equal effectiveness of all 4 treatments. In contrast, as the trial progresses, the Bayesian approach answers the pragmatic questions: "What is the probability that the favored therapy is the best of the 4 therapies?" and "What is the probability that the currently worst therapy will turn out to be best?" (15). If this latter probability is low enough, the trialists can drop that treatment even if it is not, by conventional statistical testing, worse than other treatments. Newly developed HIV treatment strategies also can enter the trial, thus focusing patient resources on the most relevant treatment comparison. Bayesian and adaptive designs are particularly useful for rapidly evolving interventions (such as devices, procedures, practices, and systems interventions), especially when outcomes occur soon enough to permit adaptation of the trial design. They should also prove useful for clinical studies generated by such conditional coverage schemes as Medicare's Coverage with Evidence Development policy by adding onto an existing evidence base and adapting studies into community care settings of interest to payers and patients (16, 17). Random allocation need not be equal between trial arms or patient subgroups. Probabilities of each intervention being the best can be updated and random allocation probabilities revised, so that more patients are allocated to the most promising strategies as evidence accumulates. This flexibility can also permit Bayesian trials to focus experimentation on clinically relevant subgroups, which could facilitate tailoring strategies to particular patients, a key element of CER. Experience with Bayesian adaptive approaches has been growing in recent years. Early-phase cancer trials are commonly performed using Bayesian designs (18).
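
The allocation mechanism the excerpt describes (update each arm's probability of being best, shift randomization toward promising arms, and drop near-hopeless arms) can be made concrete with a small simulation. The following Python sketch is an illustration rather than anything from the article: the four-arm setup, the Beta(1, 1) priors, the simulated response rates, and the 1% drop threshold are all hypothetical choices. It uses Beta-Bernoulli posteriors and Monte Carlo sampling to estimate each arm's probability of being best.

```python
# Hypothetical sketch of Bayesian response-adaptive randomization.
# Arm count, true response rates, priors, and the drop threshold are
# illustrative assumptions, not parameters taken from the article.
import numpy as np

rng = np.random.default_rng(seed=7)

def prob_best(alpha, beta, draws=20_000):
    """Monte Carlo estimate of P(arm k has the highest response rate),
    given independent Beta(alpha[k], beta[k]) posteriors."""
    samples = rng.beta(alpha, beta, size=(draws, len(alpha)))
    wins = np.bincount(samples.argmax(axis=1), minlength=len(alpha))
    return wins / draws

n_arms = 4
true_rates = np.array([0.30, 0.35, 0.40, 0.50])  # unknown in a real trial
alpha = np.ones(n_arms)  # Beta(1, 1) priors; informative priors could
beta = np.ones(n_arms)   # instead encode high-quality external evidence
active = np.ones(n_arms, dtype=bool)
drop_threshold = 0.01    # drop an arm once P(best) falls below 1%

for patient in range(400):
    idx = np.flatnonzero(active)               # arms still in the trial
    p_best = np.zeros(n_arms)
    p_best[idx] = prob_best(alpha[idx], beta[idx])
    arm = rng.choice(n_arms, p=p_best)         # allocate toward promising arms
    response = rng.random() < true_rates[arm]  # simulated binary outcome
    alpha[arm] += response                     # conjugate posterior update
    beta[arm] += 1 - response
    active &= p_best > drop_threshold          # adaptive arm dropping

idx = np.flatnonzero(active)
p_final = np.zeros(n_arms)
p_final[idx] = prob_best(alpha[idx], beta[idx])
print("active arms:", idx, "P(arm is best):", p_final.round(3))
```

Under this sketch, adding a newly available intervention mid-trial amounts to appending a new prior and marking the arm active, which is the "without restarting the trial" property the authors highlight.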


Clinical Trials | 2012

The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research

Kalipso Chalkidou; Sean Tunis; Danielle Whicher; Robert Fowler; Merrick Zwarenstein

There is a growing appreciation that our current approach to clinical research leaves important gaps in evidence from the perspective of patients, clinicians, and payers wishing to make evidence-based clinical and health policy decisions. This has been a major driver in the rapid increase in interest in comparative effectiveness research (CER), which aims to compare the benefits, risks, and sometimes costs of alternative health-care interventions in ‘the real world’. While a broad range of experimental and nonexperimental methods will be used in conducting CER studies, many important questions are likely to require experimental approaches – that is, randomized controlled trials (RCTs). Concerns about the generalizability, feasibility, and cost of RCTs have been frequently articulated in CER method discussions. Pragmatic RCTs (or ‘pRCTs’) are intended to maintain the internal validity of RCTs while being designed and implemented in ways that would better address the demand for evidence about real-world risks and benefits for informing clinical and health policy decisions. While the level of interest and activity in conducting pRCTs is increasing, many challenges remain for their routine use. This article discusses those challenges and offers some potential ways forward.


Medical Care | 2013

Ethics and informed consent for comparative effectiveness research with prospective electronic clinical data.

Ruth R. Faden; Nancy E. Kass; Danielle Whicher; Walter F. Stewart; Sean Tunis

Background: Electronic clinical data (ECD) will increasingly serve as an important source of information for comparative effectiveness research (CER). Although many retrospective studies have relied on ECD, new study designs propose using ECD for prospective CER. These designs have great potential but they also raise important ethics questions. Aims: Drawing on an ethics framework for learning health care systems, we identify morally relevant features of prospective CER-ECD studies by examining 1 case of an observational study and a second of a pragmatic, randomized trial. We focus only on questions of consent and assume research has been subject to appropriate ethics review and oversight. Results and Conclusions: We conclude that a CER-ECD observational study that imposes no or minimal additional risk to or burden on patients may proceed ethically without express informed consent from participants in settings where: (a) patients are regularly informed of the health care institution’s commitment to learning through the integration of research and practice; and (b) there are appropriate protections for patients’ rights and interests. In addition, where (a) and (b) apply, some pragmatic, randomized trials that similarly impose no or minimal additional risk to or burden on patients may also proceed ethically without express consent, when certain additional conditions are satisfied, including: (c) the trial does not negatively affect patients’ prospects for good clinical outcomes; (d) physicians have the option of using an intervention other than the one assigned if they believe doing so is important for a particular patient; and (e) the trial does not engage preferences or values that are meaningful to patients.


International Journal of Technology Assessment in Health Care | 2009

Comparative effectiveness research priorities: identifying critical gaps in evidence for clinical and health policy decision making.

Kalipso Chalkidou; Danielle Whicher; Weslie Kary; Sean Tunis

BACKGROUND In the debate on improving the quality and efficiency of the United States healthcare system, comparative effectiveness research is increasingly seen as a tool for reducing costs without compromising outcomes. Furthermore, the recent American Recovery and Reinvestment Act explicitly describes a prioritization function for establishing a comparative effectiveness research agenda. However, how such a function, in terms of methods and process, would go about identifying the most important priorities warranting further research has received little attention. OBJECTIVES This study describes an Agency for Healthcare Research and Quality-funded pilot project to translate one current comparative effectiveness review into a prioritized list of evidence gaps and research questions reflecting the views of the healthcare decision makers involved in the pilot. METHODS To create a prioritized research agenda, we developed an interactive nominal group process that relied on a multistakeholder workgroup scoring a list of research questions on the management of coronary artery disease. RESULTS According to the group, the areas of greatest uncertainty regarding the management of coronary artery disease are the comparative effectiveness of medical therapy versus percutaneous coronary interventions versus coronary artery bypass grafting for different patient subgroups; the impact of diagnostic testing; and the most effective method of developing performance measures for providers. CONCLUSIONS By applying our nominal group process, we were able to create a list of research priorities for healthcare decision makers. Future research should focus on refining this process because determining research priorities is essential to the success of developing an infrastructure for comparative effectiveness research.


PharmacoEconomics | 2010

Generating evidence for comparative effectiveness research using more pragmatic randomized controlled trials.

C. Daniel Mullins; Danielle Whicher; Emily S. Reese; Sean Tunis

Comparative effectiveness research (CER), or research designed to meet the needs of post-regulatory decision makers, has been brought into the spotlight with the introduction of the American Recovery and Reinvestment Act, which provided $US1.1 billion over 2 years to support CER. In the short run, the majority of this money will be invested in observational studies and building of infrastructure; however, in the long run, we will likely see an increase in the number of randomized controlled trials (RCTs), as this method is arguably the most unbiased approach for establishing causal effect between treatments and health outcomes. RCTs are an integral component of CER for generating credible evidence on the relative value of alternative interventions in order to meet the needs of post-regulatory decision makers (patients, physicians, payers and policy makers). Explanatory phase III RCTs are fit for purpose; researchers make use of guidance documents produced by the US FDA to inform the design of these clinical trials. Historically, without explicit FDA guidance, broad patient populations, including women and minorities, often were not considered in trial design. In addition, attempts to minimize cost and maximize efficiency have led to smaller sample sizes, as is clear from the increase in ‘creeping phase II-ism’. To demonstrate effectiveness, RCTs must be reflective of how an intervention will be used in the healthcare market. The concept of pragmatic clinical trials has emerged to describe those trials that are designed explicitly with this need in mind. Use of pragmatic trials will be most impactful if post-regulatory decision makers are engaged in the development of recommendations for trial design features, such as indicating outcome measures and articulating patient populations of interest, which clearly express their evidence needs.


Milbank Quarterly | 2009

Comparative Effectiveness Research in Ontario, Canada: Producing Relevant and Timely Information for Health Care Decision Makers

Danielle Whicher; Kalipso Chalkidou; Irfan Dhalla; Leslie Levin; Sean Tunis

CONTEXT Comparative effectiveness research is increasingly being recognized as a method to link research with the information needs of decision makers. As the United States begins to invest in comparative effectiveness, it would be wise to look at other functioning research networks to understand the infrastructure and funding required to support them. METHODS This case study looks at the comparative effectiveness research network in Ontario, Canada, for which a neutral coordinating committee is responsible for prioritizing topics, assessing evidence, providing recommendations on coverage decisions, and determining pertinent research questions for further evaluation. This committee is supported by the Medical Advisory Secretariat and several large research institutions. This article analyzes the infrastructure and cost needed to support this network and offers recommendations for developing policies and methodologies to support comparative effectiveness research in the United States. FINDINGS The research network in place in Ontario explicitly links decision making with evidence generation, in a transparent, timely, and efficient way. Funding is provided by the Ontario government through a reliable and stable funding mechanism that helps ensure that the studies it supports are relevant to decision makers. CONCLUSIONS With the recent allocation of funds to support comparative effectiveness research from the American Recovery and Reinvestment Act, the United States should begin to construct an infrastructure that applies these features to make sure that evidence generated from this effort positively affects the quality of health care delivered to patients.


Journal of Empirical Research on Human Research Ethics | 2015

The Views of Quality Improvement Professionals and Comparative Effectiveness Researchers on Ethics, IRBs, and Oversight

Danielle Whicher; Nancy E. Kass; Yashar Saghai; Ruth R. Faden; Sean Tunis; Peter J. Pronovost

Recently, there have been increasing numbers of activities labeled as either quality improvement (QI) or comparative effectiveness research (CER), both of which are designed to learn what works and what does not in routine clinical care settings. These activities can create confusion for researchers, Institutional Review Board members, and other stakeholders as they try to determine which activities or components of activities constitute clinical practices and which constitute clinical research requiring ethical oversight and informed consent. We conducted a series of semi-structured focus groups with QI and CER professionals to understand their experiences and views of the ethical and regulatory challenges that exist as well as the formal or informal practices and criteria they and their institutions use to address these issues. We found that most participants have experienced challenges related to the ethical oversight of QI and CER activities, and many believe that current regulatory criteria for distinguishing clinical practice from clinical research requiring ethical oversight are confusing. Instead, many participants described other criteria that they believe are more ethically appropriate. Many also described developing formal or informal practices at their institutions to navigate which activities require ethical oversight. However, these local solutions do not completely resolve the issues caused by the blurring of clinical practice and clinical research, raising the question of whether more foundational regulatory changes are needed.


Contemporary Clinical Trials | 2013

Recommendations for the design of Phase 3 pharmaceutical trials that are more informative for patients, clinicians, and payers

Seema S. Sonnad; C. Daniel Mullins; Danielle Whicher; Jennifer C. Goldsack; Penny Mohr; Sean R. Tunis

BACKGROUND Pharmaceutical pragmatic clinical trials (PCTs) are designed to provide the type of evidence that is desired by patients, clinicians and payers but too often missing from traditional regulatory trials. PURPOSE This paper presents a framework for designing pragmatic trials incorporating evidence desired by post-regulatory decision makers while remaining within acceptable standards for regulatory approval. METHODS Following a stakeholder meeting convened in May of 2009 to identify gaps in information collected in Phase 3 trials, CMTP staff and the authors drafted recommendations for Pragmatic Phase 3 Pharmaceutical Trials. This draft was circulated first to technical working group members for their comments. After revising the document based on these comments, it was distributed electronically to other select experts and then made available for public comment. The final version of the effectiveness guidance document (EGD) appears on the CMTP website. RESULTS The process resulted in a set of 10 recommendations for conducting Phase 3 trials that met regulatory needs while addressing information important to physicians, patients, payers, and policy-makers. These recommendations encompassed three primary areas: generalizability from the trial participants to the clinical population of interest; effectiveness relative to active comparators; and consistently measured relevant outcomes for coverage and treatment decisions. LIMITATIONS While stakeholders were involved throughout the process, not all recommendations will meet the needs of all stakeholders. CONCLUSIONS Pragmatic trial design need not be deferred until a product is in widespread use. Incremental movement toward the more pragmatic design of Phase 3 trials is desirable.


Journal of Patient Safety | 2015

Ethical Issues in Patient Safety Research: A Systematic Review of the Literature.

Danielle Whicher; Nancy E. Kass; Carmen Audera-Lopez; Mobasher Butt; Iciar Larizgoitia Jauregui; Kendra Harris; Jonathan Knoche; Abha Saxena


Journal of The American College of Radiology | 2009

The National Oncologic PET Registry: Lessons Learned for Coverage With Evidence Development

Sean Tunis; Danielle Whicher

Collaboration


Dive into Danielle Whicher's collaborations.

Top Co-Authors

Sean Tunis, Agency for Healthcare Research and Quality
Nancy E. Kass, Johns Hopkins University
Ruth R. Faden, Johns Hopkins University
Sean R. Tunis, Johns Hopkins University
Yashar Saghai, Johns Hopkins University