Publication


Featured research published by Bryan R. Luce.


Annals of Internal Medicine | 2009

Rethinking Randomized Clinical Trials for Comparative Effectiveness Research: The Need for Transformational Change

Bryan R. Luce; Judith M. Kramer; Steven N. Goodman; Jason T. Connor; Sean Tunis; Danielle Whicher; J. Sanford Schwartz

While advances in medical science have led to continued improvements in medical care and health outcomes, evidence of the comparative effectiveness of alternative management options remains inadequate for informed medical care and health policy decision making. The result is frequently suboptimal and inefficient care as well as unsustainable costs. To enhance or at least maintain quality of care as health reform and cost containment occur, better evidence of comparative clinical and cost-effectiveness is required (1). The American Recovery and Reinvestment Act of 2009 allocated a $1.1 billion down payment to support comparative effectiveness research (CER) (2). Although comparative effectiveness can be informed by synthesis of existing clinical information (systematic reviews, meta-analysis, and decision modeling) and analysis of observational data (administrative claims, electronic medical records, registries and other clinical cohorts, and case-control studies), randomized clinical trials (RCTs) are the most rigorous method of generating comparative effectiveness evidence and will necessarily occupy a central role in an expanded national CER agenda. However, as currently designed and conducted, many RCTs are ill suited to meet the evidentiary needs implicit in the IOM definition of CER: comparison of effective interventions among patients in typical patient care settings, with decisions tailored to individual patient needs (3). Without major changes in how we conceive, design, conduct, and analyze RCTs, the nation risks spending large sums of money inefficiently to answer the wrong questions or the right questions too late. This article addresses several fundamental limitations of traditional RCTs for meeting CER objectives and offers 3 potentially transformational approaches to enhance their operational efficiency, analytical efficiency, and generalizability for CER.

Enhancing Structural and Operational Efficiency

As currently conducted, RCTs are inefficient and have become more complex, time-consuming, and expensive. More than 90% of industry-sponsored clinical trials experience delayed enrollment (4). In a study comparing 28 industry-sponsored trials started between 1999 and 2002 with 29 trials started between 2003 and 2006, the time from protocol approval to database lock increased by a median of 70% (4). Several organizations have sought to streamline study start-up. In response to an analysis in Cancer and Leukemia Group B that found a median of 580 days from concept approval to phase 3 study activation (5), the National Cancer Institute established an operational efficiency working group to reduce study activation time by at least 50%, increase the proportion of studies reaching accrual targets, and improve timely study completion (6). The National Institutes of Health's Clinical and Translational Science Award recipients are documenting study start-up metrics as a first step to fostering improvements (7). The National Cancer Institute, the CEO Roundtable, Cancer Centers, and Cooperative Groups developed standard terms for clinical trial agreements as a starting point for negotiations between study sponsors and clinical sites (8). The Institute of Medicine's Drug Forum also commissioned development of a template clinical research agreement (9). Through its Critical Path Program, the U.S. Food and Drug Administration (FDA) established the Clinical Trials Transformation Initiative (CTTI), a public-private partnership whose goal is to improve the quality and efficiency of clinical trials (10). The CTTI is hosted by Duke University and has broad representation from more than 50 member organizations, including academia, government, industry, clinical investigators, and patient advocates (11). The CTTI works by generating empirical data on how clinical trials are currently conducted and how they may be improved. Initial priorities for study include design principles, data quality and quantity (including monitoring), study start-up, and adverse event reporting.

One of CTTI's projects is addressing site monitoring, an area that has been estimated to absorb 25% to 30% of phase 3 trial costs (12) and for which there is widespread agreement that improved efficiency is needed. The CTTI is determining the current range of monitoring practices for RCTs used by the National Institutes of Health, academic institutions, and industry; assessing the quality objectives of monitoring; and determining the performance of various monitoring practices in meeting quality objectives. This project will provide criteria to help sponsors select the most appropriate monitoring methods for a trial, thereby improving quality while optimizing resources. Collectively, these efforts are generating empirical evidence and developing the mechanisms to improve clinical trial efficiency. In conjunction with other improvements, including those described below, the resulting changes in clinical trial practices will increase the feasibility of mounting the scale and scope of RCTs required to evaluate the comparative effectiveness of medical care.

Analytical Efficiency: The Potential Role of Bayesian and Adaptive Approaches

The traditional frequentist school has provided a solid foundation for medical statistics. But the artificial division of results into "significant" and "nonsignificant" is better suited for one-time dichotomous decisions, such as regulatory approval, and is not the best model for comparing interventions as evidence accumulates over time, as occurs in a dynamic medical care system. With traditional trials and analytical methods, it is difficult to make optimal use of relevant existing, ancillary, or new evidence as it arises during a trial, and thus such methods often are not well suited to facilitate clinical and policy decision making. Furthermore, real-world CER can be noisier than a standard RCT. Standard statistical techniques require increased sample sizes, in part because of the resulting additional variability and in part when trials compare several active treatments whose effectiveness differs by relatively small amounts. Designs that use features that change or adapt in response to information generated during the trial can be more efficient than standard approaches. Although many standard RCTs are adaptive in limited ways (for example, those with interim monitoring and stopping rules), the frequentist paradigm inhibits adaptation because of the requirement to prespecify all possible study outcomes, which in turn requires some rigidity in design. The Bayesian approach, using formal, probabilistic statements of uncertainty based on the combination of all sources of information both from within and outside a study, prespecifies how information from various sources will be combined and how the design will change while controlling the probability of false-positive and false-negative conclusions (13). Bayesian and adaptive analytical approaches can reduce the sample size, time, and cost required to obtain decision-relevant information by incorporating existing high-quality external evidence (such as information from pivotal trials, systematic reviews, models, and rigorously conducted observational studies) into CER trial design and drawing on observed within-trial end point relationships. If new interventions become available, adaptive RCT designs can allow these interventions to be added and less effective ones dropped without restarting the trial; therefore, at any given time, the trial is comparing the alternatives most relevant to current clinical practice.

This dynamic learning adaptive feature (analogous to the Institute of Medicine Evidence-Based Medicine Roundtable's learning health care system [14]) improves both the timeliness and clinical relevance of trial results. The following example shows how this model operates. A standard comparative effectiveness trial design of 4 alternative strategies for HIV infection treatment starts with the hypothesis of equal effectiveness of all 4 treatments. In contrast, as the trial progresses, the Bayesian approach answers the pragmatic questions: "What is the probability that the favored therapy is the best of the 4 therapies?" and "What is the probability that the currently worst therapy will turn out to be best?" (15). If this latter probability is low enough, the trialists can drop that treatment even if it is not, by conventional statistical testing, worse than other treatments. Newly developed HIV treatment strategies also can enter the trial, thus focusing patient resources on the most relevant treatment comparison. Bayesian and adaptive designs are particularly useful for rapidly evolving interventions (such as devices, procedures, practices, and systems interventions), especially when outcomes occur soon enough to permit adaptation of the trial design. They should also prove useful for clinical studies generated by such conditional coverage schemes as Medicare's Coverage with Evidence Development policy by adding onto an existing evidence base and adapting studies into community care settings of interest to payers and patients (16, 17). Random allocation need not be equal between trial arms or patient subgroups. Probabilities of each intervention being the best can be updated and random allocation probabilities revised, so that more patients are allocated to the most promising strategies as evidence accumulates. This flexibility can also permit Bayesian trials to focus experimentation on clinically relevant subgroups, which could facilitate tailoring strategies to particular patients, a key element of CER. Experience with Bayesian adaptive approaches has been growing in recent years. Early-phase cancer trials are commonly performed using Bayesian designs (18). In 2005, the FDA released a draft guidance document for the u…
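The adaptive-allocation idea the article describes can be made concrete with a small simulation. The following is a minimal sketch, not the trial design from the article: it assumes a Beta-Bernoulli outcome model for four arms, Monte Carlo estimates of the probability that each arm is best, allocation proportional to that probability, and an illustrative 1% drop threshold; all numbers are invented.

```python
# Minimal sketch of Bayesian response-adaptive allocation across 4 arms.
# Assumptions (not from the article): Beta-Bernoulli outcome model,
# Monte Carlo estimates of P(arm is best), and a 1% drop threshold.
import numpy as np

rng = np.random.default_rng(42)

true_response = np.array([0.30, 0.35, 0.40, 0.50])  # unknown in a real trial
alpha = np.ones(4)   # Beta posterior: 1 + observed successes per arm
beta = np.ones(4)    # Beta posterior: 1 + observed failures per arm
active = list(range(4))
DROP = 0.01          # drop an arm when P(arm is best) falls below 1%

for _ in range(1000):  # 1000 sequentially enrolled patients
    # Monte Carlo estimate of P(best) for each currently active arm.
    draws = rng.beta(alpha[active], beta[active], size=(4000, len(active)))
    wins = np.bincount(draws.argmax(axis=1), minlength=len(active))
    p_best = wins / wins.sum()

    # Adaptive features: drop arms very unlikely to be best, then
    # allocate the next patient with probability proportional to P(best).
    keep = p_best >= DROP
    active = [a for a, k in zip(active, keep) if k]
    arm = rng.choice(active, p=p_best[keep] / p_best[keep].sum())

    # Observe the simulated outcome and update that arm's posterior.
    outcome = rng.random() < true_response[arm]
    alpha[arm] += outcome
    beta[arm] += 1 - outcome

for a in range(4):
    n = int(alpha[a] + beta[a] - 2)
    print(f"arm {a}: patients={n}, posterior mean={alpha[a] / (alpha[a] + beta[a]):.3f}")
```

In a run like this, enrollment drifts toward the arm with the highest true response rate, mirroring the article's point that more patients end up on the most promising strategies as evidence accumulates, while clearly inferior arms can be dropped without restarting the trial.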


PharmacoEconomics | 1997

Guidelines for Pharmacoeconomic Studies

Joanna E. Siegel; George W. Torrance; Louise B. Russell; Bryan R. Luce; Milton C. Weinstein; Marthe R. Gold

This article reports the recommendations of the Panel on Cost Effectiveness in Health and Medicine, sponsored by the US Public Health Service, on standardised methods for conducting cost-effectiveness analyses. Although not expressly directed at analyses of pharmaceutical agents, the Panel's recommendations are relevant to pharmacoeconomic studies.

The Panel outlines a 'Reference Case' set of methodological practices to improve quality and comparability of analyses. Designed for studies that inform resource-allocation decisions, the Reference Case includes recommendations for study framing and scope, components of the numerator and denominator of cost-effectiveness ratios, discounting, handling uncertainty, and reporting.

The Reference Case analysis is conducted from the societal perspective, and includes all effects of interventions on resource use and health. Resource use includes 'time' resources, such as for caregiving or undergoing an intervention. The quality-adjusted life-year (QALY) is the common measure of health effect across Reference Case studies. Although the Panel does not endorse a measure for obtaining quality-of-life weights, several recommendations address the QALY. The Panel recommends a 3% discount rate for costs and health effects.

Pharmacoeconomic studies have burgeoned in recent years. The Reference Case analysis will improve study quality and usability, and permit comparison of pharmaceuticals with other health interventions.
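As a worked illustration of the Panel's recommended 3% discount rate applied to both costs and health effects, here is a minimal sketch of a discounted incremental cost-effectiveness ratio; the cost and QALY streams are invented, not taken from the guidelines.

```python
# Worked sketch of the Panel's recommended 3% discount rate applied to
# both costs and health effects; all intervention numbers are invented.
RATE = 0.03

def present_value(stream, rate=RATE):
    """Discount a yearly stream (year 0 counted at face value)."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# Hypothetical incremental costs (USD) and health effects (QALYs) by year.
delta_costs = [12_000, 2_000, 2_000, 2_000]
delta_qalys = [0.00, 0.10, 0.10, 0.10]

icer = present_value(delta_costs) / present_value(delta_qalys)
print(f"incremental cost-effectiveness ratio: ${icer:,.0f} per QALY gained")
```

Discounting numerator and denominator at the same 3% rate, as the Reference Case specifies, is what keeps ratios comparable across studies.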


International Journal of Technology Assessment in Health Care | 1993

Standardizing Methodologies for Economic Evaluation in Health Care: Practice, Problems, and Potential

Michael Drummond; Arno Brandt; Bryan R. Luce; Joan Rovira

There has been an exponential growth in the literature on economic evaluation in health care. As the range and quality of analytical work have improved, economic studies are becoming more influential with health care decision makers. The development of standards for economic evaluation methods would help maintain the scientific quality of studies, facilitate the comparison of economic evaluation results for different health care interventions, and assist in the interpretation of results from setting to setting. However, standardization might unnecessarily stifle methodological developments. This paper reviews the arguments for and against standardization, assesses attempts to date, outlines the main areas of agreement and disagreement on methods for economic evaluation, and makes recommendations for further work.


Medical Care | 1993

HEALTH CARE CBA/CEA: AN UPDATE ON THE GROWTH AND COMPOSITION OF THE LITERATURE

Anne Elixhauser; Bryan R. Luce; William R. Taylor; Joseph Reblando

Cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA) are methods that enumerate the costs and consequences associated with health-related technologies, services, and programs. This article examines the trends in published CBA and CEA of personal health services from 1979 through 1990. It is based on a bibliography that was compiled to help address the immense need for information on the variation and effectiveness of medical practices, particularly as researchers expand their analysis to a study of the cost effectiveness of medical and surgical interventions, health care technologies, preventive practices, and other health programs.

A systematic search was conducted for all articles under the headings "cost-benefit analysis" (which includes cost-effectiveness analysis) and "costs and cost analysis." Data sources included the MEDLARS (National Library of Medicine) database, other bibliographies in specialized areas, reference lists in key articles, and contacts with researchers in the field. All titles and abstracts were scanned to determine whether the articles pertained to personal health services and whether both costs and consequences were assessed. If both criteria were met, the article was included in the bibliography. This search resulted in 3,206 eligible CBA/CEA publications from 1979 through 1990.

The publications were subdivided into two major categories: reports of studies and "other" publications, including reviews, descriptions of methodology, letters, and editorials. Reports of studies and "other" publications were classified into approximately 250 different topic areas. The studies were further classified by parameters such as study type, publication vehicle, and medical function.

This article describes the results of this classification and describes trends during 1979 to 1990 compared with 1966 to 1978. The classification of study reports and "other" publications into 250 topic areas is presented in Appendix A. The entire bibliography is reproduced in Appendix B. Detailed tables of findings are presented in Appendix C, and the results are illustrated graphically in Appendix D. Appendix E provides the coding scheme used in the bibliography's database.
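The two-part inclusion rule described above, that an article must pertain to personal health services and must assess both costs and consequences, amounts to a simple conjunctive filter. A minimal sketch, with hypothetical record fields and sample entries:

```python
# Minimal sketch of the bibliography's two-part inclusion rule; the
# record fields and sample entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    personal_health_services: bool  # judged by scanning title/abstract
    assesses_costs: bool
    assesses_consequences: bool

def include(r: Record) -> bool:
    """Include only articles meeting both inclusion criteria."""
    return r.personal_health_services and r.assesses_costs and r.assesses_consequences

candidates = [
    Record("CEA of hypertension screening", True, True, True),
    Record("Hospital budgeting methods", False, True, False),
]
bibliography = [r for r in candidates if include(r)]
print([r.title for r in bibliography])  # only the first record qualifies
```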


International Journal of Technology Assessment in Health Care | 1990

Estimating costs in the economic evaluation of medical technologies

Bryan R. Luce; Anne Elixhauser

The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated, with suggestions for managing these problems.
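A one-way sensitivity analysis of the kind the article characterizes can be sketched as follows; the cost model, parameter values, and ranges here are invented for illustration.

```python
# Sketch of a one-way sensitivity analysis on a simple cost model; the
# model form, parameter values, and ranges are invented for illustration.
def total_cost(unit_cost, n_units, overhead_rate):
    """Direct costs plus proportional overhead."""
    return unit_cost * n_units * (1 + overhead_rate)

BASE = {"unit_cost": 250.0, "n_units": 40, "overhead_rate": 0.20}
RANGES = {
    "unit_cost": (200.0, 300.0),
    "n_units": (30, 50),
    "overhead_rate": (0.10, 0.30),
}

print(f"base case: ${total_cost(**BASE):,.0f}")
for param, (low, high) in RANGES.items():
    # Vary one parameter at a time across its range, holding the rest fixed.
    results = [total_cost(**{**BASE, param: v}) for v in (low, high)]
    print(f"{param}: ${min(results):,.0f} to ${max(results):,.0f}")
```

Whichever parameter produces the widest swing around the base case dominates the uncertainty in the estimate and deserves the most careful measurement.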


Milbank Quarterly | 2010

EBM, HTA, and CER: clearing the confusion.

Bryan R. Luce; Michael Drummond; Bengt Jönsson; Peter J. Neumann; J. Sanford Schwartz; Uwe Siebert; Sean D. Sullivan

CONTEXT: The terms evidence-based medicine (EBM), health technology assessment (HTA), comparative effectiveness research (CER), and other related terms lack clarity and so could lead to miscommunication, confusion, and poor decision making. The objective of this article is to clarify their definitions and the relationships among key terms and concepts.

METHODS: This article used the relevant methods and policy literature, as well as the websites of organizations engaged in evidence-based activities, to develop a framework to explain the relationships among the terms EBM, HTA, and CER.

FINDINGS: This article proposes an organizing framework and presents a graphic demonstrating the differences and relationships among these terms and concepts.

CONCLUSIONS: More specific terminology and concepts are necessary for an informed and clear public policy debate. They are even more important to inform decision making at all levels and to engender more accountability by the organizations and individuals responsible for these decisions.


Clinical Therapeutics | 1995

Methods of cost-effectiveness analysis: areas of consensus and debate.

Bryan R. Luce; Kit Simpson

Methods of evaluating socioeconomic relationships have evolved over many years, and a number of specific approaches have been developed. Among the techniques available, cost-effectiveness analysis (CEA) has emerged as the most widely used and accepted method. Yet, despite considerable effort by the analytical community to refine this technique into one more useful for making health policy decisions, much debate and confusion still persist among analysts, readers, and policy-makers concerning methods standards and the overall usefulness of CEA in resource allocation decision making. Thus, the purpose of this paper is to summarize, critically examine, and comment on existing recommended methods for socioeconomic evaluation of health care interventions. In particular, we examine an exhaustive set of component methods within the general area of cost-effectiveness and comment on areas of apparent consensus and debate. Our review reveals many areas of agreement and many yet to be resolved. Analysts generally agree on the components of the overall framework for an analysis; basic methodologic principles; the general treatment of costs; the principle of marginal analysis; the need for and general approach to discounting; the use of sensitivity analysis; the extent to which ethical issues can be incorporated; and the importance of choosing appropriate alternatives for comparison. The principal areas in which disagreement still persists are choice of study design; measurement and valuation of health outcomes, including conversion of health outcomes to economic values; transformation of efficacy results into effectiveness outcomes; and the empirical measurement of costs.


Social Science & Medicine | 1997

Managed care pharmacy, socioeconomic assessments and drug adoption decisions

Alan Lyles; Bryan R. Luce; Anne M. Rentz

A telephone survey of a representative national sample of 51 large managed care organizations in the U.S. (>50,000 enrollees) was undertaken (1) to understand the role of socioeconomic assessments in drug adoption decisions; (2) to determine the sources of these assessments and the reliance of managed care pharmacy on each; and (3) to determine the resources for internally versus externally performed drug assessments. Socioeconomic assessments (clinical effectiveness, safety, cost of treatment, cost-effectiveness, and quality of life) are often tied to formulary decisions. Plans differ in their use of externally available socioeconomic assessments and in their ratings of the importance to decision making of drug assessments from the various sources. Those using a specific source of drug assessment information rated the sources in the following order of importance: PBM assessments, other HMOs, peer-reviewed literature, evaluations performed by industry, articles in non-peer-reviewed publications, and, lastly, government reports. Timeliness and comprehensiveness are important components of the overall utility of information. A high percentage of plans reported using some of the various types of assessments, with clinical effectiveness most common and cost-effectiveness second. The percentage of new drugs that undergo assessments in each of the plans covers a broad range, with 57% of the plans evaluating at least half of all new drugs. All but one surveyed managed care plan reported having implemented, or planning to implement, a disease management program. Eighty percent of those surveyed are more concerned about drug assessments than in the past, and 88% anticipate greater future use. Although 38 plans (75%) have a person in the organization responsible for drug assessments, this is the primary job in only 14 plans (37%). With greater reliance on drug assessments in the future, there are substantial opportunities for integrating drug assessments, formularies, and disease management programs.


International Journal of Technology Assessment in Health Care | 2010

Are Key Principles for improved health technology assessment supported and used by health technology assessment organizations?

Peter J. Neumann; Michael Drummond; Bengt Jönsson; Bryan R. Luce; J. Sanford Schwartz; Uwe Siebert; Sean D. Sullivan

Previously, our group, the International Working Group for HTA Advancement, proposed a set of fifteen Key Principles that could be applied to health technology assessment (HTA) programs in different jurisdictions and across a range of organizations and perspectives. In this commentary, we investigate the extent to which these principles are supported and used by fourteen selected HTA organizations worldwide. We find that some principles are broadly supported: examples include being explicit about HTA goals and scope; considering a wide range of evidence and outcomes; and being unbiased and transparent. Other principles receive less widespread support: examples are addressing issues of generalizability and transferability; being transparent about the link between HTA findings and decision-making processes; considering a full societal perspective; and monitoring the implementation of HTA findings. The analysis also suggests a lack of consensus in the field about some principles, for example, considering a societal perspective. Our study highlights differences in the uptake of key principles for HTA and indicates considerable room for improvement for HTA organizations in adopting principles identified as reflecting good HTA practices. Most HTA organizations espouse certain general concepts of good practice, for example, that assessments should be unbiased and transparent. However, principles that require more intensive follow-up, for example, monitoring the implementation of HTA findings, have received little support and execution.


American Journal of Medical Quality | 1998

The Economic and Clinical Efficiency of Point-of-Care Testing for Critically Ill Patients: A Decision-Analysis Model

Michael T. Halpern; Cynthia S. Palmer; Kit N. Simpson; Francis Chesley; Bryan R. Luce; Johan P. Suyderhoud; Bonnie V. Neibauer; Fawzy G. Estafanous


Collaboration


Dive into Bryan R. Luce's collaborations.

Top Co-Authors

Anne Elixhauser

Battelle Memorial Institute

Bengt Jönsson

Stockholm School of Economics

Dennis A. Revicki

Battelle Memorial Institute
