Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Deborah A. Marshall is active.

Publication


Featured research published by Deborah A. Marshall.


BMJ | 1996

Meta-analysis of how well measures of bone mineral density predict occurrence of osteoporotic fractures.

Deborah A. Marshall; Olof Johnell; Hans Wedel

Objective: To determine the ability of measurements of bone density in women to predict later fractures.

Design: Meta-analysis of prospective cohort studies published between 1985 and the end of 1994 with a baseline measurement of bone density in women and subsequent follow-up for fractures. For comparative purposes, we also reviewed case-control studies of hip fractures published between 1990 and 1994.

Subjects: Eleven separate study populations with about 90 000 person-years of observation time and over 2000 fractures.

Main outcome measures: Relative risk of fracture for a decrease in bone mineral density of one standard deviation below the age-adjusted mean.

Results: All measuring sites had similar predictive abilities (relative risk 1.5 (95% confidence interval 1.4 to 1.6)) for a decrease in bone mineral density, except for measurement at the spine for predicting vertebral fractures (relative risk 2.3 (1.9 to 2.8)) and measurement at the hip for hip fractures (2.6 (2.0 to 3.5)). These results are in accordance with the results of case-control studies. The predictive ability of a decrease in bone mass was roughly similar to (or, for hip or spine measurements, better than) that of a 1 SD increase in blood pressure for stroke, and better than that of a 1 SD increase in serum cholesterol concentration for cardiovascular disease.

Conclusions: Measurements of bone mineral density can predict fracture risk but cannot identify individuals who will have a fracture. We do not recommend a programme of screening menopausal women for osteoporosis by measuring bone density.

Key messages: Measuring bone mineral density has been suggested as a method of identifying individuals at high risk of fracture in a preventive context. Our meta-analysis of prospective studies showed that measurements of bone density at any site had similar predictive ability for a 1 SD decrease in bone density, except for measurements at the hip and spine, which have better predictive ability for fractures in the hip and spine respectively. The predictive ability of a decrease in bone mass was roughly similar to (or, for hip or spine measurements, better than) that of a 1 SD increase in blood pressure for stroke, and better than that of a 1 SD increase in serum cholesterol concentration for cardiovascular disease. Although bone mineral density measurements can predict fracture risk, they cannot identify individuals who will have a fracture, and a screening programme for osteoporosis cannot be recommended.
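
A worked illustration of how a "relative risk per 1 SD decrease" gradient is applied: the sketch below uses the pooled estimates quoted in the abstract and assumes the usual log-linear gradient-of-risk model. That modelling assumption and the example z-scores are added for illustration; they are not stated in the paper.

```python
# Sketch: applying a relative-risk-per-SD gradient (log-linear assumption).
# RR values are the pooled estimates quoted in the abstract above.
rr_per_sd_any_site = 1.5      # any measurement site, all fractures
rr_per_sd_hip_for_hip = 2.6   # hip measurement predicting hip fracture

def relative_risk(rr_per_sd: float, sd_below_mean: float) -> float:
    """Relative risk for a woman this many SD below the age-adjusted mean BMD."""
    return rr_per_sd ** sd_below_mean

for z in (1, 2, 3):
    print(f"{z} SD below mean: "
          f"any site RR = {relative_risk(rr_per_sd_any_site, z):.2f}, "
          f"hip (for hip fracture) RR = {relative_risk(rr_per_sd_hip_for_hip, z):.2f}")
```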


Value in Health | 2011

Conjoint analysis applications in health - A checklist: A report of the ISPOR Good Research Practices for Conjoint Analysis Task Force

John F. P. Bridges; A. Brett Hauber; Deborah A. Marshall; Andrew Lloyd; Lisa A. Prosser; Dean A. Regier; F. Reed Johnson; Josephine Mauskopf

Background: The application of conjoint analysis (including discrete-choice experiments and other multiattribute stated-preference methods) in health has increased rapidly over the past decade. A wider acceptance of these methods is limited by an absence of consensus-based methodological standards.

Objective: The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Good Research Practices for Conjoint Analysis Task Force was established to identify good research practices for conjoint-analysis applications in health.

Methods: The task force met regularly to identify the important steps in a conjoint analysis, to discuss good research practices for conjoint analysis, and to develop and refine the key criteria for identifying good research practices. ISPOR members contributed to this process through an extensive consultation process. A final consensus meeting was held to revise the article using these comments, and those of a number of international reviewers.

Results: Task force findings are presented as a 10-item checklist covering: 1) research question; 2) attributes and levels; 3) construction of tasks; 4) experimental design; 5) preference elicitation; 6) instrument design; 7) data-collection plan; 8) statistical analyses; 9) results and conclusions; and 10) study presentation. A primary question relating to each of the 10 items is posed, and three sub-questions examine finer issues within items.

Conclusions: Although the checklist should not be interpreted as endorsing any specific methodological approach to conjoint analysis, it can facilitate future training activities and discussions of good research practices for the application of conjoint-analysis methods in health care studies.


Value in Health | 2013

Constructing Experimental Designs for Discrete-Choice Experiments: Report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force

F. Reed Johnson; Emily Lancsar; Deborah A. Marshall; Vikram Kilambi; Axel C. Mühlbacher; Dean A. Regier; Brian W. Bresnahan; Barbara Kanninen; John F. P. Bridges

Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by a lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health - A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs in the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012).
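
As a rough illustration of the enumeration step that precedes formal design construction, the sketch below builds a full-factorial candidate set from hypothetical attributes and draws a naive fraction from it. The attributes, levels, and random fraction are invented for illustration; an actual DCE design would be selected with the efficiency criteria the report compares, not by random sampling.

```python
# Sketch: enumerating a full-factorial candidate set for a DCE and drawing a
# naive fraction from it. Attributes and levels are hypothetical.
from itertools import product
import random

attributes = {
    "wait_time": ["1 week", "1 month", "3 months"],
    "out_of_pocket_cost": ["$0", "$50", "$200"],
    "risk_of_side_effects": ["1%", "5%"],
}

full_factorial = list(product(*attributes.values()))   # 3 x 3 x 2 = 18 profiles
print(f"{len(full_factorial)} candidate profiles in the full factorial")

random.seed(1)
# A real design would optimize (e.g.) D-efficiency for the chosen choice model;
# random sampling here only shows the data structure being selected from.
fraction = random.sample(full_factorial, 9)
for profile in fraction:
    print(dict(zip(attributes.keys(), profile)))
```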


The Patient: Patient-Centered Outcomes Research | 2010

Conjoint Analysis Applications in Health - How are Studies being Designed and Reported? An Update on Current Practice in the Published Literature between 2005 and 2008.

Deborah A. Marshall; John F. P. Bridges; Brett Hauber; Ruthanne Cameron; Lauren Donnalley; Ken Fyie; F. Reed Johnson

Despite the increased popularity of conjoint analysis in health outcomes research, little is known about what specific methods are being used for the design and reporting of these studies. This variation in method type and reporting quality sometimes makes it difficult to assess substantive findings. This review identifies and describes recent applications of conjoint analysis based on a systematic review of conjoint analysis in the health literature. We focus on significant unanswered questions for which there is neither compelling empirical evidence nor agreement among researchers.

We searched multiple electronic databases to identify English-language articles of conjoint analysis applications in human health studies published from 2005 through July 2008. Two independent reviewers completed the detailed data extraction, including descriptive information, methodological details on survey type, experimental design, survey format, attributes and levels, sample size, number of conjoint scenarios per respondent, and analysis methods. Review articles and methods studies were excluded. The detailed extraction form was piloted to identify key elements to be included in the database using a standardized taxonomy.

We identified 79 conjoint analysis articles that met the inclusion criteria. The number of applied studies increased substantially over time in a broad range of clinical applications, cancer being the most frequent. Most used a discrete-choice survey format (71%), with the number of attributes ranging from 3 to 16. Most surveys included 6 attributes, and 73% presented 7–15 scenarios to each respondent. Sample size varied substantially (minimum = 13, maximum = 1258), with the largest group of studies (38%) including between 100 and 300 respondents. Cost was included as an attribute to estimate willingness to pay in approximately 40% of the articles across all years.

Conjoint analysis in health has expanded to include a broad range of applications and methodological approaches. Although we found substantial variation in methods, terminology, and presentation of findings, our observations on sample size, the number of attributes, and the number of scenarios presented to respondents should be helpful in guiding researchers when planning a new conjoint analysis study in health.


Annals of Internal Medicine | 2004

Cost-effectiveness of rhythm versus rate control in atrial fibrillation.

Deborah A. Marshall; Adrian R. Levy; Humberto Vidaillet; Elisabeth Fenwick; April Slee; Gordon Blackhouse; H. Leon Greene; D. George Wyse; Graham Nichol; Bernie J. O'Brien

Context: Randomized trials show that rate control and rhythm control are similarly effective in the treatment of atrial fibrillation; therefore, economic issues will play a large role in the choice of therapy.

Contribution: This cost-effectiveness model shows that rate control saves costs compared with rhythm control.

Implications: From an economic perspective, unless specific clinical factors suggest a benefit of rhythm control for a particular patient, rate control seems to be the preferred strategy for the management of atrial fibrillation.

Atrial fibrillation is the most common sustained type of cardiac arrhythmia treated by physicians. Its prevalence increases with advancing age, affecting approximately 5% of those 65 years of age and older and 10% of those older than 80 years of age (1-3). As the U.S. population ages, it is expected that more than 5 million persons will be living with atrial fibrillation by the year 2050 (4). Despite significant advances in the effectiveness of treatments for atrial fibrillation and its associated comorbid conditions, disability and mortality from atrial fibrillation remain high (5-10). The optimal approach to the rhythm management of atrial fibrillation remains unclear. There are 2 main approaches: Rhythm control uses electrical cardioversion, antiarrhythmic drugs, and, sometimes, nonpharmacologic therapies (for example, multisite atrial pacing, maze procedures, or radiofrequency ablation procedures) to maintain sinus rhythm; rate control uses atrioventricular nodal blocking agents (and, if needed, ablation of the atrioventricular junction and pacemaker implantation) for ventricular rate control. Recently, several randomized, controlled studies have compared rate control versus rhythm control. In the largest of these studies, investigators in the Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) randomly assigned 4060 patients with atrial fibrillation (mean age, 70 years) to either rate control or rhythm control (10-12). After a mean follow-up of 3.5 years, mortality did not differ significantly between the groups (hazard ratio for rate control vs. rhythm control, 0.87 [95% CI, 0.75 to 1.01]; P = 0.08), and the rate-control approach was associated with a lower risk for adverse drug effects (12). The results of the other large study were consistent with these findings (13). The RAte Control versus Electrical cardioversion for persistent atrial fibrillation (RACE) study randomly assigned 522 patients with persistent atrial fibrillation after electrical cardioversion to either rhythm control or rate control; the mean follow-up was 2.3 years (13). Patients in both treatment groups received oral anticoagulant drugs. There was a nonsignificant trend toward reduced death or other serious cardiovascular events in patients treated by the rate-control strategy. Consequently, economic factors often play a substantial role in guiding treatment selection. Several authors have examined the incremental cost-effectiveness of rhythm-control versus rate-control strategies for treating atrial fibrillation; however, their studies have been confined to modeling exercises of hypothetical scenarios that lack data on efficacy and resource use from randomized trials (14, 15). This paper reports an economic analysis based on the results of AFFIRM. The objective was to estimate the incremental cost-effectiveness of rhythm-control versus rate-control strategies from AFFIRM.
Methods

AFFIRM Study Sample

AFFIRM included 4060 patients with atrial fibrillation whose treatment was block randomized by center to be either rhythm control or rate control (12). Similar to patients with atrial fibrillation in the general population, most of the patients in AFFIRM were older men (men represented 61% of the sample) with common associated cardiovascular comorbid conditions (history of hypertension [71%], coronary artery disease [39%], and congestive heart failure [9%]). The mean (SD) age for all patients was 69.7 (9.0) years, and 75% were 65 years of age or older. The qualifying event was the first episode of atrial fibrillation in 34% of patients and recurrent atrial fibrillation in the remaining 66% of patients. The overriding principles for enrollment of patients in AFFIRM were based on the clinical judgment of the investigator and were as follows: atrial fibrillation was likely to be recurrent, atrial fibrillation was likely to cause morbidity or death, long-term treatment for atrial fibrillation was warranted, anticoagulation was not contraindicated, the patient was eligible for at least 2 drug trials in both treatment strategies, and treatment in both strategies could be initiated immediately after randomization. Additional information on the design, inclusion and exclusion criteria, and results of AFFIRM is available elsewhere (10-12). The economic analysis described here compares costs and effects of the 2 management strategies among patients enrolled in AFFIRM from the perspective of a third-party payer. The outcome was the incremental cost-effectiveness ratio comparing rhythm control and rate control, measured in dollars per life-year gained.

Survival

We obtained data on survival from the time of randomization to the end of study follow-up and use of specific health care resources for all 4060 AFFIRM patients. For patients who were lost to follow-up, withdrew from the study, or had incomplete follow-up, all available data were included in the analysis. Patients were censored at withdrawal or loss to follow-up. We derived the within-study mean survival time for each treatment group by using the Kaplan-Meier product-limit estimator to account for censoring during follow-up (16). To obtain an unbiased estimate of mean survival, exposure was truncated at 5.65 years, which was the longest follow-up observed in AFFIRM (17).

Resource Use and Costs

We estimated costs by multiplying the number of each resource used by its unit cost (18). All unit costs for resources were estimated in U.S. dollars for the year 2002. Price estimates from earlier years were adjusted by applying the Consumer Price Index, Medical Care component (19). For each measure of resource use, 3 different unit costs were derived and considered in separate analyses: a base case for the most likely scenario, a low estimate, and a high estimate. The analysis considered costs of all hospitalizations, cardiac procedures, cardioversion, short-stay and emergency department visits, and medications used to treat atrial fibrillation from the perspective of a third-party payer.

Hospital Costs

At each follow-up visit during the study, the total number of hospitalized days since the last visit was recorded, along with the primary reason (cardiovascular or noncardiovascular cause) for hospitalization.
The mean cost per hospital day was estimated from the Healthcare Cost and Utilization Project (HCUP) statistics for the 1995 HCUP-3 Nationwide Inpatient Sample (20) for Diseases of the Circulatory System, excluding any diagnosis associated with a mean patient age of younger than 18 years, for cardiovascular and noncardiovascular causes. The low and high estimates of the per diem for hospital days were based on the 25th and 75th percentile of mean charges, respectively. The HCUP prices were adjusted to represent costs by using a cost-to-charge ratio of 0.575, which is based on the 2002 estimate from the Centers for Medicare & Medicaid Services (21). In addition, physician charges for subsequent hospital care as a level II visit (Current Procedural Terminology [CPT] [22] code 99232) were applied for each hospital day recorded. In the base case, an average estimate for the physician fee payment for this CPT code was calculated from the 2002 Physician Fee Schedule Payment Amount File National/Carrier for facility-based procedures for all carriers and localities listed in the database (23). In the sensitivity analysis, we used the minimum physician fee across carriers and localities for each procedure as the low cost estimate. We based the high cost estimate on the standard billed charge from the Marshfield Clinic, an ambulatory care facility in Marshfield, Wisconsin. This clinical center recruited most patients in the study and provides an estimate of charges for centers in the United States. Although these estimates are based on data from 1 facility, they are a reasonable estimate for the high-cost scenario, in between billed charges from a teaching hospital and a private clinic.

Costs of Cardiac Procedures

At each follow-up visit during the study, the number of cardiac procedures (percutaneous transluminal coronary angioplasty, coronary artery bypass graft surgery, pacemakers, valve surgery, ablation) performed since the previous follow-up visit was recorded. No information was available from AFFIRM to describe the number of arteries revascularized during percutaneous transluminal coronary angioplasty interventions or coronary artery bypass graft surgeries. We assumed that only 1 lesion was treated for each percutaneous transluminal coronary angioplasty procedure. We estimated the number of arteries revascularized during bypass surgery as a weighted average from the National Hospital Discharge Survey (NHDS) data set for 2000 (24, 25). We included the costs of the most frequent cardiac procedures in the analysis. Hospital costs include the costs of all facility personnel except physicians. Physician costs consisted of a physician fee for diagnostic and therapeutic procedures, as well as any applicable anesthesia fee. Perfusionist fees for open-heart cardiac procedures were not included because these costs are included in the hospital costs. The analysis included costs for pacemakers and implantable cardioverter defibrillators (ICDs) because they have high unit cost (24, 25). In the base case, the hardware cost (that is, device and electrode or electrodes costs) for the most widely used single-chamber and dual-chamber device was assigned on the
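
The costing and cost-effectiveness logic described above (resource quantities multiplied by unit costs, then strategies compared in dollars per life-year gained) can be sketched as follows. The resource categories, unit costs, and survival figures below are hypothetical placeholders, not the AFFIRM estimates.

```python
# Sketch of trial-based costing and the incremental cost-effectiveness ratio.
# All numbers are hypothetical placeholders, not AFFIRM data.
def total_cost(resource_counts: dict, unit_costs: dict) -> float:
    """Cost = sum over resources of (quantity used x unit cost)."""
    return sum(qty * unit_costs[r] for r, qty in resource_counts.items())

unit_costs = {"hospital_day": 1500.0, "cardioversion": 800.0, "pacemaker": 8000.0}

# Hypothetical mean per-patient resource use for each strategy
cost_rhythm = total_cost({"hospital_day": 6.0, "cardioversion": 1.2, "pacemaker": 0.03}, unit_costs)
cost_rate = total_cost({"hospital_day": 4.5, "cardioversion": 0.1, "pacemaker": 0.05}, unit_costs)

# Hypothetical within-trial mean survival (life-years), e.g. from Kaplan-Meier curves
surv_rhythm, surv_rate = 4.10, 4.15

delta_cost = cost_rhythm - cost_rate
delta_effect = surv_rhythm - surv_rate
if delta_cost > 0 and delta_effect <= 0:
    print("Rhythm control is dominated: more costly with no survival gain")
else:
    print(f"ICER = ${delta_cost / delta_effect:,.0f} per life-year gained")
```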


Value in Health | 2016

Statistical Methods for the Analysis of Discrete Choice Experiments: A Report of the ISPOR Conjoint Analysis Good Research Practices Task Force

A. Brett Hauber; Juan Marcos Gonzalez; Catharina Gerarda Maria Groothuis-Oudshoorn; Thomas J. Prior; Deborah A. Marshall; Charles E. Cunningham; Maarten Joost IJzerman; John F. P. Bridges

Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results.
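
As a rough illustration of the conditional-logit analysis named above, the sketch below simulates answers to hypothetical choice tasks and recovers the preference weights by maximum likelihood. The data dimensions, attribute coding, and "true" weights are invented for illustration and are not drawn from the report.

```python
# Sketch: conditional-logit estimation for DCE data (simulated, hypothetical).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_tasks, n_alts, n_attrs = 200, 3, 4                  # hypothetical dimensions
X = rng.integers(0, 2, size=(n_tasks, n_alts, n_attrs)).astype(float)
true_beta = np.array([1.0, -0.5, 0.8, 0.2])           # assumed preference weights

# Simulate choices: P(alt j) = exp(x_j'beta) / sum_k exp(x_k'beta)
util = X @ true_beta
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_alts, p=p) for p in probs])

def neg_log_likelihood(beta):
    v = X @ beta                                      # systematic utilities
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_tasks), choices].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
print("estimated preference weights:", np.round(fit.x, 2))  # should approximate true_beta
```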


European Journal of Cancer | 2001

Economic decision analysis model of screening for lung cancer

Deborah A. Marshall; Kit N. Simpson; Craig C. Earle; C.-W. Chu

The objective of this study was to evaluate the potential clinical and economic implications of an annual lung cancer screening programme based on helical computed tomography (CT). A decision analysis model was created using combined data from the Surveillance, Epidemiology and End Results (SEER) registry public-use database and published results from the Early Lung Cancer Action Project (ELCAP). We found that under optimal conditions in a high risk cohort of patients between 60 and 74 years of age, annual lung cancer screening over a period of 5 years appears to be cost effective at approximately $19,000 per life year saved. A sensitivity analysis of the model to account for a 1-year decrease in survival benefit and changes in assumptions for incidence rate and costs generated cost effectiveness estimates ranging from approximately $10,800 to
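
A minimal sketch of the cost-per-life-year and one-way sensitivity logic described above; the incremental costs and life-years gained below are hypothetical placeholders rather than the published model's parameters.

```python
# Sketch: cost per life-year saved with one-way sensitivity analysis.
# All inputs are hypothetical placeholders, not the published model's values.
def cost_per_life_year(incremental_cost: float, life_years_gained: float) -> float:
    return incremental_cost / life_years_gained

base_case = cost_per_life_year(incremental_cost=2_000.0, life_years_gained=0.10)
print(f"Base case: ${base_case:,.0f} per life-year saved")

# Vary one assumption at a time, holding the others at base-case values
scenarios = {
    "1-year shorter survival benefit": cost_per_life_year(2_000.0, 0.05),
    "lower screening and work-up costs": cost_per_life_year(1_200.0, 0.10),
    "lower lung cancer incidence": cost_per_life_year(2_800.0, 0.10),
}
low, high = min(scenarios.values()), max(scenarios.values())
print(f"Sensitivity range: ${low:,.0f} to ${high:,.0f} per life-year saved")
```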


Value in Health | 2015

Applying dynamic simulation modeling methods in health care delivery research - the SIMULATE checklist: report of the ISPOR Simulation Modeling Emerging Good Practices Task Force.

Deborah A. Marshall; Lina Burgos-Liz; Maarten Joost IJzerman; Nathaniel D. Osgood; William V. Padula; Mitchell K. Higashi; Peter K. Wong; Kalyan S. Pasupathy; William H. Crown


Clinical Pharmacology & Therapeutics | 2009

Addressing the Challenges of the Clinical Application of Pharmacogenetic Testing

On Ikediobi; Jaekyu Shin; Robert L. Nussbaum; Kathryn A. Phillips; Judith M. E. Walsh; Uri Ladabaum; Deborah A. Marshall


Collaboration


Dive into Deborah A. Marshall's collaborations.

Top Co-Authors

Eric Bohm

University of Manitoba