Publication


Featured research published by Joseph Lipscomb.


PharmacoEconomics | 1999

Determining clinically important differences in health status measures: a general approach with illustration to the Health Utilities Index Mark II.

Greg Samsa; David Edelman; Margaret L. Rothman; G. Rhys Williams; Joseph Lipscomb; David B. Matchar

The objective of this article was to describe and illustrate a comprehensive approach for estimating clinically important differences (CIDs) in health-related quality of life (HR-QOL). A literature review and pilot study were conducted to determine whether effect size-based benchmarks are consistent with CIDs obtained from other approaches. CIDs may be estimated based primarily upon effect sizes, supplemented by more traditional anchor-based methods of benchmarking (i.e. direct, cross-sectional or longitudinal approaches). A literature review of articles discussing CIDs provided comparative data on effect sizes for various chronic conditions. A pilot study was then conducted to estimate the minimum CID of the Health Utilities Index (HUI) Mark II, and to compare the between-group differences observed in a recent randomised trial of an acute stroke intervention with this benchmark. The use of standardised effect size benchmarks has a number of advantages: for example, effect sizes are efficient, widely accepted outside HR-QOL, and have well-accepted benchmarks based upon external anchors. In addition, our literature review and pilot study suggest that effect size-based CID benchmarks are similar to those which would be obtained using more traditional methods. For most HR-QOL instruments, we do not know the changes in score which constitute CIDs of various magnitudes. This makes interpretation of HR-QOL results from clinical trials difficult, and a relatively straightforward benchmarking process would be highly desirable.
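The effect-size approach the abstract describes amounts to converting a standardized benchmark into raw score units of the instrument. The sketch below is illustrative only: the scores are hypothetical, and the 0.2/0.5 "small"/"moderate" conventions are the common Cohen-style benchmarks, not values taken from the study.

```python
# Illustrative sketch (not the authors' code): translating an effect-size
# benchmark into a clinically important difference (CID) in raw score units.
# Convention assumed: a "small" standardized effect is 0.2 SD, "moderate" 0.5 SD.

import statistics

def effect_size_cid(baseline_scores, benchmark=0.2):
    """Benchmark (in SD units) times the between-patient SD at baseline."""
    sd = statistics.stdev(baseline_scores)
    return benchmark * sd

# Hypothetical baseline HUI Mark II scores for a small patient sample
scores = [0.61, 0.74, 0.55, 0.80, 0.68, 0.71, 0.59, 0.77]

minimum_cid = effect_size_cid(scores, benchmark=0.2)   # "small" effect
moderate_cid = effect_size_cid(scores, benchmark=0.5)  # "moderate" effect
print(f"minimum CID ~ {minimum_cid:.3f}, moderate CID ~ {moderate_cid:.3f}")
```

A trial's observed between-group difference can then be compared against these thresholds, which is the benchmarking exercise the pilot study performed.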


American Heart Journal | 1998

Utilities for major stroke: Results from a survey of preferences among persons at increased risk for stroke

Gregory P. Samsa; David B. Matchar; Larry B. Goldstein; Arthur J. Bonito; Pamela W. Duncan; Joseph Lipscomb; Cam Enarson; D. M. Witter; Pat Venus; John E. Paul; Morris Weinberger

BACKGROUND: Patient beliefs, values, and preferences are crucial to decisions involving health care. In a large sample of persons at increased risk for stroke, we examined attitudes toward hypothetical major stroke.

METHODS AND RESULTS: Respondents were obtained from the Academic Medical Center Consortium (n = 621), the Cardiovascular Health Study (n = 321), and United Health Care (n = 319). Preferences were primarily assessed using the time trade-off (TTO). Although major stroke is generally considered an undesirable event (mean TTO = 0.30), responses varied: although 45% of respondents considered major stroke to be a worse outcome than death, 15% were willing to trade off little or no survival to avoid a major stroke.

CONCLUSIONS: Providers should speak directly with patients about beliefs, values, and preferences. Stroke-related interventions, even those with a high price or less than dramatic clinical benefits, are likely to be cost-effective if they prevent an outcome (major stroke) that is so undesirable.
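The time trade-off calculation behind utilities like the mean of 0.30 is simple to state. The sketch below is a generic illustration with hypothetical numbers, not the study's elicitation protocol (which also had to handle the respondents who rated major stroke worse than death, i.e. negative utilities).

```python
# Illustrative sketch: the basic time trade-off (TTO) utility. A respondent
# indifferent between `years_in_state` lived in the health state and
# `years_healthy` lived in full health implies
#     utility = years_healthy / years_in_state.
# All numbers are hypothetical.

def tto_utility(years_healthy: float, years_in_state: float) -> float:
    if years_in_state <= 0:
        raise ValueError("time horizon must be positive")
    return years_healthy / years_in_state

# A respondent who would give up 7 of 10 remaining years to avoid living
# with major stroke assigns the state a utility of 0.30:
u = tto_utility(years_healthy=3.0, years_in_state=10.0)
print(u)  # 0.3
```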


Medical Care | 1994

Costing medical care: Using Medicare administrative data

Judith R. Lave; Chris L. Pashos; Gerard F. Anderson; David J. Brailer; Thomas A. Bubolz; Douglas A. Conrad; Deborah A. Freund; Steven Fox; Emmett B. Keeler; Joseph Lipscomb; Harold S. Luft; George Provenzano

This paper describes how the PORTS are using data from the Medicare administrative records systems to study the medical care costs of specific conditions. The general strengths and weaknesses of the Medicare databases for studying cost related issues are discussed, and the relevant data elements are examined in detail. Changes in the nature of the data collected over time are noted. Information is provided on how the PORTS are using these data to estimate the cost to Medicare of treating Medicare beneficiaries with specific conditions and the social (opportunity) cost of treating these patients. Furthermore, information is provided on how data from the Medicare administrative records system can be used to determine the cost of services for patients who have been identified through other large databases (i.e., state hospital discharge tapes) or who have been enrolled in prospective cohort studies.


Journal of Clinical Epidemiology | 1999

Performing cost-effectiveness analysis by integrating randomized trial data with a comprehensive decision model: Application to treatment of acute ischemic stroke

Gregory P. Samsa; Richard A. Reutter; Giovanni Parmigiani; Marek Ancukiewicz; Paul H. Abrahamse; Joseph Lipscomb; David B. Matchar

A recent national panel on cost-effectiveness in health and medicine has recommended that cost-effectiveness analysis (CEA) of randomized controlled trials (RCTs) should reflect the effect of treatments on long-term outcomes. Because the follow-up period of RCTs tends to be relatively short, long-term implications of treatments must be assessed using other sources. We used a comprehensive simulation model of the natural history of stroke to estimate long-term outcomes after a hypothetical RCT of an acute stroke treatment. The RCT generates estimates of short-term quality-adjusted survival and cost and also the pattern of disability at the conclusion of follow-up. The simulation model incorporates the effect of disability on long-term outcomes, thus supporting a comprehensive CEA. Treatments that produce relatively modest improvements in the pattern of outcomes after ischemic stroke are likely to be cost-effective. This conclusion was robust to modifying the assumptions underlying the analysis. More effective treatments in the acute phase immediately following stroke would generate significant public health benefits, even if these treatments have a high price and result in relatively small reductions in disability. Simulation-based modeling can provide the critical link between a treatment's short-term effects and its long-term implications and can thus support comprehensive CEA.
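As a toy illustration of the trial-plus-model idea (not the authors' stroke model), the sketch below pushes a hypothetical end-of-trial disability distribution through a small Markov cohort simulation to estimate long-term quality-adjusted life years (QALYs). All states, transition probabilities, and utilities are invented for the example.

```python
# Minimal sketch of linking trial output to a long-term simulation model.
# Three states (independent, disabled, dead); annual cycles; hypothetical
# transition probabilities and utility weights throughout.

import random

UTILITY = {"independent": 0.85, "disabled": 0.40, "dead": 0.0}
TRANSITIONS = {  # annual transition probabilities from each living state
    "independent": [("independent", 0.90), ("disabled", 0.04), ("dead", 0.06)],
    "disabled":    [("disabled", 0.82), ("dead", 0.18)],
}

def simulate_patient(start_state: str, horizon_years: int, rng: random.Random) -> float:
    state, qalys = start_state, 0.0
    for _ in range(horizon_years):
        if state == "dead":
            break
        qalys += UTILITY[state]
        states, weights = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=weights)[0]
    return qalys

def expected_qalys(disability_mix, n=20_000, horizon=30, seed=1):
    """Mean QALYs for a cohort whose starting states follow the trial's
    end-of-follow-up disability distribution."""
    rng = random.Random(seed)
    starts = rng.choices(list(disability_mix), weights=list(disability_mix.values()), k=n)
    return sum(simulate_patient(s, horizon, rng) for s in starts) / n

# The treated arm leaves more patients independent at the end of follow-up:
control   = expected_qalys({"independent": 0.50, "disabled": 0.50})
treatment = expected_qalys({"independent": 0.60, "disabled": 0.40})
print(f"incremental QALYs per patient: {treatment - control:.2f}")
```

Dividing an incremental cost estimate by the incremental QALYs from such a simulation yields the cost-effectiveness ratio the abstract refers to.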


Journal of Clinical Epidemiology | 1993

Comparison of analytic models for estimating the effect of clinical factors on the cost of coronary artery bypass graft surgery

R. Adams Dudley; Frank E. Harrell; L. Richard Smith; Daniel B. Mark; Robert M. Califf; David B. Pryor; Donald D. Glower; Joseph Lipscomb; Mark A. Hlatky

The cost of treating disease depends on patient characteristics, but standard tools for analyzing the clinical predictors of cost have deficiencies. To explore whether survival analysis techniques might overcome some of these deficiencies in the analysis of cost data, we compared ordinary least squares (OLS) linear regression (with and without transformation of the data) and binary logistic regression with two survival models: the Cox proportional hazards model and a parametric model assuming a Weibull distribution. Each model was applied to data from 155 patients undergoing coronary artery bypass grafting. We examined the effects of age, sex, ejection fraction, unstable angina, and number of diseased vessels on univariable and multivariable predictions of costs. The significant univariable predictors of cost were consistent in all models: ejection fraction was significant in all five models, and age and number of diseased vessels were each significant in all but the OLS model, while sex and angina type were significant in none of the models. The significant multivariable predictors of cost, however, differed according to model: ejection fraction was a significant multivariable predictor of cost in all five models, age was significant in three models, and number of diseased vessels was significant in one model. All five models were also used to predict the costs for an average patient undergoing surgery. The Cox model provided the most accurate predictions of mean cost, median cost, and the proportion of patients with high cost. This study shows: (1) lower ejection fraction and older age are independent clinical predictors of increased cost of CABG, and (2) the Cox proportional hazards model shows considerable promise for the analysis of the impact of clinical factors upon cost.
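The contrast between raw-scale and log-scale regression that the study examines can be sketched in a few lines. The data, coefficients, and the use of Duan's smearing retransformation here are illustrative assumptions, not the study's actual models or results.

```python
# Illustrative sketch: OLS on raw cost vs. OLS on log(cost), the two simplest
# of the five models compared above, fitted to simulated right-skewed data.

import numpy as np

rng = np.random.default_rng(0)
ef = rng.uniform(20, 70, size=200)                        # ejection fraction (%)
# Skewed costs: lower EF -> higher cost, with multiplicative noise
cost = np.exp(11.0 - 0.02 * ef + rng.normal(0, 0.5, size=200))

X = np.column_stack([np.ones_like(ef), ef])               # intercept + EF

beta_raw, *_ = np.linalg.lstsq(X, cost, rcond=None)       # raw-scale OLS
beta_log, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)  # log-scale OLS

# Duan's smearing factor corrects the naive exp() back-transformation
resid = np.log(cost) - X @ beta_log
smearing = np.mean(np.exp(resid))

x_new = np.array([[1.0, 35.0]])                           # patient with EF = 35%
pred_raw = (x_new @ beta_raw)[0]
pred_log = (np.exp(x_new @ beta_log) * smearing)[0]
print(f"raw-scale prediction: {pred_raw:.0f}, log-scale prediction: {pred_log:.0f}")
```

The survival-model alternatives (Cox, Weibull) treat accumulated cost analogously to follow-up time, which is the idea the paper evaluates; they are omitted here to keep the sketch self-contained.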


Medical Decision Making | 1998

Predicting the Cost of Illness: A Comparison of Alternative Models Applied to Stroke

Joseph Lipscomb; Marek Ancukiewicz; Giovanni Parmigiani; Vic Hasselblad; Greg Samsa; David B. Matchar

Predictions of cost over well-defined time horizons are frequently required in the analysis of clinical trials and social experiments, for decision models investigating the cost-effectiveness of interventions, and for macro-level estimates of the resource impact of disease. With rare exceptions, cost predictions used in such applications continue to take the form of deterministic point estimates. However, the growing availability of large administrative and clinical data sets offers new opportunities for a more general approach to disease cost forecasting: the estimation of multivariable cost functions that yield predictions at the individual level, conditional on intervention(s), patient characteristics, and other factors. This raises the fundamental question of how to choose the best cost model for a given application. The central purpose of this paper is to demonstrate how to evaluate competing models on the basis of predictive validity. This concept is operationalized according to three alternative criteria: 1) root mean square error (RMSE), for evaluating predicted mean cost; 2) mean absolute error (MAE), for evaluating predicted median cost; and 3) a logarithmic scoring rule (log score), an information-theoretic index for evaluating the entire predictive distribution of cost. To illustrate these concepts, the authors conducted a split-sample analysis of data from a national sample of Medicare-covered patients hospitalized for ischemic stroke in 1991 and followed to the end of 1993. Using test and training samples of about 500,000 observations each, they investigated five models: single-equation linear models, with and without log transform of cost; two-part (mixture) models, with and without log transform, to directly address the problem of zero-cost observations; and a Cox proportional-hazards model stratified by time interval. For deriving the predictive distribution of cost, the log-transformed two-part and proportional-hazards models are superior. For deriving the predicted mean or median cost, these two models and the commonly used log-transformed linear model all perform about the same. The untransformed models are dominated in every instance. The approaches to model selection illustrated here can be applied across a wide range of settings. Key words: cost analysis; cost of illness; statistical models; econometric models; stroke; cerebrovascular disease. (Med Decis Making 1998;18 suppl:S39-S56)
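The three predictive-validity criteria named in the abstract can be written down directly. This is a generic sketch with made-up numbers, not the paper's code; the log score assumes each model supplies a predictive density evaluated at the observed cost.

```python
# Sketch of the three criteria: RMSE (predicted mean), MAE (predicted median),
# and a logarithmic scoring rule over the full predictive distribution.

import math

def rmse(actual, predicted_mean):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted_mean)) / len(actual))

def mae(actual, predicted_median):
    return sum(abs(a - p) for a, p in zip(actual, predicted_median)) / len(actual)

def log_score(densities_at_actual):
    # Mean log predictive density of each observed cost under the model's
    # predictive distribution; higher is better.
    return sum(math.log(d) for d in densities_at_actual) / len(densities_at_actual)

# Hypothetical test-sample costs and one model's point predictions
actual = [1200.0, 0.0, 5400.0, 800.0]
pred   = [1000.0, 300.0, 5000.0, 900.0]
print(rmse(actual, pred), mae(actual, pred))
```

Lower RMSE and MAE and a higher log score indicate better predictive validity, which is how the five candidate models were ranked.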


The American Journal of Gastroenterology | 1998

Performing a cost-effectiveness analysis: Surveillance of patients with ulcerative colitis

Dawn Provenzale; John Wong; Jane E. Onken; Joseph Lipscomb

Objective: To illustrate the principles of cost-effectiveness analysis, this third article in the "Primer on Economic Analysis for the Gastroenterologist" applies published criteria for appraising an economic analysis to a study of the cost-effectiveness of surveillance of patients with ulcerative colitis. Methods: We review and apply the 10 standard criteria for critical appraisal and evaluation of cost-effectiveness analyses. Summary: We outline the development and critique of a decision analytic model that examines the cost-effectiveness of surveillance of patients with ulcerative colitis, and we compare the cost-effectiveness of surveillance to other well-accepted medical practices.


Annals of Internal Medicine | 1997

The Stroke Prevention Policy Model: Linking Evidence and Clinical Decisions

David B. Matchar; Gregory P. Samsa; J. Rosser Matthews; Marek Ancukiewicz; Giovanni Parmigiani; Vic Hasselblad; Phillip A. Wolf; Ralph B. D'Agostino; Joseph Lipscomb

Traditionally, the art of medicine has consisted of physicians using their expert and largely tacit knowledge [1] (culled from years of personal experience) to tailor diagnosis and therapy to the specific needs of individual patients. In this tradition, the use of external evidence to guide medical practice is discouraged because it might undermine the inherently private and unique relationship between physician and patient. Taken to the extreme, this perspective is clearly outdated. Indeed, one of the few points of general agreement in contemporary debates on health care is that the practice of medicine should be evidence based.

Evidence-Based Medicine and Large Databases

The challenge is to define the meaning of evidence-based medicine. To the extent that "evidence based" means that clinical encounters should be supported by scientific conclusions based on data as much as possible, the rationale for evidence-based medicine is self-evident. However, disagreements about the specifics are abundant. In particular, the clinical and health policy communities lack consensus about what types of evidence are relevant to decision making, how to properly evaluate and interpret various bits of evidence, and how to translate evidence into plans of action (such as recommendations and guidelines). In this context, the large database has emerged as an attractive but controversial source of information. In typical applications, the term large database is synonymous with databases from administrative files (such as Medicare claims), in which the original purpose of data collection was not evaluation of clinical issues. The appeal of these data is that they are readily available and relatively easy to use. Persons who advocate the use of these data anticipate that their value will be revealed through extensive statistical analyses (for example, regression modeling) that can lead to inferences about the relative efficacy, cost, and patterns of use of various clinical interventions [2]. The approach is seductively simple. Yet administrative data are subject to various and well-catalogued sources of bias [3], and their use as a source of information for clinical practice is viewed with legitimate concern. Given these problems with administrative data, it would be productive to broaden the meaning of large databases to include any database that contains many patients, regardless of the underlying study design. This broader definition includes Medicare claims files, large cohort studies, and meta-analyses of randomized trials that include many patients in aggregate. This definition is more consistent with the goals of evidence-based medicine. However, adopting this definition stretches the use of analytic tools typically encountered in database analysis. Rather than analyzing a file with a simple structure (one record for each patient) from a single source, researchers would be faced with various information sources that could have different variables and structures and could have been obtained from different types of study design. After our view of what data should be considered as evidence has been expanded, the next challenging step (addressed in this article) is to identify a means by which the various data can be effectively used. Our response to this challenge is to develop a comprehensive decision model that combines information from disparate sources into a single structure. Such an analytic approach to decision making invites criticism from investigators who believe that not all decision models are created equal and that some decision models resort to oversimplification and unrealistic assumptions. This criticism is fair because an information tool with the power of decision modeling carries substantial dangers. Certainly, model building should always be done in a systematic fashion, with special consideration given to such issues as consistency with underlying clinical data, documentation, reproducibility, and model validation.

A Comprehensive Approach Applied to Decisions in Stroke Prevention

Every year, approximately 500,000 persons in the United States have a stroke and 150,000 persons die as a result. Stroke is the third leading cause of death in the United States and is the leading cause of serious disability. At an annual cost of at least $30 billion to $40 billion, stroke is expensive [4, 5]. Moreover, stroke is a widely feared event: more than 40% of respondents to a recent survey rated a hypothetical major stroke to be a worse outcome than death [6]. Because strokes predominantly affect elderly persons, the clinical and public health importance of stroke can be expected to grow as the number of elderly persons continues to increase. One of the goals of the Patient Outcomes Research Team for the Secondary and Tertiary Prevention of Stroke (Stroke PORT) is to evaluate the cost-effectiveness of surgical and medical interventions that prevent stroke. We have organized tasks related to this goal around a comprehensive model of stroke development and outcome: the Stroke Prevention Policy Model (SPPM) [7-11] (Figure 1). This model uses various data as inputs, including epidemiologic studies, randomized clinical trials, administrative data, and patient interviews. Many of these inputs represent large databases. The SPPM outputs include estimates of the health and economic outcomes of alternate stroke prevention practices. By linking clinical evidence (in particular, from population-based studies, claims data, and large randomized trials) to the health outcomes of individual persons, the SPPM is intended to improve the health care community's understanding of the available prevention options, thereby improving the ability of clinicians, patients, and policy makers to make informed decisions about stroke prevention.

Figure 1. Structure of the Stroke Prevention Policy Model (SPPM).

The focus of this article is the role of large databases (as broadly defined) in the development and operation of a comprehensive decision-modeling effort, the SPPM. Our article consists of three sections. First, we describe the SPPM in terms of its basic structure and function. Because this article focuses on the application of large databases in the context of the SPPM effort rather than on the SPPM itself, this description is illustrative rather than comprehensive (more comprehensive descriptions are published elsewhere [8, 9]). Next, recognizing that the use of modeling continues to be a source of controversy within the medical community, we discuss the philosophical underpinnings of the SPPM. Finally, we consider how some of the lessons learned from the SPPM might be applied to simulation-based decision modeling for similarly complex issues.

Description of the Stroke Prevention Policy Model

Component Models. The overall SPPM incorporates component models for the natural history of cerebrovascular disease (natural history model), the implications of prevention strategies (intervention model), patient preferences (utility model), and costs (cost model). The natural history model (the backbone of the SPPM) describes the development of cerebrovascular disease in the absence of specific preventive interventions. The intervention model describes how these interventions modify the natural history of stroke. By linking the natural history model (as possibly modified by information about interventions) with the patient preference and cost models, the SPPM can be used to evaluate the cost-effectiveness of various intervention strategies.

Natural History Model Structure. The SPPM is a semi-Markov simulation model used to support decision and cost-effectiveness analyses. Its fundamental elements consist of health states and events, whereby events represent transitions between health states. Because of its technical complexity, the SPPM operates by simulating the natural history of a large cohort of patients. The health-related history of an individual patient is random (that is, it is based on the simulation's generation of random numbers) yet follows underlying transition probabilities. As the size of the simulated cohort increases, the relative effects of chance introduced by the simulation are reduced and the natural history of patients can be described with any desired degree of precision. To compare an intervention with standard practice, we simulate the experience of a large cohort of patients using the natural history model, modify the parameters of the natural history using the intervention model, and then repeat the simulation. We use a similar strategy to compare two interventions.

Events. The SPPM takes into account the following clinical events: transient ischemic attack, ischemic stroke, hemorrhagic stroke, myocardial infarction, complications of treatment for disorders other than stroke or myocardial infarction (for example, gastrointestinal bleeding as a complication of anticoagulation), and death (which can be the result of stroke, myocardial infarction, or other causes). Patients who survive at least 30 days after a stroke are randomly assigned a stroke-related disability level defined by the Rankin score [12]. Patients who die of a stroke within 30 days are assigned a Rankin score of 5 (indicating the highest possible stroke-related disability). The health states that represent fundamental elements of the model can be defined from these events in various ways. For example, a patient who has had both a transient ischemic attack and an ischemic stroke might be defined as being in the transient ischemic attack and ischemic stroke state. In contrast, the same patient might be defined as being in the transient ischemic attack state and then undergoing a transition to the ischemic stroke state. For convenience, we adopt the latter convention and focus on the following codes for the health states: ASY (asymptomatic), TIA (transient ischemic attack), IS (ischemic stroke), HS (hemorrhagic stroke), MI (myocardial infarction), and DT


Medical Decision Making | 1997

Assessing uncertainty in cost-effectiveness analyses: application to a complex decision model.

Giovanni Parmigiani; Greg Samsa; Marek Ancukiewicz; Joseph Lipscomb; Vic Hasselblad; David B. Matchar


Journal of the American College of Cardiology | 1990

Feasibility and cost-saving potential of outpatient cardiac catheterization

Jennifer Lee; James R. Bengtson; Joseph Lipscomb; Thomas M. Bashore; Daniel B. Mark; Robert M. Califf; David B. Pryor; Mark A. Hlatky

Collaboration


Dive into Joseph Lipscomb's collaboration.

Top Co-Authors


David B. Matchar

National University of Singapore
