Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lisa I. Iezzoni is active.

Publication


Featured research published by Lisa I. Iezzoni.


Annals of Internal Medicine | 1997

Assessing Quality Using Administrative Data

Lisa I. Iezzoni

State and regional efforts to assess the quality of health care often start with administrative data, which are a by-product of administering health services, enrolling members into health insurance plans, and reimbursing for health care services. By definition, administrative data were never intended for use in quality assessment. As a result, clinicians often dismiss these data, arguing that the information cannot be trusted. Nonetheless, with detailed clinical information buried deep within paper medical records and thus expensive to extract, administrative data possess important virtues. They are readily available; are inexpensive to acquire; are computer readable; and typically encompass entire regional populations or large, well-defined subpopulations. In the health policy community, hopes for administrative data were initially high. Beginning in the early 1970s, administrative data quantified startling practice variations across small geographic areas [1, 2]. In the 1980s, administrative databases became a mainstay of research on the outcomes of care [3, 4]. In 1989, legislation that created the Agency for Health Care Policy and Research (AHCPR) stipulated the use of claims data in determining the outcomes, effectiveness, and appropriateness of different therapies (Public Law 101-239, Section 1142[c]). Five years later, however, the Office of Technology Assessment offered a stinging appraisal: "Contrary to the expectations expressed in the legislation establishing AHCPR, administrative databases generally have not proved useful in answering questions about the comparative effectiveness of alternative medical treatments" [5].

The costs of acquiring detailed clinical information, however, often force concessions in the real world. For example, in 1990, California's Assembly debated new requirements for reporting clinical data to evaluate hospital quality [6]. When estimated annual costs for data collection were $61 million, fiscal reality intervened. The legislature mandated the creation of quality measures that used California's existing administrative database. Thus, widespread quality assessment typically demands a trade-off: the credibility of clinical data versus the expense and feasibility of data collection. Can administrative data produce useful judgments about the quality of health care?

Defining Quality

What is quality? For decades, physicians protested that defining health care quality was impossible. Today, however, experts claim that rigorous quality measures can systematically assess care across groups of patients [7, 8]. Nonetheless, consensus about specific methods for measuring quality remains elusive. Different conceptual frameworks for defining quality stress different dimensions of health care delivery. Donabedian's classic framework [9] delineated three dimensions: 1) structure, or the characteristics of a health care setting [for example, the physical plant, available technology, staffing patterns, and credentialing procedures]; 2) process, or what is done to patients; and 3) outcomes, or how patients do after health care interventions. The three dimensions are intertwined, but their relative utility depends on context. Few links between processes and outcomes are backed by solid evidence from well-controlled studies, and outcomes that are not linked to specific medical practices provide little guidance for developing quality-improvement strategies [10]. In addition, comparing outcomes across groups frequently requires adjustment for patient risk and the recognition that some patients are sicker than others [11]. Other important dimensions emerge when process splits into two components: technical quality and interpersonal quality (for example, communication, caring, and respect for patient preferences).

Another process question involves the appropriateness of services: errors of omission (failing to do necessary things) and errors of commission (doing unnecessary things). Both errors can be related to another important dimension of quality: access to health care. In errors of omission, access may be impeded; in errors of commission, access may be too easy or inducements to perform procedures too great. In today's environment, determining who (or what) is accountable for observed quality is as important as measuring quality. This requires defining a unit of analysis: quality for whom? Potential units of analysis include individual patients, patients grouped by providers, or populations defined by region or an important characteristic (for example, the insurer or patient age). Methods for measuring quality across populations differ from those that scrutinize quality for individual patients. Given these multidimensional perspectives, a single response may be insufficient to judge whether administrative data can assess health care quality. As discussed in the following sections, administrative data may capture some dimensions of quality and units of observation better than others.

Content of Administrative Databases

The three major producers of administrative databases are the federal government (including the Health Care Financing Administration [HCFA], which administers Medicare and oversees Medicaid; the Department of Defense; and the Department of Veterans Affairs), state governments, and private insurers [3, 4, 12-19]. Although administrative files initially concentrated on information from acute care hospitals, information is increasingly compiled from outpatient, long-term care, home health, and hospice programs. Most administrative files explicitly aim to minimize data collection. Their source documents (for example, claim forms) contain the minimum amount of information required to perform the relevant administrative function (for example, to verify and pay the claims). In this article, I focus on hospital-derived data (such as that obtained from discharge abstracts), but many of the issues examined apply to other care settings. Their clinical content delimits the potential of databases to measure the quality of health care. Administrative sources always contain routine demographic data (Table 1). Additional clinical information includes diagnosis codes (based on the International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM]) and procedure codes. Hospitals report procedures using ICD-9-CM codes, but physicians generally use codes from the American Medical Association's Current Procedural Terminology. The two coding systems do not readily link, hindering comparisons between hospital- and physician-generated data.

Table 1. Contents of the Uniform Hospital Discharge Data Set

The ICD-9-CM contains codes for many conditions that are technically not diseases (Table 2). Given this diversity, creatively combining ICD-9-CM codes produces snapshots of clinical scenarios. For example, data selected from the 1994 discharge abstract of a man in a California hospital (Table 3) suggest the following scenario: A 62-year-old white man with a history of chronic renal failure that required hemodialysis and type 2 diabetes with retinopathy was admitted with the Mallory-Weiss syndrome. Blood loss from an esophageal tear may have caused orthostatic hypotension. During the 9-day hospitalization, the patient was also treated for Klebsiella pneumonia.

Table 2. Examples of Information Contained in ICD-9-CM Codes*

Table 3. Discharge Abstract Information for a Patient Admitted to a California Hospital in 1994*

This diversity of ICD-9-CM codes is used by administrative data-based severity measures [20-22] aiming to compare risk-adjusted patient outcomes across hospitals. For example, Disease Staging rates patients with pneumonia as having more severe disease if the discharge abstract also contains codes for sepsis.

Attributes of Administrative Data

Administrative files contain limited clinical insight to inform quality assessment. Administrative data cannot elucidate the interpersonal quality of care, evaluate the technical quality of processes of care, determine most errors of omission or commission, or assess the appropriateness of care. Some exceptions to these negative judgments do exist. For example, with longitudinal person-level data, one could detect failures to immunize children (errors of omission), if all immunizations were coded properly, which is unlikely. Certain ICD-9-CM procedure codes prompt concerns about technical quality (for example, 39.41, control of hemorrhage after vascular surgery, and 54.12, reopening of recent laparotomy site), but the specificity of the codes is suspect. Nonetheless, administrative data are widely used to produce hospital report cards that primarily compare in-hospital mortality rates. The mechanics are easy. For example, in Massachusetts, reporters for The Boston Globe purchased the state's database of hospital discharge abstracts, conducted analyses, and published a report card on hospital mortality. The report card was explicitly intended to provide insight into the quality of health care [23]. Are quality assessments based on administrative data valid? As Donabedian observed [9], a major aspect of validity has to do with the accuracy of the data. The Institute of Medicine's Committee on Regional Health Data Networks made the reliability and validity of data an absolute requirement that had to be satisfied before public dissemination of derived quality measures [12]: "The public interest is materially served when society is given as much information on costs, quality, and value for health care dollar expended as can be given accurately… Public disclosure is acceptable only when it: (1) involves information and analytic results that come from studies that have been well conducted, (2) is based on data that can be shown to be reliable and valid for the purposes intended, and (3) is accompanied by appropriate educational material." What, therefore, are the important attributes of administrative data?

Data Quality

Like quality of care…


Journal of General Internal Medicine | 2007

Implicit Bias among Physicians and its Prediction of Thrombolysis Decisions for Black and White Patients

Alexander R. Green; Dana R. Carney; Daniel J. Pallin; Long Ngo; Kristal L. Raymond; Lisa I. Iezzoni; Mahzarin R. Banaji

Context: Studies documenting racial/ethnic disparities in health care frequently implicate physicians' unconscious biases. No study to date has measured physicians' unconscious racial bias to test whether this predicts physicians' clinical decisions. Objective: To test whether physicians show implicit race bias and whether the magnitude of such bias predicts thrombolysis recommendations for black and white patients with acute coronary syndromes. Design, Setting, and Participants: An internet-based tool comprising a clinical vignette of a patient presenting to the emergency department with an acute coronary syndrome, followed by a questionnaire and three Implicit Association Tests (IATs). Study invitations were e-mailed to all internal medicine and emergency medicine residents at four academic medical centers in Atlanta and Boston; 287 completed the study, met inclusion criteria, and were randomized to either a black or white vignette patient. Main Outcome Measures: IAT scores (normal continuous variable) measuring physicians' implicit race preference and perceptions of cooperativeness. Physicians' attribution of symptoms to coronary artery disease for vignette patients with randomly assigned race, and their decisions about thrombolysis. Assessment of physicians' explicit racial biases by questionnaire. Results: Physicians reported no explicit preference for white versus black patients or differences in perceived cooperativeness. In contrast, IATs revealed implicit preference favoring white Americans (mean IAT score = 0.36, P < .001, one-sample t test) and implicit stereotypes of black Americans as less cooperative with medical procedures (mean IAT score 0.22, P < .001), and less cooperative generally (mean IAT score 0.30, P < .001). As physicians' pro-white implicit bias increased, so did their likelihood of treating white patients and not treating black patients with thrombolysis (P = .009). Conclusions: This study represents the first evidence of unconscious (implicit) race bias among physicians, its dissociation from conscious (explicit) bias, and its predictive validity. Results suggest that physicians' unconscious biases may contribute to racial/ethnic disparities in use of medical procedures such as thrombolysis for myocardial infarction.


JAMA | 1997

The risks of risk adjustment

Lisa I. Iezzoni

CONTEXT: Risk adjustment is essential before comparing patient outcomes across hospitals. Hospital report cards around the country use different risk adjustment methods. OBJECTIVES: To examine the history and current practices of risk adjusting hospital death rates and consider the implications for using risk-adjusted mortality comparisons to assess quality. DATA SOURCES AND STUDY SELECTION: This article examines severity measures used in states and regions to produce comparisons of risk-adjusted hospital death rates. Detailed results are presented from a study comparing current commercial severity measures using a single database. It included adults admitted for acute myocardial infarction (n=11,880), coronary artery bypass graft surgery (n=7,765), pneumonia (n=18,016), and stroke (n=9,407). Logistic regressions within each condition predicted in-hospital death using severity scores. Odds ratios for in-hospital death were compared across pairs of severity measures. For each hospital, z scores compared actual and expected death rates. RESULTS: The severity measure called Disease Staging had the highest c statistic (which measures how well a severity measure discriminates between patients who lived and those who died) for acute myocardial infarction, 0.86; the measure called All Patient Refined Diagnosis Related Groups had the highest for coronary artery bypass graft surgery, 0.83; and the measure MedisGroups had the highest for pneumonia, 0.85, and stroke, 0.87. Different severity measures predicted different probabilities of death for many patients. Severity measures frequently disagreed about which hospitals had particularly low or high z scores. Agreement in identifying low- and high-mortality hospitals between severity-adjusted and unadjusted death rates was often better than agreement between severity measures. CONCLUSIONS: Severity does not explain differences in death rates across hospitals. Different severity measures frequently produce different impressions about relative hospital performance. Severity-adjusted mortality rates alone are unlikely to isolate quality differences across hospitals.


Medical Care | 1994

Identifying Complications of Care Using Administrative Data

Lisa I. Iezzoni; Jennifer Daley; Timothy Heeren; Susan M. Foley; Elliott S. Fisher; Charles C. Duncan; John S. Hughes; Gerald A. Coffman

The Complications Screening Program (CSP) is a method using standard hospital discharge abstract data to identify 27 potentially preventable in-hospital complications, such as post-operative pneumonia, hemorrhage, medication incidents, and wound infection. The CSP was applied to over 1.9 million adult medical/surgical cases using 1988 California discharge abstract data. Cases with complications were significantly older and more likely to die, and they had much higher average total charges and lengths of stay than other cases (P < 0.0001). For most case types, 13 chronic conditions, defined using diagnosis codes, increased the relative risks of having a complication after adjusting for patient age. Cases at larger hospitals and teaching facilities generally had higher complication rates. Logistic regression models to predict complications using demographic, administrative, clinical, and hospital characteristic variables had modest power (c statistics = 0.64 to 0.70). The CSP requires further evaluation before using it for purposes other than research.


Journal of Health Economics | 1994

Measuring hospital efficiency with frontier cost functions.

Stephen Zuckerman; Jack Hadley; Lisa I. Iezzoni

This paper uses a stochastic frontier multiproduct cost function to derive hospital-specific measures of inefficiency. The cost function includes direct measures of illness severity, output quality, and patient outcomes to reduce the likelihood that the inefficiency estimates are capturing unmeasured differences in hospital outputs. Models are estimated using data from the AHA Annual Survey, Medicare Hospital Cost Reports, and MEDPAR. We explicitly test the assumption of output endogeneity and reject it in this application. We conclude that inefficiency accounts for 13.6 percent of total hospital costs. This estimate is robust with respect to model specification and approaches to pooling data across distinct groups of hospitals.


BMJ | 1999

Explaining differences in English hospital death rates using routinely collected data

Brian Jarman; Simon Gault; Bernadette Alves; Amy Hider; Susan Dolan; Adrian Cook; Brian Hurwitz; Lisa I. Iezzoni

Objectives: To ascertain hospital inpatient mortality in England and to determine which factors best explain variation in standardised hospital death ratios. Design: Weighted linear regression analysis of routinely collected data over four years, with hospital standardised mortality ratios as the dependent variable. Setting: England. Subjects: Eight million discharges from NHS hospitals when the primary diagnosis was one of the diagnoses accounting for 80% of inpatient deaths. Main outcome measures: Hospital standardised mortality ratios and predictors of variations in these ratios. Results: The four year crude death rates varied across hospitals from 3.4% to 13.6% (average for England 8.5%), and standardised hospital mortality ratios ranged from 53 to 137 (average for England 100). The percentage of cases that were emergency admissions (60% of total hospital admissions) was the best predictor of this variation in mortality, with the ratio of hospital doctors to beds and general practitioners to head of population the next best predictors. When analyses were restricted to emergency admissions (which covered 93% of all patient deaths analysed) number of doctors per bed was the best predictor. Conclusion: Analysis of hospital episode statistics reveals wide variation in standardised hospital mortality ratios in England. The percentage of total admissions classified as emergencies is the most powerful predictor of variation in mortality. The ratios of doctors to head of population served, both in hospital and in general practice, seem to be critical determinants of standardised hospital death rates; the higher these ratios, the lower the death rates in both cases.


Journal of General Internal Medicine | 2003

Linguistic and cultural barriers to care.

Quyen Ngo-Metzger; Michael P. Massagli; Brian R. Clarridge; Michael Manocchia; Roger B. Davis; Lisa I. Iezzoni; Russell S. Phillips

CONTEXT: Primarily because of immigration, Asian Americans are one of the fastest growing and most ethnically diverse minority groups in the United States. However, little is known about their perspectives on health care quality. OBJECTIVE: To examine factors contributing to quality of care from the perspective of Chinese- and Vietnamese-American patients with limited English language skills. DESIGN: Qualitative study using focus groups and content analysis to determine domains of quality of care. SETTING: Four community health centers in Massachusetts. PARTICIPANTS: A total of 122 Chinese- and Vietnamese-American patients were interviewed in focus groups by bilingual interviewers using a standardized, translated moderator guide. MAIN OUTCOME MEASURES: Domains of quality of care mentioned by patients in verbatim transcripts. RESULTS: In addition to dimensions of health care quality commonly expressed by Caucasian, English-speaking patients in the United States, Chinese- and Vietnamese-American patients with limited English proficiency wanted to discuss the use of non-Western medical practices with their providers, but encountered significant barriers. They viewed providers' knowledge, inquiry, and nonjudgmental acceptance of traditional Asian medical beliefs and practices as part of quality care. Patients also considered the quality of interpreter services to be very important. They preferred using professional interpreters rather than family members, and preferred gender-concordant translators. Furthermore, they expressed the need for help in navigating health care systems and obtaining support services. CONCLUSIONS: Cultural and linguistically appropriate health care services may lead to improved health care quality for Asian-American patients who have limited English language skills. Important aspects of quality include providers' respect for traditional health beliefs and practices, access to professional interpreters, and assistance in obtaining social services.


Journal of General Internal Medicine | 2005

Interpreter services, language concordance, and health care quality. Experiences of Asian Americans with limited English proficiency.

Alexander R. Green; Quyen Ngo-Metzger; Anna T. R. Legedza; Michael P. Massagli; Russell S. Phillips; Lisa I. Iezzoni

BACKGROUND: Patients with limited English proficiency (LEP) have more difficulty communicating with health care providers and are less satisfied with their care than others. Both interpreter- and language-concordant clinicians may help overcome these problems but few studies have compared these approaches. OBJECTIVE: To compare self-reported communication and visit ratings for LEP Asian immigrants whose visits involve either a clinic interpreter or a clinician speaking their native language. DESIGN: Cross-sectional survey (response rate 74%). PATIENTS: Two thousand seven hundred and fifteen LEP Chinese and Vietnamese immigrant adults who received care at 11 community-based health centers across the U.S. MEASUREMENTS: Five self-reported communication measures and overall rating of care. RESULTS: Patients who used interpreters were more likely than language-concordant patients to report having questions about their care (30.1% vs 20.9%, P<.001) or about mental health (25.3% vs 18.2%, P=.005) they wanted to ask but did not. They did not differ significantly in their response to 3 other communication measures or their likelihood of rating the health care received as "excellent" or "very good" (51.7% vs 50.9%, P=.8). Patients who rated their interpreters highly ("excellent" or "very good") were more likely to rate the health care they received highly (adjusted odds ratio 4.8, 95% confidence interval, 2.3 to 10.1). CONCLUSIONS: Assessments of communication and health care quality for outpatient visits are similar for LEP Asian immigrants who use interpreters and those whose clinicians speak their language. However, interpreter use may compromise certain aspects of communication. The perceived quality of the interpreter is strongly associated with patients' assessments of quality of care overall.


American Journal of Public Health | 1996

Judging hospitals by severity-adjusted mortality rates: the influence of the severity-adjustment method.

Lisa I. Iezzoni; Arlene S. Ash; Jennifer Daley; John S. Hughes; Yevgenia D. Mackiernan

OBJECTIVES: This research examined whether judgments about a hospital's risk-adjusted mortality performance are affected by the severity-adjustment method. METHODS: Data came from 100 acute care hospitals nationwide and 11,880 adults admitted in 1991 for acute myocardial infarction. Ten severity measures were used in separate multivariable logistic models predicting in-hospital death. Observed-to-expected death rates and z scores were calculated with each severity measure for each hospital. RESULTS: Unadjusted mortality rates for the 100 hospitals ranged from 4.8% to 26.4%. For 32 hospitals, observed mortality rates differed significantly from expected rates for 1 or more, but not for all 10, severity measures. Agreement between pairs of severity measures on whether hospitals were flagged as statistical mortality outliers ranged from fair to good. Severity measures based on medical records frequently disagreed with measures based on discharge abstracts. CONCLUSIONS: Although the 10 severity measures agreed about relative hospital performance more often than would be expected by chance, assessments of individual hospital mortality rates varied by different severity-adjustment methods.


International Journal of Technology Assessment in Health Care | 1990

Using Administrative Diagnostic Data to Assess the Quality of Hospital Care: Pitfalls and Potential of ICD-9-CM

Lisa I. Iezzoni
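Several abstracts above compare hospitals using observed-to-expected death rates, z scores, and the c statistic. As a minimal sketch of that arithmetic only (not the actual models used in any of these studies; the function names and toy numbers are illustrative assumptions), the computations look like this:

```python
import math

def expected_deaths(probs):
    # Expected death count for one hospital: the sum of each
    # patient's model-predicted probability of in-hospital death.
    return sum(probs)

def oe_ratio_and_z(observed, probs):
    # Observed-to-expected death ratio plus an approximate z score,
    # treating patients as independent Bernoulli outcomes so the
    # variance of the expected count is the sum of p * (1 - p).
    expected = expected_deaths(probs)
    variance = sum(p * (1.0 - p) for p in probs)
    z = (observed - expected) / math.sqrt(variance)
    return observed / expected, z

def c_statistic(probs, died):
    # c statistic (area under the ROC curve): the fraction of
    # (death, survivor) pairs in which the patient who died was
    # assigned the higher predicted probability; ties count half.
    deaths = [p for p, d in zip(probs, died) if d]
    survivors = [p for p, d in zip(probs, died) if not d]
    concordant = 0.0
    for p_death in deaths:
        for p_survivor in survivors:
            if p_death > p_survivor:
                concordant += 1.0
            elif p_death == p_survivor:
                concordant += 0.5
    return concordant / (len(deaths) * len(survivors))

# Toy hospital: four patients with predicted death probabilities
# from some severity measure, and two observed deaths.
probs = [0.10, 0.40, 0.80, 0.05]
died = [False, True, True, False]
ratio, z = oe_ratio_and_z(sum(died), probs)
```

A hospital with a ratio above 1 (and a large positive z score) has more deaths than its patient mix predicts; severity measures that assign different values to `probs` can flag different hospitals as outliers, which is exactly the instability the 1996 and 1997 abstracts report.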

Collaboration


Dive into Lisa I. Iezzoni's collaborations.

Top Co-Authors

Arlene S. Ash (University of Massachusetts Medical School)
Roger B. Davis (Beth Israel Deaconess Medical Center)
Ellen P. McCarthy (Beth Israel Deaconess Medical Center)
Linda M. Long-Bellil (University of Massachusetts Medical School)