Publication


Featured research published by Jeff Luck.


Annals of Internal Medicine | 2004

Measuring the Quality of Physician Practice by Using Clinical Vignettes: A Prospective Validation Study

John W. Peabody; Jeff Luck; Peter Glassman; Sharad Jain; Joyce Hansen; Maureen Spell; Martin L. Lee

Accurate, affordable, and valid measurements of clinical practice are the basis for quality-of-care assessments (1). However, to date, most measurement tools have relied on incomplete data sources, such as medical records or administrative data; require highly trained and expensive personnel to implement; and are difficult to validate (2-5). Comparisons of clinical practice across different sites and health care systems are also difficult because they require relatively complex instrument designs or statistical techniques to adjust for variations in case mix among the underlying patient populations (6, 7). We have developed a measurement tool, computerized clinical vignettes, that overcomes these limitations and measures physicians' clinical practice against a predefined set of explicit quality criteria. These vignettes simulate patient visits and can be given to physicians to measure their ability to evaluate, diagnose, and treat specific medical conditions. Each vignette-simulated case contains realistic clinical detail, allowing an identical clinical scenario to be presented to many physicians. Each physician can be asked to complete several vignettes to simulate diverse clinical conditions. This instrument design obviates the need to adjust quality scores for the variation in disease severity and comorbid conditions found in actual patient populations. Our vignettes are also distinct from other quality measurements of clinical practice because they do not focus on a single task, or even a limited set of tasks, but instead comprehensively evaluate the range of skills needed to care for a patient. Vignettes are particularly well-suited for quality assessments of clinical practice that are used for large-scale (8, 9), cross-system comparisons (10, 11) or for cases in which ethical issues preclude involving patients or their records (7, 12, 13). They are also ideal for evaluations that require holding patient variation constant (14, 15) or manipulating patient-level variables (15-17). The appeal of vignettes has resulted in their extensive use in medical school education (18, 19), as well as in various studies that explicitly evaluate the quality of clinical practice in real-life settings and in comparative analyses among national health care systems (10, 20-23). Before vignette-measured quality can be used confidently in these settings, however, 2 important questions must be answered: How valid are vignettes as a measure of actual clinical practice? Can vignettes discriminate among variations in the quality of clinical practice? This has led to a search to define a gold standard for validation (24-26). We and others have used standardized patients as this standard. Standardized patients are trained actors who present unannounced to outpatient clinics as patients with a given clinical condition. Immediately after meeting with a physician, the standardized patient records on a checklist what the physician did during the visit (26-28). Rigorous methods, which we have described in detail elsewhere (29), ensure that standardized patients can be considered a gold standard. In addition, we have demonstrated the validity of standardized patients as a gold standard by concealing audio recorders on standardized patients during visits. The overall rate of agreement between the standardized patients' checklists and the independent assessment of the audio transcripts was 91% (26).
We previously used paper-and-pen vignettes in a study limited to only 1 health care system, the Veterans Administration, and found that they seemed to be a valid measure of the quality of clinical practice according to their rate of agreement with standardized patient checklists (26). For this study, we wanted to confirm the validity of vignettes by using a more complex study design that introduced many more randomly assigned physicians, a broader range of clinical cases, and several sites representing different health care systems. We also wanted to test a refined, computerized version of vignettes, which we believe is more realistic and streamlines data collection and scoring. We were particularly interested in determining whether the vignettes accurately capture variation in the quality of clinical practice, which has become increasingly prominent in the national debate on quality of care (30, 31). We hoped that vignettes could contribute to this debate by providing a low-cost measure of variation across different health care systems.

Methods

Sites: The study was conducted in 4 general internal medicine clinics: 2 Veterans Affairs (VA) medical centers and 2 large, private medical centers. One private site is a closed group model, and the other, primarily staffed by employed physicians, contracts with managed care plans. All sites are located in California, and each has an internal medicine residency training program. One VA medical center and 1 private site are located in each of 2 cities. The 2 VA medical centers are large, academically affiliated hospitals with large primary care general internal medicine practices. We chose 2 private sites that were generally similar to the VA medical centers and to each other; each had large primary care practices and capitated reimbursement systems that provide primary care general internists with a broad scope of clinical decision-making authority.

Study Design: At each site, all attending physicians and second- and third-year residents who were actively engaged in the care of general internal medicine outpatients were eligible to participate in the study. We excluded only interns. Of 163 eligible physicians, 144 agreed to participate. We informed consenting physicians that 6 to 10 standardized patients might be introduced unannounced into their clinics over the course of a year and that they might be asked to complete an equal number of vignettes. Sixty physicians were randomly selected to see standardized patients: 5 physicians from each of the 3 training levels at each of the 4 sites (Figure 1). We assigned standardized patients to each selected physician for 8 clinical cases: simple and complex cases of chronic obstructive pulmonary disease, diabetes, vascular disease, and depression. We abstracted the medical records from the 480 standardized patient visits. Each selected physician also completed a computerized clinical vignette for each of the 8 cases. For standardized patient visits that a selected physician did not complete, a replacement physician, who was randomly selected from the same training level at the same site, completed the visit. Eleven physicians required replacements. The 11 replacement physicians completed 24 standardized patient visits. Each replacement physician completed vignettes for all 8 cases. Finally, we randomly selected 45 additional physicians to serve as controls and complete vignettes (only) for all 8 cases.
A total of 116 physicians participated in the study by seeing standardized patients, completing vignettes, or both. Standardized patients presented to the clinics between March and July 2000, and physicians completed vignettes between May and August 2000.

Figure 1 caption: Planned study design showing sites and physician sample by level of training and clinical case for the 3 quality measurement methods.

Vignette Data Collection: We developed the vignettes by using a standardized protocol. We first selected relatively common medical conditions frequently seen by internists. All selected conditions had explicit, evidence-based quality criteria and accepted standards of practice that could be used to score the vignettes, as well as be measured by standardized patients and chart abstraction. We developed written scenarios that described a typical patient with 1 of the same 4 diseases (chronic obstructive pulmonary disease, diabetes, vascular disease, or depression). For each disease, we developed a simple (uncomplicated) case and a more complex case with a comorbid condition of either hypertension or hypercholesterolemia. This yielded a total of 8 clinical cases. (A sample vignette and scoring sheet are available online as a supplement: Appendix Figure, vignette scoring sheet, published with permission from John W. Peabody, MD, PhD.)

The physician completing the vignette sees the patient on a computer. Each vignette is organized into 5 sections, or domains, which, when completed in sequential order, recreate the normal sequence of events in an actual patient visit: taking the patient's history, performing the physical examination, ordering radiologic or laboratory tests, making a diagnosis, and administering a treatment plan. For example, the computerized vignette first states the presenting problem and prompts the physician to take the patient's history (that is, ask questions that would determine the history of the present illness; past medical history, including prevention; and social history). Physicians can record components of the history in any order without penalty. The entire format is open-ended: The physician enters the history questions directly into the computer and, in the most recent computerized versions, receives real-time responses. When the history is completed, the computer confirms that the physician has finished and then provides key responses typical of a patient with the specific case. The same process is repeated for the 4 remaining domains. In addition to the open-ended format, we have taken 3 steps to avoid potential inflation of vignette scores. First, physicians are not allowed to return to a previous domain and change their queries after they have seen the computerized response. Second, the number of queries is limited in the history and physical examination domains. For example, in the physical examination domain, physicians are asked to list only the 6 to 10 essential elements of the examination that they would perform. Third, they are given limited time to complete the vignette (just as time is limited during an actual patient visit).
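The scoring logic behind these vignettes (a percentage of explicit quality criteria met across the 5 domains) can be illustrated with a small sketch. The code below is not the authors' software; the domain names follow the description above, but the criteria, responses, and function names are hypothetical.

```python
# A minimal sketch, not the authors' actual scoring software: score a completed
# vignette as the percentage of explicit quality criteria met across the five
# domains described above. Criteria and responses here are hypothetical.

DOMAINS = ["history", "physical_exam", "tests", "diagnosis", "treatment"]

# Hypothetical explicit criteria for one uncomplicated COPD case.
CRITERIA = {
    "history": {"smoking history", "dyspnea on exertion", "cough duration"},
    "physical_exam": {"lung auscultation", "respiratory rate"},
    "tests": {"spirometry", "chest x-ray"},
    "diagnosis": {"copd"},
    "treatment": {"inhaled bronchodilator", "smoking cessation counseling"},
}

def score_vignette(responses):
    """Return the score as the percentage of all criteria the physician met."""
    total = sum(len(items) for items in CRITERIA.values())
    met = sum(len(CRITERIA[d] & responses.get(d, set())) for d in DOMAINS)
    return 100.0 * met / total

physician_responses = {
    "history": {"smoking history", "cough duration"},
    "physical_exam": {"lung auscultation"},
    "tests": {"spirometry"},
    "diagnosis": {"copd"},
    "treatment": {"inhaled bronchodilator"},
}
print(f"Vignette score: {score_vignette(physician_responses):.0f}%")
```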


The American Journal of Medicine | 2000

How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record

Jeff Luck; John W. Peabody; Timothy R. Dresselhaus; Martin L. Lee; Peter Glassman

PURPOSE: Despite widespread reliance on chart abstraction for quality measurement, concerns persist about its reliability and validity. We prospectively evaluated the validity of chart abstraction by directly comparing it with the gold standard of reports by standardized patients.
SUBJECTS AND METHODS: Twenty randomly selected general internal medicine residents and attending faculty physicians at the primary care clinics of two Veterans Affairs Medical Centers blindly evaluated and treated actor-patients (standardized patients) who had one of four common diseases: diabetes, chronic obstructive pulmonary disease, coronary artery disease, or low back pain. Charts from the visits were abstracted using explicit quality criteria; standardized patients completed a checklist containing the same criteria. For each physician, quality was measured for two different cases of the four conditions (a total of 160 physician-patient encounters). We compared chart abstraction with standardized-patient reports for four aspects of the encounter: taking the history, examining the patient, making the diagnosis, and prescribing appropriate treatment. The sensitivity and specificity of chart abstraction were calculated.
RESULTS: The mean (±SD) chart abstraction score was 54% ± 9%, substantially less than the mean score on the standardized-patient checklist of 68% ± 9% (P < 0.001). This finding was similar for all four conditions and at both sites. False positives (chart-recorded necessary care actions not reported by the standardized patients) resulted in a specificity of only 81%. The overall sensitivity of chart abstraction for necessary care was only 70%.
CONCLUSIONS: Chart abstraction underestimates the quality of care for common outpatient general medical conditions when compared with standardized-patient reports. The medical record is neither sensitive nor specific. Quality measurements derived from chart abstraction may have important shortcomings, particularly as the basis for drawing policy conclusions or making management decisions.
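The sensitivity and specificity reported here treat the standardized-patient checklist as ground truth and ask how well the chart reproduces it. A minimal sketch of that calculation, with invented item-level data and function names of my own, is below.

```python
# A hedged sketch of the chart-vs-checklist comparison: each necessary care
# item is marked done/not done on the SP checklist (gold standard) and in the
# abstracted chart. The item-level data below are invented for illustration.

def sensitivity_specificity(sp_checklist, chart):
    """Sensitivity: share of SP-reported actions the chart captures.
    Specificity: share of actions the SP says did NOT happen that the chart also omits."""
    pairs = list(zip(sp_checklist, chart))
    tp = sum(1 for s, c in pairs if s and c)
    fn = sum(1 for s, c in pairs if s and not c)
    tn = sum(1 for s, c in pairs if not s and not c)
    fp = sum(1 for s, c in pairs if not s and c)
    return tp / (tp + fn), tn / (tn + fp)

# Ten hypothetical items from one encounter: the SP reports 6 were done; the
# chart records 4 of those plus 1 item the SP never observed (a false positive).
sp_items    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
chart_items = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
sens, spec = sensitivity_specificity(sp_items, chart_items)
print(f"sensitivity={sens:.0%}  specificity={spec:.0%}")
```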


Medical Care | 2004

Assessing the accuracy of administrative data in health information systems.

John W. Peabody; Jeff Luck; Sharad Jain; Dan Bertenthal; Peter Glassman

Background: Administrative data play a central role in health care. Inaccuracies in such data are costly to health systems, obscure health research, and affect the quality of patient care. Objectives: We sought to prospectively determine the accuracy of the primary and secondary diagnoses recorded in administrative data sets. Research Design: Between March and July 2002, standardized patients (SPs) completed unannounced visits at 3 sites. We abstracted the 348 medical records from these visits to obtain the written diagnoses made by physicians. We also examined the patient files to identify the diagnoses recorded on the administrative encounter forms and extracted data from the computerized administrative databases. Because the correct diagnosis was defined by the SP visit, we could determine whether the final diagnosis in the administrative data set was correct and, if not, whether the error was caused by physician diagnostic error, missing encounter forms, or incorrectly filled out forms. Subjects: General internal medicine outpatient clinics at 2 Veterans Administration facilities and a large, private medical center participated in this study. Measures: A total of 45 trained SPs presented to physicians with 4 common outpatient conditions. Results: The correct primary diagnosis was recorded for 57% of visits. Thirteen percent of errors were caused by physician diagnostic error, 8% by missing encounter forms, and 22% by incorrectly entered data. Findings varied by condition and site but not by level of training. Accuracy of secondary diagnosis data (27%) was even poorer. Conclusions: Although more research is needed to evaluate the cause of inaccuracies and the relative contributions of patient-, provider-, and system-level effects, it appears that significant inaccuracies in administrative data are common. Interventions aimed at correcting these errors appear feasible.
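The error attribution described above boils down to comparing the SP-defined diagnosis, the chart diagnosis, and the administrative entry for each visit. The sketch below shows one way to tally that classification; the visit records and field names are invented for illustration.

```python
# A sketch (with invented records and field names) of attributing each visit's
# administrative diagnosis to one of the categories above: correct, physician
# diagnostic error, missing encounter form, or incorrectly entered data.
from collections import Counter

# (correct_dx from the SP case, chart_dx, admin_dx, encounter_form_present)
visits = [
    ("copd", "copd", "copd", True),                 # correct in administrative data
    ("diabetes", "fatigue", "fatigue", True),       # physician diagnostic error
    ("copd", "copd", None, False),                  # missing encounter form
    ("depression", "depression", "anxiety", True),  # incorrectly entered data
]

def classify(correct_dx, chart_dx, admin_dx, form_present):
    if admin_dx == correct_dx:
        return "correct"
    if chart_dx != correct_dx:
        return "physician diagnostic error"
    if not form_present:
        return "missing encounter form"
    return "incorrectly entered data"

print(Counter(classify(*visit) for visit in visits))
```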


BMJ | 2002

Using standardised patients to measure physicians' practice: validation study using audio recordings

Jeff Luck; John W. Peabody

Objective: To assess the validity of standardised patients to measure the quality of physicians' practice. Design: Validation study of standardised patients' assessments. Physicians saw unannounced standardised patients presenting with common outpatient conditions. The standardised patients covertly tape recorded their visit and completed a checklist of quality criteria immediately afterwards. Their assessments were compared against independent assessments of the recordings by a trained medical records abstractor. Setting: Four general internal medicine primary care clinics in California. Participants: 144 randomly selected consenting physicians. Main outcome measures: Rates of agreement between the patients' assessments and the independent assessments. Results: 40 visits, one per standardised patient, were recorded. The overall rate of agreement between the standardised patients' checklists and the independent assessment of the audio transcripts was 91% (κ …). Disaggregating the data by medical condition, site, level of physicians' training, and domain (stage of the consultation) gave similar rates of agreement. Sensitivity of the standardised patients' assessments was 95%, and specificity was 85%. The area under the receiver operating characteristic curve was 90%. Conclusions: Standardised patients' assessments seem to be a valid measure of the quality of physicians' care for a variety of common medical conditions in actual outpatient settings. Properly trained standardised patients compare well with independent assessment of recordings of the consultations and may justify their use as a “gold standard” in comparing the quality of care across sites or evaluating data obtained from other sources, such as medical records and clinical vignettes.
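The headline numbers here are a raw percent agreement plus a chance-corrected statistic (the κ value, truncated in the abstract above). A sketch of both computations for item-level yes/no ratings from two raters is below; the rating vectors are invented, not the study's data.

```python
# A sketch of the two agreement statistics: raw percent agreement and Cohen's
# kappa for two binary raters (SP checklist vs. transcript abstraction).
# The item-level ratings below are illustrative only.

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two raters giving 0/1 judgments."""
    po = percent_agreement(a, b)
    pa, pb = sum(a) / len(a), sum(b) / len(b)
    pe = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return (po - pe) / (1 - pe)

sp_checklist      = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
transcript_review = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
print(f"agreement={percent_agreement(sp_checklist, transcript_review):.0%}  "
      f"kappa={cohens_kappa(sp_checklist, transcript_review):.2f}")
```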


Journal of General Internal Medicine | 2000

Measuring Compliance with Preventive Care Guidelines

Timothy R. Dresselhaus; John W. Peabody; Martin L. Lee; Mingming Wang; Jeff Luck

OBJECTIVE: To determine how accurately preventive care reported in the medical record reflects actual physician practice or competence.
DESIGN: Scoring criteria based on national guidelines were developed for 7 separate items of preventive care. The preventive care provided by randomly selected physicians was measured prospectively for each of the 7 items. Three measurement methods were used for comparison: (1) the abstracted medical record from a standardized patient (SP) visit; (2) explicit reports of physician practice during those visits from the SPs, who were actors trained to present undetected as patients; and (3) physician responses to written case scenarios (vignettes) identical to the SP presentations.
SETTING: The general medicine primary care clinics of two university-affiliated VA medical centers.
PARTICIPANTS: Twenty randomly selected physicians (10 at each site) from among eligible second- and third-year general internal medicine residents and attending physicians.
MEASUREMENTS AND MAIN RESULTS: Physicians saw 160 SPs (8 cases × 20 physicians). We calculated the percentage of visits in which each prevention item was recorded in the chart, determined the marginal percentage improvement of SP checklists and vignettes over chart abstraction alone, and compared the three methods using an analysis-of-variance model. We found that chart abstraction underestimated overall prevention compliance by 16% (P < .01) compared with SP checklists. Chart abstraction scores were lower than SP checklists for all seven items and lower than vignettes for four items. The marginal percentage improvement of SP checklists and vignettes over performance as measured by chart abstraction was significant for all seven prevention items and raised the overall prevention scores from 46% to 72% (P < .0001).
CONCLUSIONS: These data indicate that physicians perform more preventive care than they report in the medical record. Thus, benchmarks of preventive care by individual physicians and institutions that rely solely on the medical record may be misleading, at best.
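The "marginal percentage improvement" here is the gain in documented compliance when SP checklists and vignettes are counted alongside the chart. A hedged sketch of that comparison follows; the prevention item names and the two example visits are made up.

```python
# A sketch of the comparison described above: documented compliance when only
# the chart counts versus when the chart, SP checklist, or vignette counts.
# Prevention items and the two example visits are invented.

ITEMS = ["smoking", "alcohol", "diet", "exercise", "seatbelt", "flu_shot", "cancer_screen"]

def compliance(visits, sources):
    """Fraction of (visit, item) pairs documented in at least one listed source."""
    hits = total = 0
    for visit in visits:
        for item in ITEMS:
            total += 1
            hits += any(item in visit[source] for source in sources)
    return hits / total

visits = [
    {"chart": {"smoking", "flu_shot"},
     "sp_checklist": {"smoking", "alcohol", "diet", "flu_shot"},
     "vignette": {"smoking", "exercise"}},
    {"chart": {"diet"},
     "sp_checklist": {"diet", "seatbelt", "cancer_screen"},
     "vignette": {"diet", "alcohol"}},
]

chart_only = compliance(visits, ["chart"])
combined = compliance(visits, ["chart", "sp_checklist", "vignette"])
print(f"chart only: {chart_only:.0%}  chart + SP checklist + vignette: {combined:.0%}  "
      f"marginal improvement: {combined - chart_only:.0%}")
```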


Health Care Management Review | 2000

Improving the public sector: can reengineering identify how to boost efficiency and effectiveness at a VA medical center?

Jeff Luck; John W. Peabody

Reengineering is a widespread management technique, but few evaluations of its application in health care, especially in public sector organizations, have been reported. We conducted a reengineering analysis to determine whether the method could identify how to improve the efficiency and effectiveness of a VA medical center's primary care delivery system. We found that the reengineering method appears applicable in the institutional context of a public sector teaching hospital.


Social Science & Medicine | 2014

Patient and provider perspectives on quality and health system effectiveness in a transition economy: Evidence from Ukraine

Jeff Luck; John W. Peabody; Lisa DeMaria; C.S. Alvarado; R. Menon

Facing a severe population health crisis due to noncommunicable diseases, Ukraine and other former Soviet republics and Eastern European countries have a pressing need for more effective health systems. Policies to enhance health system effectiveness should consider the perspectives of different stakeholder groups, including providers as well as patients. In addition, policies that directly target the quality of clinical care should be based on objective performance measures. In 2009 and 2010 we conducted a coordinated series of household and facility-level surveys to capture the perspectives of Ukrainian household members, outpatient clinic patients, and physicians regarding the country's health system overall, as well as the quality, access, and affordability of health care. We objectively measured the quality of care for heart failure and chronic obstructive pulmonary disease using CPV® vignettes. There was broad agreement among household respondents (79%) and physicians (95%) that Ukraine's health system should be reformed. CPV® results indicate that the quality of care for common noncommunicable diseases is poor in all regions of the country and in hospitals as well as polyclinics. However, perspectives about the quality of care differ, with household respondents seeing quality as a serious concern, clinic patients having more positive perceptions, and physicians not viewing quality as a reform priority. All stakeholder groups viewed affordability as a problem. These findings have several implications for policies to enhance health system effectiveness. The shared desire for health system reform among all stakeholder groups provides a basis for action in Ukraine. Improving quality, strengthening primary care, and enhancing affordability should be major goals of new health policies. Policies to improve quality directly, such as pay-for-performance, would be mutually reinforcing with purchasing reforms such as transparent payment mechanisms. Such policies would align the incentives of physicians with the desires of the population they serve.


Journal of General Internal Medicine | 2014

Multimethod Evaluation of the VA’s Peer-to-Peer Toolkit for Patient-Centered Medical Home Implementation

Jeff Luck; Candice Bowman; Laura York; Amanda M. Midboe; Thomas Taylor; Randall Gale; Steven M. Asch

BACKGROUND: Effective implementation of the patient-centered medical home (PCMH) in primary care practices requires training and other resources, such as online toolkits, to share strategies and materials. The Veterans Health Administration (VA) developed an online Toolkit of user-sourced tools to support teams implementing its Patient Aligned Care Team (PACT) medical home model.
OBJECTIVE: To present findings from an evaluation of the PACT Toolkit, including use, variation across facilities, effect of social marketing, and factors influencing use.
INNOVATION: The Toolkit is an online repository of ready-to-use tools created by VA clinic staff that physicians, nurses, and other team members may share, download, and adopt in order to more effectively implement PCMH principles and improve local performance on VA metrics.
DESIGN: Multimethod evaluation using: (1) website usage analytics, (2) an online survey of the PACT community of practice's use of the Toolkit, and (3) key informant interviews.
PARTICIPANTS: Survey respondents were PACT team members and coaches (n = 544) at 136 VA facilities. Interview respondents were Toolkit users and non-users (n = 32).
MEASURES: For survey data, multivariable logistic models were used to predict Toolkit awareness and use. Interviews and open-text survey comments were coded using a "common themes" framework. The Consolidated Framework for Implementation Research (CFIR) guided data collection and analyses.
KEY RESULTS: The Toolkit was used by 6,745 staff in the first 19 months of availability. Among members of the target audience, 80% had heard of the Toolkit, and of those, 70% had visited the website. Tools had been implemented at 65% of facilities. Qualitative findings revealed a range of user perspectives from enthusiastic support to lack of sufficient time to browse the Toolkit.
CONCLUSIONS: An online Toolkit to support PCMH implementation was used at VA facilities nationwide. Other complex health care organizations may benefit from adopting similar online peer-to-peer resource libraries.
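The survey analysis relies on multivariable logistic models to predict Toolkit awareness and use. The sketch below shows the general shape of such a model on simulated data; the covariates, coding, and coefficients are assumptions for illustration and do not reproduce the evaluation's actual model.

```python
# A hedged sketch of a multivariable logistic model predicting Toolkit use
# from respondent characteristics. Covariates, coding, and the simulated data
# are illustrative assumptions, not the evaluation's actual variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 544  # survey sample size reported above

# Hypothetical covariates: coach role indicator, years at VA, log facility size.
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.normal(8.0, 4.0, n),
    rng.normal(5.0, 1.0, n),
])

# Hypothetical outcome: Toolkit use, loosely driven by the coach indicator.
log_odds = -0.5 + 1.2 * X[:, 0] + 0.02 * X[:, 1]
used_toolkit = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))

# Note: scikit-learn applies mild L2 regularization by default.
model = LogisticRegression().fit(X, used_toolkit)
print("coefficients:", model.coef_.round(2), "intercept:", model.intercept_.round(2))
```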


Journal of Health Politics Policy and Law | 2015

Oregon's Experiment in Health Care Delivery and Payment Reform: Coordinated Care Organizations Replacing Managed Care

Steven W. Howard; Stephanie Bernell; Jangho Yoon; Jeff Luck; Claire M. Ranit

To control Medicaid costs, improve quality, and drive community engagement, the Oregon Health Authority introduced a new system of coordinated care organizations (CCOs). While CCOs resemble traditional Medicaid managed care, they have differences that have been deliberately designed to improve care coordination, increase accountability, and incorporate greater community governance. Reforms include global budgets integrating medical, behavioral, and oral health care and public health functions; risk-adjusted payments rewarding outcomes and evidence-based practice; increased transparency; and greater community engagement. The CCO model faces several implementation challenges. If successful, it will provide improved health care delivery, better health outcomes, and overall savings.


BMC Health Services Research | 2014

Quality of care and health status in Ukraine

John W. Peabody; Jeff Luck; Lisa DeMaria; Rekha Menon

Background: We conducted a national-level assessment of the quality of clinical care practice in the Ukrainian healthcare system for two important causes of death and chronic disease conditions. We tested two hypotheses: (a) quality of care is predicted by physician and facility characteristics, and (b) health status is predicted by quality of care.
Methods: During 2009–2010 in Ukraine, we collected nationally representative data from clinical facilities, physicians, Clinical Performance and Value (CPV®) vignettes, patient surveys from the facilities, and from the general population. Each physician completed a written CPV® vignette (a simulated case scenario of a typical patient visit) for each of two clinical cases, congestive heart failure (CHF) and chronic obstructive pulmonary disease (COPD). CPV® vignette scores, calculated as a percentage of all care criteria completed by the physician, were used as the measure of clinical quality of care. Self-reported health measures were collected from exit and household survey respondents. Regression models were developed to test the two study hypotheses.
Results: 136 hospitals and 125 polyclinics were surveyed; 1,044 physicians were interviewed and completed CPV® vignettes. On average, physicians scored 47.4% on the vignettes. Younger, female physicians provided higher-quality care, as did those with recent continuing medical education (CME) in chronic disease or health behaviors. Higher quality was associated with better health outcomes.
Conclusions: As low- and middle-income countries around the world are challenged by non-communicable diseases, higher quality of care provided to these populations may result in better outcomes, such as improved health status and life expectancy, and overcome regional shortfalls. Policy efforts that serially evaluate quality may improve chronic disease care.
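The first hypothesis (CPV® scores predicted by physician characteristics) amounts to regressing vignette scores on covariates. The sketch below uses ordinary least squares on simulated data whose coefficients loosely echo the reported associations; the covariates and numbers are illustrative assumptions, not the study's specification.

```python
# A sketch of the first hypothesis test: ordinary least squares regression of
# CPV vignette scores on physician characteristics. The covariates and the
# simulated data-generating process are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1044  # physicians who completed vignettes, as reported above

age = rng.normal(45.0, 10.0, n)
female = rng.integers(0, 2, n).astype(float)
recent_cme = rng.integers(0, 2, n).astype(float)

# Hypothetical scores loosely echoing the reported associations (mean ~47%).
score = 47.0 - 0.2 * (age - 45.0) + 3.0 * female + 4.0 * recent_cme + rng.normal(0.0, 8.0, n)

X = np.column_stack([np.ones(n), age, female, recent_cme])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "age", "female", "recent_cme"], coef.round(2))))
```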

Collaboration


Dive into Jeff Luck's collaboration.

Top Co-Authors

Jangho Yoon (Oregon State University)
Martin L. Lee (University of California)
Peter Glassman (University of California)
Laura York (United States Department of Veterans Affairs)
Randall Gale (VA Palo Alto Healthcare System)
Lisa DeMaria (Rafael Advanced Defense Systems)