Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Susan Loveland is active.

Publication


Featured research published by Susan Loveland.


Journal of The American College of Surgeons | 2011

Validity of Selected Patient Safety Indicators: Opportunities and Concerns

Haytham M.A. Kaafarani; Ann M. Borzecki; Kamal M.F. Itani; Susan Loveland; Hillary J. Mull; Kathleen Hickson; Sally MacDonald; Marlena H. Shin; Amy K. Rosen

BACKGROUND The Agency for Healthcare Research and Quality (AHRQ) recently designed the Patient Safety Indicators (PSIs) to detect potential safety-related adverse events. The National Quality Forum has endorsed several of these ICD-9-CM-based indicators as quality-of-care measures. We examined the positive predictive value (PPV) of 3 surgical PSIs: postoperative pulmonary embolus and deep vein thrombosis (pPE/DVT), iatrogenic pneumothorax (iPTX), and accidental puncture and laceration (APL). STUDY DESIGN We applied the AHRQ PSI software (v.3.1a) to fiscal year 2003 to 2007 Veterans Health Administration (VA) administrative data to identify (flag) patients suspected of having a pPE/DVT, iPTX, or APL. Two trained nurse abstractors reviewed a sample of 336 flagged medical records (112 records per PSI) using a standardized instrument. Inter-rater reliability was assessed. RESULTS Of 2,343,088 admissions, 6,080 were flagged for pPE/DVT (0.26%), 1,402 for iPTX (0.06%), and 7,203 for APL (0.31%). For pPE/DVT, the PPV was 43% (95% CI, 34% to 53%); 21% of cases had inaccurate coding (eg, arterial not venous thrombosis); and 36% featured thromboembolism present on admission or preoperatively. For iPTX, the PPV was 73% (95% CI, 64% to 81%); 18% had inaccurate coding (eg, spontaneous pneumothorax), and 9% were pneumothoraces present on admission. For APL, the PPV was 85% (95% CI, 77% to 91%); 10% of cases had coding inaccuracies and 5% indicated injuries present on admission. However, 27% of true APLs were minor injuries requiring no surgical repair (eg, small serosal bowel tear). Inter-rater reliability was >90% for all 3 PSIs. CONCLUSIONS Until coding revisions are implemented, these PSIs, especially pPE/DVT, should be used primarily for screening and case-finding. Their utility for public reporting and pay-for-performance needs to be reassessed.
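
The PPVs above are simply the proportion of chart-confirmed events among flagged records, so their confidence intervals follow directly from the review counts. Below is a minimal sketch of that calculation. The true-positive counts are hypothetical, chosen only to be consistent with the PPVs reported in the abstract (the exact counts are not given there), and a Wilson score interval is used here rather than whatever exact method the authors applied.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and Wilson score 95% CI for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, center - half, center + half

# Hypothetical true-positive counts out of the 112 reviewed records per PSI,
# chosen only to be roughly consistent with the PPVs reported in the abstract.
true_positives = {"pPE/DVT": 48, "iPTX": 82, "APL": 95}
for psi, tp in true_positives.items():
    ppv, lo, hi = wilson_ci(tp, 112)
    print(f"{psi}: PPV = {ppv:.0%} (95% CI, {lo:.0%} to {hi:.0%})")
```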


Journal of General Internal Medicine | 2006

Health status among 28,000 women veterans. The VA Women's Health Program Evaluation Project.

Susan M. Frayne; Victoria A. Parker; Cindy L. Christiansen; Susan Loveland; Margaret R. Seaver; Lewis E. Kazis; Katherine M. Skinner

BACKGROUND: Male veterans receiving Veterans Health Administration (VA) care have worse health than men in the general population. Less is known about health status in women veteran VA patients, a rapidly growing population. OBJECTIVE: To characterize health status of women (vs men) veteran VA patients across age cohorts, and assess gender differences in the effect of social support upon health status. DESIGN AND PATIENTS: Data came from the national 1999 Large Health Survey of Veteran Enrollees (response rate 63%) and included 28,048 women and 651,811 men who used VA in the prior 3 years. MEASUREMENTS: Dimensions of health status from the validated Veterans Short Form-36 instrument; social support (married, living arrangement, have someone to take patient to the doctor). RESULTS: In each age stratum (18 to 44, 45 to 64, and ≥65 years), Physical Component Summary (PCS) and Mental Component Summary (MCS) scores were clinically comparable by gender, except that for those aged ≥65, mean MCS was better for women than men (49.3 vs 45.9, P<.001). Patient gender had a clinically insignificant effect upon PCS and MCS after adjusting for age, race/ethnicity, and education. Women had lower levels of social support than men; in patients aged <65, being married or living with someone benefited MCS more in men than in women. CONCLUSIONS: Women veteran VA patients have as heavy a burden of physical and mental illness as do men in VA, and are expected to require comparable intensity of health care services. Their ill health occurs in the context of poor social support, and varies by age.


Medical Care | 2003

Predicting costs of care using a pharmacy-based measure risk adjustment in a veteran population.

Anne Sales; Chuan Fen Liu; Kevin L. Sloan; Jesse D. Malkin; Paul A. Fishman; Amy K. Rosen; Susan Loveland; W. Paul Nichol; Norman T. Suzuki; Edward B. Perrin; Nancy D. Sharp; Jeffrey Todd-Stenberg

Background. Although most widely used risk adjustment systems use diagnosis data to classify patients, there is growing interest in risk adjustment based on computerized pharmacy data. The Veterans Health Administration (VHA) is an ideal environment in which to test the efficacy of a pharmacy-based approach. Objective. To examine the ability of RxRisk-V to predict concurrent and prospective costs of care in VHA and compare the performance of RxRisk-V to a simple age/gender model, the original RxRisk, and two leading diagnosis-based risk adjustment approaches: Adjusted Clinical Groups and Diagnostic Cost Groups/Hierarchical Condition Categories. Methods. The study population consisted of 161,202 users of VHA services in Washington, Oregon, Idaho, and Alaska during fiscal years (FY) 1996 to 1998. We examined both concurrent and predictive model fit for two sequential 12-month periods (FY 98 and FY 99) with the patient-year as the unit of analysis, using split-half validation. Results. Our results show that the Diagnostic Cost Group/Hierarchical Condition Categories model performs best (R2 = 0.45) among concurrent cost models, followed by ADG (0.31), RxRisk-V (0.20), and the age/sex model (0.01). However, prospective cost models other than age/sex showed comparable R2: Diagnostic Cost Group/Hierarchical Condition Categories R2 = 0.15, followed by ADG (0.12), RxRisk-V (0.12), and age/sex (0.01). Conclusions. RxRisk-V is a clinically relevant, open-source risk adjustment system that is easily tailored to fit specific questions, populations, or needs. Although it does not perform better than diagnosis-based measures available on the market, it may provide a reasonable alternative to proprietary systems where accurate computerized pharmacy data are available.
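
The modeling strategy here, split-half validation of concurrent versus prospective cost models, can be illustrated with a short sketch on synthetic data. Everything below (the design matrix of risk-category flags, the coefficients, and the noise levels) is invented for illustration and is not the RxRisk-V specification; it only shows the mechanics and why prospective R² typically comes out lower than concurrent R².

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a risk-adjustment design matrix: one binary column per
# risk category flag (e.g., pharmacy-based classes), plus an intercept column.
n, k = 10_000, 20
X = np.hstack([np.ones((n, 1)), rng.integers(0, 2, size=(n, k))])
true_beta = rng.gamma(2.0, 500.0, size=k + 1)
cost_concurrent = X @ true_beta + rng.normal(0, 4_000, size=n)   # same-year cost
cost_prospective = X @ true_beta + rng.normal(0, 8_000, size=n)  # next-year cost (noisier)

def split_half_r2(X: np.ndarray, y: np.ndarray) -> float:
    """Fit OLS on a random half, report R^2 on the held-out half."""
    idx = rng.permutation(len(y))
    dev, val = idx[: len(y) // 2], idx[len(y) // 2 :]
    beta, *_ = np.linalg.lstsq(X[dev], y[dev], rcond=None)
    resid = y[val] - X[val] @ beta
    return 1 - resid.var() / y[val].var()

print(f"concurrent  R^2: {split_half_r2(X, cost_concurrent):.2f}")
print(f"prospective R^2: {split_half_r2(X, cost_prospective):.2f}")
```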


Medical Care Research and Review | 2008

Using patient safety indicators to estimate the impact of potential adverse events on outcomes.

Peter E. Rivard; Stephen L. Luther; Cindy L. Christiansen; Shibei Zhao; Susan Loveland; Anne Elixhauser; Patrick S. Romano; Amy K. Rosen

The authors estimated the impact of potentially preventable patient safety events, identified by Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs), on patient outcomes: mortality, length of stay (LOS), and cost. The PSIs were applied to all acute inpatient hospitalizations at Veterans Health Administration (VA) facilities in fiscal 2001. Two methods (regression analysis and multivariable case matching) were used independently to control for patient and facility characteristics while predicting the effect of the PSI on each outcome. The authors found statistically significant (p < .0001) excess mortality, LOS, and cost in all groups with PSIs. The magnitude of the excess varied considerably across the PSIs. These VA findings are similar to those from a previously published study of nonfederal hospitals, despite differences between VA and non-VA systems. This study contributes to the literature measuring outcomes of medical errors and provides evidence that AHRQ PSIs may be useful indicators for comparison across delivery systems.
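
One of the two methods, multivariable case matching, amounts to comparing each PSI-flagged admission against unflagged admissions that share the same matching stratum. A toy sketch follows; the admissions and the single matching stratum are invented for illustration (the study matched on many patient and facility characteristics), and only the mechanic of estimating excess LOS is shown.

```python
from collections import defaultdict
from statistics import mean

# Toy admissions: (psi_flagged, stratum, length_of_stay). The stratum label is a
# hypothetical stand-in for whatever characteristics are matched on.
admissions = [
    (True,  "age65+_dx3", 14), (False, "age65+_dx3", 6),
    (True,  "age<65_dx1",  9), (False, "age<65_dx1", 4),
    (False, "age65+_dx3",  7), (False, "age<65_dx1", 3),
]

# Group unflagged admissions by matching stratum.
controls = defaultdict(list)
for flagged, stratum, los in admissions:
    if not flagged:
        controls[stratum].append(los)

# For each flagged case, compare its LOS to the mean LOS of its matched controls.
excess = [los - mean(controls[stratum])
          for flagged, stratum, los in admissions
          if flagged and controls[stratum]]
print(f"mean excess LOS among flagged cases: {mean(excess):.1f} days")
```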


Medical Care | 2009

Effects of Resident Duty Hour Reform on Surgical and Procedural Patient Safety Indicators Among Hospitalized Veterans Health Administration and Medicare Patients

Amy K. Rosen; Susan Loveland; Patrick S. Romano; Kamal M.F. Itani; Jeffrey H. Silber; Orit Even-Shoshan; Michael J. Halenar; Yun Teng; Jingsan Zhu; Kevin G. Volpp

Objective: Improving patient safety was a strong motivation behind duty hour regulations implemented by the Accreditation Council for Graduate Medical Education on July 1, 2003. We investigated whether rates of patient safety indicators (PSIs) changed after these reforms. Research Design: Observational study of patients admitted to Veterans Health Administration (VA) (N = 826,047) and Medicare (N = 13,367,273) acute-care hospitals from July 1, 2000 to June 30, 2005. We examined changes in patient safety events in more versus less teaching-intensive hospitals before (2000–2003) and after (2003–2005) duty hour reform, using conditional logistic regression, adjusting for patient age, gender, comorbidities, secular trends, baseline severity, and hospital site. Measures: Ten PSIs were aggregated into 3 composite measures based on factor analyses: “Continuity of Care,” “Technical Care,” and “Other” composites. Results: Continuity of Care composite rates showed no significant changes postreform in hospitals of different teaching intensity in either VA or Medicare. In the VA, there were no significant changes postreform for the Technical Care composite. In Medicare, the odds of a Technical Care PSI event in more versus less teaching-intensive hospitals in postreform year 1 were 1.12 (95% CI, 1.01–1.25); there were no significant relative changes in postreform year 2. Other composite rates increased in VA in postreform year 2 in more versus less teaching-intensive hospitals (odds ratio, 1.63; 95% CI, 1.10–2.41), but not in Medicare in either postreform year. Conclusions: Duty hour reform had no systematic impact on PSI rates. In the few cases where there were statistically significant increases in the relative odds of developing a PSI, the magnitude of the absolute increases was too small to be clinically meaningful.
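
The core comparison is a difference-in-differences on the odds scale: the post-reform change in PSI odds in more versus less teaching-intensive hospitals. Here is a simplified sketch on synthetic data, using ordinary logistic regression with a teaching-by-post interaction in place of the paper's conditional logistic regression stratified by hospital and its full covariate set; all variable names and effect sizes are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 50_000

# Synthetic admissions: a teaching-intensity flag, a pre/post-reform flag, and a
# rare PSI event whose log-odds include a small teaching-by-post interaction.
df = pd.DataFrame({
    "teaching": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
log_odds = -4.0 + 0.2 * df.teaching + 0.05 * df.post + 0.10 * df.teaching * df.post
df["psi_event"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# The teaching:post coefficient is the difference-in-differences term: the
# relative post-reform change in PSI odds in more vs less teaching-intensive
# hospitals (the study additionally stratified by hospital site and adjusted
# for patient characteristics, severity, and secular trends).
fit = smf.logit("psi_event ~ teaching * post", data=df).fit(disp=False)
print(f"interaction odds ratio: {np.exp(fit.params['teaching:post']):.2f}")
```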


Medical Care | 2001

Evaluating diagnosis-based case-mix measures: how well do they apply to the VA population?

Amy K. Rosen; Susan Loveland; Jennifer J. Anderson; James A. Rothendler; Cheryl S. Hankin; Carter C. Rakovski; Mark A. Moskowitz; Dan R. Berlowitz

Background. Diagnosis-based case-mix measures are increasingly used for provider profiling, resource allocation, and capitation rate setting. Measures developed in one setting may not adequately capture the disease burden in other settings. Objectives. To examine the feasibility of adapting two such measures, Adjusted Clinical Groups (ACGs) and Diagnostic Cost Groups (DCGs), to the Department of Veterans Affairs (VA) population. Research Design. A 60% random sample of veterans who used health care services during FY 1997 was obtained from VA inpatient and outpatient administrative databases. A split-sample technique was used to obtain a 40% sample (n = 1,046,803) for development and a 20% sample (n = 524,461) for validation. Methods. Concurrent ACG and DCG risk adjustment models, using 1997 diagnoses and demographics to predict FY 1997 utilization (ambulatory provider encounters, and service days, the sum of a patient’s inpatient and outpatient visit days), were fitted and cross-validated. Results. Patients were classified into groupings that indicated a population with multiple psychiatric and medical diseases. Model R-squares explained between 6% and 32% of the variation in service utilization. Although reparameterized models did better in predicting utilization than models with external weights, none of the models was adequate in characterizing the entire population. For predicting service days, DCGs were superior to ACGs in most categories, whereas ACGs did better at discriminating among veterans who had the lowest utilization. Conclusions. Although “off-the-shelf” case-mix measures perform moderately well when applied to another setting, modifications may be required to accurately characterize a population’s disease burden with respect to the resource needs of all patients.


Medical Care | 2012

Validating the patient safety indicators in the Veterans Health Administration: do they accurately identify true safety events?

Amy K. Rosen; Kamal M.F. Itani; Marisa Cevasco; Haytham M.A. Kaafarani; Amresh Hanchate; Marlena H. Shin; Susan Loveland; Qi Chen; Ann M. Borzecki

Background: The Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs) use administrative data to detect potentially preventable in-hospital adverse events. However, few studies have determined how accurately the PSIs identify true safety events. Objectives: We examined the criterion validity, specifically the positive predictive value (PPV), of 12 selected PSIs using clinical data abstracted from the Veterans Health Administration (VA) electronic medical record as the gold standard. Methods: We identified PSI-flagged cases from 28 representative hospitals by applying the AHRQ PSI software (v.3.1a) to VA fiscal year 2003 to 2007 administrative data. Trained nurse-abstractors used standardized abstraction tools to review a random sample of flagged medical records (112 records per PSI) for the presence of true adverse events. Interrater reliability was assessed. We evaluated PPVs and associated 95% confidence intervals of each PSI and examined false positive (FP) cases to determine why they were incorrectly flagged and gain insight into how each PSI might be improved. Results: PPVs ranged from 28% (95% CI, 15%-43%) for Postoperative Hip Fracture to 87% (95% CI, 79%-92%) for Postoperative Wound Dehiscence. Common reasons for FPs included conditions that were present on admission (POA), coding errors, and lack of coding specificity. PSIs with the lowest PPVs had the highest proportion of FPs owing to POA. Conclusions: Overall, PPVs were moderate for most of the PSIs. Implementing POA codes and using more specific ICD-9-CM codes would improve their validity. Our results suggest that additional coding improvements are needed before the PSIs evaluated herein are used for hospital reporting or pay for performance.


Medical Care | 2010

Comparison of In-Hospital Versus 30-Day Mortality Assessments for Selected Medical Conditions

Ann M. Borzecki; Cindy L. Christiansen; Priscilla Chew; Susan Loveland; Amy K. Rosen

Background: In-hospital mortality measures such as the Agency for Healthcare Research and Quality (AHRQ) Inpatient Quality Indicators (IQIs) are easily derived using hospital discharge abstracts and publicly available software. However, hospital assessments based on a 30-day postadmission interval might be more accurate given potential differences in facility discharge practices. Objectives: To compare in-hospital and 30-day mortality rates for 6 medical conditions using the AHRQ IQI software. Methods: We used IQI software (v3.1) and 2004–2007 Veterans Health Administration (VA) discharge and Vital Status files to derive 4-year facility-level in-hospital and 30-day observed mortality rates and observed/expected ratios (O/Es) for admissions with a principal diagnosis of acute myocardial infarction, congestive heart failure, stroke, gastrointestinal hemorrhage, hip fracture, and pneumonia. We standardized software-calculated O/Es to the VA population and compared O/Es and outlier status across sites using correlation, observed agreement, and kappas. Results: Of 119 facilities, in-hospital versus 30-day mortality O/E correlations were generally high (median: r = 0.78; range: 0.31–0.86). Examining outlier status, observed agreement was high (median: 84.7%, 80.7%–89.1%). Kappas showed at least moderate agreement (k > 0.40) for all indicators except stroke and hip fracture (k ≤ 0.22). Across indicators, few sites changed from high outlier to nonoutlier or low outlier status, or vice versa (median: 10, range: 7–13). Conclusions: The AHRQ IQI software can be easily adapted to generate 30-day mortality rates. Although 30-day mortality has better face validity as a hospital performance measure than in-hospital mortality, site assessments were similar despite the definition used. Thus, the measure selected for internal benchmarking should primarily depend on the healthcare system's data linkage capabilities.
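
Agreement between in-hospital and 30-day outlier classifications is summarized here with observed agreement and Cohen's kappa. A small self-contained sketch on made-up outlier labels for 119 facilities follows; the counts are illustrative only and are not the study's data.

```python
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa for two equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical facility outlier classifications ("low", "non", "high") under
# in-hospital vs 30-day mortality O/E ratios; 119 facilities, illustrative only.
inhosp = ["non"] * 100 + ["high"] * 10 + ["low"] * 9
day30  = ["non"] * 95 + ["high"] * 5 + ["non"] * 4 + ["high"] * 6 + ["low"] * 9

agreement = sum(x == y for x, y in zip(inhosp, day30)) / len(inhosp)
print(f"observed agreement: {agreement:.1%}")
print(f"kappa: {cohens_kappa(inhosp, day30):.2f}")
```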


Medical Care | 2006

Tracking rates of patient safety indicators over time: Lessons from the Veterans Administration

Amy K. Rosen; Shibei Zhao; Peter E. Rivard; Susan Loveland; Maria E. Montez-Rath; Anne Elixhauser; Patrick S. Romano

Background: The Patient Safety Indicators (PSIs), developed by the Agency for Healthcare Research and Quality, are useful screening tools for highlighting areas in which quality should be further investigated and providing useful benchmarks for tracking progress. Objectives: Our objectives were to: 1) provide a descriptive analysis of the incidence of PSI events from 2001 to 2004 in the Veterans Health Administration (VA); 2) examine trends in national PSI rates at the hospital discharge level over time; and 3) assess whether hospital characteristics (eg, teaching status, number of beds, and degree of quality improvement implementation) and baseline safety-related hospital performance predict future hospital safety-related performance. Methods: We examined changes in risk-adjusted PSI rates at the discharge level, calculated the correlation of hospitals’ risk-adjusted PSI rates in 2001 with those in subsequent years, and developed generalized linear models to examine predictors of hospitals’ 2004 risk-adjusted PSI rates. Results: Risk-adjusted rates of 2 of the 15 PSIs demonstrated significant trends over time. Rates of iatrogenic pneumothorax increased over time, whereas rates of failure to rescue decreased. Most PSIs demonstrated consistent rates over time. After accounting for patient and hospital characteristics, hospitals’ baseline risk-adjusted PSI rates were the most important predictors of their 2004 risk-adjusted rates for 8 PSIs. Conclusions: The PSIs are useful tools for tracking and monitoring patient safety events in the VA. Future research should investigate whether trends reflect better or worse care or increased attention to documenting patient safety events.


Medical Care | 2013

Examining the impact of the AHRQ Patient Safety Indicators (PSIs) on the Veterans Health Administration: the case of readmissions.

Amy K. Rosen; Susan Loveland; Marlena H. Shin; Amresh Hanchate; Qi Chen; Haytham M.A. Kaafarani; Ann M. Borzecki

Background: By focusing primarily on outcomes in the inpatient setting, one may overlook serious adverse events that may occur after discharge (eg, readmissions, mortality) as well as opportunities for improving outpatient care. Objective: Our overall objective was to examine whether experiencing an Agency for Healthcare Research and Quality Patient Safety Indicator (PSI) event in an index medical or surgical hospitalization increased the likelihood of readmission. Methods: We applied the Agency for Healthcare Research and Quality PSI software (version 4.1.a) to 2003–2007 Veterans Health Administration inpatient discharge data to generate risk-adjusted PSI rates for 9 individual PSIs and 4 aggregate PSI measures: any PSI event and composite PSIs reflecting “Technical Care,” “Continuity of Care,” and both surgical and medical care (Mixed). We estimated separate logistic regression models to predict the likelihood of 30-day readmission for individual PSIs, any PSI event, and the 3 composites, adjusting for age, sex, comorbidities, and the occurrence of other PSI(s). Results: The odds of readmission were 23% higher for index hospitalizations with any PSI event compared with those with no event [confidence interval (CI), 1.19–1.26], and ranged from 22% higher for Iatrogenic Pneumothorax (CI, 1.03–1.45) to 61% higher for Postoperative Wound Dehiscence (CI, 1.27–2.05). For the composites, the odds of readmission ranged from 15% higher for the Technical Care composite (CI, 1.08–1.22) to 37% higher for the Continuity of Care composite (CI, 1.26–1.50). Conclusions: Our results suggest that interventions that focus on minimizing preventable inpatient safety events as well as improving coordination of care between and across settings may decrease the likelihood of readmission.
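
The readmission models are logistic regressions of a 30-day readmission indicator on a PSI flag plus patient covariates, with the adjusted odds ratio read off the PSI coefficient. A sketch on synthetic data follows; the variable names, covariate set, and effect sizes are illustrative assumptions, not the study's actual risk-adjustment specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 100_000

# Synthetic index hospitalizations; names and distributions are illustrative.
df = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "male": rng.integers(0, 2, n),
    "comorbidity_count": rng.poisson(2, n),
    "any_psi": (rng.random(n) < 0.02).astype(int),
})
log_odds = (-2.5 + 0.01 * (df.age - 65) + 0.05 * df.male
            + 0.15 * df.comorbidity_count + np.log(1.23) * df.any_psi)
df["readmit_30d"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Adjusted odds ratio for 30-day readmission given any PSI event in the index stay.
model = smf.logit("readmit_30d ~ any_psi + age + male + comorbidity_count",
                  data=df).fit(disp=False)
or_psi = np.exp(model.params["any_psi"])
ci = np.exp(model.conf_int().loc["any_psi"])
print(f"OR = {or_psi:.2f} (95% CI, {ci[0]:.2f}-{ci[1]:.2f})")
```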

Collaboration


Dive into Susan Loveland's collaborations.

Top Co-Authors

Anne Elixhauser

Agency for Healthcare Research and Quality
