Ashley Snyder
University of Michigan
Publications
Featured research published by Ashley Snyder.
American Journal of Respiratory and Critical Care Medicine | 2017
Matthew M. Churpek; Ashley Snyder; Xuan Han; Sarah Sokol; Natasha Pettit; Michael D. Howell; Dana P. Edelson
Rationale: The 2016 definitions of sepsis included the quick Sepsis-related Organ Failure Assessment (qSOFA) score to identify high-risk patients outside the intensive care unit (ICU). Objectives: We sought to compare qSOFA with other commonly used early warning scores. Methods: All admitted patients who first met the criteria for suspicion of infection in the emergency department (ED) or hospital wards from November 2008 until January 2016 were included. The qSOFA score, Systemic Inflammatory Response Syndrome (SIRS) criteria, Modified Early Warning Score (MEWS), and National Early Warning Score (NEWS) were compared for predicting death and ICU transfer. Measurements and Main Results: Of the 30,677 included patients, 1,649 (5.4%) died and 7,385 (24%) experienced the composite outcome (death or ICU transfer). Sixty percent (n = 18,523) first met the suspicion criteria in the ED. Discrimination for in-hospital mortality was highest for NEWS (area under the curve [AUC], 0.77; 95% confidence interval [CI], 0.76-0.79), followed by MEWS (AUC, 0.73; 95% CI, 0.71-0.74), qSOFA (AUC, 0.69; 95% CI, 0.67-0.70), and SIRS (AUC, 0.65; 95% CI, 0.63-0.66) (P < 0.01 for all pairwise comparisons). Using each patient's highest non-ICU score, SIRS ≥2 had a sensitivity of 91% and specificity of 13% for the composite outcome, compared with 54% and 67% for qSOFA ≥2, 59% and 70% for MEWS ≥5, and 67% and 66% for NEWS ≥8, respectively. Most patients met ≥2 SIRS criteria 17 hours before the combined outcome, compared with 5 hours for ≥2 qSOFA criteria and 17 hours for ≥1 qSOFA criterion. Conclusions: Commonly used early warning scores are more accurate than the qSOFA score for predicting death and ICU transfer in non-ICU patients. These results suggest that the qSOFA score should not replace general early warning scores when risk-stratifying patients with suspected infection.
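To make the comparison above concrete, here is a minimal sketch of how the qSOFA and SIRS scores can be computed from bedside values. The thresholds follow the published definitions; the function and variable names are illustrative, not from the study's code (the full SIRS respiratory and white-count criteria also allow PaCO2 < 32 mm Hg and >10% bands, omitted here for brevity):

```python
def qsofa_score(resp_rate: float, systolic_bp: float, gcs: int) -> int:
    """quick SOFA: one point each for RR >= 22/min, SBP <= 100 mm Hg, GCS < 15."""
    return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

def sirs_score(temp_c: float, heart_rate: float, resp_rate: float, wbc: float) -> int:
    """SIRS: one point each for temperature >38 or <36 degrees C, HR >90/min,
    RR >20/min, and WBC >12 or <4 (in 10^3 cells/mm^3)."""
    return (int(temp_c > 38.0 or temp_c < 36.0)
            + int(heart_rate > 90)
            + int(resp_rate > 20)
            + int(wbc > 12.0 or wbc < 4.0))

# The cut-offs used in the abstract: qSOFA >= 2 and SIRS >= 2
print(qsofa_score(24, 95, 15) >= 2)           # True: tachypnea plus hypotension
print(sirs_score(38.5, 110, 22, 13.0) >= 2)   # True: all four criteria met
```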
BMJ Quality & Safety | 2018
Lakshmi Swaminathan; Scott A. Flanders; Mary A.M. Rogers; Yvonne Calleja; Ashley Snyder; Rama Thyagarajan; Priscila Bercea; Vineet Chopra
Background Although important in clinical care, reports of inappropriate peripherally inserted central catheter (PICC) use are growing. Objective To test whether implementation of the Michigan Appropriateness Guide for Intravenous Catheters (MAGIC) can improve PICC use and patient outcomes. Design Quasi-experimental, interrupted time series at one study site with nine contemporaneous external controls. Setting Ten hospitals participating in a state-wide quality collaborative from 1 August 2014 to 31 July 2016. Patients 963 hospitalised patients who received a PICC at the study site vs 6613 patients at nine control sites. Intervention A multimodal intervention (tool, training, electronic changes, education) derived from MAGIC. Measurements Appropriateness of PICC use and rates of PICC-associated complications. Segmented Poisson regression was used for analyses. Results Absolute rates of inappropriate PICC use decreased substantially at the study site versus controls (91.3% to 65.3% (−26.0%) vs 72.2% to 69.6% (−2.6%); P<0.001). After adjusting for underlying trends and patient characteristics, however, a marginally significant 13.8% decrease in inappropriate PICC use occurred at the study site (incidence rate ratio 0.86, 95% CI 0.74 to 0.99; P=0.048); no change was observed at control sites. While the incidence of all PICC complications decreased to a greater extent at the study site, the absolute difference between controls and intervention was small (33.9% to 26.7% (−7.2%) vs 22.4% to 20.8% (−1.6%); P=0.036). Limitations The non-randomised design limits inference; the most effective component of the multimodal intervention is unknown; effects following implementation were modest. Conclusions In a multihospital quality improvement project, implementation of MAGIC improved PICC appropriateness and reduced complications to a modest extent. Given the size and resources required for this study, future work should consider the cost-to-benefit ratio of similar approaches.
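The analysis above relies on segmented Poisson regression for the interrupted time series. Here is a minimal sketch of that model with statsmodels, assuming synthetic monthly counts of inappropriate PICCs with total PICCs placed as the exposure; the variable names and data are invented for the example, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative monthly data: 12 pre-intervention and 12 post-intervention months
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "month": np.arange(24),                      # underlying time trend
    "post": (np.arange(24) >= 12).astype(int),   # level change at the intervention
    "picc_total": rng.integers(30, 60, 24),      # PICCs placed each month (exposure)
})
df["months_since"] = np.where(df["post"] == 1, df["month"] - 12, 0)  # slope change
df["inappropriate"] = rng.binomial(df["picc_total"].to_numpy(),
                                   np.where(df["post"] == 1, 0.6, 0.8))

X = sm.add_constant(df[["month", "post", "months_since"]])
model = sm.GLM(df["inappropriate"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["picc_total"]))  # models the rate, not the raw count
fit = model.fit()
print(np.exp(fit.params["post"]))  # incidence rate ratio for the level change
```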
Critical Care Medicine | 2017
Matthew M. Churpek; Ashley Snyder; Sarah Sokol; Natasha Pettit; Dana P. Edelson
Objective: Studies in sepsis are limited by heterogeneity regarding what constitutes suspicion of infection. We sought to compare potential suspicion criteria using antibiotic and culture order combinations in terms of patient characteristics and outcomes. We further sought to determine the impact of differing criteria on the accuracy of sepsis screening tools and early warning scores. Design: Observational cohort study. Setting: Academic center from November 2008 to January 2016. Patients: Hospitalized patients outside the ICU. Interventions: None. Measurements and Main Results: Six criteria were investigated: 1) any culture, 2) blood culture, 3) any culture plus IV antibiotics, 4) blood culture plus IV antibiotics, 5) any culture plus IV antibiotics for at least 4 of 7 days, and 6) blood culture plus IV antibiotics for at least 4 of 7 days. The accuracy of the quick Sepsis-related Organ Failure Assessment (qSOFA) score, Sepsis-related Organ Failure Assessment (SOFA) score, systemic inflammatory response syndrome (SIRS) criteria, National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), and electronic Cardiac Arrest Risk Triage (eCART) score was calculated for predicting ICU transfer or death within 48 hours of meeting suspicion criteria. A total of 53,849 patients met at least one infection criterion. Mortality increased from 3% for group 1 to 9% for group 6, and the percentage meeting Angus sepsis criteria increased from 20% to 40%. Across all criteria, score discrimination was lowest for SIRS (median area under the receiver operating characteristic curve [AUC], 0.60) and SOFA (median AUC, 0.62), intermediate for qSOFA (median AUC, 0.65) and MEWS (median AUC, 0.67), and highest for NEWS (median AUC, 0.71) and eCART (median AUC, 0.73). Conclusions: The choice of criteria to define a potentially infected population significantly impacts prevalence and mortality but has little impact on accuracy. SIRS was the least predictive and eCART the most predictive regardless of how infection was defined.
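A minimal sketch of how the six suspicion-of-infection criteria enumerated above might be encoded against order-level data. The column names and the interpretation of "at least 4 of 7 days" (four distinct days within the week starting at the first IV antibiotic order) are assumptions for illustration, not the study's definitions:

```python
import pandas as pd

def meets_criteria(orders: pd.DataFrame) -> dict:
    """orders: one row per order for a single patient, with columns
    'type' ('culture', 'blood_culture', 'iv_abx') and 'day' (hospital day)."""
    any_culture = orders["type"].isin(["culture", "blood_culture"]).any()
    blood_culture = (orders["type"] == "blood_culture").any()
    iv_abx = (orders["type"] == "iv_abx").any()
    # IV antibiotics on at least 4 distinct days in the 7 days from the first order
    abx_4of7 = False
    if iv_abx:
        abx_days = orders.loc[orders["type"] == "iv_abx", "day"]
        start = abx_days.min()
        abx_4of7 = abx_days[(abx_days >= start) & (abx_days < start + 7)].nunique() >= 4
    return {
        1: any_culture,
        2: blood_culture,
        3: any_culture and iv_abx,
        4: blood_culture and iv_abx,
        5: any_culture and abx_4of7,
        6: blood_culture and abx_4of7,
    }

demo = pd.DataFrame({"type": ["blood_culture", "iv_abx", "iv_abx", "iv_abx", "iv_abx"],
                     "day": [1, 1, 2, 3, 4]})
print(meets_criteria(demo))  # all six criteria are True for this patient
```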
Critical Care Medicine | 2017
Matthew M. Churpek; Dana P. Edelson; Ji Yeon Lee; Kyle Carey; Ashley Snyder
Objectives: Decreased staffing at nighttime is associated with worse outcomes in hospitalized patients. Rapid response teams were developed to decrease preventable harm by providing additional critical care resources to patients with clinical deterioration. We sought to determine whether rapid response team utilization decreases at night and how this is associated with patient outcomes. Design: Retrospective analysis of a prospectively collected registry database. Setting: National registry database of inpatient rapid response team calls. Patients: Index rapid response team calls occurring on the general wards in the American Heart Association Get With The Guidelines-Medical Emergency Team database between 2005 and 2015 were analyzed. Interventions: None. Measurements and Main Results: The primary outcome was in-hospital mortality. Patient and event characteristics between the hours with the highest and lowest mortality were compared, and multivariable models adjusting for patient characteristics were fit. A total of 282,710 rapid response team calls from 274 hospitals were included. The lowest frequency of calls occurred in the consecutive 1 AM to 6:59 AM period, with 266 of 274 (97%) hospitals having lower than expected call volumes during those hours. Mortality was highest during the 7 AM hour and lowest during the noon hour (18.8% vs 13.8%; adjusted odds ratio, 1.41 [1.31–1.52]; p < 0.001). Compared with calls at the noon hour, those during the 7 AM hour had more deranged vital signs, were more likely to have a respiratory trigger, and were more likely to have more than two simultaneous triggers. Conclusions: Rapid response team activation is less frequent during the early morning and is followed by a spike in mortality in the 7 AM hour. These findings suggest that failure to rescue deteriorating patients is more common overnight. Strategies aimed at improving rapid response team utilization during these vulnerable hours may improve patient outcomes.
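A minimal sketch of the hour-of-day analysis described above: binning calls by hour, tabulating unadjusted mortality, and comparing the 7 AM hour against the noon hour with a logistic model. The data are synthetic and the covariates are illustrative, not the registry's actual adjustment set:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative synthetic registry: one row per index rapid response team call
rng = np.random.default_rng(1)
n = 5_000
calls = pd.DataFrame({
    "hour": rng.integers(0, 24, n),   # hour of day of the call
    "age": rng.normal(65, 15, n),
})
# Build in a 7 AM mortality excess, mimicking the pattern the abstract reports
calls["died"] = rng.binomial(1, np.where(calls["hour"] == 7, 0.19, 0.14))

# Unadjusted in-hospital mortality by hour (abstract: peak at 7 AM, trough at noon)
print(calls.groupby("hour")["died"].mean().round(3))

# Adjusted 7 AM vs noon comparison via logistic regression
subset = calls[calls["hour"].isin([7, 12])].copy()
subset["is_7am"] = (subset["hour"] == 7).astype(int)
fit = smf.logit("died ~ is_7am + age", data=subset).fit(disp=0)
print(np.exp(fit.params["is_7am"]))  # odds ratio for the 7 AM hour
```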
Journal of Hospital Medicine | 2018
Matthew M. Churpek; Ashley Snyder; Nicole M. Twu; Dana P. Edelson
Respiratory rate is the most accurate vital sign for predicting adverse outcomes in ward patients.1,2 Though other vital signs are typically collected by using machines, respiratory rate is collected manually by caregivers counting the breathing rate. However, studies have shown significant discrepancies between a patient's respiratory rate documented in the medical record, which is often 18 or 20, and the value measured by counting the rate over a full minute.3 Thus, despite the high accuracy of respiratory rate, it is possible that these values do not represent true patient physiology. It is unknown whether a valid automated measurement of respiratory rate would be more predictive than a manually collected respiratory rate for identifying patients who develop deterioration. The aim of this study was to compare the distribution and predictive accuracy of manually and automatically recorded respiratory rates.
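A minimal sketch of the study's core comparison: the discrimination (AUC) of manually documented versus automatically measured respiratory rates for a binary deterioration outcome. The data are synthetic, with manual values clustered on 18 and 20 to mimic the documented digit preference:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 10_000
true_rr = rng.normal(18, 4, n)                              # latent "true" rate
outcome = rng.binomial(1, 1 / (1 + np.exp(-(true_rr - 18) / 3)))  # deterioration

automated = true_rr + rng.normal(0, 1, n)                   # device value with noise
# Manual charting often records 18 or 20 regardless of the true rate
manual = np.where(rng.random(n) < 0.6,
                  rng.choice([18, 20], n), np.round(true_rr))

print("AUC automated:", round(roc_auc_score(outcome, automated), 3))
print("AUC manual:   ", round(roc_auc_score(outcome, manual), 3))
```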
Journal of Hospital Medicine | 2017
Patrick G. Lyons; Ashley Snyder; Sarah Sokol; Dana P. Edelson; Babak Mokhlesi; Matthew M. Churpek
BACKGROUND: Opioids and benzodiazepines are frequently used in hospitals, but little is known about outcomes among ward patients receiving these medications. OBJECTIVE: To determine the association between opioid and benzodiazepine administration and clinical deterioration. DESIGN: Observational cohort study. SETTING: 500-bed academic urban tertiary-care hospital. PATIENTS: All adults hospitalized on the wards from November 2008 to January 2016 were included. Patients with "comfort care" status, tracheostomies, or sickle-cell disease, and patients at risk for alcohol withdrawal or seizures, were excluded. MEASUREMENTS: The primary outcome was the composite of intensive care unit transfer or ward cardiac arrest. Discrete-time survival analysis was used to calculate the odds of this outcome during exposed time periods compared to unexposed time periods with respect to the medications of interest, with adjustment for patient demographics, comorbidities, severity of illness, and pain score. RESULTS: In total, 120,518 admissions from 67,097 patients were included, with 67% of admissions involving opioids and 21% involving benzodiazepines. After adjustment, each 15 mg oral morphine equivalent was associated with a 1.9% increase in the odds of the primary outcome within 6 hours (odds ratio [OR], 1.019; 95% confidence interval [CI], 1.013-1.026; P < 0.001), and each 1 mg oral lorazepam equivalent was associated with a 29% increase in the odds of the composite outcome within 6 hours (OR, 1.29; CI, 1.16-1.45; P < 0.001). CONCLUSION: Among ward patients, opioids were associated with increased risk for clinical deterioration in the 6 hours after administration. Benzodiazepines were associated with even higher risk. These results have implications for ward-monitoring strategies.
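The discrete-time survival analysis above can be sketched as a logistic regression over fixed patient time intervals, with dose in the preceding window as a time-varying covariate. Everything below (column names, synthetic data, the reduced covariate set) is illustrative, not the study's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient per 6-hour interval; dose is expressed in units of
# 15 mg oral morphine equivalents; 'event' marks ICU transfer or ward arrest.
rng = np.random.default_rng(3)
n = 50_000
intervals = pd.DataFrame({
    "morphine_equiv_15mg": rng.poisson(1.0, n),
    "pain_score": rng.integers(0, 11, n),
    "age": rng.normal(60, 16, n),
})
# Simulate a small per-dose effect: ln(1.019) is roughly 0.02 per 15 mg unit
logit_p = -5 + 0.02 * intervals["morphine_equiv_15mg"] + 0.05 * intervals["pain_score"]
intervals["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("event ~ morphine_equiv_15mg + pain_score + age",
                data=intervals).fit(disp=0)
# Odds ratio per 15 mg oral morphine equivalent (abstract reports ~1.019)
print(np.exp(fit.params["morphine_equiv_15mg"]))
```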
Chest | 2018
Xuan Han; Dana P. Edelson; Ashley Snyder; Natasha Pettit; Sarah Sokol; Carmen Barc; Michael D. Howell; Matthew M. Churpek
Background Sepsis remains a significant cause of morbidity and mortality in the United States, leading to the implementation of the Severe Sepsis and Septic Shock Early Management Bundle (SEP‐1). SEP‐1 identifies patients with “severe sepsis” via clinical and laboratory criteria and mandates interventions, including lactate draws and antibiotics, within a specific time window. We sought to characterize the patients affected and to study the implications of SEP‐1 on patient care and outcomes. Methods All adults admitted to the University of Chicago from November 2008 to January 2016 were eligible. Modified SEP‐1 criteria were used to identify appropriate patients. Time to lactate draw and antibiotic and IV fluid administration were calculated. In‐hospital mortality was examined. Results Lactates were measured within the mandated window 32% of the time on the ward (n = 505) compared with 55% (n = 818) in the ICU and 79% (n = 2,144) in the ED. Patients with delayed lactate measurements demonstrated the highest in‐hospital mortality at 29%, with increased time to antibiotic administration (median time, 3.9 vs 2.0 h). Patients with initial lactates > 2.0 mmol/L demonstrated an increase in the odds of death with hourly delay in lactate measurement (OR, 1.02; 95% CI, 1.0003‐1.05; P = .04). Conclusions Delays in lactate measurement are associated with delayed antibiotics and increased mortality in patients with initial intermediate or elevated lactate levels. Systematic early lactate measurement for all patients with sepsis will lead to a significant increase in lactate draws that may prompt more rapid physician intervention for patients with abnormal initial values.
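A minimal sketch of the timing computations the SEP-1 analysis describes: time from meeting sepsis criteria to lactate draw and to antibiotics, plus a flag for draws inside the mandated window. The 3-hour window, the timestamps, and the column names are assumptions for illustration:

```python
import pandas as pd

# One row per sepsis episode, with event timestamps (illustrative data)
episodes = pd.DataFrame({
    "sepsis_time":  pd.to_datetime(["2016-01-01 08:00", "2016-01-01 09:30"]),
    "lactate_time": pd.to_datetime(["2016-01-01 09:10", "2016-01-01 14:00"]),
    "abx_time":     pd.to_datetime(["2016-01-01 10:00", "2016-01-01 15:30"]),
})

def hours(start: str, end: str) -> pd.Series:
    """Elapsed hours between two timestamp columns."""
    return (episodes[end] - episodes[start]).dt.total_seconds() / 3600

episodes["lactate_delay_h"] = hours("sepsis_time", "lactate_time")
episodes["abx_delay_h"] = hours("sepsis_time", "abx_time")
# Flag draws inside the assumed 3-hour bundle window
episodes["lactate_in_window"] = episodes["lactate_delay_h"] <= 3

print(episodes[["lactate_delay_h", "abx_delay_h", "lactate_in_window"]])
```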
BMJ Quality & Safety | 2018
Heather Gilmartin; Sanjay Saint; Mary A.M. Rogers; Suzanne Winter; Ashley Snyder; Martha Quinn; Vineet Chopra
Background To evaluate the effectiveness of a brief mindfulness intervention on hand hygiene performance and mindful attention for inpatient physician teams. Design A pilot, pre-test/post-test randomised controlled mixed methods trial. Setting One academic medical centre in the USA. Participants Four internal medicine physician teams consisting of one attending, one resident, two to three interns and up to four medical students. Intervention A facilitated, group-based educational discussion on how mindfulness, as practised through mindful hand hygiene, may improve clinical care and practices in the hospital setting. Main outcomes and measures The primary outcome was hand hygiene adherence (percentage) for each patient encounter. Other outcomes were observable mindful moments and mindful attention, measured using the Mindfulness Attention Awareness Scale, from baseline to post-intervention, and qualitative evaluation of the intervention. Results For attending physicians, hand hygiene adherence increased 14.1% in the intervention group compared with a decrease of 5.7% in the controls (P=0.035). For residents, the comparable figures were 24.7% (intervention) versus 0.2% (control) (P=0.064). For interns, adherence increased 10.0% with the intervention versus 4.2% in the controls (P=0.007). For medical students, adherence improved more in the control group (4.7% intervention vs 7.7% controls; P=0.003). An increase in mindfulness behaviours was observed for the intervention group (3.7%) versus controls (0.9%) (P=0.021). Self-reported mindful attention did not change (P=0.865). Conclusions A brief, education-based mindfulness intervention improved hand hygiene in attending physicians and residents, but not in medical students. The intervention was well-received, increased mindfulness practice, and appears to be a feasible way to introduce mindfulness in the clinical setting. Future work instructing clinicians in mindfulness to improve hand hygiene may prove valuable. Trial registration number NCT03165799; Results.
BMJ Quality & Safety | 2018
Ashwin Gupta; Ashley Snyder; Allen Kachalia; Scott A. Flanders; Sanjay Saint; Vineet Chopra
Background Little is known about the incidence or significance of diagnostic error in the inpatient setting. We used a malpractice claims database to examine incidence, predictors and consequences of diagnosis-related paid malpractice claims in hospitalised patients. Methods The US National Practitioner Data Bank was used to identify paid malpractice claims occurring between 1 January 1999 and 31 December 2011. Patient and provider characteristics associated with paid claims were analysed using descriptive statistics. Differences between diagnosis-related paid claims and other paid claim types (eg, surgical, anaesthesia, medication) were assessed using Wilcoxon rank-sum and χ2 tests. Multivariable logistic regression was used to identify patient and provider factors associated with diagnosis-related paid claims. Trends for incidence of diagnosis-related paid claims and median annual payment were assessed using the Cochran-Armitage and non-parametric trend tests. Results 13 682 of 62 966 paid malpractice claims (22%) were diagnosis-related. Compared with other paid claim types, characteristics significantly associated with diagnosis-related paid claims were as follows: male patients, patient aged >50 years, provider aged <50 years and providers in the northeast region. Compared with other paid claim types, diagnosis-related paid claims were associated with 1.83 times greater risk of disability (95% CI 1.75 to 1.91; p<0.001) and 2.33 times greater risk of death (95% CI 2.23 to 2.43; p<0.001) than minor injury, after adjusting for patient and provider characteristics. Inpatient diagnostic error accounted for $5.7 billion in payments over the study period, and median diagnosis-related payments increased at a rate disproportionate to other claim types. Conclusion Inpatient diagnosis-related malpractice payments are common and more often associated with disability and death than other claim types. Research focused on understanding and mitigating diagnostic errors in hospital settings is necessary.
BMJ Open | 2018
Christopher M. Petrilli; Sanjay Saint; Joseph J Jennings; Andrew Caruso; Latoya Kuhn; Ashley Snyder; Vineet Chopra