
Publication


Featured research published by Nicole B. Gabler.


Circulation | 2012

Validation of 6-Minute Walk Distance as a Surrogate End Point in Pulmonary Arterial Hypertension Trials

Nicole B. Gabler; Benjamin French; Brian L. Strom; Harold I. Palevsky; Darren B. Taichman; Steven M. Kawut; Scott D. Halpern

Background— Nearly all available treatments for pulmonary arterial hypertension have been approved based on change in 6-minute walk distance (Δ6MWD) as a clinically important end point, but its validity as a surrogate end point has never been shown. We aimed to validate the difference in Δ6MWD against the probability of a clinical event in pulmonary arterial hypertension trials. Methods and Results— First, to determine whether Δ6MWD between baseline and 12 weeks mediated the relationship between treatment assignment and development of clinical events, we conducted a pooled analysis of patient-level data from the 10 randomized placebo-controlled trials previously submitted to the US Food and Drug Administration (n=2404 patients). Second, to identify a threshold effect for the Δ6MWD that indicated a statistically significant reduction in clinical events, we conducted a meta-regression among 21 drug/dose-level combinations. Δ6MWD accounted for 22.1% (95% confidence interval, 12.1%–31.1%) of the treatment effect (P<0.001). The meta-analysis showed an average difference in Δ6MWD of 22.4 m (95% confidence interval, 17.4–27.5 m), favoring active treatment over placebo. Active treatment decreased the probability of a clinical event (summary odds ratio, 0.44; 95% confidence interval, 0.33–0.57). The meta-regression revealed a significant threshold effect of 41.8 m. Conclusions— Our results suggest that Δ6MWD does not explain a large proportion of the treatment effect, has only modest validity as a surrogate end point for clinical events, and may not be a sufficient surrogate end point. Further research is necessary to determine whether the threshold value of 41.8 m is valid for long-term outcomes or whether it differs among trials using background therapy or lacking placebo controls entirely.
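The abstract does not spell out the estimator, but the "22.1% of the treatment effect" figure is consistent with the standard proportion-of-treatment-effect-explained quantity used in surrogate end point mediation analyses; a minimal sketch of that quantity, under that assumption:

\[
\widehat{\mathrm{PTE}} \;=\; 1 \;-\; \frac{\hat{\beta}_{\mathrm{trt}\,\mid\,\Delta 6MWD}}{\hat{\beta}_{\mathrm{trt}}}
\]

where \(\hat{\beta}_{\mathrm{trt}}\) is the treatment log odds ratio for a clinical event from a model without the surrogate and \(\hat{\beta}_{\mathrm{trt}\mid\Delta 6MWD}\) is the same coefficient after adjusting for Δ6MWD. A strong surrogate would push the ratio toward zero and the PTE toward 100%; the reported 22.1% therefore indicates weak mediation.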


Medical Care | 2011

N-of-1 trials in the medical literature: a systematic review.

Nicole B. Gabler; Naihua Duan; Sunita Vohra; Richard L. Kravitz

Background: N-of-1 trials (multiple crossover studies conducted in single individuals) may be ideal for determining individual treatment effects and as a tool to estimate heterogeneity of treatment effects (HTE) in a population. However, comprehensive data on n-of-1 trial methodology and analysis are lacking. We performed this study to describe n-of-1 trial characteristics, examine treatment changes resulting from n-of-1 trial participation, and determine whether trial reporting is adequate for estimating HTE. Methods: We undertook a systematic review of n-of-1 trials published between 1985 and December 2010. Included trials were those having individual treatment episodes as the unit of randomization and reporting individual-specific treatment effects. We abstracted trial characteristics, treatment change information, and analytic methods. Results: We included 108 trials reporting on 2154 participants. Approximately half (49%) of the trials used a statistical cutoff to determine a superior treatment, whereas the remainder used a graphical comparison (25%) or a clinical significance cutoff (20%). Sixty-seven trials, reporting on 488 people, provided treatment change information: 54% of participants had subsequent treatment decisions consistent with the results of the trial, 8% had decisions inconsistent with trial results, and 38% had ambiguous results. Fewer than half of the trials (45%) reported adequate information to facilitate the calculation of HTE. Conclusion: N-of-1 trials are a useful tool for enhancing therapeutic precision in a range of conditions and should be conducted more often. To facilitate future meta-analysis and the estimation of HTE, researchers reporting n-of-1 trial results should clearly describe individual data.
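As a rough sketch of how reported individual results could feed the HTE estimation discussed above (the review itself does not prescribe a specific model), individual treatment effect estimates \(\hat{\theta}_i\) with standard errors \(s_i\) could be combined in a random-effects model:

\[
\hat{\theta}_i = \mu + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \qquad \varepsilon_i \sim N(0, s_i^2),
\]

where \(\mu\) is the average treatment effect and the between-person variance \(\tau^2\) quantifies heterogeneity of treatment effects; estimating \(\tau^2\) is only possible when individual-level results are reported, which is why the review emphasizes that point.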


Annals of Internal Medicine | 2013

Outcomes among patients discharged from busy intensive care units.

Jason Wagner; Nicole B. Gabler; Sarah J. Ratcliffe; Sydney E. S. Brown; Brian L. Strom; Scott D. Halpern

BACKGROUND Strains on the capacities of intensive care units (ICUs) may influence the quality of ICU-to-floor transitions. OBJECTIVE To determine how 3 metrics of ICU capacity strain (ICU census, new admissions, and average acuity) measured on days of patient discharges influence ICU length of stay (LOS) and post-ICU discharge outcomes. DESIGN Retrospective cohort study from 2001 to 2008. SETTING 155 ICUs in the United States. PATIENTS 200 730 adults discharged from ICUs to hospital floors. MEASUREMENTS Associations between ICU capacity strain metrics and discharged patient ICU LOS, 72-hour ICU readmissions, subsequent in-hospital death, post-ICU discharge LOS, and hospital discharge destination. RESULTS Increases in the 3 strain variables on the days of ICU discharge were associated with shorter preceding ICU LOS (all P < 0.001) and increased odds of ICU readmissions (all P < 0.050). Going from the 5th to 95th percentiles of strain was associated with a 6.3-hour reduction in ICU LOS (95% CI, 5.3 to 7.3 hours) and a 1.0% increase in the odds of ICU readmission (CI, 0.6% to 1.5%). No strain variable was associated with increased odds of subsequent death, reduced odds of being discharged home from the hospital, or longer total hospital LOS. LIMITATION Long-term outcomes could not be measured. CONCLUSION When ICUs are strained, triage decisions seem to be affected such that patients are discharged from the ICU more quickly and, perhaps consequentially, have slightly greater odds of being readmitted to the ICU. However, short-term patient outcomes are unaffected. These results suggest that bed availability pressures may encourage physicians to discharge patients from the ICU more efficiently and that ICU readmissions are unlikely to be causally related to patient outcomes. PRIMARY FUNDING SOURCE Agency for Healthcare Research and Quality; National Heart, Lung, and Blood Institute; and Society of Critical Care Medicine.


BMJ | 2015

Quantifying the risks of non-oncology phase I research in healthy volunteers: meta-analysis of phase I studies.

Ezekiel J. Emanuel; Gabriella Bedarida; Kristy Macci; Nicole B. Gabler; Annette Rid; David Wendler

Objective To quantify the frequency and seriousness of adverse events in non-oncology phase I studies with healthy participants. Design Meta-analysis of individual, healthy volunteer level data. Setting Phase I studies with healthy volunteers conducted between September 2004 and March 2011 at Pfizer’s three dedicated phase I testing sites in Belgium, Singapore, and the United States. These included studies in which drug development was terminated. Participants 11 028 participants who received the study drug in 394 distinct non-oncology phase I studies, which involved 4620 unique individuals. A total of 2460 (53.2%) participants were involved in only one study, whereas others participated in two or more studies. Main outcome measures Adverse events classified as mild, moderate, and severe as well as serious adverse events—defined by the Food and Drug Administration as events that result in death, a life threatening event, admission to hospital, prolongation of existing hospital stay, a persistent or major disability, or a congenital anomaly or birth defect. Pfizer researchers of phase I trials determined adverse events, and serious adverse events were those filed with the FDA. Results Overall, 4000 (36.3%) participants who received the study drug experienced no adverse events and 7028 (63.7%) experienced 24 643 adverse events. Overall, 84.6% (n=20 840) of adverse events were mild and 1.0% (n=255) were severe. 34 (0.31%) serious adverse events occurred among the 11 028 participants who received the study agent, with no deaths or life threatening events. Of the 34 serious adverse events, 11 were related to the study drug and seven to study procedures, whereas 16 were unrelated to a study drug or procedure, including four that occurred when the participant was receiving a placebo. Overall, 24.1% (n=5947) of adverse events were deemed to be unrelated to the study drug. With a total of 143 (36%) studies involving placebo, 10.3% (n=2528) of all adverse events occurred among participants receiving placebo. The most common adverse events were headache (12.2%, n=3017), drowsiness (9.8%, n=2410), and diarrhea (6.9%, n=1698). Research on drugs for neuropsychiatric indications had the highest frequency of adverse events (3015 per 1000 participants). Conclusion Among 11 028 healthy participants who received study drug in non-oncology phase I studies, the majority (85%) of adverse events were mild. 34 (0.31%) serious adverse events occurred, with no life threatening events or deaths. Half of all adverse events were related to the study drug or to procedures. Extrapolation of these data to other types of phase I studies, especially with biological agents, may not be warranted.
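As a point of reference for the "per 1000 participants" rate reported above, the overall rate implied by the abstract's own numbers is roughly

\[
\frac{24\,643\ \text{adverse events}}{11\,028\ \text{participants}} \times 1000 \approx 2235\ \text{adverse events per 1000 participants},
\]

so the 3015 per 1000 observed for neuropsychiatric drug studies sits well above the overall average.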


American Journal of Respiratory and Critical Care Medicine | 2015

Comparison of Treatment Response in Idiopathic and Connective Tissue Disease–associated Pulmonary Arterial Hypertension

Rennie L. Rhee; Nicole B. Gabler; Sapna Sangani; Amy Praestgaard; Peter A. Merkel; Steven M. Kawut

RATIONALE Studies suggest that patients with connective tissue disease-associated pulmonary arterial hypertension (CTD-PAH) have a poorer treatment response to therapies for PAH compared with patients with idiopathic PAH (IPAH), but individual randomized controlled trials (RCTs) have been underpowered to examine differences within these subgroups. OBJECTIVES To compare the effect of therapy for PAH in CTD-PAH versus IPAH. METHODS We obtained individual participant data from phase III placebo-controlled RCTs of therapies for PAH submitted to the U.S. Food and Drug Administration for drug approval. A treatment-by-diagnosis interaction term evaluated differences in treatment response between CTD-PAH and IPAH. Outcomes included change in 6-minute-walk distance (∆6MWD) from baseline to 12 weeks, clinical worsening, and all-cause mortality. MEASUREMENTS AND MAIN RESULTS The study sample included 827 participants with CTD-PAH and 1,935 with IPAH from 11 RCTs. Patients with CTD-PAH had less improvement in 6MWD when assigned to active treatment versus placebo compared with patients with IPAH (difference in treatment effect on ∆6MWD in CTD-PAH vs. IPAH, -17.3 m; 90% confidence interval, -31.3 to -3.3; P for interaction = 0.043). Treatment was less effective in reducing the occurrence of clinical worsening in CTD-PAH versus IPAH (P for interaction = 0.012), but there was no difference in the placebo-adjusted effect of treatment on mortality (P for interaction = 0.65). CONCLUSIONS Treatment for PAH was less effective in CTD-PAH compared with IPAH in terms of increasing 6MWD and preventing clinical worsening. The heterogeneity of treatment response supports the need for identifying therapies that are more effective for CTD-PAH.
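The "treatment-by-diagnosis interaction term" can be illustrated with a minimal model for the walk-distance outcome (the authors' actual specification, e.g., any trial-level random effects, is not given in the abstract):

\[
\Delta 6MWD_i = \beta_0 + \beta_1\,\mathrm{treat}_i + \beta_2\,\mathrm{CTD}_i + \beta_3\,(\mathrm{treat}_i \times \mathrm{CTD}_i) + \varepsilon_i,
\]

where \(\beta_1\) is the treatment effect in IPAH, \(\beta_1 + \beta_3\) is the treatment effect in CTD-PAH, and \(\beta_3\) is the between-subgroup difference in treatment effect (estimated above as −17.3 m); the reported "P for interaction" tests \(\beta_3 = 0\).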


Circulation | 2014

Are Hemodynamics Surrogate End Points in Pulmonary Arterial Hypertension?

Corey E. Ventetuolo; Nicole B. Gabler; Jason S. Fritz; K. Akaya Smith; Harold I. Palevsky; James R. Klinger; Scott D. Halpern; Steven M. Kawut

Background— Although frequently assessed in trials and clinical practice, hemodynamic response to therapy has never been validated as a surrogate end point for clinical events in pulmonary arterial hypertension (PAH). Methods and Results— We performed a patient-level pooled analysis of 4 randomized, placebo-controlled trials to determine whether treatment-induced changes in hemodynamic values at 12 weeks accounted for the relationship between treatment assignment and the probability of early clinical events (death, lung transplantation, atrial septostomy, PAH hospitalization, withdrawal for clinical worsening, or escalation in PAH therapy). We included 1119 subjects with PAH. The median (interquartile range) age was 48 years (37–59 years), and 23% were men. A total of 656 patients (59%) received active therapy (101 [15%] iloprost, 118 [18%] sitaxsentan, 204 [31%] sildenafil, and 233 [36%] subcutaneous treprostinil). Active treatment significantly lowered right atrial pressure, mean pulmonary artery pressure, and pulmonary vascular resistance and increased cardiac output and index (P<0.01 for all). Changes in hemodynamic values (except for right atrial pressure and mean pulmonary artery pressure) were significantly associated with the risk of a clinical event (P<0.02 for all). Although active treatment approximately halved the odds of a clinical event compared with placebo (P<0.001), changes in hemodynamics accounted for only 1.2% to 13.9% of the overall treatment effect. Conclusions— Treatment-induced changes in hemodynamics at 12 weeks only partially explain the impact of therapy on the probability of early clinical events in PAH. These findings suggest that resting hemodynamics are not valid surrogate end points for short-term events in PAH clinical trials.


American Journal of Respiratory and Critical Care Medicine | 2014

The allocation of intensivists' rounding time under conditions of intensive care unit capacity strain.

Sydney E. S. Brown; Michael M. Rey; Dustin Pardo; Scott Weinreb; Sarah J. Ratcliffe; Nicole B. Gabler; Scott D. Halpern

To the Editor: With rising demand for critical care, intensivists’ time must increasingly be divided among patients (1–6). Recent studies suggest that increased strain at intensive care unit (ICU) admission leads to higher mortality in closed ICUs (7) and that increased strain at discharge leads to increases in ICU readmissions (8). These relationships between strain and outcomes could be mediated by strain-induced changes in the time intensivists devote to patients during patient care rounds (7–9). We therefore examined how the allocation of intensivists’ time during rounds changes at times of low versus high ICU strain and whether intensivists preferentially allocate time away from certain patient groups as strain increases. Some results have been previously reported in the form of an abstract (10).

Methods

We conducted a prospective study of patient care rounds in the 24-bed medical ICU of the Hospital of the University of Pennsylvania in 2012. Time spent performing various rounding activities was recorded in real time by trained data collectors, using a tablet computer. Methods for assessing interrater reliability can be found in the online supplement. Data collection was randomly assigned to one of two intensivist-led medical ICU teams each day and was not performed on weekends. Variables describing patient characteristics, staffing, and ICU strain were obtained from the electronic medical record and as part of a separate clinical trial (11). Our analysis focused on “cognitive rounding time” (time spent on the patient’s assessment and plan) and on total rounding time (presentation of events and data, assessment and plan, and teaching related to that patient). Three validated strain variables (5) were considered: team census (“census”), representing the number of patients rounded on by the observed team each day; number of new admissions (“admissions”) since the end of rounds the previous day; and average severity of illness (“acuity”) of patients on the team, using Acute Physiology and Chronic Health Evaluation III (APACHE III) scores (12).

We constructed explanatory linear mixed-effects models for cognitive and total rounding time for each patient-day. Patients were treated as random clusters, cumulative days hospitalized in the ICU were included as a random slope and as a linear term, and attending was treated as an indicator variable (13). Patient race was obtained from the electronic medical record and could be reported by either patient or provider. We considered three race categories: black, nonblack (white or Asian), and unknown. Table 1 and Table E1 in the online supplement describe all evaluated covariates.

Table 1. Descriptive Statistics

We constructed separate models to determine whether time was allocated away from specific patient groups as strain increased by exploring interactions between the three strain variables and the following six patient variables: admission status (new admission vs. follow-up), patient race (black vs. nonblack), age (continuous), severity of illness (continuous), family presence on rounds, and patient sex. We then constructed a fully adjusted model with all interactions having a P value < 0.2 and used backward selection, removing nonsignificant terms (14). We used Holm tests of conditional significance given the multiple comparisons made (15). Additional details regarding the statistical analyses are available in the online supplement.
Results

Rounds were observed for 566 patients over the course of 114 noncontiguous weekdays, for a total of 1,295 patient-days. Intensivists rounded on a median of 11 patients (interquartile range [IQR], 10–13) each day, including two new admissions (IQR, 1–3). Median daily rounding time was 188.6 minutes (IQR, 164.8–212.6 min); 91.9 minutes (IQR, 77.9–107.3 min) were spent on cognitive rounding time (Table 1). Daily rounding time increased as census (6.3 min; 95% confidence interval [CI], 2.4–10.1 min; P = 0.002) and admissions (6.0 min; 95% CI, 0.6–11.4 min; P = 0.031) increased (Figure E1 in the online supplement); cognitive rounding time increased as census increased (2.5 min; 95% CI, 0.1–4.9 min; P = 0.045).

In fully adjusted models, with increasing daily admissions, newly admitted patients received 1.38 fewer minutes (95% CI, −2.43 to −0.33 min; P_Holm = 0.002) of total rounding time (interaction P value = 0.01) and 0.73 fewer minutes (95% CI, −1.42 to −0.07 min; P_Holm = 0.0113) of cognitive rounding time per additional admission. No significant changes occurred among follow-up patients (interaction P value = 0.030; Figure E2). As census increased, each unit increase led to a 0.5-minute (95% CI, 0.87–0.13 min; P_Holm = 0.0135) decrease in cognitive rounding time among new admissions, with no decrement among follow-ups (interaction P value = 0.028).

The effect of census on total rounding time was modified by new admission status and race. A three-way interaction (P = 0.04) revealed that among follow-ups, nonblack patients received 3.4 minutes (95% CI, −5.6 to −1.2 min; P_Holm = 0.02) more than blacks at low census (eight patients); however, the excess time spent with nonblacks disappeared as census increased (P < 0.01). In contrast, no significant differences in strain-induced decrements in rounding time were observed between black and nonblack new admissions (P = 0.22; Figure 1). These relationships persisted in two sensitivity analyses, excluding patients of indeterminate race or excluding Asians from the nonblack group (Table E2).

Figure 1. Total rounding time. Models are adjusted for acuity, severity of illness measured on Day 1 of the first intensive care unit (ICU) stay, day number in ICU course, attending, data collector, order patient was rounded on (inverse), maximum team size, attending’s ...

Neither patient age, sex, acuity, severity of illness, nor the presence of family on rounds affected the allocation of rounding time.

Discussion

This study provides the first description of how ICU physicians allocate rounding time among patients and how this allocation changes as ICUs become strained. Daily rounding time increased with increases in census and admissions, but less time was spent per patient, primarily affecting new admissions and nonblack follow-up patients. These findings are consistent with studies showing that clinicians perceive their time to be highly constrained (1, 5, 6). The observation that strain preferentially affected new admissions and nonblack follow-ups may reflect the fact that these patients received more time in general, such that further reductions were challenging. Importantly, we found that increases in ICU strain did not result in disproportionate decreases in the time allocated to other patient subgroups, suggesting that ICU physicians generally ration their time equitably.
Although total rounding time was allocated away from nonblack follow-ups as census increased, the facts that nonblacks received more time overall and that similar patterns were not observed among new admissions cast doubt on whether this finding represents a true racial disparity.

This study had several limitations. First, data were not collected outside of morning rounds, and therefore we could not assess how strain affected time allocation at other times. Second, although a differential effect of census on time allocation was not found between newly admitted black and nonblack patients, future research should determine whether a larger sample would reveal a significant disparity. Third, severity of illness was assessed only at ICU admission, limiting severity adjustment on subsequent days; however, bias introduced by inadequate severity adjustment is unlikely to be differential across different levels of census, and therefore it is unlikely to have affected the results. Residual confounding could still be present, as severity of illness may be indirectly affected by census. Finally, data capture was not formally evaluated; however, interrater reliability was excellent.

In summary, this study provides a description of how intensivists allocate their time among patients as their workloads increase, providing objective confirmation of the common perception that time is a scarce resource. However, as a single-center study, these results may not generalize to other ICUs. In addition, because we often lacked data on rounding time on the same patient over contiguous days, we could not address whether observed decreases in rounding time mediated previously observed relationships between strain and outcomes or whether they represent improved efficiency. Future research is needed to explore these questions (7, 8).
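The mixed-model setup described in the Methods (patients as random clusters, cumulative ICU day as a random slope, attending as an indicator) could be sketched roughly as follows with statsmodels; the variable names and data file are hypothetical, and the authors' actual specification and software are not stated in the letter.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-day data set (column names are illustrative, not the
# authors'): one row per patient per rounding day, with cognitive rounding
# minutes, the three strain measures, the attending, the cumulative ICU day,
# and a patient identifier.
df = pd.read_csv("rounding_times.csv")

# Linear mixed-effects model roughly matching the description above:
# patients as random clusters, cumulative ICU day as both a fixed linear
# term and a random slope, attending as an indicator (categorical) term.
model = smf.mixedlm(
    "cognitive_min ~ census + admissions + acuity + icu_day + C(attending)",
    data=df,
    groups=df["patient_id"],
    re_formula="~icu_day",
)
result = model.fit()
print(result.summary())

# Strain-by-patient-group interactions (e.g., census by new-admission status)
# would be added as extra formula terms such as "census:new_admission" before
# backward selection, as the letter describes.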


JAMA | 2017

Discriminative Accuracy of Physician and Nurse Predictions for Survival and Functional Outcomes 6 Months After an ICU Admission

Michael E. Detsky; Michael O. Harhay; Dominique F. Bayard; Aaron M. Delman; Anna E. Buehler; Saida Kent; Isabella V. Ciuffetelli; Elizabeth Cooney; Nicole B. Gabler; Sarah J. Ratcliffe; Mark E. Mikkelsen; Scott D. Halpern

Importance Predictions of long-term survival and functional outcomes influence decision making for critically ill patients, yet little is known regarding their accuracy. Objective To determine the discriminative accuracy of intensive care unit (ICU) physicians and nurses in predicting 6-month patient mortality and morbidity, including ambulation, toileting, and cognition. Design, Setting, and Participants Prospective cohort study conducted in 5 ICUs in 3 hospitals in Philadelphia, Pennsylvania, and enrolling patients who spent at least 3 days in the ICU from October 2013 until May 2014 and required mechanical ventilation, vasopressors, or both. These patients’ attending physicians and bedside nurses were also enrolled. Follow-up was completed in December 2014. Main Outcomes and Measures ICU physicians’ and nurses’ binary predictions of in-hospital mortality and 6-month outcomes, including mortality, return to original residence, ability to toilet independently, ability to ambulate up 10 stairs independently, and ability to remember most things, think clearly, and solve day-to-day problems (ie, normal cognition). For each outcome, physicians and nurses provided a dichotomous prediction and rated their confidence in that prediction on a 5-point Likert scale. Outcomes were assessed via interviews with surviving patients or their surrogates at 6 months. Discriminative accuracy was measured using positive and negative likelihood ratios (LRs), C statistics, and other operating characteristics. Results Among 340 patients approached, 303 (89%) consented (median age, 62 years [interquartile range, 53-71]; 57% men; 32% African American); 6-month follow-up was completed for 299 (99%), of whom 169 (57%) were alive. Predictions were made by 47 physicians and 128 nurses. Physicians most accurately predicted 6-month mortality (positive LR, 5.91 [95% CI, 3.74-9.32]; negative LR, 0.41 [95% CI, 0.33-0.52]; C statistic, 0.76 [95% CI, 0.72-0.81]) and least accurately predicted cognition (positive LR, 2.36 [95% CI, 1.36-4.12]; negative LR, 0.75 [95% CI, 0.61-0.92]; C statistic, 0.61 [95% CI, 0.54-0.68]). Nurses most accurately predicted in-hospital mortality (positive LR, 4.71 [95% CI, 2.94-7.56]; negative LR, 0.61 [95% CI, 0.49-0.75]; C statistic, 0.68 [95% CI, 0.62-0.74]) and least accurately predicted cognition (positive LR, 1.50 [95% CI, 0.86-2.60]; negative LR, 0.88 [95% CI, 0.73-1.06]; C statistic, 0.55 [95% CI, 0.48-0.62]). Discriminative accuracy was higher when physicians and nurses were confident about their predictions (eg, for physicians’ confident predictions of 6-month mortality: positive LR, 33.00 [95% CI, 8.34-130.63]; negative LR, 0.18 [95% CI, 0.09-0.35]; C statistic, 0.90 [95% CI, 0.84-0.96]). Compared with a predictive model including objective clinical variables, a model that also included physician and nurse predictions had significantly higher discriminative accuracy for in-hospital mortality, 6-month mortality, and return to original residence (P < .01 for all). Conclusions and Relevance ICU physicians’ and nurses’ discriminative accuracy in predicting 6-month outcomes of critically ill patients varied depending on the outcome being predicted and confidence of the predictors. Further research is needed to better understand how clinicians derive prognostic estimates of long-term outcomes.
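For readers unfamiliar with the reported metrics, the positive and negative likelihood ratios are standard transformations of the sensitivity and specificity of the dichotomous predictions:

\[
LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}},
\]

so, for example, a confident physician prediction of 6-month mortality with a positive LR of 33 multiplies the pre-test odds of death by 33, while a negative LR of 0.18 reduces them more than fivefold.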


Critical Care Medicine | 2015

An Observational Study of Decision Making by Medical Intensivists.

Mary S. McKenzie; Catherine L. Auriemma; Jennifer Olenik; Elizabeth Cooney; Nicole B. Gabler; Scott D. Halpern

Objectives: The ICU is a place of frequent, high-stakes decision making. However, the number and types of decisions made by intensivists have not been well characterized. We sought to describe intensivist decision making and determine how the number and types of decisions are affected by patient, provider, and systems factors. Design: Direct observation of intensivist decision making during patient rounds. Setting: Twenty-four-bed academic medical ICU. Subjects: Medical intensivists leading patient care rounds. Intervention: None. Measurements and Main Results: During 920 observed patient rounds on 374 unique patients, intensivists made 8,174 critical care decisions (mean, 8.9 decisions per patient daily, 102.2 total decisions daily) over a mean of 3.7 hours. Patient factors associated with increased numbers of decisions included a shorter time since ICU admission and an earlier slot in rounding order (both p < 0.05). Intensivist identity explained the greatest proportion of variance in number of decisions per patient even when controlling for all other factors significant in bivariable regression. A given intensivist made more decisions per patient during days later in the 14-day rotation (p < 0.05). Female intensivists made significantly more decisions than male intensivists (p < 0.05). Conclusions: Intensivists made over 100 daily critical care decisions during rounds. The number of decisions was influenced by a variety of patient- and system-related factors and was highly variable among intensivists. Future work is needed to explore effects of the decision-making burden on providers’ choices and on patient outcomes.


Journal of Critical Care | 2015

Intensive care unit capacity strain and adherence to prophylaxis guidelines.

Gary E. Weissman; Nicole B. Gabler; Sydney E. S. Brown; Scott D. Halpern

PURPOSE The purpose of the study is to examine the relationship between different measures of capacity strain and adherence to prophylaxis guidelines in the intensive care unit (ICU). MATERIALS AND METHODS We conducted a retrospective cohort study within the Project IMPACT database. We used multivariable logistic regression to examine relationships between ICU capacity strain and appropriate usage of venous thromboembolism prophylaxis (VTEP) and stress ulcer prophylaxis (SUP). RESULTS Of 776,905 patient-days eligible for VTEP, appropriate therapy was provided on 68%. Strain as measured by proportion of new admissions (odds ratio [OR], 0.91; 95% confidence interval [CI], 0.90-0.91) and census (OR, 0.97; 95% CI, 0.97-0.98) was associated with decreased odds of receiving VTEP. With increasing strain as measured by new admissions, the degradation of VTEP utilization was more severe in ICUs with closed (OR, 0.85; 95% CI, 0.83-0.88) than open (OR, 0.91; 95% CI, 0.91-0.92) staffing models (interaction P<.001). Of 185,425 patient-days eligible for SUP, 48% received appropriate therapy. Administration of SUP was not significantly influenced by any measure of strain. CONCLUSIONS Rising capacity strain in the ICU reduces the odds that patients will receive appropriate VTEP but not SUP. The variability among different types of ICUs in the extent to which strain degraded VTEP use suggests opportunities for systems improvement.
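A strain-by-staffing-model interaction of the kind reported above could be sketched as follows; the column names and data file are hypothetical, and the study's full covariate set and handling of clustering by ICU are not given in the abstract.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-day data (column names are illustrative): whether
# appropriate VTE prophylaxis was given that day, the daily strain measures,
# and whether the ICU uses a closed staffing model.
df = pd.read_csv("vtep_patient_days.csv")

# Multivariable logistic regression with a strain-by-staffing-model
# interaction, mirroring the closed vs. open ICU comparison.
fit = smf.logit(
    "vtep_given ~ new_admissions * closed_icu + census + acuity",
    data=df,
).fit()
print(fit.summary())

# Express the strain coefficients as odds ratios, as in the abstract.
print(np.exp(fit.params))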

Collaboration


Dive into Nicole B. Gabler's collaborations.

Top Co-Authors

Scott D. Halpern (University of Pennsylvania)
Elizabeth Cooney (University of Pennsylvania)
Steven M. Kawut (University of Pennsylvania)
Brian L. Strom (University of Pennsylvania)
Michael O. Harhay (University of Pennsylvania)
Benjamin French (University of Pennsylvania)
Derek C. Angus (University of Pittsburgh)