Publication


Featured research published by Asaf Hanish.


BMJ | 2009

Overall and cancer related mortality among patients with ocular inflammation treated with immunosuppressive drugs: retrospective cohort study.

John H. Kempen; Ebenezer Daniel; James P. Dunn; C. Stephen Foster; Sapna Gangaputra; Asaf Hanish; Kathy J. Helzlsouer; Douglas A. Jabs; R. Oktay Kaçmaz; Grace A. Levy-Clarke; Teresa L. Liesegang; Craig Newcomb; Robert B. Nussenblatt; Siddharth S. Pujari; James T. Rosenbaum; Eric B. Suhler; Jennifer E. Thorne

Context Whether immunosuppressive treatment adversely affects survival is unclear. Objective To assess whether immunosuppressive drugs increase mortality. Design Retrospective cohort study evaluating overall and cancer mortality in relation to immunosuppressive drug exposure among patients with ocular inflammatory diseases. Demographic, clinical, and treatment data derived from medical records, and mortality results from United States National Death Index linkage. The cohort’s mortality risk was compared with US vital statistics using standardised mortality ratios. Overall and cancer mortality in relation to use or non-use of immunosuppressive drugs within the cohort was studied with survival analysis. Setting Five tertiary ocular inflammation clinics. Patients 7957 US residents with non-infectious ocular inflammation, 2340 of whom received immunosuppressive drugs during follow up. Exposures Use of antimetabolites, T cell inhibitors, alkylating agents, and tumour necrosis factor inhibitors. Main outcome measures Overall mortality, cancer mortality. Results Over 66 802 person years (17 316 after exposure to immunosuppressive drugs), 936 patients died (1.4/100 person years), 230 (24.6%) from cancer. For patients unexposed to immunosuppressive treatment, risks of death overall (standardised mortality ratio 1.02, 95% confidence interval [CI] 0.94 to 1.11) and from cancer (1.10, 0.93 to 1.29) were similar to those of the US population. Patients who used azathioprine, methotrexate, mycophenolate mofetil, ciclosporin, systemic corticosteroids, or dapsone had overall and cancer mortality similar to that of patients who never took immunosuppressive drugs. In patients who used cyclophosphamide, overall mortality was not increased and cancer mortality was non-significantly increased. Tumour necrosis factor inhibitors were associated with increased overall (adjusted hazard ratio [HR] 1.99, 95% CI 1.00 to 3.98) and cancer mortality (adjusted HR 3.83, 1.13 to 13.01). Conclusions Most commonly used immunosuppressive drugs do not seem to increase overall or cancer mortality. Our results suggesting that tumour necrosis factor inhibitors might increase mortality are less robust than the other findings; additional evidence is needed.
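
As a point of reference for the standardised mortality ratios reported here, the sketch below shows how an SMR is typically computed: observed deaths in the cohort divided by the deaths expected if the cohort's person-years experienced the reference population's stratum-specific mortality rates. All figures in the example are illustrative placeholders, not values from this study.

```python
# Minimal sketch of a standardised mortality ratio (SMR): observed deaths
# divided by deaths expected under reference-population mortality rates.
# All numbers below are illustrative, not data from the cohort above.

observed_deaths = 95

# (person_years, reference_deaths_per_person_year) for each age/sex stratum.
strata = [
    (12_000, 0.0020),
    (18_500, 0.0045),
    (9_300, 0.0110),
]

expected_deaths = sum(py * rate for py, rate in strata)
smr = observed_deaths / expected_deaths
print(f"expected deaths = {expected_deaths:.1f}, SMR = {smr:.2f}")
```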


Journal of Hospital Medicine | 2015

Development, implementation, and impact of an automated early warning and response system for sepsis

Craig A. Umscheid; Joel S. Betesh; Christine VanZandbergen; Asaf Hanish; Gordon Tait; Mark E. Mikkelsen; Benjamin French; Barry D. Fuchs

BACKGROUND Early recognition and timely intervention significantly reduce sepsis-related mortality. OBJECTIVE Describe the development, implementation, and impact of an early warning and response system (EWRS) for sepsis. DESIGN After tool derivation and validation, a preimplementation/postimplementation study with multivariable adjustment measured impact. SETTING Urban academic healthcare system. PATIENTS Adult non-ICU patients admitted to acute inpatient units from October 1, 2011 to October 31, 2011 for tool derivation, June 6, 2012 to July 5, 2012 for tool validation, and June 6, 2012 to September 4, 2012 and June 6, 2013 to September 4, 2013 for the preimplementation/postimplementation analysis. INTERVENTION An EWRS in our electronic health record monitored laboratory values and vital signs in real time. If a patient had ≥4 predefined abnormalities at any single time, the provider, nurse, and rapid response coordinator were notified and performed an immediate bedside patient evaluation. MEASUREMENTS Screen positive rates, test characteristics, predictive values, and likelihood ratios; system utilization; and resulting changes in processes and outcomes. RESULTS The tool's screen positive rate, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios for our composite of intensive care unit (ICU) transfer, rapid response team call, or death in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively. Validation values were similar. The EWRS resulted in a statistically significant increase in early sepsis care, ICU transfer, and sepsis documentation, and decreased sepsis mortality and increased discharge to home, although neither of these latter 2 findings reached statistical significance. CONCLUSIONS An automated prediction tool identified at-risk patients and prompted a bedside evaluation resulting in more timely sepsis care, improved documentation, and a suggestion of reduced mortality.
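
A minimal sketch of the trigger logic described in this abstract (an alert fires when ≥4 predefined abnormalities are present at a single point in time) is shown below. The specific parameters and thresholds are assumptions chosen for illustration, not the criteria configured in the published EWRS.

```python
# Sketch of a ">=4 concurrent abnormalities" trigger. The parameters and
# cut-offs below are illustrative assumptions, not the published criteria.

from typing import Dict

# Each predicate returns True when a value is abnormal; real criteria and
# units would come from the EHR build.
ABNORMALITY_CHECKS = {
    "temperature_c": lambda v: v < 36.0 or v > 38.3,
    "heart_rate": lambda v: v > 90,
    "respiratory_rate": lambda v: v > 20,
    "wbc_k_per_ul": lambda v: v < 4.0 or v > 12.0,
    "systolic_bp": lambda v: v < 100,
    "lactate_mmol_l": lambda v: v > 2.2,
}

TRIGGER_THRESHOLD = 4  # alert fires when >=4 abnormalities co-occur


def ewrs_alert(observations: Dict[str, float]) -> bool:
    """Return True if the alert should fire for one snapshot of vitals/labs."""
    abnormal = sum(
        1
        for name, is_abnormal in ABNORMALITY_CHECKS.items()
        if name in observations and is_abnormal(observations[name])
    )
    return abnormal >= TRIGGER_THRESHOLD


# Example: a snapshot with four abnormal values fires the alert.
snapshot = {
    "temperature_c": 38.9,
    "heart_rate": 112,
    "respiratory_rate": 24,
    "wbc_k_per_ul": 15.2,
    "systolic_bp": 118,
}
print(ewrs_alert(snapshot))  # True
```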


Journal of Hospital Medicine | 2013

The readmission risk flag: Using the electronic health record to automatically identify patients at risk for 30-day readmission

Charles A. Baillie; Christine VanZandbergen; Gordon Tait; Asaf Hanish; Brian F Leas; Benjamin French; C. William Hanson; Maryam Behta; Craig A. Umscheid

BACKGROUND Identification of patients at high risk for readmission is a crucial step toward improving care and reducing readmissions. The adoption of electronic health records (EHR) may prove important to strategies designed to risk stratify patients and introduce targeted interventions. OBJECTIVE To develop and implement an automated prediction model integrated into our health system's EHR that identifies, on admission, patients at high risk for readmission within 30 days of discharge. DESIGN Retrospective and prospective cohort. SETTING Healthcare system consisting of 3 hospitals. PATIENTS All adult patients admitted from August 2009 to September 2012. INTERVENTIONS An automated readmission risk flag integrated into the EHR. MEASURES Thirty-day all-cause and 7-day unplanned healthcare system readmissions. RESULTS Using retrospective data, a single risk factor, ≥ 2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%), with a C statistic of 0.62. Sensitivity (39%), positive predictive value (30%), proportion of patients flagged (18%), and C statistic (0.61) during the 12-month period after implementation of the risk flag were similar. There was no evidence for an effect of the intervention on 30-day all-cause and 7-day unplanned readmission rates in the 12-month period after implementation. CONCLUSIONS An automated prediction model was effectively integrated into an existing EHR and identified patients on admission who were at risk for readmission within 30 days of discharge.
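
The single-risk-factor rule described in this abstract (flag the admission if the patient had ≥2 inpatient admissions in the preceding 12 months) can be sketched as follows; the data model and field names are assumptions.

```python
# Minimal sketch of the readmission risk flag described above: flag the
# encounter if >=2 prior inpatient admissions fall in the past 12 months.
# Input representation is an assumption for illustration.

from datetime import date, timedelta
from typing import List


def readmission_risk_flag(admission_date: date,
                          prior_admission_dates: List[date]) -> bool:
    """Flag the encounter if >=2 prior admissions occurred in the past 12 months."""
    window_start = admission_date - timedelta(days=365)
    recent = [d for d in prior_admission_dates
              if window_start <= d < admission_date]
    return len(recent) >= 2


# Example usage with hypothetical dates.
print(readmission_risk_flag(
    date(2012, 9, 1),
    [date(2012, 1, 10), date(2012, 6, 3), date(2010, 4, 22)],
))  # True: two admissions within the prior 12 months
```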


Annals of the American Thoracic Society | 2015

Post-Acute Care Use and Hospital Readmission after Sepsis.

Tiffanie K. Jones; Barry D. Fuchs; Dylan S. Small; Scott D. Halpern; Asaf Hanish; Craig A. Umscheid; Charles A. Baillie; Meeta Prasad Kerlin; David F. Gaieski; Mark E. Mikkelsen

RATIONALE The epidemiology of post-acute care use and hospital readmission after sepsis remains largely unknown. OBJECTIVES To examine the rate of post-acute care use and hospital readmission after sepsis and to examine risk factors and outcomes for hospital readmissions after sepsis. METHODS In an observational cohort study conducted in an academic health care system (2010-2012), we compared post-acute care use at discharge and hospital readmission after 3,620 sepsis hospitalizations with 108,958 nonsepsis hospitalizations. We used three validated, claims-based approaches to identify sepsis and severe sepsis. MEASUREMENTS AND MAIN RESULTS Post-acute care use at discharge was more likely after sepsis, driven by skilled care facility placement (35.4% after sepsis vs. 15.8%; P < 0.001), with the highest rate observed after severe sepsis. Readmission rates at 7, 30, and 90 days were higher postsepsis (P < 0.001). Compared with nonsepsis hospitalizations (15.6% readmitted within 30 d), the increased readmission risk was present regardless of sepsis severity (27.3% after sepsis and 26.0-26.2% after severe sepsis). After controlling for presepsis characteristics, the readmission risk after sepsis was 1.51 times greater (95% CI, 1.38-1.66) than after nonsepsis hospitalizations. Readmissions after sepsis were more likely to result in death or transition to hospice care (6.1% after nonsepsis vs. 13.3% after sepsis hospitalizations; P < 0.001). Independent risk factors associated with 30-day readmissions after sepsis hospitalizations included age, malignancy diagnosis, hospitalizations in the year prior to the index hospitalization, nonelective index admission type, one or more procedures during the index hospitalization, and low hemoglobin and high red cell distribution width at discharge. CONCLUSIONS Post-acute care use and hospital readmissions were common after sepsis. The increased readmission risk after sepsis was observed regardless of sepsis severity and was associated with adverse readmission outcomes.


Clinical Journal of The American Society of Nephrology | 2011

Correlates of Osteoprotegerin and Association with Aortic Pulse Wave Velocity in Patients with Chronic Kidney Disease

Julia J. Scialla; Mary B. Leonard; Raymond R. Townsend; Lawrence J. Appel; Myles Wolf; Matthew J. Budoff; Jing Chen; Eva Lustigova; Crystal A. Gadegbeku; Melanie Glenn; Asaf Hanish; Dominic S. Raj; Sylvia E. Rosas; Stephen L. Seliger; Matthew R. Weir; Rulan S. Parekh

BACKGROUND AND OBJECTIVES Osteoprotegerin (OPG), a cytokine that regulates bone resorption, has been implicated in the process of vascular calcification and stiffness. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS Serum OPG was measured in 351 participants with chronic kidney disease (CKD) from one site of the Chronic Renal Insufficiency Cohort Study. Cortical bone mineral content (BMC) was measured by quantitative computed tomography in the tibia. Multivariable linear regression was used to test the association between serum OPG and traditional cardiovascular risk factors, measures of abnormal bone and mineral metabolism, and pulse wave velocity. RESULTS Higher serum OPG levels were associated with older age, female gender, greater systolic BP, lower estimated GFR, and lower serum albumin. OPG was not associated with measures of abnormal bone or mineral metabolism including serum phosphorus, albumin-corrected serum calcium, intact parathyroid hormone, bone-specific alkaline phosphatase, or cortical BMC. Among 226 participants with concurrent aortic pulse wave velocity measurements, increasing tertiles of serum OPG were associated with higher aortic pulse wave velocity after adjustment for demographics, traditional vascular risk factors, and nontraditional risk factors such as estimated GFR, albuminuria, serum phosphate, corrected serum calcium, presence of secondary hyperparathyroidism, serum albumin, and C-reactive protein or after additional adjustment for cortical BMC in a subset (n = 161). CONCLUSIONS These data support a strong relationship between serum OPG and arterial stiffness independent of many potential confounders including traditional cardiovascular risk factors, abnormal bone and mineral metabolism, and inflammation.


Critical Care Medicine | 2016

Association Between Index Hospitalization and Hospital Readmission in Sepsis Survivors.

Alexander Sun; Giora Netzer; Dylan S. Small; Asaf Hanish; Barry D. Fuchs; David F. Gaieski; Mark E. Mikkelsen

Objectives: Hospital readmission is common after sepsis, yet the relationship between the index admission and readmission remains poorly understood. We sought to examine the relationship between infection during the index acute care hospitalization and readmission and to identify potentially modifiable factors during the index sepsis hospitalization associated with readmission. Design: In a retrospective cohort study, we evaluated 444 sepsis survivors at risk of an unplanned hospital readmission in 2012. The primary outcome was 30-day unplanned hospital readmission. Setting: Three hospitals within an academic healthcare system. Subjects: Four hundred forty-four sepsis survivors. Measurements and Main Results: Of 444 sepsis survivors, 23.4% (95% CI, 19.6–27.6%) experienced an unplanned 30-day readmission compared with 10.1% (95% CI, 9.6–10.7%) among 11,364 nonsepsis survivors over the same time period. The most common cause for readmission after sepsis was infection (69.2%, 72 of 104). Among infection-related readmissions, 51.4% were categorized as recurrent/unresolved. Patients with sepsis present on their index admission who also developed a hospital-acquired infection (“second hit”) were nearly twice as likely to have an unplanned 30-day readmission compared with those who presented with sepsis at admission and did not develop a hospital-acquired infection or those who presented without infection and then developed hospital-acquired sepsis (38.6% vs 22.2% vs 20.0%, p = 0.04). Infection-related hospital readmissions, specifically, were more likely in patients with a “second hit” and patients receiving a longer duration of antibiotics. The use of total parenteral nutrition (p = 0.03), longer duration of antibiotics (p = 0.047), prior hospitalizations, and lower discharge hemoglobin (p = 0.04) were independently associated with hospital readmission. Conclusions: We confirmed that the majority of unplanned hospital readmissions after sepsis are due to an infection. We found that patients with sepsis at admission who developed a hospital-acquired infection, and those who received a longer duration of antibiotics, appear to be high-risk groups for unplanned, all-cause 30-day readmissions and infection-related 30-day readmissions.


BMC Medical Informatics and Decision Making | 2012

Effectiveness of a novel and scalable clinical decision support intervention to improve venous thromboembolism prophylaxis: a quasi-experimental study

Craig A. Umscheid; Asaf Hanish; Jesse Chittams; Mark G. Weiner; Todd E.H. Hecht

Background Venous thromboembolism (VTE) causes morbidity and mortality in hospitalized patients, and regulators and payors are encouraging the use of systems to prevent them. Here, we examine the effect of a computerized clinical decision support (CDS) intervention implemented across a multi-hospital academic health system on VTE prophylaxis and events. Methods The study included 223,062 inpatients admitted between April 2007 and May 2010, and used administrative and clinical data. The intervention was integrated into a commercial electronic health record (EHR) in an admission orderset used for all admissions. Three time periods were examined: baseline (period 1), and the time after implementation of the first CDS intervention (period 2) and a second iteration (period 3). Providers were prompted to accept or decline prophylaxis based on patient risk. Time series analyses examined the impact of the intervention on VTE prophylaxis during time periods two and three compared to baseline, and a simple pre-post design examined impact on VTE events and bleeds secondary to anticoagulation. VTE prophylaxis and events were also examined in a prespecified surgical subset of our population meeting the public reporting criteria defined by the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicator (PSI). Results Unadjusted analyses suggested that “recommended”, “any”, and “pharmacologic” prophylaxis increased from baseline to the last study period (27.1% to 51.9%, 56.7% to 78.1%, and 42.0% to 54.4% respectively; p < 0.01 for all comparisons). Results were significant across all hospitals and the health system overall. Interrupted time series analyses suggested that our intervention increased the use of “recommended” and “any” prophylaxis by 7.9% and 9.6% respectively from baseline to time period 2 (p < 0.01 for both comparisons); and 6.6% and 9.6% respectively from baseline to the combined time periods 2 and 3 (p < 0.01 for both comparisons). There were no significant changes in “pharmacologic” prophylaxis in the adjusted model. The overall percent of patients with VTE increased from baseline to the last study period (2.0% to 2.2%; p = 0.03), but an analysis excluding patients with VTE “present on admission” (POA) demonstrated no difference in events (1.3% to 1.3%; p = 0.80). Overall bleeds did not significantly change. An analysis examining VTE prophylaxis and events in a surgical subset of patients defined by the AHRQ PSI demonstrated increased “recommended”, “any”, and “pharmacologic” prophylaxis from baseline to the last study period (32.3% to 60.0%, 62.8% to 85.7%, and 47.9% to 63.3% respectively; p < 0.01 for all comparisons) as well as reduced VTE events (2.2% to 1.7%; p < 0.01). Conclusions The CDS intervention was associated with an increase in “recommended” and “any” VTE prophylaxis across the multi-hospital academic health system. The intervention was also associated with increased VTE rates in the overall study population, but a subanalysis using only admissions with appropriate POA documentation suggested no change in VTE rates, and a prespecified analysis of a surgical subset of our sample as defined by the AHRQ PSI for public reporting purposes suggested reduced VTE. This intervention was created in a commonly used commercial EHR and is scalable across institutions with similar systems.
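
For readers unfamiliar with the interrupted time series approach mentioned above, the sketch below shows the general form of a segmented regression with level-change and slope-change terms around a go-live date. The data are synthetic placeholders purely to illustrate model structure; they are not results from this study, and the study's actual model specification may have differed.

```python
# Sketch of a segmented (interrupted) time series regression: estimate the
# change in a monthly rate after an intervention go-live. Synthetic data only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

months = np.arange(24)                    # study month index
post = (months >= 12).astype(int)         # 1 after an assumed go-live month
months_after = np.where(post == 1, months - 12, 0)
rng = np.random.default_rng(0)
rate = 0.55 + 0.002 * months + 0.08 * post + rng.normal(0, 0.01, 24)

df = pd.DataFrame({"rate": rate, "month": months,
                   "post": post, "months_after": months_after})

# "post" captures the level change at go-live; "months_after" the slope change.
model = smf.ols("rate ~ month + post + months_after", data=df).fit()
print(model.params[["post", "months_after"]])
```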


Infection Control and Hospital Epidemiology | 2017

The Impact of a Computerized Clinical Decision Support Tool on Inappropriate Clostridium difficile Testing.

Duncan R White; Keith Hamilton; David A. Pegues; Asaf Hanish; Craig A. Umscheid

OBJECTIVE To evaluate the effectiveness of a computerized clinical decision support intervention aimed at reducing inappropriate Clostridium difficile testing. DESIGN Retrospective cohort study. SETTING University of Pennsylvania Health System, comprised of 3 large tertiary-care hospitals. PATIENTS All adult patients admitted over a 2-year period. INTERVENTION Providers were required to use an order set integrated into a commercial electronic health record to order C. difficile toxin testing. The order set identified patients who had received laxatives within the previous 36 hours and displayed a message asking providers to consider stopping laxatives and reassessing in 24 hours prior to ordering C. difficile testing. Providers had the option to continue or discontinue laxatives and to proceed with or forgo testing. The primary endpoint was the change in inappropriate C. difficile testing, as measured by the number of patients who had C. difficile testing ordered while receiving laxatives. RESULTS Compared to the 1-year baseline period, the intervention resulted in a decrease in the proportion of inappropriate C. difficile testing (29.6% vs 27.3%; P=.02). The intervention was associated with an increase in the number of patients who had laxatives discontinued and did not undergo C. difficile testing (5.8% vs 46.4%; P<.01) and who had their laxatives discontinued and underwent testing (5.4% vs 35.2%; P<.01). We observed a nonsignificant increase in the proportion of patients with C. difficile related complications (5.0% vs 8.9%; P=.11). CONCLUSIONS A C. difficile order set was successful in decreasing inappropriate C. difficile testing and improving the timely discontinuation of laxatives. Infect Control Hosp Epidemiol 2017;38:1204-1208.
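
A minimal sketch of the order-set check described in this abstract (look back 36 hours for laxative administration when C. difficile testing is ordered and, if one is found, show the advisory) might look like the following; the function and field names are hypothetical.

```python
# Sketch of the 36-hour laxative lookback at C. difficile order entry.
# Names, data structures, and advisory wording are illustrative assumptions.

from datetime import datetime, timedelta
from typing import List, Optional

LOOKBACK = timedelta(hours=36)

ADVISORY = ("Patient received a laxative within the past 36 hours. "
            "Consider stopping laxatives and reassessing in 24 hours "
            "before ordering C. difficile testing.")


def cdiff_order_advisory(order_time: datetime,
                         laxative_admin_times: List[datetime]) -> Optional[str]:
    """Return the advisory text if a laxative was given in the lookback window."""
    if any(order_time - LOOKBACK <= t <= order_time for t in laxative_admin_times):
        return ADVISORY
    return None


# Example: a laxative given 10 hours before the order triggers the advisory.
print(cdiff_order_advisory(
    datetime(2016, 5, 1, 14, 0),
    [datetime(2016, 5, 1, 4, 0)],
))
```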


Infection Control and Hospital Epidemiology | 2014

Usability and impact of a computerized clinical decision support intervention designed to reduce urinary catheter utilization and catheter-associated urinary tract infections.

Charles A. Baillie; Mika Epps; Asaf Hanish; Neil O. Fishman; Benjamin French; Craig A. Umscheid

OBJECTIVE To evaluate the usability and effectiveness of a computerized clinical decision support (CDS) intervention aimed at reducing the duration of urinary tract catheterizations. DESIGN Retrospective cohort study. SETTING Academic healthcare system. PATIENTS All adult patients admitted from March 2009 through May 2012. INTERVENTION A CDS intervention was integrated into a commercial electronic health record. Providers were prompted at order entry to specify the indication for urinary catheter insertion. On the basis of the indication chosen, providers were alerted to reassess the need for the urinary catheter if it was not removed within the recommended time. Three time periods were examined: baseline, after implementation of the first intervention (stock reminder), and after a second iteration (homegrown reminder). The primary endpoint was the usability of the intervention as measured by the proportion of reminders through which providers submitted a remove urinary catheter order. Secondary endpoints were the urinary catheter utilization ratio and the rate of hospital-acquired catheter-associated urinary tract infections (CAUTIs). RESULTS The first intervention displayed limited usability, with 2% of reminders resulting in a remove order. Usability improved to 15% with the revised reminder. The catheter utilization ratio declined over the 3 time periods (0.22, 0.20, and 0.19, respectively; P < .001), as did CAUTIs per 1,000 patient-days (0.84, 0.70, and 0.51, respectively; P < .001). CONCLUSIONS A urinary catheter removal reminder system was successfully integrated within a healthcare system's electronic health record. The usability of the reminder was highly dependent on its user interface, with a homegrown version of the reminder resulting in higher impact than a stock reminder.
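
The indication-driven reassessment reminder described in this abstract can be sketched as below; the list of indications and their recommended dwell times are illustrative assumptions, not the health system's actual configuration.

```python
# Sketch of an indication-based catheter reassessment reminder: a reminder
# fires once the catheter exceeds the recommended duration for its indication.
# Indications and durations below are illustrative assumptions.

from datetime import datetime, timedelta

# Hypothetical recommended dwell times per indication, in days.
RECOMMENDED_DURATION_DAYS = {
    "perioperative": 1,
    "urinary_retention": 3,
    "strict_output_monitoring": 2,
}


def catheter_reminder_due(indication: str,
                          insertion_time: datetime,
                          now: datetime) -> bool:
    """True if the catheter has exceeded the recommended duration for its indication."""
    days = RECOMMENDED_DURATION_DAYS.get(indication)
    if days is None:
        return False  # unknown indication: no automated reminder
    return now - insertion_time > timedelta(days=days)


# Example: a perioperative catheter still in place after two days is flagged.
print(catheter_reminder_due("perioperative",
                            datetime(2011, 3, 1, 8, 0),
                            datetime(2011, 3, 3, 8, 0)))  # True
```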


Annals of the American Thoracic Society | 2015

Clinician Perception of the Effectiveness of an Automated Early Warning and Response System for Sepsis in an Academic Medical Center

Jessica L. Guidi; Katherine Clark; Mark Upton; Hilary Faust; Craig A. Umscheid; Meghan B. Lane-Fall; Mark E. Mikkelsen; William D. Schweickert; Christine VanZandbergen; Joel S. Betesh; Gordon Tait; Asaf Hanish; Kirsten Smith; Denise Feeley; Barry D. Fuchs

RATIONALE We implemented an electronic early warning and response system (EWRS) to improve detection of and response to severe sepsis. Sustainability of such a system requires stakeholder acceptance. We hypothesized that clinicians receiving such alerts perceive them to be useful and effective. OBJECTIVES To survey clinicians after EWRS notification about perceptions of the system. METHODS For a 6-week study period 1 month after EWRS implementation in a large tertiary referral medical center, bedside clinicians, including providers (physicians, advanced practice providers) and registered nurses (RNs), were surveyed confidentially within 2 hours of an alert. MEASUREMENTS AND MAIN RESULTS For the 247 alerts that triggered, 127 providers (51%) and 105 RNs (43%) completed the survey. Clinicians perceived most patients as stable before and after the alert. Approximately half (39% providers, 48% RNs) felt the alert provided new information, and about half (44% providers, 56% RNs) reported changes in management as a result of the alert, including closer monitoring and additional interventions. Over half (54% providers, 65% RNs) felt the alert was appropriately timed. Approximately one-third found the alert helpful (33% providers, 40% RNs) and fewer felt it improved patient care (24% providers, 35% RNs). CONCLUSIONS A minority of responders perceived the EWRS to be useful, likely related to the perception that most patients identified were stable. However, management was altered half the time after an alert. These results suggest further improvements to the system are needed to enhance clinician perception of the system's utility.

Collaboration


Dive into Asaf Hanish's collaborations.

Top Co-Authors

Craig A. Umscheid
University of Pennsylvania

Barry D. Fuchs
University of Pennsylvania

Benjamin French
University of Pennsylvania

Mark E. Mikkelsen
University of Pennsylvania

Gordon Tait
University of Pennsylvania

David F. Gaieski
Thomas Jefferson University

Joel S. Betesh
University of Pennsylvania