Dennis H. Murphree
Mayo Clinic
Publications
Featured research published by Dennis H. Murphree.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Dennis H. Murphree; Che Ngufor; Sudhindra Upadhyaya; Nageswar R. Madde; Leanne Clifford; Daryl J. Kor; Jyotishman Pathak
Of the 21 million blood components transfused in the United States during 2011, approximately 1 in 414 resulted in a complication [1]. Two complications in particular, transfusion-related acute lung injury (TRALI) and transfusion-associated circulatory overload (TACO), are especially concerning. These two alone accounted for 62% of reported transfusion-related fatalities in 2013 [2]. We have previously developed a set of machine learning base models for predicting the likelihood of these adverse reactions, with the goal of better informing the clinician prior to a transfusion decision. Here we describe recent work incorporating ensemble learning approaches to predicting TACO/TRALI. In particular, we describe combining base models via majority voting, stacking of model sets with varying diversity, and a resampling/boosting combination algorithm called RUSBoost. We find that while the performance of many individual models is very good, the ensemble models do not yield significantly better performance in terms of AUC.
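The ensembling strategies named above are standard; a minimal sketch of how such an experiment might look, assuming generic scikit-learn base models and synthetic data in place of the paper's transfusion cohort. Soft voting stands in for majority voting so that an AUC can be computed, and RUSBoost itself is available separately as RUSBoostClassifier in the imbalanced-learn package.

```python
# Illustrative sketch only: base models and data are stand-ins, not the paper's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))]

# Voting over the base models (soft voting, so predict_proba is available).
vote = VotingClassifier(base, voting="soft").fit(X_tr, y_tr)
# Stacking: a logistic regression meta-learner over base-model predictions.
stack = StackingClassifier(base, final_estimator=LogisticRegression()).fit(X_tr, y_tr)

for name, model in [("voting", vote), ("stacking", stack)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```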
IEEE International Conference on Healthcare Informatics | 2015
Dennis H. Murphree; Leanne Clifford; Yaxiong Lin; Nagesh Madde; Che Ngufor; Sudhindra Upadhyaya; Jyotishman Pathak; Daryl J. Kor
In 2011 approximately 21 million blood components were transfused in the United States, with roughly 1 in 414 causing an adverse reaction [1]. Two adverse reactions in particular, transfusion-related acute lung injury (TRALI) and transfusion-associated circulatory overload (TACO), accounted for 62% of reported transfusion-related fatalities in 2013 [2]. We describe newly developed models for predicting the likelihood of these adverse reactions, with the goal of better informing the clinician prior to a transfusion decision. Our models include both traditional logistic regression and modern machine learning techniques, and incorporate oversampling methods to deal with severe class imbalance. We focus on a minimal set of predictors in order to maximize potential application. Results from 8 models demonstrate AUCs ranging from 0.72 to 0.84, with sensitivities tunable by threshold choice across ranges up to 0.93. Many of the models rank the same predictors amongst the most important, perhaps yielding insight into the mechanisms underlying TRALI and TACO. These models are currently being implemented in a Clinical Decision Support System [3] in perioperative environments at Mayo Clinic.
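A minimal sketch of the oversampling-plus-threshold-tuning pattern described above, assuming SMOTE from imbalanced-learn as the oversampling method (the abstract does not name one) and synthetic imbalanced data in place of the transfusion cohort.

```python
# Sketch under assumptions: SMOTE and synthetic data are stand-ins.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.98], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Oversample only the training fold, never the evaluation fold.
X_res, y_res = SMOTE(random_state=1).fit_resample(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

probs = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, probs), 3))
# Sensitivity is tunable by the decision threshold, as in the paper.
for thr in (0.5, 0.3, 0.1):
    sens = recall_score(y_te, probs >= thr)
    print(f"threshold {thr}: sensitivity = {sens:.2f}")
```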
IEEE International Conference on Data Science and Advanced Analytics | 2015
Che Ngufor; Sudhindra Upadhyaya; Dennis H. Murphree; Daryl J. Kor; Jyotishman Pathak
In blood transfusion studies, it is often desirable before a surgical procedure to estimate the likelihood of a patient bleeding, the need for blood products, re-operation due to bleeding, and other important patient outcomes. Such prediction rules are crucial in allowing for optimal planning, more efficient use of blood bank resources, and identification of high-risk patient cohorts for specific perioperative interventions. The goal of this study is to present a simple and efficient algorithm that can estimate the risk of multiple outcomes simultaneously. Specifically, a heterogeneous multi-task learning method is presented for learning important surgical outcomes such as bleeding, intraoperative RBC transfusion, need for ICU care, length of stay, and mortality. To improve the performance of the method, a post-learning strategy is implemented to further learn the relationship between the trained tasks via a simple “goodness of fit” measure. Specifically, two tasks are considered similar if the model parameters of one task improve the predictive performance of the other. This strategy allows tasks to be grouped into clusters where selective cross-task transfer of knowledge is explicitly encouraged. To further improve prediction accuracy, a number of operative measurements or surgical outcomes whose predictions are not of direct interest are incorporated in the multi-task model as supplementary tasks to donate information and help the performance of relevant tasks. Results for predicting bleeding and need for blood transfusion for patients undergoing non-cardiac operations, using an institutional transfusion datamart, show that the proposed methods can improve prediction accuracy over standard single-task learning methods. Additional experiments on a real, publicly available data set show that the method is accurate and competitive with some existing methods in the literature.
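A toy illustration of the post-learning “goodness of fit” idea, under the assumption that tasks share one feature space: each task's fitted model is scored on every other task, and high cross-task AUC marks tasks as candidates for grouping. All data, task names, and model choices here are illustrative, not the paper's.

```python
# Toy sketch of the cross-task "goodness of fit" grouping; synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
# Three binary tasks: two related (shared weights, different noise), one unrelated.
tasks = {
    "bleeding": (X @ w + rng.normal(scale=0.5, size=n)) > 0,
    "rbc_transfusion": (X @ w + rng.normal(scale=1.0, size=n)) > 0,
    "icu_care": (X @ rng.normal(size=d)) > 0,
}

models = {t: LogisticRegression(max_iter=1000).fit(X, y) for t, y in tasks.items()}

# Cross-task transfer check: does task i's model predict task j well?
for ti, mi in models.items():
    for tj, yj in tasks.items():
        if ti != tj:
            auc = roc_auc_score(yj, mi.predict_proba(X)[:, 1])
            print(f"{ti} -> {tj}: AUC = {auc:.2f}")
```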
Artificial Intelligence in Medicine in Europe | 2015
Che Ngufor; Sudhindra Upadhyaya; Dennis H. Murphree; Nageswar R. Madde; Daryl J. Kor; Jyotishman Pathak
It would be desirable before a surgical procedure to have a prediction rule that could accurately estimate the probability of a patient bleeding, the need for blood transfusion, and other important outcomes. Such a prediction rule would allow optimal planning, more efficient use of blood bank resources, and identification of high-risk patient cohorts for specific perioperative interventions. The goal of this study is to develop an efficient and accurate algorithm that can estimate the risk of multiple outcomes simultaneously. Specifically, a heterogeneous multi-task learning method is proposed for learning outcomes such as perioperative bleeding, intraoperative RBC transfusion, ICU care, and ICU length of stay. Additional outcomes not normally predicted are incorporated in the model for transfer learning and help improve the performance of relevant outcomes. Results for predicting perioperative bleeding and need for blood transfusion for patients undergoing non-cardiac operations, using an institutional transfusion datamart, show that the proposed method significantly increases AUC and G-Mean by more than 6% and 5%, respectively, over standard single-task learning methods.
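For reference, the G-Mean reported above is the geometric mean of sensitivity and specificity; a minimal implementation for binary labels:

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity for binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    sens = tp / max(np.sum(y_true), 1)   # true positive rate
    spec = tn / max(np.sum(~y_true), 1)  # true negative rate
    return np.sqrt(sens * spec)

print(g_mean([1, 1, 0, 0, 0], [1, 0, 0, 0, 1]))  # sqrt(0.5 * 2/3) ~= 0.577
```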
IEEE International Conference on Healthcare Informatics | 2015
Dennis H. Murphree; Leanne Clifford; Yaxiong Lin; Nagesh Madde; Che Ngufor; Sudhindra Upadhyaya; Jyotishman Pathak; Daryl J. Kor
During 2011 approximately 21 million blood components were transfused in the United States, with roughly 1 in 414 resulting in a complication. For Americans, the two leading causes of transfusion-related death are the respiratory complications transfusion-related acute lung injury (TRALI) and transfusion-associated circulatory overload (TACO). Each of these complications results in significantly longer ICU and hospital stays as well as significantly greater rates of mortality. We have developed a set of machine learning models for predicting the likelihood of these adverse reactions in surgical populations. Here we describe deploying these models into a perioperative critical care environment via a continuous monitoring and alerting clinical decision support system. The goal of this system, which directly integrates our suite of machine learning models running in the R statistical environment into a traditional health information system, is to improve transfusion-related outcomes in the perioperative environment. By identifying high-risk patients prior to transfusion, the clinical team may be able to choose a more appropriate therapy or therapeutic course. Identifying high-risk patients for increased observation after transfusion may also allow for more timely intervention, thereby potentially improving care delivery and resulting patient outcomes. An early prototype of this system is currently running in two Mayo Clinic perioperative environments.
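A highly simplified sketch of the continuous monitoring-and-alerting pattern described above. Every name here (fetch_active_patients, score_risk, send_alert, the alert threshold) is a hypothetical placeholder; the deployed system integrates R-hosted models with a hospital information system, neither of which is shown.

```python
# Hypothetical skeleton of a polling-based monitoring/alerting loop.
import time

RISK_THRESHOLD = 0.8  # assumed alerting cutoff, not from the paper

def fetch_active_patients():
    # Placeholder: would query the perioperative information system.
    return [{"id": "demo", "features": [0.1, 0.2, 0.3]}]

def score_risk(features):
    # Placeholder: would invoke the TRALI/TACO models (e.g., over a bridge
    # to the R environment) and return a probability.
    return 0.5

def send_alert(patient_id, risk):
    print(f"ALERT: patient {patient_id} TRALI/TACO risk {risk:.2f}")

def poll_once():
    for patient in fetch_active_patients():
        risk = score_risk(patient["features"])
        if risk >= RISK_THRESHOLD:
            send_alert(patient["id"], risk)

# In production this would run continuously, e.g.:
#   while True: poll_once(); time.sleep(60)
poll_once()
```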
Journal of the American Heart Association | 2018
Jacob Jentzer; Courtney Bennett; Brandon M. Wiley; Dennis H. Murphree; Mark T. Keegan; Ognjen Gajic; R. Scott Wright; Gregory W. Barsness
Background Optimal methods of mortality risk stratification in patients in the cardiac intensive care unit (CICU) remain uncertain. We evaluated the ability of the Sequential Organ Failure Assessment (SOFA) score to predict mortality in a large cohort of unselected patients in the CICU. Methods and Results Adult patients admitted to the CICU from January 1, 2007, to December 31, 2015, at a single tertiary care hospital were retrospectively reviewed. SOFA scores were calculated daily, and Acute Physiology and Chronic Health Evaluation (APACHE)-III and APACHE-IV scores were calculated on CICU day 1. Discrimination of hospital mortality was assessed using area under the receiver-operator characteristic curve values. We included 9961 patients, with a mean age of 67.5±15.2 years; all-cause hospital mortality was 9.0%. Day 1 SOFA score predicted hospital mortality, with an area under the receiver-operator characteristic curve value of 0.83; values were similar for the APACHE-III score and APACHE-IV predicted mortality (P>0.05). Mean and maximum SOFA scores over multiple CICU days had greater discrimination for hospital mortality (P<0.01). Patients with an increasing SOFA score from day 1 to day 2 had higher mortality. Patients with a day 1 SOFA score <2 were at low risk of mortality. Increasing tertiles of day 1 SOFA score predicted higher long-term mortality (P<0.001 by log-rank test). Conclusions The day 1 SOFA score has good discrimination for short-term mortality in unselected patients in the CICU, comparable to APACHE-III and APACHE-IV. Advantages of the SOFA score over APACHE include simplicity, improved discrimination using serial scores, and prediction of long-term mortality.
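A sketch of the discrimination comparison described above (day 1 versus mean and maximum scores across CICU days), using simulated daily scores and outcomes rather than the study's SOFA data:

```python
# Toy comparison of day-1 vs. aggregated daily severity scores; data simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_days = 1000, 3
daily = rng.normal(size=(n_patients, n_days)).cumsum(axis=1)  # toy daily scores
severity = daily.mean(axis=1)
died = rng.random(n_patients) < 1 / (1 + np.exp(-severity))   # toy outcome

for label, score in [("day 1", daily[:, 0]),
                     ("mean", daily.mean(axis=1)),
                     ("max", daily.max(axis=1))]:
    print(f"{label} score: AUROC = {roc_auc_score(died, score):.3f}")
```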
Studies in Health Technology and Informatics | 2015
Che Ngufor; Dennis H. Murphree; Sudhindra Upadhyaya; Nageswar R. Madde; Daryl J. Kor; Jyotishman Pathak
Perioperative bleeding (PB) is associated with increased patient morbidity and mortality, and results in substantial health care resource utilization. To assess bleeding risk, a routine practice in most centers is to use indicators such as elevated values of the International Normalized Ratio (INR). For patients with elevated INR, the routine therapy option is plasma transfusion. However, the predictive accuracy of INR and the value of plasma transfusion remain unclear. Accurate methods are therefore needed to identify early those patients with increased risk of bleeding. The goal of this work is to apply advanced machine learning methods to study the relationship between preoperative plasma transfusion (PPT) and PB in patients with elevated INR undergoing noncardiac surgery. The problem is cast in the framework of causal inference, where robust, meaningful measures quantifying the effect of PPT on PB are estimated. Results show that both machine learning and standard statistical methods generally agree that PPT negatively impacts PB and other important patient outcomes. However, the machine learning methods show significant results, and machine learning boosting methods are found to make fewer errors in predicting PB.
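The abstract does not specify the exact estimators, but inverse-propensity weighting with a machine-learned propensity model is one common way to cast such a question as causal inference; a sketch on synthetic data:

```python
# IPW sketch: estimated risk difference of PPT on PB; all data synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))                       # preoperative covariates
p_treat = 1 / (1 + np.exp(-X[:, 0]))              # confounded treatment assignment
ppt = rng.random(n) < p_treat                     # preoperative plasma transfusion
p_bleed = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.8 * ppt)))
pb = rng.random(n) < p_bleed                      # perioperative bleeding

# Machine-learned propensity scores, then the IPW estimate of the average
# treatment effect as a risk difference between treated and untreated.
ps = GradientBoostingClassifier().fit(X, ppt).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                      # guard against extreme weights
ate = np.mean(pb * ppt / ps) - np.mean(pb * (1 - ppt) / (1 - ps))
print(f"estimated effect of PPT on PB (risk difference): {ate:.3f}")
```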
Critical Care Medicine | 2018
Jacob Jentzer; Dennis H. Murphree; Kianoush Banaei-Kashani
Learning Objectives: Few studies have examined whether the predictive value of the Acute Physiology and Chronic Health Evaluation (APACHE)-3 score and APACHE-4 predicted mortality model varies across different intensive care unit (ICU) populations. We hypothesized that discrimination of APACHE-3 and APACHE-4 for inpatient mortality would differ across ICU populations. Methods: Cohort study of 71,987 adult ICU patients admitted to a single academic medical center from 2007 to 2013. APACHE-3 score and APACHE-4 predicted mortality were calculated electronically on the first ICU day. Medical ICU (MICU), cardiac ICU (CICU), surgical ICU (SICU), cardiovascular surgical ICU (CVICU) and mixed ICU populations were compared using chi-squared tests and ANOVA. Discrimination of APACHE-3 and APACHE-4 for inpatient mortality was evaluated using area under the receiver-operator curve (AUROC) analysis, followed by multivariate logistic regression. Results: There were 16,857 (23%) MICU patients, 8,805 (12%) CICU patients, 19,987 (28%) SICU patients, 15,631 (22%) CVICU patients and 10,707 (15%) mixed ICU patients. Mean age was 63 ± 17 years and 58% were male. Mechanical ventilation was used in 40% of patients and vasopressors were used in 27%. Mean APACHE-3 score was 66.8 and mean APACHE-4 predicted mortality was 18.3%, with significant variation across ICUs (p < 0.001). Overall inpatient mortality was 6.8%, and varied significantly across ICUs (p < 0.001): MICU (11.5%), CICU (7.2%), SICU (4.4%), CVICU (2.5%), mixed ICU (9.8%). APACHE-3 and APACHE-4 had very good discrimination for inpatient mortality overall (AUROC 0.80 and 0.83, respectively). AUROC values of APACHE-3 varied between ICUs: MICU (0.78), CICU (0.80), SICU (0.82), CVICU (0.78), mixed ICU (0.77). AUROC values of APACHE-4 likewise varied between ICUs: MICU (0.81), CICU (0.82), SICU (0.85), CVICU (0.79), mixed ICU (0.85). Admission ICU was a significant predictor of inpatient mortality after adjustment for either APACHE-3 or APACHE-4, with a significant interaction between admission ICU and either score (p < 0.001). Conclusions: Both the APACHE-3 score and APACHE-4 predicted mortality had very good discrimination for inpatient mortality in a large mixed ICU population, with variation across ICU types. Admission ICU type is an effect modifier when using either APACHE-3 or APACHE-4 to predict inpatient mortality.
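A sketch of the adjustment and interaction analysis described in the Results, using a formula-based logistic regression in statsmodels; the column names and simulated data are hypothetical stand-ins:

```python
# Logistic regression with an ICU-by-severity interaction; simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "apache3": rng.normal(67, 25, n),
    "icu": rng.choice(["MICU", "CICU", "SICU", "CVICU", "mixed"], n),
})
logit = 0.04 * (df["apache3"] - 67) - 2.5
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# ICU type as an effect modifier: apache3 * C(icu) expands to main effects
# plus the interaction terms tested in the abstract.
model = smf.logit("died ~ apache3 * C(icu)", data=df).fit()
print(model.summary())
```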
Computers in Biology and Medicine | 2018
Dennis H. Murphree; Elaheh Arabmakki; Che Ngufor; Curtis B. Storlie; Rozalina G. McCoy
OBJECTIVE Metformin is the preferred first-line medication for management of type 2 diabetes and prediabetes. However, over a third of patients experience primary or secondary therapeutic failure. We developed machine learning models to predict which patients initially prescribed metformin will achieve and maintain control of their blood glucose after one year of therapy. MATERIALS AND METHODS We performed a retrospective analysis of administrative claims data for 12,147 commercially-insured adults and Medicare Advantage beneficiaries with prediabetes or diabetes. Several machine learning models were trained using variables available at the time of metformin initiation to predict achievement and maintenance of hemoglobin A1c (HbA1c) < 7.0% after one year of therapy. RESULTS AUCs based on five-fold cross-validation ranged from 0.58 to 0.75. The most influential variables driving the predictions were baseline HbA1c, starting metformin dosage, and presence of diabetes with complications. CONCLUSIONS Machine learning models can effectively predict primary or secondary metformin treatment failure within one year. This information can help identify effective individualized treatment strategies. Most of the implemented models outperformed traditional logistic regression, highlighting the potential for applying machine learning to problems in medicine.
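A sketch of the evaluation protocol described above (five-fold cross-validated AUC plus variable importances), assuming a gradient-boosting model and hypothetical feature names in place of the claims-derived predictors:

```python
# Five-fold CV AUC and feature importances; synthetic data, hypothetical names.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=8, n_informative=3,
                           random_state=0)
features = ["baseline_hba1c", "metformin_dose", "dm_with_complications",
            "age", "sex", "comorbidity_count", "insurance_type", "region"]

clf = GradientBoostingClassifier(random_state=0)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")

# Variable importances, as used to identify the most influential predictors.
clf.fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_),
                        key=lambda t: -t[1])[:3]:
    print(f"{name}: importance = {imp:.3f}")
```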
American Journal of Cardiology | 2018
Jacob Jentzer; Dennis H. Murphree; Brandon M. Wiley; Courtney Bennett; Michael Goldfarb; Mark T. Keegan; Joseph G. Murphy; R. Scott Wright; Gregory W. Barsness
Older adults account for an increasing number of cardiac intensive care unit (CICU) admissions. This study sought to determine the predictive value of illness severity scores for mortality in CICU patients ≥70 years of age. Adult patients admitted to the CICU from 2007 to 2015 at one tertiary care hospital were reviewed. Severity of illness scores were calculated on the first CICU day. Area under the receiver-operator characteristic curve (AUROC) values were used to assess discrimination for hospital mortality in patients ≥70 versus <70 years of age. We included 10,004 patients with a mean age of 67.4 ± 15.2 years (37.4% female); 4,771 patients (47.7%) were ≥70 years of age. Patients ≥70 years of age had greater illness severity and more extensive co-morbidities compared with patients <70 years of age. Patients ≥70 years of age had higher hospital mortality (11.6% vs 6.8%, odds ratio 1.80, 95% confidence interval 1.57 to 2.07, p <0.001), with a progressive increase in mortality as a function of decade. Severity of illness scores had lower AUROC values for hospital mortality in patients ≥70 years of age compared with patients <70 years of age (all p <0.05 by DeLong test). The Braden skin score on CICU admission predicted hospital mortality with an AUROC value only slightly lower than these scores. Increasing age decade was associated with decreased postdischarge survival by Kaplan-Meier analysis (p <0.001 by log-rank). In conclusion, contemporary CICU patients ≥70 years of age have greater illness severity, more co-morbidities and higher mortality than patients <70 years of age, yet severity of illness scores are less accurate for predicting mortality in CICU patients ≥70 years of age, emphasizing the need for more effective risk-stratification methods in this population.
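A sketch of the subgroup discrimination comparison, substituting a simple bootstrap confidence interval for the DeLong test used in the paper, on simulated data in which the score is constructed to be less informative in older patients:

```python
# Bootstrap AUROC comparison across age subgroups; simulated data only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4000
age = rng.integers(40, 95, n)
score = rng.normal(size=n)
# Toy outcome: the score carries less signal in patients >= 70.
weight = np.where(age >= 70, 0.5, 1.5)
died = rng.random(n) < 1 / (1 + np.exp(-(weight * score - 2.5)))

def boot_auc(mask, n_boot=500):
    """Bootstrap percentile CI for AUROC within a subgroup."""
    y, s = died[mask], score[mask]
    idx = np.arange(y.size)
    aucs = []
    for _ in range(n_boot):
        b = rng.choice(idx, idx.size, replace=True)
        if y[b].min() != y[b].max():        # need both classes in resample
            aucs.append(roc_auc_score(y[b], s[b]))
    return np.percentile(aucs, [2.5, 50, 97.5])

for label, mask in [(">=70", age >= 70), ("<70", age < 70)]:
    lo, med, hi = boot_auc(mask)
    print(f"{label}: AUROC {med:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```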