Michael J. Englesbe
University of Michigan
Publications
Featured research published by Michael J. Englesbe.
Journal of The American College of Surgeons | 2010
Michael J. Englesbe; Shaun P. Patel; Kevin He; Raymond J. Lynch; Douglas E. Schaubel; Calista M. Harbaugh; Sven Holcombe; Stewart C. Wang; Dorry L. Segev; Christopher J. Sonnenday
BACKGROUND Surgeons frequently struggle to determine patient suitability for liver transplantation. Objective and comprehensive measures of overall burden of disease, such as sarcopenia, could inform clinicians and help avoid futile transplantations. STUDY DESIGN The cross-sectional area of the psoas muscle was measured on CT scans of 163 liver transplant recipients. After controlling for donor and recipient characteristics using Cox regression models, we described the relationship between psoas area and post-transplantation mortality. RESULTS Psoas area correlated poorly with Model for End-Stage Liver Disease score and serum albumin. Cox regression revealed a strong association between psoas area and post-transplantation mortality (hazard ratio = 3.7 per 1,000 mm² decrease in psoas area; p < 0.0001). When stratified into quartiles based on psoas area (holding donor and recipient characteristics constant), 1-year survival ranged from 49.7% for the quartile with the smallest psoas area to 87.0% for the quartile with the largest. Survival at 3 years among these groups was 26.4% and 77.2%, respectively. The impact of psoas area on survival exceeded that of all other covariates in these models. CONCLUSIONS Central sarcopenia strongly correlates with mortality after liver transplantation. Objective measures of patient frailty, such as sarcopenia, can inform clinical decision making and, potentially, allocation policy. Additional work is needed to develop valid and clinically relevant measures of sarcopenia and frailty in liver transplantation.
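The quartile survival rates reported above are the kind of figures typically produced by a Kaplan-Meier estimator. A minimal pure-Python sketch of that estimator, using illustrative toy data (not data from the study):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times  : follow-up time for each patient
    events : 1 if death was observed at that time, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # group all patients tied at the same event time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
    return curve

# illustrative toy data: 6 patients, follow-up times in years
times = [0.5, 1.0, 1.0, 2.0, 3.0, 3.5]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

In practice a library implementation (e.g. the `lifelines` package) would be used; the sketch above only shows the mechanics of censoring and risk sets.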
Anesthesiology | 2007
Sachin Kheterpal; Kevin K. Tremper; Michael J. Englesbe; Michael O'Reilly; Amy Shanks; Douglas M. Fetterman; Andrew L. Rosenberg; Richard D. Swartz
Background: The authors investigated the incidence and risk factors for postoperative acute renal failure after major noncardiac surgery among patients with previously normal renal function. Methods: Adult patients undergoing major noncardiac surgery with a preoperative calculated creatinine clearance of 80 ml/min or greater were included in a prospective, observational study at a single tertiary care university hospital. Patients were followed for the development of acute renal failure (defined as a calculated creatinine clearance of 50 ml/min or less) within the first 7 postoperative days. Patient preoperative characteristics and intraoperative anesthetic management were evaluated for associations with acute renal failure. Thirty-day, 60-day, and 1-yr all-cause mortality was also evaluated. Results: A total of 65,043 cases between 2003 and 2006 were reviewed. Of these, 15,102 patients met the inclusion criteria; 121 patients developed acute renal failure (0.8%), and 14 required renal replacement therapy (0.1%). Seven independent preoperative predictors were identified (P < 0.05): age, emergent surgery, liver disease, body mass index, high-risk surgery, peripheral vascular occlusive disease, and chronic obstructive pulmonary disease necessitating chronic bronchodilator therapy. Several intraoperative management variables were independent predictors of acute renal failure: total vasopressor dose administered, use of a vasopressor infusion, and diuretic administration. Acute renal failure was associated with increased 30-day, 60-day, and 1-yr all-cause mortality. Conclusions: Several preoperative predictors previously reported to be associated with acute renal failure after cardiac surgery were also found to be associated with acute renal failure after noncardiac surgery. The use of vasopressors and diuretics was also associated with acute renal failure.
Anesthesiology | 2009
Sachin Kheterpal; Kevin K. Tremper; Michael Heung; Andrew L. Rosenberg; Michael J. Englesbe; Amy Shanks; Darrell A. Campbell
Background: The authors sought to identify the incidence, risk factors, and mortality impact of acute kidney injury (AKI) after general surgery using a large and representative national clinical data set. Methods: The 2005–2006 American College of Surgeons–National Surgical Quality Improvement Program participant use data file is a compilation of outcome data from general surgery procedures performed in 121 US medical centers. The primary outcome was AKI within 30 days, defined as an increase in serum creatinine of at least 2 mg/dl or acute renal failure necessitating dialysis. A variety of patient comorbidities and operative characteristics were evaluated as possible predictors of AKI. A logistic regression full model fit was used to create an AKI model and risk index. Thirty-day mortality among patients with and without AKI was compared. Results: Of 152,244 operations reviewed, 75,952 met the inclusion criteria, and 762 (1.0%) were complicated by AKI. The authors identified 11 independent preoperative predictors: age 56 yr or older, male sex, emergency surgery, intraperitoneal surgery, diabetes mellitus necessitating oral therapy, diabetes mellitus necessitating insulin therapy, active congestive heart failure, ascites, hypertension, mild preoperative renal insufficiency, and moderate preoperative renal insufficiency. The c statistic for a simplified risk index was 0.80 in the derivation and validation cohorts. Class V patients (six or more risk factors) had a 9% incidence of AKI. Overall, patients experiencing AKI had an eightfold increase in 30-day mortality. Conclusions: Approximately 1% of general surgery cases are complicated by AKI. The authors have developed a robust risk index based on easily identified preoperative comorbidities and patient characteristics.
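A simplified additive risk index like the one described works by counting how many of the preoperative predictors a patient has and mapping the count to a risk class. A sketch of that pattern is below; only "class V = six or more risk factors" comes from the abstract, so the lower class boundaries here are assumptions for illustration:

```python
# the 11 independent preoperative predictors listed in the abstract
PREDICTORS = [
    "age >= 56", "male sex", "emergency surgery", "intraperitoneal surgery",
    "diabetes, oral therapy", "diabetes, insulin therapy",
    "active congestive heart failure", "ascites", "hypertension",
    "mild preoperative renal insufficiency",
    "moderate preoperative renal insufficiency",
]

def aki_risk_class(patient_flags):
    """Count the predictors present and map the count to a risk class.

    The abstract defines class V as six or more risk factors; the
    boundaries for classes II-IV below are assumed, not from the study.
    """
    n = sum(1 for p in PREDICTORS if patient_flags.get(p, False))
    for threshold, label in ((6, "V"), (4, "IV"), (3, "III"), (2, "II")):
        if n >= threshold:
            return label
    return "I"

patient = {"age >= 56": True, "male sex": True, "emergency surgery": True,
           "ascites": True, "hypertension": True,
           "active congestive heart failure": True}
print(aki_risk_class(patient))  # "V" (six factors present)
```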
Annals of Surgery | 2006
Robert M. Merion; Shawn J. Pelletier; Nathan P. Goodrich; Michael J. Englesbe; Francis L. Delmonico
Objective: This study examines donation after cardiac death (DCD) practices and outcomes in liver transplantation. Summary Background Data: Livers procured from DCD donors have recently been used to increase the number of deceased donors and bridge the gap between limited organ supply and the pool of waiting list candidates. Comprehensive evaluation of this practice and its outcomes has not been previously reported. Methods: A national cohort of all DCD and donation after brain-death (DBD) liver transplants between January 1, 2000 and December 31, 2004 was identified in the Scientific Registry of Transplant Recipients. Time to graft failure (including death) was modeled by Cox regression, adjusted for relevant donor and recipient characteristics. Results: DCD livers were used for 472 (2%) of 24,070 transplants. Annual DCD liver activity increased from 39 in 2000 to 176 in 2004. The adjusted relative risk of DCD graft failure was 85% higher than for DBD grafts (relative risk, 1.85; 95% confidence interval, 1.51–2.26; P < 0.001), corresponding to 3-month, 1-year, and 3-year graft survival rates of 83.0%, 70.1%, and 60.5%, respectively (vs. 89.2%, 83.0%, and 75.0% for DBD recipients). There was no significant association between transplant program DCD liver transplant volume and graft outcome. Conclusions: The annual number of DCD livers used for transplant has increased rapidly. However, DCD livers are associated with a significantly increased risk of graft failure unrelated to modifiable donor or recipient factors. Appropriate recipients for DCD livers have not been fully characterized, and recipient informed consent should be obtained before use of these organs.
Journal of The American College of Surgeons | 2008
Darrell A. Campbell; William G. Henderson; Michael J. Englesbe; Bruce L. Hall; Michael O'Reilly; Dale W. Bratzler; E. Patchen Dellinger; Leigh Neumayer; Barbara L. Bass; Matthew M. Hutter; James Schwartz; Clifford Y. Ko; Kamal M.F. Itani; Steven M. Steinberg; Allan Siperstein; Robert G. Sawyer; Douglas J. Turner; Shukri F. Khuri
BACKGROUND Surgical site infections (SSI) continue to be a significant problem in surgery. The American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP) Best Practices Initiative compared process and structural characteristics among 117 private sector hospitals in an effort to define best practices aimed at preventing SSI. STUDY DESIGN Using standard NSQIP methodologies, we identified 20 low outlier and 13 high outlier hospitals for SSI using data from the ACS-NSQIP in 2006. Each hospital was administered a process-of-care survey, and site visits were conducted to five hospitals. Comparisons between the low and high outlier hospitals were made with regard to patient characteristics, operative variables, structural variables, and processes of care. RESULTS Hospitals that were high outliers for SSI had higher trainee-to-bed ratios (0.61 versus 0.25, p < 0.0001), and the operations took significantly longer (128.3+/-104.3 minutes versus 102.7+/-83.9 minutes, p < 0.001). Patients operated on at low outlier hospitals were less likely to present to the operating room anemic (4.9% versus 9.7%, p=0.007) or to receive a transfusion (5.1% versus 8.0%, p=0.03). In general, perioperative policies and practices were very similar between the low and high outlier hospitals, although low outlier hospitals were readily identified by site visitors. Overall, low outlier hospitals were smaller, efficient in the delivery of care, and experienced little operative staff turnover. CONCLUSIONS Our findings suggest that evidence-based SSI prevention practices do not easily distinguish well-performing from poorly performing hospitals. However, structural and process-of-care characteristics of hospitals were found to have a significant association with good results.
Anesthesiology | 2009
Sachin Kheterpal; Michael O'Reilly; Michael J. Englesbe; Andrew L. Rosenberg; Amy Shanks; Lingling Zhang; Edward D. Rothman; Darrell A. Campbell; Kevin K. Tremper
Background: The authors sought to determine the incidence and risk factors for perioperative cardiac adverse events (CAEs) after noncardiac surgery using detailed preoperative and intraoperative hemodynamic data. Methods: The authors conducted a prospective observational study at a single university hospital from 2002 to 2006. All American College of Surgeons–National Surgical Quality Improvement Program patients undergoing general, vascular, and urological surgery were included. The CAE outcome definition included cardiac arrest, non-ST elevation myocardial infarction, Q-wave myocardial infarction, and new clinically significant cardiac dysrhythmia within the first 30 postoperative days. Results: Four years of data demonstrated that of 7,740 noncardiac operations, 83 patients (1.1%) experienced a CAE within 30 days. Nine independent predictors were identified (P ≤ 0.05): age ≥ 68, body mass index ≥ 30, emergent surgery, previous coronary intervention or cardiac surgery, active congestive heart failure, cerebrovascular disease, hypertension, operative duration ≥ 3.8 h, and the administration of 1 or more units of packed red blood cells intraoperatively. The c-statistic of this model was 0.81 ± 0.02. Univariate analysis demonstrated that high-risk patients experiencing a CAE were more likely to experience an episode of mean arterial pressure < 50 mmHg (6% vs. 24%, P = 0.02), an episode of a 40% decrease in mean arterial pressure (26% vs. 53%, P = 0.01), and an episode of heart rate > 100 (22% vs. 34%, P = 0.05). Conclusions: In comparison with current risk stratification indices, the inclusion of intraoperative elements improves the ability to predict a perioperative CAE after noncardiac surgery.
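The c-statistic reported for this model (0.81 ± 0.02) measures discrimination: the probability that a randomly chosen patient who had the event received a higher risk score than one who did not. A minimal sketch of the computation, on toy data:

```python
def c_statistic(scores, outcomes):
    """Concordance (c) statistic for a binary outcome.

    scores   : model-predicted risk for each patient
    outcomes : 1 if the event occurred, 0 otherwise
    A concordant pair is one where the event patient scored higher
    than the non-event patient; ties count as half-concordant.
    """
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one patient in each outcome group")
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# toy example: 2 events, 3 non-events, perfectly separated scores
scores = [0.9, 0.6, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 0, 0]
print(c_statistic(scores, outcomes))  # 1.0 (perfect discrimination)
```

For a binary outcome this is identical to the area under the ROC curve; 0.5 indicates no discrimination and 1.0 perfect discrimination.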
Journal of Vascular Surgery | 2011
Jay Soong Jin Lee; Kevin He; Calista M. Harbaugh; Douglas E. Schaubel; Christopher J. Sonnenday; Stewart C. Wang; Michael J. Englesbe; Jonathan L. Eliason
OBJECTIVES Determining operative risk in patients undergoing aortic surgery is a difficult process, as multiple variables converge to affect overall mortality. Patient frailty is certainly a contributing factor, but is difficult to measure, with surgeons often relying on subjective or intuitive influences. We sought to use core muscle size as an objective measure of frailty, and determine its utility as a predictor of survival after abdominal aortic aneurysm (AAA) repair. METHODS Four hundred seventy-nine patients underwent elective open AAA repair between 2000 and 2008. Two hundred sixty-two patients (54.7%) had preoperative computed tomography (CT) scans available for analysis. Cross-sectional areas of the psoas muscles at the level of the L4 vertebra were measured. The covariate-adjusted effect of psoas area on postoperative mortality was assessed using Cox regression. RESULTS Of the 262 patients, there were 55 deaths and the mean length of follow-up was 2.3 years. Cox regression revealed a significant association between psoas area and postoperative mortality (P = .003). The effect of psoas area was found to decrease significantly as follow-up time increased (P = .008). Among all covariates included in the Cox models (including predictors of mortality such as American Society of Anesthesiologists [ASA] score), the psoas area was the most significant. CONCLUSION Core muscle size, an objective measure of frailty, correlates strongly with mortality after elective AAA repair. A better understanding of the role of frailty and core muscle size may aid in risk stratification and impact timing of surgical repair, especially in more complex aortic operations.
Annals of Surgery | 2007
Michael J. Englesbe; Shawn J. Pelletier; John C. Magee; Paul G. Gauger; Tracy Schifftner; William G. Henderson; Shukri F. Khuri; Darrell A. Campbell
Objective: We hypothesize that the systems of care within academic medical centers are sufficiently disrupted with the beginning of a new academic year to affect patient outcomes. Methods: This observational multi-institutional cohort study was conducted by analysis of the National Surgical Quality Improvement Program–Patient Safety in Surgery Study database. The 30-day morbidity and mortality rates were compared between 2 periods of care: the early group (July 1 to August 30) and the late group (April 15 to June 15). Patient baseline characteristics were first compared between the early and late periods. A prediction model was then constructed, via a stepwise logistic regression model with a significance level for entry and a significance level for selection of 0.05. Results: There was an 18% higher risk of postoperative morbidity in the early group (n = 9941) versus the late group (n = 10313) (OR 1.18, 95% CI 1.07–1.29, P = 0.0005, c-index 0.794). There was a 41% higher risk for mortality in the early group compared with the late group (OR 1.41, 95% CI 1.11–1.80, P = 0.005, c-index 0.938). No significant trends in patient risk over time were noted. Conclusion: Our data suggest higher rates of postsurgical morbidity and mortality related to the time of the year. Further study is needed to fully describe the etiologies of the seasonal variation in outcomes.
Liver Transplantation | 2010
Michael J. Englesbe; James Kubus; Wajee Muhammad; Christopher J. Sonnenday; Theodore H. Welling; Jeffrey D. Punch; Raymond J. Lynch; Jorge A. Marrero; Shawn J. Pelletier
The effects of occlusive portal vein thrombosis (PVT) on the survival of patients with cirrhosis are unknown. This was a retrospective cohort study at a single center. The main exposure variable was the presence of occlusive PVT. The primary outcome measure was time‐dependent mortality. A total of 3295 patients were analyzed, and 148 (4.5%) had PVT. Variables independently predictive of mortality from the time of liver transplant evaluation included age [hazard ratio (HR), 1.02; 95% confidence interval (CI), 1.01‐1.03], Model for End‐Stage Liver Disease (MELD) score (HR, 1.10; 95% CI, 1.08‐1.11), hepatitis C (HR, 1.44; 95% CI, 1.24‐1.68), and PVT (HR, 2.61; 95% CI, 1.97‐3.51). Variables independently associated with the risk of mortality from the time of liver transplant listing included age (HR, 1.02; 95% CI, 1.01‐1.03), transplantation (HR, 0.65; 95% CI, 0.50‐0.81), MELD (HR, 1.08; 95% CI, 1.06‐1.10), hepatitis C (HR, 1.50; 95% CI, 1.18‐1.90), and PVT (HR, 1.99; 95% CI, 1.25‐3.16). The presence of occlusive PVT at the time of liver transplantation was associated with an increased risk of death at 30 days (odds ratio, 7.39; 95% CI, 2.39‐22.83). In conclusion, patients with cirrhosis complicated by PVT have an increased risk of death. Liver Transpl 16:83–90, 2010.
Annals of Surgery | 2012
Michael J. Englesbe; Jay S. Lee; Kevin He; Ludi Fan; Douglas E. Schaubel; Kyle H. Sheetz; Calista M. Harbaugh; Sven Holcombe; Darrel A. Campbell; Christopher J. Sonnenday; Stewart C. Wang
Objective: Assess the relationship between lean core muscle size, measured on preoperative cross-sectional images, and surgical outcomes. Background: Novel measures of preoperative risk are needed. Analytic morphomic analysis of cross-sectional diagnostic images may elucidate vast amounts of patient-specific data, which are never assessed by clinicians. Methods: The study population included all patients within the Michigan Surgical Quality Collaborative database with a computed tomography (CT) scan before major, elective general or vascular surgery (N = 1453). The lean core muscle size was calculated using analytic morphomic techniques. The primary outcome measure was survival, whereas secondary outcomes included surgical complications and costs. Covariate-adjusted outcomes were assessed using Kaplan-Meier analysis, multivariate Cox regression, multivariate logistic regression, and generalized estimating equation methods. Results: The mean follow-up was 2.3 years and 214 patients died during the observation period. The covariate-adjusted hazard ratio for lean core muscle area was 1.45 (P = 0.028), indicating that mortality increased by 45% per 1000 mm² decrease in lean core muscle area. When stratified into tertiles of core muscle size, the 1-year survival was 87% versus 95% for the smallest versus largest tertile, whereas the 3-year survival was 75% versus 91%, respectively (P < 0.003 for both comparisons). The estimated average risk of complications significantly differed and was 20.9%, 15.0%, and 12.3% in the lower, middle, and upper tertiles of lean core muscle area, respectively. Covariate-adjusted cost increased significantly by an estimated