Publications


Featured research published by Brian R. Swenson.


Surgical Infections | 2009

Obesity and Site-Specific Nosocomial Infection Risk in the Intensive Care Unit

Lesly A. Dossett; Leigh Anne Dageforde; Brian R. Swenson; Rosemarie Metzger; Hugo Bonatti; Robert G. Sawyer; Addison K. May

BACKGROUND Obese patients are at higher than normal risk for postoperative infections such as pneumonia and surgical site infections, but the relation between obesity and infections acquired in the intensive care unit (ICU) is unclear. Our objective was to describe the relation between body mass index (BMI) and site-specific ICU-acquired infection risk in adults. METHODS Secondary analysis of a large, dual-institutional, prospective observational study of critically ill and injured surgical patients remaining in the ICU for at least 48 h. Patients were classified into BMI groups according to the National Heart, Lung and Blood Institute guidelines: <or= 18.5 kg/m(2) (underweight), 18.5-24.9 kg/m(2) (normal), 25-29.9 kg/m(2) (overweight), 30.0-39.9 kg/m(2) (obese), and >or= 40.0 kg/m(2) (severely obese). The primary outcomes were the number and site of ICU-acquired U.S. Centers for Disease Control and Prevention-defined infections. Multivariable logistic and Poisson regression were used to determine age-, sex-, and severity-adjusted odds ratios (ORs) and incidence rate ratios associated with differences in BMI. RESULTS A total of 2,037 patients had 1,436 infection episodes involving 1,538 sites in a median ICU length of stay of 9 days. After adjusting for age, sex, and illness severity, severe obesity was an independent risk factor for catheter-related (OR 2.2; 95% confidence interval [CI] 1.5, 3.4) and other blood stream infections (OR 3.2; 95% CI 1.9, 5.3). Cultured organisms did not differ by BMI group. CONCLUSION Obesity is an independent risk factor for ICU-acquired catheter and blood stream infections. This observation may be explained by the relative difficulty in obtaining venous access in these patients and the reluctance of providers to discontinue established venous catheters in the setting of infection signs or symptoms.


Lancet Infectious Diseases | 2012

Aggressive versus conservative initiation of antimicrobial treatment in critically ill surgical patients with suspected intensive-care-unit-acquired infection: a quasi-experimental, before and after observational cohort study

Tjasa Hranjec; Laura H. Rosenberger; Brian R. Swenson; Rosemarie Metzger; Tanya R. Flohr; Amani D. Politano; Lin M. Riccio; Kimberley A. Popovsky; Robert G. Sawyer

BACKGROUND Antimicrobial treatment in critically ill patients can either be started as soon as infection is suspected or after objective data confirm an infection. We postulated that delaying antimicrobial treatment of patients with suspected infections in the surgical intensive care unit (SICU) until objective evidence of infection had been obtained would not worsen patient mortality. METHODS We did a 2-year, quasi-experimental, before and after observational cohort study of patients aged 18 years or older who were admitted to the SICU of the University of Virginia (Charlottesville, VA, USA). From Sept 1, 2008, to Aug 31, 2009, aggressive treatment was used: patients suspected of having an infection on the basis of clinical grounds had blood cultures sent and antimicrobial treatment started. From Sept 1, 2009, to Aug 31, 2010, a conservative strategy was used, with antimicrobial treatment started only after objective findings confirmed an infection. Our primary outcome was in-hospital mortality. Analyses were by intention to treat. FINDINGS Admissions to the SICU for the first and second years were 762 and 721, respectively, with 101 patients with SICU-acquired infections during the aggressive year and 100 patients during the conservative year. Compared with the aggressive approach, the conservative approach was associated with lower all-cause mortality (13/100 [13%] vs 27/101 [27%]; p=0·015), more initially appropriate therapy (158/214 [74%] vs 144/231 [62%]; p=0·0095), and a shorter mean duration of therapy (12·5 days [SD 10·7] vs 17·7 [28·1]; p=0·0080). After adjusting for age, sex, trauma involvement, acute physiology and chronic health evaluation (APACHE) II score, and site of infection, the odds ratio for the risk of mortality in the aggressive therapy group compared with the conservative therapy group was 2·5 (95% CI 1·5-4·0). INTERPRETATION Waiting for objective data to diagnose infection before treatment with antimicrobial drugs for suspected SICU-acquired infections does not worsen mortality and might be associated with better outcomes and use of antimicrobial drugs. FUNDING National Institutes of Health.
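
As a worked example, the crude (unadjusted) odds ratio for death can be recomputed from the counts reported above; the paper's figure of 2·5 is adjusted for age, sex, trauma, APACHE II score, and site of infection, so the crude value differs slightly:

    # Crude odds ratio for death, aggressive vs conservative strategy,
    # from the reported counts (27/101 vs 13/100 deaths).
    deaths_aggressive, n_aggressive = 27, 101
    deaths_conservative, n_conservative = 13, 100

    odds_aggressive = deaths_aggressive / (n_aggressive - deaths_aggressive)        # 27/74
    odds_conservative = deaths_conservative / (n_conservative - deaths_conservative)  # 13/87
    print(odds_aggressive / odds_conservative)  # ~2.44, close to the adjusted 2.5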


Regional Anesthesia and Pain Medicine | 2010

Intravenous lidocaine is as effective as epidural bupivacaine in reducing ileus duration, hospital stay, and pain after open colon resection: a randomized clinical trial.

Brian R. Swenson; Antje Gottschalk; Lynda T. Wells; John C. Rowlingson; Peter W. Thompson; Margaret M. Barclay; Robert G. Sawyer; Charles M. Friel; Eugene F. Foley; Marcel E. Durieux

Background: Both postoperative epidural analgesia and intravenous (IV) infusion of local anesthetic have been shown to shorten ileus duration and hospital stay after colon surgery when compared with the use of systemic narcotics alone. However, they have not been compared directly with each other. Methods: A prospective, randomized clinical trial was conducted comparing the 2 treatments in open colon surgery patients. Before induction of general anesthesia, patients were randomized either to epidural analgesia (bupivacaine 0.125% and hydromorphone 6 μg/mL were started at 10 mL/hr within 1 hr of the end of surgery) or IV lidocaine (1 mg/min in patients <70 kg, 2 mg/min in patients ≥70 kg). Markers of return of bowel function, length of stay, postoperative pain scores, systemic analgesic requirements, and adverse events were recorded and compared between the 2 groups in an intent-to-treat analysis. Results: Study enrollment took place from April 2005 to July 2006. Twenty-two patients were randomized to IV lidocaine therapy and 20 patients to epidural therapy. No statistically significant differences were found between groups in time to return of bowel function or hospital length of stay. The median pain score difference was not statistically significant. No statistically significant differences were found in pain scores for any specific postoperative day or in analgesic consumption. Conclusions: No differences were observed between groups in terms of return of bowel function, duration of hospital stay, and postoperative pain control, suggesting that IV infusion of local anesthetic may be an effective alternative to epidural therapy in patients in whom epidural anesthesia is contraindicated or not desired.
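
The weight-based infusion rule in the lidocaine arm is simple enough to state as code; a sketch of the protocol as described (the function name is ours):

    def lidocaine_rate_mg_per_min(weight_kg: float) -> int:
        """IV lidocaine infusion rate per the trial protocol above."""
        return 1 if weight_kg < 70 else 2

    print(lidocaine_rate_mg_per_min(65))  # 1 mg/min
    print(lidocaine_rate_mg_per_min(82))  # 2 mg/min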


The American Journal of Clinical Nutrition | 2014

Hypocaloric compared with eucaloric nutritional support and its effect on infection rates in a surgical intensive care unit: a randomized controlled trial

Eric J. Charles; Robin T. Petroze; Rosemarie Metzger; Tjasa Hranjec; Laura H. Rosenberger; Lin M. Riccio; Matthew D. McLeod; Christopher A. Guidry; George J. Stukenborg; Brian R. Swenson; Kate F. Willcutts; Kelly B. O'Donnell; Robert G. Sawyer

BACKGROUND Proper caloric intake goals in critically ill surgical patients are unclear. It is possible that overnutrition can lead to hyperglycemia and an increased risk of infection. OBJECTIVE This study was conducted to determine whether surgical infection outcomes in the intensive care unit (ICU) could be improved with the use of hypocaloric nutritional support. DESIGN Eighty-three critically ill patients were randomly allocated to receive either the standard calculated daily caloric requirement of 25-30 kcal · kg⁻¹ · d⁻¹ (eucaloric) or 50% of that value (hypocaloric) via enteral tube feeds or parenteral nutrition, with an equal protein allocation in each group (1.5 g · kg⁻¹ · d⁻¹). RESULTS There were 82 infections in the hypocaloric group and 66 in the eucaloric group, with no significant difference in the mean (± SE) number of infections per patient (2.0 ± 0.6 and 1.6 ± 0.2, respectively; P = 0.50), percentage of patients acquiring infection [70.7% (29 of 41) and 76.2% (32 of 42), respectively; P = 0.57], mean ICU length of stay (16.7 ± 2.7 and 13.5 ± 1.1 d, respectively; P = 0.28), mean hospital length of stay (35.2 ± 4.9 and 31.0 ± 2.5 d, respectively; P = 0.45), mean 0600 glucose concentration (132 ± 2.9 and 135 ± 3.1 mg/dL, respectively; P = 0.63), or number of deaths [3 (7.3%) and 4 (9.5%), respectively; P = 0.72]. Further analyses revealed no differences when analyzed by sex, admission diagnosis, site of infection, or causative organism. CONCLUSIONS Among critically ill surgical patients, caloric provision across a wide acceptable range does not appear to be associated with major outcomes, including infectious complications. The optimum target for caloric provision remains elusive.
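
A sketch of the per-kilogram feeding targets described above (the function name, and the choice of 25 kcal · kg⁻¹ · d⁻¹ as the default within the 25-30 range, are ours):

    def feeding_targets(weight_kg: float, eucaloric_kcal_per_kg: float = 25.0):
        """Daily targets per the protocol above: eucaloric = 25-30 kcal/kg/d,
        hypocaloric = 50% of that, protein fixed at 1.5 g/kg/d in both arms."""
        eucaloric_kcal = eucaloric_kcal_per_kg * weight_kg
        hypocaloric_kcal = 0.5 * eucaloric_kcal
        protein_g = 1.5 * weight_kg
        return eucaloric_kcal, hypocaloric_kcal, protein_g

    # Example: an 80 kg patient -> 2000 kcal/d eucaloric, 1000 kcal/d
    # hypocaloric, and 120 g protein/d in either arm.
    print(feeding_targets(80))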


The Annals of Thoracic Surgery | 2010

Donor Age Is Associated With Chronic Allograft Vasculopathy After Adult Heart Transplantation: Implications for Donor Allocation

Alykhan S. Nagji; Tjasa Hranjec; Brian R. Swenson; John A. Kern; James D. Bergin; David R. Jones; Irving L. Kron; Christine L. Lau; Gorav Ailawadi

BACKGROUND Chronic allograft vasculopathy (CAV) is a major cause of long-term complications and mortality after heart transplantation. Although recipient factors have been implicated, little is known of the role of donor factors in CAV development. We sought to identify donor factors associated with development of CAV after heart transplantation. METHODS We reviewed the United Network for Organ Sharing heart transplant database from August 1987 to May 2008. Univariate and multivariate analyses were performed to assess the association between donor variables and the onset of CAV for adult recipients. Donor age was matched to recipient age and analyzed with respect to development of CAV. RESULTS Of the 39,704 recipients, a total of 11,714 (29.5%) experienced CAV. Multivariate analysis demonstrated seven donor factors as independent predictors of CAV: age, ethnicity, sex, weight, history of diabetes, hypertension, and tobacco use. When matching young donors (0 to 19.9 years) and old donors (≥50 years) to each recipient age group, older donors (≥50 years) conferred a higher risk of developing CAV. Further modeling demonstrated that for each recipient group, older donor age (≥50 years) conferred a higher risk of CAV development compared with younger donor age (0 to 19.9 years; p < 0.0001). CONCLUSIONS Donor factors including sex, hypertension, diabetes, and tobacco use are independently associated with recipient CAV. Older donor age confers a greater risk of CAV development regardless of the age of the recipient. A heightened awareness for the development of CAV is warranted when using older donors in adult cardiac transplantation, in particular with recipients 40 years of age or older.


The Annals of Thoracic Surgery | 2009

Model for End-Stage Liver Disease Predicts Mortality for Tricuspid Valve Surgery

Gorav Ailawadi; Damien J. LaPar; Brian R. Swenson; Christine L. Lau; John A. Kern; Benjamin B. Peeler; Keith E. Littlewood; Irving L. Kron

BACKGROUND Patients undergoing tricuspid valve surgery have a mortality of 9.8%, which is higher than expected given the complexity of the procedure. Despite liver dysfunction seen in many patients with tricuspid disease, no existing risk model accounts for this. The Model for End-Stage Liver Disease (MELD) score accurately predicts mortality for abdominal surgery. The objective of this study was to determine if MELD could accurately predict mortality after tricuspid valve surgery and compare it to existing risk models. METHODS From 1994 to 2008, 168 patients (mean age, 61 ± 14 years; male = 72, female = 96) underwent tricuspid repair (n = 156) or replacement (n = 12). Concomitant operations were performed in 87% (146 of 168). Patients with a history of cirrhosis or a MELD score of 15 or greater (MELD = 3.8 × ln[total bilirubin] + 11.2 × ln[international normalized ratio] + 9.6 × ln[creatinine] + 6.4) were compared with patients without liver disease or with a MELD score less than 15. Preoperative risk, intraoperative findings, and complications including operative mortality were evaluated. Statistical analyses were performed using the chi-square test, Fisher's exact test, and area under the curve (AUC) analyses. RESULTS Patients with a history of liver disease or a MELD score of 15 or greater had significantly higher mortality (18.9% [7 of 37] versus 6.1% [8 of 131], p = 0.024). To further characterize the effect of MELD, patients were stratified by MELD alone. No major differences in demographics or operation were identified between groups. Mortality increased as the MELD score increased, especially at MELD scores of 15 or greater (p = 0.0015). MELD scores of less than 10, 10 to 14.9, 15 to 19.9, and more than 20 were associated with operative mortality of 1.9%, 6.8%, 27.3%, and 30.8%, respectively. By multivariate analysis, a MELD score of 15 or greater remained strongly associated with mortality (p = 0.0021). The MELD score predicted mortality (AUC = 0.78) as well as the European System for Cardiac Operative Risk Evaluation logistic risk calculator (AUC = 0.78, p = 0.96). CONCLUSIONS The MELD score predicts mortality in patients undergoing tricuspid valve surgery and offers a simple and effective method of risk stratification in these patients.
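
The quoted MELD formula translates directly to code. A minimal sketch; clamping each input to a floor of 1.0, so the logarithms are never negative, is a common convention in clinical MELD calculators rather than something stated in the abstract:

    import math

    def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> float:
        """MELD = 3.8*ln(total bilirubin) + 11.2*ln(INR) + 9.6*ln(creatinine) + 6.4."""
        b = max(bilirubin_mg_dl, 1.0)
        i = max(inr, 1.0)
        c = max(creatinine_mg_dl, 1.0)
        return 3.8 * math.log(b) + 11.2 * math.log(i) + 9.6 * math.log(c) + 6.4

    # Example: bilirubin 2.0 mg/dL, INR 1.5, creatinine 1.8 mg/dL -> MELD ~19.2,
    # which would fall in the >=15 high-risk group described above.
    print(round(meld_score(2.0, 1.5, 1.8), 1))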


Journal of Trauma-injury Infection and Critical Care | 2008

High Levels of Endogenous Estrogens are Associated With Death in the Critically Injured Adult

Lesly A. Dossett; Brian R. Swenson; Daithi S. Heffernan; Hugo Bonatti; Rosemarie Metzger; Robert G. Sawyer; Addison K. May

BACKGROUND Sex hormones exhibit predictable changes in their physiologic patterns during critical illness. Endogenous estrogens are elevated in both genders as a result of the peripheral conversion of androgens to estrogens by the aromatase enzyme. Elevated endogenous estrogens have been associated with death in medical and mixed surgical intensive care unit (ICU) patients. Our objective was to determine the relationship between endogenous estrogens and outcomes in critically injured patients. METHODS A prospective cohort of injured patients remaining in the ICU for at least 48 hours at two trauma centers was enrolled. Sex hormones (estradiol, progesterone, testosterone, prolactin, and dehydroepiandrosterone-sulfate) were assayed and mortality was assessed. A logistic regression model was used to determine the association between estradiol and death. The area under the receiver operating characteristic (AUROC) curve was used to estimate the accuracy of estradiol in predicting death. RESULTS Nine hundred ninety-one patients were enrolled with a 13.4% mortality rate. Despite no detectable difference in mortality among genders, estradiol was significantly elevated in nonsurvivors (16 pg/mL vs. 35 pg/mL, p < 0.001). Estradiol was a marker for injury severity with the most severely injured patients exhibiting the highest levels. The ability of estradiol to predict death (AUROC = 0.65) was comparable with Trauma and Injury Severity Score (AUROC = 0.65) and superior to Injury Severity Score (AUROC = 0.54) in this cohort. CONCLUSIONS Serum estradiol is a marker of injury severity and a predictor of death in the critically injured patient, regardless of gender. Whether or not estradiol plays a causal role in outcomes is unclear, but estrogen modulation represents a potential therapy for improving outcomes in critically ill trauma patients.
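
A minimal sketch of the AUROC calculation used to compare predictors, shown on toy data (the study does not name its software; scikit-learn is our choice here):

    from sklearn.metrics import roc_auc_score

    # Toy data, not from the study: 1 = died, 0 = survived, with serum
    # estradiol (pg/mL) as the continuous predictor being evaluated.
    died      = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
    estradiol = [12, 18, 15, 40, 22, 35, 19, 16, 55, 20]

    # 0.5 = no discrimination, 1.0 = perfect. The study reported 0.65 for
    # estradiol, matching TRISS (0.65) and beating ISS (0.54).
    print(roc_auc_score(died, estradiol))  # ~0.92 on this toy data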


Surgical Infections | 2010

Surgical Site Infection Prevention: How We Do It

Tjasa Hranjec; Brian R. Swenson; Robert G. Sawyer

BACKGROUND Efforts to prevent surgical site infection (SSI) employ methods that are valid scientifically, but each institution and each surgeon also incorporates methods believed to be useful even though their value has not been proved by clinical trials. METHODS The surgical literature was reviewed, as were practices at the University of Virginia that the authors believe are of value for the prevention of SSI. RESULTS Preventive antibiotics are established measures. A case can be made for increasing the dose in patients with a large body mass, and antibiotics probably should be re-administered during procedures lasting longer than 3 h. Chlorhexidine showers for the patient are not proven; however, they are inexpensive and of potential benefit. Hair removal is always done with clippers and in the operating room at the time of the procedure. No scientific case can be made specifically for using antiseptic at the surgical site before the incision. Keeping the blood glucose concentration and the core body temperature near normal probably is important, but how close to normal is unclear. Transfusion increases the risk of SSI, but leukocyte reduction of transfused blood may be of benefit. Some evidence supports the value of antibacterial suture in preventing SSI. CONCLUSIONS Many proven and potentially valid methods are employed to prevent SSI. Coordinated and standardized protocols with good data collection can assist the multi-disciplinary efforts to reduce SSI within the unique practices of a given institution.


The Annals of Thoracic Surgery | 2008

Is Mitral Valve Repair Superior to Replacement in Elderly Patients?

Gorav Ailawadi; Brian R. Swenson; Micah E. Girotti; Leo M. Gazoni; Benjamin B. Peeler; John A. Kern; Lynn M. Fedoruk; Irving L. Kron

BACKGROUND Mitral valve replacement is more frequently performed and perceived to be equivalent to repair in elderly patients, despite the superiority of repair in younger patients. Our objective was to compare mitral repair to replacement in elderly patients age 75 years or older. Patients younger than 75 years undergoing mitral valve surgery served as a reference population. METHODS Consecutive elderly patients undergoing operation for mitral regurgitation at our institution from 1998 to 2006 were reviewed. Elderly patients (mean age, 78.0 ± 2.8 years) who underwent mitral repair (n = 70) or replacement (n = 47) were compared with cohorts of young patients (mean age, 58.9 ± 9.3 years) who underwent repair (n = 100) or replacement (n = 98) during the same period. Patient details and outcomes were compared using univariate, multivariate, and Kaplan-Meier analyses. RESULTS Mitral replacement in elderly patients had higher mortality than repair (23.4%, 11 of 47 versus 7.1%, 5 of 70; p = 0.01) and than either operation in the reference group (p < 0.0001). Postoperative stroke was more frequent in elderly replacement patients than in elderly repair patients (12.8%, 6 of 47 versus 0%; p = 0.003) or in either young cohort (p = 0.02). Compared with elderly repair patients, elderly replacement patients had more cerebrovascular disease (21.3%, 10 of 47 versus 4.3%, 3 of 70; p = 0.005) and rheumatic mitral valves (21.3%, 10 of 47 versus 0%; p = 0.0001). In the young group, overall complication and mortality rates did not differ between replacement and repair. Long-term survival favored repair over replacement in elderly patients (p = 0.04). One elderly repair patient experienced late recurrence of persistent mitral regurgitation. CONCLUSIONS In patients age 75 years or older, mitral repair is associated with a lower risk of mortality, postoperative stroke, and prolonged intensive care unit and hospital stay compared with mitral replacement. Mitral repair can be performed in preference to replacement even in patients 75 years of age or older.


Heart Rhythm | 2010

Surgically Placed Left Ventricular Leads Provide Similar Outcomes to Percutaneous Leads in Patients with Failed Coronary Sinus Lead Placement

Gorav Ailawadi; Damien J. LaPar; Brian R. Swenson; Cory Maxwell; Micah E. Girotti; James D. Bergin; John A. Kern; John P. DiMarco; Srijoy Mahapatra

BACKGROUND Cardiac resynchronization therapy using a left ventricular (LV) lead inserted via the coronary sinus (CS) improves symptoms of congestive heart failure, decreases hospitalizations, and improves survival. An epicardial LV lead is often placed surgically after a failed percutaneous attempt, but whether it offers the same benefits is unknown. OBJECTIVE The purpose of this study was to determine if patients who receive a surgical LV lead after failed CS lead placement for cardiac resynchronization therapy derive the same benefit as do patients with a successfully placed CS lead. METHODS A total of 452 patients underwent attempted CS lead insertion. Forty-five patients who had failed CS lead placement and then had surgical LV lead placement were matched with 135 patients who had successful CS lead placement. RESULTS No major differences in preoperative variables were seen between groups. Postprocedural complications of acute renal injury (26.2% vs 4.9%, P <.001) and infection (11.9% vs 2.4%, P = .03) were more common in the surgical group. Mean long-term follow-up was 32.4 ± 17.5 months for surgical patients and 39.4 ± 14.8 months for percutaneous patients. At follow-up, all-cause mortality (30.6% vs 23.8%, P = .22) and readmission for congestive heart failure (26.2% vs 31.5%, P = .53) were similar between surgical and percutaneous groups. Improvement in New York Heart Association functional class (60.1% vs 49.6%, P = .17) was similar between surgical and percutaneous groups. CONCLUSION Surgical LV lead placement offers functional benefits similar to those of percutaneous placement but with greater risk of perioperative complications, including acute renal failure and infection.

Collaboration


Dive into Brian R. Swenson's collaboration.

Top Co-Authors

Addison K. May

Vanderbilt University Medical Center
