Publication


Featured research published by Kathy Welch.


Journal of Immunology | 2006

Circulating Cytokine/Inhibitor Profiles Reshape the Understanding of the SIRS/CARS Continuum in Sepsis and Predict Mortality

Marcin F. Osuchowski; Kathy Welch; Javed Siddiqui; Daniel G. Remick

Mortality in sepsis remains unacceptably high and attempts to modulate the inflammatory response failed to improve survival. Previous reports postulated that the sepsis-triggered immunological cascade is multimodal: initial systemic inflammatory response syndrome (SIRS; excessive pro-, but no/low anti-inflammatory plasma mediators), intermediate homeostasis with a mixed anti-inflammatory response syndrome (MARS; both pro- and anti-inflammatory mediators) and final compensatory anti-inflammatory response syndrome (CARS; excessive anti-, but no/low proinflammatory mediators). To verify this, we examined the evolution of the inflammatory response during the early phase of murine sepsis by repetitive blood sampling of septic animals. Increased plasma concentrations of proinflammatory (IL-6, TNF, IL-1β, KC, MIP-2, MCP-1, and eotaxin) and anti-inflammatory (TNF soluble receptors, IL-10, IL-1 receptor antagonist) cytokines were observed in early deaths (days 1–5). These elevations occurred simultaneously for both the pro- and anti-inflammatory mediators. Plasma levels of IL-6 (26 ng/ml), TNF-α (12 ng/ml), KC (33 ng/ml), MIP-2 (14 ng/ml), IL-1 receptor antagonist (65 ng/ml), TNF soluble receptor I (3 ng/ml), and TNF soluble receptor II (14 ng/ml) accurately predicted mortality within 24 h. In contrast, these parameters were not elevated in either the late-deaths (day 6–28) or survivors. Surprisingly, either pro- or anti-inflammatory cytokines were also reliable in predicting mortality up to 48 h before outcome. These data demonstrate that the initial inflammatory response directly correlates to early but not late sepsis mortality. This multifaceted response questions the use of a simple proinflammatory cytokine measurement for classifying the inflammatory status during sepsis.


Infection Control and Hospital Epidemiology | 2011

Evaluation of Hospital Room Assignment and Acquisition of Clostridium difficile Infection

Megan K. Shaughnessy; Renee L. Micielli; Daryl D. DePestel; Jennifer L. Arndt; Cathy Strachan; Kathy Welch; Carol E. Chenoweth

BACKGROUND AND OBJECTIVE Clostridium difficile spores persist in hospital environments for an extended period. We evaluated whether admission to a room previously occupied by a patient with C. difficile infection (CDI) increased the risk of acquiring CDI. DESIGN Retrospective cohort study. SETTING Medical intensive care unit (ICU) at a tertiary care hospital. METHODS Patients admitted from January 1, 2005, through June 30, 2006, were evaluated for a diagnosis of CDI 48 hours after ICU admission and within 30 days after ICU discharge. Medical, ICU, and pharmacy records were reviewed for other CDI risk factors. Admitted patients who did develop CDI were compared with admitted patients who did not. RESULTS Among 1,844 patients admitted to the ICU, 134 CDI cases were identified. After exclusions, 1,770 admitted patients remained for analysis. CDI developed in 4.6% of patients whose prior room occupant did not have CDI and in 11.0% of patients whose prior occupant had CDI (P = .002). The effect of the prior room occupant on CDI acquisition remained significant (P = .008) when Kaplan-Meier curves were used. The prior occupant's CDI status remained significant (P = .01; hazard ratio, 2.35) when controlling for the current patient's age, Acute Physiology and Chronic Health Evaluation III score, exposure to proton pump inhibitors, and antibiotic use. CONCLUSIONS A prior room occupant with CDI is a significant risk factor for CDI acquisition, independent of established CDI risk factors. These findings have implications for room placement and hospital design.


Jacc-cardiovascular Interventions | 2009

The Relative Renal Safety of Iodixanol Compared With Low-Osmolar Contrast Media: A Meta-Analysis of Randomized Controlled Trials

Michael C. Reed; Pascal Meier; Umesh Tamhane; Kathy Welch; Mauro Moscucci; Hitinder S. Gurm

OBJECTIVES We sought to compare the nephrotoxicity of the iso-osmolar contrast medium, iodixanol, to low-osmolar contrast media (LOCM). BACKGROUND Contrast-induced acute kidney injury (CI-AKI) is a common cause of in-hospital renal failure. A prior meta-analysis suggested that iodixanol (Visipaque, GE Healthcare, Princeton, New Jersey) was associated with less CI-AKI than LOCM, but this study was limited by ascertainment bias and did not include the most recent randomized controlled trials. METHODS We searched Medline, Embase, ISI Web of Knowledge, Google Scholar, Current Contents, and International Pharmaceutical Abstracts databases, and the Cochrane Central Register of Controlled Trials from 1980 to November 30, 2008, for randomized controlled trials that compared the incidence of CI-AKI with either iodixanol or LOCM. Random-effects models were used to calculate summary risk ratios (RR) for CI-AKI, need for hemodialysis, and death. RESULTS A total of 16 trials including 2,763 subjects were pooled. There was no significant difference in the incidence of CI-AKI between the iodixanol group and the LOCM group overall (summary RR: 0.79, 95% confidence interval [CI]: 0.56 to 1.12, p = 0.19). There was no significant difference in the rates of post-procedure hemodialysis or death. There was a reduction in CI-AKI when iodixanol was compared with ioxaglate (RR: 0.58, 95% CI: 0.37 to 0.92; p = 0.022) and iohexol (RR: 0.19, 95% CI: 0.07 to 0.56; p = 0.002), but no difference when compared with iopamidol (RR: 1.20, 95% CI: 0.66 to 2.18; p = 0.55), iopromide (RR: 0.93, 95% CI: 0.47 to 1.85; p = 0.84), or ioversol (RR: 0.92, 95% CI: 0.60 to 1.39; p = 0.68). CONCLUSIONS This meta-analysis including 2,763 subjects suggests that iodixanol, when compared with LOCM overall, is not associated with less CI-AKI. The relative renal safety of LOCM compared with iodixanol may vary based on the specific type of LOCM.
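The summary risk ratios above come from a random-effects model. A minimal sketch of the standard DerSimonian-Laird approach, using made-up 2x2 trial counts rather than the 16 trials actually pooled:

```python
import math

# Hypothetical per-trial counts: (events_iodixanol, n_iodixanol, events_locm, n_locm).
# Illustrative numbers only -- not the trials pooled in the paper.
trials = [
    (5, 100, 9, 100),
    (12, 250, 15, 245),
    (3, 80, 7, 85),
]

# Per-trial log risk ratio and its large-sample variance.
log_rr = []
var = []
for a, n1, c, n2 in trials:
    log_rr.append(math.log((a / n1) / (c / n2)))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)

# Inverse-variance (fixed-effect) quantities feed the DerSimonian-Laird tau^2 estimate.
w = [1 / v for v in var]
y_fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rr))
tau2 = max(0.0, (q - (len(trials) - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

# Random-effects weights add the between-trial variance tau^2 to each trial's variance.
w_re = [1 / (v + tau2) for v in var]
y_re = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
summary_rr = math.exp(y_re)
ci = (math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se))
print(f"summary RR = {summary_rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Pooling is done on the log scale because log risk ratios are closer to normally distributed; the result is exponentiated back at the end.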


Archives of Environmental Health | 1982

Arsenic Exposure, Smoking, and Respiratory Cancer in Copper Smelter Workers

Kathy Welch; Ian T. T. Higgins; Mary Oh; Cecil Burchfiel

A report by Lee and Fraumeni in 1969 linked exposure to arsenic and other contaminants to a threefold excess of respiratory cancer among 8,047 employees at the Anaconda copper smelter. We established vital status through December 1977 for a sample of 1,800 men from the original cohort. Average arsenic concentrations were estimated for each smelter department based on industrial hygiene measurements made from 1943 to 1965. Departments with similar concentrations were combined into four categories of exposure: 1) low (less than 100 micrograms/m3), 2) medium (100-499 micrograms/m3), 3) high (500-4,999 micrograms/m3) and 4) very high (greater than or equal to 5,000 micrograms/m3). Three indices of individual arsenic exposure were developed: time-weighted average, 30-day ceiling, and cumulative. Exposures to sulfur dioxide and asbestos were also examined. Smoking habits were obtained by questionnaire. Mortality was compared to that of men in the State of Montana using the modified lifetable method. A clear dose-response relationship between arsenic exposure and respiratory cancer was demonstrated. Men in the highest exposure category had a sevenfold excess. Those in the low and medium categories had a risk close to that expected. Ceiling arsenic exposure appeared to be more important than did time-weighted average exposure. Sulfur dioxide and asbestos did not appear to be important in the excess of respiratory cancer, although sulfur dioxide and arsenic exposures could not be separated completely. Smoking did not appear to be as important as arsenic exposure. Our findings suggest that had men worked only in departments with low or medium arsenic exposures (i.e., less than 500 micrograms/m3) there would have been little excess respiratory cancer. Since the estimates of arsenic exposure were based on department averages rather than on concentrations for individual jobs, these results must be interpreted with caution.
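Life-table mortality comparisons of this kind reduce to an observed-over-expected ratio: expected deaths are accumulated by applying reference-population death rates to the cohort's person-years in each stratum. A sketch with hypothetical numbers, not the smelter cohort's:

```python
# Standardized mortality ratio (SMR) sketch. Strata, person-years, rates, and the
# observed death count below are all invented for illustration.
person_years_by_age = {"40-49": 5000, "50-59": 4000, "60-69": 2500}
reference_rate = {"40-49": 0.0008, "50-59": 0.0025, "60-69": 0.0060}  # deaths per person-year
observed_deaths = 58

# Expected deaths: reference rate times cohort person-years, summed over strata.
expected = sum(person_years_by_age[g] * reference_rate[g] for g in person_years_by_age)
smr = observed_deaths / expected
print(f"expected deaths = {expected:.1f}, SMR = {smr:.2f}")
```

An SMR well above 1 (e.g., the sevenfold excess in the highest exposure category) indicates more deaths than the reference rates predict.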


Liver Transplantation | 2009

Renal outcomes after liver transplantation in the model for end-stage liver disease era.

Pratima Sharma; Kathy Welch; Richard Eikstadt; Jorge A. Marrero; Robert J. Fontana; Anna S. Lok

The proportion of patients undergoing liver transplantation (LT) with renal insufficiency has significantly increased in the Model for End‐Stage Liver Disease (MELD) era. This study was designed to determine the incidence and predictors of post‐LT chronic renal failure (CRF) and its effect on patient survival in the MELD era. Outcomes of 221 adult LT recipients who had LT between February 2002 and February 2007 were reviewed retrospectively. Patients who were listed as status 1, were granted a MELD exception, or had living‐donor, multiorgan LT were excluded. Renal insufficiency at LT was defined as none to mild [estimated glomerular filtration rate (GFR) ≥ 60 mL/minute], moderate (30–59 mL/minute), or severe (<30 mL/minute). Post‐LT CRF was defined as an estimated GFR < 30 mL/minute persisting for 3 months, initiation of renal replacement therapy, or listing for renal transplantation. The median age was 54 years, 66% were male, 89% were Caucasian, and 43% had hepatitis C. At LT, the median MELD score was 20, and 6.3% were on renal replacement therapy. After a median follow‐up of 2.6 years (range, 0.01–5.99), 31 patients developed CRF with a 5‐year cumulative incidence of 22%. GFR at LT was the only independent predictor of post‐LT CRF (hazard ratio = 1.33, P < 0.001). The overall post‐LT patient survival was 74% at 5 years. Patients with MELD ≥ 20 at LT had a higher cumulative incidence of post‐LT CRF in comparison with patients with MELD < 20 (P = 0.03). A decrease in post‐LT GFR over time was the only independent predictor of survival. In conclusion, post‐LT CRF is common in the MELD era with a 5‐year cumulative incidence of 22%. Low GFR at LT was predictive of post‐LT CRF, and a decrease in post‐LT GFR over time was associated with decreased post‐LT survival. Further studies of modifiable preoperative, perioperative, and postoperative factors influencing renal function are needed to improve outcomes following LT. Liver Transpl 15:1142–1148, 2009.
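Cumulative incidence figures like the 22% at 5 years are typically read off a Kaplan-Meier-type estimate, which handles patients censored before the event. A minimal sketch with invented (time, event) pairs, where event = 1 marks CRF onset and 0 marks censoring; these are not the transplant cohort's data:

```python
# Kaplan-Meier product-limit estimate, no ties assumed for simplicity.
data = [(0.5, 1), (1.2, 0), (2.0, 1), (2.6, 0), (3.1, 1), (4.0, 0), (5.0, 0)]

data.sort()  # process subjects in time order
at_risk = len(data)
surv = 1.0
for t, event in data:
    if event:
        surv *= (at_risk - 1) / at_risk  # survival steps down at each observed event
    at_risk -= 1  # censored subjects simply leave the risk set

cumulative_incidence = 1 - surv
print(f"estimated cumulative incidence = {cumulative_incidence:.2f}")
```

Censored subjects contribute to the denominators up to their last follow-up, which is why the estimate differs from a naive events-over-enrolled fraction.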


Pediatric Anesthesia | 2007

Childhood body mass index and perioperative complications

Olubukola O. Nafiu; Paul I. Reynolds; Olumuyiwa A. Bamgbade; Kevin K. Tremper; Kathy Welch; Josephine Z. Kasa-Vubu

Background: Our aim was to compare the incidence of quality assurance events between overweight/obese and normal-weight children.


Journal of Immunology | 2007

Chronic Sepsis Mortality Characterized by an Individualized Inflammatory Response

Marcin F. Osuchowski; Kathy Welch; Huan Yang; Javed Siddiqui; Daniel G. Remick

Late mortality in septic patients often exceeds the lethality occurring in acute sepsis, yet the immunoinflammatory alterations preceding chronic sepsis mortality are not well defined. We studied plasma cytokine concentrations preceding late septic deaths (days 6–28) in a murine model of sepsis induced by polymicrobial peritonitis. The late prelethal inflammatory response varied from a virtually nonexistent response in three of 14 mice to a mixed response in eight of 14 mice to the concurrent presence of nearly all measured cytokines, both proinflammatory and anti-inflammatory, in three of 14 mice. In responding mice, a consistent prelethal surge of plasma MIP-2 (1.6 vs 0.12 ng/ml in survivors; mean values), MCP-1 (2.0 vs 1.3 ng/ml), soluble TNF receptor type I (2.5 vs 0.66 ng/ml), and the IL-1 receptor antagonist (74.5 vs 3.3 ng/ml) was present, although there were infrequent increases in IL-6 (1.9 vs 0.03 ng/ml) and IL-10 (0.12 vs 0.04 ng/ml). For high mobility group box 1, late mortality was signaled by a decrease in plasma levels (591 vs 864 ng/ml). These results demonstrate that impending mortality in the chronic phase of sepsis may be accurately predicted by plasma biomarkers, providing a mechanistic basis for individualized therapy. The pattern of late prelethal responses suggests that the systemic inflammatory response syndrome to compensatory anti-inflammatory response syndrome transition paradigm fails to follow a simple linear pattern.


Clinical Lung Cancer | 2012

Negative Predictive Value of Positron Emission Tomography and Computed Tomography for Stage T1-2N0 Non–Small-Cell Lung Cancer: A Meta-Analysis

J. Wang; Kathy Welch; Feng-Ming (Spring) Kong

BACKGROUND Nodal staging of non-small-cell lung cancer (NSCLC) is crucial in evaluation of prognosis and determination of therapeutic strategy. This study aimed to determine the negative predictive value (NPV) of combined positron emission tomography and computed tomography (PET-CT) in patients with stage I (T1-2N0) NSCLC and to investigate the possible risk factors for occult nodal disease. METHODS Studies investigating the performance of PET in conjunction with CT in the nodal staging of stage I NSCLC were identified in the MEDLINE database. The Standards for Reporting of Diagnostic Accuracy (STARD) initiative was used to ensure study quality. Pathologic assessments through mediastinoscopy or thoracotomy were required as the reference standard for evaluation of PET-CT accuracy. Stata-based meta-analysis was applied to calculate the individual and pooled NPVs. RESULTS Ten studies with a total of 1122 patients with stage I (T1-2N0) NSCLC were eligible for analysis. The NPVs of combined PET and CT for mediastinal metastases were 0.94 in T1 disease and 0.89 in T2 disease. Including both T1 disease and T2 disease, the NPVs were 0.93 for mediastinal metastases and 0.87 for overall nodal metastases. Adenocarcinoma histology type (risk ratio [RR], 2.72) and high fluorine-18 (18F) fluorodeoxyglucose (FDG) uptake in the primary lesion were associated with greater risk of occult nodal metastases. CONCLUSIONS Although occult nodal metastases in clinical stage T1-2N0 NSCLC are not infrequent overall, combined PET and CT provide a favorable NPV for mediastinal metastases in T1N0 NSCLC, suggesting a low yield from routine invasive staging procedures for this subgroup of patients.
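The NPV itself is a simple ratio: among patients PET-CT calls node-negative, the fraction confirmed node-negative on pathology. A sketch with illustrative counts, not the pooled data:

```python
# Of patients staged N0 by PET-CT, how many are truly N0 on pathology?
# Counts below are made up for illustration.
true_negative = 930   # PET-CT N0, pathology N0
false_negative = 70   # PET-CT N0, pathology node-positive (occult disease)

npv = true_negative / (true_negative + false_negative)
occult_rate = 1 - npv
print(f"NPV = {npv:.2f}; occult nodal disease in {occult_rate:.0%} of PET-CT-negative patients")
```

A high NPV is what justifies skipping invasive mediastinal staging in the PET-CT-negative subgroup.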


BMC Pregnancy and Childbirth | 2005

Incidence of stillbirth and perinatal mortality and their associated factors among women delivering at Harare Maternity Hospital, Zimbabwe: a cross-sectional retrospective analysis

Shingairai A. Feresu; Siobán D. Harlow; Kathy Welch; Brenda W. Gillespie

Background: Death of an infant in utero or at birth has always been a devastating experience for the mother and of concern in clinical practice. Infant mortality remains a challenge in the care of pregnant women worldwide, particularly in developing countries, and understanding contributory factors is crucial for addressing perinatal health. Methods: Using information available in obstetric records for all deliveries (17,072 births) at Harare Maternity Hospital, Zimbabwe, we conducted a cross-sectional retrospective analysis of one year of data (1997–1998) to assess demographic and obstetric risk factors for stillbirth and early neonatal death. We estimated the risk of stillbirth and early neonatal death for each potential risk factor. Results: The annual frequency of stillbirth was 56 per 1,000 total births. Women delivering stillbirths and early neonatal deaths were less likely to have received prenatal care (adjusted relative risk [RR] = 2.54; 95% confidence interval [CI] 2.19–2.94 and RR = 2.52; 95% CI 1.63–3.91), and for combined stillbirths and early neonatal deaths this risk increased with increasing gestational age (hazard ratio [HR] = 3.98 and HR = 7.49 at 28 and 40 weeks of gestation, respectively). Rural residence was associated with risk of the infant dying in utero (RR = 1.33; 95% CI 1.12–1.59), and the risk of death increased with increasing gestational age (HR = 1.04 and HR = 1.69 at 28 and 40 weeks of gestation, respectively). Older maternal age was associated with risk of death (HR = 1.50; 95% CI 1.21–1.84). Stillbirths were less likely to be delivered by Cesarean section (RR = 0.64; 95% CI 0.51–0.79) but more likely to be delivered breech (RR = 4.65; 95% CI 3.88–5.57), as were early neonatal deaths (RR = 3.38; 95% CI 1.64–6.96). Conclusion: The frequency of stillbirth, especially macerated stillbirth, is high at 27 per 1,000 total births.
Early prenatal care could help reduce perinatal death by linking the woman to the health care system, increasing the probability that she would seek timely emergency care and reducing the likelihood of her infant dying in utero. Improved quality of obstetric care during labor and delivery may help reduce the number of fresh stillbirths and early neonatal deaths.
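The adjusted relative risks quoted above are model-based, but an unadjusted RR and its log-method 95% CI are straightforward to compute from raw counts. A sketch with hypothetical numbers, not the Harare cohort's data:

```python
import math

# Hypothetical counts: "exposed" = no prenatal care, outcome = stillbirth.
exposed_cases, exposed_total = 90, 1000
unexposed_cases, unexposed_total = 560, 16000

rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
# Standard error of log(RR) for two independent binomial samples.
se = math.sqrt(1 / exposed_cases - 1 / exposed_total
               + 1 / unexposed_cases - 1 / unexposed_total)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The CI is built on the log scale and exponentiated, so it is asymmetric around the RR; an interval excluding 1 indicates a statistically significant association.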


Journal of Cancer Epidemiology | 2013

Individual and neighborhood socioeconomic status and healthcare resources in relation to black-white breast cancer survival disparities

Tomi F. Akinyemiju; Amr S. Soliman; Norman J. Johnson; Sean F. Altekruse; Kathy Welch; Mousumi Banerjee; Kendra Schwartz; Sofia D. Merajver

Background. Breast cancer survival has improved significantly in the US in the past 10–15 years. However, disparities exist in breast cancer survival between black and white women. Purpose. To investigate the effect of county healthcare resources and SES as well as individual SES status on breast cancer survival disparities between black and white women. Methods. Data from 1,796 breast cancer cases were obtained from the Surveillance Epidemiology and End Results and the National Longitudinal Mortality Study dataset. Cox Proportional Hazards models were constructed accounting for clustering within counties. Three sequential Cox models were fit for each outcome including demographic variables; demographic and clinical variables; and finally demographic, clinical, and county-level variables. Results. In unadjusted analysis, black women had a 53% higher likelihood of dying of breast cancer and 32% higher likelihood of dying of any cause (P < 0.05) compared with white women. Adjusting for demographic variables explained away the effect of race on breast cancer survival (HR, 1.40; 95% CI, 0.99–1.97), but not on all-cause mortality. The racial difference in all-cause survival disappeared only after adjusting for county-level variables (HR, 1.27; CI, 0.95–1.71). Conclusions. Improving equitable access to healthcare for all women in the US may help eliminate survival disparities between racial and socioeconomic groups.

Collaboration


Dive into Kathy Welch's collaborations.

Top Co-Authors

Pascal Meier

University College London
