
Publication


Featured research published by Hongying Tang.


Clinical Journal of The American Society of Nephrology | 2010

Timing of Dialysis Initiation and Survival in ESRD

Seth Wright; Dalia Klausner; Bradley C. Baird; Mark E. Williams; Theodore I. Steinman; Hongying Tang; Regina Ragasa; Alexander S. Goldfarb-Rumyantzev

BACKGROUND AND OBJECTIVES The optimal time of dialysis initiation is unclear. The goal of this analysis was to compare survival outcomes in patients with early and late start dialysis as measured by kidney function at dialysis initiation. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS We performed a retrospective analysis of patients entering the U.S. Renal Data System database from January 1, 1995 to September 30, 2006. Patients were classified into groups by estimated GFR (eGFR) at dialysis initiation. RESULTS In this total incident population (n = 896,546), 99,231 patients had an early dialysis start (eGFR >15 ml/min per 1.73 m(2)) and 113,510 had a late start (eGFR ≤5 ml/min per 1.73 m(2)). The following variables were significantly (P < 0.001) associated with an early start: white race, male gender, greater comorbidity index, presence of diabetes, and peritoneal dialysis. Compared with the reference group with an eGFR of >5 to 10 ml/min per 1.73 m(2) at dialysis start, a Cox model adjusted for potential confounding variables showed an incremental increase in mortality associated with earlier dialysis start. The group with the earliest start had increased risk of mortality, whereas late start was associated with reduced risk of mortality. Subgroup analyses showed similar results. The limitations of the study are retrospective study design, potential unaccounted confounding, and potential selection and lead-time biases. CONCLUSIONS Late initiation of dialysis is associated with a reduced risk of mortality, arguing against aggressive early dialysis initiation based primarily on eGFR alone.
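The eGFR cutoffs that define the early- and late-start groups can be expressed as a small helper. This is a minimal sketch of the classification rule stated in the abstract; the function name and the label for eGFRs between the two cutoffs are illustrative, not from the paper.

```python
def classify_dialysis_start(egfr: float) -> str:
    """Classify timing of dialysis initiation by eGFR at start
    (ml/min per 1.73 m^2), using the abstract's cutoffs:
    early start if eGFR > 15, late start if eGFR <= 5.
    Values in between (including the >5-10 reference group)
    are labeled 'intermediate' here for illustration.
    """
    if egfr > 15:
        return "early"
    if egfr <= 5:
        return "late"
    return "intermediate"
```

For example, a patient starting dialysis at an eGFR of 16 falls in the early-start group, while one starting at 4 falls in the late-start group.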


Clinical Transplantation | 2007

Factors affecting kidney-transplant outcome in recipients with lupus nephritis.

Hongying Tang; Madhukar Chelamcharla; Bradley C. Baird; Fuad S. Shihab; James K. Koford; Alexander S. Goldfarb-Rumyantzev

Background: Factors associated with outcome in renal transplant recipients with lupus nephritis have not been studied.


Nephrology Dialysis Transplantation | 2012

Social adaptability index predicts kidney transplant outcome: a single-center retrospective analysis

Jalaj Garg; Muhammad Karim; Hongying Tang; Gurprataap S. Sandhu; Ranil DeSilva; James R. Rodrigue; Martha Pavlakis; Douglas W. Hanto; Bradley C. Baird; Alexander S. Goldfarb-Rumyantzev

BACKGROUND Social adaptability index (SAI) is a composite index of socioeconomic status based upon employment status, education level, marital status, substance abuse and income. It has been used in the past to define populations at higher risk for inferior clinical outcomes. The objective of this retrospective study was to evaluate the association of the SAI with renal transplant outcome. METHODS We used data from the clinical database at the Beth Israel Deaconess Medical Center Transplant Institute, supplemented with data from United Network for Organ Sharing for the years 2001-09. The association between SAI and graft loss and recipient mortality in renal transplant recipients was studied using a Cox model in the entire study population as well as in the subgroups based on age, race, sex and diabetes status. RESULTS We analyzed 533 end-stage renal disease patients (mean age at transplant 50.8 ± 11.8 years, 52.2% diabetics, 58.9% males, 71.1% White). Higher SAI on a continuous scale was associated with decreased risk of graft loss [hazard ratio (HR) 0.89, P < 0.05, per 1 point increment in the SAI] and decreased risk of recipient mortality (HR 0.84, P < 0.01, per 1 point increment in the SAI). Higher SAI was also significantly associated with decreased risk for graft loss/recipient mortality in some study subgroups (age 41-65 years, males, non-diabetics). CONCLUSIONS SAI has an association with graft and recipient survival in renal transplant recipients. It can be helpful in identifying patients at higher risk for inferior transplant outcome as a target population for potential intervention.
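The SAI is a sum over the five socioeconomic domains named in the abstract. A minimal sketch, assuming an illustrative 0-3 score per component (the abstract specifies only the five domains, not the per-component scale, so that range is an assumption here):

```python
def social_adaptability_index(employment: int, education: int,
                              marital: int, substance_abuse: int,
                              income: int) -> int:
    """Illustrative SAI: sum of five component scores.
    ASSUMPTION: each component is scored 0-3 (higher = more
    favorable), giving a total of 0-15; the source abstract
    names the domains but not the scoring scale.
    """
    components = [employment, education, marital, substance_abuse, income]
    for score in components:
        if not 0 <= score <= 3:
            raise ValueError("each component score must be in 0..3")
    return sum(components)
```

Under the study's per-point hazard ratios, each one-point increment in a score like this was associated with roughly an 11% lower adjusted risk of graft loss.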


Transplantation | 2010

Renal Allograft Failure Predictors After PAK Transplantation: Results From the New England Collaborative Association of Pancreas Programs

Martha Pavlakis; Khalid Khwaja; Didier A. Mandelbrot; Hongying Tang; James W. Whiting; Marc I. Lorber; Amitabh Gautam; Scott R. Johnson; Marc E. Uknis

Background. The reasons for kidney allograft failure subsequent to pancreas after kidney (PAK) are multifactorial; therefore, we examined these factors to identify a meaningful risk assessment that could assist in patient selection. Methods. Five transplant centers in New England collaborated for this multi-institutional retrospective study of 126 PAK transplantation recipients who had a functioning pancreas allograft 7 days after transplantation. Host factors (age at pancreas transplant, gender, body weight, glomerular filtration rate at 3 months pre-PAK and at 3-, 6-, 9-, and 12-month post-PAK, presence of proteinuria, pre- or post-PAK kidney rejection, pancreas rejection, cytomegalovirus disease, and HbA1C at 6-month post-PAK) and transplant factors (time to PAK, use of induction antibody therapy, and combinations of immunosuppressive medications) were assessed in both univariate and multivariate analyses for the primary outcome of kidney allograft failure. Results. Of the variables assessed, factors associated with kidney allograft loss after PAK include impaired renal function in the 3 months before PAK, proteinuria, the occurrence of a post-PAK kidney rejection episode, and an interval between kidney and pancreas transplantation of more than 1 year. Conclusions. In our analysis, post-PAK kidney allograft loss was strongly associated with glomerular filtration rate less than 45 mL/min pre-PAK, K to P interval of over 1 year, pre-PAK kidney rejection episode, and pre-PAK proteinuria. Diabetic candidates for PAK with any of these conditions should be counseled regarding the risk of post-PAK renal transplant failure.


Transplantation | 2009

Outcomes with conversion from calcineurin inhibitors to sirolimus after renal transplantation in the context of steroid withdrawal or steroid continuation.

Ogo Egbuna; Roger B. Davis; Robyn Chudinski; Martha Pavlakis; Christin C. Rogers; Phani Molakatalla; Scott R. Johnson; Seth J. Karp; Anthony P. Monaco; Hongying Tang; Douglas W. Hanto; Didier A. Mandelbrot

Background. A number of studies have suggested that conversion from calcineurin inhibitors (CNI) to sirolimus (SRL) can improve graft function in renal transplant patients. None of these studies has converted patients to SRL in the absence of steroids. Methods. We describe our experience with 278 renal transplants of which 153 were converted from CNI to SRL. The majority of patients had steroids withdrawn after 6 days. Almost all patients received antithymocyte globulin induction and were maintained on mycophenolate mofetil. Results. Six months after conversion, patients remaining on SRL therapy had a mean increase in estimated glomerular filtration rate of 6.93 mL/min/1.73 m2 (P<0.0001) compared with preconversion values. SRL-converted patients analyzed by intention-to-treat increased estimated glomerular filtration rate by 5.00 mL/min/1.73 m2 (P=0.0005). Eighty-one percent of patients remaining on SRL had a successful conversion, defined as stable or improved renal function at 6 months. The only factor predictive of unsuccessful conversion was urine protein-to-creatinine ratio more than 1. The benefits of SRL conversion were seen in patients at high immunological risk as well as those at lower risk. Proteinuria increased by a mean of 0.1 (P=0.43) at 6 months. Thirty-six percent of SRL-converted patients experienced adverse effects requiring conversion back to CNI. Rates of rejection, graft loss, and patient death with SRL conversion were low. Conclusions. The results from our clinical practice suggest that even in the absence of steroids, SRL conversion significantly improves renal function, with acceptable rates of adverse events.


Asaio Journal | 2007

The impact of recipient history of cardiovascular disease on kidney transplant outcome.

Emily Petersen; Bradley C. Baird; Fuad S. Shihab; James K. Koford; Madhukar Chelamcharla; Arsalan N. Habib; Abdou S. Gueye; Hongying Tang; Alexander S. Goldfarb-Rumyantzev

Cardiovascular disease (CVD) leads to increased mortality rates among renal transplant recipients; however, its effect on allograft survival has not been well studied. The records from the United States Renal Data System and the United Network for Organ Sharing from January 1, 1995, through December 31, 2002, were examined in this retrospective study. The outcome variables were allograft survival time and recipient survival time. The primary variable of interest was CVD, defined as the presence of at least one of the following: cardiac arrest, myocardial infarction, dysrhythmia, congestive heart failure, ischemic heart disease, peripheral vascular disease, and unstable angina. The Cox models were adjusted for potential confounding factors. Of the 105,181 patients in the data set, 20,371 had a diagnosis of CVD. The presence of CVD had an adverse effect on allograft survival time (HR 1.12, p < 0.001) and recipient survival time (HR 1.41, p < 0.001). Among the subcategories, congestive heart failure (HR 1.14, p < 0.005) and dysrhythmia (HR 1.26, p < 0.05) had adverse effects on allograft survival time. In addition to increasing mortality rates, CVD at the time of end-stage renal disease onset is also a significant risk factor for renal allograft failure. Further research is needed to evaluate the role of specific forms of CVD in allograft and recipient outcome.


Asaio Journal | 2011

Predicting three-year kidney graft survival in recipients with systemic lupus erythematosus.

Hongying Tang; Mollie R. Poynton; John F. Hurdle; Bradley C. Baird; James K. Koford; Alexander S. Goldfarb-Rumyantzev

Predicting the outcome of kidney transplantation is important in optimizing transplantation parameters and modifying factors related to the recipient, donor, and transplant procedure. As patients with end-stage renal disease (ESRD) secondary to lupus nephropathy are generally younger than the typical ESRD patients and also seem to have inferior transplant outcome, developing an outcome prediction model in this patient category has high clinical relevance. The goal of this study was to compare methods of building prediction models of kidney transplant outcome that potentially can be useful for clinical decision support. We applied three well-known data mining methods (classification trees, logistic regression, and artificial neural networks) to the data describing recipients with systemic lupus erythematosus (SLE) in the US Renal Data System (USRDS) database. The 95% confidence interval (CI) of the area under the receiver operating characteristic curve (AUC) was used to measure the discrimination ability of the prediction models. Two groups of predictors were selected to build the prediction models. Using input variables based on Weka (an open-source machine learning toolkit) supplemented with additional variables of known clinical relevance (38 total predictors), the logistic regression performed the best overall (AUC: 0.74, 95% CI: 0.72–0.77), significantly better (p < 0.05) than the classification trees (AUC: 0.70, 95% CI: 0.67–0.72) but not significantly better (p = 0.218) than the artificial neural networks (AUC: 0.71, 95% CI: 0.69–0.73). The performance of the artificial neural networks was not significantly better than that of the classification trees (p = 0.693). Using the more parsimonious subset of variables (six variables), the logistic regression (AUC: 0.73, 95% CI: 0.71–0.75) did not perform significantly better than either the classification tree (AUC: 0.70, 95% CI: 0.68–0.73) or the artificial neural network (AUC: 0.73, 95% CI: 0.70–0.75) models.
We generated several models predicting 3-year allograft survival in kidney transplant recipients with SLE that potentially can be used in practice. The performance of logistic regression and the classification tree was not inferior to that of the more complex artificial neural network. Prediction models may be used in clinical practice to identify patients at risk.
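The AUC comparisons above rest on the standard rank interpretation of the ROC area: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal pure-Python sketch of that computation (not the study's code; the confidence intervals reported above would additionally require, e.g., a bootstrap or DeLong's method):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    case receives the higher score, with ties counted as 0.5.
    `labels` are 0/1; `scores` are the model's predicted risks.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating model scores 1.0 on this measure, and a model assigning identical scores to every case scores 0.5, which is why AUCs around 0.70-0.74, as reported above, indicate moderate discrimination.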


American Journal of Transplantation | 2010

Successful DCD Kidney Transplantation Using Early Corticosteroid Withdrawal

Robyn Chudzinski; Khalid Khwaja; P. Teune; J. Miller; Hongying Tang; Martha Pavlakis; Christin Rogers; Scott R. Johnson; Seth J. Karp; Douglas W. Hanto; Didier A. Mandelbrot

Organs from donors after cardiac death (DCD) are being increasingly utilized. Prior reports of DCD kidney transplantation involve the use of prednisone‐based immunosuppression. We report our experience with early corticosteroid withdrawal (ECSW). Data on 63 DCD kidney transplants performed between 2002 and 2007 were analyzed. We compared outcomes in 28 recipients maintained on long‐term corticosteroids (LTCSs) with 35 recipients who underwent ECSW. Delayed graft function (DGF) occurred in 49% of patients on ECSW and 46% on LTCS (p = 0.8). There was no difference between groups for serum creatinine or estimated GFR between 1 and 36 months posttransplant. Acute rejection rates at 1 year were 11.4% and 21.4% for the ECSW and LTCS group (p = 0.2). Graft survival at 1 and 3 years was 94% and 91% for the ECSW group versus 82% and 78% for the LTCS group (p ≥ 0.1). Death censored graft survival was significantly better at last follow‐up for the ECSW group (p = 0.02). Multivariate analysis revealed no correlation between the use of corticosteroids and survival outcomes. In conclusion, ECSW can be used successfully in DCD kidney transplantation with no worse outcomes in DGF, rejection, graft loss or the combined outcome of death and graft loss compared to patients receiving LTCS.


Asaio Journal | 2011

Validating prediction models of kidney transplant outcome using single center data

Hongying Tang; John F. Hurdle; Mollie R. Poynton; Cheri Hunter; Ming Tu; Bradley C. Baird; Sergey Krikov; Alexander S. Goldfarb-Rumyantzev

Prediction of kidney transplant outcome represents an important and clinically relevant problem. Although several prediction models have been proposed based on large, national collections of data, their utility at the local level (where local data distributions may differ from national data) remains unclear. We conducted a comparative analysis that modeled the outcome data of transplant recipients in the national US Renal Data System (USRDS) against a representative local transplant dataset at the University of Utah Health Sciences Center, a regional transplant center. The performance of an identical set of prediction models was evaluated on both national and local data to assess how well national models reflect local outcomes. Compared with the USRDS dataset, several key characteristics of the local dataset differed significantly (e.g., a much higher local graft survival rate; a much higher local percentage of white donors and recipients; and a much higher proportion of living donors). This was reflected in statistically significant differences in model performance. The area under the receiver operating characteristic curve values of the models predicting 1-, 3-, 5-, 7-, and 10-year graft survival on the USRDS data were 0.59, 0.63, 0.76, 0.91, and 0.97, respectively. In contrast, in the local dataset, these values were 0.54, 0.58, 0.58, 0.61, and 0.70, respectively. Prediction models trained on a national set of data from the USRDS performed better in the national dataset than in the local data. This might be due to the differences in the data characteristics between the two datasets, suggesting that the wholesale adoption of a prediction model developed on a large national dataset to guide local clinical practice should be done with caution.


Nephrology Dialysis Transplantation | 2009

A population-based assessment of the familial component of acute kidney allograft rejection

Alexander S. Goldfarb-Rumyantzev; Fuad S. Shihab; Lyska Emerson; Geraldine P. Mineau; Carole Schaefer; Hongying Tang; Cheri Hunter; Natalie Naiman; Lonnie Smith; Richard A. Kerber

BACKGROUND The genetic determinants of acute kidney transplant rejection (AR) are not well studied, and familial aggregation has never been demonstrated. The goal of this retrospective case-control study was to exploit the unique nature of the Utah Population Database (UPDB) to evaluate if AR or rejection-free survival aggregates in families. METHODS We identified 891 recipients with genealogy data in the UPDB with at least one year of follow-up, of which 145 (16.1%) had AR and 77 recipients had biopsy-proven rejection graded ≥1A. We compared the genealogical index of familiality (GIF) in cases and controls (i.e. recipients with random assignment of rejection status). RESULTS We did not find evidence for familial clustering of AR in the entire patient population or in the subgroup with early rejection (n = 52). When the subgroup of recipients with rejection grade ≥1A (n = 77) was analysed separately, we observed increased familial clustering (GIF = 3.02) compared to controls (GIF = 1.96), although the p-value did not reach the level of statistical significance (p = 0.17). Furthermore, we observed an increase in familial clustering in recipients who had a rejection-free course (GIF = 2.45) as compared to controls (GIF = 2.08, p = 0.04). When all recipients were compared to non-transplant controls, they demonstrated a much greater degree of familiality (GIF = 2.03 versus GIF = 0.63, p < 0.001). CONCLUSIONS There is a familial component to rejection-free transplant course and trend to familial aggregation in recipients with AR grade 1A or higher. If a genetic association study is performed, there are families in Utah identified in the current study that can be targeted to increase the power of the test.
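One common formulation of the genealogical index of familiality compares the mean pairwise kinship among cases against that of randomly drawn controls. The sketch below shows only the case-side average and assumes a precomputed pairwise kinship lookup; the scaling and the exact statistic used with the UPDB may differ, so treat this as illustrative.

```python
from itertools import combinations

def mean_pairwise_kinship(case_ids, kinship):
    """Mean kinship coefficient over all unordered pairs of cases
    (GIF-style statistics rescale this average, e.g. by 1e5).
    `kinship` maps frozenset({a, b}) -> kinship coefficient;
    pairs absent from the mapping are treated as unrelated (0).
    """
    pairs = list(combinations(case_ids, 2))
    if not pairs:
        return 0.0
    total = sum(kinship.get(frozenset(pair), 0.0) for pair in pairs)
    return total / len(pairs)
```

In a permutation test like the one described above, the same statistic is recomputed for many random reassignments of case status, and the observed value is compared against that null distribution to obtain a p-value.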

Collaboration


Dive into Hongying Tang's collaborations.

Top Co-Authors

Alexander S. Goldfarb-Rumyantzev (Beth Israel Deaconess Medical Center)
Martha Pavlakis (Beth Israel Deaconess Medical Center)
Didier A. Mandelbrot (University of Wisconsin-Madison)
Douglas W. Hanto (Beth Israel Deaconess Medical Center)
Gurprataap S. Sandhu (Beth Israel Deaconess Medical Center)
Jalaj Garg (Beth Israel Deaconess Medical Center)