Kevin C. Mange
University of Pennsylvania
Publications
Featured research published by Kevin C. Mange.
Transplantation | 2004
Niraj M. Desai; Kevin C. Mange; Michael D. Crawford; Peter L. Abt; Adam Frank; Joseph W. Markmann; Ergun Velidedeoglu; William C. Chapman; James F. Markmann
Background. The Model for End-Stage Liver Disease (MELD) has been found to accurately predict pretransplant mortality and is a valuable system for ranking patients in greatest need of liver transplantation. It is unknown whether a higher MELD score also predicts decreased posttransplant survival. Methods. We examined a cohort of patients from the United Network for Organ Sharing (UNOS) database for whom the critical pretransplant recipient values needed to calculate the MELD score were available (international normalized ratio of prothrombin time, total bilirubin, and creatinine). In these 2,565 patients, we analyzed whether the MELD score predicted graft and patient survival and length of posttransplant hospitalization. Results. In contrast with its ability to predict survival in patients with chronic liver disease awaiting liver transplant, the MELD score was found to be poor at predicting posttransplant outcome except for patients with the highest 20% of MELD scores. We developed a model with four variables not included in MELD that had greater ability to predict 3-month posttransplant patient survival, with a c-statistic of 0.65, compared with 0.54 for the pretransplant MELD score. These pretransplant variables were recipient age, mechanical ventilation, dialysis, and retransplantation. Recipients with any two of the three latter variables showed a markedly diminished posttransplant survival rate. Conclusions. The MELD score is a relatively poor predictor of posttransplant outcome. In contrast, a model based on four pretransplant variables (recipient age, mechanical ventilation, dialysis, and retransplantation) had a better ability to predict outcome. Our results support the use of MELD for liver allocation and indicate that statistical modeling, such as reported in this article, can be used to identify futile cases in which expected outcome is too poor to justify transplantation.
Journal of The American Society of Nephrology | 2002
Roy D. Bloom; Vinaya Rao; Francis L. Weng; Robert A. Grossman; Debbie L. Cohen; Kevin C. Mange
Posttransplant diabetes mellitus (PTDM) remains a common complication of immunosuppression. Although multiple risk factors have been implicated, none have been clearly identified as predisposing to the increased PTDM frequency observed in patients on tacrolimus. Hepatitis C virus (HCV) has been associated with diabetes and is a significant renal transplant comorbidity. In this study, records of 427 kidney recipients who had no known diabetes before transplantation were retrospectively examined. A multivariate logistic regression model was fit with covariates that had unadjusted relationships with PTDM to examine the independent relationship of HCV and the odds of development of PTDM by 12 mo posttransplant. A potential interaction between HCV and the use of tacrolimus as maintenance therapy on the odds of the development of PTDM was examined. Overall, PTDM occurred more frequently in HCV(+) than HCV(-) patients (39.4% versus 9.8%; P = 0.0005). By multivariate logistic regression, HCV (adjusted odds ratio [OR], 5.58; 95% confidence interval [CI], 2.63 to 11.83; P = 0.0001), weight at transplantation (adjusted OR 1.028; 95% CI, 1.00 to 1.05; P = 0.001), and tacrolimus (adjusted OR, 2.85; 95% CI, 1.01 to 5.28; P = 0.047) were associated with PTDM. A significant interaction (P = 0.0001) was detected between HCV status and tacrolimus use for the odds of PTDM. Among the HCV(+) cohort, PTDM occurred more often in tacrolimus-treated than cyclosporine A-treated patients (57.8% versus 7.7%; P < 0.0001). PTDM rates in HCV(-) patients were similar between the two calcineurin inhibitors (10.0% versus 9.4%; P = 0.521, tacrolimus versus cyclosporine A). In conclusion, HCV is strongly associated with PTDM in renal transplant recipients and appears to account for the increased diabetogenicity observed with tacrolimus.
Journal of The American Society of Nephrology | 2005
Francis L. Weng; Ajay K. Israni; Marshall M. Joffe; Tracey Hoy; Christina Gaughan; Melissa Newman; John D. Abrams; Malek Kamoun; Sylvia E. Rosas; Kevin C. Mange; Brian L. Strom; Kenneth L. Brayman; Harold I. Feldman
Nonadherence to immunosuppressive medications may partly explain the worse allograft outcomes among black recipients of renal transplants. In a prospective cohort study of recipients of deceased donor renal transplants, microelectronic cap monitors were placed on bottles of one immunosuppressive medication to (1) measure average daily percentage adherence during the first posttransplantation year and (2) determine the factors associated with adherence. A total of 278 transplant recipients who provided sufficient microelectronic adherence data were grouped into four categories of average daily percentage adherence: 95 to 100% adherence (41.0% of patients), 80 to 95% adherence (32.4%), 50 to 80% adherence (12.9%), and 0 to 50% adherence (13.7%). In the unadjusted ordinal logistic regression model, black race was associated with decreased adherence (odds ratio [OR], 0.43; 95% confidence interval [CI], 0.26 to 0.72; P = 0.001). Cause of renal disease, Powerful Others health locus of control, transplant center, and dosing frequency were also associated with adherence. After adjustment for transplant center and dosing frequency, the association between black race and decreased adherence was substantially attenuated (OR, 0.65; 95% CI, 0.38 to 1.14, P = 0.13). Transplant center (P = 0.003) and increased dosing frequency (OR, 0.43; 95% CI, 0.22 to 0.86, for three or four times per day dosing; OR, 2.35; 95% CI, 1.01 to 5.45, for daily dosing; versus two times per day dosing; P = 0.003) remained independently associated with adherence. Other baseline demographic, socioeconomic, medical, surgical, and psychosocial characteristics were not associated with adherence. The transplant center and dosing frequencies of immunosuppressive medications are associated with adherence and explain a substantial proportion of the race-adherence relationship.
Annals of Internal Medicine | 1997
Kevin C. Mange; Dean Matsuura; Borut Cizman; Haydee Soto; Fuad N. Ziyadeh; Stanley Goldfarb; Eric G. Neilson
Patients presenting with orthostatic hypotension and normal plasma sodium concentrations are frequently admitted to the hospital with a diagnosis of dehydration. If they are fortunate, they receive fluids containing sodium chloride instead of free water to correct obvious extracellular fluid volume depletion. Confusion over this diagnosis highlights the growing and pernicious habit of using the terms dehydration and volume depletion interchangeably at the bedside when the two describe clearly different disturbances. The heuristic value of describing discrete body fluid spaces affected by disorders of salt and water is a well-established bedside strategy [1-5]. It sprang from an early curiosity about the best treatment for fatal diarrhea [6] and seizures [7] and from classic experiments that formulated the volume behavior and osmolarity of cells [8, 9]. Adapting this information from cells to humans in the late 1930s required more conceptual thinking about the special role of vascular volume in the control of body fluids [2, 10]. The wartime assessment of potential fluid losses encountered by shipwrecked aviators and sailors in the early 1940s further enhanced our understanding of salt and water metabolism [11-13], as did the emerging role of cardiac performance [14, 15]. With the advent of radioactive tracers [16, 17], medical language in the latter part of the 20th century began to discriminate more carefully between dehydration associated with hypertonicity, a principal loss of body water from the intracellular and interstitial compartments, and extracellular fluid volume depletion, a fluid deficiency that clinically affects the vascular tree [3, 5, 18]. The proper use of the terms dehydration and volume depletion informs communication and should improve patient care.
The Language of Salt and Water in Body Fluid Spaces

At steady state, the hydration or water content of body fluids represents a physiologic balance achieved by the ingestion of water and its further distribution, evaporation, and clearance by the kidneys and gut [19, 20]. Total body water disperses in a well-defined pattern [8, 21, 22] across several elastic or virtual spaces [1, 3, 5, 22]. Approximately 66% of water is confined by solute to the intracellular compartment, whereas 33% is found in the extracellular space. Only 25% of this extracellular fluid, or 8% of total body water, resides within the vasculature [1, 4, 17], and eventually all spaces achieve identical osmolarity [23]. The concept of osmotic pressure derives from the fundamental gas laws of physical chemistry [2, 24]. Water moves down a concentration gradient generated by the osmotic properties of solutes bound by a semipermeable membrane to achieve equilibrium [2]. Simple osmosis of water across virtual body compartments is further amended by the Gibbs-Donnan effect of charge-bearing proteins [2] and, in blood vessels, by the hydrostatic attributes of Starling forces [10, 25]. Although measured osmolarity reflects all particles per volume of water, not all osmols influence transmembrane water flow [5]. The power to move water across cell membranes is a property of effective osmols [2, 26, 27]. Tonicity describes the volume behavior of such cells in solution and is modulated by the number of effective osmols, or osmotically active particles, that are restricted to one side of the cell membrane because of permeability characteristics, transmembrane pumps, or both [28]. Most effective osmols are extracellular sodium, chloride, and bicarbonate or intracellular potassium, chloride, and phosphate. Less abundant effective osmols are sugars, lipids, and proteins.
Solutes such as urea or alcohol, however, freely move across cell membranes and are therefore ineffective osmols unable to effect transmembrane water flow [26]. Acutely, effective osmols in the intracellular space (particularly potassium salts) are relatively fixed, and thus the major influence on the location of water in this space is the effective osmolarity of the extracellular compartment [3, 26]. When water is lost from the skin, gut, or kidneys, the hypertonicity created in the extracellular space is directly transferred to the larger intracellular space [5]. Worsening hypertonicity therefore has its biggest impact on the size of the intracellular compartment and, to a lesser extent, on interstitial spaces. To dehydrate is to lose this intracellular water and stimulate thirst. The physiologic concept of dehydration, at first glance, might subsume the definition of volume depletion. This erroneous assumption, made by investigators early in this century [26, 29], was corrected by physiologists in the era after World War II [3, 18, 30] but today has insidiously resurfaced because volume depletion has become a shorthand for extracellular fluid volume depletion, and the first two words of the latter phrase make all the difference. The volume of the extracellular fluid space is principally regulated by the ingestion and excretion of sodium salts [31]. Sodium is largely confined to extracellular fluid because cell membrane pumps operate to actively exclude it from the intracellular compartment [28, 32]. Thus, the addition of sodium leads to a specific gain of effective osmols in extracellular spaces. If sodium is added isotonically to the extracellular compartment, no shift of water from the intracellular space will ensue and the volume increase of the extracellular space will equal the volume of isotonic infusate. If hypotonic or hypertonic sodium is added to the extracellular space, the volume of the intracellular space changes accordingly [26, 29]. 
Changes in extracellular volume can therefore be dissociated from changes in intracellular volume [5, 21, 33]. For example, a patient who bleeds will have a rapid decline in vascular volume but, in the absence of tissue injury or change in extracellular tonicity, will not have redistribution of water from intracellular spaces [3, 34]. Such a person will have a deficit of body water equal to the proportionately small water content of the lost blood. This can be illustrated quantitatively by considering the fate of an administered infusate of 5% dextrose compared with an equal volume of fluid given as 0.9% saline (Table 1). Both infusates provide equal amounts of water, but their effect on plasma volume is vastly different.

Table 1. Effect of a 1-L Infusion of Water or 0.9% Saline on Virtual Body Fluid Spaces

Assessment of Body Fluid Spaces in Designing Effective Therapies

Dehydration

To best assess the state of hydration, one needs to ascertain the concentration of a marker substance whose content is constant and whose distribution is uniform throughout all virtual fluid spaces. Of course, surrogate markers were devised because no such natural substance exists [16]. Because sodium is the most abundant extracellular solute and its concentration (p[Na+]) influences water movement across cells, p[Na+] may be used as a surrogate at the bedside to gauge the relation between water and effective osmols in all body fluids [5]. Although anions and large molecules contribute to the property of tonicity, some intracellular anions are complex moieties that are not easy to formulate in simple terms; therefore, it is more convenient to estimate effective osmols in a representational shorthand that consists of cations.
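The contrast summarized in Table 1 can be reproduced with simple arithmetic. A minimal sketch, assuming the compartment splits cited earlier (roughly two thirds of total body water intracellular, one third extracellular, and about 25% of extracellular fluid intravascular); the function name is illustrative, not from the article:

```python
# Back-of-envelope distribution of a 1-L infusate across virtual body
# fluid spaces. Free water (e.g., 5% dextrose once the glucose is
# metabolized) disperses across total body water; isotonic saline is
# confined to the extracellular space.

def distribute_infusate(volume_ml, tonicity):
    """Approximate equilibrium distribution of an infusate (mL)."""
    if tonicity == "free_water":
        intracellular = volume_ml * 2 / 3   # ~2/3 of TBW is intracellular
        extracellular = volume_ml * 1 / 3
    elif tonicity == "isotonic_saline":
        intracellular = 0.0                 # sodium stays extracellular
        extracellular = float(volume_ml)
    else:
        raise ValueError("unknown tonicity")
    plasma = extracellular * 0.25           # only ~25% of ECF is intravascular
    return {"intracellular_ml": round(intracellular),
            "extracellular_ml": round(extracellular),
            "plasma_ml": round(plasma)}

print(distribute_infusate(1000, "free_water"))      # plasma gains only ~83 mL
print(distribute_infusate(1000, "isotonic_saline")) # plasma gains ~250 mL
```

The point of the comparison: although both liters contain the same amount of water, the saline liter expands plasma volume roughly threefold more, which is why it, and not free water, treats extracellular fluid volume depletion.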
Equation 1 provides a conceptual framework with which to predict the relative water deficit or excess determining tonicity [30]:

p[Na+] ≈ (TBNa+ + TBK+) / TBH2O (Equation 1)

where TBNa+ is total body sodium, TBK+ is total body potassium, and TBH2O is total body water, which is partitioned by the cell membrane (CM) into extracellular water (ECH2O) and intracellular water (ICH2O). In this formula, extracellular total body sodium and intracellular total body potassium represent the principal effective osmols that partition total body water (0.6 L/kg of body weight in adult men and 0.5 L/kg in adult women) across cell membranes at equilibrium [21, 23, 27, 35].

The signs and symptoms of acute dehydration are thirst and, progressively, confusion, coma, and respiratory paralysis [28]. These complications may be mitigated if hypertonicity develops over time and if the brain and other tissues are allowed to adapt by generating new intracellular solutes (previously called idiogenic osmols) to minimize shrinkage [33, 36]. These new solutes include sodium chloride, amino acids, myoinositol, and methylamines [37, 38]. Isolated water deficits are corrected by water replacement and can be estimated [30], over and above any isotonic change in extracellular volume, by using Equation 2 [5] (for the derivation of Equation 2, see the Appendix):

Water deficit (L) = TBH2O × (measured p[Na+] / desired p[Na+] − 1) (Equation 2)

The presentation of dehydration is well illustrated by the case of an elderly 70-kg woman with bipolar disorder and angina who was receiving lithium therapy and was admitted after a positive stress test result. Her blood pressure was 128/85 mm Hg, and her heart rate was 82 beats/min. Evaluation was unremarkable except for thirst and a p[Na+] of 150 mEq/L. Without orthostasis or evidence of decreased tissue perfusion, the patient was given a diagnosis of hypertonicity brought on by acute water deprivation superimposed on lithium-induced nephrogenic diabetes insipidus.
She required intravenous water expansion with 5% dextrose before cardiac catheterization because the dye load and ensuing osmotic diuresis would have worsened the hypertonicity by producing urine with lower concentrations of sodium and potassium than are found in body fluids [39]. Assuming the expected restoration of p[Na+] to 140 mEq/L, the patient's free water deficit was calculated by using Equation 2 [5], as follows:

Water deficit = 0.5 L/kg × 70 kg × (150/140 − 1) = 2.5 L (Equation 3)

In addition, any urine output during treatment should be replaced in the same ratio of solute (sodium plus potassium) to water. If the patient's condition had actually been mislabeled as extracellular fluid volume depletion and 0.9% saline (154 mEq of Na+/L) had been administered instead, the infusate would have remained confined to the extracellular space and done little to correct the intracellular water deficit.
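The worked case above can be checked numerically. A minimal sketch of the free water deficit estimate; the function name and defaults are illustrative, with 0.5 L/kg being the adult-female total-body-water figure given in the text:

```python
# Free water deficit for the case in the text: a 70-kg woman with
# p[Na+] of 150 mEq/L, target 140 mEq/L, total body water 0.5 L/kg.

def free_water_deficit_l(weight_kg, p_na, target_na=140, tbw_per_kg=0.5):
    """Estimated deficit (L) = TBW x (measured p[Na+]/desired p[Na+] - 1)."""
    tbw = weight_kg * tbw_per_kg          # total body water in liters
    return tbw * (p_na / target_na - 1)

deficit = free_water_deficit_l(70, 150)
print(f"{deficit:.1f} L")  # -> 2.5 L
```

For an adult man the same calculation would use 0.6 L/kg, as the text notes; the estimate applies over and above any isotonic change in extracellular volume.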
Transplantation | 2002
Ergun Velidedeoglu; Noel N. Williams; Kenneth L. Brayman; Niraj M. Desai; Luis Campos; Maral Palanjian; Martin Wocjik; Roy D. Bloom; Robert A. Grossman; Kevin C. Mange; Clyde F. Barker; Ali Naji; James F. Markmann
Background. Minimally invasive donor nephrectomy has become a favored procedure for the procurement of kidneys from live donors. The optimal minimally invasive surgical approach has not been determined. In the current work, we compared the outcome of kidneys procured using the traditional open approach with two minimally invasive techniques: the standard laparoscopic procedure and a hand-assist procedure. Methods. The function of live-donor kidneys procured by open versus minimally invasive procedures was compared (the procedures compared were the traditional open donor nephrectomy [ODN], the standard laparoscopic [LAP] approach, and the hand-assisted [HA] laparoscopic technique). The length of the donor operation, donor length of stay in the hospital, surgical complications, and cost of hospitalization for the three groups of patients were assessed in a series of 150 live-donor nephrectomies. Results. We found that both minimally invasive procedures yielded kidney allografts with excellent early function and a minimum of complications in the donor. The open procedure was associated with a reduced operative time but increased donor length of stay in the hospital. Resource utilization analysis revealed that both minimally invasive techniques were associated with a slight increase in costs compared with the open procedure, despite a shorter hospital stay. Conclusions. Minimally invasive donor nephrectomy is safe and effective for procuring normally functioning organs for live-donor transplantation. Of the two minimally invasive approaches examined, the hand-assisted technique was found to afford a number of important advantages: it facilitates the teaching of residents and students, it is more readily mastered by transplant surgeons, and it may provide an additional margin of safety for the donor.
American Journal of Transplantation | 2004
Peter L. Abt; Kevin C. Mange; Kim M. Olthoff; James F. Markmann; K. Rajender Reddy; Abraham Shaked
Adult‐to‐adult living donor liver transplantation (AALDLT) is emerging as a method to treat patients with end‐stage liver disease. The aims of this study were to identify donor and recipient characteristics of AALDLT, to determine variables that affect allograft survival, and to examine outcomes compared with those achieved following cadaveric transplantation. Cox proportional hazards models were fit to examine characteristics associated with the survival of AALDLT. Survival of AALDLT allografts was then compared with that of cadaveric allografts in multivariable Cox models. Older donor age (>44 years), female‐to‐male donor‐to‐recipient relationship, recipient race, and the recipient medical condition before transplant were factors related to allograft failure among 731 AALDLT. Despite favorable donor and recipient characteristics, the rate of allograft failure, specifically the need for retransplantation, was increased among AALDLT (hazard ratio, 1.66; 95% CI, 1.30–2.11) compared with cadaveric recipients. In conclusion, among AALDLT recipients, selecting younger donors and placing the allografts in recipients who have not had a prior transplant and are not in the ICU may enhance allograft survival. Analysis of this early experience with AALDLT suggests that allograft failure may be higher than among recipients of a cadaveric liver.
Transplantation | 2004
Ergun Velidedeoglu; Kevin C. Mange; Adam Frank; Peter L. Abt; Niraj M. Desai; Joseph W. Markmann; Rajender Reddy; James F. Markmann
Background. Survival following liver transplantation for hepatitis C virus (HCV) is significantly poorer than for liver transplants performed for other causes of chronic liver disease. The factors responsible for the inferior outcome in HCV+ recipients, and whether they differ from factors associated with survival in HCV- recipients, are unknown. Methods. The UNOS database was analyzed to identify factors associated with outcome in HCV+ and HCV- recipients. Kaplan-Meier graft and patient survival and Cox proportional hazards analyses were conducted on 13,026 liver transplants to identify the variables that were differentially associated with survival in HCV- and HCV+ recipients. Results. Of the 13,026 recipients, 7386 (56.7%) were HCV- and 5640 were HCV+. In the HCV- and HCV+ recipient populations, five-year patient survival rates were 83.5% vs. 74.6% (P<0.00001) and five-year graft survival rates were 80.6% vs. 69.9% (P<0.00001), respectively. In a multivariate regression model, donor age and recipient creatinine were observed to be significant covariates in both groups, while donor race, cold ischemia time (CIT), female-to-male transplants, and recipient albumin were independent predictors of survival in HCV- recipients. In the HCV+ cohort, recipient race, warm ischemia time (WIT), and diabetes also independently predicted graft survival. Conclusions. A number of parameters are differentially correlated with outcome in HCV- and HCV+ recipients of orthotopic liver transplantation. These findings may not only have practical implications in the selection and management of liver transplant patients but may also shed new insight into the biology of HCV infection posttransplant.
American Journal of Transplantation | 2005
Ajay K. Israni; Harold I. Feldman; Kathleen J. Propert; Mary B. Leonard; Kevin C. Mange
Since 1988 over 10 000 simultaneous cadaveric pancreas–kidney transplants (SPK) have been performed in the United States among patients with end‐stage renal disease due to Type 1 diabetes (T1DM). The two aims of this study were to assess the impact on kidney allograft survival of (i) SPK versus transplantation of a kidney alone (KA), and (ii) SPK prior to versus after initiation of chronic dialysis. This retrospective, non‐concurrent cohort study examined registry data collected from 8323 patients waitlisted in the United States for an SPK and transplanted with either an SPK or a KA from January 1, 1990 to October 31, 2002. SPK recipients had an adjusted hazard ratio for kidney allograft loss of 0.63 (95% CI: 0.51–0.77, p < 0.001) compared to transplantation without a pancreas allograft. SPK recipients who received their allografts prior to beginning chronic dialysis had a lower rate of kidney allograft loss than SPK recipients who received their transplant after initiation of chronic dialysis (adjusted hazard ratio (HR) = 0.83, 95% CI: 0.69–0.99, p = 0.042). Simultaneous pancreas–kidney transplantation compared to kidney transplantation alone, and SPK prior to the initiation of chronic dialysis compared to SPK after initiation of dialysis, were both associated with longer kidney allograft survival.
Transplantation | 2009
Hans B. Lehmkuhl; Arizón Jm; Mario Viganò; Luis Almenar; Gino Gerosa; Massimo Maccherini; Shaida Varnous; Francesco Musumeci; J.Mark Hexham; Kevin C. Mange; Ugolino Livi
Background. Pharmacokinetic modeling supports trough monitoring of everolimus, but prospective data comparing this approach versus mycophenolate mofetil (MMF) in de novo cardiac transplant recipients are currently unavailable. Methods. In a 12-month multicenter open-label study, cardiac transplant patients received everolimus (trough level 3–8 ng/mL) with reduced cyclosporine A (CsA) or MMF (3 g/day) with standard CsA, both with corticosteroids ± induction therapy. Results. In total, 176 patients were randomized (everolimus 92, MMF 84). Mean creatinine clearance was 72.5±27.9 and 76.8±32.1 mL/min at baseline, 65.4±24.7 and 72.2±26.2 mL/min at month 6, and 68.7±27.7 and 71.8±29.8 mL/min at month 12 with everolimus and MMF, respectively. The primary endpoint was not met because calculated CrCl at month 6 posttransplant was 6.9 mL/min lower with everolimus, exceeding the predefined margin of 6 mL/min. However, by month 12 the between-group difference had narrowed versus baseline (3.1 mL/min). All efficacy endpoints were noninferior for everolimus versus MMF. The 12-month incidence of biopsy-proven acute rejection of International Society for Heart and Lung Transplantation grade 3A or higher was 21 of 92 (22.8%) with everolimus and 25 of 84 (29.8%) with MMF. Adverse events were consistent with class effects, including less-frequent cytomegalovirus infection with everolimus (4 [4.4%]) than with MMF (14 [16.9%], P=0.01). Conclusion. Concentration-controlled everolimus with reduced CsA results in similar renal function and equivalent efficacy compared with MMF with standard CsA at 12 months after cardiac transplantation.
Transplantation | 2004
Ergun Velidedeoglu; Roy D. Bloom; Michael D. Crawford; Niraj M. Desai; Luis Campos; Peter L. Abt; Joseph W. Markmann; Kevin C. Mange; Kim M. Olthoff; Abraham Shaked; James F. Markmann
Background. Acute and chronic renal dysfunction (ARD, CRD) are common complications after liver transplantation and are associated with poor outcome. Methods. We reviewed the results of 181 liver transplants performed in our institution between January 1, 1998 and December 31, 2000 in which the recipients were alive with good liver function at the end of the follow-up period (mean 2.7 years). Renal dysfunction was defined as a serum creatinine (Cr) greater than or equal to 2 mg/dL in both acute and chronic settings. Results. The incidence of ARD during the first posttransplant week was 39.2% (n=71), whereas late CRD occurred in 6.0% (n=11) of the patients by the end of the follow-up period. Among the variables we examined for association with CRD, five factors were found to be statistically significant in univariate analysis: pretransplant diabetes (PRTDM) (P<0.001), Cr greater than or equal to 2 mg/dL during the first postoperative week (P=0.003), posttransplant diabetes (POTDM) (P=0.014), age greater than 50 (P=0.025), and tacrolimus level greater than 15 ng/mL at postoperative day 15 (P=0.058). In binary logistic regression analysis, PRTDM (odds ratio [OR]=5.7, 95% confidence interval [CI]) and early postoperative ARD (OR=10.2, 95% CI) remained consistently significant. Nine of 11 patients with CRD also had a history of ARD during the first postoperative week. These patients progressed to CRD despite the fact that seven of nine had normalized their renal function by day 90 posttransplant. Conclusion. We suggest that a combination of events during the first postoperative week after liver transplant serves as a physiologic “stress test” for the kidneys. Patients who fail the test (peak Cr ≥2 mg/dL during the first postoperative week), as well as patients with diabetes mellitus, are at increased risk of CRD. In such cases, conversion to a less nephrotoxic regimen may be beneficial.