Nancy E. Fink
Johns Hopkins University
Publications
Featured research published by Nancy E. Fink.
Nature Genetics | 2008
W.H. Linda Kao; Michael J. Klag; Lucy A. Meoni; David Reich; Yvette Berthier-Schaad; Man Li; Josef Coresh; Nick Patterson; Arti Tandon; Neil R. Powe; Nancy E. Fink; John H. Sadler; Matthew R. Weir; Hanna E. Abboud; Sharon G. Adler; Jasmin Divers; Sudha K. Iyengar; Barry I. Freedman; Paul L. Kimmel; William C. Knowler; Orly F. Kohn; Kristopher Kramp; David J. Leehey; Susanne B. Nicholas; Madeleine V. Pahl; Jeffrey R. Schelling; John R. Sedor; Denyse Thornley-Brown; Cheryl A. Winkler; Michael W. Smith
As end-stage renal disease (ESRD) has a four times higher incidence in African Americans compared to European Americans, we hypothesized that susceptibility alleles for ESRD have a higher frequency in the West African than the European gene pool. We carried out a genome-wide admixture scan in 1,372 ESRD cases and 806 controls and found a highly significant association between excess African ancestry and nondiabetic ESRD (lod score = 5.70) but not diabetic ESRD (lod = 0.47) on chromosome 22q12. Each copy of the European ancestral allele conferred a relative risk of 0.50 (95% CI = 0.39–0.63) compared to African ancestry. Multiple common SNPs (allele frequencies ranging from 0.2 to 0.6) in the gene encoding nonmuscle myosin heavy chain type II isoform A (MYH9) were associated with two to four times greater risk of nondiabetic ESRD and accounted for a large proportion of the excess risk of ESRD observed in African compared to European Americans.
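For readers less familiar with admixture-scan statistics, the figures above can be translated into more familiar quantities: a lod score is the base-10 log of a likelihood ratio, so it converts to a likelihood-ratio chi-squared statistic as 2 ln(10) x lod, and under a multiplicative per-allele model the reported per-allele relative risk of 0.50 implies a two-copy relative risk of 0.25. A minimal sketch of that arithmetic (function names are mine; illustrative only):

```python
import math

def lod_to_chi2(lod: float) -> float:
    """Convert a lod score (log10 likelihood ratio) to a likelihood-ratio chi-squared statistic."""
    return 2.0 * math.log(10) * lod

def genotype_relative_risks(per_allele_rr: float) -> dict:
    """Relative risk by number of allele copies under a multiplicative per-allele model."""
    return {copies: per_allele_rr ** copies for copies in (0, 1, 2)}

print(lod_to_chi2(5.70))               # ~26.2, far beyond nominal significance
print(genotype_relative_risks(0.50))   # {0: 1.0, 1: 0.5, 2: 0.25}
```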
Journal of The American Society of Nephrology | 2002
J. Craig Longenecker; Josef Coresh; Neil R. Powe; Andrew S. Levey; Nancy E. Fink; Alice Martin; Michael J. Klag
Although atherosclerotic cardiovascular disease (ASCVD) risk in end-stage renal disease (ESRD) is 5 to 30 times that of the general population, few data exist comparing ASCVD risk factors among new dialysis patients to the general population. This cross-sectional study of 1041 dialysis patients describes the prevalence of ASCVD risk factors at the beginning of ESRD compared with estimates of ASCVD risk factors in the adult US population derived from the Third National Health and Nutrition Examination Survey (NHANES III). CHOICE Study participants had a high prevalence of diabetes (54%), hypertension (96%), left ventricular hypertrophy by electrocardiogram (EKG) criteria (22%), low physical activity (80%), hypertriglyceridemia (36%), and low HDL cholesterol (33%). CHOICE participants were more likely to be older, black, and male than NHANES III participants. After adjustment for age, race, gender, and ASCVD (defined as myocardial infarction, revascularization procedure, stroke, carotid endarterectomy, and amputation in CHOICE; and as myocardial infarction and stroke in NHANES III), diabetes, hypertension, left ventricular hypertrophy by EKG, low physical activity, low HDL cholesterol, and hypertriglyceridemia remained more common in CHOICE participants. Smoking, obesity, hypercholesterolemia, and high LDL cholesterol, however, were less common in CHOICE than NHANES III participants. The projected 5-yr ASCVD risk based on the Framingham Risk Equation among those older than 40 yr without ASCVD was higher in CHOICE Study participants (13%) than in the NHANES III participants (6%). In summary, many ASCVD risk factors are more prevalent in ESRD than in the general population and may explain some, but probably not all, of the increased ASCVD risk in ESRD.
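The projected 5-yr risk quoted above comes from the Framingham Risk Equation, which has the standard proportional-hazards form risk = 1 - S0^exp(sum of beta * (x - xbar)), where S0 is the baseline survival over the horizon. A sketch of that functional form only (the coefficients, covariates, and baseline survival below are placeholders, not the published Framingham values):

```python
import math

def framingham_style_risk(betas, covariates, cohort_means, baseline_survival):
    """Projected event risk over the horizon of baseline_survival:
    risk = 1 - S0 ** exp(sum of beta * (x - mean))."""
    linear_predictor = sum(
        b * (x - m) for b, x, m in zip(betas, covariates, cohort_means)
    )
    return 1.0 - baseline_survival ** math.exp(linear_predictor)

# Placeholder inputs: [age, systolic BP, total cholesterol], 5-yr baseline survival of 0.95
print(framingham_style_risk([0.05, 0.02, 0.003], [60, 150, 220], [50, 130, 200], 0.95))  # ~0.13
```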
JAMA | 2010
John K. Niparko; Emily A. Tobey; Donna J. Thal; Laurie S. Eisenberg; Nae Yuh Wang; Alexandra L. Quittner; Nancy E. Fink
CONTEXT Cochlear implantation is a surgical alternative to traditional amplification (hearing aids) that can facilitate spoken language development in young children with severe to profound sensorineural hearing loss (SNHL). OBJECTIVE To prospectively assess spoken language acquisition following cochlear implantation in young children. DESIGN, SETTING, AND PARTICIPANTS Prospective, longitudinal, and multidimensional assessment of spoken language development over a 3-year period in children who underwent cochlear implantation before 5 years of age (n = 188) from 6 US centers and hearing children of similar ages (n = 97) from 2 preschools recruited between November 2002 and December 2004. Follow-up completed between November 2005 and May 2008. MAIN OUTCOME MEASURES Performance on measures of spoken language comprehension and expression (Reynell Developmental Language Scales). RESULTS Children undergoing cochlear implantation showed greater improvement in spoken language performance (10.4; 95% confidence interval [CI], 9.6-11.2 points per year in comprehension; 8.4; 95% CI, 7.8-9.0 in expression) than would be predicted by their preimplantation baseline scores (5.4; 95% CI, 4.1-6.7, comprehension; 5.8; 95% CI, 4.6-7.0, expression), although mean scores were not restored to age-appropriate levels after 3 years. Younger age at cochlear implantation was associated with significantly steeper rate increases in comprehension (1.1; 95% CI, 0.5-1.7 points per year younger) and expression (1.0; 95% CI, 0.6-1.5 points per year younger). Similarly, each 1-year shorter history of hearing deficit was associated with steeper rate increases in comprehension (0.8; 95% CI, 0.2-1.2 points per year shorter) and expression (0.6; 95% CI, 0.2-1.0 points per year shorter). In multivariable analyses, greater residual hearing prior to cochlear implantation, higher ratings of parent-child interactions, and higher socioeconomic status were associated with greater rates of improvement in comprehension and expression. CONCLUSION The use of cochlear implants in young children was associated with better spoken language learning than would be predicted from their preimplantation scores.
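The comprehension and expression results above are rates of change (points per year) on the Reynell scales estimated from repeated assessments. A simplified sketch of how such a slope can be computed for one child (synthetic scores; the study itself used more elaborate longitudinal models):

```python
import numpy as np

# Synthetic repeated measures for one child: years since implantation and Reynell scores
years = np.array([0.0, 1.0, 2.0, 3.0])
scores = np.array([20.0, 31.0, 41.0, 52.0])

# Least-squares slope = estimated gain in points per year
slope, intercept = np.polyfit(years, scores, 1)
print(f"{slope:.1f} points per year")  # 10.6 here, on the order of the 10.4 group mean reported
```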
Annals of Internal Medicine | 2002
Kraig S. Kinchen; John H. Sadler; Nancy E. Fink; Ron Brookmeyer; Michael J. Klag; Andrew S. Levey; Neil R. Powe
Context In the United States, the 5-year survival of patients undergoing dialysis is 29%. Early nephrologist evaluation is associated with better outcomes, but 25% of patients first see a nephrologist within a month of beginning dialysis. Contribution Late nephrology evaluation (<4 months before start of dialysis) was most common among black men, uninsured patients, and patients with severe comorbid illness. The later the first evaluation by a nephrologist, the greater the risk for death. Clinical Implications Clinicians need a system to remind them to refer patients at an early stage of chronic renal failure, especially black men, the uninsured, or patients with severe comorbid illness. The Editors In the United States, approximately 300 000 persons have treated end-stage renal disease (ESRD) and an estimated 800 000 persons have a serum creatinine concentration of 177 µmol/L (2.0 mg/dL) or greater (1, 2). Annual U.S. spending related to treatment of ESRD exceeds $15 billion (3).
Yet, outcomes for patients with ESRD remain relatively poor, with a 5-year survival rate of about 29% for patients undergoing dialysis (4). Whether better care of patients with ESRD earlier in their disease course improves outcomes is under increased investigation (5, 6). Many patients with chronic kidney disease may benefit from beginning their care with primary care physicians. As in the management of other chronic diseases, primary care physicians must decide whether evaluation by a specialist might improve care and, if so, when in the disease course specialist evaluation is most appropriate. Although 39% of patients undergoing hemodialysis and 52% of those undergoing peritoneal dialysis are evaluated by a nephrologist more than 1 year before dialysis, 25% and 16%, respectively, of such patients are seen less than 1 month before dialysis (7). One argument for early evaluation by a nephrologist is that management of chronic renal insufficiency and its complications, such as anemia and renal osteodystrophy, may be improved. Early evaluation might facilitate improved patient education about dialysis; provide more time to make an informed choice about the type of dialysis; and allow timely placement of permanent vascular access, which is associated with better dialysis and fewer complications compared with temporary access (8, 9). Late evaluation is associated with a higher risk for unplanned first dialysis, more complications, higher hospital costs, and longer duration of hospitalization in the first 3 months of dialysis (10-13). Most previous studies of late evaluation were done in countries other than the United States, involved only one center, or have had relatively short follow-up (10, 11, 13-16). We sought to determine the patient factors that are associated with late evaluation by a nephrologist in the United States and the effect of late evaluation on mortality. Methods Study Design and Sample We conducted a national, concurrent, prospective cohort study as part of the Choices for Healthy Outcomes in Caring for ESRD (CHOICE) Study. (For a list of all investigators on the CHOICE Study, see the Appendix.) Between October 1995 and June 1998, 1041 patients undergoing incident dialysis were enrolled at 81 dialysis clinics in 19 states (79 clinics affiliated with Dialysis Clinics Incorporated and 2 clinics affiliated with Beth Israel Medical System) (17). Median time from initiation of dialysis to enrollment was 45 days, and 98% of enrollment took place within 4 months of initial dialysis. Patients were excluded if they were younger than 18 years of age or did not speak English or Spanish. The Johns Hopkins University School of Medicine Institutional Review Board and the review boards of each clinical center approved the CHOICE protocol. The CHOICE Study was designed to examine the choices that patients and providers make in initiation and maintenance of renal replacement therapy, particularly the choice of hemodialysis versus peritoneal dialysis. Data Collection At enrollment, patients completed a baseline questionnaire on medical and social history and provided the month and year in which they first visited a nephrologist. Demographic data, insurance information, the assigned cause of renal failure, baseline laboratory values, and the date of initial dialysis were obtained from the Centers for Medicare & Medicaid Services medical evidence form. A trained research nurse abstracted medical records to determine the Index of Disease Severity score for 19 medical conditions.
The Index of Physical Impairment score, a measure of impairment in 11 areas, was assessed by clinic staff. The Index of Physical Impairment and Index of Disease Severity were combined to form the Index of Coexistent Disease, a measure of the burden and severity of comorbid disease that is scored from 0 or 1 (mild coexistent disease) to 3 (severe coexistent disease) (18-21). Information from clinic reports and the Centers for Medicare & Medicaid Services was used to determine the date of death. The time between first evaluation by a nephrologist and the date of first dialysis is referred to as the time of evaluation and was categorized as late (<4 months), intermediate (4 to 12 months), or early (>1 year). The 4-month cutoff for late evaluation has been used in other studies (13, 22). The 12-month cutoff for early evaluation was chosen because some experts recommend 1 year as the minimal time necessary to prepare a patient adequately for dialysis (23). We reviewed available medical records for patients having late evaluation and a 10% sample of remaining charts. In 13 cases, time of evaluation was adjusted from late to intermediate or early evaluation. Three hundred thirty-four patients did not answer the question on time of evaluation. On the basis of medical record review, 70 of these patients were categorized as having early evaluation and 51 as having intermediate evaluation. For 182 of the 334 patients, the medical records showed no definite evidence to indicate evaluation by a nephrologist more than 4 months before dialysis. These patients, along with 31 patients without available medical records, were categorized as having missing data. The total sample comprises 828 patients. Statistical Analysis Characteristics of the sample stratified by time of evaluation were compared by using chi-square tests and analysis of variance, as appropriate. Distributions of patients according to the time of evaluation were compared by using the Wilcoxon rank-sum test for two-category comparisons and the Kruskal-Wallis test for multiple comparisons. Unadjusted percentages of patients having late evaluation were calculated for different characteristics. Multivariate logistic regression was performed to determine the presence, magnitude, and independence of the association between patient characteristics and late evaluation. We considered the potential effect of multiple centers on our analyses (24). Given the possibility of confounding by clinic, all logistic regression analyses were conditioned on clinic (25). Patient characteristics that were significantly associated (P < 0.10) with late evaluation in univariate analysis were considered potential confounders in the multivariate conditional logistic regression model. Adjusted percentages were calculated on the basis of the adjusted odds ratios derived from logistic regression (26) and the relevant unadjusted frequencies of the reference group in each analysis. A Hosmer-Lemeshow test was used to assess model adequacy. We used Cox proportional-hazards regression to test the presence, strength, and independence of the association between time of evaluation and mortality. Survival time was calculated from the date of first dialysis. Patients were considered to be under observation from time of enrollment until death or 30 April 2000. Patients were censored if they received a transplant, changed to a non-CHOICE clinic, or declined to participate further in the study.
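The adjusted percentages mentioned above are obtained by applying an adjusted odds ratio to the unadjusted frequency in the reference group: convert the reference rate to odds, scale by the odds ratio, and convert back. A small sketch of that conversion (function name and example values are mine):

```python
def adjusted_percentage(reference_rate: float, adjusted_odds_ratio: float) -> float:
    """Apply an adjusted odds ratio to a reference-group rate via the odds scale."""
    odds = reference_rate / (1.0 - reference_rate)
    adjusted_odds = adjusted_odds_ratio * odds
    return adjusted_odds / (1.0 + adjusted_odds)

# e.g., 20% late evaluation in the reference group and an adjusted OR of 2.0 -> 33%
print(adjusted_percentage(0.20, 2.0))
```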
To account for the possibility that differing standards of care at the various clinics explained differences in survival, all proportional hazards models were stratified by clinic (27). We first included nonmodifiable risk factors in the regression models (demographic characteristics and socioeconomic status) and then added potentially modifiable risk factors (smoking, exercise, comorbid conditions and disease severity, and laboratory values) to examine whether these factors explained associations between time of evaluation and mortality. Factors included in the final Cox proportional-hazards model were significantly associated with mortality (P < 0.10) in univariate analysis or had been shown in the literature to have a clinically important association with mortality. Sex and type of dialysis were included a priori. Some modifiable factors may appear in the causal pathway between late evaluation and mortality. Their addition to the model might explain any observed associations. Thus, hematocrit less than 0.3 and hypoalbuminemia (serum albumin level < 36 g/L) were added to the model as potentially modifiable risk factors because previous studies have shown a relationship between these factors and ESRD mortality (28, 29). The glomerular filtration rate, which was calculated according to the Modification of Diet in Renal Disease equation (30), was included to adjust for renal function. We performed sensitivity analyses to explore the effect of assumptions about missing data on time of evaluation and the effect of alternative categorizations of evaluation time. Statistical analyses were performed by using Stata software, version 6.0 (Stata Corp., College Station, Texas). Role of the Funding Source The project was funded by a grant from the Agency for Healthcare Research and Quality (Dr. Powe, principal investigator), the Robert Wood Johnson Clinical Scholars Program (Dr. Kinchen), and the National Institute of Diabetes and Digestive and Kidney Diseases (Drs. Powe and Klag).
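The renal-function adjustment above used the Modification of Diet in Renal Disease (MDRD) study equation. A sketch of the commonly cited 4-variable form (the original 186-coefficient version; argument names are mine):

```python
def mdrd_gfr(serum_creatinine_mg_dl: float, age_years: float,
             female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2, 4-variable MDRD study equation."""
    gfr = 186.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

print(mdrd_gfr(2.0, 60, female=False, black=True))  # ~44 mL/min/1.73 m^2
```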
Ophthalmology | 1994
Maureen G. Maguire; Walter J. Stark; John D. Gottsch; R. Doyle Stulting; Alan Sugar; Nancy E. Fink; Ann Schwartz
PURPOSE To evaluate comprehensively the magnitude of suspected risk factors for corneal graft failure from any cause, failure from rejection, and immunologic reaction in patients at high risk for graft failure after corneal transplantation. METHODS The records of the 457 participants in the Collaborative Corneal Transplantation Studies were reviewed. All participants had at least two quadrants of stromal vascularization and/or a history of previous graft rejection. Patients were followed for 2 to 5 years. Characteristics of the patient, study eye, donor, donor-recipient histocompatibility, and surgical procedure were examined for their association with the graft outcomes of failure from any cause, rejection failure, and immunologic reaction. Multivariate survival analysis techniques were used to estimate rates of graft outcome events and to estimate the magnitude of risk factors. RESULTS Many apparent risk factors did not maintain their association with graft outcomes after adjustment for other risk factors. Young recipient age, the number of previous grafts, history of previous anterior segment surgery, preoperative glaucoma, quadrants of anterior synechiae, quadrants of stromal vessels, a primary diagnosis of chemical burn, and blood group ABO incompatibility were among the strongest risk factors identified for graft failure. Donor and corneal preservation characteristics had little influence on graft outcome. CONCLUSIONS Risk of graft failure varies substantially, even within a high-risk population. The number of risk factors present should be considered by the patient and surgeon when contemplating transplantation and planning follow-up.
Journal of The American Society of Nephrology | 2005
Brad C. Astor; Joseph A. Eustace; Neil R. Powe; Michael J. Klag; Nancy E. Fink; Josef Coresh
Arteriovenous fistulae (AVF) have advantages over arteriovenous grafts (AVG) and central venous catheters (CVC), but whether AVF are associated independently with better survival is unclear. Recent studies showing such a survival benefit did not include early access experience or account for changes in access type over time and did not include data on some important confounders. Reported here are survival rates stratified by the type of access in use up to 3 yr after initiation of hemodialysis among 616 incident patients who were enrolled in the Choices for Healthy Outcomes in Caring for ESRD (CHOICE) Study. A total of 1084 accesses (185 AVF, 296 AVG, 603 CVC) were used for a total of 1381 person-years. At initiation, 409 (66%) patients were using a CVC, 122 (20%) were using an AVG, and 85 (14%) were using an AVF. After 6 mo, 34% were using a CVC, 40% were using an AVG, and 26% were using an AVF. Annual mortality rates were 11.7% for AVF, 14.2% for AVG, and 16.1% for CVC. Adjusted relative hazards (RH) of death compared with AVF were 1.5 (95% confidence interval, 1.0 to 2.2) for CVC and 1.2 (0.8 to 1.8) for AVG. The increased hazards associated with CVC, as compared with AVF, were stronger in men (n = 334; RH = 2.0; P = 0.01) than women (n = 282; RH = 1.0 for CVC; P = 0.92). These results strongly support existing clinical practice guidelines and suggest that the use of venous catheters should be minimized to reduce the frequency of access complications and to improve patient survival, especially among male hemodialysis patients.
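The adjusted relative hazards above come from proportional-hazards models of time to death by access type, with AVF as the reference. A minimal sketch of that kind of analysis using the lifelines package on synthetic data (variable names, indicator coding, and the assumed hazards are mine, not the study's):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
access = rng.choice(["AVF", "AVG", "CVC"], size=n)
# Assumed annual hazards echoing the ordering of the mortality rates reported above
hazard = np.where(access == "AVF", 0.12, np.where(access == "AVG", 0.14, 0.16))
event_time = rng.exponential(1.0 / hazard)
censor_time = rng.uniform(0.5, 3.0, size=n)  # administrative censoring within 3 yr

df = pd.DataFrame({
    "years": np.minimum(event_time, censor_time),
    "death": (event_time <= censor_time).astype(int),
    "cvc": (access == "CVC").astype(int),    # AVF is the omitted reference category
    "avg": (access == "AVG").astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="death")
print(cph.hazard_ratios_)  # hazards of death for CVC and AVG relative to AVF
```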
Annals of Internal Medicine | 2005
Bernard G. Jaar; Josef Coresh; Laura C. Plantinga; Nancy E. Fink; Michael J. Klag; Andrew S. Levey; Nathan W. Levin; John H. Sadler; Alan S. Kliger; Neil R. Powe
Context Does dialysis method affect survival of patients with end-stage renal disease? Contribution At 81 clinics in 19 states, 25% of the patients receiving peritoneal dialysis and 5% of those receiving hemodialysis switched methods at least once within 7 years. Patients initiating treatment with peritoneal dialysis appeared healthier and of higher socioeconomic status than did those receiving hemodialysis. Analyses that adjusted for baseline differences found statistically significantly higher risks for death among patients receiving peritoneal dialysis compared with those receiving hemodialysis during the second, but not first, year of treatment. Cautions This prospective study of incident dialysis was not a randomized trial. The Editors The burden of end-stage renal disease (ESRD) in the United States has increased dramatically over the past 30 years, with the number of patients treated for ESRD with dialysis or transplantation reaching more than 400 000 by the end of 2001 (1). The number of patients requiring renal replacement therapy is projected to exceed 2 million patients by 2030 (1). Although kidney transplantation remains the best treatment option for eligible patients with ESRD (2), rates of kidney donation have not kept pace with the number of cases, leading to an increase in the number of patients on waiting lists (1). Thus, most patients with ESRD, including those eligible for kidney transplantation, must select a type of dialysis for renal replacement therapy. Since the introduction of peritoneal dialysis in the mid-1970s, several studies have tried to assess differences in survival between patients undergoing peritoneal dialysis and those undergoing hemodialysis, but the influence of type of dialysis on survival of patients with ESRD or subgroups of these patients remains controversial. Limitations of previously published studies include the enrollment of patients who had been receiving dialysis therapy for different amounts of time after the onset of ESRD (3), noncontemporary cohorts of patients undergoing incident dialysis (3-6), short follow-up (1 to 2 years) (3, 7-12), and limited information on comorbid conditions (3, 6, 7, 9, 13). Furthermore, many of these studies used only intention-to-treat analyses that do not account for switches in type of dialysis that occur over time (3-5, 7-9, 13). We performed a comprehensive comparative study of survival in patients undergoing peritoneal dialysis and those undergoing hemodialysis. Methods Study Design and Sample We conducted a national prospective cohort study of patients undergoing incident dialysis. Eligibility criteria for enrollment into the Choices for Healthy Outcomes in Caring for ESRD (CHOICE) study included ability to provide informed consent for participation, age older than 17 years, and ability to speak English or Spanish. Median time from initiation of dialysis to enrollment was 45 days; 98% of patients enrolled within 4 months of initial dialysis. Enrolled patients were oversampled for peritoneal dialysis to allow statistical comparisons by type of dialysis. The institutional review boards of Johns Hopkins University and the clinical centers approved the study protocol, and all participants gave written informed consent. From October 1995 to June 1998, 1041 participants from 19 U.S. states were enrolled at 81 dialysis clinics associated with Dialysis Clinic, Inc., Nashville, Tennessee (923 patients), and New Haven Continuous Ambulatory Peritoneal Dialysis (86 patients) or St.
Raphael's Hospital, New Haven, Connecticut (32 patients). Data Collection Demographic and Clinical Data At baseline, patients used a questionnaire to self-report demographic characteristics, health behaviors, work history, medical history, preparation for dialysis, social support, and distance to the dialysis unit. Baseline data on routine care were available for serum albumin level, hemoglobin level, calcium-phosphate product, and total cholesterol level. High-sensitivity C-reactive protein was measured at a median of 5.0 months from initiation of dialysis by using a colorimetric competitive enzyme-linked immunosorbent assay (coefficient of variation, 8.9%). Residual urine output was defined as the ability to produce at least 250 mL of urine daily; this information was obtained from the baseline self-report questionnaire. Glomerular filtration rate before initiation of dialysis was estimated from the Modification of Diet in Renal Disease equation by using the serum creatinine concentration obtained from the ESRD Medical Evidence Report (Form 2728) (14). Dialysis Technique Dialysis technique at baseline was defined as the type of dialysis being used at 4 weeks after enrollment in the study (an average of 10 weeks after starting dialysis) and was obtained from clinic records. All forms of peritoneal dialysis (continuous ambulatory peritoneal dialysis, continuous cycling peritoneal dialysis, and intermittent cycling peritoneal dialysis) were combined as a single category. Patients were considered to have switched technique when they changed from one type of dialysis to another and continued to use the new technique for at least 30 days. Assessment of Comorbid Conditions Comorbid conditions were assessed by using the Index of Coexistent Disease, a medical record-derived index that has been demonstrated to predict death in patients undergoing dialysis (15, 16). Scores on this index range from 0 (no comorbid condition) to 3 (highest severity of comorbid conditions). The Index of Coexistent Disease aggregates the presence and severity of comorbid conditions within 2 scales: the Index of Disease Severity and the Index of Physical Impairment. The Index of Disease Severity consists of 19 categories of medical conditions (ischemic heart disease, congestive heart failure, arrhythmias, other heart disease, hypertension, cerebrovascular disease, peripheral vascular disease, diabetes mellitus, respiratory disease, cancer, hepatobiliary disease, gastrointestinal disease, nonvascular neurological disease, musculoskeletal disease, hematologic disease, HIV or AIDS, anticoagulation, urogenital disease, and ophthalmologic disease), with 4 levels of severity for each condition. Information for the Index of Disease Severity was abstracted from dialysis unit records, hospital discharge summaries, medication lists, consultation notes, diagnostic imaging, and cardiac imaging reports. These data were collected at each dialysis unit, photocopied, and sent to New England Medical Center for abstraction and scoring. The Index of Physical Impairment is an observer-based assessment of 11 functional domains (circulation, respiration, neurologic function, mental function, urinary elimination, bowel elimination, feeding, ambulation, vision, hearing, and speech), each with 3 severity levels. The Index of Physical Impairment was completed by a local dialysis nurse who was familiar with the patient's level of functioning, with input from a family member or caregiver if necessary.
Two dialysis nurses with previous training and experience in using the Index of Coexistent Disease reviewed and scored all charts. The reliability of data abstraction and severity scoring was assessed by using a masked recoding of 45 charts. Interrater reliability, as assessed by the κ statistic, was excellent for the Index of Disease Severity score (κ = 0.93), maximum Index of Disease Severity score (κ = 0.84), and maximum Index of Physical Impairment score (κ = 1.0). Statistical Analysis We compared characteristics of patients undergoing peritoneal dialysis with those of patients undergoing hemodialysis by using analysis of variance for continuous variables and the Pearson chi-square test for categorical variables. The C-reactive protein level was log-transformed to reduce skewness of distribution. Patients were followed for death for up to 7 years. We used survival analysis to assess the presence, direction, strength, and independence of an association between dialysis technique and death. We used stratified Cox proportional hazards models to assess the risk for death among patients undergoing peritoneal dialysis versus those undergoing hemodialysis, independent of differences in demographic characteristics (age, sex, ethnicity, and employment status), clinical factors (smoking status, score on the Index of Coexistent Disease, diabetes status, history of cardiovascular disease, primary cause of renal failure, late referral to a nephrologist [<4 months from first evaluation by a nephrologist to initiation of dialysis], body mass index, and baseline residual urine output), and laboratory values (serum albumin level, hemoglobin level, calcium-phosphate product, total cholesterol level, C-reactive protein level, and creatinine concentration). We censored patients at transplantation or loss to follow-up. As the main analysis, we used intention-to-treat models based on the type of dialysis at baseline. Within the limitations of an observational study, this analysis was an attempt to replicate the intention-to-treat analysis in a clinical trial. If we could have adjusted for all of the factors that determine the choice of initial treatment technique, our results would mirror those of a randomized, controlled trial. We believe that this is the most important matter from the clinical point of view, because the real therapeutic choice for the clinician and the patient occurs primarily at the time of initiation of dialysis, whereas future switches may be motivated by treatment failures over which clinicians have little control. For the intention-to-treat models, we used multivariate Cox models that included all covariates, as well as models that incorporated dialysis technique and a technique propensity score. The propensity score, an established method used to address selection bias due to observed factors, is the estimated probability of being treated initially with peritoneal dialysis rather than hemodialysis. This propensity score, which
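The propensity score defined above, i.e. the estimated probability of initially receiving peritoneal dialysis rather than hemodialysis, is typically fitted with a logistic regression on baseline covariates. A minimal sketch using scikit-learn (synthetic data; the covariates and selection pattern are assumptions, not the study's model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
age = rng.normal(60, 12, n)
comorbidity = rng.integers(0, 4, n)  # e.g., an Index of Coexistent Disease score of 0-3

# Assumed selection pattern: younger, less comorbid patients start PD more often
logit = -0.5 - 0.03 * (age - 60) - 0.4 * comorbidity
started_pd = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, comorbidity])
model = LogisticRegression().fit(X, started_pd)

# Propensity score: fitted probability of initial peritoneal dialysis for each patient
propensity = model.predict_proba(X)[:, 1]
print(propensity[:5])
```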
Journal of The American Society of Nephrology | 2004
Albert W. Wu; Nancy E. Fink; Jane Marsh-Manzi; Klemens B. Meyer; Frederic O. Finkelstein; Michelle M. Chapman; Neil R. Powe
Despite more than 20 yr of use, relative differences in health-related quality of life (HRQOL) between hemodialysis (HD) and peritoneal dialysis (PD) are not clearly known. The objective of this study was to compare self-reported HRQOL and overall health status for HD and PD patients at the initiation of dialysis therapy and 1 yr later. A prospective cohort of incident ESRD patients was enrolled between October 1995 and June 1998 at 81 outpatient dialysis units in 19 states and included 698 HD and 230 PD patients who completed a baseline CHOICE Health Experience Questionnaire. The main outcome measured was change in quality-of-life scores from start of dialysis to 1 yr on dialysis and overall health status. Of 928 patients who completed the baseline questionnaire, 585 also completed the 12-mo questionnaire; 101 had died, 55 had received a kidney transplant, and 88 had moved to a new dialysis clinic. PD patients were slightly younger, more likely to be white, well-educated, employed, and married, and had less comorbidity and higher hematocrit. Unadjusted baseline scores showed better HRQOL for PD patients in both generic and ESRD domains (bodily pain, travel, diet restrictions, and dialysis access [P < 0.05]). At 1 yr, SF-36 scores improved, whereas some ESRD domains improved and others deteriorated. HD patients had greater improvements in two SF-36 domains (physical functioning and general health perception) than PD patients, but results were mixed for ESRD domains (PD was better for finances; HD was better for sleep and overall quality of life). HD and PD patients did not differ in change in overall health status. HD and PD are associated with similar HRQOL outcomes at 1 yr. Generic HRQOL in two domains improved more for HD patients. However, for ESRD-specific HRQOL, results were not consistent; some domains were better for PD patients whereas others were better for HD patients. In advising patients about modality choices, trade-offs should be discussed and individual preferences for specific aspects of HRQOL should be elicited.
American Journal of Kidney Diseases | 2010
Tariq Shafi; Bernard G. Jaar; Laura C. Plantinga; Nancy E. Fink; John H. Sadler; Rulan S. Parekh; Neil R. Powe; Josef Coresh
BACKGROUND Residual kidney function (RKF) is associated with improved survival in peritoneal dialysis patients, but its role in hemodialysis patients is less well known. Urine output may provide an estimate of RKF. The aim of our study is to determine the association of urine output with mortality, quality of life (QOL), and inflammation in incident hemodialysis patients. STUDY DESIGN Nationally representative prospective cohort study. SETTING & PARTICIPANTS 734 incident hemodialysis participants treated in 81 clinics; enrollment, 1995-1998; follow-up until December 2004. PREDICTOR Urine output, defined as producing at least 250 mL (1 cup) of urine daily, ascertained using questionnaires at baseline and year 1. OUTCOMES & MEASUREMENTS Primary outcomes were all-cause and cardiovascular mortality, analyzed using Cox regression adjusted for demographic, clinical, and treatment characteristics. Secondary outcomes were QOL, inflammation (C-reactive protein and interleukin 6 levels), and erythropoietin (EPO) requirements. RESULTS 617 of 734 (84%) participants reported urine output at baseline, and 163 of 579 (28%), at year 1. Baseline urine output was not associated with survival. Urine output at year 1, indicating preserved RKF, was independently associated with lower all-cause mortality (HR, 0.70; 95% CI, 0.52-0.93; P = 0.02) and a trend toward lower cardiovascular mortality (HR, 0.69; 95% CI, 0.45-1.05; P = 0.09). Participants with urine output at baseline reported better QOL and had lower C-reactive protein (P = 0.02) and interleukin 6 (P = 0.03) levels. Importantly, EPO dose was 12,000 U/wk lower in those with urine output at year 1 compared with those without (P = 0.001). LIMITATIONS Urine volume was measured in only a subset of patients (42%), but agreed with self-report (P < 0.001). CONCLUSIONS RKF in hemodialysis patients is associated with better survival and QOL, lower inflammation, and significantly less EPO use. RKF should be monitored routinely in hemodialysis patients. The development of methods to assess and preserve RKF is important and may improve dialysis care.
Kidney International | 2008
Rulan S. Parekh; Laura C. Plantinga; W.H. Linda Kao; Lucy A. Meoni; Bernard G. Jaar; Nancy E. Fink; Neil R. Powe; Josef Coresh; Michael J. Klag