Camille A. Jones
National Institutes of Health
Publications
Featured research published by Camille A. Jones.
American Journal of Kidney Diseases | 1998
Camille A. Jones; Geraldine M. McQuillan; John W. Kusek; Mark S. Eberhardt; William H. Herman; Josef Coresh; Marcel E. Salive; Camara P. Jones; Lawrence Y. Agodoa
This report describes the distribution of serum creatinine levels by sex, age, and ethnic group in a representative sample of the US population. Serum creatinine level was evaluated in the third National Health and Nutrition Examination Survey (NHANES III) in 18,723 participants aged 12 years and older who were examined between 1988 and 1994. Differences in mean serum creatinine levels were compared for subgroups defined by sex, age, and ethnicity (non-Hispanic white, non-Hispanic black, and Mexican-American). The mean serum creatinine value was 0.96 mg/dL for women in the United States and 1.16 mg/dL for men. Overall mean creatinine levels were highest in non-Hispanic blacks (women, 1.01 mg/dL; men, 1.25 mg/dL), lower in non-Hispanic whites (women, 0.97 mg/dL; men, 1.16 mg/dL), and lowest in Mexican-Americans (women, 0.86 mg/dL; men, 1.07 mg/dL). Mean serum creatinine levels increased with age among both men and women in all three ethnic groups, with total US mean levels ranging from 0.88 to 1.10 mg/dL in women and 1.00 to 1.29 mg/dL in men. The highest mean creatinine level was seen in non-Hispanic black men aged 60+ years. In the total US population, creatinine levels of 1.5 mg/dL or greater were seen in 9.74% of men and 1.78% of women. Overall, among the US noninstitutionalized population, 10.9 million people are estimated to have creatinine values of 1.5 mg/dL or greater, 3.0 million have values of 1.7 mg/dL or greater, and 0.8 million have serum creatinine levels of 2.0 mg/dL or greater. Mean serum creatinine values are higher in men, non-Hispanic blacks, and older persons and are lower in Mexican-Americans. In the absence of information on glomerular filtration rate (GFR) or lean body mass, it is not clear to what extent the variability by sex, ethnicity, and age reflects normal physiological differences rather than the presence of kidney disease. Until this information is known, the use of a single cutpoint to define elevated serum creatinine values may be misleading.
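The abstract's closing caution about a single cutpoint can be made concrete with a toy calculation: if each subgroup's creatinine values were roughly normal around its reported mean, one fixed cutpoint flags very different fractions of each group. A minimal Python sketch; the subgroup means are from the abstract, but the common standard deviation of 0.2 mg/dL is a hypothetical value chosen for illustration, not a figure from the paper.

```python
import math

def fraction_above(cutpoint, mean, sd):
    """Upper-tail fraction of a normal(mean, sd) distribution above cutpoint."""
    z = (cutpoint - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Subgroup means (mg/dL) from the abstract; SD of 0.2 mg/dL is hypothetical.
means = {
    "women (US overall)": 0.96,
    "men (US overall)": 1.16,
    "non-Hispanic black men": 1.25,
    "Mexican-American women": 0.86,
}
for group, mean in means.items():
    pct = 100 * fraction_above(1.5, mean, 0.2)
    print(f"{group}: {pct:.1f}% above 1.5 mg/dL")
```

Under these toy assumptions the flagged fraction varies by more than an order of magnitude across subgroups, which is the paper's point about a one-size-fits-all threshold.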
American Journal of Kidney Diseases | 2000
Robert A. Wolfe; Valarie B. Ashby; John T. Daugirdas; Lawrence Y. Agodoa; Camille A. Jones; Friedrich K. Port
This study investigates the role of body size on the mortality risk associated with dialysis dose in chronic hemodialysis patients. A national US random sample from the US Renal Data System was used for this observational longitudinal study of 2-year mortality. Prevalent hemodialysis patients treated between 1990 and 1995 were included (n = 9,165). A Cox proportional hazards model, adjusting for patient characteristics, was used to calculate the relative risk (RR) for mortality. Both dialysis dose (equilibrated Kt/V [eKt/V]) and body size (body weight, body volume, and body mass index) were independently and significantly (P < 0.01 for each measure) inversely related to mortality when adjusted for age and diabetes. Mortality was less among larger patients and those receiving greater eKt/V. The overall association of mortality risk with eKt/V was negative and significant in all patient subgroups defined by body size and by race-sex categories in the range 0.6 < eKt/V < 1.6. The association was negative in the restricted range 0.9 < eKt/V < 1.6 (although not generally significant) for all body-size subgroups and for three of four race-by-sex subgroups, excepting black men (RR = 1.003 per 0.1 eKt/V; P > 0.95). These findings suggest that dose of dialysis and several measures of body size are important and independent correlates of mortality. These results suggest that patient management protocols should attempt to ensure both good patient nutrition and adequate dose of dialysis, in addition to managing coexisting medical conditions.
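Under a Cox proportional hazards model, a relative risk reported per 0.1 eKt/V scales multiplicatively to other increments of the covariate. A small sketch of that arithmetic, assuming only proportional hazards; the example number reuses the abstract's essentially flat black-men estimate (RR = 1.003 per 0.1 eKt/V).

```python
import math

def rr_over(delta, rr_per_unit, unit=0.1):
    """Rescale a proportional-hazards relative risk reported per `unit`
    of a covariate to an arbitrary increment `delta` of that covariate."""
    beta = math.log(rr_per_unit) / unit  # log-hazard slope per 1.0 covariate unit
    return math.exp(beta * delta)

# RR over a full 1.0 increase in eKt/V, given RR = 1.003 per 0.1 eKt/V:
print(f"{rr_over(1.0, 1.003):.3f}")  # ≈ 1.030, i.e. 1.003**10
```

The near-unity result illustrates why the black-men subgroup association was reported as flat: even across a full eKt/V unit, the implied hazard barely moves.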
Urology | 1998
Camille A. Jones; Leroy M. Nyberg
OBJECTIVES To discuss what is currently known about the population prevalence of interstitial cystitis (IC) and demographic characteristics of IC patients. METHODS Changes over time in the criteria for diagnosis of IC are described. The 3 published studies of the population prevalence of IC are reviewed. Epidemiologic issues important in the design of studies of IC are cited. RESULTS IC is a disease of chronic voiding symptoms. Very little reliable information has been published on its etiology, risk factors, or the number of persons affected. The criteria used for diagnosis of IC by different investigators have been variable. In 1988, research criteria for a case definition of IC were published, to be applied to IC patients enrolled in National Institutes of Health (NIH)-funded studies. Three published studies of the population prevalence of IC are available. Each used different criteria for defining a case of IC, and none used the NIH research criteria. Prevalence estimates for IC vary significantly: from 10 cases/100,000 reported in Finland in 1975 (based on hospital record review), to 30/100,000 in the United States in 1987 (based on a mailed survey of board-certified urologists), to 510 cases/100,000 in the United States in 1989 (based on participant self-report in the 1989 National Health Interview Survey). It is unclear to what extent these estimates represent true differences in prevalence rather than the different methods used to define an IC case. Several investigators have reported demographic characteristics of the IC patients followed in their clinics. All studies of adults show a marked female predominance, with onset of symptoms generally reported in the middle years of life. Patients may experience a delay of years from the onset of symptoms to the time of definitive diagnosis. The natural history of IC symptoms has been reported to be a subacute onset with a rapid peak in severity, followed by a relatively constant plateau of chronic symptoms. However, many patients do experience remissions and flares in their symptoms. CONCLUSIONS Few therapies for IC have been evaluated using rigorous epidemiologic methods. Many questions remain to be answered. New studies of IC should include epidemiologic consultation at the study design stage.
Hypertension | 1997
Lee A. Hebert; John W. Kusek; Tom Greene; Lawrence Y. Agodoa; Camille A. Jones; Andrew S. Levey; Julia A. Breyer; Pierre Faubert; Henry A. Rolin; Shin-Ru Wang
African Americans (blacks) have a disproportionately high incidence of end-stage renal disease due to hypertension. The Modification of Diet in Renal Disease (MDRD) Study found that strict blood pressure control slowed the decline in glomerular filtration rate (GFR) only in the subgroup of patients with proteinuria. The present report compares the effects of blood pressure control in black and white MDRD Study participants. Fifty-three black and 495 white participants with baseline GFRs of 25 to 55 mL/min/1.73 m2 were randomly assigned to a usual or low mean arterial pressure (MAP) goal of ≤107 or ≤92 mm Hg, respectively. GFR decline was compared between randomized groups and correlated with the level of achieved blood pressure. The mean (±SE) GFR decline over 3 years in the low blood pressure group was 11.8±7.3 mL/min slower than in the usual blood pressure group among blacks (P=.11), compared with 0.3±1.3 mL/min slower among whites (P=.81) (P=.12 between blacks and whites). In both blacks and whites, higher baseline urine protein excretion was associated with a greater beneficial effect of the low MAP goal on GFR decline (P=.02 for both races). Combining both blood pressure groups and controlling for baseline characteristics, higher follow-up achieved MAP was associated with faster GFR decline in both blacks and whites (P=.002), with a sevenfold stronger relationship in blacks. These results suggest that strict blood pressure control slows GFR decline in patients with substantial proteinuria (>1 g/d). In addition, a lower level of blood pressure control may be even more important in blacks than in whites in slowing the progression of renal disease.
American Journal of Nephrology | 2001
Kevin C. Abbott; James D. Oliver; Iman O. Hypolite; Lawrence L. Lepler; Allan D. Kirk; Chia W. Ko; Clifton A. Hawkes; Camille A. Jones; Lawrence Y. Agodoa
Background: It is a common belief in the transplant community that rates of septicemia in transplant recipients have declined, but this has not been studied in a national population. Methods: Therefore, 33,479 renal transplant recipients in the United States Renal Data System from July 1, 1994 to June 30, 1997 were analyzed in a retrospective registry study of the incidence, associated factors, and mortality of hospitalizations with a primary discharge diagnosis of septicemia (ICD9 Code 038.x). Results: Renal transplant recipients had an adjusted incidence ratio of hospitalizations for septicemia of 41.52 (95% CI 35.45–48.96) compared to the general population. Hospitalizations for septicemia were most commonly associated with urinary tract infection as a secondary diagnosis (30.6%). In multivariate analysis, diabetes and urologic disease, female gender, delayed graft function, rejection, and pre-transplant dialysis, but not induction antibody therapy, were associated with hospitalizations for septicemia. Recipients hospitalized for septicemia had a mean patient survival of 9.03 years (95% CI 7.42–10.63) compared to 15.73 years (95% CI 14.77–16.69) for all other recipients. Conclusions: Even in the modern era, renal transplant recipients remain at high risk for hospitalizations for septicemia, which are associated with substantially decreased patient survival. Newly identified risks in this population were female gender and pre-transplant dialysis.
American Journal of Kidney Diseases | 2008
Clara Y. Jones; Camille A. Jones; Ira B. Wilson; Tamsin A. Knox; Andrew S. Levey; Donna Spiegelman; Sherwood L. Gorbach; Frederick Van Lente; Lesley A. Stevens
BACKGROUND Human immunodeficiency virus (HIV)-infected persons have an increased risk of chronic kidney disease (CKD). Serum creatinine level may underestimate the prevalence of CKD in subjects with decreased lean body mass or liver disease. Level of serum cystatin C, an alternative kidney function marker, is independent of lean body mass. STUDY DESIGN Cross-sectional. SETTING & PARTICIPANTS 250 HIV-infected subjects on highly active antiretroviral therapy in the Nutrition for Healthy Living (NFHL) cohort; 2,628 National Health and Nutrition Examination Survey (NHANES) 2001-2002 subjects. PREDICTORS & OUTCOMES Comparison of serum creatinine levels in NFHL to those in NHANES subjects; comparison of CKD in NFHL subjects ascertained using serum creatinine versus cystatin C levels. MEASUREMENTS Standardized serum creatinine, serum cystatin C, glomerular filtration rate (GFR) estimated from serum creatinine and cystatin C levels. RESULTS Creatinine levels were lower in NFHL than NHANES subjects despite greater rates of hepatitis, diabetes, and drug use (mean difference, -0.18 mg/dL; P < 0.001 adjusted for age, sex, and race). Of NFHL subjects, only 2.4% had a creatinine-based estimated GFR less than 60 mL/min/1.73 m(2), but 15.2% had a cystatin-based estimated GFR less than 60 mL/min/1.73 m(2). LIMITATIONS GFR was estimated rather than measured. Other factors in addition to GFR may affect creatinine and cystatin C levels. Measurements of proteinuria were not available. CONCLUSIONS Serum creatinine levels may overestimate GFRs in HIV-infected subjects. Kidney disease prevalence may be greater than previously appreciated.
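For readers unfamiliar with how a GFR is estimated from serum creatinine, the sketch below implements one widely used creatinine-based estimate from this era, the IDMS-traceable 4-variable MDRD study equation. It is shown for illustration only; the paper's exact estimating equations are not reproduced here.

```python
def egfr_mdrd(scr_mg_dl, age, female, black):
    """IDMS-traceable 4-variable MDRD study equation (mL/min/1.73 m2).

    scr_mg_dl: standardized serum creatinine in mg/dL; age in years.
    """
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# e.g. a 50-year-old non-black man with serum creatinine 1.0 mg/dL:
print(round(egfr_mdrd(1.0, 50, female=False, black=False)))
```

The inverse power on creatinine is why the paper's finding matters: a creatinine level that runs low for non-GFR reasons (reduced muscle mass, liver disease) mechanically inflates the creatinine-based estimate, which a cystatin C-based estimate avoids.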
Transplant Infectious Disease | 2002
S.J. Swanson; Allan D. Kirk; Chia W. Ko; Camille A. Jones; Lawrence Y. Agodoa; Kevin C. Abbott
Background. National statistics are presented for patient survival and graft survival in patients seropositive for the human immunodeficiency virus (HIV+) at the time of renal transplantation in the era prior to highly active antiretroviral therapy (HAART). Methods. Historical cohort analysis of 63,210 cadaveric solitary renal transplant recipients with valid HIV serology entries in the United States Renal Data System (USRDS) from 1 January 1987 to 30 June 1997. The medical evidence form was also used for additional variables but, because of fewer available values, was analyzed in a separate model. Outcomes were patient characteristics and survival associated with HIV+ status. Results. Thirty-two patients (0.05%) in the study period were HIV+ at transplant. HIV+ patients were comparable to the national renal transplant population in terms of gender and ethnic distribution but were younger and had younger donors and better HLA matching than the USRDS population. Patient and graft three-year survival were significantly reduced in HIV+ recipients (53% graft, 83% patient survival) relative to the USRDS population (73% and 88%, respectively). In multivariate analysis, HIV+ status was independently associated with patient mortality and decreased graft survival in recipients of cadaveric kidney transplants. Conclusions. This analysis was retrospective and may underestimate the number of HIV+ patients transplanted in the United States. Although the clinical details of patient selection for transplant were unknown, these results show HIV+ patients can have successful outcomes after cadaveric renal transplantation, although outcomes differ significantly from those of HIV-negative recipients.
American Journal of Kidney Diseases | 1997
Marc N. Turenne; Friedrich K. Port; Robert L. Strawderman; Robert B. Ettenger; Steven R. Alexander; John E. Lewy; Camille A. Jones; Lawrence Y. Agodoa; Philip J. Held
We compared growth rates by modality over a 6- to 14-month period in 1,302 US pediatric end-stage renal disease (ESRD) patients treated during 1990. Modality comparisons were adjusted for age, sex, race, ethnicity, and ESRD duration using linear regression models by age group (0.5 to 4 years, 5 to 9 years, 10 to 14 years, and 15 to 18 years). Growth rates were higher in young children receiving a transplant compared with those receiving dialysis (ages 0.5 to 4 years, delta = 3.1 cm/yr v continuous cycling peritoneal dialysis [CCPD], P < 0.01; ages 5 to 9 years, delta = 2.0 to 2.6 cm/yr v CCPD, chronic ambulatory peritoneal dialysis [CAPD], and hemodialysis, P < 0.01). In contrast, growth rates in older children were not statistically different when comparing transplantation with each dialysis modality. For most age groups of transplant recipients, we observed faster growth with alternate-day versus daily steroids that was not fully explained by differences in allograft function. Younger patients (<15 years) grew at comparable rates with each dialysis modality, while older CAPD patients grew faster compared with hemodialysis or CCPD patients (P < 0.02). There was no substantial pubertal growth spurt in transplant or dialysis patients. This national US study of pediatric growth rates with dialysis and transplantation shows differences in growth by modality that vary by age group.
American Journal of Nephrology | 1996
Lawrence Y. Agodoa; Camille A. Jones; Philip J. Held
Treated end-stage renal disease (ESRD) continues to increase at an alarming rate in the US. The incidence rate grew exponentially between 1982 and 1991, at 8.76% per year. Approximately 218,042 patients received treatment for ESRD in 1991, of whom 49,909 were new patients. Although the increase in the incidence rate is seen for all the major disease categories responsible for ESRD, diabetes mellitus (probably type 2) and hypertension are responsible for the bulk of the increase. African Americans and Native Americans have shown the most dramatic increases; diabetes is a major contributor for both groups, although for African Americans hypertension is the leading cause of ESRD. Much of the growth in the ESRD patient population has been in the older age group (greater than 65 years of age). The mortality rate for the ESRD population, and specifically for the dialysis population, remains relatively high, with 1-year survival probabilities of approximately 78%. Contributing factors cited for the high death rate, especially among dialysis patients, include inadequate dialysis dose, low flux of the dialysis membranes, shortened dialysis times, the increasing age of the ESRD population, and bioincompatible dialysis membranes. The effect of the widely practiced reuse of dialyzers on dialysis patient morbidity and mortality remains unclear.
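The 8.76%-per-year figure compounds like any constant growth rate, so the implied change over the 1982-1991 span is a simple exponentiation. A minimal sketch of that arithmetic; the function and example inputs are illustrative, not additional data from the paper.

```python
def project(count, annual_growth_pct, years):
    """Compound a count forward at a constant annual growth rate."""
    return count * (1 + annual_growth_pct / 100) ** years

# At 8.76%/year, incidence roughly doubles over the 9 years from 1982 to 1991:
print(f"growth factor over 9 years: {project(1.0, 8.76, 9):.2f}")
```

A factor slightly above 2 is consistent with the abstract's characterization of the growth as exponential and alarming.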
Controlled Clinical Trials | 1996
Paul K. Whelton; Jeannette Y. Lee; John W. Kusek; Jeanne Charleston; Jennifer DeBruge; Margaret A. Douglas; Marquetta Faulkner; Paul G. Greene; Camille A. Jones; Sally Kiefer; Katharine A. Kirk; Betty Levell; Keith Norris; Sandra N. Powers; Tamrat M. Retta; Delia E. Smith; Harry Ward
Several approaches for recruitment of African American adults with renal insufficiency due to hypertension (glomerular filtration rate between 25 and 70 ml/min/1.73 m2) were explored in the Pilot Study for the African American Study of Kidney Disease and Hypertension (AASK). Over a period of 42 weeks, prescreening information was obtained on 2880 individuals, of whom 498 (17%) were evaluated at a screening visit. Two hundred and twenty-five (8%) had an 125I-iothalamate assessment of glomerular filtration rate. Ninety-four of 97 participants who met all the study eligibility criteria were enrolled in the trial. The most common reasons for ineligibility during screening were absence of renal insufficiency or hypertension, presence of diabetes mellitus, and a body mass index above the acceptable level. Overall, an average of 31 prescreen contacts and 8 screening visits were conducted for every randomization (3.3% yield from prescreening to randomization). Screening in clinical practice was the most efficient method for recruitment (12.6% yield from prescreen contact to randomization, compared to 1.1% from mass mailing campaigns, 1.3% from mass media campaigns, and 1.7% from referrals by patients with end-stage renal disease). Randomization yields increased with progressively higher age ranges (2.4%, 3.3%, and 6.0% prescreen-to-randomization yields for those aged ≤50, 51-60, and 61-70, respectively). A slight majority (51%) of the prescreen contacts were women, but 75% of the randomized participants were men. Our results suggest that clinic-based screening is an effective approach for recruitment of African Americans with hypertension and renal insufficiency into clinical trials. They also suggest that enrollment of African American women in such studies is a special challenge.
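The yields quoted above are simple ratios of randomized participants to prescreen contacts. A one-line sketch reproducing the overall figure from the abstract's counts:

```python
def yield_pct(randomized, contacted):
    """Percent of prescreen contacts that reached randomization."""
    return 100 * randomized / contacted

# 2,880 prescreen contacts, 94 randomized (AASK pilot):
print(f"{yield_pct(94, 2880):.1f}%")  # matches the abstract's 3.3% overall yield
```

The same ratio per recruitment source (clinic screening, mass mailing, mass media, patient referral) is what drives the paper's conclusion that clinic-based screening was the most efficient channel.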