Allan B. Massie
Johns Hopkins University School of Medicine
Publication
Featured research published by Allan B. Massie.
JAMA | 2014
Abimereki D. Muzaale; Allan B. Massie; Mei Cheng Wang; Robert A. Montgomery; Maureen A. McBride; Jennifer L. Wainright; Dorry L. Segev
IMPORTANCE Risk of end-stage renal disease (ESRD) in kidney donors has been compared with risk faced by the general population, but the general population represents an unscreened, high-risk comparator. A comparison to similarly screened healthy nondonors would more properly estimate the sequelae of kidney donation. OBJECTIVES To compare the risk of ESRD in kidney donors with that of a healthy cohort of nondonors who are at equally low risk of renal disease and free of contraindications to live donation and to stratify these comparisons by patient demographics. DESIGN, SETTINGS, AND PARTICIPANTS A cohort of 96,217 kidney donors in the United States between April 1994 and November 2011 and a cohort of 20,024 participants of the Third National Health and Nutrition Examination Survey (NHANES III) were linked to Centers for Medicare & Medicaid Services data to ascertain development of ESRD, which was defined as the initiation of maintenance dialysis, placement on the waiting list, or receipt of a living or deceased donor kidney transplant, whichever was identified first. Maximum follow-up was 15.0 years; median follow-up was 7.6 years (interquartile range [IQR], 3.9-11.5 years) for kidney donors and 15.0 years (IQR, 13.7-15.0 years) for matched healthy nondonors. MAIN OUTCOMES AND MEASURES Cumulative incidence and lifetime risk of ESRD. RESULTS Among live donors, with median follow-up of 7.6 years (maximum, 15.0), ESRD developed in 99 individuals in a mean (SD) of 8.6 (3.6) years after donation. Among matched healthy nondonors, with median follow-up of 15.0 years (maximum, 15.0), ESRD developed in 36 nondonors in 10.7 (3.2) years, drawn from 17 ESRD events in the unmatched healthy nondonor pool of 9364. Estimated risk of ESRD at 15 years after donation was 30.8 per 10,000 (95% CI, 24.3-38.5) in kidney donors and 3.9 per 10,000 (95% CI, 0.8-8.9) in their matched healthy nondonor counterparts (P < .001). This difference was observed in both black and white individuals, with an estimated risk of 74.7 per 10,000 black donors (95% CI, 47.8-105.8) vs 23.9 per 10,000 black nondonors (95% CI, 1.6-62.4; P < .001) and an estimated risk of 22.7 per 10,000 white donors (95% CI, 15.6-30.1) vs 0.0 white nondonors (P < .001). Estimated lifetime risk of ESRD was 90 per 10,000 donors, 326 per 10,000 unscreened nondonors (general population), and 14 per 10,000 healthy nondonors. CONCLUSIONS AND RELEVANCE Compared with matched healthy nondonors, kidney donors had an increased risk of ESRD over a median of 7.6 years; however, the magnitude of the absolute risk increase was small. These findings may help inform discussions with persons considering live kidney donation.
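As a rough illustration of how a time-horizon figure such as "30.8 per 10,000 at 15 years" is obtained from censored follow-up data, here is a minimal Kaplan-Meier sketch. The toy data and function names are hypothetical; the study itself used linked registry data and matched cohorts, not this code.

```python
# Minimal Kaplan-Meier sketch: cumulative incidence (1 - S(t)) per 10,000
# at a time horizon, from (follow-up time, event indicator) pairs.
# Toy data only; the study used linked registry data and matched cohorts.

from collections import Counter

def cumulative_incidence_per_10k(times, events, horizon):
    """1 - S(horizon) via the product-limit estimator, per 10,000 people."""
    event_counts = Counter(t for t, e in zip(times, events) if e)
    censor_counts = Counter(t for t, e in zip(times, events) if not e)
    at_risk = len(times)
    surv = 1.0
    for t in sorted(set(times)):
        if t > horizon:
            break
        d = event_counts.get(t, 0)
        if d:
            surv *= 1.0 - d / at_risk              # events shrink survival
        at_risk -= d + censor_counts.get(t, 0)     # events + censorings leave risk set
    return 10_000 * (1.0 - surv)

# Example: 5 donors followed up to 15 years, two ESRD events
times  = [3.9, 7.6, 11.5, 15.0, 15.0]
events = [False, True, False, True, False]
print(cumulative_incidence_per_10k(times, events, horizon=15.0))  # 6250.0
```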
JAMA | 2011
Lauren M. Kucirka; Morgan E. Grams; Justin Lessler; Erin C. Hall; Nathan T. James; Allan B. Massie; Robert A. Montgomery; Dorry L. Segev
CONTEXT Many studies have reported that black individuals undergoing dialysis survive longer than those who are white. This observation is paradoxical given racial disparities in access to and quality of care, and is inconsistent with observed lower survival among black patients with chronic kidney disease. We hypothesized that age and the competing risk of transplantation modify survival differences by race. OBJECTIVE To estimate death among dialysis patients by race, accounting for age as an effect modifier and kidney transplantation as a competing risk. DESIGN, SETTING, AND PARTICIPANTS An observational cohort study of 1,330,007 incident end-stage renal disease patients as captured in the United States Renal Data System between January 1, 1995, and September 28, 2009 (median potential follow-up time, 6.7 years; range, 1 day-14.8 years). Multivariate age-stratified Cox proportional hazards and competing risk models were constructed to examine death in patients who receive dialysis. MAIN OUTCOME MEASURES Death in black vs white patients who receive dialysis. RESULTS Similar to previous studies, black patients undergoing dialysis had a lower death rate compared with white patients (232,361 deaths [57.1% mortality] vs 585,792 deaths [63.5% mortality], respectively; adjusted hazard ratio [aHR], 0.84; 95% confidence interval [CI], 0.83-0.84; P <.001). However, when stratifying by age and treating kidney transplantation as a competing risk, black patients had significantly higher mortality than their white counterparts at ages 18 to 30 years (27.6% mortality vs 14.2%; aHR, 1.93; 95% CI, 1.84-2.03), 31 to 40 years (37.4% mortality vs 26.8%; aHR, 1.46; 95% CI, 1.41-1.50), and 41 to 50 years (44.8% mortality vs 38.0%; aHR, 1.12; 95% CI, 1.10-1.14; P <.001 for interaction terms between race and each aforementioned age category), as opposed to patients aged 51 to 60 years (51.5% vs 50.9%; aHR, 0.93; 95% CI, 0.92-0.94), 61 to 70 years (64.9% vs 67.2%; aHR, 0.87; 95% CI, 0.86-0.88), 71 to 80 years (76.1% vs 79.7%; aHR, 0.85; 95% CI, 0.84-0.86), and older than 80 years (82.4% vs 83.6%; aHR, 0.87; 95% CI, 0.85-0.88). CONCLUSIONS Overall, among dialysis patients in the United States, there was a lower risk of death for black patients compared with their white counterparts. However, the commonly cited survival advantage for black dialysis patients applies only to older adults, and those younger than 50 years have a higher risk of death.
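The abstract's key methodological point is that transplantation removes patients from dialysis, so a naive 1 - Kaplan-Meier estimate overstates death risk. A competing-risks (Aalen-Johansen) cumulative incidence handles this; below is a generic sketch of that estimator with invented toy data, not the authors' age-stratified Cox or competing-risk models.

```python
# Sketch of a competing-risks (Aalen-Johansen) cumulative incidence:
# transplantation competes with death, so 1 - Kaplan-Meier overstates
# death risk. Generic estimator; toy data only.

from collections import defaultdict

def cif(times, causes, cause_of_interest, horizon):
    """P(event of cause_of_interest by horizon); causes: 0 = censored,
    1 = death, 2 = transplant (any positive code is an event)."""
    by_time = defaultdict(list)
    for t, k in zip(times, causes):
        by_time[t].append(k)
    at_risk = len(times)
    event_free = 1.0     # probability of no event of any kind so far
    incidence = 0.0
    for t in sorted(by_time):
        if t > horizon:
            break
        codes = by_time[t]
        d_interest = sum(k == cause_of_interest for k in codes)
        d_any = sum(k != 0 for k in codes)
        incidence += event_free * d_interest / at_risk
        event_free *= 1.0 - d_any / at_risk
        at_risk -= len(codes)               # all outcomes leave the risk set
    return incidence

# Toy cohort: follow-up years and outcome codes (0 censored, 1 death, 2 transplant)
times  = [1.0, 2.0, 2.0, 3.0, 6.7]
causes = [1,   2,   1,   0,   1]
print(cif(times, causes, cause_of_interest=1, horizon=14.8))  # 0.8
```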
American Journal of Transplantation | 2011
Allan B. Massie; Brian Caffo; Sommer E. Gentry; Erin Carlyle Hall; David A. Axelrod; Krista L. Lentine; Mark A. Schnitzler; Adrian Gheorghian; Paolo R. Salvalaggio; Dorry L. Segev
Model for End-stage Liver Disease (MELD)-based allocation of deceased donor livers allows exceptions for patients whose score may not reflect their true mortality risk. We hypothesized that organ procurement organizations (OPOs) may differ in exception practices, use of exceptions may be increasing over time, and exception patients may be advantaged relative to other patients. We analyzed longitudinal MELD score, exception and outcome in 88,981 adult liver candidates as reported to the United Network for Organ Sharing from 2002 to 2010. The proportion of patients receiving a hepatocellular carcinoma (HCC) exception was 0–21.4% at the OPO level and 11.9–18.8% at the region level; the proportion receiving an exception for other conditions was 0.0–13.1% (OPO level) and 3.7–9.5% (region level). HCC exceptions rose over time (10.5% in 2002 vs. 15.5% in 2008, HR = 1.09 per year, p < 0.001) as did other exceptions (7.0% in 2002 vs. 13.5% in 2008, HR = 1.11, p < 0.001). In the most recent era of HCC point assignment (since April 2005), both HCC and other exceptions were associated with decreased risk of waitlist mortality compared to nonexception patients with equivalent listing priority (multinomial logistic regression odds ratio [OR] = 0.47 for HCC, OR = 0.43 for other, p < 0.001) and increased odds of transplant (OR = 1.65 for HCC, OR = 1.33 for other, p < 0.001). Policy advantages patients with MELD exceptions; differing rates of exceptions by OPO may create, or reflect, geographic inequity.
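A small arithmetic note on the "HR = 1.09 per year" figure: a per-year hazard ratio compounds multiplicatively across calendar years, so the implied 2008-versus-2002 contrast is roughly 1.09 to the sixth power. A one-liner to verify, purely illustrative:

```python
# A per-year hazard ratio compounds multiplicatively across calendar years.
hr_per_year, years = 1.09, 2008 - 2002
print(f"implied 2008-vs-2002 hazard ratio: {hr_per_year ** years:.2f}")  # ~1.68
```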
NeuroImage | 2000
Michael I. Miller; Allan B. Massie; J. Tilak Ratnanather; Kelly N. Botteron; John G. Csernansky
This paper describes the construction of cortical metrics quantifying the probabilistic occurrence of gray matter, white matter, and cerebrospinal fluid compartments in their correlation to the geometry of the neocortex as measured in 0.5-1.0 mm magnetic resonance imagery. These cortical profiles represent the density of the tissue types as a function of distance to the cortical surface. These metrics are consistent when generated across multiple brains, indicating a fundamental property of the neocortex. Methods are proposed for incorporating such metrics into automated Bayes segmentation.
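To make the Bayes segmentation idea concrete: each voxel can be labeled by combining a depth-dependent tissue prior (the cortical profile) with an intensity likelihood. The sketch below uses invented priors and Gaussian intensity models purely for illustration; the paper's metrics are estimated from real MR data, not hard-coded as here.

```python
# Hedged sketch of Bayes segmentation: MAP tissue label per voxel from a
# depth-dependent prior times a Gaussian intensity likelihood.
# All densities and parameters below are invented for illustration.

import math

TISSUES = ("gray", "white", "csf")

def profile_prior(distance_mm):
    """Hypothetical P(tissue | distance to cortical surface)."""
    if distance_mm < 0:        # outside the surface
        return {"gray": 0.2, "white": 0.05, "csf": 0.75}
    if distance_mm < 2.5:      # within the cortical ribbon
        return {"gray": 0.7, "white": 0.2, "csf": 0.1}
    return {"gray": 0.15, "white": 0.8, "csf": 0.05}

def gaussian(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical per-tissue intensity models (T1-weighted-like ordering)
INTENSITY = {"gray": (90, 12), "white": (130, 10), "csf": (40, 15)}

def classify(intensity, distance_mm):
    """MAP label: argmax over tissues of P(t | d) * N(intensity; mu_t, sd_t)."""
    prior = profile_prior(distance_mm)
    post = {t: prior[t] * gaussian(intensity, *INTENSITY[t]) for t in TISSUES}
    return max(post, key=post.get)

print(classify(intensity=95, distance_mm=1.0))    # likely "gray"
print(classify(intensity=128, distance_mm=4.0))   # likely "white"
```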
American Journal of Transplantation | 2013
Sommer E. Gentry; Allan B. Massie; Sidney W. Cheek; Krista L. Lentine; E. Chow; Corey E. Wickliffe; Nino Dzebashvili; Paolo R. Salvalaggio; Mark A. Schnitzler; David A. Axelrod; Dorry L. Segev
Severe geographic disparities exist in liver transplantation; for patients with comparable disease severity, 90-day transplant rates range from 18% to 86% and death rates range from 14% to 82% across donation service areas (DSAs). Broader sharing has been proposed to resolve geographic inequity; however, we hypothesized that the efficacy of broader sharing depends on the geographic partitions used. To determine the potential impact of redistricting on geographic disparity in disease severity at transplantation, we combined existing DSAs into novel regions using mathematical redistricting optimization. Optimized maps and current maps were evaluated using the Liver Simulated Allocation Model. Primary analysis was based on 6700 deceased donors, 28,063 liver transplant candidates, and 242,727 Model for End-Stage Liver Disease (MELD) changes in 2010. Fully regional sharing within the current regional map would paradoxically worsen geographic disparity (variance in MELD at transplantation increases from 11.2 to 13.5, p = 0.021), although it would decrease waitlist deaths (from 1368 to 1329, p = 0.002). In contrast, regional sharing within an optimized map would significantly reduce geographic disparity (to 7.0, p = 0.002) while achieving a larger decrease in waitlist deaths (to 1307, p = 0.002). Redistricting optimization, but not broader sharing alone, would reduce geographic disparity in allocation of livers for transplant across the United States.
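To make "mathematical redistricting optimization" concrete, here is a deliberately tiny brute-force sketch: assign DSAs to regions so that the variance of regional mean MELD at transplant is minimized. All DSA names and MELD values are invented, and the published analysis used integer programming with constraints (e.g., contiguity) plus the Liver Simulated Allocation Model, none of which appears here.

```python
# Tiny brute-force sketch of redistricting as an optimization problem.
# Invented DSA-level mean MELD at transplant; not the study's data or method.

from itertools import product
from statistics import fmean, pvariance

dsa_meld = {"DSA-A": 22.0, "DSA-B": 31.0, "DSA-C": 25.0,
            "DSA-D": 35.0, "DSA-E": 27.0, "DSA-F": 29.0}

def disparity(assignment, k):
    """Variance across regions of regional mean MELD; inf if a region is empty."""
    regions = [[] for _ in range(k)]
    for meld, r in zip(dsa_meld.values(), assignment):
        regions[r].append(meld)
    if any(not reg for reg in regions):
        return float("inf")
    return pvariance([fmean(reg) for reg in regions])

k = 2
best = min(product(range(k), repeat=len(dsa_meld)), key=lambda a: disparity(a, k))
print("best assignment:", dict(zip(dsa_meld, best)), "variance:", disparity(best, k))
```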
JAMA | 2011
Keith P. West; Parul Christian; Alain B. Labrique; Mahbubur Rashid; Abu Ahmed Shamim; Rolf Klemm; Allan B. Massie; Sucheta Mehra; Kerry Schulze; Hasmot Ali; Barkat Ullah; Lee S.-F. Wu; Joanne Katz; Hashina Banu; Halida H. Akhter; Alfred Sommer
CONTEXT Maternal vitamin A deficiency is a public health concern in the developing world. Its prevention may improve maternal and infant survival. OBJECTIVE To assess efficacy of maternal vitamin A or beta carotene supplementation in reducing pregnancy-related and infant mortality. DESIGN, SETTING, AND PARTICIPANTS Cluster randomized, double-masked, placebo-controlled trial among pregnant women 13 to 45 years of age and their live-born infants to 12 weeks (84 days) postpartum in rural northern Bangladesh between 2001 and 2007. INTERVENTIONS Five hundred ninety-six community clusters (study sectors) were randomized for pregnant women to receive weekly, from the first trimester through 12 weeks postpartum, 7000 μg of retinol equivalents as retinyl palmitate, 42 mg of all-trans beta carotene, or placebo. Married women (n = 125,257) underwent 5-week surveillance for pregnancy, ascertained by a history of amenorrhea and confirmed by urine test. Blood samples were obtained from participants in 32 sectors (5%) for biochemical studies. MAIN OUTCOME MEASURES All-cause mortality of women related to pregnancy, stillbirth, and infant mortality to 12 weeks (84 days) following pregnancy outcome. RESULTS Groups were comparable across risk factors. For the mortality outcomes, neither of the supplement group outcomes was significantly different from the placebo group outcomes. The numbers of deaths and all-cause, pregnancy-related mortality rates (per 100,000 pregnancies) were 41 and 206 (95% confidence interval [CI], 140-273) in the placebo group, 47 and 237 (95% CI, 166-309) in the vitamin A group, and 50 and 250 (95% CI, 177-323) in the beta carotene group. Relative risks for mortality in the vitamin A and beta carotene groups were 1.15 (95% CI, 0.75-1.76) and 1.21 (95% CI, 0.81-1.81), respectively. In the placebo, vitamin A, and beta carotene groups the rates of stillbirth and infant mortality were 47.9 (95% CI, 44.3-51.5), 45.6 (95% CI, 42.1-49.2), and 51.8 (95% CI, 48.0-55.6) per 1000 births and 68.1 (95% CI, 63.7-72.5), 65.0 (95% CI, 60.7-69.4), and 69.8 (95% CI, 65.4-72.3) per 1000 live births, respectively. Vitamin A compared with either placebo or beta carotene supplementation increased plasma retinol concentrations by end of study (1.46 [95% CI, 1.42-1.50] μmol/L vs 1.13 [95% CI, 1.09-1.17] μmol/L and 1.18 [95% CI, 1.14-1.22] μmol/L, respectively; P < .001) and reduced, but did not eliminate, gestational night blindness (7.1% for vitamin A vs 9.2% for placebo and 8.9% for beta carotene [P < .001 for both]). CONCLUSION Use of weekly vitamin A or beta carotene in pregnant women in Bangladesh, compared with placebo, did not reduce all-cause maternal, fetal, or infant mortality. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT00198822.
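The relative risks quoted above follow from standard rate-ratio arithmetic; the sketch below reproduces the vitamin A vs placebo figure of 1.15 (95% CI, 0.75-1.76) using a Wald interval on the log scale. The pregnancy denominators are back-calculated from the reported rates, so they are approximations rather than the trial's exact counts.

```python
# Relative risk with a Wald (log-scale) 95% CI from event and denominator counts.
# Denominators are back-calculated from the reported per-100,000 rates.

from math import exp, log, sqrt

def relative_risk(d1, n1, d0, n0, z=1.96):
    """RR of group 1 vs group 0 with a Wald 95% CI on the log scale."""
    rr = (d1 / n1) / (d0 / n0)
    se = sqrt(1/d1 - 1/n1 + 1/d0 - 1/n0)
    lo, hi = exp(log(rr) - z*se), exp(log(rr) + z*se)
    return rr, lo, hi

# Vitamin A (47 deaths) vs placebo (41 deaths); rates of 237 and 206
# per 100,000 imply roughly these numbers of pregnancies:
print(relative_risk(47, 19_831, 41, 19_903))   # ~ (1.15, 0.75, 1.76)
```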
American Journal of Transplantation | 2013
Andrew M. Cameron; Allan B. Massie; C. E. Alexander; B. Stewart; Robert A. Montgomery; N. R. Benavides; G. D. Fleming; Dorry L. Segev
Despite countless media campaigns, organ donation rates in the United States have remained static while need has risen dramatically. New efforts to increase organ donation through public education are necessary to address the waiting list of over 100,000 patients. On May 1, 2012, the online social network, Facebook, altered its platform to allow members to specify "Organ Donor" as part of their profile. Upon such choice, members were offered a link to their state registry to complete an official designation, and their "friends" in the network were made aware of the new status as a donor. Educational links regarding donation were offered to those considering the new organ donor status. On the first day of the Facebook organ donor initiative, there were 13,054 new online registrations, representing a 21.1-fold increase over the baseline average of 616 registrations. This first-day effect ranged from 6.9× (Michigan) to 108.9× (Georgia). Registration rates remained elevated in the following 12 days. During the same time period, no increase was seen in registrations from state Department of Motor Vehicles (DMV) offices. Novel applications of social media may prove effective in increasing organ donation rates and likewise might be utilized in other refractory public health problems in which communication and education are essential.
The New England Journal of Medicine | 2016
Babak J. Orandi; Xun Luo; Allan B. Massie; J. M. Garonzik-Wang; Bonnie E. Lonze; Rizwan Ahmed; K. J. Van Arendonk; Mark D. Stegall; Stanley C. Jordan; J. Oberholzer; Ty B. Dunn; Lloyd E. Ratner; Sandip Kapur; Ronald P. Pelletier; John P. Roberts; Marc L. Melcher; Pooja Singh; Debra Sudan; Marc P. Posner; Jose M. El-Amm; R. Shapiro; Matthew Cooper; George S. Lipkowitz; Michael A. Rees; Christopher L. Marsh; Bashir R. Sankari; David A. Gerber; P. W. Nelson; J. Wellen; Adel Bozorgzadeh
BACKGROUND A report from a high-volume single center indicated a survival benefit of receiving a kidney transplant from an HLA-incompatible live donor as compared with remaining on the waiting list, whether or not a kidney from a deceased donor was received. The generalizability of that finding is unclear. METHODS In a 22-center study, we estimated the survival benefit for 1025 recipients of kidney transplants from HLA-incompatible live donors who were matched with controls who remained on the waiting list or received a transplant from a deceased donor (waiting-list-or-transplant control group) and controls who remained on the waiting list but did not receive a transplant (waiting-list-only control group). We analyzed the data with and without patients from the highest-volume center in the study. RESULTS Recipients of kidney transplants from incompatible live donors had a higher survival rate than either control group at 1 year (95.0%, vs. 94.0% for the waiting-list-or-transplant control group and 89.6% for the waiting-list-only control group), 3 years (91.7% vs. 83.6% and 72.7%, respectively), 5 years (86.0% vs. 74.4% and 59.2%), and 8 years (76.5% vs. 62.9% and 43.9%) (P<0.001 for all comparisons with the two control groups). The survival benefit was significant at 8 years across all levels of donor-specific antibody: 89.2% for recipients of kidney transplants from incompatible live donors who had a positive Luminex assay for anti-HLA antibody but a negative flow-cytometric cross-match versus 65.0% for the waiting-list-or-transplant control group and 47.1% for the waiting-list-only control group; 76.3% for recipients with a positive flow-cytometric cross-match but a negative cytotoxic cross-match versus 63.3% and 43.0% in the two control groups, respectively; and 71.0% for recipients with a positive cytotoxic cross-match versus 61.5% and 43.7%, respectively. The findings did not change when patients from the highest-volume center were excluded. CONCLUSIONS This multicenter study validated single-center evidence that patients who received kidney transplants from HLA-incompatible live donors had a substantial survival benefit as compared with patients who did not undergo transplantation and those who waited for transplants from deceased donors. (Funded by the National Institute of Diabetes and Digestive and Kidney Diseases.).
American Journal of Transplantation | 2014
Babak J. Orandi; Jacqueline M. Garonzik-Wang; Allan B. Massie; Andrea A. Zachary; J. R. Montgomery; K. J. Van Arendonk; Mark D. Stegall; Stanley C. Jordan; Jose Oberholzer; Ty B. Dunn; Lloyd E. Ratner; Sandip Kapur; Ronald P. Pelletier; John P. Roberts; Marc L. Melcher; Pooja Singh; Debra Sudan; Marc P. Posner; Jose M. El-Amm; R. Shapiro; Matthew Cooper; George S. Lipkowitz; Michael A. Rees; Christopher L. Marsh; B. R. Sankari; David A. Gerber; P. W. Nelson; Jason R. Wellen; Adel Bozorgzadeh; A. O. Gaber
Incompatible live donor kidney transplantation (ILDKT) offers a survival advantage over dialysis to patients with anti-HLA donor-specific antibody (DSA). Program-specific reports (PSRs) fail to account for ILDKT, placing this practice at regulatory risk. We collected DSA data, categorized as positive Luminex, negative flow crossmatch (PLNF) (n = 185), positive flow, negative cytotoxic crossmatch (PFNC) (n = 536) or positive cytotoxic crossmatch (PCC) (n = 304), from 22 centers. We tested associations between DSA, graft loss and mortality after adjusting for PSR model factors, using 9669 compatible patients as a comparison. PLNF patients had similar graft loss; however, PFNC (adjusted hazard ratio [aHR] = 1.64, 95% confidence interval [CI]: 1.15–2.23, p = 0.007) and PCC (aHR = 5.01, 95% CI: 3.71–6.77, p < 0.001) were associated with increased graft loss in the first year. PLNF patients had similar mortality; however, PFNC (aHR = 2.04; 95% CI: 1.28–3.26; p = 0.003) and PCC (aHR = 4.59; 95% CI: 2.98–7.07; p < 0.001) were associated with increased mortality. We simulated Centers for Medicare & Medicaid Services flagging to examine ILDKT's effect on the risk of being flagged. Compared to equal-quality centers performing no ILDKT, centers performing 5%, 10% or 20% PFNC had a 1.19-, 1.33- and 1.73-fold higher odds of being flagged. Centers performing 5%, 10% or 20% PCC had a 2.22-, 4.09- and 10.72-fold higher odds. Failure to account for ILDKT's increased risk places centers providing this life-saving treatment in jeopardy of regulatory intervention.
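The flagging analysis can be sketched as a Monte Carlo exercise: a center whose case mix includes incompatible transplants (with a higher true graft-loss hazard) is judged against an expected count that ignores incompatibility, so its observed/expected ratio drifts upward. Every parameter below (base risk, threshold, volumes) is hypothetical; the study's actual simulation applied CMS criteria to real PSR model factors.

```python
# Hedged Monte Carlo sketch: probability a center is "flagged" because its
# observed/expected (O/E) graft-loss ratio exceeds a threshold, when the
# expected count ignores the extra risk of incompatible transplants.
# All parameters are invented for illustration.

import random

def flag_probability(n_tx=200, frac_ildkt=0.10, base_risk=0.05,
                     ildkt_rr=1.64, oe_threshold=1.5, n_sim=5_000, seed=1):
    """Share of simulated centers with O/E >= threshold."""
    rng = random.Random(seed)
    expected = n_tx * base_risk        # risk model blind to incompatibility
    flagged = 0
    for _ in range(n_sim):
        observed = 0
        for _ in range(n_tx):
            risk = base_risk * (ildkt_rr if rng.random() < frac_ildkt else 1.0)
            observed += rng.random() < risk
        flagged += (observed / expected) >= oe_threshold
    return flagged / n_sim

for frac in (0.0, 0.05, 0.10, 0.20):
    print(f"{frac:.0%} incompatible: P(flagged) ~ {flag_probability(frac_ildkt=frac):.3f}")
```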
American Journal of Transplantation | 2015
Allan B. Massie; E. Chow; Corey E. Wickliffe; Xun Luo; Sommer E. Gentry; David C. Mulligan; Dorry L. Segev
In June 2013, a change to the liver waitlist priority algorithm was implemented. Under Share 35, regional candidates with MELD ≥ 35 receive higher priority than local candidates with MELD < 35. We compared liver distribution and mortality in the first 12 months of Share 35 to an equivalent time period before. Under Share 35, new listings with MELD ≥ 35 increased slightly from 752 (9.2% of listings) to 820 (9.7%, p = 0.3), but the proportion of deceased-donor liver transplants (DDLTs) allocated to recipients with MELD ≥ 35 increased from 23.1% to 30.1% (p < 0.001). The proportion of regional shares increased from 18.9% to 30.4% (p < 0.001). Sharing of exports was less clustered among a handful of centers (Gini coefficient decreased from 0.49 to 0.34), but there was no evidence of change in cold ischemia time (CIT) (p = 0.8). Total adult DDLT volume increased from 4133 to 4369, and adjusted odds of discard decreased by 14% (p = 0.03). Waitlist mortality decreased by 30% among patients with baseline MELD > 30 (subhazard ratio [SHR] = 0.70, p < 0.001) with no change for patients with lower baseline MELD (p = 0.9). Posttransplant length-of-stay (p = 0.2) and posttransplant mortality (p = 0.9) remained unchanged. In the first 12 months, Share 35 was associated with more transplants, fewer discards, and lower waitlist mortality, but not at the expense of CIT or early posttransplant outcomes.
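The Gini coefficient used above to summarize clustering of export sharing is a standard concentration measure (0 = perfectly even, approaching 1 = concentrated in few centers). A generic implementation over per-center export counts, with toy numbers rather than the study's data:

```python
# Gini coefficient via the standard mean-absolute-difference formula
# over sorted values; toy per-center export counts for illustration.

def gini(values):
    """Gini coefficient of a list of nonnegative values."""
    xs = sorted(values)
    n = len(xs)
    cum = 0.0
    for i, x in enumerate(xs, start=1):    # 1-indexed rank
        cum += (2 * i - n - 1) * x
    return cum / (n * sum(xs))

concentrated = [0, 0, 0, 2, 3, 45]     # a handful of centers dominate
diffuse      = [5, 7, 8, 9, 10, 11]    # sharing spread across centers
print(gini(concentrated), gini(diffuse))   # higher value = more clustered
```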