Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maria E. Montez-Rath is active.

Publication


Featured research published by Maria E. Montez-Rath.


Journal of the American Society of Nephrology | 2012

Multivessel Coronary Artery Bypass Grafting Versus Percutaneous Coronary Intervention in ESRD

Tara I. Chang; David Shilane; Dhruv S. Kazi; Maria E. Montez-Rath; Mark A. Hlatky; Wolfgang C. Winkelmayer

Thirty to sixty percent of patients with ESRD on dialysis have coronary heart disease, but the optimal strategy for coronary revascularization is unknown. We used data from the United States Renal Data System to define a cohort of 21,981 patients on maintenance dialysis who received initial coronary revascularization with either coronary artery bypass grafting (CABG) or percutaneous coronary intervention (PCI) between 1997 and 2009 and had at least 6 months of prior Medicare coverage as their primary payer. The primary outcome was death from any cause, and the secondary outcome was a composite of death or myocardial infarction. Overall survival rates were consistently poor during the study period, with unadjusted 5-year survival rates of 22%-25% irrespective of revascularization strategy. Using multivariable-adjusted proportional hazards regression, we found that CABG compared with PCI associated with significantly lower risks for both death (HR=0.87, 95% CI=0.84-0.90) and the composite of death or myocardial infarction (HR=0.88, 95% CI=0.86-0.91). Results were similar in analyses using a propensity score-matched cohort. In the absence of data from randomized trials, these results suggest that CABG may be preferred over PCI for multivessel coronary revascularization in appropriately selected patients on maintenance dialysis.
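Unadjusted survival rates like the 22%-25% five-year figures above are typically computed with the Kaplan-Meier estimator (the abstract does not name the method, so that is an assumption here). A minimal pure-Python sketch on hypothetical toy data, not the study's:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) steps."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1 - deaths / at_risk  # conditional survival at time t
            curve.append((t, surv))
        # deaths and censorings at t both leave the risk set
        at_risk -= sum(1 for ti in times if ti == t)
    return curve
```

The 5-year survival rate would be read off the curve at t = 5; the adjusted comparisons in the study came from Cox proportional hazards models, not from this estimator.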


Journal of the American Society of Nephrology | 2012

Trends in Acute Nonvariceal Upper Gastrointestinal Bleeding in Dialysis Patients

Ju-Yeh Yang; Tsung-Chun Lee; Maria E. Montez-Rath; Jane Paik; Glenn M. Chertow; Manisha Desai; Wolfgang C. Winkelmayer

Impaired kidney function is a risk factor for upper gastrointestinal (GI) bleeding, an event associated with poor outcomes. The burden of upper GI bleeding and its effect on patients with ESRD are not well described. Using data from the US Renal Data System, we quantified the rates of occurrence of and associated 30-day mortality from acute, nonvariceal upper GI bleeding in patients undergoing dialysis; we used medical claims and previously validated algorithms where available. Overall, 948,345 patients contributed 2,296,323 patient-years for study. The occurrence rates for upper GI bleeding were 57 and 328 episodes per 1000 person-years according to stringent and lenient definitions of acute, nonvariceal upper GI bleeding, respectively. Unadjusted occurrence rates remained flat (stringent) or increased (lenient) from 1997 to 2008; after adjustment for sociodemographic characteristics and comorbid conditions, however, we found a significant decline for both definitions (linear approximation, 2.7% and 1.5% per year, respectively; P<0.001). In more recent years, patients had higher hematocrit levels before upper GI bleeding episodes and were more likely to receive blood transfusions during an episode. Overall 30-day mortality was 11.8%, which declined significantly over time (relative declines of 2.3% or 2.8% per year for the stringent and lenient definitions, respectively). In summary, despite declining trends worldwide, crude rates of acute, nonvariceal upper GI bleeding among patients undergoing dialysis have not decreased in the past 10 years. Although 30-day mortality related to upper GI bleeding declined, perhaps reflecting improvements in medical care, the burden on the ESRD population remains substantial.


American Journal of Kidney Diseases | 2015

Outcomes After Warfarin Initiation in a Cohort of Hemodialysis Patients With Newly Diagnosed Atrial Fibrillation

Jenny I. Shen; Maria E. Montez-Rath; Colin R. Lenihan; Mintu P. Turakhia; Tara I. Chang; Wolfgang C. Winkelmayer

BACKGROUND Although warfarin is indicated to prevent ischemic strokes in most patients with atrial fibrillation (AF), evidence supporting its use in hemodialysis patients is limited. Our aim was to examine outcomes after warfarin therapy initiation, relative to no warfarin use, following incident AF in a large cohort of hemodialysis patients who had comprehensive prescription drug coverage through Medicare Part D. STUDY DESIGN Retrospective observational cohort study. SETTING & PARTICIPANTS Patients in the US Renal Data System undergoing maintenance hemodialysis who had AF newly diagnosed in 2007 to 2011, with Medicare Part D coverage, who had no recorded history of warfarin use. PREDICTOR Warfarin therapy initiation, identified by a filled prescription within 30 days of the AF event. OUTCOMES Death, ischemic stroke, hemorrhagic stroke, severe gastrointestinal bleeding, and composite outcomes. MEASUREMENTS HRs estimated by applying Cox regression to an inverse probability of treatment and censoring-weighted cohort. RESULTS Of 12,284 patients with newly diagnosed AF, 1,838 (15%) initiated warfarin therapy within 30 days; however, ∼70% discontinued its use within 1 year. In intention-to-treat analyses, warfarin use was marginally associated with a reduced risk of ischemic stroke (HR, 0.68; 95% CI, 0.47-0.99), but not with the other outcomes. In as-treated analyses, warfarin use was associated with reduced mortality (HR, 0.84; 95% CI, 0.73-0.97). LIMITATIONS Short observation period, limited number of nonfatal events, limited generalizability of results to more affluent patients. CONCLUSIONS In hemodialysis patients with incident AF, warfarin use was marginally associated with reduced risk of ischemic stroke, and there was a signal toward reduced mortality in as-treated analyses. These results support clinical equipoise regarding the use of warfarin in hemodialysis patients and underscore the need for randomized trials to fill this evidence gap.
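The inverse-probability-of-treatment weighting described in MEASUREMENTS reweights each patient by the inverse of the estimated probability of the treatment actually received, so that the weighted cohort mimics a randomized comparison. A minimal sketch of the stabilized weight computation, assuming propensity scores have already been estimated (the inputs below are hypothetical):

```python
def iptw_weights(treated, propensity):
    """Stabilized inverse-probability-of-treatment weights.
    treated: 1 = initiated warfarin, 0 = did not.
    propensity: estimated probability of treatment for each patient."""
    p_treat = sum(treated) / len(treated)  # marginal probability of treatment
    return [
        p_treat / p if t else (1 - p_treat) / (1 - p)
        for t, p in zip(treated, propensity)
    ]
```

These weights would then be supplied to a weighted Cox regression; the study additionally applied censoring weights, which follow the same inverse-probability pattern.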


The Annals of Thoracic Surgery | 2013

Trends in Acute Kidney Injury, Associated Use of Dialysis, and Mortality After Cardiac Surgery, 1999 to 2008

Colin R. Lenihan; Maria E. Montez-Rath; Christina Mora Mangano; Glenn M. Chertow; Wolfgang C. Winkelmayer

BACKGROUND The development of acute kidney injury (AKI) after cardiac surgery is associated with significant mortality, morbidity, and cost. The last decade has seen major changes in the complexity of cardiac surgical candidates and in the number and type of cardiac surgical procedures being performed. METHODS Using data from the Nationwide Inpatient Sample, we determined the annual rates of AKI, AKI requiring dialysis (AKI-D), and inpatient mortality after cardiac surgery in the United States in the years 1999 through 2008. RESULTS Inpatient mortality with AKI and AKI-D decreased from 27.9% and 45.9%, respectively, in 1999 to 12.8% and 35.3%, respectively, in 2008. Compared with 1999, the odds of AKI and AKI-D in 2008, adjusted for demographic and clinical factors, were 3.30 (95% confidence interval [CI]: 2.89 to 3.77) and 2.23 (95% CI: 1.78 to 2.80), respectively. Corresponding adjusted odds of death associated with AKI and AKI-D were 0.31 (95% CI: 0.26 to 0.36) and 0.47 (95% CI: 0.34 to 0.65). Taken together, the attributable risks for death after cardiac surgery associated with AKI and AKI-D increased from 30% and 5%, respectively, in 1999 to 47% and 14%, respectively, in 2008. CONCLUSIONS In sum, despite improvements in individual patient outcomes over the decade 1999 to 2008, the population contribution of AKI and AKI-D to inpatient mortality after surgery increased over the same period.
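The attributable risks quoted above capture the population-level logic of the finding: even though individual odds of death fell, rising AKI incidence increased the share of deaths linked to AKI. One standard way to express this is Levin's population attributable fraction; the abstract does not state which formula the authors used, so the sketch below is illustrative only, with hypothetical inputs:

```python
def attributable_fraction(prevalence, rr):
    """Levin's population attributable fraction: the share of an outcome
    in the population attributable to an exposure.
    prevalence: proportion of the population exposed.
    rr: relative risk of the outcome given exposure."""
    excess = prevalence * (rr - 1)
    return excess / (1 + excess)
```

The fraction grows with either the prevalence of the exposure or the strength of its effect, which is why a more common but individually less lethal complication can account for a larger share of deaths.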


Medical Care | 2006

Tracking rates of patient safety indicators over time: Lessons from the Veterans Administration

Amy K. Rosen; Shibei Zhao; Peter E. Rivard; Susan Loveland; Maria E. Montez-Rath; Anne Elixhauser; Patrick S. Romano

Background: The Patient Safety Indicators (PSIs), developed by the Agency for Healthcare Research and Quality, are useful screening tools for highlighting areas in which quality should be further investigated and providing useful benchmarks for tracking progress. Objectives: Our objectives were to: 1) provide a descriptive analysis of the incidence of PSI events from 2001 to 2004 in the Veterans Health Administration (VA); 2) examine trends in national PSI rates at the hospital discharge level over time; and 3) assess whether hospital characteristics (eg, teaching status, number of beds, and degree of quality improvement implementation) and baseline safety-related hospital performance predict future hospital safety-related performance. Methods: We examined changes in risk-adjusted PSI rates at the discharge level, calculated the correlation between hospitals' risk-adjusted PSI rates in 2001 with subsequent years, and developed generalized linear models to examine predictors of hospitals' 2004 risk-adjusted PSI rates. Results: Risk-adjusted rates of 2 of the 15 PSIs demonstrated significant trends over time. Rates of iatrogenic pneumothorax increased over time, whereas rates of failure to rescue decreased. Most PSIs demonstrated consistent rates over time. After accounting for patient and hospital characteristics, hospitals' baseline risk-adjusted PSI rates were the most important predictors of their 2004 risk-adjusted rates for 8 PSIs. Conclusions: The PSIs are useful tools for tracking and monitoring patient safety events in the VA. Future research should investigate whether trends reflect better or worse care or increased attention to documenting patient safety events.
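The year-to-year association between hospitals' baseline and subsequent risk-adjusted rates can be quantified with the standard Pearson correlation coefficient (the abstract says "correlation" without naming the variant, so Pearson is assumed here). A minimal sketch on hypothetical rates:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. hospitals' risk-adjusted PSI rates in two different years."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A coefficient near 1 would mean hospitals largely keep their relative safety ranking from year to year, consistent with the finding that baseline rates were the strongest predictors of 2004 rates.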


Journal of the American Society of Nephrology | 2017

Estimating the Risk of Radiocontrast-Associated Nephropathy

Emilee R. Wilhelm-Leen; Maria E. Montez-Rath; Glenn M. Chertow

Estimates of the incidence of radiocontrast-associated nephropathy vary widely and suffer from misclassification of the cause of AKI and confounding. Using the Nationwide Inpatient Sample, we created multiple estimates of the risk of radiocontrast-associated nephropathy among adult patients hospitalized in the United States in 2009. First, we stratified patients according to the presence or absence of 12 relatively common diagnoses associated with AKI and evaluated the rate of AKI between strata. Next, we created a logistic regression model, controlling for comorbidity and acuity of illness, to estimate the risk of AKI associated with radiocontrast administration within each stratum. Finally, we performed an analysis stratified by the degree of preexisting comorbidity. In general, patients who received radiocontrast did not develop AKI at a clinically significant higher rate. Adjusted only for the complex survey design, patients to whom radiocontrast was and was not administered developed AKI at rates of 5.5% and 5.6%, respectively. After controlling for comorbidity and acuity of illness, radiocontrast administration associated with an odds ratio for AKI of 0.93 (95% confidence interval, 0.88 to 0.97). In conclusion, the risk of radiocontrast-associated nephropathy may be overstated in the literature and overestimated by clinicians. More accurate AKI risk estimates may improve clinical decision-making when attempting to balance the potential benefits of radiocontrast-enhanced imaging and the risk of AKI.
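The adjusted odds ratio of 0.93 above comes from the study's logistic regression model, but the unadjusted version of the comparison can be read directly off a 2x2 table of exposure by outcome. A sketch using Woolf's log-based 95% confidence interval, with toy counts rather than the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with a 95% CI (Woolf's method).
    a = exposed with AKI,   b = exposed without AKI,
    c = unexposed with AKI, d = unexposed without AKI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

An interval that excludes 1.0, as the study's adjusted 0.88-0.97 does, is what makes the association statistically distinguishable from no effect.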


Medical Care | 2008

Potentially inappropriate prescribing for the elderly: Effects of geriatric care at the patient and health care system level

Mary Jo Pugh; Amy K. Rosen; Maria E. Montez-Rath; Megan E. Amuan; Benjamin G. Fincke; Muriel Burk; Arlene S. Bierman; Francesca E. Cunningham; Eric M. Mortensen; Dan R. Berlowitz

Background:Many studies have identified patient characteristics associated with potentially inappropriate prescribing in the elderly (PIPE), however, little attention has been directed toward how health care system factors such as geriatric care may affect this patient safety issue. Objective:This study examines the association between geriatric care and PIPE in a community dwelling elderly population. Research Design:Cross-sectional retrospective database study. Subjects:Veterans age ≥65 years who received health care in the VA system during Fiscal Years (FY99-00), and also received at medications from the Veterans Administration in FY00. Measures:PIPE was identified using the Zhan adaptation of the Beers criteria. Geriatric care penetration was calculated as the proportion of patients within a facility who received at least 1 geriatric outpatient clinic or inpatient visit. Analyses:Logistic regression models with generalized estimating equations were used to assess the relationship between geriatric care and PIPE after controlling for patient and health care system characteristics. Results:Patients receiving geriatric care were less likely to have PIPE exposure (odds ratio, 0.64; 95% confidence interval, 0.59–0.73). There was also a weak effect for geriatric care penetration, with a trend for patients in low geriatric care penetration facilities having higher risk for PIPE regardless of individual geriatric care exposure (odds ratio, 1.14; 95% confidence interval, 0.99–1.30). Conclusions:Although geriatric care is associated with a lower risk of PIPE, additional research is needed to determine if heterogeneity in the organization and delivery of geriatric care resulted in the weak effect of geriatric care penetration, or whether this is a result of low power.


Clinical Journal of the American Society of Nephrology | 2014

Addressing Missing Data in Clinical Studies of Kidney Diseases

Maria E. Montez-Rath; Wolfgang C. Winkelmayer; Manisha Desai

Missing data are a problem in essentially all studies in medical research. The most common approach to handling them, complete case analysis, relies on assumptions about the missing data that rarely hold in practice, and its consequences are biased and inefficient descriptions of the relationships of interest. Here, various approaches for handling missing data in clinical studies are described. In particular, this work promotes the use of multiple imputation methods, which rely on assumptions about missingness that are more flexible than those underlying the most common method in use. Furthermore, multiple imputation methods are becoming increasingly accessible in mainstream statistical software packages, making them both a sound and a practical choice. The use of multiple imputation methods is illustrated with examples pertinent to kidney research, and concrete guidance on their use is provided.
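In multiple imputation, the analysis model is fit once per imputed dataset and the m results are then combined with Rubin's rules, which add the between-imputation spread to the average within-imputation variance. A minimal sketch of the pooling step (the m model fits are assumed to have been done already):

```python
def pool_rubin(estimates, variances):
    """Pool m per-imputation estimates via Rubin's rules.
    estimates: the point estimate from each imputed dataset.
    variances: the squared standard error from each imputed dataset.
    Returns (pooled estimate, total variance)."""
    m = len(estimates)
    q_bar = sum(estimates) / m   # pooled point estimate
    w_bar = sum(variances) / m   # average within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total = w_bar + (1 + 1 / m) * b
    return q_bar, total
```

Mainstream packages (e.g. the mice package in R, or the MICE implementation in Python's statsmodels) handle both the imputation and this pooling automatically.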


American Journal of Transplantation | 2014

Temporal Trends in the Incidence, Treatment and Outcomes of Hip Fracture After First Kidney Transplantation in the United States

S. Sukumaran Nair; Colin R. Lenihan; Maria E. Montez-Rath; David W. Lowenberg; Glenn M. Chertow; Wolfgang C. Winkelmayer

It is currently unknown whether any secular trends exist in the incidence and outcomes of hip fracture in kidney transplant recipients (KTR). We identified first-time KTR (1997-2010) who had >1 year of Medicare coverage and no recorded history of hip fracture. New hip fractures were identified from corresponding diagnosis and surgical procedure codes. Outcomes studied included time to hip fracture, type of surgery received, and 30-day mortality. Of 69,740 KTR transplanted in 1997-2010, 597 experienced a hip fracture event during 155,341 person-years of follow-up, for an incidence rate of 3.8 per 1000 person-years. While unadjusted hip fracture incidence did not change, strong confounding by case mix was present. Using year of transplantation as a continuous variable, the hazard ratio (HR) for hip fracture in 2010 compared with 1997, adjusted for demographic, dialysis, comorbid, and most transplant-related factors, was 0.56 (95% confidence interval [CI]: 0.41-0.77). Adjusting for baseline immunosuppression modestly attenuated the HR (0.68; 95% CI: 0.47-0.99). The 30-day mortality was 2.2 (95% CI: 1.3-3.7) per 100 events. In summary, hip fractures remain an important complication after kidney transplantation. Since 1997, case-mix-adjusted posttransplant hip fracture rates have declined substantially. Changes in immunosuppressive therapy appear to be partly responsible for these favorable findings.
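The quoted incidence rate follows directly from the event count and person-time reported in the abstract:

```python
def incidence_rate(events, person_years, per=1000):
    """Incidence rate per `per` units of person-time."""
    return events / person_years * per

# Figures reported in the abstract: 597 fractures over 155,341 person-years.
rate = incidence_rate(597, 155341)
```

Rounded to one decimal place, this reproduces the stated 3.8 events per 1000 person-years.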


BMC Medical Research Methodology | 2006

Performance of statistical models to predict mental health and substance abuse cost

Maria E. Montez-Rath; Cindy L. Christiansen; Susan L. Ettner; Susan Loveland; Amy K. Rosen

Background: Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Methods: Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. Results: The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used, and the Gamma with square-root link model had convergence problems with small samples. Conclusion: Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
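The two accuracy criteria used to rank the models are simple to state; a minimal sketch (note that the MAPE here is the mean absolute prediction error on the cost scale, not the percentage error sometimes abbreviated the same way):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted costs."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute prediction error (absolute scale, not percentage)."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)
```

RMSE penalizes large misses quadratically, so with heavily skewed cost data the two criteria can disagree about which model fits best, which is why the study reports both alongside the decile-level predictive ratios.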

Collaboration


Dive into Maria E. Montez-Rath's collaborations.

Top Co-Authors

Jenny I. Shen

Los Angeles Biomedical Research Institute
