
Controlling Confounding in a Study of Oral Anticoagulants: Comparing Disease Risk Scores Developed Using Different Follow-Up Approaches


Abstract


Purpose: Little is known about how disease risk score (DRS) development should proceed under different pharmacoepidemiologic follow-up strategies. In an analysis of dabigatran vs. warfarin and the risk of major bleeding, we compared the results of DRS adjustment when models were developed under “intention-to-treat” (ITT) and “as-treated” (AT) approaches.

Methods: We assessed DRS model discrimination, calibration, and ability to induce prognostic balance via the “dry run analysis”. AT treatment effects stratified on each DRS were compared with each other and with a propensity score (PS) stratified reference estimate. Bootstrap resampling of the historical cohort at 10 percent to 90 percent of its original size was performed to assess the impact of sample size on DRS estimation.

Results: Historically derived DRS models fit under AT showed greater decrements in discrimination and calibration than those fit under ITT when applied to the concurrent study population. Prognostic balance was approximately equal across DRS models (–6 percent to –7 percent “pseudo-bias” on the hazard ratio scale). Hazard ratios were between 0.76 and 0.78 with all methods of DRS adjustment, while the PS-stratified hazard ratio was 0.83. In resampling, AT DRS models showed more overfitting and worse prognostic balance, and led to hazard ratios further from the reference estimate than ITT DRS models did, across all sample sizes.

Conclusions: In a study of anticoagulant safety, DRSs developed under an AT principle showed signs of overfitting and reduced confounding control. More research is needed to determine whether development of DRSs under ITT is a viable solution to overfitting in other settings.
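To make the workflow concrete, the sketch below (not the authors’ code) shows how a DRS might be fit in a historical warfarin cohort under ITT-style or AT-style follow-up, applied to a concurrent cohort for a DRS-quintile-stratified hazard ratio, and refit on bootstrap resamples drawn at a fraction of the historical sample size. It uses the lifelines package; all data frame and column names (historical_df, concurrent_df, time_itt, bleed_at, the covariate list, and so on) are hypothetical placeholders rather than variables from the study.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical baseline covariates used to predict major bleeding.
    COVARIATES = ["age", "sex", "renal_disease", "prior_bleed"]


    def fit_drs(historical_df: pd.DataFrame, followup: str) -> CoxPHFitter:
        """Fit a DRS (Cox model for major bleeding) in the historical warfarin cohort.

        followup="itt" uses follow-up that ignores treatment changes;
        followup="at" uses follow-up censored at discontinuation or switching.
        The corresponding duration/event columns are assumed to be precomputed.
        """
        duration = f"time_{followup}"   # e.g. "time_itt" or "time_at"
        event = f"bleed_{followup}"     # e.g. "bleed_itt" or "bleed_at"
        model = CoxPHFitter()
        model.fit(historical_df[COVARIATES + [duration, event]],
                  duration_col=duration, event_col=event)
        return model


    def drs_stratified_hr(drs_model: CoxPHFitter, concurrent_df: pd.DataFrame) -> float:
        """Apply a historical DRS to the concurrent cohort and estimate the AT
        dabigatran-vs-warfarin hazard ratio stratified on DRS quintile."""
        df = concurrent_df.copy()
        df["drs"] = drs_model.predict_partial_hazard(df[COVARIATES]).values
        df["drs_q"] = pd.qcut(df["drs"], q=5, labels=False)
        outcome_model = CoxPHFitter()
        outcome_model.fit(df[["dabigatran", "time_at", "bleed_at", "drs_q"]],
                          duration_col="time_at", event_col="bleed_at",
                          strata=["drs_q"])
        return float(outcome_model.hazard_ratios_["dabigatran"])


    def bootstrap_drs_hrs(historical_df, concurrent_df, fraction, followup="at", n_boot=500):
        """Refit the DRS on bootstrap resamples of the historical cohort drawn at a
        given fraction of its size, and collect the resulting stratified HRs."""
        hrs = []
        for i in range(n_boot):
            resample = historical_df.sample(frac=fraction, replace=True, random_state=i)
            hrs.append(drs_stratified_hr(fit_drs(resample, followup), concurrent_df))
        return hrs

In this sketch the follow-up approach changes only which duration and event columns are used to fit the DRS; the outcome model is always estimated under AT follow-up, mirroring the comparison described in the abstract.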

Volume 7
Year 2019
DOI 10.5334/egems.254
Language English
Journal eGEMs
