
Publication


Featured research published by Danielle Braun.


Breast Cancer Research and Treatment | 2017

Breast cancer risk models: a comprehensive overview of existing models, validation, and clinical applications.

Jessica Cintolo-Gonzalez; Danielle Braun; Amanda Blackford; Emanuele Mazzola; Ahmet Acar; Jennifer K. Plichta; Molly Griffin; Kevin S. Hughes

Numerous models have been developed to quantify the combined effect of various risk factors and predict either the risk of developing breast cancer, the risk of carrying a high-risk germline genetic mutation (specifically in the BRCA1 and BRCA2 genes), or both. These breast cancer risk models can be separated into those that rely mainly on hormonal and environmental factors and those that focus more on hereditary risk. Given the wide range of models from which to choose, clinicians and researchers seeking to use risk models in their practice need to understand what each model predicts, the populations for which each is best suited to provide risk estimates, the validation and comparative studies that have been performed for each model, and how to apply the models in practice. This review provides a comprehensive guide for those seeking to understand and apply breast cancer risk models. It summarizes the majority of existing breast cancer risk prediction models, including the risk factors they incorporate, the basic methodology behind their development, the information each provides, their strengths and limitations, relevant validation studies, and how to access each for clinical or investigative purposes.


Journal of Clinical Oncology | 2014

Misreported Family Histories and Underestimation of Risk

Danielle Braun; Malka Gorfine; Giovanni Parmigiani

TO THE EDITOR: The study by Daniels et al focused on women with high-grade serous ovarian cancer and showed that the carrier probabilities provided by the risk prediction model BRCAPRO are too low among low-risk patients. This important observation agrees with earlier reports, including probands with both ovarian and breast cancers, in which the ratio of observed to expected cases (O/E) in low-risk groups was substantially greater than one. As genetic testing becomes more affordable and appropriately broadens to segments of the population at lower prior risk, this limitation deserves serious consideration.

We are constantly working on improvements to make BRCAPRO more accurate and clinically helpful. We note with interest the suggestion to incorporate ovarian cancer histology, and we will consider it for future versions. Although Daniels et al report on the version of BRCAPRO included in BayesMendel 2.0-5 (October 2010), the latest version is 2.0-9 (March 2014). An array of improvements has been made since 2010, including updated penetrance estimates for contralateral breast cancer, more flexible incorporation of ethnicity, consideration of mastectomy in the proband and relatives, updated sensitivity parameters for BRCA1/2 testing, updated marker parameters, inclusion of HER2 status, and handling of identical twins. It would be very interesting to evaluate the impact of the last 4 years of improvements on the calibration issues reported by Daniels et al.

An additional issue deserving close scrutiny is the possibility that underestimation of risk in the low quintiles is driven in large part by misreporting of family history. Family history in Daniels et al was collected by genetic counselors and, although this is not fully clarified in the paper, may have been self-reported by the proband rather than validated through medical records, cancer registries, pathology reports, or death certificates. Self-reporting is the standard in clinical environments, and we consider it an appropriate approach for this type of model validation. At the same time, various studies have compared self-reported with validated family histories and shown that misreporting of disease status can be substantial. For example, in first-degree relatives, sensitivity estimates for breast cancer range from 65% to 95% and for ovarian cancer from 67% to 84%. Sensitivity decreases further with the degree of the relative. Specificity estimates are approximately 98% to 99%. The effects of misreported family history on Mendelian risk prediction models, and on BRCAPRO specifically, have been examined by Katki, who considered both underreporting of disease status and rounding of age, and showed that misreporting of family history, especially of disease status, leads to inaccurate calibration.

To better understand the results reported by Daniels et al, we performed new analyses to evaluate whether misreporting could account for a significant portion of the observed miscalibration. We mimicked the data collected by Daniels et al using data from the Cancer Genetics Network Model Validation Study (described in detail elsewhere). We focused on the 157 families of probands affected with ovarian cancer, ran BRCAPRO, and calculated the O/E ratio for each of the risk quintiles defined by Daniels et al (shown in gold in Figure 1). The overall O/E ratio is 1.02. Although we do not have any families in the first quintile, in the second quintile (1% to 3% BRCAPRO probability) the O/E ratio is 16, similar to that observed by Daniels et al in the low-risk quintile. We then used a novel technique developed by Braun et al to adjust for measurement error. The adjustment averages across all possible combinations of true disease status for the relatives, each time weighting by the positive and negative predictive values of the reported history. We implemented it using estimates from Ziogas and Anton-Culver. This calculation, shown in blue in Figure 1, reduces the ratio in the second quintile from 16 to 5, whereas the overall O/E ratio remains 1.01.

In conclusion, misreporting of family history is likely to play an important role in model calibration for low-risk probands. Using verified information in genetic counseling would likely lead to both more accurate and better-calibrated predictions for these women. However, verified information is impractical in many clinical settings. Therefore, future versions of BRCAPRO will use the approach described by Braun et al to ameliorate calibration. In the interim, the results of Daniels et al can provide informal guidance for using lower BRCAPRO thresholds in settings where collection of verified information is impractical.

[Figure 1. Ratio of observed to expected cases, in log scale.]
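The adjustment described in the letter, averaging a model's output over all possible combinations of true disease status for the relatives and weighting by the positive and negative predictive values of the reported history, can be sketched as follows. This is a minimal illustration, not the BayesMendel implementation; carrier_probability is a hypothetical stand-in for a Mendelian model such as BRCAPRO, and the PPV/NPV values are placeholders rather than the Ziogas and Anton-Culver estimates.

```python
# Minimal sketch of the measurement-error adjustment described above:
# average the model output over every combination of true disease status
# for the relatives, weighting each combination by the positive/negative
# predictive value of the reported status. Hypothetical values throughout.
from itertools import product

PPV = 0.90  # placeholder: P(truly affected | reported affected)
NPV = 0.97  # placeholder: P(truly unaffected | reported unaffected)

def carrier_probability(true_statuses):
    """Hypothetical stand-in for a Mendelian model (e.g., BRCAPRO):
    returns a carrier probability given relatives' true disease statuses."""
    # Toy rule: baseline risk plus an increment per affected relative.
    return min(1.0, 0.02 + 0.10 * sum(true_statuses))

def adjusted_carrier_probability(reported_statuses):
    """Average carrier_probability over all possible true statuses,
    weighting by the predictive values of the reported history."""
    total = 0.0
    for true in product([0, 1], repeat=len(reported_statuses)):
        w = 1.0
        for r, t in zip(reported_statuses, true):
            if r == 1:
                w *= PPV if t == 1 else 1 - PPV
            else:
                w *= NPV if t == 0 else 1 - NPV
        total += w * carrier_probability(true)
    return total

# Example: two relatives reported affected, one reported unaffected.
print(adjusted_carrier_probability([1, 1, 0]))
```

With calibrated predictive values in place of the placeholders, the same weighted average could be used to recompute observed-to-expected ratios within each risk quintile.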


Statistics in Medicine | 2017

Simpson's paradox in the integrated discrimination improvement

Jonathan Chipman; Danielle Braun

The integrated discrimination improvement (IDI) is commonly used to compare two risk prediction models; it summarizes the extent to which a new model increases predicted risk among events and decreases predicted risk among non-events. Because the IDI averages risks across events and non-events, it is susceptible to Simpson's paradox. In some settings, adding a predictive covariate to a well-calibrated model results in an overall negative (positive) IDI, yet when the data are stratified by that same covariate, the stratum-specific IDIs are positive (negative). Meanwhile, the calibration (observed-to-expected ratio and Hosmer-Lemeshow goodness-of-fit test), area under the receiver operating characteristic curve, and Brier score improve both overall and by stratum. We ran extensive simulations to investigate the impact of an imbalanced covariate on these metrics (IDI, area under the receiver operating characteristic curve, Brier score, and R²), provide an analytic explanation for the paradox in the IDI, and use an investigative metric, a weighted IDI, to better understand the paradox. In simulations, all instances of the paradox occurred under stratum-specific miscalibration, yet there were miscalibrated settings in which the paradox did not occur. The paradox is illustrated on Cancer Genetics Network data by calculating predictions based on two versions of BRCAPRO, a Mendelian risk prediction model for breast and ovarian cancer. In both the simulations and the Cancer Genetics Network data, overall model calibration did not guarantee stratum-level calibration. We conclude that the IDI should assess model performance within a clinically relevant subset only when stratum-level calibration is strictly met, and we recommend calculating additional metrics to confirm the direction and conclusions of the IDI.
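To make the quantity concrete, the sketch below computes the overall IDI and covariate-stratified IDIs from predicted risks, so a sign reversal between the two levels (the Simpson's-paradox behaviour described above) can be checked directly. This is a generic illustration of the IDI formula, not the authors' simulation code; the simulated arrays are hypothetical.

```python
# Sketch: overall vs. stratum-specific integrated discrimination improvement.
# IDI = (mean new-model risk among events - mean old-model risk among events)
#     - (mean new-model risk among non-events - mean old-model risk among non-events)
import numpy as np

def idi(p_old, p_new, y):
    """IDI of the new model over the old one; y is a 0/1 event indicator."""
    events, nonevents = (y == 1), (y == 0)
    gain_events = p_new[events].mean() - p_old[events].mean()
    gain_nonevents = p_new[nonevents].mean() - p_old[nonevents].mean()
    return gain_events - gain_nonevents

def stratified_idi(p_old, p_new, y, stratum):
    """IDI computed separately within each level of a stratifying covariate."""
    return {s: idi(p_old[stratum == s], p_new[stratum == s], y[stratum == s])
            for s in np.unique(stratum)}

# Hypothetical predictions from two models on the same subjects.
rng = np.random.default_rng(0)
stratum = rng.integers(0, 2, size=1000)            # e.g., the added covariate
y = rng.binomial(1, 0.1 + 0.2 * stratum)           # event indicator
p_old = np.clip(0.15 + 0.05 * rng.normal(size=1000), 0, 1)
p_new = np.clip(0.05 + 0.25 * stratum + 0.05 * rng.normal(size=1000), 0, 1)

print("overall IDI:", idi(p_old, p_new, y))
print("stratum-specific IDIs:", stratified_idi(p_old, p_new, y, stratum))
```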


Journal of Cardiac Failure | 2016

Challenges in the Use of Administrative Data for Heart Failure Services Research.

Marcela Horvitz-Lennon; Danielle Braun; Sharon-Lise T. Normand

Administrative data routinely collected when patients interact with the health care system are widely used for accountability, quality improvement efforts, and health services research. Although these data were not designed for such purposes, they provide a feasible alternative to fit-for-purpose prospective data collection. In the United States, all health care transactions are required by the Health Insurance Portability and Accountability Act to use a standard code set to indicate diagnoses and procedures. Before October 1, 2015, the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code set was widely used for inpatient hospital procedures as well as for inpatient and outpatient diagnoses. Two studies in this issue of the Journal examined the validity of ICD-9-CM diagnostic information in administrative data relative to gold-standard sources in heart failure research: Bender and Smith evaluated diagnostic codes recorded in community hospital administrative data to assess the impact of “mental health issues” on heart failure outcomes of hospitalized adults; Kucharska-Newton and colleagues assessed codes in Medicare claims data to identify “acute decompensated and chronic stable” heart failure among hospitalized participants in the Atherosclerosis Risk in Communities study. Both papers report good agreement between ICD-9-CM codes and gold-standard sources (Table 1). Although agreement statistics and specificities were high, the sensitivities (the probabilities that codes were present when the condition was present) were modest, with kappa = 0.42 for some specific mental health subcategories.
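For reference, the agreement measures discussed above can be computed from a simple 2x2 table of code presence versus gold-standard condition status. The sketch below is a generic illustration with made-up counts, not data from either study.

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 agreement table.
# Rows: gold standard (condition present / absent); columns: ICD code (yes / no).
# The counts below are hypothetical.
def agreement_metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)          # P(code present | condition present)
    specificity = tn / (tn + fp)          # P(code absent  | condition absent)
    p_observed = (tp + tn) / n
    # Expected agreement under chance, from the marginal totals.
    p_expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return sensitivity, specificity, kappa

sens, spec, kappa = agreement_metrics(tp=60, fn=40, fp=5, tn=895)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")
```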


Journal of the American Statistical Association | 2018

Nonparametric Adjustment for Measurement Error in Time to Event Data: Application to Risk Prediction Models

Danielle Braun; Malka Gorfine; Hormuzd A. Katki; Argyrios Ziogas; Giovanni Parmigiani

Mismeasured time-to-event data used as a predictor in risk prediction models will lead to inaccurate predictions. This arises in the context of self-reported family history, a time-to-event predictor often measured with error, which is used in Mendelian risk prediction models. Using validation data, we propose a method to adjust for this type of error. We estimate the measurement error process using a nonparametric smoothed Kaplan–Meier estimator, and use Monte Carlo integration to implement the adjustment. We apply our method to simulated data in the context of both Mendelian and multivariate survival prediction models. Simulations are evaluated using measures of mean squared error of prediction (MSEP), area under the receiver operating characteristic curve (ROC-AUC), and the ratio of observed to expected number of events. These results show that our method mitigates the effects of measurement error mainly by improving calibration and total accuracy. We illustrate our method in the context of Mendelian risk prediction models, focusing on misreporting of breast cancer, fitting the measurement error model on data from the University of California, Irvine, and applying our method to counselees from the Cancer Genetics Network. We show that our method improves overall calibration, especially in low-risk deciles. Supplementary materials for this article are available online.
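The two building blocks named in the abstract, a Kaplan–Meier estimate of the error process fit on validation data and Monte Carlo integration over plausible true values, can be sketched roughly as below. This is a heavily simplified illustration (a single mismeasured age, a plain unsmoothed Kaplan–Meier fit, no conditioning on the reported value), not the authors' estimator; risk_model is a hypothetical placeholder and the validation data are simulated.

```python
# Rough sketch: (i) estimate the distribution of a relative's true age at
# diagnosis from validation data with a plain Kaplan-Meier curve, and
# (ii) Monte Carlo integration: average a risk model over draws from that
# distribution. `risk_model` is a hypothetical stand-in for a Mendelian model.
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier estimate of S(t); returns (event_times, survival values)."""
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def sample_from_km(event_times, surv, size, rng):
    """Draw ages from the discrete distribution implied by the KM curve
    (mass at each event time = drop in S; censored tail ignored for simplicity)."""
    drops = -np.diff(np.concatenate(([1.0], surv)))
    probs = drops / drops.sum()
    return rng.choice(event_times, size=size, p=probs)

def adjusted_prediction(risk_model, event_times, surv, n_draws=5000, seed=0):
    """Monte Carlo integration: average the model over plausible true ages."""
    rng = np.random.default_rng(seed)
    true_ages = sample_from_km(event_times, surv, n_draws, rng)
    return np.mean([risk_model(a) for a in true_ages])

# Simulated validation data: true ages at diagnosis, some right-censored.
rng = np.random.default_rng(1)
true_age = rng.normal(55, 10, size=300).clip(25, 90)
observed_event = rng.binomial(1, 0.8, size=300)   # 1 = age observed, 0 = censored

def risk_model(age_at_diagnosis):
    """Hypothetical model: earlier diagnosis in the family implies higher risk."""
    return float(np.clip(0.30 - 0.004 * (age_at_diagnosis - 30), 0.01, 0.6))

t, s = km_survival(true_age, observed_event)
print("measurement-error-adjusted prediction:", adjusted_prediction(risk_model, t, s))
```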


Journal of Genetic Counseling | 2018

A Clinical Decision Support Tool to Predict Cancer Risk for Commonly Tested Cancer-Related Germline Mutations

Danielle Braun; Jiabei Yang; Molly Griffin; Giovanni Parmigiani; Kevin S. Hughes

The rapid drop in the cost of DNA sequencing has led to the availability of multi-gene panels, which test 25 or more cancer susceptibility genes at low cost. Clinicians and genetic counselors need a tool to interpret results, understand the risk of various cancers, and advise on a management strategy. This is challenging because there are multiple studies regarding each gene, and it is not possible for clinicians and genetic counselors to be aware of all publications, nor to appreciate the relative accuracy and importance of each. Through an extensive literature review, we have identified reliable studies and derived estimates of absolute risk. We have also developed a systematic mechanism and informatics tools for (1) data curation, (2) evaluation of study quality, and (3) the statistical analysis necessary to obtain risk estimates. We produced the risk prediction clinical decision support tool ASK2ME (All Syndromes Known to Man Evaluator). It provides absolute cancer risk predictions for various hereditary cancer susceptibility genes, specific to the patient's gene carrier status, age, and history of relevant prophylactic surgery. By allowing clinicians to enter patient information and receive patient-specific cancer risks, the tool aims to have a significant impact on the quality of precision cancer prevention and disease management activities that rely on panel testing. The tool is dynamic and constantly being updated; its current limitations include (1) risk estimates for many gene-cancer associations that are based on a single study rather than a meta-analysis, (2) strong assumptions about prior cancers, (3) lack of uncertainty measures, and (4) a growing set of gene-cancer associations whose risk estimates are not always variant specific. All of these concerns are being addressed on an ongoing basis, with the aim of making the tool even more accurate.
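As a rough illustration of what an absolute cancer risk specific to carrier status and age involves, the sketch below converts a cumulative penetrance curve into the remaining risk for a carrier who is cancer-free at a given age. This is a generic actuarial calculation under the assumption that a cumulative-risk-by-age table is available; the numbers are hypothetical and are not ASK2ME's estimates.

```python
# Sketch: remaining absolute risk to a target age for a mutation carrier who is
# cancer-free at the current age, from a hypothetical cumulative penetrance
# table F(age) = P(cancer by that age | carrier). Conditioning on being
# cancer-free now gives (F(target) - F(current)) / (1 - F(current)).
import numpy as np

# Hypothetical cumulative penetrance by age for one gene-cancer pair.
ages = np.array([30, 40, 50, 60, 70, 80])
cum_risk = np.array([0.02, 0.08, 0.20, 0.35, 0.45, 0.50])

def remaining_risk(current_age, target_age):
    """P(cancer by target_age | carrier, cancer-free at current_age)."""
    f_now = np.interp(current_age, ages, cum_risk)
    f_target = np.interp(target_age, ages, cum_risk)
    return (f_target - f_now) / (1.0 - f_now)

# Example: risk between age 45 and age 70 for a cancer-free carrier.
print(f"{remaining_risk(45, 70):.1%}")
```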


Genetic Epidemiology | 2018

Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data

Thomas Madsen; Danielle Braun; Gang Peng; Giovanni Parmigiani; Lorenzo Trippa

The Elston–Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and it serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows peeling-based models to be extended to many genetic loci. The algorithm creates a trade-off between accuracy and speed and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate the error in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties.
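To make the scaling argument concrete, the toy sketch below enumerates joint carrier states across loci (the source of the exponential cost) and then "pares" the state space down to the highest-probability states, renormalizing what remains. This is only a caricature of the idea described in the abstract, not the authors' algorithm; it assumes independent loci with hypothetical carrier frequencies and ignores the pedigree-peeling machinery entirely.

```python
# Toy illustration of the exponential state space over loci and of "paring":
# keep only the K most probable joint carrier states and renormalize.
# Assumes independent loci with hypothetical carrier frequencies; this is a
# caricature of the idea, not the Elston-Stewart peeling and paring algorithm.
from itertools import product
import numpy as np

carrier_freq = np.array([0.006, 0.004, 0.01, 0.02, 0.003])  # hypothetical, one per locus

def joint_states(freqs):
    """All 2^L joint carrier states and their prior probabilities."""
    states = list(product([0, 1], repeat=len(freqs)))
    probs = np.array([
        np.prod([f if c else 1 - f for f, c in zip(freqs, s)]) for s in states
    ])
    return states, probs

def pare(states, probs, k):
    """Keep the k most probable states and renormalize (the accuracy/speed knob)."""
    top = np.argsort(probs)[::-1][:k]
    kept_probs = probs[top]
    return [states[i] for i in top], kept_probs / kept_probs.sum()

states, probs = joint_states(carrier_freq)
print("full state space:", len(states))            # 2^5 = 32 joint states
kept_states, kept_probs = pare(states, probs, k=8)
coverage = np.sort(probs)[::-1][:8].sum()
print("pared state space:", len(kept_states))
print(f"top 8 states cover {coverage:.4f} of the prior probability mass")
```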


Breast Cancer Research and Treatment | 2018

Pathologic findings in reduction mammoplasty specimens: a surrogate for the population prevalence of breast cancer and high-risk lesions

Francisco Acevedo; V. Diego Armengol; Zhengyi Deng; Rong Tang; Suzanne B. Coopey; Danielle Braun; Adam Yala; Regina Barzilay; Clara Li; Amy S. Colwell; Anthony J. Guidi; Curtis L. Cetrulo; Judy Garber; Barbara L. Smith; Tari A. King; Kevin S. Hughes

Purpose: Mammoplasty removes random samples of breast tissue from asymptomatic women, providing a unique method for evaluating the background prevalence of breast pathology in the normal population. Our goal was to identify the rate of atypical breast lesions and cancers in women of various ages in the largest mammoplasty cohort reported to date. Methods: We analyzed pathology reports from patients undergoing bilateral mammoplasty using a natural language processing algorithm, verified by human review. Patients with a prior history of breast cancer or atypia were excluded. Results: A total of 4775 patients were deemed eligible. Median age was 40 (range 13–86) and was higher in patients with any incidental finding than in patients with normal reports (52 vs. 39 years, p = 0.0001). Pathological findings were detected in 7.06% (337) of procedures. Benign high-risk lesions were found in 299 patients (6.26%). Invasive carcinoma and ductal carcinoma in situ were detected in 15 (0.31%) and 23 (0.48%) patients, respectively. The rate of atypias and cancers increased with age. Conclusion: The overall rate of abnormal findings in asymptomatic patients undergoing mammoplasty was 7.06%, increasing with age. Because these results are based on random samples of breast tissue, they likely underestimate the prevalence of abnormal findings in asymptomatic women.


Biostatistics | 2017

Propensity scores with misclassified treatment assignment: a likelihood-based adjustment

Danielle Braun; Malka Gorfine; Giovanni Parmigiani; Nils D. Arvold; Francesca Dominici; Corwin Zigler

Propensity score methods are widely used in comparative effectiveness research based on claims data. In this context, the inaccuracy of procedural or billing codes in claims data frequently misclassifies patients into treatment groups; that is, the treatment assignment (T) is often measured with error. In settings where validation data with accurately measured treatment assignment are available, we show that misclassification of treatment assignment can impact three distinct stages of a propensity score analysis: (i) propensity score estimation; (ii) propensity score implementation; and (iii) the outcome analysis conducted conditional on the estimated propensity score and its implementation. We examine how error in T impacts each stage in the context of three common propensity score implementations: subclassification, matching, and inverse probability of treatment weighting (IPTW). Using validation data, we propose a two-step likelihood-based approach that fully adjusts for treatment misclassification bias under subclassification. This approach relies on two common measurement-error assumptions: non-differential measurement error and transportability of the measurement error model. We use simulation studies to assess the performance of the adjustment under subclassification, and we also investigate the method's performance under matching and IPTW. We apply the methods to Medicare Part A hospital claims data to estimate the effect of resection versus biopsy on 1-year mortality among 10,284 Medicare beneficiaries diagnosed with brain tumors. The ICD-9 billing codes from Medicare Part A inaccurately reflect surgical treatment, but SEER-Medicare validation data are available with more accurate information.
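The subclassification implementation discussed above can be sketched as follows: estimate the propensity score, cut it into strata, and average within-stratum treated-control outcome differences. This is a generic, unadjusted illustration, and it does not implement the paper's likelihood-based correction for misclassified treatment; the simulated data and the use of scikit-learn's LogisticRegression are assumptions made for the example.

```python
# Sketch: propensity score subclassification with a (possibly misclassified)
# treatment indicator. Generic illustration only; no misclassification
# correction is applied here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                              # covariates
true_t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # true treatment
# Claims-derived treatment: flip some labels to mimic coding error.
flip = rng.binomial(1, 0.1, size=n).astype(bool)
observed_t = np.where(flip, 1 - true_t, true_t)
y = 2.0 * true_t + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

def subclassification_effect(X, t, y, n_strata=5):
    """Estimate the treatment effect by averaging treated-control differences
    within propensity score strata, weighted by stratum size."""
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.digitize(ps, edges[1:-1]), 0, n_strata - 1)
    effect = 0.0
    for s in range(n_strata):
        in_s = strata == s
        diff = y[in_s & (t == 1)].mean() - y[in_s & (t == 0)].mean()
        effect += diff * in_s.mean()
    return effect

print("effect with true treatment:    ", subclassification_effect(X, true_t, y))
print("effect with misclassified code:", subclassification_effect(X, observed_t, y))
```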


Annals of Oncology | 2011

Differential efficacy of three cycles of CMF followed by tamoxifen in patients with ER-positive and ER-negative tumors: Long-term follow up on IBCSG Trial IX

Stefan Aebi; Zhuoxin Sun; Danielle Braun; Karen N. Price; M. Castiglione-Gertsch; Manuela Rabaglio; Richard D. Gelber; Diana Crivellari; Jurij Lindtner; Raymond Snyder; Per Karlsson; Edda Simoncini; Barry A. Gusterson; Giuseppe Viale; Meredith M. Regan; Alan S. Coates; A. Goldhirsch

Collaboration


Dive into Danielle Braun's collaborations.

Top Co-Authors

Malka Gorfine
Technion – Israel Institute of Technology

Hormuzd A. Katki
National Institutes of Health

Adam Yala
Massachusetts Institute of Technology