Featured Research

Methodology

Estimation and Sensitivity Analysis for Causal Decomposition in Health Disparity Research

In the field of disparities research, there has been growing interest in developing a counterfactual-based decomposition analysis to identify underlying mediating mechanisms that help reduce disparities in populations. Despite rapid development in the area, most prior studies have been limited to regression-based methods, undermining the possibility of addressing complex models with multiple mediators and/or heterogeneous effects. We propose an estimation method that effectively addresses complex models. Moreover, we develop a novel sensitivity analysis for possible violations of identification assumptions. The proposed method and sensitivity analysis are demonstrated with data from the Midlife Development in the US study to investigate the degree to which disparities in cardiovascular health at the intersection of race and gender would be reduced if the distributions of education and perceived discrimination were the same across intersectional groups.
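
As a toy illustration of the counterfactual idea (not the authors' estimator), the sketch below equalizes a single mediator's distribution across two groups via a plug-in outcome regression. The data-generating process and all variable names are hypothetical, and a single linear mediator stands in for the multiple-mediator models the paper targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.integers(0, 2, n)                          # group (0 = reference, 1 = comparison)
m = rng.normal(1.0 - 0.5 * g, 1.0)                 # mediator, e.g. years of education
y = 2.0 + 1.0 * m - 0.8 * g + rng.normal(0, 1, n)  # health outcome

observed_gap = y[g == 1].mean() - y[g == 0].mean()

# Plug-in step: fit E[Y | G=1, M] and average it over the
# reference group's mediator values (equalized distribution).
slope, intercept = np.polyfit(m[g == 1], y[g == 1], 1)
cf_mean = (slope * m[g == 0] + intercept).mean()
residual_gap = cf_mean - y[g == 0].mean()

print(f"observed disparity: {observed_gap:.2f}")
print(f"disparity after equalizing the mediator: {residual_gap:.2f}")
```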

Methodology

Estimation for network snowball sampling: Preventing pandemics

Snowball designs are the most natural of the network sampling designs. They have many desirable properties for sampling hidden and hard-to-reach populations. They have been under-used in recent years because simple design-based estimators and confidence intervals have not been available for them. The needed estimation methods are supplied in this paper. Snowball sampling methods, and accurate estimators to go with them, are needed to sample the people exposed to the animals from which new coronavirus outbreaks originate, and to sample the animal populations to which those people are exposed. Accurate estimates are needed to evaluate the effectiveness of interventions to reduce the risk to the people exposed to the animals. In this way the frequency of major outbreaks and pandemics can be reduced. Snowball designs are needed in studies of the sexual and opioid networks through which HIV can spread explosively, so that prevention interventions can be developed, accurately assessed, and effectively distributed.
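
The paper's estimators are not reproduced here, but the design itself is easy to sketch. Under Bernoulli sampling of seeds, a unit enters a one-wave snowball sample if it or any of its neighbours is a seed, so its inclusion probability is 1 - (1 - p)^(d + 1), which permits a Horvitz-Thompson estimate. The network model, trait prevalence, and all parameters below are illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(1000, 3)           # stand-in for a contact network
trait = {v: rng.random() < 0.1 for v in G}      # hidden trait, ~10% prevalence

p = 0.02                                        # Bernoulli seed-inclusion probability
seeds = [v for v in G if rng.random() < p]
sample = set(seeds)
for s in seeds:
    sample.update(G.neighbors(s))               # one-wave snowball: seeds plus contacts

# Horvitz-Thompson estimate of the number of trait carriers:
# pi_i = P(i is a seed or has a seed neighbour) = 1 - (1 - p)^(degree_i + 1)
total_hat = sum(trait[v] / (1 - (1 - p) ** (G.degree(v) + 1)) for v in sample)
print(f"estimated carriers: {total_hat:.0f} (true: {sum(trait.values())})")
```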

Methodology

Estimation of Health and Demographic Indicators with Incomplete Geographic Information

In low- and middle-income countries, household surveys are a valuable source of information for a range of health and demographic indicators. Increasingly, subnational estimates are required for targeting interventions and evaluating progress towards targets. In the majority of cases, stratified cluster sampling is used, with clusters corresponding to enumeration areas. The reported geographical information varies. A common procedure to preserve confidentiality is to report a jittered location, in which the true centroid of the cluster is displaced under a known algorithm. An alternative, used for older surveys in particular, is to report only the geographical region within which the cluster lies. In this paper, we describe a spatial hierarchical model in which we account for these inaccuracies in the cluster locations. The computational algorithm we develop is fast and avoids the heavy computation of a pure MCMC approach. We illustrate by simulation the benefits of the model over naive alternatives.
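
For concreteness, here is a sketch of the kind of known displacement algorithm referred to, following the commonly cited DHS rules (urban clusters displaced up to 2 km, rural up to 5 km, and 1% of rural clusters up to 10 km); the exact scheme varies by survey, and the coordinate conversion below is a flat-earth approximation.

```python
import numpy as np

def jitter(lon, lat, urban, rng):
    """Displace a cluster centroid by a random bearing and distance:
    urban up to 2 km, rural up to 5 km, 1% of rural up to 10 km.
    (Commonly cited DHS rules; exact schemes vary by survey.)"""
    max_km = 2.0 if urban else (10.0 if rng.random() < 0.01 else 5.0)
    r, theta = rng.random() * max_km, rng.random() * 2 * np.pi
    dlat = (r / 111.32) * np.sin(theta)                       # ~111.32 km per degree latitude
    dlon = (r * np.cos(theta)) / (111.32 * np.cos(np.radians(lat)))
    return lon + dlon, lat + dlat

# Example: lon_j, lat_j = jitter(36.82, -1.29, urban=True, rng=np.random.default_rng(7))
```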

Methodology

Estimation of future discretionary benefits in traditional life insurance

In the context of traditional life insurance, the future discretionary benefits (FDB), a central item for Solvency II reporting, are generally calculated by computationally expensive Monte Carlo algorithms. We derive analytic formulas for lower and upper bounds on the FDB. This yields an estimation interval for the FDB, and the average of the lower and upper bounds is a simple point estimator. These formulas are designed for real-world applications, and we compare the results to publicly available reporting data.
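
The resulting estimator is simple to state in code; the bounds themselves come from the paper's analytic formulas, which are not reproduced here.

```python
def fdb_estimate(lower: float, upper: float) -> tuple[float, float]:
    """Midpoint of the analytic bounds as a point estimate for the FDB,
    with half the interval width as a worst-case error bound."""
    return 0.5 * (lower + upper), 0.5 * (upper - lower)
```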

Methodology

Estimation of marriage incidence rates by combining two cross-sectional retrospective designs: Event history analysis of two dependent processes

The aim of this work is to develop methods for studying the determinants of marriage incidence using marriage histories collected under two different types of retrospective cross-sectional study designs: sampling of ever-married women before the cross-section (a prevalent cohort) and sampling of women irrespective of marital status (a general cross-sectional cohort). While retrospective histories from a prevalent cohort do not identify incidence rates without parametric modelling assumptions, the rates can be identified when the data are combined with data from a general cohort. Moreover, education, a strongly endogenous covariate, is correlated with the marriage process; hence the two processes need to be modelled jointly in order to estimate marriage incidence. For this purpose, we specify a multi-state model and propose a likelihood-based estimation method. We outline the assumptions under which a likelihood expression involving only the marriage incidence parameters can be derived. This is of particular interest when either retrospective education histories are not available or the related parameters are not of interest. Our simulation results confirm the gain in efficiency from combining data from the two designs, while demonstrating how the parameter estimates are affected by violations of the assumptions used in deriving the simplified likelihood expressions. Two Indian National Family Health Surveys motivate the methodological development and demonstrate the application of the methods.
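
A minimal sketch of the combination idea, under a deliberately crude constant-rate (exponential) assumption rather than the paper's multi-state model: women in the general cohort contribute density terms if married before interview age c and survival terms otherwise, while ever-married women in the prevalent cohort contribute the density conditional on marriage before c. The two log-likelihoods are simply added. All parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
lam_true, c = 0.08, 30.0                      # true marriage rate, age at interview

# General cohort: marriage age observed if before c, otherwise censored at c.
t = rng.exponential(1 / lam_true, 2000)
gen_t, n_cens = t[t <= c], int(np.sum(t > c))

# Prevalent cohort: ever-married only, so ages are drawn conditional on t <= c.
draws = rng.exponential(1 / lam_true, 5000)
prev_t = draws[draws <= c][:1000]

def negloglik(lam):
    ll = np.sum(np.log(lam) - lam * gen_t) - lam * c * n_cens  # f(t) and S(c) terms
    ll += np.sum(np.log(lam) - lam * prev_t)                   # f(t) terms ...
    ll -= len(prev_t) * np.log(1 - np.exp(-lam * c))           # ... divided by F(c)
    return -ll

lam_hat = minimize_scalar(negloglik, bounds=(1e-4, 1.0), method="bounded").x
print(f"lambda_hat = {lam_hat:.3f} (true {lam_true})")
```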

Methodology

Estimation of separable direct and indirect effects in continuous time

Many research questions involve time-to-event outcomes that can be prevented from occurring by competing events. In these settings, we must be careful about the causal interpretation of classical statistical estimands. In particular, estimands on the hazard scale, such as ratios of cause-specific or subdistribution hazards, are fundamentally hard to interpret causally. Estimands on the risk scale, such as contrasts of cumulative incidence functions, do have a causal interpretation, but they only capture the total effect of the treatment on the event of interest; that is, effects both through and outside of the competing event. To disentangle causal treatment effects on the event of interest and competing events, the separable direct and indirect effects were recently introduced. Here we provide new results on the estimation of separable direct and indirect effects in continuous time. In particular, we derive the nonparametric influence function in continuous time and use it to construct an estimator with certain robustness properties. We also propose a simple estimator based on semiparametric models for the two cause-specific hazard functions. We describe the asymptotic properties of these estimators and present results from simulation studies suggesting that the estimators behave satisfactorily in finite samples. Finally, we re-analyze the prostate cancer trial from Stensrud et al. (2020).
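
The separable-effects machinery is beyond a short sketch, but the risk-scale estimand the abstract contrasts with hazards is easy to compute: below is a minimal Aalen-Johansen estimator of the cumulative incidence function (ties handled sequentially), whose between-arm contrast gives the total effect. The per-arm arrays in the usage comment are hypothetical.

```python
import numpy as np

def cuminc(time, event, cause, grid):
    """Aalen-Johansen cumulative incidence of `cause`.
    event: 0 = censored; 1, 2, ... = competing event types."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    surv, cif, at_risk = 1.0, 0.0, len(time)
    out, k = np.zeros(len(grid)), 0
    for i in range(len(time)):
        while k < len(grid) and grid[k] < time[i]:
            out[k] = cif
            k += 1
        if event[i] == cause:
            cif += surv / at_risk        # left-limit survival times cause-specific hazard
        if event[i] != 0:
            surv *= 1.0 - 1.0 / at_risk  # any event depletes event-free survival
        at_risk -= 1
    out[k:] = cif
    return out

# Total effect at 5 years as a risk difference between treatment arms:
# rd = cuminc(time1, event1, 1, [5.0])[0] - cuminc(time0, event0, 1, [5.0])[0]
```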

Methodology

Evaluating Catchment Models as Multiple Working Hypotheses: on the Role of Error Metrics, Parameter Sampling, Model Structure, and Data Information Content

To evaluate models as hypotheses, we developed the method of Flux Mapping to construct a hypothesis space based on dominant runoff-generating mechanisms. Acceptable model runs, defined as total simulated flow with similar (and minimal) model error, are mapped to the hypothesis space according to their simulated runoff components. In each modeling case, the hypothesis space is the result of an interplay of factors: model structure and parameterization, the chosen error metric, and data information content. The aim of this study is to disentangle the role of each factor in model evaluation. We used two model structures (SACRAMENTO and SIMHYD), two parameter sampling approaches (Latin hypercube sampling of the parameter space and guided search of the solution space), three widely used error metrics (Nash-Sutcliffe Efficiency, NSE; Kling-Gupta Efficiency skill score, KGEss; and Willmott's refined Index of Agreement, WIA), and hydrological data from a large sample of Australian catchments. First, we characterized how the three error metrics behave under different error types and magnitudes, independent of any modeling. We then conducted a series of controlled experiments to unpack the role of each factor in runoff generation hypotheses. We show that KGEss is a more reliable metric than NSE and WIA for model evaluation. We further demonstrate that changing only the error metric, while holding the other factors constant, can change the model solution space and hence vary model performance, parameter sampling sufficiency, and/or the flux map. We show how unreliable error metrics and insufficient parameter sampling impair model-based inferences, particularly about runoff generation hypotheses.
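
The three metrics are standard and compact enough to state. A sketch in Python follows, using the usual formulas from the literature; the mean-flow benchmark value KGE = 1 - sqrt(2) for the skill score follows Knoben et al. (2019).

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency (2009 form)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def kge_ss(sim, obs):
    """KGE skill score against the mean-flow benchmark (KGE_bench = 1 - sqrt(2))."""
    bench = 1 - np.sqrt(2)
    return (kge(sim, obs) - bench) / (1 - bench)

def wia(sim, obs):
    """Willmott's refined index of agreement (2012)."""
    err = np.sum(np.abs(sim - obs))
    dev = 2 * np.sum(np.abs(obs - obs.mean()))
    return 1 - err / dev if err <= dev else dev / err - 1
```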

Methodology

Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited

A probability forecast or probabilistic classifier is reliable or calibrated if the predicted probabilities are matched by ex post observed frequencies, as examined visually in reliability diagrams. The classical binning-and-counting approach to plotting reliability diagrams has been hampered by a lack of stability under unavoidable, ad hoc implementation decisions. Here we introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way. CORP is based on nonparametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm - essentially, the CORP reliability diagram shows the graph of the PAV-(re)calibrated forecast probabilities. The CORP approach allows for uncertainty quantification via either resampling techniques or asymptotic theory, furnishes a new numerical measure of miscalibration, and provides a CORP-based Brier score decomposition that generalizes to any proper scoring rule. We anticipate that judicious uses of the PAV algorithm will yield improved tools for diagnostics and inference for a very wide range of statistical and machine learning methods.
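
The core of CORP is isotonic regression of the binary outcomes on the forecast probabilities; a minimal sketch using scikit-learn is below. Plotting `recal` against `p` gives the CORP reliability diagram, and the three components satisfy mean Brier score = MCB - DSC + UNC by construction.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def corp_decomposition(p, y):
    """CORP Brier-score decomposition of forecasts p against binary outcomes y."""
    recal = IsotonicRegression(y_min=0, y_max=1).fit_transform(p, y)  # PAV recalibration
    brier = lambda f: np.mean((f - y) ** 2)
    unc = brier(np.full_like(p, y.mean(), dtype=float))  # climatology (uncertainty)
    mcb = brier(p) - brier(recal)                        # miscalibration
    dsc = unc - brier(recal)                             # discrimination
    return mcb, dsc, unc
```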

Methodology

Evaluating the Discrimination Ability of Proper Multivariate Scoring Rules

Proper scoring rules are commonly applied to quantify the accuracy of distribution forecasts. Given an observation, they assign a scalar score to each distribution forecast, with the lowest expected score attributed to the true distribution. The energy and variogram scores are two rules that have recently gained popularity in multivariate settings because their computation does not require a forecast to have a parametric density function, so they are broadly applicable. Here we conduct a simulation study to compare the discrimination ability of the energy score and three variogram scores. Compared with other studies, our simulation design is more realistic because it is supported by a historical data set containing commodity prices, currencies, and interest rates, and because our data-generating processes include a diverse selection of models with different marginal distributions, dependence structures, and calibration windows. This facilitates a comprehensive comparison of the performance of proper scoring rules in different settings. To compare the scores we use three metrics: the mean relative score, the error rate, and a generalised discrimination heuristic. Overall, we find that the variogram score with parameter p=0.5 outperforms the energy score and the other two variogram scores.
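
Both scores have simple ensemble-based estimators; a NumPy sketch of the plug-in forms is below, where `ens` is an m-by-d array of ensemble members, `y` is the realized d-vector, and the variogram weights are set to one.

```python
import numpy as np

def energy_score(ens, y):
    """Plug-in energy score: E||X - y|| - 0.5 E||X - X'||,
    with expectations over the m x m member pairs (zero diagonal included)."""
    t1 = np.mean(np.linalg.norm(ens - y, axis=1))
    t2 = np.mean(np.linalg.norm(ens[:, None, :] - ens[None, :, :], axis=-1))
    return t1 - 0.5 * t2

def variogram_score(ens, y, p=0.5):
    """Variogram score of order p with unit weights:
    sum over component pairs (i, j) of (|y_i - y_j|^p - E|X_i - X_j|^p)^2."""
    vy = np.abs(y[:, None] - y[None, :]) ** p
    vens = np.mean(np.abs(ens[:, :, None] - ens[:, None, :]) ** p, axis=0)
    return np.sum((vy - vens) ** 2)
```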

Methodology

Evaluation of Logistic Regression Applied to Respondent-Driven Samples: Simulated and Real Data

Objective: To investigate the impact of different logistic regression estimators applied to respondent-driven sampling (RDS) samples drawn from simulated and real data. Methods: Four simulated populations were created by combining different connectivity models, levels of clustering, and infection processes. Each subject in the population received two attributes, only one of which was related to the infection process. From each population, RDS samples of different sizes were obtained. Similarly, RDS samples were obtained from a real-world dataset. Three logistic regression estimators were applied to assess the association between the attributes and infection status, and the observed coverage of each estimator was subsequently measured. Results: The type of connectivity had more impact on estimator performance than the level of clustering. On the simulated datasets, unweighted logistic regression estimators emerged as the best option, although all estimators showed fairly good performance. On the real dataset, the performance of the weighted estimators showed some instabilities, making them a risky option. Conclusion: Unweighted logistic regression is a reliable option for RDS samples, with performance similar to that on random samples, and should therefore be the preferred option.
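
As a sketch of the comparison (not the paper's exact estimators), here are unweighted versus degree-weighted logistic fits with statsmodels, using Volz-Heckathorn-style weights inversely proportional to each respondent's reported degree; all array names are hypothetical inputs.

```python
import numpy as np
import statsmodels.api as sm

def rds_logistic_fits(y, X, degree):
    """y: infection status (0/1); X: covariates; degree: reported network size."""
    Xc = sm.add_constant(X)
    unweighted = sm.Logit(y, Xc).fit(disp=0)
    w = 1.0 / np.asarray(degree)               # inverse-degree weights
    w = w * len(y) / w.sum()                   # normalize weights to sum to n
    weighted = sm.GLM(y, Xc, family=sm.families.Binomial(),
                      freq_weights=w).fit()
    return unweighted, weighted
```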
