Featured Research

Methodology

A principle feature analysis

A key task of data science is to identify relevant features linked to certain output variables that are to be modeled or predicted. To obtain a small but meaningful model, it is important to find stochastically independent variables capturing all the information necessary to model or predict the output variables sufficiently well. Therefore, in this work we introduce a framework to detect linear and non-linear dependencies between different features. As we show, features that are actually functions of other features carry no additional information. Consequently, a model reduction neglecting such features preserves the relevant information, reduces noise and thus improves the quality of the model. Furthermore, a smaller model makes it easier to adapt a model of a given system. In addition, the approach structures the dependencies within the considered features, which is advantageous both for classical modeling, from regression to differential equations, and for machine learning. To demonstrate the generality and applicability of the presented framework, 2154 features of a data center are measured and a model is set up to classify faulty and non-faulty states of the data center. The framework automatically reduces this number to 161 features. The prediction accuracy of the reduced model even improves compared to the model trained on the full set of features. A second example is the analysis of a gene expression data set, in which 9 genes are extracted from 9513; their expression levels suffice to distinguish two cell clusters of macrophages.
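
A minimal sketch of the underlying idea (not the authors' algorithm): a feature that can be predicted from the remaining features carries no additional information and can be dropped. Below, a generic non-linear regressor and an arbitrary R² cutoff stand in for the paper's dependency detection.

```python
# Sketch: drop features that are (approximately) functions of the others.
# Not the paper's algorithm; only the underlying idea, with an assumed R^2 cutoff.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def redundant_features(X, r2_threshold=0.95, random_state=0):
    """Return indices of columns that can be predicted from the remaining columns."""
    redundant = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        model = RandomForestRegressor(n_estimators=100, random_state=random_state)
        # Cross-validated R^2 of predicting feature j from all other features.
        r2 = cross_val_score(model, others, X[:, j], cv=5, scoring="r2").mean()
        if r2 >= r2_threshold:
            redundant.append(j)
    return redundant

# Example: the third column is a deterministic function of the first two.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
X[:, 2] = np.sin(X[:, 0]) + X[:, 1] ** 2
print(redundant_features(X))   # expected to flag column 2
```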

Methodology

A regression framework for a probabilistic measure of cost-effectiveness

To make informed health policy decisions regarding a treatment, we must consider both its cost and its clinical effectiveness. In past work, we introduced the net benefit separation (NBS) as a novel measure of cost-effectiveness. The NBS is a probabilistic measure that characterizes the extent to which a treated patient is more likely to experience benefit than an untreated patient. Because treatment response varies across patients, uncovering factors that influence cost-effectiveness can assist policy makers in population-level decisions regarding resource allocation. In this paper, we introduce a regression framework for the NBS in order to estimate covariate-specific NBS and identify determinants of its variation. Our approach accommodates informative cost censoring through inverse probability weighting and addresses confounding through a semiparametric standardization procedure. Through simulations, we show that NBS regression performs well in a variety of common scenarios. We apply the proposed regression procedure to a realistic simulated data set as an illustration of how our approach could be used to investigate the association between cancer stage, comorbidities and cost-effectiveness when comparing adjuvant radiation therapy and chemotherapy in post-hysterectomy endometrial cancer patients.
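
As a rough illustration of the quantity being modeled (the precise estimand and the regression machinery are developed in the paper), the NBS can be read as the probability that a randomly drawn treated patient has a larger net benefit (effectiveness scaled by a willingness-to-pay threshold, minus cost) than a randomly drawn untreated patient. A naive, unadjusted estimate, ignoring the cost censoring and confounding that the paper's IPW and standardization steps handle, might look like this:

```python
# Naive, unadjusted sketch of a pairwise net-benefit comparison.
# The paper's NBS regression additionally handles cost censoring (IPW) and
# confounding (semiparametric standardization); none of that is shown here.
import numpy as np

def naive_nbs(effect_t, cost_t, effect_c, cost_c, wtp):
    """P(net benefit of a treated patient > net benefit of an untreated patient)."""
    nb_t = wtp * np.asarray(effect_t) - np.asarray(cost_t)   # treated net benefits
    nb_c = wtp * np.asarray(effect_c) - np.asarray(cost_c)   # untreated net benefits
    # Compare every treated/untreated pair; ties count as 1/2.
    diff = nb_t[:, None] - nb_c[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(0)
effect_t, cost_t = rng.exponential(2.0, 200), rng.gamma(3, 1000, 200)
effect_c, cost_c = rng.exponential(1.5, 200), rng.gamma(3, 900, 200)
print(naive_nbs(effect_t, cost_t, effect_c, cost_c, wtp=2000.0))
```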

Methodology

A robust multivariate linear non-parametric maximum likelihood model for ties

Statistical analysis in applied research, across almost every field (e.g., biomedicine, economics, computer science, and psychology), relies on samples for which the error distribution of the dependent variable is unknown or, at best, difficult to model linearly. Yet distributional assumptions of this kind are extremely common. Incorrectly specified distributions bias the results and compromise the generalisability of our interpretations: the Euclidean distance underlying linearly unbiased estimation is very difficult to identify correctly from finite samples, and when misapplied it yields an estimator that is neither unbiased nor maximally informative. The common alternative, non-parametric statistics, has its own fundamental flaws. These revolve around order statistics and estimation in the presence of ties, which often precludes the inclusion of multiple independent variables and the estimation of interactions. We introduce a competitor to the Euclidean norm, the Kemeny norm, prove that it induces a valid Banach space, construct a multivariate linear extension of the Kendall-Theil-Sen estimator that does not compromise the extensibility of the parameter space, and establish its maximum likelihood properties. Demonstrations on both simulated and empirical data show that the new estimator is nearly equivalent in power to the GLM for Gaussian data but greatly superior in a wide range of analytic scenarios, including finite ordinal sum-score analysis, thereby aiding the resolution of the replication problem in the applied sciences.
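
For orientation only (a sketch, not the multivariate Kemeny-norm estimator developed in the paper): the classical univariate Kendall-Theil-Sen slope, the median of all pairwise slopes, is the rank-based starting point that the paper extends to multiple predictors and ties.

```python
# Classical univariate Kendall-Theil-Sen estimator: the median of all pairwise slopes.
# The paper generalises this rank-based idea to a multivariate linear model under the
# Kemeny norm; this sketch only shows the familiar one-predictor case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + 0.5 * rng.standard_cauchy(100)   # heavy-tailed noise

# Manual computation ...
i, j = np.triu_indices(len(x), k=1)
slopes = (y[j] - y[i]) / (x[j] - x[i])
slope = np.median(slopes)
intercept = np.median(y - slope * x)

# ... agrees with SciPy's implementation.
slope_sp, intercept_sp, _, _ = stats.theilslopes(y, x)
print(slope, intercept)
print(slope_sp, intercept_sp)
```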

Methodology

A semi-analytical solution to the maximum likelihood fit of Poisson data to a linear model using the Cash statistic

[ABRIDGED] The Cash statistic, also known as the C stat, is commonly used for the analysis of low-count Poisson data, including data with null counts for certain values of the independent variable. The statistic is especially attractive for low-count data that cannot be combined, or re-binned, without loss of resolution. This paper presents a new maximum-likelihood solution for the best-fit parameters of a linear model using the Poisson-based Cash statistic. The solution provides a new and simple method to measure the best-fit parameters of a linear model for any Poisson-based data, including data with null counts. In particular, the method enforces the requirement that the best-fit linear model be non-negative throughout the support of the independent variable. The method is summarized in a simple algorithm to fit Poisson counting data of any size and counting rate with a linear model, bypassing entirely the use of the traditional χ² statistic.
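
A hedged numerical sketch for comparison (the paper's contribution is the semi-analytical solution, which is not reproduced here): the Cash statistic in its common form C = 2 Σ_i [m_i − n_i + n_i ln(n_i/m_i)], with the logarithmic term dropped for bins with zero counts and m_i = a + b x_i, is minimized directly, rejecting parameter values for which the line becomes non-positive on the support.

```python
# Numerical minimisation of the Cash statistic for a linear model m(x) = a + b*x.
# Brute-force check only; the paper derives a semi-analytical solution.
import numpy as np
from scipy.optimize import minimize

def cash_stat(params, x, n):
    a, b = params
    m = a + b * x
    if np.any(m <= 0):            # the fitted model must stay positive on the support
        return np.inf
    # C = 2 * sum( m - n + n*ln(n/m) ), with the log term omitted where n = 0.
    term = m - n
    pos = n > 0
    term[pos] += n[pos] * np.log(n[pos] / m[pos])
    return 2.0 * term.sum()

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
n = rng.poisson(1.0 + 0.4 * x)    # low-count Poisson data, zeros allowed

res = minimize(cash_stat, x0=[n.mean(), 0.0], args=(x, n), method="Nelder-Mead")
print("best-fit a, b:", res.x)
```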

Methodology

A simulation study of semiparametric estimation in copula models based on minimum Alpha-Divergence

This paper introduces two semiparametric methods for estimating the copula parameter. These methods are based on the minimum Alpha-Divergence between a non-parametric estimate of the copula density, obtained with the local likelihood probit-transformation method, and the true copula density function. A Monte Carlo study measures the performance of these methods using the Hellinger distance and the Neyman divergence as special cases of Alpha-Divergence. Simulation results are compared with Maximum Pseudo-Likelihood (MPL) estimation, a conventional estimation method, in well-known bivariate copula models. These results show that the proposed method based on minimum pseudo-Hellinger distance estimation performs well for small sample sizes and weak dependence. The estimation methods are applied to a real data set in hydrology.
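
A compact sketch of the estimation idea under simplifying assumptions: a plain Gaussian KDE on probit-transformed pseudo-observations stands in for the paper's local likelihood probit-transformation estimator, and the squared Hellinger distance to a Clayton copula density is minimized over the parameter on a grid of the unit square. The copula family and grid are illustrative choices, not the paper's.

```python
# Sketch: semiparametric copula fit by minimum Hellinger distance.
# Assumptions not from the paper: a plain KDE on probit-transformed pseudo-observations
# replaces the local likelihood estimator, and a Clayton family is used as an example.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def pseudo_obs(x):
    return stats.rankdata(x) / (len(x) + 1.0)

def kde_copula_density(u, v, grid_u, grid_v):
    """Probit-transformation KDE estimate of the copula density on a grid."""
    z = np.vstack([stats.norm.ppf(u), stats.norm.ppf(v)])
    kde = stats.gaussian_kde(z)
    zu, zv = stats.norm.ppf(grid_u), stats.norm.ppf(grid_v)
    g = kde(np.vstack([zu.ravel(), zv.ravel()])).reshape(grid_u.shape)
    return g / (stats.norm.pdf(zu) * stats.norm.pdf(zv))

def clayton_density(u, v, theta):
    s = u ** (-theta) + v ** (-theta) - 1.0
    return (1.0 + theta) * (u * v) ** (-1.0 - theta) * s ** (-2.0 - 1.0 / theta)

# Simulate Clayton(theta = 2) data via the conditional method.
rng = np.random.default_rng(7)
n, theta_true = 500, 2.0
u, w = rng.uniform(size=n), rng.uniform(size=n)
v = ((w ** (-theta_true / (1.0 + theta_true)) - 1.0) * u ** (-theta_true) + 1.0) ** (-1.0 / theta_true)

uo, vo = pseudo_obs(u), pseudo_obs(v)
gsize = 50
gu, gv = np.meshgrid(np.linspace(0.01, 0.99, gsize), np.linspace(0.01, 0.99, gsize))
c_np = kde_copula_density(uo, vo, gu, gv)
cell = (0.98 / (gsize - 1)) ** 2

def sq_hellinger(theta):
    # Squared Hellinger distance approximated by a Riemann sum on the grid.
    return 1.0 - np.sum(np.sqrt(c_np * clayton_density(gu, gv, theta))) * cell

res = minimize_scalar(sq_hellinger, bounds=(0.1, 10.0), method="bounded")
print("estimated theta:", res.x)
```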

Methodology

A simulation-extrapolation approach for the mixture cure model with mismeasured covariates

We consider survival data from a population with cured subjects in the presence of mismeasured covariates. We use the mixture cure model to account for the individuals who will never experience the event and, at the same time, to distinguish between the effect of the covariates on the cure probabilities and on the survival times. In particular, for practical applications, it is of interest to assume a logistic form for the incidence and a Cox proportional hazards model for the latency. To correct the estimators for the bias introduced by the measurement error, we use the SIMEX (simulation-extrapolation) algorithm, a very general simulation-based method. It essentially estimates this bias by introducing additional error into the data and then recovers bias-corrected estimators through an extrapolation approach. The estimators are shown to be consistent and asymptotically normally distributed when the true extrapolation function is known. We investigate their finite-sample performance through a simulation study and apply the proposed method to analyse the effect of prostate-specific antigen (PSA) in patients with prostate cancer.
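
A minimal sketch of the SIMEX idea itself, applied to plain linear regression rather than the mixture cure model: add extra measurement error at several levels λ, refit, and extrapolate the estimate back to λ = −1, i.e. to zero measurement error. The known error variance and the quadratic extrapolant are assumptions of the sketch.

```python
# SIMEX sketch on simple linear regression with an error-prone covariate.
# The paper applies the same idea to a logistic/Cox mixture cure model.
import numpy as np

rng = np.random.default_rng(5)
n, beta, sigma_u = 2000, 1.0, 0.8
x_true = rng.normal(size=n)
w = x_true + rng.normal(scale=sigma_u, size=n)      # observed, error-prone covariate
y = 2.0 + beta * x_true + rng.normal(scale=0.5, size=n)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
est = []
for lam in lambdas:
    # Simulation step: add extra error with variance lam * sigma_u^2, average over B replicates.
    boots = [slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y) for _ in range(B)]
    est.append(np.mean(boots))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1.
coef = np.polyfit(lambdas, est, 2)
print("naive estimate:", est[0])
print("SIMEX estimate:", np.polyval(coef, -1.0))
print("true slope:    ", beta)
```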

Methodology

A stable and adaptive polygenic signal detection method based on repeated sample splitting

Focusing on polygenic signal detection in high-dimensional genetic association studies of complex traits, we develop an adaptive test for generalized linear models that accommodates different alternatives. To facilitate valid post-selection inference for high-dimensional data, our study adheres to the original sample-splitting principle but does so repeatedly to increase the stability of the inference. We derive the asymptotic null distributions of the proposed test for both a fixed and a diverging number of variants. We also establish its asymptotic properties under local alternatives, providing insight into why the power gained from variable selection and weighting can compensate for the efficiency lost to sample splitting. We support our analytical findings through extensive simulation studies and two applications. The proposed procedure is computationally efficient and has been implemented as the R package DoubleCauchy.
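
The abstract does not spell out the test statistic, so the following is only a hedged illustration of repeated sample splitting with a Cauchy-type combination (suggested by the package name, not taken from the paper): each split uses one half for variant selection and weighting and the other half for a p-value, and the p-values across splits are pooled with the Cauchy combination rule.

```python
# Toy repeated-sample-splitting test; NOT the paper's statistic, just the general recipe.
# Assumed details: marginal-correlation screening, a weighted burden score, and the
# Cauchy combination rule p = 1/2 - arctan(mean(tan((1/2 - p_s)*pi)))/pi.
import numpy as np
from scipy import stats

def split_test(X, y, rng, top_k=20):
    n = len(y)
    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2 :]
    # Selection/weighting half: keep the top_k variants by |marginal correlation|.
    corr = np.array([np.corrcoef(X[a, j], y[a])[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(-np.abs(corr))[:top_k]
    # Inference half: regress y on the weighted burden score of the selected variants.
    score = X[b][:, keep] @ corr[keep]
    return stats.linregress(score, y[b]).pvalue

def cauchy_combine(pvals):
    t = np.mean(np.tan((0.5 - np.asarray(pvals)) * np.pi))
    return 0.5 - np.arctan(t) / np.pi

rng = np.random.default_rng(11)
n, p = 400, 500
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:10] = 0.15                                     # weak polygenic signal
y = X @ beta + rng.normal(size=n)

pvals = [split_test(X, y, rng) for _ in range(50)]   # repeated splits stabilise inference
print("combined p-value:", cauchy_combine(pvals))
```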

Methodology

A test for comparing conditional ROC curves with multidimensional covariates

The comparison of Receiver Operating Characteristic (ROC) curves is frequently used in the literature to compare the discriminatory capability of different classification procedures based on diagnostic variables. The performance of these variables can sometimes be influenced by other covariates, which should therefore be taken into account when making the comparison. A new non-parametric test is proposed here for testing the equality of two or more dependent ROC curves conditional on the value of a multidimensional covariate. Projections are used to transform the problem into a one-dimensional one that is easier to handle. Simulations are carried out to study the practical performance of the new methodology. A real data set of patients with pleural effusion is analysed to illustrate the procedure.
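
A rough sketch of the ingredients (the projection construction and the actual test statistic are the paper's and are not reproduced here): project the multidimensional covariate onto a direction, kernel-weight observations whose projection lies near a target value, and compute a weighted empirical AUC for each diagnostic variable at that covariate value; the test then compares such conditional curves.

```python
# Sketch: covariate-conditional weighted AUC after projecting a multivariate covariate to 1-D.
# The direction, bandwidth and kernel below are illustrative assumptions, not the paper's choices.
import numpy as np

def weighted_auc(marker, disease, weights):
    """Kernel-weighted empirical AUC; ties between cases and controls count 1/2."""
    case, ctrl = disease == 1, disease == 0
    m1, m0 = marker[case], marker[ctrl]
    w1, w0 = weights[case], weights[ctrl]
    diff = m1[:, None] - m0[None, :]
    ww = w1[:, None] * w0[None, :]
    return np.sum(ww * ((diff > 0) + 0.5 * (diff == 0))) / np.sum(ww)

def conditional_auc(marker, disease, Z, direction, z0, bandwidth=0.5):
    proj = Z @ direction                               # one-dimensional projection of the covariate
    w = np.exp(-0.5 * ((proj - z0) / bandwidth) ** 2)  # Gaussian kernel weights around z0
    return weighted_auc(marker, disease, w)

rng = np.random.default_rng(2)
n = 1000
Z = rng.normal(size=(n, 3))                            # multidimensional covariate
disease = rng.binomial(1, 0.4, size=n)
marker_A = disease * (1.0 + 0.8 * Z[:, 0]) + rng.normal(size=n)   # accuracy depends on Z
marker_B = disease * 1.0 + rng.normal(size=n)

direction = np.array([1.0, 0.0, 0.0])
for z0 in (-1.0, 0.0, 1.0):
    print(z0, conditional_auc(marker_A, disease, Z, direction, z0),
              conditional_auc(marker_B, disease, Z, direction, z0))
```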

Methodology

Accounting for correlated horizontal pleiotropy in two-sample Mendelian randomization using correlated instrumental variants

Mendelian randomization (MR) is a powerful approach for examining the causal relationships between health risk factors and outcomes in observational studies. Owing to the proliferation of genome-wide association studies (GWASs) and the abundance of fully accessible GWAS summary statistics, a variety of two-sample MR methods for summary data have been developed to either detect or account for horizontal pleiotropy, primarily based on the assumption that the effects of variants on the exposure (γ) and the horizontal pleiotropy (α) are independent. This assumption is too strict and is easily violated in the presence of correlated horizontal pleiotropy (CHP). To account for CHP, we propose a Bayesian approach, MR-Corr2, that uses an orthogonal projection to reparameterize the bivariate normal distribution for γ and α, and a spike-and-slab prior to mitigate the impact of CHP. We develop an efficient algorithm with parallelized Gibbs sampling. To demonstrate the advantages of MR-Corr2 over existing methods, we conducted comprehensive simulation studies comparing type-I error control and point estimation in various scenarios. Applying MR-Corr2 to pairs of traits in two sets of complex traits, we did not identify a contradictory causal relationship between HDL-c and CAD. Moreover, the results provide a new perspective on the causal network among complex traits. The developed R package and the code to reproduce all results are available at this https URL.
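
For context only, here is the standard inverse-variance-weighted (IVW) estimator, not MR-Corr2: under the basic two-sample MR model Γ_j = β γ_j + α_j, IVW regresses outcome effects on exposure effects, and correlated horizontal pleiotropy in α is exactly what biases it, the problem MR-Corr2's reparameterization and spike-and-slab prior are designed to address.

```python
# Standard IVW two-sample MR estimate from summary statistics: the baseline that
# correlated horizontal pleiotropy distorts. MR-Corr2 itself (Gibbs sampling,
# spike-and-slab prior) is not reproduced here.
import numpy as np

def ivw_estimate(gamma_hat, Gamma_hat, se_Gamma):
    """beta_hat = sum(w*gamma*Gamma) / sum(w*gamma^2), with w = 1/se_Gamma^2."""
    w = 1.0 / se_Gamma ** 2
    beta = np.sum(w * gamma_hat * Gamma_hat) / np.sum(w * gamma_hat ** 2)
    se = np.sqrt(1.0 / np.sum(w * gamma_hat ** 2))
    return beta, se

rng = np.random.default_rng(8)
m, beta_true = 100, 0.3
gamma = rng.normal(0.0, 0.1, m)                       # SNP-exposure effects
alpha = 0.2 * gamma + rng.normal(0.0, 0.005, m)       # correlated pleiotropy: alpha depends on gamma
se_G = np.full(m, 0.01)
Gamma = beta_true * gamma + alpha + rng.normal(0.0, 0.01, m)

print(ivw_estimate(gamma, Gamma, se_G))               # biased away from 0.3 by the pleiotropy term
```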

Methodology

Accounting for not-at-random missingness through imputation stacking

Not-at-random missingness presents a challenge when addressing missing data in many health research applications. In this paper, we propose a new approach that accounts for not-at-random missingness after multiple imputation through a weighted analysis of stacked multiple imputations. The weights are easily calculated as a function of the imputed data and assumptions about the not-at-random missingness. We demonstrate through simulation that the proposed method has excellent performance when the missingness model is correctly specified. In practice, the missingness mechanism will not be known. We show how our approach can be used in a sensitivity analysis framework to evaluate the robustness of model inference to different assumptions about the missingness mechanism, and we provide the R package StackImpute to facilitate implementation as part of routine sensitivity analyses. We apply the proposed method to account for not-at-random missingness in human papillomavirus test results in a study of survival among patients diagnosed with oropharyngeal cancer.
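
A hedged sketch of the mechanics (the exact weight formula and models are given in the paper and implemented in the StackImpute package; the exponential-tilt selection model and the value of δ below are assumptions of the sketch): impute under a MAR-style model, stack the imputations, weight each stacked row according to the assumed not-at-random mechanism, normalize the weights within subject, and fit a weighted analysis.

```python
# Sketch of a weighted analysis of stacked multiple imputations under an assumed
# MNAR mechanism. The tilt parameter delta and the weight form are illustrative only;
# see the StackImpute package for the paper's implementation.
import numpy as np

rng = np.random.default_rng(4)
n, M, delta = 1000, 20, 1.5
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
# Not-at-random missingness in x: larger x is more likely to be missing.
miss = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-delta * x))
obs = ~miss

# Step 1: multiple imputation of x under a simple regression model x | y fit on observed rows.
b = np.polyfit(y[obs], x[obs], 1)
resid_sd = np.std(x[obs] - np.polyval(b, y[obs]))
stacks = []
for _ in range(M):
    x_imp = x.copy()
    x_imp[miss] = np.polyval(b, y[miss]) + rng.normal(scale=resid_sd, size=miss.sum())
    # Step 2: weights encode the assumed MNAR mechanism, here an exponential tilt
    # exp(delta * x_imputed) for imputed rows and a constant for fully observed rows.
    w = np.where(miss, np.exp(delta * x_imp), 1.0)
    stacks.append((x_imp, y, w))

X_stack = np.concatenate([s[0] for s in stacks])
Y_stack = np.concatenate([s[1] for s in stacks])
W = np.concatenate([s[2] for s in stacks])

# Step 3: normalise weights within subject so every subject contributes total weight 1.
subj = np.tile(np.arange(n), M)
W = W / np.bincount(subj, weights=W)[subj]

# Step 4: weighted least squares of y on x over the stacked data set.
A = np.column_stack([np.ones_like(X_stack), X_stack])
beta = np.linalg.lstsq(A * np.sqrt(W)[:, None], Y_stack * np.sqrt(W), rcond=None)[0]
print("weighted stacked-MI estimate of the x coefficient:", beta[1])
```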

