
Publications


Featured research published by Bart Michiels.


Statistica Neerlandica | 1998

Monotone missing data and pattern-mixture models

Geert Molenberghs; Bart Michiels; Michael G. Kenward; Peter J. Diggle

It is shown that the classical taxonomy of missing data models, namely missing completely at random, missing at random and informative missingness, which has been developed almost exclusively within a selection modelling framework, can also be applied to pattern-mixture models. In particular, intuitively appealing identifying restrictions are proposed for a pattern-mixture MAR mechanism.
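
The selection-model versus pattern-mixture distinction underlying this taxonomy is a choice between two factorizations of the joint distribution of the outcome Y and the missingness indicator R. A toy numeric sketch in Python (the probabilities are invented for illustration, not taken from the paper):

```python
import numpy as np

# Joint distribution of a binary outcome Y and missingness indicator R
# (R = 1: observed, R = 0: missing); the numbers are made up for illustration.
joint = np.array([[0.30, 0.10],   # Y = 0: P(Y=0, R=1), P(Y=0, R=0)
                  [0.35, 0.25]])  # Y = 1: P(Y=1, R=1), P(Y=1, R=0)

p_y = joint.sum(axis=1)           # marginal of Y
p_r = joint.sum(axis=0)           # marginal of R

# Selection-model factorization: f(y, r) = f(y) * f(r | y)
p_r_given_y = joint / p_y[:, None]
# Pattern-mixture factorization: f(y, r) = f(r) * f(y | r)
p_y_given_r = joint / p_r[None, :]

# Both factorizations reconstruct the same joint distribution.
assert np.allclose(p_y[:, None] * p_r_given_y, joint)
assert np.allclose(p_r[None, :] * p_y_given_r, joint)

# MCAR would require f(r | y) not to depend on y; here it does:
print(p_r_given_y[:, 1])   # P(R=0 | Y=y) differs across y, so not MCAR
```

The identifying-restrictions idea in the abstract operates on the second factorization: f(y | r) is unidentified for the incomplete patterns and must be tied to the identified patterns by assumption.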


Communications in Statistics-theory and Methods | 1997

Protective Estimation of Longitudinal Categorical Data With Nonrandom Dropout

Bart Michiels; Geert Molenberghs

Partially observed longitudinal categorical data, where the partial classification arises due to monotone dropout, are analyzed using a protective estimator, which was first suggested by Brown (Biometrics, 1990) for normally distributed data. It is appropriate when dropout depends on the unobserved outcomes only, a particular type of nonignorable nonresponse. Estimation of measurement parameters is possible, without explicitly modelling the dropout process. Necessary and sufficient conditions are derived in order to have a unique solution in the interior of the parameter space. It is shown that precision estimates can be based on the delta method, the EM algorithm, and on multiple imputation. The relative merits of these techniques are discussed and they are contrasted with direct likelihood estimation and with pseudo-likelihood estimation. The method is illustrated using data taken from a psychiatric study.


Communications in Statistics-theory and Methods | 1999

A pattern-mixture odds ratio model for incomplete categorical data

Bart Michiels; Geert Molenberghs; Stuart R. Lipsitz

Most models for incomplete data are formulated within the selection model framework. Pattern-mixture models are increasingly seen as a viable alternative, both from an interpretational as well as from a computational point of view (Little 1993, Hogan and Laird 1997, Ekholm and Skinner 1998). Whereas most applications are either for continuous normally distributed data or for simplified categorical settings such as contingency tables, we show how a multivariate odds ratio model (Molenberghs and Lesaffre 1994, 1998) can be used to fit pattern-mixture models to repeated binary outcomes with continuous covariates. Apart from point estimation, useful methods for interval estimation are presented and data from a clinical study are analyzed to illustrate the methods.


Biometrical Journal | 1998

Pseudo-likelihood for combined selection and pattern-mixture models for incomplete data

Geert Molenberghs; Bart Michiels; Michael G. Kenward

In this paper we develop pseudo-likelihood methods for the estimation of parameters in a model that is specified in terms of both selection modelling and pattern-mixture modelling quantities. Two cases are considered: (1) the model is specified directly from a joint model for the measurement and dropout processes; (2) conditional models for the measurement process given dropout and vice versa are specified directly. In the latter case, compatibility constraints to ensure the existence of a joint density are derived. The method is applied to data from a psychiatric study, where a bivariate therapeutic outcome is supplemented with covariate information.


Behavior Research Methods | 2017

Confidence intervals for single-case effect size measures based on randomization test inversion

Bart Michiels; Mieke Heyvaert; Ann Meulders; Patrick Onghena

In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
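
The paper's supplementary code is in R; the inversion idea itself can be sketched in Python for the simplest case the abstract mentions, a completely randomized design with the unstandardized mean difference as test statistic (function names and the grid resolution are illustrative, not the authors' implementation):

```python
from itertools import combinations
import numpy as np

def randomization_p_value(a, b, theta0=0.0):
    """Two-sided randomization-test p-value for H0: mean(B) - mean(A) = theta0
    in a completely randomized design (exact: all assignments enumerated)."""
    b_shift = [x - theta0 for x in b]   # shift phase B so H0 becomes "no difference"
    pooled = list(a) + b_shift
    n, n_a = len(pooled), len(a)
    observed = abs(np.mean(b_shift) - np.mean(a))
    hits = total = 0
    for idx in combinations(range(n), n_a):
        grp_a = [pooled[i] for i in idx]
        grp_b = [pooled[i] for i in range(n) if i not in idx]
        hits += abs(np.mean(grp_b) - np.mean(grp_a)) >= observed - 1e-12
        total += 1
    return hits / total

def rti_confidence_interval(a, b, alpha=0.05, grid=None):
    """Invert the test (RTI): keep every candidate theta0 that is NOT rejected
    at level alpha; the CI is the range of retained values on the search grid."""
    if grid is None:
        grid = np.linspace(min(b) - max(a), max(b) - min(a), 201)
    kept = [t for t in grid if randomization_p_value(a, b, t) > alpha]
    return (min(kept), max(kept)) if kept else None
```

Because the randomization distribution is coarse when there are few measurement occasions, the retained set, and hence the interval, can be very wide; with very short series no candidate value may be rejectable at all.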


Journal de la Societe Francaise de Statistique & Revue de Statistique Appliquee | 2004

Pattern‐Mixture Models

Geert Molenberghs; Herbert Thijs; Bart Michiels; Geert Verbeke; Michael G. Kenward

Whereas most models for incomplete longitudinal data are formulated within the selection model framework, pattern-mixture models have gained considerable interest in recent years. We outline several strategies to fit pattern-mixture models, including the so-called identifying-restrictions strategies. Multiple imputation is used to apply these strategies to real sets of data. Our ideas are exemplified using quality-of-life data from a longitudinal study on metastatic breast cancer patients and using a longitudinal clinical trial in Alzheimer patients.


Behavior Research Methods | 2018

The conditional power of randomization tests for single-case effect sizes in designs with randomized treatment order: A Monte Carlo simulation study

Bart Michiels; Mieke Heyvaert; Patrick Onghena

The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT’s CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
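
A stripped-down version of the kind of Monte Carlo study described above, assuming a completely randomized design, the mean difference as test statistic, and standard normal data (all parameter values and names are illustrative, not the study's actual settings):

```python
from itertools import combinations
import random

def rt_p_value(a, b):
    """Exact randomization-test p-value for the absolute difference in means
    in a completely randomized design (every assignment equally likely under H0)."""
    pooled = list(a) + list(b)
    n, n_a, n_b = len(pooled), len(a), len(b)
    total = sum(pooled)
    observed = abs(sum(b) / n_b - sum(a) / n_a)
    hits = count = 0
    for idx in combinations(range(n), n_a):
        s_a = sum(pooled[i] for i in idx)
        hits += abs((total - s_a) / n_b - s_a / n_a) >= observed - 1e-12
        count += 1
    return hits / count

def conditional_power(n_per_phase=5, delta=2.0, alpha=0.05, n_sims=200, seed=1):
    """Monte Carlo power estimate: fraction of simulated datasets (standard
    normal noise, mean shift delta in the treatment phase) rejected at alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_phase)]
        b = [rng.gauss(delta, 1.0) for _ in range(n_per_phase)]
        rejections += rt_p_value(a, b) <= alpha
    return rejections / n_sims
```

Swapping in a different test statistic (e.g. NAP instead of the mean difference) or a different randomization scheme (RBD, RRAD) changes only `rt_p_value`, which is what makes a design-by-statistic comparison like the one in the paper straightforward to set up.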


Behavior Research Methods | 2018

Nonparametric meta-analysis for single-case research: Confidence intervals for combined effect sizes

Bart Michiels; Patrick Onghena

In this article we present a nonparametric technique for meta-analyzing randomized single-case experiments by using inverted randomization tests to calculate nonparametric confidence intervals for combined effect sizes (CICES). Over the years, several proposals for single-case meta-analysis have been made, but most of these proposals assume either specific population characteristics (e.g., heterogeneity of variances or normality) or independent observations. However, such assumptions are seldom plausible in single-case research. The CICES technique does not require such assumptions, but only assumes that the combined effect size of multiple randomized single-case experiments can be modeled as a constant difference in the phase means. CICES can be used to synthesize the results from various single-case alternation designs, single-case phase designs, or a combination of the two. Furthermore, the technique can be used with different standardized or unstandardized effect size measures. In this article, we explain the rationale behind the CICES technique and provide illustrations with empirical as well as hypothetical datasets. In addition, we discuss the strengths and weaknesses of this technique and offer some possibilities for future research. We have implemented the CICES technique for single-case meta-analysis in a freely available R function.


Behavior Research Methods | 2018

Randomized single-case AB phase designs: Prospects and pitfalls

Bart Michiels; Patrick Onghena

Single-case experimental designs (SCEDs) are increasingly used in fields such as clinical psychology and educational psychology for the evaluation of treatments and interventions in individual participants. The AB phase design, also known as the interrupted time series design, is one of the most basic SCEDs used in practice. Randomization can be included in this design by randomly determining the start point of the intervention. In this article, we first introduce this randomized AB phase design and review its advantages and disadvantages. Second, we present some data-analytical possibilities and pitfalls related to this design and show how the use of randomization tests can mitigate or remedy some of these pitfalls. Third, we demonstrate that the Type I error of randomization tests in randomized AB phase designs is under control in the presence of unexpected linear trends in the data. Fourth, we report the results of a simulation study investigating the effect of unexpected linear trends on the power of the randomization test in randomized AB phase designs. The implications of these results for the analysis of randomized AB phase designs are discussed. We conclude that randomized AB phase designs are experimentally valid, but that the power of these designs is sufficient only for large treatment effects and large sample sizes. For small treatment effects and small sample sizes, researchers should turn to more complex phase designs, such as randomized ABAB phase designs or randomized multiple-baseline designs.
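
The randomization test for a randomized AB phase design can be sketched in a few lines of Python: the start point of phase B was chosen at random among the admissible start points, so under the null hypothesis the observed phase-mean difference is equally likely to arise at any of them (the minimum phase length and function name here are illustrative assumptions):

```python
def ab_phase_randomization_test(y, actual_start, min_phase=3):
    """One-sided randomization test for a randomized AB phase design.

    y: the full measurement series; actual_start: the index at which the
    intervention (phase B) actually began; min_phase: minimum number of
    measurements required in each phase (an illustrative design choice).
    Returns the p-value for an increase in the phase-B mean."""
    n = len(y)
    starts = range(min_phase, n - min_phase + 1)
    if actual_start not in starts:
        raise ValueError("actual start point is not an admissible start point")

    def stat(k):  # mean(B) - mean(A) if the intervention had started at k
        phase_a, phase_b = y[:k], y[k:]
        return sum(phase_b) / len(phase_b) - sum(phase_a) / len(phase_a)

    observed = stat(actual_start)
    dist = [stat(k) for k in starts]
    return sum(s >= observed - 1e-12 for s in dist) / len(dist)
```

Note how few admissible start points a short series yields: with 10 measurements and phases of at least 3, there are only 5 possible start points, so the smallest attainable p-value is 0.2. This discreteness is one reason the article finds adequate power only for large treatment effects and large sample sizes.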


Biostatistics | 2002

Strategies to fit pattern‐mixture models

Herbert Thijs; Geert Molenberghs; Bart Michiels; Geert Verbeke; Desmond Curran

Collaboration


Top co-authors of Bart Michiels:

Geert Molenberghs, Katholieke Universiteit Leuven
Patrick Onghena, Katholieke Universiteit Leuven
Stuart R. Lipsitz, Brigham and Women's Hospital
Geert Verbeke, Katholieke Universiteit Leuven
Mieke Heyvaert, Katholieke Universiteit Leuven
Desmond Curran, Katholieke Universiteit Leuven