
Publications


Featured research published by Maaike Ugille.


Journal of Experimental Education | 2014

Three-level Analysis of Single-Case Experimental Data: Empirical Validation

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time trend, the number of studies, cases, and measurements, and the between-case and between-study variance. The study shows that the three-level approach results in unbiased estimates of both kinds of treatment effects. To have reasonable power for testing the treatment effects, the authors recommend that researchers use a homogeneous set of studies and include a minimum of 30 studies. The number of measurements and cases is of less importance.
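
As background, a sketch of the kind of model evaluated here may help; the notation is ours, and the article's exact parameterization may differ. With measurements i nested within cases j within studies k, a three-level model for single-case data can be written as

    Level 1:  y_{ijk} = \beta_{0jk} + \beta_{1jk} T_{ijk} + \beta_{2jk} D_{ijk} + \beta_{3jk} T'_{ijk} D_{ijk} + e_{ijk}
    Level 2:  \beta_{pjk} = \theta_{p0k} + u_{pjk}
    Level 3:  \theta_{p0k} = \gamma_{p00} + v_{p0k}

where D_{ijk} is a treatment-phase dummy, T_{ijk} is the measurement occasion, and T'_{ijk} is time recentered at the start of the intervention; \beta_{2jk} is then the immediate treatment effect, \beta_{3jk} the treatment's effect on the time trend, and the u and v terms carry the between-case and between-study variance manipulated in the simulation.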


Behavior Research Methods | 2012

Multilevel meta-analysis of single-subject experimental designs: A simulation study

Maaike Ugille; Mariola Moeyaert; S. Natasha Beretvas; John M. Ferron; Wim Van Den Noortgate

One way to combine data from single-subject experimental design studies is to perform a multilevel meta-analysis, with unstandardized or standardized regression coefficients as the effect size metrics. This study evaluates the performance of this approach. The results indicate that a multilevel meta-analysis of unstandardized effect sizes yields good estimates of the effect. The multilevel meta-analysis of standardized effect sizes, on the other hand, is suitable only when the number of measurement occasions for each subject is 20 or more. The treatment's effect on the intercept is estimated with sufficient power when the studies are homogeneous or when the number of studies is large; the effect on the slope is estimated with sufficient power only when both the number of studies and the number of measurement occasions are large.
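
To make the estimation step concrete, here is a minimal Python sketch of a standard random-effects meta-analysis (the DerSimonian-Laird estimator), a simpler two-level analogue of the multilevel approach evaluated in the article; the effect sizes and sampling variances are hypothetical.

    import numpy as np

    def dersimonian_laird(d, v):
        # Random-effects meta-analysis of effect sizes d with sampling variances v.
        w = 1.0 / v                               # inverse-variance (fixed-effect) weights
        d_fixed = np.sum(w * d) / np.sum(w)       # fixed-effect pooled estimate
        q = np.sum(w * (d - d_fixed) ** 2)        # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance estimate
        w_re = 1.0 / (v + tau2)                   # random-effects weights
        pooled = np.sum(w_re * d) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, se, tau2

    # hypothetical effect sizes and sampling variances from five studies
    d = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
    v = np.array([0.04, 0.05, 0.03, 0.08, 0.06])
    print(dersimonian_laird(d, v))

A full multilevel meta-analysis would additionally model the within-study (between-subject) level, which this sketch collapses into the sampling variances.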


Multivariate Behavioral Research | 2013

The Three-Level Synthesis of Standardized Single-Subject Experimental Data: A Monte Carlo Simulation Study

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

Previous research indicates that three-level modeling is a valid statistical method for making inferences from unstandardized data from a set of single-subject experimental studies, especially when a homogeneous set of at least 30 studies is included (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a). When single-subject data from multiple studies are combined, however, the dependent variable is often measured on different scales, requiring standardization of the data before combining them across studies. One approach is to divide the dependent variable by the residual standard deviation. In this study, we use Monte Carlo methods to evaluate this approach. We examine how well the fixed effects (e.g., the immediate treatment effect and the treatment effect on the time trend) and the variance components (the between- and within-subject variance) are estimated under a number of realistic conditions. The three-level synthesis of standardized single-subject data is found appropriate for estimating the treatment effects, especially when many studies (30 or more) and many measurement occasions within subjects (20 or more) are included and when the studies are rather homogeneous (i.e., with small between-study variance). The estimates of the variance components are less accurate.
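
The standardization step described here is compact enough to state; in our notation (not necessarily the article's),

    y^{*}_{ijk} = y_{ijk} / \hat{\sigma}_{e_k}

where \hat{\sigma}_{e_k} is the estimated within-study residual standard deviation for study k, so that regression coefficients from different studies are expressed on a common, scale-free metric before they are combined.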


Behavior Modification | 2014

The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model so as to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
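
As an illustration, here is a hedged Python sketch of a design matrix for a single AB phase design under one common four-parameter coding (baseline level, baseline trend, immediate treatment effect, change in trend); the occasion count and intervention point are hypothetical, and the article's own matrices may be parameterized differently.

    import numpy as np

    n, t_int = 10, 6                                    # hypothetical: 10 occasions, intervention at occasion 6
    time = np.arange(1, n + 1)
    phase = (time >= t_int).astype(int)                 # 0 = baseline, 1 = treatment phase
    time_since = np.where(phase == 1, time - t_int, 0)  # time recentered at the intervention point
    X = np.column_stack([np.ones(n), time - 1, phase, time_since])
    # columns: intercept (baseline level), baseline time trend,
    # immediate treatment effect, treatment effect on the time trend
    print(X)

Multiple-baseline, reversal, and alternating treatment designs change which dummies enter the matrix and when they switch, but the same logic applies.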


Journal of Experimental Education | 2014

Bias Corrections for Standardized Effect Size Estimates Used With Single-Subject Experimental Designs

Maaike Ugille; Mariola Moeyaert; S. Natasha Beretvas; John M. Ferron; Wim Van Den Noortgate

A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect sizes are adjusted using Hedges' small sample bias correction. Next, the within-subject standard deviation is estimated either by a 2-level model per study or by a regression model with the subjects identified by dummy predictor variables. Finally, the effect sizes are corrected using an iterative raw data parametric bootstrap procedure. The results indicate that the first and last approaches succeed in reducing the bias of the fixed effects estimates. Given the difference in complexity, we recommend the first approach.
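
For reference, Hedges' small-sample correction multiplies a standardized effect size d by a factor depending on the degrees of freedom \nu used to estimate the standard deviation; the usual approximation is

    g = J(\nu) \, d, \qquad J(\nu) \approx 1 - \frac{3}{4\nu - 1}

With few measurement occasions \nu is small, J(\nu) drops well below 1, and the uncorrected d overstates the effect, which is the bias at issue here.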


Journal of Experimental Education | 2017

Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

Mieke Heyvaert; Mariola Moeyaert; Paul Verkempynck; Wim Van Den Noortgate; Marlies Vervloet; Maaike Ugille; Patrick Onghena

This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test p values (RTcombiP). Four factors were manipulated: mean intervention effect, number of cases included in a study, number of measurement occasions for each case, and between-case variance. Under the simulated conditions, Type I error rate was under control at the nominal 5% level for both HLM and RTcombiP. Furthermore, for both procedures, a larger number of combined cases resulted in higher statistical power, with many realistic conditions reaching statistical power of 80% or higher. Smaller values for the between-case variance resulted in higher power for HLM. A larger number of data points resulted in higher power for RTcombiP.
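
As a concrete sketch of the combination step: if the additive method here is Edgington's (which the description suggests, though we state that as an assumption), the combined p value is the probability that the sum of k independent uniform(0,1) p values falls at or below the observed sum, i.e. the Irwin-Hall CDF. A minimal Python version, with hypothetical inputs:

    from math import comb, factorial, floor

    def additive_combined_p(pvals):
        # Irwin-Hall CDF at s = sum of the k observed p values:
        # P(U_1 + ... + U_k <= s) for independent uniform(0,1) variables.
        k, s = len(pvals), sum(pvals)
        return sum((-1) ** j * comb(k, j) * (s - j) ** k
                   for j in range(floor(s) + 1)) / factorial(k)

    # hypothetical randomization-test p values from three replicated cases
    print(additive_combined_p([0.04, 0.10, 0.07]))  # about 0.0015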


School Psychology Quarterly | 2015

Estimating intervention effects across different types of single-subject experimental designs: empirical illustration.

Mariola Moeyaert; Maaike Ugille; John M. Ferron; Patrick Onghena; Mieke Heyvaert; S. Natasha Beretvas; Wim Van Den Noortgate

The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates from univariate 3-level models differ from those obtained with a multivariate 3-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend that researchers use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs.
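
The core difference can be sketched at the study level (notation ours; the article's exact specification may differ): instead of fitting a separate univariate model per design type, the multivariate model lets the study-level random effects of the different effect sizes covary,

    \begin{pmatrix} v_{1k} \\ v_{2k} \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma^2_{v_1} & \sigma_{v_{12}} \\ \sigma_{v_{12}} & \sigma^2_{v_2} \end{pmatrix} \right)

so the off-diagonal term \sigma_{v_{12}} absorbs the dependence between effect sizes coming from the same study, which is exactly what the univariate models ignore.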


Behavior Research Methods | 2013

Modeling External Events in the Three-Level Analysis of Multiple-Baseline across Participants Designs: a Simulation Study

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

In this study, we focus on a three-level meta-analysis for combining data from studies using multiple-baseline across-participants designs. A complicating factor in such designs is that results might be biased if the dependent variable is affected by external events that are not explicitly modeled, such as the illness of a teacher, an exciting class activity, or the presence of a foreign observer. In multiple-baseline designs, external effects can become apparent if they simultaneously affect the outcome score(s) of the participants within a study. This study presents a method for adjusting the three-level model for external events and evaluates the appropriateness of the modified model. To this end, we use a simulation study, and we illustrate the new approach with real data sets. The results indicate that ignoring an external event effect results in biased estimates of the treatment effects, especially when only a small number of studies and measurement occasions are involved. The modified model improves the mean squared error of the effect estimates, as well as their standard errors and coverage proportions. Moreover, the adjusted model results in less biased variance estimates. If there is no external event effect, we find no differences in results between the modified and unmodified models.
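
One way to picture the adjustment (a sketch in our notation; the article's exact model may differ): extend the first level of the three-level model with a dummy E_{ijk} that switches on at the measurement occasions affected by the external event,

    y_{ijk} = \beta_{0jk} + \beta_{1jk} T_{ijk} + \beta_{2jk} D_{ijk} + \beta_{3jk} T'_{ijk} D_{ijk} + \beta_{4k} E_{ijk} + e_{ijk}

Because an external event hits all participants in a study at the same calendar time, while the treatment dummies of a multiple-baseline design switch at staggered times, the event effect \beta_{4k} is identifiable separately from the treatment effects.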


International Journal of Social Research Methodology | 2017

Methods for dealing with multiple outcomes in meta-analysis: a comparison between averaging effect sizes, robust variance estimation and multilevel meta-analysis

Mariola Moeyaert; Maaike Ugille; S. Natasha Beretvas; John M. Ferron; Rommel Bunuan; Wim Van Den Noortgate

This study investigates three methods to handle dependency among effect size estimates in meta-analysis arising from studies reporting multiple outcome measures taken on the same sample. The three-level approach is compared with the method of robust variance estimation, and with averaging effects within studies. A simulation study is performed, and the fixed and random effect estimates of the three methods are compared with each other. Both the robust variance estimation and three-level approach result in unbiased estimates of the fixed effects, corresponding standard errors and variances. Averaging effect sizes results in overestimated standard errors when the effect sizes within studies are truly independent. Although the robust variance and three-level approach are more complicated to use, they have the advantage that they do not require an estimate of the correlation between outcomes, and they still result in unbiased parameter estimates.
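
For orientation, robust variance estimation typically pairs a weighted least squares estimate of the meta-regression coefficients with a sandwich-type variance (our rendering; details vary by implementation):

    \hat{\beta} = B^{-1} \sum_j X_j' W_j y_j, \qquad
    V^R = B^{-1} \Big( \sum_j X_j' W_j e_j e_j' W_j X_j \Big) B^{-1}, \qquad
    B = \sum_j X_j' W_j X_j

where X_j, y_j, W_j, and e_j are the design matrix, effect sizes, weights, and residuals of study j; clustering the residual cross-products by study is what makes the standard errors valid without an estimate of the within-study correlation between outcomes.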


Journal of Experimental Education | 2016

The Misspecification of the Covariance Structures in Multilevel Models for Single-Case Data: A Monte Carlo Simulation Study

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest lies in the between-study variance estimate, including at least 30 studies is warranted. Modeling covariance does not result in less biased between-study variance estimates as the between-study covariance estimate is biased. When the research interest lies in the between-case covariance, the model including covariance results in unbiased between-case variance estimates. The three-level model appears to be less appropriate for estimating between-study variance if fewer than 30 studies are included.
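
To make the misspecification concrete (notation ours): with two random treatment effects per case, say an immediate effect and an effect on the time trend, the case-level covariance matrix can be modeled as unstructured or constrained to be diagonal,

    \Sigma_u = \begin{pmatrix} \sigma^2_{u_2} & \sigma_{u_{23}} \\ \sigma_{u_{23}} & \sigma^2_{u_3} \end{pmatrix}
    \quad \text{versus} \quad
    \begin{pmatrix} \sigma^2_{u_2} & 0 \\ 0 & \sigma^2_{u_3} \end{pmatrix}

and analogously at the study level; setting \sigma_{u_{23}} to zero when it is not is the kind of misspecification whose consequences are evaluated here.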

Collaboration


Dive into Maaike Ugille's collaborations.

Top Co-Authors

Mariola Moeyaert, State University of New York System
Wim Van Den Noortgate, Katholieke Universiteit Leuven
John M. Ferron, University of South Florida
S. Natasha Beretvas, University of Texas at Austin
Mieke Heyvaert, Research Foundation - Flanders
Patrick Onghena, The Catholic University of America
Marlies Vervloet, Katholieke Universiteit Leuven
Paul Verkempynck, Katholieke Universiteit Leuven
Eun Kyeng Baek, University of South Florida