
Publication


Featured research published by Mariola Moeyaert.


Journal of School Psychology | 2014

From a single-level analysis to a multilevel analysis of single-case experimental designs

Mariola Moeyaert; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

Multilevel modeling provides one approach to synthesizing single-case experimental design data. In this study, we present the two-level and three-level multilevel models for summarizing single-case results over cases, over studies, or both. In addition to the basic multilevel models, we elaborate on several plausible alternative models. We apply the proposed models to real datasets and investigate to what extent the estimated treatment effect depends on the modeling specifications and the underlying assumptions. By considering a range of plausible models and assumptions, researchers can determine the degree to which the effect estimates and conclusions are sensitive to the specific assumptions made. If the same conclusions are reached across a range of plausible assumptions, confidence in the conclusions can be enhanced. We advise researchers not to focus on a single model but to conduct multiple plausible multilevel analyses and to investigate whether the results depend on the modeling options.
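A minimal sketch of the kind of model the abstract describes, with notation assumed here rather than taken from the article: at the first level, the outcome for measurement i of case j in study k is regressed on a phase dummy (0 = baseline, 1 = treatment); the case-specific coefficients vary at the second level, and the study-specific means vary at the third level.

$$
\begin{aligned}
&\text{Level 1 (measurements):} && y_{ijk} = \beta_{0jk} + \beta_{1jk}\,\text{Phase}_{ijk} + e_{ijk}, \qquad e_{ijk} \sim N(0, \sigma_e^2) \\
&\text{Level 2 (cases):} && \beta_{0jk} = \theta_{00k} + u_{0jk}, \qquad \beta_{1jk} = \theta_{10k} + u_{1jk} \\
&\text{Level 3 (studies):} && \theta_{00k} = \gamma_{000} + v_{00k}, \qquad \theta_{10k} = \gamma_{100} + v_{10k}
\end{aligned}
$$

Here $\beta_{1jk}$ is the case-specific treatment effect, $\theta_{10k}$ the study-specific average effect, and $\gamma_{100}$ the overall average treatment effect; dropping the third level gives the two-level model for a single study. The alternative models mentioned in the abstract extend this skeleton, for example with time trends or relaxed variance assumptions.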


Journal of Experimental Education | 2014

Three-level Analysis of Single-Case Experimental Data: Empirical Validation

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

One approach for combining single-case data is multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and of the treatment's effect on the time trend, the number of studies, cases, and measurements, and the between-case and between-study variance. The study shows that the three-level approach results in unbiased estimates of both kinds of treatment effects. To have reasonable power for testing the treatment effects, the authors recommend using a homogeneous set of studies and including a minimum of 30 studies. The number of measurements and cases is of less importance.
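The authors' simulation conditions and estimation software are not reproduced here. As a rough, hypothetical illustration of the general Monte Carlo logic, the sketch below generates data under a simplified two-level AB model (made-up parameter values) and checks whether a mixed-model fit recovers the immediate treatment effect; statsmodels fits two levels, not the full three-level model from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulation settings (not the authors' conditions)
N_CASES, N_OBS = 10, 20              # cases, measurements per case
TRUE_EFFECT = 2.0                    # immediate treatment effect
BETWEEN_CASE_SD, WITHIN_SD = 0.5, 1.0

rng = np.random.default_rng(2024)
rows = []
for case in range(N_CASES):
    case_effect = TRUE_EFFECT + rng.normal(0.0, BETWEEN_CASE_SD)
    for t in range(N_OBS):
        phase = int(t >= N_OBS // 2)                 # AB design: B phase in the second half
        y = 5.0 + case_effect * phase + rng.normal(0.0, WITHIN_SD)
        rows.append({"case": case, "phase": phase, "y": y})

df = pd.DataFrame(rows)
# Two-level model: measurements nested within cases, random intercept and random phase effect
fit = smf.mixedlm("y ~ phase", df, groups="case", re_formula="~phase").fit()
print("Estimated immediate treatment effect:", round(fit.params["phase"], 3))
# Repeating this over many replications yields the bias and power summaries
# that simulation studies of this kind report.
```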


Psychological Methods | 2014

Estimating causal effects from multiple-baseline studies: Implications for design and analysis.

John M. Ferron; Mariola Moeyaert; Wim Van Den Noortgate; S. Natasha Beretvas

Traditionally, average causal effects from multiple-baseline data are estimated by aggregating individual causal effect estimates obtained through within-series comparisons of treatment phase trajectories to baseline extrapolations. Concern that these estimates may be biased due to event effects, such as history and maturation, motivates our proposal of a between-series estimator that contrasts participants in the treatment to those in the baseline phase. Accuracy of the new method was assessed and compared in a series of simulation studies where participants were randomly assigned to intervention start points. The within-series estimator was found to have greater power to detect treatment effects but also to be biased due to event effects, leading to faulty causal inferences. The between-series estimator remained unbiased and controlled the Type I error rate independent of event effects. Because the between-series estimator is unbiased under different assumptions, the 2 estimates complement each other, and the difference between them can be used to detect inaccuracies in the modeling assumptions. The power to detect inaccuracies associated with event effects was found to depend on the size and type of event effect. We empirically illustrate the methods using a real data set and then discuss implications for researchers planning multiple-baseline studies.
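The exact estimators evaluated in the article are not reproduced here; the toy sketch below (hypothetical data and simplified contrasts) only illustrates the distinction. The within-series estimate compares each participant's treatment observations with that participant's own baseline, whereas the between-series estimate compares, at each time point, participants already in treatment with those still in baseline.

```python
import numpy as np

# Hypothetical multiple-baseline data: rows = participants, columns = time points.
# start[i] is participant i's (randomly assigned) intervention start point.
rng = np.random.default_rng(7)
n_part, n_time, effect = 4, 12, 2.0
start = np.array([3, 5, 7, 9])
phase = (np.arange(n_time) >= start[:, None]).astype(float)   # 0 = baseline, 1 = treatment
y = 5.0 + effect * phase + rng.normal(0.0, 1.0, size=(n_part, n_time))

# Within-series: each participant's treatment mean minus own baseline mean, averaged.
within = np.mean([y[i, phase[i] == 1].mean() - y[i, phase[i] == 0].mean()
                  for i in range(n_part)])

# Between-series: at each time point where both phases are present,
# the mean of treated participants minus the mean of baseline participants.
contrasts = [y[phase[:, t] == 1, t].mean() - y[phase[:, t] == 0, t].mean()
             for t in range(n_time)
             if 0 < phase[:, t].sum() < n_part]
between = float(np.mean(contrasts))

print(f"within-series estimate:  {within:.2f}")
print(f"between-series estimate: {between:.2f}")
```

A shared event effect (e.g., a history effect added to all series after a given time point) would shift the within-series contrast but largely cancel out of the between-series contrast, which is the intuition behind comparing the two estimates.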


Behavior Research Methods | 2012

Multilevel meta-analysis of single-subject experimental designs: A simulation study

Maaike Ugille; Mariola Moeyaert; S. Natasha Beretvas; John M. Ferron; Wim Van Den Noortgate

One way to combine data from single-subject experimental design studies is by performing a multilevel meta-analysis, with unstandardized or standardized regression coefficients as the effect size metrics. This study evaluates the performance of this approach. The results indicate that a multilevel meta-analysis of unstandardized effect sizes results in good estimates of the effect. The multilevel meta-analysis of standardized effect sizes, on the other hand, is suitable only when the number of measurement occasions for each subject is 20 or more. The effect of the treatment on the intercept is tested with sufficient power when the studies are homogeneous or when the number of studies is large; the effect on the slope is tested with sufficient power only when both the number of studies and the number of measurement occasions are large.


Neuropsychological Rehabilitation | 2014

The use of multilevel analysis for integrating single-case experimental design results within a study and across studies

Eun Kyeng Baek; Mariola Moeyaert; Merlande Petit-Bois; S. Natasha Beretvas; Wim Van Den Noortgate; John M. Ferron

The use of multilevel models as a method for synthesising single-case experimental design results is receiving increased consideration. In this article we discuss the potential advantages and limitations of the multilevel modelling approach. We present a basic two-level model where observations are nested within cases, and then discuss extensions of the basic model to accommodate trends, moderators of the intervention effect, non-continuous outcomes, heterogeneity, autocorrelation, the nesting of cases within studies, and more complex single-case design types. We then consider methods for standardising the effect estimates and alternative approaches to estimating the models. These modelling and analysis options are followed by an illustrative example.


Multivariate Behavioral Research | 2013

The Three-Level Synthesis of Standardized Single-Subject Experimental Data: A Monte Carlo Simulation Study

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

Previous research indicates that three-level modeling is a valid statistical method to make inferences from unstandardized data from a set of single-subject experimental studies, especially when a homogeneous set of at least 30 studies is included (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a). When single-subject data from multiple studies are combined, however, the dependent variable is often measured on different scales, requiring standardization of the data before combining them over studies. One approach is to divide the dependent variable by the residual standard deviation. In this study we use Monte Carlo methods to evaluate this approach. We examine how well the fixed effects (e.g., immediate treatment effect and treatment effect on the time trend) and the variance components (the between- and within-subject variance) are estimated under a number of realistic conditions. The three-level synthesis of standardized single-subject data is found appropriate for the estimation of the treatment effects, especially when many studies (30 or more) and many measurement occasions within subjects (20 or more) are included and when the studies are rather homogeneous (with small between-study variance). The estimates of the variance components are less accurate.
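The standardization step the abstract refers to can be written compactly (notation assumed here, not taken from the article): the outcome, and hence the unstandardized treatment effect for case j in study k, is divided by an estimate of the within-case residual standard deviation of that study,

$$
\delta_{jk} = \frac{\beta_{1jk}}{\hat{\sigma}_{e,k}},
$$

so that effects from studies using different measurement scales are placed on a common, scale-free metric before the three-level synthesis.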


Behavior Modification | 2014

The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

Mariola Moeyaert; Maaike Ugille; John M. Ferron; S. Natasha Beretvas; Wim Van Den Noortgate

The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
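As a concrete illustration of what such a design matrix can look like (a minimal sketch; the specific codings discussed in the article may differ), the snippet below builds predictors for a single case in a multiple-baseline AB design: an intercept, a baseline time trend, a phase dummy for the immediate treatment effect, and a phase-by-time column for the change in trend.

```python
import numpy as np

n_obs, start = 12, 6                      # hypothetical: 12 measurements, intervention at occasion 6
time = np.arange(n_obs)
phase = (time >= start).astype(float)     # 0 = baseline (A), 1 = treatment (B)
time_since_start = np.where(phase == 1, time - start, 0)

# Columns: intercept, baseline trend, immediate treatment effect, change in trend
X = np.column_stack([np.ones(n_obs), time, phase, time_since_start])
print(X.astype(int))
```

Reversal (e.g., ABAB) and alternating treatment designs would use additional or differently coded phase columns, which is exactly the choice of design matrix the article examines.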


Behavior Modification | 2016

Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs

Mariola Moeyaert; Daniel M. Maggin; Jay Verkuilen

Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but depend on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable, and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong.


Journal of Experimental Education | 2014

Bias Corrections for Standardized Effect Size Estimates Used With Single-Subject Experimental Designs

Maaike Ugille; Mariola Moeyaert; S. Natasha Beretvas; John M. Ferron; Wim Van Den Noortgate

A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated four approaches to correct for this bias. First, the standardized effect sizes are adjusted using Hedges' small-sample bias correction. Next, the within-subject standard deviation is estimated either by a two-level model per study or by a regression model with the subjects identified using dummy predictor variables. Finally, the effect sizes are corrected using an iterative raw-data parametric bootstrap procedure. The results indicate that the first and last approaches succeed in reducing the bias of the fixed effects estimates. Given the difference in complexity, we recommend the first approach.
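For the first approach, the widely used approximate form of Hedges' small-sample correction multiplies a standardized effect size estimate $d$ based on $m$ degrees of freedom by a shrinkage factor:

$$
g = \left(1 - \frac{3}{4m - 1}\right) d .
$$

How the degrees of freedom are determined for single-subject data, and the details of the bootstrap correction, follow the procedures described in the article and are not reproduced here.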


Journal of Experimental Education | 2017

Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

Mieke Heyvaert; Mariola Moeyaert; Paul Verkempynck; Wim Van Den Noortgate; Marlies Vervloet; Maaike Ugille; Patrick Onghena

This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test p values (RTcombiP). Four factors were manipulated: mean intervention effect, number of cases included in a study, number of measurement occasions for each case, and between-case variance. Under the simulated conditions, Type I error rate was under control at the nominal 5% level for both HLM and RTcombiP. Furthermore, for both procedures, a larger number of combined cases resulted in higher statistical power, with many realistic conditions reaching statistical power of 80% or higher. Smaller values for the between-case variance resulted in higher power for HLM. A larger number of data points resulted in higher power for RTcombiP.
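The sketch below is only a rough, hypothetical illustration of the RTcombiP idea (the authors' exact test statistic, randomization scheme, and simulation conditions are not reproduced): for each case the intervention start point is treated as randomly assigned, a randomization p value is computed by re-evaluating the mean difference under every admissible start point, and the per-case p values are then combined additively, with the combined p value obtained from the distribution of a sum of independent uniform variables (the Irwin-Hall distribution).

```python
import numpy as np
from math import comb, factorial, floor

def randomization_p(y, start, candidates):
    """One-sided randomization p value for a single AB case.

    y: measurements over time; start: actual intervention start point;
    candidates: admissible start points allowed by the design.
    """
    def stat(s):
        return y[s:].mean() - y[:s].mean()          # treatment mean minus baseline mean
    observed = stat(start)
    null = np.array([stat(s) for s in candidates])
    return float(np.mean(null >= observed))         # proportion at least as extreme

def additive_combined_p(pvals):
    """Combined p value for the sum of n independent uniform p values (Irwin-Hall CDF)."""
    s, n = sum(pvals), len(pvals)
    return sum((-1) ** j * comb(n, j) * (s - j) ** n
               for j in range(floor(s) + 1)) / factorial(n)

# Hypothetical replicated AB data: 3 cases, 15 measurements, different start points
rng = np.random.default_rng(1)
candidates = list(range(4, 12))
pvals = []
for start in [5, 7, 9]:
    phase = (np.arange(15) >= start).astype(float)
    y = 4.0 + 1.5 * phase + rng.normal(0.0, 1.0, 15)   # assumed true effect of 1.5
    pvals.append(randomization_p(y, start, candidates))

print("per-case p values:", [round(p, 3) for p in pvals])
print("combined p value:", round(additive_combined_p(pvals), 4))
```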

Collaboration


Dive into Mariola Moeyaert's collaborations.

Top Co-Authors

Wim Van Den Noortgate (Katholieke Universiteit Leuven)
John M. Ferron (University of South Florida)
S. Natasha Beretvas (University of Texas at Austin)
Maaike Ugille (Katholieke Universiteit Leuven)
Patrick Onghena (Katholieke Universiteit Leuven)
Mieke Heyvaert (Research Foundation - Flanders)
Laleh Jamshidi (Katholieke Universiteit Leuven)
Eun Kyeng Baek (University of South Florida)