Rumen Manolov
University of Barcelona
Publications
Featured research published by Rumen Manolov.
Behavior Modification | 2008
Rumen Manolov; Antonio Solanas
Generalization from single-case designs can be achieved by replicating individual studies across different experimental units and settings. When replications are available, their findings can be summarized using effect size measurements and integrated through meta-analyses. Several procedures are available for quantifying the magnitude of treatment effect in N = 1 designs, and some of them are studied in this article. Monte Carlo simulations were used to generate different data patterns (trend, level change, and slope change). The experimental conditions simulated were defined by the degrees of serial dependence and phase length. Out of all the effect size indices studied, the percentage of nonoverlapping data and standardized mean difference proved to be less affected by autocorrelation and to perform better for shorter data series. The regression-based procedures proposed specifically for single-case designs did not differentiate between data patterns as well as did simpler indices.
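As a rough illustration of two of the simpler indices mentioned in this abstract, the sketch below computes the percentage of nonoverlapping data and a standardized mean difference for a hypothetical two-phase (AB) series. The data and the exact formula variants (e.g., dividing by the baseline standard deviation) are assumptions for illustration, not the specific implementations evaluated in the study.

```python
import numpy as np

def pnd(baseline, treatment, higher_is_better=True):
    """Percentage of nonoverlapping data: share of treatment points
    exceeding the best (here: highest) baseline point."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    if higher_is_better:
        return 100.0 * np.mean(treatment > baseline.max())
    return 100.0 * np.mean(treatment < baseline.min())

def standardized_mean_difference(baseline, treatment):
    """Mean phase difference divided by the baseline standard deviation
    (one common single-case variant; others pool both phases)."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    return (treatment.mean() - baseline.mean()) / baseline.std(ddof=1)

# Hypothetical AB series: 5 baseline and 7 treatment observations.
A = [3, 4, 2, 5, 4]
B = [6, 7, 5, 8, 7, 9, 8]
print(pnd(A, B))                         # 85.7: 6 of 7 points exceed max(A) = 5
print(standardized_mean_difference(A, B))
```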
Behavior Modification | 2010
Antonio Solanas; Rumen Manolov; Patrick Onghena
The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series before assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and illustrated. A simulation study is carried out to explore the bias and precision of the estimators and compare them to an analytical procedure matching the data simulation model. The experimental conditions include 2 data generation models, several degrees of serial dependence, trend, and level and/or slope change. The results suggest that the level and slope change estimates provided by the procedure are unbiased for all levels of serial dependence tested and trend is effectively controlled for. The efficiency of the slope change estimator is acceptable, whereas the variance of the level change estimator may be problematic for highly negatively autocorrelated data series.
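A minimal sketch of the general idea described above, assuming a differencing-based trend estimate: the baseline trend is taken as the mean of the baseline first differences and removed from the whole series, and the remaining slope and level differences are then summarized. The exact estimators in the paper may differ in detail, and the data are invented.

```python
import numpy as np

def slope_and_level_change(baseline, treatment):
    """Differencing-based sketch of slope-change and level-change estimates."""
    A = np.asarray(baseline, dtype=float)
    B = np.asarray(treatment, dtype=float)
    n_a, n_b = len(A), len(B)

    # 1. Baseline trend: average session-to-session change during baseline.
    trend = np.diff(A).mean()

    # 2. Remove that trend from the complete data series.
    series = np.concatenate([A, B])
    detrended = series - trend * np.arange(len(series))
    A_d, B_d = detrended[:n_a], detrended[n_a:]

    # 3. Slope change: remaining average change within the treatment phase.
    slope_change = np.diff(B_d).mean()

    # 4. Level change: mean phase difference after also removing that slope.
    B_flat = B_d - slope_change * np.arange(n_b)
    level_change = B_flat.mean() - A_d.mean()
    return slope_change, level_change

# Baseline rises by 1 per session, treatment by 2: slope change 1, level change 0.
print(slope_and_level_change([2, 3, 4, 5, 6], [7, 9, 11, 13, 15, 17]))
```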
American Journal of Occupational Therapy | 2016
Robyn Tate; Michael Perdices; Ulrike Rosenkoetter; William R. Shadish; Sunita Vohra; David H. Barlow; Robert H. Horner; Alan E. Kazdin; Thomas R. Kratochwill; Skye McDonald; Margaret Sampson; Larissa Shamseer; Leanne Togher; Richard W. Albin; Catherine L. Backman; Jacinta Douglas; Jonathan Evans; David L. Gast; Rumen Manolov; Geoffrey Mitchell; Lyndsey Nickels; Jane Nikles; Tamara Ownsworth; Miranda Rose; Christopher H. Schmid; Barbara A. Wilson
Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist, and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In Behavioral interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.
Journal of School Psychology | 2013
Rumen Manolov; Antonio Solanas
The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared with a proposed non-regression technique that yields similar information. The comparison is carried out in the context of generated data representing a variety of patterns, including both independent and serially related measurements arising from different underlying processes. Heterogeneity in autocorrelation and data variability was also included, as well as different types of trend and of slope and level changes. The results suggest that the two techniques perform adequately for a wide range of conditions and that researchers can use both of them with certain guarantees. The regression-based procedure offers more efficient estimates, whereas the proposed non-regression procedure is more sensitive to intervention effects. Considering current and previous findings, some tentative recommendations are offered to applied researchers to help them choose among the plurality of single-case data analysis techniques.
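Neither procedure is specified here in enough detail to reproduce exactly. As a generic stand-in for the regression side, the sketch below fits an ordinary least squares model with time, level-change, and slope-change terms (a common single-case parameterization); the paper's procedure uses generalized least squares to accommodate autocorrelation, which this simplified version ignores.

```python
import numpy as np

def phase_regression(baseline, treatment):
    """Fit y = b0 + b1*time + b2*phase + b3*phase*(time since intervention)
    by ordinary least squares and return the four coefficients:
    intercept, baseline slope, level change, and slope change."""
    A = np.asarray(baseline, dtype=float)
    B = np.asarray(treatment, dtype=float)
    n_a = len(A)
    y = np.concatenate([A, B])
    t = np.arange(len(y), dtype=float)
    D = (t >= n_a).astype(float)          # 0 in baseline, 1 in treatment
    X = np.column_stack([np.ones_like(t), t, D, D * (t - n_a)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Hypothetical AB series: level and slope both increase after intervention.
print(phase_regression([3, 4, 3, 5, 4], [7, 8, 8, 9, 10, 10]))
```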
Behavior Research Methods | 2009
Rumen Manolov; Antonio Solanas
In the present study, we proposed a modification of one of the most frequently applied effect-size procedures in single-case data analysis: the percentage of nonoverlapping data. In contrast with other techniques, its calculation and interpretation are straightforward and can easily be complemented by visual inspection of the graphed data. Although the percentage of nonoverlapping data has been found to perform reasonably well with N = 1 data, the magnitude-of-effect estimates it yields can be distorted by trend and autocorrelation. Therefore, the data-correction procedure focuses on removing the baseline trend from the data prior to estimating the change produced in the behavior as a result of the intervention. A simulation study was carried out in order to compare the original and the modified procedures in several experimental conditions. The results suggest that the new proposal is unaffected by trend and autocorrelation and that it can be used with unstable baselines and sequentially related measurements.
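The data-correction step described above can be sketched as follows, assuming the baseline trend is estimated by a straight-line fit to the baseline phase (the paper's correction may use a different trend estimator): remove the extrapolated baseline trend from the whole series, then apply the ordinary percentage of nonoverlapping data to the detrended values.

```python
import numpy as np

def trend_corrected_pnd(baseline, treatment, higher_is_better=True):
    """Remove the extrapolated baseline trend from the whole series, then
    compute the percentage of nonoverlapping data on the detrended values."""
    A = np.asarray(baseline, dtype=float)
    B = np.asarray(treatment, dtype=float)
    n_a = len(A)

    # Baseline trend via a straight-line fit to the baseline phase only.
    slope, intercept = np.polyfit(np.arange(n_a), A, 1)

    # Subtract the extrapolated baseline trend from every observation.
    t_all = np.arange(n_a + len(B))
    detrended = np.concatenate([A, B]) - (intercept + slope * t_all)
    A_d, B_d = detrended[:n_a], detrended[n_a:]

    threshold = A_d.max() if higher_is_better else A_d.min()
    compare = np.greater if higher_is_better else np.less
    return 100.0 * np.mean(compare(B_d, threshold))

# Treatment merely continues the rising baseline trend: the corrected index
# is 0, whereas the ordinary PND would report 100.
print(trend_corrected_pnd([1, 2, 3, 4], [5, 6, 7, 8]))
```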
Neuropsychological Rehabilitation | 2014
Rumen Manolov; David L. Gast; Michael Perdices; Jonathan Evans
In this editorial discussion we reflect on the issues addressed by, and arising from, the papers in this special issue on Single-Case Experimental Design (SCED) study methodology. We identify areas of consensus and disagreement regarding the conduct and analysis of SCED studies. Despite the long history of application of SCEDs in studies of interventions in clinical and educational settings, the field is still developing. There is an emerging consensus on methodological quality criteria for many aspects of SCEDs, but disagreement on what are the most appropriate methods of SCED data analysis. Our aim is to stimulate this ongoing debate and highlight issues requiring further attention from applied researchers and methodologists. In addition we offer tentative criteria to support decision-making in relation to the selection of analytical techniques in SCED studies. Finally, we stress that large-scale interdisciplinary collaborations, such as the current Special Issue, are necessary if SCEDs are going to play a significant role in the development of the evidence base for clinical practice.
Neuropsychological Rehabilitation | 2014
Jonathan Evans; David L. Gast; Michael Perdices; Rumen Manolov
This paper introduces the Special Issue of Neuropsychological Rehabilitation on Single Case Experimental Design (SCED) methodology. SCED studies have a long history of use in evaluating behavioural and psychological interventions, but in recent years there has been a resurgence of interest in SCED methodology, driven in part by the development of standards for conducting and reporting SCED studies. Although there is consensus on some aspects of SCED methodology, the question of how SCED data should be analysed remains unresolved. This Special Issue includes two papers discussing aspects of conducting SCED studies, five papers illustrating the use of SCED methodology in clinical practice, and nine papers that present different methods of SCED data analysis. A final Discussion paper summarises points of agreement, highlights areas where further clarity is needed, and ends with a set of resources that will assist researchers in conducting and analysing SCED studies.
Archives of Scientific Psychology | 2016
Robyn Tate; Michael Perdices; Ulrike Rosenkoetter; William R. Shadish; Sunita Vohra; David H. Barlow; Robert H. Horner; Alan E. Kazdin; Thomas R. Kratochwill; Skye McDonald; Margaret Sampson; Larissa Shamseer; Leanne Togher; Richard W. Albin; Catherine L. Backman; Jacinta Douglas; Jonathan Evans; David L. Gast; Rumen Manolov; Geoffrey Mitchell; Lyndsey Nickels; Jane Nikles; Tamara Ownsworth; Miranda Rose; Christopher H. Schmid; Barbara A. Wilson
We developed a reporting guideline to provide authors with guidance about what should be reported when writing a paper for publication in a scientific journal using a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed, which is a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 be used by authors preparing manuscripts describing single-case research for publication, as well as by journal reviewers and editors who are evaluating such manuscripts.
Journal of Experimental Education | 2009
Rumen Manolov; Antonio Solanas; Isis Bulté; Patrick Onghena
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. The authors studied robustness and power under several experimental conditions—different autocorrelation levels and different effect sizes as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. The authors obtained satisfactory Type II error rates only for large treatment effects. The relation between the lengths of the four phases appeared to be an important factor for the robustness and power of the randomization test.
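For readers unfamiliar with the technique, the sketch below shows a generic randomization test for an ABAB design in which the test statistic is recomputed over all admissible triplets of phase-change points. The statistic (difference between B-phase and A-phase means), the minimum phase length, and the data are assumptions for illustration and do not reproduce the conditional simulation reported in the study.

```python
import numpy as np

def abab_randomization_test(y, change_points, min_length=3):
    """Randomization test for an ABAB design.

    `change_points` is the triplet of indices actually used to start phases
    B1, A2, and B2.  The statistic (mean of B phases minus mean of A phases)
    is recomputed for every admissible triplet respecting the minimum phase
    length; the p value is the proportion of triplets whose statistic is at
    least as large as the observed one."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    def statistic(cp):
        c1, c2, c3 = cp
        a = np.concatenate([y[:c1], y[c2:c3]])   # A1 and A2 phases
        b = np.concatenate([y[c1:c2], y[c3:]])   # B1 and B2 phases
        return b.mean() - a.mean()

    admissible = [
        (c1, c2, c3)
        for c1 in range(min_length, n - 3 * min_length + 1)
        for c2 in range(c1 + min_length, n - 2 * min_length + 1)
        for c3 in range(c2 + min_length, n - min_length + 1)
    ]
    observed = statistic(change_points)
    dist = np.array([statistic(cp) for cp in admissible])
    return observed, np.mean(dist >= observed)

# Hypothetical 20-point ABAB series with phases of length 5.
y = [3, 4, 3, 5, 4,  8, 9, 7, 8, 9,  4, 3, 5, 4, 3,  9, 8, 9, 7, 8]
print(abab_randomization_test(y, change_points=(5, 10, 15)))
```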
Behavior Modification | 2014
Rumen Manolov; Vicenta Sierra; Antonio Solanas; Juan Botella
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and which reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications involve comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in terms of detecting a functional relation only when it is present and not when it is absent, an option based on projecting the split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use as a weight the inverse of the data-variability measure used in the quantification for assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
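One of the options described, projecting the split-middle baseline trend and judging departures against a variability band, might look roughly like the sketch below. The band based on the baseline interquartile range and the multiplier k are assumptions in the spirit of exploratory data analysis, not necessarily the exact quantification tested in the paper.

```python
import numpy as np

def split_middle_trend(phase):
    """Split-middle trend: a line through the medians of the two halves of
    the phase (medians of both the time points and the values)."""
    phase = np.asarray(phase, dtype=float)
    t = np.arange(len(phase))
    half = len(phase) // 2
    x1, y1 = np.median(t[:half]), np.median(phase[:half])
    x2, y2 = np.median(t[-half:]), np.median(phase[-half:])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

def projected_baseline_comparison(baseline, treatment, k=1.5):
    """Project the split-middle baseline trend into the treatment phase and
    flag treatment points falling outside a variability band around the
    projection (band half-width = k * baseline interquartile range)."""
    A = np.asarray(baseline, dtype=float)
    B = np.asarray(treatment, dtype=float)
    slope, intercept = split_middle_trend(A)

    t_b = np.arange(len(A), len(A) + len(B))
    projected = intercept + slope * t_b
    iqr = np.subtract(*np.percentile(A, [75, 25]))
    outside = np.abs(B - projected) > k * iqr
    return projected, outside

A = [2, 3, 3, 4, 3, 4]
B = [6, 7, 7, 8, 9, 8]
projected, outside = projected_baseline_comparison(A, B)
print(projected)   # extrapolated baseline values for the treatment phase
print(outside)     # True where a treatment point clearly departs from them
```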