Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where James E. Pustejovsky is active.

Publication


Featured research published by James E. Pustejovsky.


Journal of School Psychology | 2014

Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

William R. Shadish; Larry V. Hedges; James E. Pustejovsky

This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.


Cancer | 2015

A meta-analytic approach to examining the correlation between religion/spirituality and mental health in cancer.

John M. Salsman; James E. Pustejovsky; Heather Jim; Alexis R. Munoz; Thomas V. Merluzzi; Login S. George; Crystal L. Park; Suzanne C. Danhauer; Allen C. Sherman; Mallory A. Snyder; George Fitchett

Religion and spirituality (R/S) are patient‐centered factors and often are resources for managing the emotional sequelae of the cancer experience. Studies investigating the correlation between R/S (eg, beliefs, experiences, coping) and mental health (eg, depression, anxiety, well being) in cancer have used very heterogeneous measures and have produced correspondingly inconsistent results. A meaningful synthesis of these findings has been lacking; thus, the objective of this review was to conduct a meta‐analysis of the research on R/S and mental health. Four electronic databases were systematically reviewed, and 2073 abstracts met initial selection criteria. Reviewer pairs applied standardized coding schemes to extract indices of the correlation between R/S and mental health. In total, 617 effect sizes from 148 eligible studies were synthesized using meta‐analytic generalized estimating equations, and subgroup analyses were performed to examine moderators of effects. The estimated mean correlation (Fisher z) was 0.19 (95% confidence interval [CI], 0.16‐0.23), which varied as a function of R/S dimensions: affective R/S (z = 0.38; 95% CI, 0.33‐0.43), behavioral R/S (z = 0.03; 95% CI, −0.02‐0.08), cognitive R/S (z = 0.10; 95% CI, 0.06‐0.14), and ‘other’ R/S (z = 0.08; 95% CI, 0.03‐0.13). Aggregate, study‐level demographic and clinical factors were not predictive of the relation between R/S and mental health. There was little indication of publication or reporting biases. The correlation between R/S and mental health generally was positive. The strength of that correlation was modest and varied as a function of the R/S dimensions and mental health domains assessed. The identification of optimal R/S measures and more sophisticated methodological approaches are needed to advance research. Cancer 2015;121:3769–3778.
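The pooled correlations above are reported on the Fisher z metric. As a quick illustration (not code from the study), the transformation and its inverse are one-liners; the back-transformed value below uses the affective-R/S estimate reported in the abstract:

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilizing transformation of a correlation r."""
    return math.atanh(r)          # = 0.5 * log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value to the correlation metric."""
    return math.tanh(z)

# Back-transforming the pooled affective-R/S estimate reported above:
r_affective = inverse_fisher_z(0.38)
print(f"z = 0.38 corresponds to r = {r_affective:.3f}")
```

For small z the two metrics nearly coincide, which is why the pooled z values above can be read approximately as correlations.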


Journal of Educational and Behavioral Statistics | 2014

Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework

James E. Pustejovsky; Larry V. Hedges; William R. Shadish

In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general framework for defining effect sizes in multiple baseline designs that are directly comparable to the standardized mean difference from a between-subjects randomized experiment. The target, design-comparable effect size parameter can be estimated using restricted maximum likelihood together with a small sample correction analogous to Hedges’s g. The approach is demonstrated using hierarchical linear models that include baseline time trends and treatment-by-time interactions. A simulation compares the performance of the proposed estimator to that of an alternative, and an application illustrates the model-fitting process.
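The small-sample correction mentioned above is analogous to Hedges's g. A minimal sketch using the standard approximation J(df) = 1 - 3/(4·df - 1) rather than the exact gamma-function form, with illustrative numbers that are not from the article:

```python
def hedges_g(d, df):
    """Approximate small-sample correction for a standardized mean
    difference estimate: g = J(df) * d, with J(df) = 1 - 3 / (4 * df - 1).
    This is the usual approximation to the exact gamma-function factor."""
    return (1.0 - 3.0 / (4.0 * df - 1.0)) * d

# Illustrative values only: an uncorrected estimate of 0.80 with 18 df.
g = hedges_g(0.80, 18)
print(f"corrected estimate g = {g:.3f}")
```

The correction shrinks the estimate toward zero, and J(df) approaches 1 as the degrees of freedom grow.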


Journal of Educational and Behavioral Statistics | 2015

Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression

Elizabeth Tipton; James E. Pustejovsky

Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance estimation (RVE) provides a method for pooling dependent effects, even when information on the exact dependence structure is not available. When the number of studies is small or moderate, however, test statistics and confidence intervals based on RVE can have inflated Type I error. This article describes and investigates several small-sample adjustments to F-statistics based on RVE. Simulation results demonstrate that one such test, which approximates the test statistic using Hotelling’s T2 distribution, is level-α and uniformly more powerful than the others. An empirical application demonstrates how results based on this test compare to the large-sample F-test.
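As a rough illustration of the basic RVE idea the article builds on (not the small-sample adjustments it develops), the cluster-robust sandwich variance for an intercept-only meta-regression can be sketched in a few lines. The effect sizes, unit weights, and cluster labels below are made up:

```python
def rve_pooled_effect(effects, weights, clusters):
    """Weighted average effect with a basic cluster-robust (sandwich)
    variance, in the spirit of RVE for dependent effect sizes, for an
    intercept-only model. A sketch without small-sample adjustments."""
    w_sum = sum(weights)
    beta = sum(w * y for w, y in zip(weights, effects)) / w_sum
    # "Meat" of the sandwich: squared cluster-level weighted residual totals.
    meat = 0.0
    for c in set(clusters):
        u = sum(w * (y - beta)
                for w, y, cl in zip(weights, effects, clusters) if cl == c)
        meat += u * u
    var = meat / w_sum ** 2          # bread^-1 * meat * bread^-1
    return beta, var

effects  = [0.2, 0.3, 0.5, 0.4, 0.1, 0.2]   # illustrative effect sizes
weights  = [1.0] * 6                        # unit weights for simplicity
clusters = [1, 1, 2, 2, 3, 3]               # two dependent effects per study
beta, var = rve_pooled_effect(effects, weights, clusters)
print(f"pooled effect = {beta:.3f}, robust SE = {var ** 0.5:.3f}")
```

With only three clusters, as here, the article's point applies directly: the robust variance estimate is valid asymptotically but needs small-sample adjustment before it can support trustworthy tests.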


Psychological Methods | 2015

Measurement-comparable effect sizes for single-case studies of free-operant behavior.

James E. Pustejovsky

Single-case research comprises a set of designs and methods for evaluating the effects of interventions, practices, or programs on individual cases, through comparison of outcomes measured at different points in time. Although there has long been interest in meta-analytic techniques for synthesizing single-case research, there has been little scrutiny of whether proposed effect sizes remain on a directly comparable metric when outcomes are measured using different operational procedures. Much of single-case research focuses on behavioral outcomes in free-operant contexts, which may be measured using a variety of different direct observation procedures. This article describes a suite of effect sizes for quantifying changes in free-operant behavior, motivated by an alternating renewal process model that allows measurement comparability to be established in precise terms. These effect size metrics have the advantage of comporting with how direct observation data are actually collected and summarized. Effect size estimators are proposed that are applicable when the behavior being measured remains stable within a given treatment condition. The methods are illustrated by 2 examples, including a re-analysis of a systematic review of the effects of choice-making opportunities on problem behavior.


Remedial and Special Education | 2017

Functional Assessment–Based Interventions for Students With or At-Risk for High-Incidence Disabilities: Field Testing Single-Case Synthesis Methods

Eric Alan Common; Kathleen Lynne Lane; James E. Pustejovsky; Austin H. Johnson; Liane Elizabeth Johl

This systematic review investigated one systematic approach to designing, implementing, and evaluating functional assessment–based interventions (FABI) for use in supporting school-age students with or at-risk for high-incidence disabilities. We field tested several recently developed methods for single-case design syntheses. First, we appraised the quality of individual studies and the overall body of work using Council for Exceptional Children’s standards. Next, we calculated and meta-analyzed within-case and between-case effect sizes. Results indicated that studies were of high methodological quality, with nine studies identified as being methodologically sound and demonstrating positive outcomes across 14 participants. However, insufficient evidence was available to classify the evidence base for FABIs due to the small number of participants within studies (fewer than the recommended three) and across studies (fewer than the recommended 20). Nonetheless, average within-case effect sizes were equivalent to increases of 118% between baseline and intervention phases. Finally, potential moderating variables were examined. Limitations and future directions are discussed.


Remedial and Special Education | 2017

A Meta-Analysis of School-Based Group Contingency Interventions for Students with Challenging Behavior: An Update.

Daniel M. Maggin; James E. Pustejovsky; Austin H. Johnson

Group contingencies are recognized as a potent intervention for addressing challenging student behavior in the classroom, with research reviews supporting the use of this intervention platform going back more than four decades. Over this time period, the field of education has increasingly emphasized the role of research evidence for informing practice, as reflected in the increased use of systematic reviews and meta-analyses. In the current article, we continue this trend by applying recently developed between-case effect size measures and transparent visual analysis procedures to synthesize an up-to-date set of group contingency studies that used single-case designs. Results corroborated recent systematic reviews by indicating that group contingencies are generally effective—particularly for addressing challenging behavior in general education classrooms. However, our review highlights the need for more research on students with disabilities and the need to collect and report information about participants’ functional level.


Remedial and Special Education | 2017

Technology-Aided Instruction and Intervention for Students With ASD: A Meta-Analysis Using Novel Methods of Estimating Effect Sizes for Single-Case Research

Erin E. Barton; James E. Pustejovsky; Daniel M. Maggin; Brian Reichow

The adoption of methods and strategies validated through rigorous, experimentally oriented research is a core professional value of special education. We conducted a systematic review and meta-analysis examining the experimental literature on Technology-Aided Instruction and Intervention (TAII) using research identified as part of the National Autism Professional Development Project. We applied novel between-case effect size methods to the TAII single-case research base. In addition, we used meta-analytic methodologies to examine the methodological quality of the research, calculate average effect sizes to quantify the level of evidence for TAII, and compare effect sizes across single-case and group-based experimental research. Results identified one category of TAII—computer-assisted instruction—as an evidence-based practice across both single-case and group studies. The remaining two categories of TAII—augmentative and alternative communication and virtual reality—were not identified as evidence-based using What Works Clearinghouse summary ratings.


Multivariate Behavioral Research | 2015

Four Methods for Analyzing Partial Interval Recording Data, with Application to Single-Case Research.

James E. Pustejovsky; Daniel M. Swan

Partial interval recording (PIR) is a procedure for collecting measurements during direct observation of behavior. It is used in several areas of educational and psychological research, particularly in connection with single-case research. Measurements collected using partial interval recording suffer from construct invalidity because they are not readily interpretable in terms of the underlying characteristics of the behavior. Using an alternating renewal process model for the behavior under observation, we demonstrate that ignoring the construct invalidity of PIR data can produce misleading inferences, such as inferring that an intervention reduces the prevalence of an undesirable behavior when in fact it has the opposite effect. We then propose four different methods for analyzing PIR summary measurements, each of which can be used to draw inferences about interpretable behavioral parameters. We demonstrate the methods by applying them to data from two single-case studies of problem behavior.
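The mismatch described above can be illustrated with a small simulation. This sketch assumes exponential on/off durations, which is a convenience, not the paper's general alternating renewal model, and all parameter values are made up:

```python
import random

def simulate_pir(mu_on, mu_off, interval_len=15.0, n_intervals=40, seed=1):
    """Simulate one session of partial interval recording (PIR) for a
    behavior following an alternating renewal process with exponential
    on/off durations (a simplifying assumption)."""
    rng = random.Random(seed)
    session = interval_len * n_intervals
    # Build a timeline of (start, end) episodes, starting in the off state.
    episodes, t, on = [], 0.0, False
    while t < session:
        if on:
            dur = rng.expovariate(1.0 / mu_on)     # mean episode length mu_on
            episodes.append((t, min(t + dur, session)))
        else:
            dur = rng.expovariate(1.0 / mu_off)    # mean gap length mu_off
        t += dur
        on = not on
    # PIR scores an interval 1 if the behavior occurs at any point within it.
    scores = [any(s < (k + 1) * interval_len and e > k * interval_len
                  for s, e in episodes)
              for k in range(n_intervals)]
    return sum(scores) / n_intervals

prevalence = 2.0 / (2.0 + 8.0)   # true prevalence = mu_on / (mu_on + mu_off)
pir = sum(simulate_pir(2.0, 8.0, seed=s) for s in range(200)) / 200
print(f"true prevalence {prevalence:.2f}, mean PIR score {pir:.2f}")
```

Because any occurrence within an interval scores the whole interval, the mean PIR score lands well above the true prevalence of 0.20, which is one face of the construct invalidity the article analyzes.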


Journal of School Psychology | 2018

Using response ratios for meta-analyzing single-case designs with behavioral outcomes

James E. Pustejovsky

Methods for meta-analyzing single-case designs (SCDs) are needed to inform evidence-based practice in clinical and school settings and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using systematic direct observation, which typically take the form of rates or proportions. For studies that use such measures, one simple and intuitive way to quantify effect sizes is in terms of proportionate change from baseline, using an effect size known as the log response ratio. This paper describes methods for estimating log response ratios and combining the estimates using meta-analysis. The methods are based on a simple model for comparing two phases, where the level of the outcome is stable within each phase and the repeated outcome measurements are independent. Although auto-correlation will lead to biased estimates of the sampling variance of the effect size, meta-analysis of response ratios can be conducted with robust variance estimation procedures that remain valid even when sampling variance estimates are biased. The methods are demonstrated using data from a recent meta-analysis on group contingency interventions for student problem behavior.
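The basic quantities described above can be sketched as follows. This is a hedged illustration, not the paper's full method: it uses phase means with a delta-method sampling variance under the simple independence model and omits the bias corrections; the numbers are made up:

```python
import math

def log_response_ratio(mean_b, mean_a, sd_b, sd_a, n_b, n_a):
    """Log response ratio comparing intervention-phase (B) to baseline-phase
    (A) means, with a delta-method sampling variance that treats the
    repeated measurements as independent (the simple two-phase model)."""
    lrr = math.log(mean_b / mean_a)
    var = sd_b ** 2 / (n_b * mean_b ** 2) + sd_a ** 2 / (n_a * mean_a ** 2)
    return lrr, var

# Illustrative phase summaries (e.g., rates of problem behavior per session):
lrr, var = log_response_ratio(mean_b=4.0, mean_a=10.0,
                              sd_b=2.0, sd_a=3.0, n_b=8, n_a=6)
pct_change = 100 * (math.exp(lrr) - 1)   # proportionate change from baseline
print(f"LRR = {lrr:.3f}, SE = {var ** 0.5:.3f}, change = {pct_change:.0f}%")
```

Back-transforming via exp() gives the proportionate-change interpretation the abstract highlights: here the intervention-phase mean is 40% of baseline, i.e., a 60% reduction.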

Collaboration


Dive into James E. Pustejovsky's collaborations.

Top Co-Authors

Allen C. Sherman
University of Arkansas for Medical Sciences

Crystal L. Park
University of Connecticut

George Fitchett
Rush University Medical Center

Heather Jim
University of South Florida