Jessaca Spybrook
Western Michigan University
Publications
Featured research published by Jessaca Spybrook.
Evaluation Review | 2013
Carl D. Westine; Jessaca Spybrook; Joseph A. Taylor
Background: Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. Objectives: This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates using mathematics and reading. Research Design: Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11. Measures: Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163–.172, and reading, .099–.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Conclusion: Science educational researchers should utilize design parameters derived from science achievement outcomes.
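As a rough companion to this abstract, the sketch below shows how an unconditional ICC of the kind reported here can be estimated from a two-level random-intercept model. It is a minimal illustration using simulated data standing in for the Texas assessment records, with statsmodels standing in for whatever HLM software the authors used.

```python
# A minimal sketch (not the authors' code): estimating an unconditional
# intraclass correlation from a two-level random-intercept model, with
# simulated data in place of the Texas assessment records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_students = 100, 30

# Simulate scores with between-school variance 0.2 and within-school
# variance 0.8, so the true ICC is 0.2.
school_effects = rng.normal(0, np.sqrt(0.2), n_schools)
df = pd.DataFrame({"school": np.repeat(np.arange(n_schools), n_students)})
df["score"] = school_effects[df["school"]] + rng.normal(0, np.sqrt(0.8), len(df))

# Unconditional (intercept-only) model: students nested in schools.
fit = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
tau00 = fit.cov_re.iloc[0, 0]   # between-school variance component
sigma2 = fit.scale              # within-school (student-level) variance
print(f"Estimated ICC: {tau00 / (tau00 + sigma2):.3f}")
```

The three-level (students in schools in districts) estimates follow the same logic, with an additional variance component for districts.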
Journal of Teacher Education | 2012
Lindsay Clare Matsumura; Helen Garnier; Jessaca Spybrook
This study examines the effect of a comprehensive literacy-coaching program focused on enacting a discussion-based approach to reading comprehension instruction (content-focused coaching [CFC]) on the quality of classroom text discussions over 2 years. The study used a cluster-randomized trial in which schools were assigned to either CFC or standard practice in the district for literacy coaching. Observers rated classroom text discussions significantly higher in CFC schools. Teachers in the CFC schools participated more frequently in coaching activities that emphasized planning and reflecting on instruction, enacting lessons in their classrooms, building knowledge of the theory underlying effective pedagogy, and differentiating instruction than did the teachers in the comparison condition. Qualitative analyses of coach interviews identified substantive differences in the professional development support available to coaches, scope of coaches’ job responsibilities, and focus of coaching resources in the CFC schools and comparison schools.
Journal of Experimental Education | 2014
Jessaca Spybrook
The Institute of Education Sciences has funded more than 100 experiments to evaluate educational interventions in an effort to generate scientific evidence of program effectiveness on which to base education policy and practice. In general, these studies are designed with the goal of having adequate statistical power to detect the average treatment effect. However, the average treatment effect may be less informative if the treatment effects vary substantially from site to site or if the intervention effects differ across contexts or subpopulations. This article considers the precision of studies to detect different types of treatment effect heterogeneity. Calculations are demonstrated using a set of cluster randomized trials funded by the Institute of Education Sciences. Strategies for planning future studies with adequate precision for estimating treatment effect heterogeneity are discussed.
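One way to build intuition for the precision question raised here is a small simulation. The sketch below is my own construction, not the article's method: it approximates power to detect cross-site impact variation with the standard chi-square homogeneity (Q) test applied to site-level impact estimates, under purely illustrative parameter values.

```python
# A rough simulation sketch (my construction, not the article's method):
# approximate power to detect cross-site variation in program impacts using
# the chi-square homogeneity (Q) test on site-level impact estimates.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
J, n, tau, reps = 40, 60, 0.15, 2000  # sites, site size, impact SD, replications
est_var = 4.0 / n   # sampling variance of each site-level impact estimate
                    # (balanced assignment, outcome variance standardized to 1)
crit = chi2.ppf(0.95, J - 1)          # critical value of the homogeneity test

rejections = 0
for _ in range(reps):
    true_impacts = rng.normal(0.20, tau, J)   # impacts truly vary across sites
    estimates = true_impacts + rng.normal(0, est_var ** 0.5, J)
    q = np.sum((estimates - estimates.mean()) ** 2) / est_var  # Q statistic
    rejections += q > crit
print(f"Simulated power to detect impact variation: {rejections / reps:.2f}")
```

Even with 40 sites of 60 members each, simulated power to detect a cross-site impact standard deviation of 0.15 falls well short of conventional targets, which is the article's central concern.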
International Journal of Research & Method in Education | 2016
Jessaca Spybrook; Ran Shi; Benjamin Kelcey
ABSTRACT This article examines the statistical precision of cluster randomized trials (CRTs) funded by the Institute of Education Sciences (IES). Specifically, it compares the total number of clusters randomized and the minimum detectable effect size (MDES) of two sets of studies, those funded in the early years of IES (2002–2004) and those funded in recent years (2011–2013). Overall, the average precision of studies in the recent cohort was more than double that of the early cohort: an average MDES of 0.23, compared with 0.48 for the early studies. The findings suggest a consistent and substantial increase in the precision of CRTs funded by IES over the past decade, which is a critical step towards designing studies that have the potential to yield high-quality evidence about the effectiveness of educational interventions.
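For readers who want to reproduce this kind of comparison, the following sketch implements the standard MDES formula for a two-level cluster randomized trial. The parameter values are illustrative assumptions, not figures from the reviewed studies.

```python
# A hedged implementation of the standard MDES formula for a two-level
# cluster randomized trial; parameter values below are illustrative
# assumptions, not figures from the reviewed studies.
from scipy.stats import t

def mdes_crt2(J, n, rho, P=0.5, R2_1=0.0, R2_2=0.0, alpha=0.05, power=0.80):
    """Minimum detectable effect size for a two-level CRT.

    J    : number of clusters randomized      n   : students per cluster
    rho  : intraclass correlation             P   : proportion of clusters treated
    R2_1 : student-level variance explained   R2_2: cluster-level variance explained
    """
    df = J - 2  # df for the treatment contrast (ignores covariate adjustment)
    M = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    var = (rho * (1 - R2_2) / (P * (1 - P) * J)
           + (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n))
    return M * var ** 0.5

# Illustrative contrast: 20 vs. 60 schools of 50 students, ICC = .18.
print(round(mdes_crt2(J=20, n=50, rho=0.18), 2))   # smaller trial, larger MDES
print(round(mdes_crt2(J=60, n=50, rho=0.18), 2))
```

Tripling the number of clusters in this hypothetical design cuts the MDES from roughly 0.59 to 0.33, the kind of gain the cohort comparison above documents.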
Early Education and Development | 2014
Hope K. Gerde; Nell K. Duke; Annie M. Moses; Jessaca Spybrook; Meagan K. Shedd
Research Findings: Examining the effects of professional development for the early childhood workforce that fits within the constraints of government policy is crucial for identifying types and amounts of effective training and informing child care policy. The present study used a cluster-randomized trial to evaluate the effects of a professional development program for child care providers designed to meet the criteria for 2 state-level policies: (a) that child care providers working in licensed centers engage in 10 hr of professional development annually and (b) that all licensed child care settings provide 30 min of developmentally appropriate literacy activity daily. Results indicated that 10 hr of professional development focused on literacy was effective for significantly improving the literacy practices and knowledge of child care providers. However, it was not effective in eliciting substantial growth in child literacy outcomes, at least in the short term. The absence of effects on child outcomes illustrates the importance of measuring professional development effects at both the provider and child levels. Practice or Policy: This study illustrates the importance of critically questioning and analyzing state policy, particularly dosage. In practice, dosage is an influential factor in how professional development is selected by programs and providers, because most policies only specify a required number of hours to be completed. The design of policy, which can influence both provider practice and child outcomes, relies upon alignment between early childhood research and policy.
Journal of Research on Educational Effectiveness | 2008
Jessaca Spybrook
Abstract This study examines the reporting of power analyses in the group randomized trials funded by the Institute of Education Sciences from 2002 to 2006. A detailed power analysis provides critical information that allows reviewers to (a) replicate the power analysis and (b) assess whether the parameters used in the power analysis are reasonable. Without a detailed power analysis, reviewers may have difficulty evaluating the accuracy of the power analysis, and underpowered studies may inadvertently pass through the review process with a recommendation for funding. This study reveals that sample sizes are reported with high consistency; however, other important design parameters, including intraclass correlations, covariate-outcome correlations, and the percentage of variance explained by blocking, are not reported with regularity. An analysis of reporting trends reveals that the reporting of intraclass correlations and covariate-outcome correlations increased dramatically over time. The reporting of blocking information was still extremely limited, even in the more recent studies.
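The sketch below illustrates why these parameters matter for replication: holding the design fixed, the power of a two-level CRT moves substantially with the assumed ICC and cluster-level covariate R-squared. All values are hypothetical.

```python
# A hypothetical sensitivity check: power for a two-level CRT under
# different assumed ICCs and cluster-level covariate R-squareds, holding
# the design fixed. This is why omitting these parameters from a report
# makes the power analysis hard to verify.
from scipy.stats import norm

def power_crt2(delta, J, n, rho, R2_2=0.0, P=0.5, alpha=0.05):
    """Approximate (normal-approximation) power for effect size delta."""
    se = ((rho * (1 - R2_2) + (1 - rho) / n) / (P * (1 - P) * J)) ** 0.5
    return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

for rho in (0.05, 0.15, 0.25):
    for r2 in (0.0, 0.5):
        p = power_crt2(delta=0.25, J=40, n=60, rho=rho, R2_2=r2)
        print(f"ICC={rho:.2f}, R2_2={r2:.1f}: power={p:.2f}")
```

In this hypothetical design, power for the same effect size ranges from well below 0.5 to near 0.9 depending solely on the assumed ICC and covariate R-squared, so a reviewer cannot judge a power claim without them.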
Journal of Research on Educational Effectiveness | 2017
Howard S. Bloom; Jessaca Spybrook
ABSTRACT Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment (intent-to-treat), a cross-site standard deviation of the effects of program assignment, and a difference between the cross-site mean effects of program assignment for two subpopulations of sites. However, capitalizing on this opportunity requires adequately powering future trials to estimate these parameters. To help researchers do so, we present a simple approach for computing the minimum detectable values of these parameters for different sample designs. The article then uses this approach to illustrate, for each parameter, the precision trade-off between increasing the number of study sites and increasing site sample size. Findings are presented for multisite trials that randomize individual sample members and for multisite trials that randomize intact groups or clusters of sample members.
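A hedged sketch of the first of these quantities, the minimum detectable cross-site mean effect, appears below. It is a simplified version of the kind of calculation the article describes (within-site outcome variance standardized to one), not the authors' exact formulas; the two calls contrast many small sites with few large sites at the same total sample size.

```python
# A simplified sketch (assumptions, not the article's exact derivation) of
# the precision trade-off for the cross-site mean effect in a multisite
# trial that randomizes individuals within sites.
from scipy.stats import t

def mdes_multisite(J, n, tau2, P=0.5, alpha=0.05, power=0.80):
    """MDES for the cross-site mean effect of program assignment.

    J    : number of sites
    n    : sample members per site
    tau2 : cross-site variance of the (standardized) effect of assignment
    P    : proportion assigned to treatment within each site
    """
    df = J - 1
    M = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return M * (tau2 / J + 1.0 / (P * (1 - P) * J * n)) ** 0.5

# Same total sample (3,000): many small sites vs. few large sites.
print(round(mdes_multisite(J=100, n=30, tau2=0.01), 3))
print(round(mdes_multisite(J=30, n=100, tau2=0.01), 3))
```

Under these illustrative values, spreading the same total sample across more sites yields the smaller MDES, because adding sites shrinks both variance terms while adding members per site shrinks only one.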
AERA Open | 2016
Jessaca Spybrook; Carl D. Westine; Joseph A. Taylor
The Common Guidelines for Education Research and Development were created jointly by the Institute of Education Sciences and the National Science Foundation to streamline education research and contribute to an accumulation of knowledge that will lead to improved student outcomes. One type of research that emerged in the guidelines is impact research. To achieve the level of rigor expected for an impact study, research teams commonly employ a cluster randomized trial (CRT). This article provides empirical estimates of design parameters necessary for planning adequately powered CRTs focused on science achievement. Examples of how to use these parameters to improve the design of science impact studies are discussed.
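In practice, such parameters feed directly into sample-size planning. The hypothetical example below searches for the smallest number of schools meeting a target MDES, given assumed science ICC and pretest R-squared values; the numbers are illustrative, not the article's estimates.

```python
# A hypothetical planning example: finding the smallest number of schools J
# that achieves a target MDES, given assumed science design parameters.
from scipy.stats import t

def mdes(J, n, rho, R2_2, P=0.5, alpha=0.05, power=0.80):
    """MDES for a two-level CRT with a cluster-level covariate (R2_2)."""
    M = t.ppf(1 - alpha / 2, J - 2) + t.ppf(power, J - 2)
    return M * ((rho * (1 - R2_2) + (1 - rho) / n) / (P * (1 - P) * J)) ** 0.5

target, n, rho, R2_2 = 0.20, 60, 0.18, 0.75   # illustrative values only
J = 4
while mdes(J, n, rho, R2_2) > target:
    J += 2                                    # keep J even for a balanced design
print(f"Schools needed: {J} (MDES = {mdes(J, n, rho, R2_2):.3f})")
```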
Journal of Child and Adolescent Psychopharmacology | 2009
Charles A. Henry; David Shervin; Ann M. Neumeyer; Ronald J. Steingard; Jessaca Spybrook; Roula Choueiri; Margaret L. Bauman
BACKGROUND Youths with pervasive developmental disorders (PDDs) often have symptoms that fail to respond to selective serotonin reuptake inhibitor (SSRI) treatment. These children may be given a subsequent trial of another SSRI. This study reports on the outcome of PDD youths who received a second SSRI trial after an initial treatment failure. METHODS Clinic charts were reviewed for 22 outpatient youths with a DSM-IV diagnosis of a PDD who were treated with an SSRI after an initial failure with a previous SSRI. Response for the second SSRI trial was determined using the Clinical Global Impressions-Improvement Scale (CGI-I). Treatment indications, symptom severity, demographic data, and side effects were recorded. RESULTS For the second SSRI trial, 31.8% of the subjects were rated as much improved on the CGI-I scale and determined to be responders, with 68.2% of the subjects demonstrating activation side effects. Ninety percent of subjects demonstrated activation side effects when data from both SSRI trials were combined. There were no statistically significant associations between outcome of the second SSRI trial and clinical/demographic variables. CONCLUSIONS A second trial of an SSRI after an initial SSRI treatment failure was often unsuccessful in children and adolescents with PDDs. Activation side effects were common. Because alternative treatments in this population are limited, a second trial of an SSRI may still be considered. The study was limited by its retrospective design and by its small sample size.
Journal of Educational and Behavioral Statistics | 2016
Jessaca Spybrook; Benjamin Kelcey; Nianbo Dong
Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the “what works” question. However, program effects may vary by individual characteristics or by context, making it important to also consider power to detect moderator effects. This article presents a framework for calculating statistical power for moderator effects at all levels for two- and three-level CRTs. Annotated R code is included to make the calculations accessible to researchers and increase the regularity with which a priori power analyses for moderator effects in CRTs are conducted.
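The article's code is in R; as a rough Python analogue (my own construction, not the published framework), the sketch below approximates power for the treatment-by-moderator interaction with a binary cluster-level moderator in a two-level CRT.

```python
# A hedged sketch (not the article's R code) of a priori power for a binary
# cluster-level moderator in a two-level CRT: the test of the
# treatment-by-moderator interaction.
from scipy.stats import norm

def power_l2_moderator(delta, J, n, rho, P=0.5, Q=0.5, R2_2=0.0, alpha=0.05):
    """Approximate power to detect an interaction effect of size delta.

    J : clusters; n : individuals per cluster; rho : ICC
    P : proportion of clusters treated; Q : proportion with moderator = 1
    """
    se = ((rho * (1 - R2_2) + (1 - rho) / n)
          / (P * (1 - P) * Q * (1 - Q) * J)) ** 0.5
    return norm.cdf(abs(delta) / se - norm.ppf(1 - alpha / 2))

# With 60 clusters of 50, power to detect a cluster-level interaction of
# 0.25 (illustrative values only).
print(f"moderator power: {power_l2_moderator(0.25, J=60, n=50, rho=0.15):.2f}")
```

Note how the Q(1 - Q) term in the denominator inflates the standard error relative to the main effect of treatment, which is why a design powered only for "what works" is usually underpowered for "for whom".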