Publications


Featured research published by John Deke.


Economic Inquiry | 2008

After-School Program Effects on Behavior: Results from the 21st Century Community Learning Centers Program National Evaluation

Susanne James-Burdumy; Mark Dynarski; John Deke

This paper presents evidence on after-school program effects on behavior from the national evaluation of the U.S. Department of Education's 21st Century Community Learning Centers after-school program. Findings come from both of the study's components: (1) an elementary school component based on random assignment of 2,308 students in 12 school districts and (2) a middle school component based on a matched comparison design including 4,264 students in 32 districts. Key findings include higher levels of negative behavior for elementary students and some evidence of higher levels of negative behavior for middle school students. (JEL I21)


Educational Evaluation and Policy Analysis | 2007

When Elementary Schools Stay Open Late: Results From the National Evaluation of the 21st Century Community Learning Centers Program

Susanne James-Burdumy; Mark Dynarski; John Deke

This article presents evidence from a national evaluation of the effects of 21st Century Community Learning Centers after-school programs. The study was conducted in 12 school districts and 26 after-school centers, at which 2,308 elementary school students who were interested in attending a center were randomly assigned to either the treatment or the control group. The findings indicate that the programs affected the type of care and supervision students received after school: parents were less likely to be caring for their child and other adults were more likely, but there was no statistically significant effect on the incidence of self-care. Students in the program reported feeling safer after school, but their academic outcomes were not affected and they had more incidents of negative behavior.


Economics of Education Review | 2003

A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas

John Deke

This paper uses a policy change involving statewide school district refinancing in Kansas during the early 1990s to identify the relationship between per-pupil expenditure and the probability that a student will progress to higher education. Kansas's school district refinancing was intended to equalize per-pupil expenditure conditional on a measure of cost developed by the state. This equalization process resulted in a change in spending that, although not completely exogenous, is understood well enough that an unbiased estimate of the effect of spending on students' post-secondary destinations can be made. Since additional years of education are known to increase lifetime earnings, a simple cost-benefit analysis of a hypothetical spending increase is considered.
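As a rough illustration of the kind of binary-outcome model this abstract describes (relating per-pupil spending to the probability of postsecondary enrollment), the sketch below fits a logit on simulated data. Every number and variable name is invented for illustration; nothing here reproduces the paper's data or estimates.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 10_000
spending = rng.normal(6.0, 1.0, n)        # per-pupil spending in $ thousands (illustrative)
latent = -2.0 + 0.3 * spending            # assumed positive spending-enrollment relationship
enroll = rng.binomial(1, 1 / (1 + np.exp(-latent)))

# Logistic regression of enrollment on spending
fit = sm.Logit(enroll, sm.add_constant(spending)).fit(disp=0)
print(fit.params)                         # slope estimate should be near the assumed 0.3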


Journal of Research on Educational Effectiveness | 2012

Effectiveness of Four Supplemental Reading Comprehension Interventions

Susanne James-Burdumy; John Deke; Russell Gersten; Julieta Lugo-Gil; Rebecca Newman-Gonchar; Joseph Dimino; Kelly Haymond; Albert Yung-Hsu Liu

This article presents evidence from a large-scale randomized controlled trial of the effects of four supplemental reading comprehension curricula (Project CRISS, ReadAbout, Read for Real, and Reading for Knowledge) on students' understanding of informational text. Across 2 school years, the study included 10 school districts, more than 200 schools, and more than 10,000 fifth-grade students. Schools interested in implementing 1 of the 4 supplemental curricula were randomly assigned to 1 of 4 treatment groups or to a control group. The impact analyses in the study's first year revealed a statistically significant negative impact of Reading for Knowledge on students' reading comprehension scores and no other significant impacts. The impact of ReadAbout was positive and significant in the study's second year among teachers with 1 year of experience using the intervention.


Evaluation Review | 2017

The WWC Attrition Standard: Sensitivity to Assumptions and Opportunities for Refining and Adapting to New Contexts.

John Deke; Hanley Chiang

Background: To limit the influence of attrition bias in assessments of intervention effectiveness, several federal evidence reviews have established a standard for acceptable levels of sample attrition in randomized controlled trials. These evidence reviews include the What Works Clearinghouse (WWC), the Home Visiting Evidence of Effectiveness Review, and the Teen Pregnancy Prevention Evidence Review. We believe the WWC attrition standard may constitute the first use of model-based, empirically supported bounds on attrition bias in the context of a federally sponsored systematic evidence review. Meeting the WWC attrition standard (or one of the attrition standards based on it) is now an important consideration for researchers conducting studies that could potentially be reviewed by the WWC or other evidence reviews. Objectives: The purpose of this article is to explain the WWC attrition model and how it is used to establish attrition bounds, and to assess the sensitivity of those bounds to key parameter values. Research Design: Results are based on equations derived in the article and on values generated by applying those equations to a range of parameter values. Results: The authors find that the attrition bounds are more sensitive to the maximum level of bias that an evidence review is willing to tolerate than to other parameters in the attrition model. Conclusions: The authors conclude that the most productive refinements to existing attrition standards may be with respect to the definition of “maximum tolerable bias.”
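The sketch below is not the WWC attrition model itself, only a toy simulation of the underlying problem the standard addresses: when dropout differs by arm and is related to the outcome, a completer-only comparison is biased relative to the full-sample benchmark. All retention rates and effect sizes are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(3)
n = 200_000
treat = rng.binomial(1, 0.5, n)
outcome = 0.2 * treat + rng.normal(0, 1, n)              # assumed true effect = 0.2

# Attrition that is higher in the control group and related to the outcome
p_stay = np.where(treat == 1, 0.90, 0.75) + 0.20 * outcome
stay = rng.binomial(1, np.clip(p_stay, 0, 1)).astype(bool)

full = outcome[treat == 1].mean() - outcome[treat == 0].mean()
completers = outcome[stay & (treat == 1)].mean() - outcome[stay & (treat == 0)].mean()
print("full-sample estimate:   ", round(full, 3))        # close to the assumed 0.2
print("completer-only estimate:", round(completers, 3))  # pulled away from 0.2 by attrition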


Journal of Research on Educational Effectiveness | 2014

Effectiveness of Supplemental Educational Services

John Deke; Brian Gill; Lisa Dragoset; Karen Bogen

One of the modifications that the No Child Left Behind Act made to the Elementary and Secondary Education Act gave parents of low-income students in low-performing schools a choice of Supplemental Educational Services (SEdS). SEdS include tutoring or other academic support services offered outside the regular school day, at no charge to students or their families, by public or private organizations that have been approved by the state. We examine the impacts of SEdS on test scores among 24,000 students in school districts where SEdS were oversubscribed in Florida, Ohio, and Connecticut. Oversubscribed school districts are required to make SEdS available to the lowest-achieving students among eligible applicants, creating the opportunity to estimate impacts using a regression discontinuity design, which relies on districts' use of a continuous measure of prior academic achievement to determine which eligible applicants will be offered services. We find no impact of SEdS on student test scores.
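For readers unfamiliar with the design, the sketch below shows the basic regression discontinuity logic described in this abstract using simulated data and a local linear fit around a cutoff. The variable names, bandwidth, and cutoff are illustrative assumptions, not values from the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
prior_score = rng.normal(0, 1, n)               # running variable: prior achievement
cutoff = 0.0                                    # assumed eligibility cutoff
offered = (prior_score < cutoff).astype(float)  # lowest achievers are offered services
true_impact = 0.0                               # the study reports no impact
outcome = 0.8 * prior_score + true_impact * offered + rng.normal(0, 1, n)

# Local linear regression within a bandwidth, allowing different slopes on each side
bandwidth = 0.5
window = np.abs(prior_score - cutoff) < bandwidth
centered = prior_score[window] - cutoff
X = sm.add_constant(np.column_stack([
    offered[window],
    centered,
    offered[window] * centered,
]))
fit = sm.OLS(outcome[window], X).fit()
print("estimated impact at the cutoff:", fit.params[1])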


Evaluation Review | 2017

Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions

Jaime Thomas; Sarah A. Avellar; John Deke; Philip Gleason

Background: Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. Objective: To help systematic reviews develop or refine quality standards and to support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do the variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? Methods: To address Question 1, we examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study–Birth Cohort data. For Question 2, we used simulations to compare two methods, matching and regression adjustment, for accounting for preexisting differences between intervention and comparison groups. Results: A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified and preexisting differences between the intervention and comparison groups exist.
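A stripped-down version of the kind of simulation this abstract describes might look like the sketch below: treatment take-up depends on a baseline covariate, the naive difference in means is biased, and regression adjustment on that covariate recovers the assumed effect. All data-generating values are invented and do not come from the article.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20_000
baseline = rng.normal(0, 1, n)                  # e.g., a baseline skill measure (assumed)
p_treat = 1 / (1 + np.exp(-baseline))           # higher-baseline families more likely to enroll
treat = rng.binomial(1, p_treat).astype(float)
outcome = 0.3 * treat + 0.7 * baseline + rng.normal(0, 1, n)

naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()
adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, baseline]))).fit()
print("naive difference in means:   ", round(naive, 3))              # biased upward
print("regression-adjusted estimate:", round(adjusted.params[1], 3)) # near the assumed 0.3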


Evaluation Review | 2016

Design and Analysis Considerations for Cluster Randomized Controlled Trials That Have a Small Number of Clusters

John Deke

Background: Cluster randomized controlled trials (CRCTs) often require a large number of clusters in order to detect small effects with high probability. However, there are contexts where it may be possible to design a CRCT with a much smaller number of clusters (10 or fewer) and still detect meaningful effects. Objectives: The objective is to offer recommendations for best practices in the design and analysis of small CRCTs. Research Design: I use simulations to examine alternative design and analysis approaches. Specifically, I examine (1) which analytic approaches control Type I errors at the desired rate, (2) which design and analytic approaches yield the most power, (3) the design effect of spurious correlations, and (4) examples of specific scenarios under which impacts of different sizes can be detected with high probability. Results/Conclusions: I find that (1) mixed effects modeling and ordinary least squares (OLS) on data aggregated to the cluster level both control the Type I error rate, (2) randomization within blocks is always recommended, but how best to account for blocking through covariate adjustment depends on whether the precision gains offset the loss of degrees of freedom, (3) power calculations can be accurate when design effects from small-sample spurious correlations are taken into account, and (4) it is very difficult to detect small effects with just four clusters, but with six or more clusters there are realistic circumstances under which small effects can be detected with high probability.
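As a minimal sketch (not the paper's code) of the cluster-level OLS approach mentioned in the conclusions, the example below aggregates simulated student outcomes to cluster means and regresses those means on a treatment indicator. The number of clusters, cluster sizes, and variance components are assumptions chosen for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
clusters_per_arm = 6          # assumed: six clusters per arm
students_per_cluster = 40     # assumed cluster size
arm_labels, cluster_means = [], []
for arm in (0, 1):            # 0 = control, 1 = treatment
    for _ in range(clusters_per_arm):
        cluster_effect = rng.normal(0, 0.2)   # between-cluster variation (assumed)
        y = 0.25 * arm + cluster_effect + rng.normal(0, 1, students_per_cluster)
        arm_labels.append(arm)
        cluster_means.append(y.mean())        # aggregate to the cluster level

# OLS on cluster means: residual degrees of freedom come from the 12 clusters,
# not the 480 students, which is what keeps the Type I error rate in check
X = sm.add_constant(np.array(arm_labels, dtype=float))
fit = sm.OLS(np.array(cluster_means), X).fit()
print("estimated impact:", fit.params[1], "p-value:", fit.pvalues[1])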


Mathematica Policy Research Reports | 2005

When Schools Stay Open Late: The National Evaluation of the 21st-Century Community Learning Centers Program, First Year Findings

Mark Dynarski; Susanne James-Burdumy; Mary T. Moore; Linda Rosenberg; John Deke; Wendy Mansfield


Mathematica Policy Research Reports | 2009

An Evaluation of Teachers Trained Through Different Routes to Certification

Jill Constantine; Daniel Player; Tim Silva; Kristin Hallgren; Mary Grider; John Deke

Collaboration


Dive into John Deke's collaborations.

Top Co-Authors

Lisa Dragoset, Mathematica Policy Research
Mark Dynarski, Mathematica Policy Research
Julieta Lugo-Gil, Mathematica Policy Research
Alan M. Hershey, Mathematica Policy Research
Brian Gill, Mathematica Policy Research
Karen Bogen, Mathematica Policy Research