
Publication


Featured research published by Peter Z. Schochet.


Developmental Psychology | 2005

The Effectiveness of Early Head Start for 3-Year-Old Children and Their Parents: Lessons for Policy and Programs

John M. Love; Ellen Eliason Kisker; Christine Ross; Helen Raikes; Jill Constantine; Kimberly Boller; Jeanne Brooks-Gunn; Rachel Chazan-Cohen; Louisa Tarullo; Christy Brady-Smith; Allison Sidle Fuligni; Peter Z. Schochet; Diane Paulsell; Cheri A. Vogel

Early Head Start, a federal program begun in 1995 for low-income pregnant women and families with infants and toddlers, was evaluated through a randomized trial of 3,001 families in 17 programs. Interviews with primary caregivers, child assessments, and observations of parent-child interactions were completed when children were 3 years old. Caregivers were diverse in race-ethnicity, language, and other characteristics. Regression-adjusted impact analyses showed that 3-year-old program children performed better than did control children in cognitive and language development, displayed higher emotional engagement of the parent and sustained attention with play objects, and were lower in aggressive behavior. Compared with controls, Early Head Start parents were more emotionally supportive, provided more language and learning stimulation, read to their children more, and spanked less. The strongest and most numerous impacts were for programs that offered a mix of home-visiting and center-based services and that fully implemented the performance standards early.
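
As an illustration of the regression-adjusted impact analyses the study describes, here is a minimal sketch in Python, using simulated data in place of the study's survey and assessment measures (all variable names and effect sizes are hypothetical):

```python
import numpy as np

# Illustrative only: simulated data standing in for the study's measures.
rng = np.random.default_rng(0)
n = 3001                                # mirrors the study's sample size
treat = rng.integers(0, 2, n)           # random assignment indicator
baseline = rng.normal(0.0, 1.0, n)      # hypothetical baseline covariate
outcome = 0.15 * treat + 0.5 * baseline + rng.normal(0.0, 1.0, n)

# Regression-adjusted impact: OLS of the outcome on treatment and covariates.
X = np.column_stack([np.ones(n), treat, baseline])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
resid = outcome - X @ beta
cov = np.linalg.inv(X.T @ X) * (resid @ resid) / (n - X.shape[1])
print(f"impact estimate: {beta[1]:.3f} (SE {np.sqrt(cov[1, 1]):.3f})")
```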


Journal of Educational and Behavioral Statistics | 2008

Statistical Power for Random Assignment Evaluations of Education Programs

Peter Z. Schochet

This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the literature. Focusing on standardized test scores of elementary school students, this article discusses appropriate precision standards and, for each design, the required number of schools to achieve those standards using empirical values of intraclass correlations, regression R² values, and other parameters. Clustering effects vary by design but are typically large. Thus, large school samples are required for education trials, and many evaluations will have sufficient power to detect impacts only for relatively large subgroups of sites.
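
A sketch of the standard minimum detectable effect size (MDES) calculation for a design that randomizes schools, in the spirit of the formulas this literature uses; the exact parameterization in the article may differ, and the example parameter values below are assumptions:

```python
import math

def mdes_school_rct(n_schools, students_per_school, icc,
                    r2_school=0.0, r2_student=0.0, p_treat=0.5,
                    multiplier=2.8):
    """Minimum detectable effect size (in standard deviation units) for a
    design that randomizes schools. `multiplier` ~ 2.8 corresponds to 80%
    power with a two-sided 5% test (large-sample approximation)."""
    j, n, rho, p = n_schools, students_per_school, icc, p_treat
    var = (rho * (1 - r2_school) / (p * (1 - p) * j)
           + (1 - rho) * (1 - r2_student) / (p * (1 - p) * j * n))
    return multiplier * math.sqrt(var)

# With an ICC of 0.15 and no covariates, 60 schools of 60 students each:
print(round(mdes_school_rct(60, 60, 0.15), 3))   # roughly 0.29 sd
```

The first variance term shrinks only with the number of schools, not students, which is why clustering dominates the required sample size.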


Journal of Educational and Behavioral Statistics | 2009

Statistical Power for Regression Discontinuity Designs in Education Evaluations

Peter Z. Schochet

This article examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling literature, and the empirical work focuses on common designs used in education research to test intervention effects on student test scores. The main conclusion is that samples typically must be three to four times larger under RD designs than under experimental clustered designs to produce impacts with the same level of statistical precision. Thus, the viability of using RD designs for new impact evaluations of educational interventions may be limited and will depend on the point of treatment assignment, the availability of pretests, and key research questions.
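
The "three to four times larger" conclusion follows from an RD design effect of roughly 1/(1 - R²), where R² measures how well the treatment indicator is predicted by the score terms included in the impact model. A simplified sketch of that logic (the simulated uniform score and cutoff are assumptions):

```python
import numpy as np

def rd_sample_multiplier(score, cutoff, poly_order=1):
    """Approximate factor by which an RD sample must exceed an RCT sample
    for equal precision: 1 / (1 - R^2), where R^2 comes from regressing the
    treatment indicator on the score terms included in the impact model.
    A simplified version of the design-effect logic in this literature."""
    treat = (score >= cutoff).astype(float)
    X = np.column_stack([np.ones_like(score)] +
                        [score ** k for k in range(1, poly_order + 1)])
    beta, *_ = np.linalg.lstsq(X, treat, rcond=None)
    resid = treat - X @ beta
    r2 = 1 - resid.var() / treat.var()
    return 1.0 / (1.0 - r2)

# A uniform assignment score with the cutoff at its median gives a
# multiplier near 4 for a linear score model, in line with the 3-4x finding:
rng = np.random.default_rng(1)
print(round(rd_sample_multiplier(rng.uniform(-1, 1, 100_000), 0.0), 2))
```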


Evaluation Review | 2009

An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations

Peter Z. Schochet

In social policy evaluations, the multiple testing problem occurs because of the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Type I and Type II errors. The framework involves specifying confirmatory and exploratory analyses in study protocols, delineating confirmatory outcome domains, conducting t tests on composite domain outcomes, and applying multiplicity corrections to composites across domains to obtain summative impact evidence. The article presents statistical background and discusses multiplicity issues for subgroup analyses, designs with multiple treatments, and reporting.
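
A minimal sketch of the composite-then-correct workflow described above, assuming hypothetical z-scored outcomes grouped into two confirmatory domains and using Holm's step-down procedure as one possible multiplicity correction:

```python
import numpy as np
from scipy import stats

def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down correction: a boolean rejection flag per p value."""
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

# Hypothetical data: z-scored outcomes grouped into confirmatory domains.
rng = np.random.default_rng(2)
domains = {"achievement": rng.normal(0.1, 1, (500, 3)),
           "behavior":    rng.normal(0.0, 1, (500, 2))}
treat = rng.integers(0, 2, 500).astype(bool)

pvals = []
for name, outcomes in domains.items():
    composite = outcomes.mean(axis=1)          # equally weighted domain composite
    t, p = stats.ttest_ind(composite[treat], composite[~treat])
    pvals.append(p)
print(holm_bonferroni(np.array(pvals)))        # one confirmatory test per domain
```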


Journal of Educational and Behavioral Statistics | 2013

What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

Peter Z. Schochet; Hanley S. Chiang

This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and Empirical Bayes estimators. Empirical results suggest that value-added estimates are likely to be noisy with the amount of data typically used in practice. Type I and Type II error rates for comparing a teacher’s performance to the average are likely to be about 25% with 3 years of data and 35% with 1 year of data. Corresponding error rates for overall false positive and negative errors are 10% and 20%, respectively. Lower error rates can be achieved if schools are the performance unit. The results suggest that policymakers must carefully consider likely system error rates when using value-added estimates to make high-stakes decisions regarding educators.
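
A back-of-the-envelope version of the error-rate logic, using a normal approximation rather than the article's OLS and Empirical Bayes derivations; the standard-error and gap values are illustrative assumptions:

```python
from scipy.stats import norm

def classification_error_rates(true_gap_sd, se, alpha=0.05):
    """Normal-approximation error rates for testing whether a teacher's
    value-added differs from the average. `true_gap_sd` is the teacher's
    true distance from the mean and `se` the standard error of the
    value-added estimate, both in student test score sd units."""
    crit = norm.ppf(1 - alpha / 2)
    type1 = alpha                                   # truly average teacher flagged
    # Power to flag a teacher who is truly `true_gap_sd` from the mean:
    power = (norm.cdf(-crit - true_gap_sd / se)
             + 1 - norm.cdf(crit - true_gap_sd / se))
    return type1, 1 - power                         # (Type I, Type II)

# Hypothetical numbers: a teacher 0.2 sd above average, estimated with an
# SE of 0.1 sd, is missed nearly half the time.
print(classification_error_rates(0.2, 0.1))
```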


Journal of Educational and Behavioral Statistics | 2013

Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference

Peter Z. Schochet

This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the finite-population model) or randomly selected from a vaguely defined universe (the super-population model). Both approaches allow for heterogeneity of treatment effects. Appropriate estimation methods and asymptotic moments are discussed for each model using simple differences-in-means estimators and those that include baseline covariates. An empirical application using a large-scale education RCT shows that the choice of the finite- or super-population approach can matter. Thus, the choice of framework and sensitivity analyses should be specified and justified in the analysis protocols.
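
A sketch of the simple difference-in-means estimator on school-level means with the usual Neyman variance estimator; this compresses the article's two-stage treatment into one step, and the simulated data are assumptions:

```python
import numpy as np

def neyman_diff_in_means(school_means, treat):
    """Difference in mean outcomes across treatment and control schools with
    the standard Neyman variance estimator s1^2/m1 + s0^2/m0. Under the
    finite-population model this is conservative (the unidentified treatment
    effect heterogeneity term is dropped); under the super-population model
    it is consistent. A sketch, not the article's full estimator."""
    y1, y0 = school_means[treat], school_means[~treat]
    impact = y1.mean() - y0.mean()
    var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return impact, np.sqrt(var)

# Hypothetical school-level means from a clustered RCT of 60 schools:
rng = np.random.default_rng(3)
treat = np.arange(60) < 30
means = rng.normal(0.0, 0.25, 60) + 0.12 * treat
print(neyman_diff_in_means(means, treat))
```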


Journal of Human Resources | 1998

The Dynamics of Receipt of Aid to Families with Dependent Children among Teenage Parents in Inner Cities

Philip Gleason; Anu Rangarajan; Peter Z. Schochet

This study examines the dynamics of AFDC receipt among 2,325 teenage mothers living in three inner-city areas who began receiving AFDC for the first time. We find that inner-city teenage mothers have longer welfare spells and higher recidivism rates than other groups of women receiving welfare. We find, however, that the factors affecting the length of their welfare spells and their reentry rates are similar to those of broader groups of welfare recipients. Teenage mothers with higher skill levels are more likely to exit welfare via work and are less likely to return to welfare.
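
Spell lengths of this kind are commonly studied with survivor functions; a minimal Kaplan-Meier sketch on hypothetical spell data (the article's actual methods may differ):

```python
import numpy as np

def kaplan_meier(durations, exited):
    """Kaplan-Meier survivor function for spell data: `durations` in months,
    `exited` True when the spell ended (False = censored). One standard way
    to study spell lengths; a sketch only."""
    surv, s = {}, 1.0
    for t in np.unique(durations[exited]):
        at_risk = np.sum(durations >= t)
        events = np.sum((durations == t) & exited)
        s *= 1 - events / at_risk
        surv[int(t)] = s
    return surv

# Hypothetical spell data (months on AFDC, exit observed or censored):
rng = np.random.default_rng(4)
durations = rng.geometric(0.05, 300)
exited = rng.random(300) < 0.8
print(list(kaplan_meier(durations, exited).items())[:5])
```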


Journal of Educational and Behavioral Statistics | 2011

Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

Peter Z. Schochet; Hanley S. Chiang

In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the identification and estimation of the CACE parameter for two-level clustered RCTs. The article also provides simple asymptotic variance formulas for CACE impact estimators measured in nominal and standard deviation units. In the empirical work, data from 10 large RCTs are used to compare significance findings using correct CACE variance estimators and commonly used approximations that ignore the estimation error in service receipt rates and outcome standard deviations. The key finding is that the variance corrections have very little effect on the standard errors of standardized CACE impact estimators. Across the examined outcomes, the correction terms typically raise the standard errors by less than 1% and change p values at the fourth or higher decimal place.
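
A sketch of the Wald/IV form of the CACE estimator with a delta-method standard error; it ignores clustering and the covariance between the two ITT estimates, so it is a simplification of the estimators the article analyzes:

```python
import numpy as np

def cace_wald(y, d, z):
    """Complier average causal effect via the Wald/IV estimator:
    (ITT effect on the outcome) / (ITT effect on service receipt),
    with a delta-method standard error treating the two ITT estimates
    as independent. A simplified sketch."""
    y1, y0 = y[z == 1], y[z == 0]
    d1, d0 = d[z == 1], d[z == 0]
    itt_y = y1.mean() - y0.mean()
    itt_d = d1.mean() - d0.mean()            # compliance-rate difference
    cace = itt_y / itt_d
    var_y = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    var_d = d1.var(ddof=1) / len(d1) + d0.var(ddof=1) / len(d0)
    se = abs(cace) * np.sqrt(var_y / itt_y**2 + var_d / itt_d**2)
    return cace, se

# Hypothetical trial: assignment z, service receipt d, outcome y.
rng = np.random.default_rng(5)
z = rng.integers(0, 2, 2000)                 # random assignment
d = (rng.random(2000) < 0.2 + 0.6 * z)       # receipt depends on assignment
y = 0.25 * d + rng.normal(0.0, 1.0, 2000)    # effect operates through receipt
print(cace_wald(y, d.astype(float), z))
```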


Journal of Educational and Behavioral Statistics | 2011

Do Typical RCTs of Education Interventions Have Sufficient Statistical Power for Linking Impacts on Teacher Practice and Student Achievement Outcomes?

Peter Z. Schochet

For RCTs of education interventions, it is often of interest to estimate associations between student outcomes and mediating teacher practice outcomes, to examine the extent to which the study’s conceptual model is supported by the data, and to identify specific mediators that are most associated with student learning. This article develops statistical power formulas for such exploratory analyses under clustered school-based RCTs using ordinary least squares (OLS) and instrumental variable (IV) estimators and uses these formulas to conduct a simulated power analysis. The power analysis finds that for currently available mediators, the OLS approach will yield precise estimates of associations between teacher practice measures and student test score gains only if the sample contains about 150 to 200 study schools. The IV approach, which can adjust for potential omitted variable and simultaneity biases, has very little statistical power for mediator analyses. For typical RCT evaluations, these results may have design implications for the scope of the data collection effort for obtaining costly teacher practice mediators.
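
To see why school-level mediator analyses demand large samples, here is a much simpler stand-in for the article's power formulas: approximate power for detecting a school-level correlation via the Fisher z transformation (the correlation value 0.2 is an assumption):

```python
import math
from scipy.stats import norm

def power_for_correlation(r, n_schools, alpha=0.05):
    """Approximate power for detecting a school-level correlation `r` between
    a teacher practice measure and mean test score gains, via the Fisher
    z transformation. Far simpler than the article's OLS/IV formulas;
    intended only to show the sample-size sensitivity."""
    z = 0.5 * math.log((1 + r) / (1 - r))      # Fisher transform of r
    se = 1 / math.sqrt(n_schools - 3)
    crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(crit - z / se) + norm.cdf(-crit - z / se)

# Power climbs past 0.8 only around 180-200 schools for r = 0.2:
for j in (50, 100, 150, 200):
    print(j, round(power_for_correlation(0.2, j), 2))
```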


Journal of Research on Educational Effectiveness | 2013

Statistical Power for School-Based RCTs with Binary Outcomes

Peter Z. Schochet

This article develops a new approach for calculating appropriate sample sizes for school-based randomized control trials (RCTs) with binary outcomes using logit models with and without baseline covariates. The theoretical analysis develops sample size formulas for clustered designs where random assignment is at the school or teacher level using generalized estimating equation methods. The article focuses on the impact parameter pertaining to rates and proportions rather than to the log odds of response, which has been the focus of the previous literature. The article also compiles intraclass correlations (ICCs) for the clustered design for a range of binary outcomes using data from seven education RCTs. These ICCs and the power formulas are then used to conduct a power analysis using a provided SAS macro; the key finding is that sample sizes of 40 to 60 schools that are typically included in clustered RCTs for student test score or behavioral scale outcomes will often be insufficient for binary outcomes. A key reason is that the potential for precision gains from regression adjustment is likely to be smaller for binary outcomes.
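
A rough stand-in for the article's GEE-based formulas: the textbook two-proportion sample size inflated by the design effect 1 + (m - 1)ρ; all parameter values below are assumptions:

```python
import math
from scipy.stats import norm

def schools_needed_binary(p_control, effect, students_per_school, icc,
                          alpha=0.05, power=0.8):
    """Schools needed (total, split evenly across arms) to detect a
    difference in proportions `effect`, inflating the classic two-proportion
    sample size by the design effect 1 + (m - 1) * icc. A textbook
    approximation, not the article's GEE-based formulas."""
    p1, p2 = p_control, p_control + effect
    pbar = (p1 + p2) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_per_arm = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
                  + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                 / effect ** 2)
    deff = 1 + (students_per_school - 1) * icc
    return math.ceil(2 * n_per_arm * deff / students_per_school)

# Detecting a 10 percentage point change from a 30% base rate with an
# ICC of 0.10 and 50 students per school needs ~84 schools, well above
# the 40 to 60 typical of clustered education RCTs:
print(schools_needed_binary(0.30, 0.10, 50, 0.10))
```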

Collaboration


Dive into Peter Z. Schochet’s collaborations.

Top Co-Authors

John Burghardt (Mathematica Policy Research)
Sheena McConnell (Mathematica Policy Research)
Christine Ross (Mathematica Policy Research)
Diane Paulsell (Mathematica Policy Research)
John M. Love (Mathematica Policy Research)
Kimberly Boller (Mathematica Policy Research)
Anu Rangarajan (Mathematica Policy Research)
Jeanne Brooks-Gunn (Columbia University)