Eric Isenberg
Mathematica Policy Research
Publications
Featured research published by Eric Isenberg.
Peabody Journal of Education | 2007
Eric Isenberg
This article discusses quantitative research on homeschooling, including the available data, pitfalls of using the data, estimates of the number of homeschooled children, part-time homeschooling, and why families homeschool. I compare research on homeschooling to research on charter schools, voucher programs, and private schools.
Mathematica Policy Research Reports | 2017
Heinrich Hock; Eric Isenberg
ABSTRACT As states and districts incorporate value-added estimates into multiple-measures systems of teacher evaluation, it has become increasingly important to understand how to model value added when the same student is taught the same subject by multiple teachers. Roster data on teacher–student links that have been checked and confirmed by the teachers themselves show levels of co-teaching far beyond what appears in administrative data. Therefore, to help states and districts overcome a potential limitation in the use of value added, we propose and examine three methods for estimating teacher value added when students are co-taught: the Partial Credit Method, the Teacher Team Method, and the Full Roster Method. The Partial Credit Method apportions responsibility between teachers according to the fraction of the year a student spent with each; this method, however, has practical problems that limit its usefulness. The Teacher Team Method and Full Roster Method presume that co-teachers share joint responsibility for the achievement of their shared students. We explore the properties of these methods and compare empirical estimates. Both methods produce similar estimates of teacher value added, but the Full Roster Method can be more easily implemented in practice.
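To make the weighting contrast concrete, the short Python sketch below shows how teacher–student links might be weighted under the Partial Credit and Full Roster approaches described in the abstract. The data frame, column names, and values are illustrative assumptions, not the authors' implementation.

import pandas as pd

# Each row links one student to one teacher, with the fraction of the school
# year the student spent on that teacher's roster (values are made up).
links = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s3"],
    "teacher": ["tA", "tB", "tA", "tB"],
    "fraction_of_year": [0.5, 0.5, 1.0, 1.0],
})

# Partial Credit Method: responsibility is apportioned by the share of the year
# the student spent with each teacher.
links["w_partial_credit"] = links["fraction_of_year"]

# Full Roster Method: every teacher receives full credit for every student on a
# verified roster, so co-taught student s1 enters the model once per teacher
# with weight 1 and is effectively counted more than once overall.
links["w_full_roster"] = 1.0

print(links)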
Mathematica Policy Research Reports | 2016
Mariesa Herrmann; Elias Walsh; Eric Isenberg
ABSTRACT It is common in the implementation of teacher accountability systems to use empirical Bayes shrinkage to adjust teacher value-added estimates by their level of precision. Because value-added estimates based on fewer students, or on students with "hard-to-predict" achievement, will be less precise, the procedure could have differential impacts on the probability that teachers of those students will be assigned consequences. This article investigates how shrinkage affects the value-added estimates of teachers of hard-to-predict students. We found that teachers of students with low prior achievement and of students who receive free lunch tend to have less precise value-added estimates. However, in our sample, shrinkage had no statistically significant effect on the relative probability that teachers of hard-to-predict students received consequences.
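The shrinkage adjustment itself is standard; a minimal Python sketch of the calculation is below, with invented numbers for the value-added estimates, standard errors, and variance components (nothing here reproduces the article's estimates).

import numpy as np

raw_estimate = np.array([0.30, -0.20, 0.10])  # unshrunk value-added estimates
std_error    = np.array([0.05,  0.25, 0.10])  # precision differs across teachers
var_teacher  = 0.04                           # estimated variance of true teacher effects
prior_mean   = 0.0                            # mean of the teacher effect distribution

# Empirical Bayes shrinkage: the less precise an estimate (larger standard
# error), the more it is pulled toward the prior mean.
reliability = var_teacher / (var_teacher + std_error**2)
shrunk = prior_mean + reliability * (raw_estimate - prior_mean)

print(shrunk)  # the estimate with the largest standard error moves most toward 0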
Mathematica Policy Research Reports | 2015
Elias Walsh; Eric Isenberg
We compare teacher evaluation scores from a typical value-added model to results from the Colorado Growth Model (CGM), which 16 states currently use or plan to use as a component of their teacher performance evaluations. The CGM assigns a growth percentile to each student by comparing each student's achievement to that of other students with similar past test scores. The median (or average) growth percentile of a teacher's students provides the measure of teacher effectiveness. The CGM does not account for other student background characteristics and excludes other features included in many value-added models used by states and school districts. Using data from the District of Columbia Public Schools (DCPS), we examine changes in evaluation scores across the two methods for all teachers and for teacher subgroups. We find that use of growth percentiles in place of value added would have altered evaluation consequences for 14% of DCPS teachers. Most differences in evaluation scores based on the two methods are not related to the characteristics of students' teachers.
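As a rough illustration of the growth-percentile idea, the Python sketch below ranks each student's current score against students with similar prior scores and then takes the median by teacher. The actual CGM conditions on prior scores with quantile regression rather than the coarse bins used here, and the data and column names are invented for the example.

import pandas as pd

df = pd.DataFrame({
    "teacher": ["tA", "tA", "tA", "tB", "tB", "tB"],
    "prior_score":   [410, 455, 505, 412, 460, 500],
    "current_score": [430, 470, 520, 420, 490, 505],
})

# Group students with similar past test scores (coarse bins stand in for the
# CGM's conditioning on prior achievement).
df["prior_bin"] = pd.cut(df["prior_score"], bins=[400, 450, 500, 550])

# Growth percentile: percentile rank of the current score among students in the
# same prior-score group.
df["growth_pct"] = (
    df.groupby("prior_bin", observed=True)["current_score"].rank(pct=True) * 100
)

# Teacher measure: the median growth percentile of the teacher's students.
print(df.groupby("teacher")["growth_pct"].median())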
Journal of Research on Educational Effectiveness | 2015
Eric Isenberg; Elias Walsh
Abstract We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the Full Roster Method (FRM) that is feasible and practical, but it effectively counts co-taught students more than once—these students receive a full weight with each of their teachers, so such students receive extra weight when calculating the relationship between student characteristics and achievement. The improvement, known as the Full Roster-Plus Method, allows co-taught students to receive full weight with their teachers, but all students contribute equally to the calculation of the relationship between student characteristics and achievement. To investigate how the application of this method empirically changes value-added estimates, we use data from District of Columbia Public Schools. We find that there are very small empirical differences between the two methods.
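A hedged sketch of the weighting difference described above follows: under the Full Roster Method a co-taught student contributes a full observation for each teacher, while a Full Roster-Plus-style adjustment lets each student count only once toward the relationship between student characteristics and achievement. The column names and the simple one-over-the-number-of-teachers weight are illustrative assumptions, not the authors' estimator.

import pandas as pd

links = pd.DataFrame({
    "student": ["s1", "s1", "s2"],   # s1 is co-taught by tA and tB
    "teacher": ["tA", "tB", "tA"],
})

# Number of teachers linked to each student.
n_teachers = links.groupby("student")["teacher"].transform("count")

# Full Roster Method: full weight with every teacher, so s1's rows together
# count twice toward the student-characteristic coefficients.
links["w_full_roster"] = 1.0

# Full Roster-Plus-style idea: keep full weight on the teacher links, but let
# each student's rows sum to one for the covariate part of the model.
links["w_covariates"] = 1.0 / n_teachers

print(links)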
Journal of Research on Educational Effectiveness | 2015
Eric Isenberg; Bing-ru Teh; Elias Walsh
Abstract Researchers often presume that it is better to use administrative data from grades 4 and 5 than data from grades 6 through 8 for conducting research on teacher effectiveness that uses value-added models because (1) elementary school teachers teach all subjects to their students in self-contained classrooms and (2) classrooms are more homogenous at the elementary school level. We examined the first issue by using data on teacher–student links in which teachers of mathematics and/or English/language arts had verified the subjects and students they taught. We compared these data to teacher–student links from the original administrative data. Results show that instruction is often departmentalized in these grades. About one in six elementary school teachers in the original data was linked to a subject that he or she did not teach. To examine the second issue, we computed the variation in baseline student achievement within classes, between classes at the same school, and between schools. We found more within-school variation in pretest scores in middle school grades but an offsetting amount of between-school variation in upper elementary grades.
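The within-class, between-class, and between-school comparison can be illustrated with a simple decomposition of pretest-score variance, sketched below in Python with invented data; the article's actual estimator and sample are not reproduced here.

import pandas as pd

df = pd.DataFrame({
    "school":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "classroom": ["A1", "A1", "A2", "A2", "B1", "B1", "B2", "B2"],
    "pretest":   [400, 420, 460, 470, 500, 520, 430, 450],
})

grand_mean  = df["pretest"].mean()
school_mean = df.groupby("school")["pretest"].transform("mean")
class_mean  = df.groupby(["school", "classroom"])["pretest"].transform("mean")

# Because classes nest within schools, these three pieces sum to the overall
# (population) variance of the pretest scores.
between_school = ((school_mean - grand_mean) ** 2).mean()
between_class  = ((class_mean - school_mean) ** 2).mean()
within_class   = ((df["pretest"] - class_mean) ** 2).mean()

print(between_school, between_class, within_class)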
Mathematica Policy Research Reports | 2010
Steven Glazerman; Eric Isenberg; Sarah Dolfin; Martha Bleeker; Amy Johnson; Mary Grider; Matthew Jacobus
Mathematica Policy Research Reports | 2009
Eric Isenberg; Steven Glazerman; Martha Bleeker; Amy Johnson; Julieta Lugo-Gil; Mary Grider; Sarah Dolfin; Edward Britton
National Center for Education Evaluation and Regional Assistance | 2009
Eric Isenberg; Steven Glazerman; Martha Bleeker; Amy Johnson; Julieta Lugo-Gil; Mary Grider; Sarah Dolfin; Edward Britton
Mathematica Policy Research Reports | 2013
Eric Isenberg; Jeffrey Max; Philip Gleason; Liz Potamites; Robert Santillano; Heinrich Hock; Michael Hansen