
Publication


Featured research published by Kate Sullivan.


Journal of Research on Educational Effectiveness | 2014

Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling.

Elizabeth Tipton; Larry V. Hedges; Michael Vaden-Kiernan; Geoffrey D. Borman; Kate Sullivan; Sarah Caverly

Randomized experiments are often seen as the "gold standard" for causal research. Yet although experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined populations or on how units should be selected into an experiment to facilitate generalization. This article addresses the problem of sample selection in experiments by providing a method for selecting the sample so that the population and sample are similar in composition. The method begins by requiring that the inference population and eligibility criteria for the study be well defined before study recruitment begins. When the inference population and the population of eligible units differ, the article provides a method for sample recruitment based on stratified selection on a propensity score. The article situates the problem within the example of how to select districts for two scale-up experiments currently in recruitment.
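
The selection procedure the abstract describes lends itself to a short illustration. The Python sketch below shows one way stratified selection on a propensity score could work for district recruitment; the covariates, data, stratum count, and target sample size are all hypothetical assumptions for illustration, not details from the article.

# Minimal sketch of propensity-score stratified sampling for recruitment.
# All covariates, data, and tuning choices here are hypothetical; this
# illustrates the general technique, not the article's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Inference population: one row per district, with an eligibility flag.
pop = pd.DataFrame({
    "pct_frl": rng.uniform(0.1, 0.9, 1000),        # hypothetical covariate
    "log_enrollment": rng.normal(8.0, 1.0, 1000),  # hypothetical covariate
    "urban": rng.integers(0, 2, 1000),
})
pop["eligible"] = (pop["log_enrollment"] > 7.5).astype(int)

# 1. Estimate the propensity score: P(eligible | covariates).
X = pop[["pct_frl", "log_enrollment", "urban"]]
pop["pscore"] = LogisticRegression(max_iter=1000).fit(X, pop["eligible"]).predict_proba(X)[:, 1]

# 2. Stratify the full inference population on the propensity score.
pop["stratum"] = pd.qcut(pop["pscore"], q=5, labels=False)

# 3. Recruit eligible districts within each stratum, allocating the target
#    sample in proportion to each stratum's share of the population, so the
#    achieved sample mirrors the population's covariate composition. Strata
#    with few or no eligible districts signal where generalization is hard.
n_target = 60
shares = pop["stratum"].value_counts(normalize=True).sort_index()
eligible = pop[pop["eligible"] == 1]
sample = pd.concat(
    group.sample(n=min(len(group), round(n_target * shares[stratum])), random_state=0)
    for stratum, group in eligible.groupby("stratum")
)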


Journal of Research on Educational Effectiveness | 2016

Site Selection in Experiments: An Assessment of Site Recruitment and Generalizability in Two Scale-Up Studies.

Elizabeth Tipton; Lauren Fellers; Sarah Caverly; Michael Vaden-Kiernan; Geoffrey D. Borman; Kate Sullivan; Veronica Ruiz de Castilla

Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides the next step in this literature: a template for assessing generalizability after a study is completed. In this template, records from the recruitment process are first analyzed, comparing those who agreed to participate in the study with those who did not. Second, the final sample is compared to the original inference population and to different possible subsets, with the goal of determining where the results generalize best (and where they do not). Throughout, these methods are situated in the post hoc analysis of results from two scale-up studies. The article ends with a discussion of the use of these methods more generally when reporting results from randomized trials.
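
A minimal sketch of the kind of post hoc comparison such a template calls for: compute, for each covariate, the standardized difference between the achieved sample and the inference population. The data and covariate names below are hypothetical, and the code illustrates the general idea rather than the authors' analysis.

# Sketch of a post hoc generalizability check: compare the achieved sample
# to the inference population covariate by covariate, using standardized
# mean differences (SMDs). Data and covariate names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical inference population and achieved sample of districts.
population = pd.DataFrame({
    "pct_frl": rng.uniform(0.1, 0.9, 1000),
    "enrollment": rng.lognormal(8.0, 1.0, 1000),
})
sample = population.sample(n=50, random_state=1)  # stand-in for the real sample

def smd(s: pd.Series, p: pd.Series) -> float:
    """Sample-minus-population mean difference in population-SD units."""
    return (s.mean() - p.mean()) / p.std(ddof=1)

balance = pd.Series({c: smd(sample[c], population[c]) for c in population.columns})

# A common rule of thumb treats |SMD| below roughly 0.25 as adequate
# similarity; covariates with larger gaps flag subpopulations to which
# the experimental results may not generalize.
print(balance.round(3))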


Journal of Research on Educational Effectiveness | 2018

Findings From a Multiyear Scale-Up Effectiveness Trial of Open Court Reading

Michael Vaden-Kiernan; Geoffrey D. Borman; Sarah Caverly; Nance Bell; Kate Sullivan; Veronica Ruiz de Castilla; Grace Fleming; Debra Rodriguez; Chad Henry; Tracy Long; Debra Hughes Jones

This multiyear scale-up effectiveness study of Open Court Reading (OCR) involved approximately 4,500 students and more than 1,000 teachers per year in Grades K–5 from 49 elementary schools in seven districts across the country. Using a school-level cluster randomized trial design, we assessed the implementation and effectiveness of Open Court Reading over two years. Implementation study results demonstrated adequate to high levels of fidelity across the treatment schools. Intent-to-treat analyses revealed no statistically significant main effects on students' reading performance in Year 1 and a small negative effect (d = –.09) in Year 2. There were positive impacts for particular subgroups, including kindergarten (d = .12) and Hispanic (d = .10) students in the first year. However, there were negative impacts for first-grade students (d = –.13), females (d = –.11), students not eligible for free or reduced-price lunch (d = –.19), and non-English language learners (d = –.10) in the second year of the study. Thus, relative to "business-as-usual" reading curricula, the study found no positive overall impact of OCR and mixed impacts for student subgroups.
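
For reference, the impacts quoted above are standardized mean differences (effect sizes d). The sketch below shows the basic pooled two-group formula; it is illustrative only, since a cluster randomized trial like this one would typically report model-based effect sizes from a multilevel analysis rather than this simple version.

# Minimal sketch of a standardized mean difference (Cohen's d), the scale on
# which impacts like d = -.09 are reported. Illustrative assumption: the
# actual study would use model-based estimates, not this pooled formula.
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    nt, nc = len(treatment), len(control)
    pooled_var = (
        (nt - 1) * treatment.var(ddof=1) + (nc - 1) * control.var(ddof=1)
    ) / (nt + nc - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# Hypothetical reading scores producing a small negative effect, similar
# in size to the Year 2 result.
rng = np.random.default_rng(2)
print(cohens_d(rng.normal(0.0, 1.0, 2000), rng.normal(0.09, 1.0, 2000)))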


Economics of Education Review | 2018

Can UTeach? Assessing the relative effectiveness of STEM teachers

Ben Backes; Dan Goldhaber; Whitney Cade; Kate Sullivan; Melissa Dodson


Society for Research on Educational Effectiveness | 2016

Implementation Work at Scale: An Examination of the Fidelity of Implementation Study of the Scale-Up Effectiveness Trial of Open Court Reading.

Kate Sullivan; Nance Bell; Debra Hughes Jones; Sarah Caverly; Michael Vaden-Kiernan


Society for Research on Educational Effectiveness | 2016

Findings from a Multi-Year Scale-Up Effectiveness Trial of Open-Court Reading (Imagine It!).

Michael Vaden-Kiernan; Geoffrey D. Borman; Sarah Caverly; Nance Bell; Veronica Ruiz de Castilla; Kate Sullivan; Grace Fleming


National Center for Analysis of Longitudinal Data in Education Research (CALDER) | 2016

Can UTeach? Assessing the Relative Effectiveness of STEM Teachers. Working Paper 173.

Ben Backes; Dan Goldhaber; Whitney Cade; Kate Sullivan; Melissa Dodson


Society for Research on Educational Effectiveness | 2015

Findings from a Multi-Year Scale-up Effectiveness Trial of Everyday Mathematics.

Michael Vaden-Kiernan; Geoffrey D. Borman; Sarah Caverly; Nance Bell; Veronica Ruiz de Castilla; Kate Sullivan; Debra Rodriguez


Society for Research on Educational Effectiveness | 2015

Preliminary Findings from a Multi-Year Scale-Up Effectiveness Trial of Everyday Mathematics.

Michael Vaden-Kiernan; Geoffrey D. Borman; Sarah Caverly; Nance Bell; Veronica Ruiz de Castilla; Kate Sullivan


Society for Research on Educational Effectiveness | 2015

Preliminary Findings from a Multi-Year Scale-Up Effectiveness Trial of Open-Court Reading (Imagine It!).

Geoffrey D. Borman; Michael Vaden-Kiernan; Sarah Caverly; Nance Bell; Veronica Ruiz de Castilla; Kate Sullivan

Collaboration


Dive into Kate Sullivan's collaborations.

Top Co-Authors

Michael Vaden-Kiernan (American Institutes for Research)
Sarah Caverly (American Institutes for Research)
Geoffrey D. Borman (University of Wisconsin-Madison)
Nance Bell (American Institutes for Research)
Veronica Ruiz de Castilla (American Institutes for Research)
Ben Backes (American Institutes for Research)
Dan Goldhaber (American Institutes for Research)
Debra Rodriguez (American Institutes for Research)
Grace Fleming (American Institutes for Research)