Publications


Featured research published by Vivian C. Wong.


Journal of Research on Educational Effectiveness | 2017

Empirical Performance of Covariates in Education Observational Studies

Vivian C. Wong; Jeffrey C. Valentine; Kate Miller-Bains

This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We examine the performance of three common covariate types in observational studies where the outcome is a standardized reading or math test: pretest measures, local geographic matching, and rich covariate sets paired with a strong theory of treatment selection. Overall, the review demonstrates that although the pretest often reduces bias in observational studies, it does not always eliminate it. Its performance depends on the pretest's correlation with treatment selection and the outcome, and on whether preintervention trends are present. We also find that although local comparisons are often prioritized for matching, their performance depends on whether comparable no-treatment cases are available; otherwise, local comparisons may produce badly biased results. In cases where researchers have a strong theory of selection and rich covariate sets, observational methods perform well, but additional replication studies are needed. Finally, observational methods that rely on demographic covariates without a theory of selection rarely produce unbiased treatment effects. The article concludes by offering education researchers empirically based guidance on covariate selection in observational studies.
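
A minimal simulation sketch (mine, not the article's) may make the pretest result concrete: when selection into treatment follows the pretest, the naive difference in means is biased, while adjusting for the pretest recovers the true effect. All variable names and parameter values below are illustrative assumptions.

```python
# Illustrative sketch (not from the article): pretest-driven selection
# biases the naive estimate; regression adjustment for the pretest fixes it.
import numpy as np

rng = np.random.default_rng(0)
n, tau = 10_000, 5.0                       # sample size, true treatment effect

pretest = rng.normal(50, 10, n)            # e.g., prior-year test score
# Treatment selection depends on the pretest (lower scorers opt in more often).
p_treat = 1 / (1 + np.exp((pretest - 50) / 5))
treated = rng.binomial(1, p_treat)
# Outcome depends on the pretest and on the treatment.
outcome = 0.8 * pretest + tau * treated + rng.normal(0, 5, n)

# Naive difference in means is biased because selection follows the pretest.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# OLS adjusting for the pretest removes the bias (selection is on the pretest).
X = np.column_stack([np.ones(n), treated, pretest])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]

print(f"true effect:                {tau:.2f}")
print(f"naive difference in means:  {naive:.2f}")    # biased downward
print(f"pretest-adjusted estimate:  {beta[1]:.2f}")  # close to tau
```

In this idealized setup selection runs entirely through the pretest, so adjustment eliminates the bias; the article's point is that real selection rarely follows the pretest so cleanly, which is why the pretest reduces bias but does not always eliminate it.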


Evaluation Review | 2018

Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings

Vivian C. Wong; Peter M. Steiner

Over the last three decades, a research design has emerged for evaluating the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach, or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods. The goals of the WSC are to determine whether the nonexperiment can replicate results from a randomized experiment (which provides the causal benchmark estimate) and to identify the contexts and conditions under which these methods work in practice. This article presents a coherent theory of the design and implementation of WSCs for evaluating NE methods. It identifies the multiple purposes of WSCs, required design components, common threats to validity, design variants, and causal estimands of interest. It highlights two general approaches to empirical evaluations of methods in field settings: WSC designs with independent, and with dependent, benchmark and NE arms. The article discusses the advantages and disadvantages of each approach, and the conditions and contexts under which each is optimal for addressing methodological questions.
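
To make the WSC logic concrete, here is a minimal simulation sketch, assuming a single unobserved confounder drives selection in the NE arm; nothing here comes from the article itself, and every name and value is an illustrative assumption.

```python
# Illustrative sketch (assumptions mine, not the authors'): estimate the
# benchmark effect from a randomized arm, the nonexperimental effect from a
# self-selected arm drawn from the same population, and report their
# difference as an estimate of NE bias.
import numpy as np

rng = np.random.default_rng(1)
n, tau = 20_000, 3.0
ability = rng.normal(0, 1, n)                 # unobserved confounder

def outcome(treated):
    return 2.0 * ability + tau * treated + rng.normal(0, 1, n)

# Benchmark arm: randomized assignment provides the causal benchmark.
rct_treated = rng.binomial(1, 0.5, n)
y_rct = outcome(rct_treated)
benchmark = y_rct[rct_treated == 1].mean() - y_rct[rct_treated == 0].mean()

# NE arm: self-selection on ability, estimated without any adjustment.
ne_treated = rng.binomial(1, 1 / (1 + np.exp(-ability)))
y_ne = outcome(ne_treated)
ne_estimate = y_ne[ne_treated == 1].mean() - y_ne[ne_treated == 0].mean()

print(f"benchmark (RCT) estimate: {benchmark:.2f}")  # close to tau
print(f"nonexperimental estimate: {ne_estimate:.2f}")
print(f"estimated NE bias:        {ne_estimate - benchmark:.2f}")
```

The difference between the two estimates measures NE bias only because both arms share the same target population, which is exactly the design requirement the WSC framework emphasizes.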


Educational Researcher | 2017

Did States Use Implementation Discretion to Reduce the Stringency of NCLB? Evidence from a Database of State Regulations.

Vivian C. Wong; Coady Wing; David A. Martin; Anandita Krishnamachari

When No Child Left Behind (NCLB) became law in 2002, it was viewed as an effort to create uniform standards for students and schools across the country. More than a decade later, we know surprisingly little about how states actually implemented NCLB and the extent to which state implementation decisions managed to undo the centralizing objectives of the law. This paper introduces a state-level measure of NCLB stringency that helps shed light on these issues. The measure is available for 49 states and the District of Columbia and covers most years under NCLB (2003–2011). Importantly, the measure does not depend on population characteristics of the state. It varies only because of state-level decisions about rule exemptions, standards, and proficiency trajectories. Overall, we find that while NCLB was successful in encouraging states to adopt higher and more consistent performance standards for schools, it also provided much more flexibility and customization in state-level accountability policies than is generally realized.


Evaluation Review | 2018

Assessing Correspondence Between Experimental and Nonexperimental Estimates in Within-Study Comparisons

Peter M. Steiner; Vivian C. Wong


Evaluation Review | 2018

What Can Be Learned From Empirical Evaluations of Nonexperimental Methods

Vivian C. Wong; Peter M. Steiner; Kylie L. Anglin


2017 APPAM Fall Research Conference | 2017

Using Within-Study Comparison Approaches to Examine Systematic Variation and Generalization of Treatment Effects

Vivian C. Wong


2017 APPAM Fall Research Conference | 2017

Assessing Correspondence in (Design)-Replication Studies

Vivian C. Wong


2017 APPAM Fall Research Conference | 2017

Do Students Respond to Accountability Pressures? Evidence from NCLB Implementation Details

Vivian C. Wong


Society for Research on Educational Effectiveness | 2016

Do Schools Respond to Pressure? Evidence from NCLB Implementation Details.

Vivian C. Wong; Coady Wing; David A. Martin


2015 Fall Conference: The Golden Age of Evidence-Based Policy | 2015

Methods for Assessing Correspondence in Non-Experimental and Benchmark Results in within-Study Comparison Designs: Results from an Evaluation of Repeated Measures Approaches

Vivian C. Wong

Collaboration


Dive into Vivian C. Wong's collaborations.

Top Co-Authors

Peter M. Steiner

University of Wisconsin-Madison

Coady Wing

Indiana University Bloomington