
Publications


Featured research published by Elizabeth Tipton.


Psychological Bulletin | 2013

The Malleability of Spatial Skills: A Meta-analysis of Training Studies

David H. Uttal; Nathaniel Meadow; Elizabeth Tipton; Linda Liu Hand; Alison R. Alden; Christopher M. Warren; Nora S. Newcombe

Having good spatial skills strongly predicts achievement and attainment in science, technology, engineering, and mathematics fields (e.g., Shea, Lubinski, & Benbow, 2001; Wai, Lubinski, & Benbow, 2009). Improving spatial skills is therefore of both theoretical and practical importance. To determine whether and to what extent training and experience can improve these skills, we meta-analyzed 217 research studies investigating the magnitude, moderators, durability, and generalizability of training on spatial skills. After eliminating outliers, the average effect size (Hedges's g) for training relative to control was 0.47 (SE = 0.04). Training effects were stable and were not affected by delays between training and posttesting. Training also transferred to other spatial tasks that were not directly trained. We analyzed the effects of several moderators, including the presence and type of control groups, sex, age, and type of training. Additionally, we included a theoretically motivated typology of spatial skills that emphasizes 2 dimensions: intrinsic versus extrinsic and static versus dynamic (Newcombe & Shipley, in press). Finally, we consider the potential educational and policy implications of directly training spatial skills. Considered together, the results suggest that spatially enriched education could pay substantial dividends in increasing participation in mathematics, science, and engineering.
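For orientation, Hedges's g is a standardized mean difference with a small-sample bias correction. A minimal sketch of how such an effect size is typically computed (the function and example numbers below are illustrative, not taken from the meta-analysis):

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges's small-sample correction."""
    # Pooled standard deviation across treatment and control groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    # Approximate small-sample correction factor J
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return j * d

# Hypothetical training study: treated group scores slightly higher
print(round(hedges_g(10.5, 9.2, 2.8, 3.0, 40, 42), 2))  # ~0.44
```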


Research Synthesis Methods | 2010

Robust variance estimation in meta-regression with dependent effect size estimates

Larry V. Hedges; Elizabeth Tipton; Matthew C. Johnson

Conventional meta-analytic techniques rely on the assumption that effect size estimates from different studies are independent and have sampling distributions with known conditional variances. The independence assumption is violated when studies produce several estimates based on the same individuals or there are clusters of studies that are not independent (such as those carried out by the same investigator or laboratory). This paper provides an estimator of the covariance matrix of meta-regression coefficients that is applicable when there are clusters of internally correlated estimates. It makes no assumptions about the specific form of the sampling distributions of the effect sizes, nor does it require knowledge of the covariance structure of the dependent estimates. Moreover, this paper demonstrates that the meta-regression coefficients are consistent and asymptotically normally distributed and that the robust variance estimator is valid even when the covariates are random. The theory is asymptotic in the number of studies, but simulations suggest that the theory may yield accurate results with as few as 20-40 studies.
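At its core, the estimator described above is a cluster-robust "sandwich" variance estimate built from within-study residuals. A minimal NumPy sketch of that general construction, assuming each study's design matrix, effect sizes, and working weight matrix are already assembled (the names and the simple weighted least squares setup are illustrative, not the paper's exact specification):

```python
import numpy as np

def rve_meta_regression(X_list, y_list, W_list):
    """Meta-regression coefficients with a cluster-robust (sandwich) variance
    estimate; each list entry holds one study's design matrix, effect sizes,
    and working weight matrix for its dependent effect sizes."""
    # Weighted least squares estimate of the meta-regression coefficients
    XtWX = sum(X.T @ W @ X for X, W in zip(X_list, W_list))
    XtWy = sum(X.T @ W @ y for X, y, W in zip(X_list, y_list, W_list))
    beta = np.linalg.solve(XtWX, XtWy)

    # Sandwich "meat": cross-products of weighted within-study residuals
    bread_inv = np.linalg.inv(XtWX)
    meat = np.zeros_like(XtWX)
    for X, y, W in zip(X_list, y_list, W_list):
        e = y - X @ beta
        meat += X.T @ W @ np.outer(e, e) @ W @ X

    V_robust = bread_inv @ meat @ bread_inv  # robust covariance of beta
    return beta, V_robust
```

Because the residual cross-products estimate the true dependence empirically, no assumption about the within-study covariance structure is needed, which is the property the abstract emphasizes.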


Research Synthesis Methods | 2014

Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS.

Emily E. Tanner-Smith; Elizabeth Tipton

Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates.
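Both macros implement a working model for dependent effects; in the commonly described correlated-effects case, each of a study's k_j effect sizes receives an approximately equal share of the study-level inverse-variance weight. A minimal sketch of that weighting idea, under the simple approximation usually quoted in this literature (not the macros' exact internals):

```python
def correlated_effects_weight(v_bar_j, k_j, tau_sq):
    """Approximate weight given to each of study j's k_j dependent effect
    sizes under the correlated-effects working model.

    v_bar_j : average sampling variance of the study's effect sizes
    k_j     : number of effect sizes the study contributes
    tau_sq  : estimated between-study variance component
    """
    # Spread the study-level inverse-variance weight equally over its effects
    return 1.0 / (k_j * (v_bar_j + tau_sq))
```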


Psychological Methods | 2015

Small sample adjustments for robust variance estimation with meta-regression.

Elizabeth Tipton

Although primary studies often report multiple outcomes, the covariances between these outcomes are rarely reported. This leads to difficulties when combining studies in a meta-analysis. This problem was recently addressed with the introduction of robust variance estimation. This new method enables the estimation of meta-regression models with dependent effect sizes, even when the dependence structure is unknown. Although robust variance estimation has been shown to perform well when the number of studies in the meta-analysis is large, previous simulation studies suggest that the associated tests often have Type I error rates that are much larger than nominal. In this article, I introduce 6 estimators with better small sample properties and study the effectiveness of these estimators via 2 simulation studies. The results of these simulations suggest that the best estimator involves correcting both the residuals and degrees of freedom used in the robust variance estimator. These studies also suggest that the degrees of freedom depend on not only the number of studies but also the type of covariates in the meta-regression. The fact that the degrees of freedom can be small, even when the number of studies is large, suggests that these small-sample corrections should be used more generally. I conclude with an example comparing the results of a meta-regression with robust variance estimation with the results from the corrected estimator.
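Schematically, the correction combines two pieces: within-study residuals are rescaled before entering the sandwich estimator, and test statistics are referred to a t distribution with estimated (Satterthwaite-type) degrees of freedom rather than the number of studies minus the number of predictors. A hedged sketch of the general form (the paper's exact adjustment matrices and degrees-of-freedom formula are more involved):

\[
V^{R}_{\text{adj}} \;=\; \Big(\sum_j X_j' W_j X_j\Big)^{-1}
\Big[\sum_j X_j' W_j A_j e_j e_j' A_j' W_j X_j\Big]
\Big(\sum_j X_j' W_j X_j\Big)^{-1},
\qquad A_j \approx (I_j - H_{jj})^{-1/2},
\]

where \(H_{jj}\) is study \(j\)'s block of the hat matrix; tests on individual coefficients then use \(t\) critical values with Satterthwaite-estimated degrees of freedom.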


Journal of Educational and Behavioral Statistics | 2013

Improving Generalizations From Experiments Using Propensity Score Subclassification: Assumptions, Properties, and Contexts

Elizabeth Tipton

As a result of the use of random assignment to treatment, randomized experiments typically have high internal validity. However, units are very rarely randomly selected from a well-defined population of interest into an experiment; this results in low external validity. Under nonrandom sampling, this means that the estimate of the sample average treatment effect calculated in the experiment can be a biased estimate of the population average treatment effect. This article explores the use of the propensity score subclassification estimator as a means for improving generalizations from experiments. It first lays out the assumptions necessary for generalizations, then investigates the amount of bias reduction and average variance inflation that is likely when compared to a conventional estimator. It concludes with a discussion of issues that arise when the population of interest is not well represented by the experiment, and an example.
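A minimal sketch of the reweighting step this describes: units are stratified on an estimated propensity of selection into the experiment, and within-stratum experimental contrasts are reweighted by population stratum shares (the data-frame column names and the choice of five quantile strata are illustrative assumptions):

```python
import numpy as np
import pandas as pd

def subclassification_pate(sample, population, score, outcome, treat, n_strata=5):
    """Estimate the population average treatment effect by reweighting
    within-stratum treatment/control contrasts from the experimental sample
    by the population share of each propensity-score stratum."""
    # Strata defined by quantiles of the estimated selection propensity score
    cuts = np.quantile(population[score], np.linspace(0, 1, n_strata + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    pop_strata = pd.cut(population[score], cuts, labels=False)
    samp_strata = pd.cut(sample[score], cuts, labels=False)

    pate = 0.0
    for s in range(n_strata):
        share = (pop_strata == s).mean()          # population share of stratum s
        in_s = sample[samp_strata == s]
        # Within-stratum difference in means between treated and control units
        effect_s = (in_s.loc[in_s[treat] == 1, outcome].mean()
                    - in_s.loc[in_s[treat] == 0, outcome].mean())
        pate += share * effect_s
    return pate
```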


Journal of Research on Educational Effectiveness | 2014

Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling.

Elizabeth Tipton; Larry V. Hedges; Michael Vaden-Kiernan; Geoffrey D. Borman; Kate Sullivan; Sarah Caverly

Randomized experiments are often seen as the “gold standard” for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined populations or on how units should be selected into an experiment to facilitate generalization. This article addresses the problem of sample selection in experiments by providing a method for selecting the sample so that the population and sample are similar in composition. The method begins by requiring that the inference population and eligibility criteria for the study are well defined before study recruitment begins. When the inference population and population of eligible units differ, the article provides a method for sample recruitment based on stratified selection on a propensity score. The article situates the problem within the example of how to select districts for two scale-up experiments currently in recruitment.
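A minimal sketch of the recruitment step summarized above: eligible units are assigned to strata defined on the inference population's estimated propensity scores, and each stratum receives a recruitment target proportional to its population share (function and column names are illustrative; the article's own recruitment rules are richer than this):

```python
import numpy as np
import pandas as pd

def stratified_recruitment_targets(eligible, population, score, n_sites, n_strata=5):
    """Allocate recruitment targets to propensity-score strata in proportion
    to each stratum's share of the inference population; return the targets
    together with the stratum assignment of every eligible unit."""
    cuts = np.quantile(population[score], np.linspace(0, 1, n_strata + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    pop_strata = pd.cut(population[score], cuts, labels=False)
    elig_strata = pd.cut(eligible[score], cuts, labels=False)

    targets = {s: round(n_sites * (pop_strata == s).mean()) for s in range(n_strata)}
    # Recruiters then work within each stratum until its target is met,
    # replacing refusals with other eligible units from the same stratum.
    return targets, elig_strata
```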


Research Synthesis Methods | 2013

Robust variance estimation in meta‐regression with binary dependent effects

Elizabeth Tipton

Dependent effect size estimates are a common problem in meta-analysis. Recently, a robust variance estimation method was introduced that can be used whenever effect sizes in a meta-analysis are not independent. This problem arises, for example, when effect sizes are nested or when multiple measures are collected on the same individuals. In this paper, we investigate the robustness of this method in small samples when the effect size of interest is the risk difference, log risk ratio, or log odds ratio. This simulation study examines the accuracy of 95% confidence intervals constructed using the robust variance estimator across a large variety of parameter values. We report results for estimation of both the mean effect (intercept) and a slope. The results indicate that the robust variance estimator performs well even when the number of studies is as small as 10, although coverage is generally less than nominal in the slope estimation case. Throughout, an example based on a meta-analysis of cognitive behavior therapy is used for motivation.
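For reference, the three binary-outcome effect sizes studied here have standard closed-form estimates and large-sample variances from a 2 x 2 table; a minimal sketch (the a, b, c, d cell labels follow the usual events/non-events by treatment/control layout and are not tied to the paper's data):

```python
import math

def binary_effect_sizes(a, b, c, d):
    """Risk difference, log risk ratio, and log odds ratio with their
    large-sample variances, from a 2 x 2 table where a, b are events and
    non-events in the treatment group and c, d the same in the control group."""
    n1, n2 = a + b, c + d
    p1, p2 = a / n1, c / n2

    rd = p1 - p2                                         # risk difference
    var_rd = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2

    log_rr = math.log(p1 / p2)                           # log risk ratio
    var_log_rr = 1 / a - 1 / n1 + 1 / c - 1 / n2

    log_or = math.log((a * d) / (b * c))                 # log odds ratio
    var_log_or = 1 / a + 1 / b + 1 / c + 1 / d

    return {"RD": (rd, var_rd), "logRR": (log_rr, var_log_rr), "logOR": (log_or, var_log_or)}
```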


Journal of Educational and Behavioral Statistics | 2014

How Generalizable Is Your Experiment? An Index for Comparing Experimental Samples and Populations

Elizabeth Tipton

Although a large-scale experiment can provide an estimate of the average causal impact for a program, the sample of sites included in the experiment is often not drawn randomly from the inference population of interest. In this article, we provide a generalizability index that can be used to assess the degree of similarity between the sample of units in an experiment and one or more inference populations on a set of selected covariates. The index takes values between 0 and 1 and indicates both when a sample is like a miniature of the population and how well reweighting methods may perform when differences exist. Results of simulation studies are provided that develop rules of thumb for interpretation as well as an example.
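The index is a measure of how closely the sample's distribution of an estimated propensity (selection) score matches the population's. A minimal sketch of one common way to compute such an overlap measure, using a binned Bhattacharyya-style coefficient (the paper's estimator may use kernel density estimates rather than histogram bins):

```python
import numpy as np

def generalizability_index(sample_scores, pop_scores, n_bins=30):
    """Overlap index in [0, 1] comparing the sample and population
    distributions of an estimated propensity (selection) score."""
    sample_scores = np.asarray(sample_scores, dtype=float)
    pop_scores = np.asarray(pop_scores, dtype=float)
    lo = min(sample_scores.min(), pop_scores.min())
    hi = max(sample_scores.max(), pop_scores.max())
    bins = np.linspace(lo, hi, n_bins + 1)

    # Bin proportions for sample and population
    f_s = np.histogram(sample_scores, bins=bins)[0] / len(sample_scores)
    f_p = np.histogram(pop_scores, bins=bins)[0] / len(pop_scores)

    # Bhattacharyya-style coefficient: 1 when the two distributions coincide
    return float(np.sum(np.sqrt(f_s * f_p)))
```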


Journal of Educational and Behavioral Statistics | 2015

Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression

Elizabeth Tipton; James E. Pustejovsky

Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance estimation (RVE) provides a method for pooling dependent effects, even when information on the exact dependence structure is not available. When the number of studies is small or moderate, however, test statistics and confidence intervals based on RVE can have inflated Type I error. This article describes and investigates several small-sample adjustments to F-statistics based on RVE. Simulation results demonstrate that one such test, which approximates the test statistic using Hotelling’s T² distribution, is level-α and uniformly more powerful than the others. An empirical application demonstrates how results based on this test compare to the large-sample F-test.
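The approximation referenced here converts a Wald-type statistic for a set of q linear constraints into an F reference distribution via the usual Hotelling's T² relationship. Schematically, with \(\eta\) an estimated degrees-of-freedom parameter (a sketch of the general form, not the paper's full derivation):

\[
Q = (C\hat{\beta})' \big(C V^{R} C'\big)^{-1} (C\hat{\beta}),
\qquad
\tilde{F} = \frac{\eta - q + 1}{\eta\, q}\, Q \;\sim\; F\big(q,\ \eta - q + 1\big) \ \text{(approximately)},
\]

so that for q = 1 the test reduces to the familiar t-based RVE test with \(\eta\) degrees of freedom.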


Evaluation Review | 2013

Stratified sampling using cluster analysis: a sample selection strategy for improved generalizations from experiments.

Elizabeth Tipton

Background: An important question in the design of experiments is how to ensure that the findings from the experiment are generalizable to a larger population. This concern with generalizability is particularly important when treatment effects are heterogeneous and when selecting units into the experiment using random sampling is not possible, two conditions commonly met in large-scale educational experiments.

Method: This article introduces a model-based balanced-sampling framework for improving generalizations, with a focus on developing methods that are robust to model misspecification. Additionally, the article provides a new method for sample selection within this framework: first, units in an inference population are divided into relatively homogeneous strata using cluster analysis, and then the sample is selected using distance rankings.

Results: In order to demonstrate and evaluate the method, a reanalysis of a completed experiment is conducted. This example compares samples selected using the new method with the actual sample used in the experiment. Results indicate that even under high nonresponse, balance is better on most covariates and that fewer coverage errors result.

Conclusion: The article concludes with a discussion of additional benefits and limitations of the method.
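A minimal sketch of the two-step procedure summarized above: cluster the inference population on covariates to form relatively homogeneous strata, then within each stratum rank units by distance to the stratum center as a recruitment (and replacement) order. The use of scikit-learn's k-means and the specific parameter values are illustrative assumptions, not the article's exact choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_recruitment_lists(pop_covariates, n_strata=6, list_length=15):
    """Divide the population into covariate-based strata (k-means) and return,
    for each stratum, candidate units ranked by distance to the stratum
    centroid; units are approached in that order, so later entries serve as
    replacements under nonresponse."""
    X = np.asarray(pop_covariates, dtype=float)
    km = KMeans(n_clusters=n_strata, n_init=10, random_state=0).fit(X)

    lists = {}
    for s in range(n_strata):
        members = np.where(km.labels_ == s)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[s], axis=1)
        lists[s] = members[np.argsort(dists)][:list_length]
    return lists
```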

Collaboration


Dive into Elizabeth Tipton's collaborations.

Top Co-Authors

Geoffrey D. Borman, University of Wisconsin-Madison
James E. Pustejovsky, University of Texas at Austin
Kate Sullivan, American Institutes for Research
Michael Vaden-Kiernan, American Institutes for Research
Sarah Caverly, American Institutes for Research
Christopher D. Wilson, Biological Sciences Curriculum Study