Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jaime Thomas is active.

Publications


Featured research published by Jaime Thomas.


Evaluation Review | 2017

External Validity: The Next Step for Systematic Reviews?

Sarah A. Avellar; Jaime Thomas; Rebecca Kleinman; Emily Sama-Miller; Sara E. Woodruff; Rebecca Coughlin; T’Pring R. Westbrook

Background: Systematic reviews—which identify, assess, and summarize existing research—are usually designed to determine whether research shows that an intervention has evidence of effectiveness, rather than whether an intervention will work under different circumstances. The reviews typically focus on the internal validity of the research and do not consistently incorporate information on external validity into their conclusions. Objectives: In this article, we focus on how systematic reviews address external validity. Methods: We conducted a brief scan of 19 systematic reviews and a more in-depth examination of information presented in a systematic review of home visiting research. Results: We found that many reviews do not provide information on generalizability, such as statistical representativeness, but focus on factors likely to increase heterogeneity (e.g., numbers of studies or settings) and report on context. The latter may help users decide whether the research characteristics—such as sample demographics or settings—are similar to their own. However, we found that differences in reporting, such as which variables are included and how they are measured, make it difficult to summarize across studies or make basic determinations of sample characteristics, such as whether the majority of a sample was unemployed or married. Conclusion: Evaluation research and systematic reviews would benefit from reporting guidelines for external validity to ensure that key information is reported across studies.


Evaluation Review | 2017

A Trusted Source of Information

Diane Paulsell; Jaime Thomas; Shannon Monahan; Neil Seftor

Background: Systematic reviews sponsored by federal departments or agencies play an increasingly important role in disseminating information about evidence-based programs and have become a trusted source of information for administrators and practitioners seeking evidence-based programs to implement. These users vary in their knowledge of evaluation methods and their ability to interpret systematic review findings. They must consider factors beyond program effectiveness when selecting an intervention, such as the relevance of the intervention to their target population, community context, and service delivery system; readiness for replication and scale-up; and the ability of their service delivery system or agency to implement the intervention. Objective: To support user decisions about adopting evidence-based practices, this article discusses current systematic review practices and alternative approaches to synthesizing and presenting findings and providing information. Method: We reviewed the publicly available information on review methodology and findings for eight federally funded systematic reviews in the labor, education, early childhood, mental health/substance abuse, family support, and criminal justice topic areas. Conclusion: The eight federally sponsored evidence reviews we examined all provide information that can help users to interpret findings on evidence of effectiveness and to make adoption decisions. However, they are uneven in the amount, accessibility, and consistency of information they report. For all eight reviews, there is room for improvement in supporting users’ adoption decisions through more detailed, accessible, and consistent information in these areas.


Evaluation Review | 2017

Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions

Jaime Thomas; Sarah A. Avellar; John Deke; Philip Gleason

Background: Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. Objective: To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? Methods: We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study—Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods—matching and regression adjustment—to account for preexisting differences between intervention and comparison groups. Results: A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.
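The abstract's central claim can be illustrated with a minimal simulation sketch. This is not the authors' actual simulation design (which used Early Childhood Longitudinal Study data and compared matching against regression adjustment); it assumes a simple linear data-generating process to show the core idea: when treated and comparison groups differ on an observed baseline covariate, a naive difference in means is biased, while regression adjustment that includes the covariate recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 100_000, 2.0  # sample size, true treatment effect

# Baseline covariate and non-random selection into treatment:
# units with higher x are more likely to be treated, creating
# a preexisting difference between the two groups.
x = rng.normal(size=n)
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-x))).astype(float)

# Outcome depends on treatment AND on the baseline covariate.
y = tau * t + 2.0 * x + rng.normal(size=n)

# Naive difference in means ignores the imbalance in x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Regression adjustment: OLS of y on an intercept, t, and x.
X = np.column_stack([np.ones(n), t, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]

print(f"true effect:        {tau:.2f}")
print(f"naive estimate:     {naive:.2f}")   # biased upward
print(f"adjusted estimate:  {adjusted:.2f}")  # near the true effect
```

The flip side, which the abstract also stresses, is that adjustment only removes bias from covariates that are observed and included: if the relevant baseline variables explain little outcome variation, unmeasured differences can still bias nonexperimental estimates.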


Early Education and Development | 2017

Ecomapping as a Research Tool for Informal Child Care

Cleo Jacobs Johnson; Jaime Thomas; Kimberly Boller

Research Findings: Many young children spend crucial developmental years in informal, home-based child care (HBCC) settings, but parents and others share concerns about HBCC quality. We applied the ecomap method in a descriptive study of racially, ethnically, and linguistically diverse informal caregivers and parents to capture their informal caregiving arrangements, social networks, and social supports. For the parents in our study, informal child care was a flexible, affordable, and readily accessible child care option. For caregivers, informal care provided opportunities to help family and friends and to earn extra income. Social networks were characterized largely by relationships with family and friends, and social supports tended to be strong and mutually beneficial. Practice or Policy: We found that ecomapping is an engaging and flexible method of capturing and understanding complex child care arrangements and social networks in HBCC settings. As a research tool, ecomapping can help improve the quality of care in informal settings by identifying and meeting the needs of parents and HBCC providers and informing program development and delivery strategies.


Evaluation Review | 2018

The Sequential Scale-Up of an Evidence-Based Intervention: A Case Study

Jaime Thomas; Thomas D. Cook; Alice Klein; Prentice Starkey; Lydia DeFlorio

Policy makers face dilemmas when choosing a policy, program, or practice to implement. Researchers in education, public health, and other fields have proposed a sequential approach to identifying interventions worthy of broader adoption, involving pilot, efficacy, effectiveness, and scale-up studies. In this article, we examine a scale-up of an early math intervention to the state level, using a cluster randomized controlled trial. The intervention, Pre-K Mathematics, has produced robust positive effects on children’s math ability in prior pilot, efficacy, and effectiveness studies. In the current study, we ask if it remains effective at a larger scale in a heterogeneous collection of pre-K programs that plausibly represent all low-income families with a child of pre-K age who live in California. We find that Pre-K Mathematics remains effective at the state level, with positive and statistically significant effects (effect size on the Early Childhood Longitudinal Study, Birth Cohort Mathematics Assessment = .30, p < .01). In addition, we develop a framework of the dimensions of scale-up to explain why effect sizes might decrease as scale increases. Using this framework, we compare the causal estimates from the present study to those from earlier, smaller studies. Consistent with our framework, we find that effect sizes have decreased over time. We conclude with a discussion of the implications of our study for how we think about the external validity of causal relationships.


Archive | 2015

Toddlers in Early Head Start: A Portrait of 2-Year-Olds, Their Families, and the Programs Serving Them

Amy Madigan; Cheri A. Vogel; Pia Caronongan; Jaime Thomas; Eileen Bandel; Yange Xue; Juliette Henke; Nikki Aikens; Kimberly Boller; Lauren Murphy


Economics of Education Review | 2012

Combination classes and educational achievement

Jaime Thomas


Mathematica Policy Research Reports | 2015

Setting the Stage: The Importance of Informal Child Care in California (Issue Brief)

Jaime Thomas; Kimberly Boller; Cleo Jacobs Johnson; Madeline Young; Mindy Hu


Mathematica Policy Research Reports | 2015

Toddlers in Early Head Start: A Portrait of 3-Year-Olds, Their Families, and the Programs Serving Them. Volume I: Age 3 Report

Cheri A. Vogel; Pia Caronongan; Yange Xue; Jaime Thomas; Eileen Bandel; Nikki Aikens; Kimberly Boller; Lauren Murphy


National Center for Education Evaluation and Regional Assistance | 2017

School Improvement Grants: Implementation and Effectiveness. NCEE 2017-4013.

Lisa Dragoset; Jaime Thomas; Mariesa Herrmann; John Deke; Susanne James-Burdumy; Andrea Boyle; Rachel Upton; Courtney Tanenbaum; Jessica Giffin

Collaboration


Dive into Jaime Thomas's collaborations.

Top Co-Authors

Kimberly Boller
Mathematica Policy Research

Cheri A. Vogel
Mathematica Policy Research

Nikki Aikens
Mathematica Policy Research

Yange Xue
University of Michigan

John Deke
Mathematica Policy Research

Sarah A. Avellar
Mathematica Policy Research

Shannon Monahan
Mathematica Policy Research

Diane Paulsell
Mathematica Policy Research

Lisa Dragoset
Mathematica Policy Research