
Publications


Featured research published by Therese D. Pigott.


Psychological Methods | 2004

The power of statistical tests for moderators in meta-analysis.

Larry V. Hedges; Therese D. Pigott

Calculation of the statistical power of statistical tests is important in planning and interpreting the results of research studies, including meta-analyses. It is particularly important in moderator analyses in meta-analysis, which are often used as sensitivity analyses to rule out moderator effects but also may have low statistical power. This article describes how to compute statistical power of both fixed- and mixed-effects moderator tests in meta-analysis that are analogous to the analysis of variance and multiple regression analysis for effect sizes. It also shows how to compute power of tests for goodness of fit associated with these models. Examples from a published meta-analysis demonstrate that power of moderator tests and goodness-of-fit tests is not always high.
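
For the fixed-effects between-groups (moderator) test, the kind of power computation the article describes reduces to evaluating a noncentral chi-square distribution at the usual critical value. Below is a minimal sketch of that idea; the subgroup effects, variances, and study counts are hypothetical assumptions, not values from the article.

```python
# Sketch: power of a fixed-effects moderator (Q-between) test via the
# noncentral chi-square distribution. All inputs are illustrative.
import numpy as np
from scipy.stats import chi2, ncx2

deltas = np.array([0.20, 0.50])  # hypothesized subgroup mean effects (assumed)
v_bar = np.array([0.04, 0.04])   # average sampling variance per effect (assumed)
k = np.array([10, 10])           # number of studies per subgroup (assumed)

w = k / v_bar                              # weight of each subgroup mean
grand = np.sum(w * deltas) / np.sum(w)     # weighted grand mean
lam = np.sum(w * (deltas - grand) ** 2)    # noncentrality parameter
df = len(deltas) - 1

crit = chi2.ppf(0.95, df)                  # 5%-level critical value
power = 1 - ncx2.cdf(crit, df, lam)        # power under the alternative
print(f"lambda = {lam:.2f}, power = {power:.2f}")
```

Varying the assumed subgroup gap or the number of studies shows how quickly the power of such tests can fall, which is the article's practical point.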


Journal of Educational and Behavioral Statistics | 2010

How Many Studies Do You Need?: A Primer on Statistical Power for Meta-Analysis

Jeffrey C. Valentine; Therese D. Pigott; Hannah R. Rothstein

In this article, the authors outline methods for using fixed- and random-effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide some suggestions for thinking about these parameters, in particular for the random-effects variance component. The authors also show how the typically uninformative retrospective power analysis can be made more informative. The authors then discuss the value of confidence intervals, show how they could be used in addition to or instead of retrospective power analysis, and demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone. Finally, the authors take up the question “How many studies do you need to do a meta-analysis?” and show that, given the need for a conclusion, the answer is “two studies,” because all other synthesis techniques are less transparent and/or less likely to be valid. For systematic reviewers who choose not to conduct a quantitative synthesis, the authors provide suggestions both for highlighting the current limitations in the research base and for displaying the characteristics and results of studies that met the inclusion criteria.
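
A prospective power calculation of the kind the primer discusses can be sketched with a simple z-test for the mean effect, where the random-effects model adds the variance component to each study's sampling variance. The inputs below (effect size, variances, tau-squared, and k) are all illustrative assumptions.

```python
# Sketch: prospective power for the test of the mean effect in fixed- and
# random-effects meta-analysis. All numeric inputs are assumptions.
import math
from scipy.stats import norm

delta = 0.30   # hypothesized mean effect (assumed)
v = 0.05       # typical within-study sampling variance (assumed)
tau2 = 0.02    # random-effects variance component (assumed)
k = 15         # number of studies (assumed)
alpha = 0.05

def power_mean_effect(var_per_study: float) -> float:
    """Two-sided z-test power for the weighted mean of k effects."""
    se = math.sqrt(var_per_study / k)   # SE of the mean effect
    lam = delta / se                    # noncentrality on the z scale
    z = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(z - lam)) + norm.cdf(-z - lam)

print(f"fixed-effects power:  {power_mean_effect(v):.2f}")
print(f"random-effects power: {power_mean_effect(v + tau2):.2f}")
```

The random-effects figure is always the lower of the two for the same k, which is why the assumed variance component matters so much in planning.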


Educational Research and Evaluation | 2001

A Review of Methods for Missing Data

Therese D. Pigott

This paper reviews methods for handling missing data in a research study. Many researchers use ad hoc methods such as complete case analysis, available case analysis (pairwise deletion), or single-value imputation. Though these methods are easily implemented, they require assumptions about the data that rarely hold in practice. Model-based methods such as maximum likelihood using the EM algorithm and multiple imputation hold more promise for dealing with difficulties caused by missing data. While model-based methods require specialized computer programs and assumptions about the nature of the missing data, these methods are appropriate for a wider range of situations than the more commonly used ad hoc methods. The paper provides an illustration of the methods using data from an intervention study designed to increase students’ ability to control their asthma symptoms.
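
A toy simulation helps show why the ad hoc methods require assumptions that rarely hold: when missingness depends on the values themselves, both complete case analysis and mean substitution distort even a simple correlation. The data and variable names below are simulated placeholders, not the asthma study's data.

```python
# Sketch: complete case analysis vs. mean substitution under value-dependent
# missingness. Simulated data; variable names are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
knowledge = rng.normal(50.0, 10.0, n)                  # predictor
control = 0.5 * knowledge + rng.normal(0.0, 5.0, n)    # outcome
df = pd.DataFrame({"knowledge": knowledge, "control": control})

# Low scorers are more likely to be missing, so the missingness mechanism
# itself is informative, which is exactly where ad hoc fixes break down.
miss = (knowledge < 45) & (rng.random(n) < 0.6)
df.loc[miss, "knowledge"] = np.nan

cc = df.dropna()                                       # complete case analysis
ms = df.fillna({"knowledge": df["knowledge"].mean()})  # mean substitution

print(f"full-data r:       {np.corrcoef(knowledge, control)[0, 1]:.2f}")
print(f"complete case r:   {cc['knowledge'].corr(cc['control']):.2f}")
print(f"mean-substitute r: {ms['knowledge'].corr(ms['control']):.2f}")
```

Both ad hoc estimates drift away from the full-data correlation, which is the motivation the paper gives for model-based methods such as EM and multiple imputation.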


British Journal of Developmental Psychology | 2007

Fine motor skills and mathematics achievement in East Asian American and European American kindergartners and first graders

Zupei Luo; Paul E. Jose; Carol S. Huntsinger; Therese D. Pigott

This study examined whether fine motor skills were related to the initial scores and growth rate of mathematics achievement in American kindergartners and first graders. Participants were 244 East Asian American and 9,816 European American children from the US-based Early Childhood Longitudinal Study (ECLS-K). To control for sampling bias, two subsamples of European Americans were matched to the East Asian American sample on socio-economic status or fine motor skills, using propensity score matching. Results showed that East Asian American children possessed more advanced mathematics achievement and fine motor skills. Fine motor skills significantly predicted mathematics achievement over time and, further, significantly mediated the relationship between ethnic group status and mathematics achievement.
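
Propensity score matching, the technique used to build the matched subsamples, can be sketched in two steps: model group membership from the matching covariate, then pair each focal child with the nearest comparison child on the fitted score. The simulated data and single covariate below are illustrative assumptions, not the ECLS-K data.

```python
# Sketch: propensity score matching on one covariate (e.g., SES).
# Simulated data; matching is done with replacement for simplicity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n_focal, n_comp = 200, 2000
ses_focal = rng.normal(0.5, 1.0, (n_focal, 1))  # focal group differs on SES
ses_comp = rng.normal(0.0, 1.0, (n_comp, 1))

X = np.vstack([ses_focal, ses_comp])
group = np.r_[np.ones(n_focal), np.zeros(n_comp)]  # 1 = focal group

# Step 1: estimate the propensity score (probability of focal membership).
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# Step 2: nearest-neighbor match on the score, focal -> comparison.
nn = NearestNeighbors(n_neighbors=1).fit(ps[group == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[group == 1].reshape(-1, 1))
matched = X[group == 0][idx.ravel()]

print(f"SES gap before matching: {ses_focal.mean() - ses_comp.mean():.2f}")
print(f"SES gap after matching:  {ses_focal.mean() - matched.mean():.2f}")
```

After matching, the covariate gap between the groups shrinks toward zero, so remaining outcome differences are less attributable to that covariate.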


Educational Researcher | 2013

Outcome-Reporting Bias in Education Research

Therese D. Pigott; Jeffrey C. Valentine; Joshua R. Polanin; Ryan T. Williams; Dericka D. Canada

Outcome-reporting bias occurs when primary studies do not include information about all outcomes measured in a study. When studies omit findings on important measures, efforts to synthesize the research using systematic review techniques will be biased and interpretations of individual studies will be incomplete. Outcome-reporting bias has been well documented in medicine and has been shown to lead to inaccurate assessments of the effects of medical treatments and, in some cases, to omission of reports of harms. This study examines outcome-reporting bias in educational research by comparing the reports of educational interventions from dissertations to their published versions. We find that nonsignificant outcomes were 30% more likely to be omitted from a published study than statistically significant ones.


Journal of Early Childhood Research | 2005

Head Start children’s transition to kindergarten: evidence from the Early Childhood Longitudinal Study

Therese D. Pigott; Marla Susman Israel

It has been acknowledged that children from poverty begin school missing many of the prerequisites for school success. Head Start, the US initiative launched in 1965, is the major federal program aimed at providing children in poverty the experiences necessary to start school on an equal footing with their same-age peers. This article uses data from the Early Childhood Longitudinal Study (ECLS) to examine the reading and math assessment scores of Head Start children as compared to their same-age peers at kindergarten entry. The data suggest that while Head Start children score higher than non-Head Start children from the same socio-economic background, a gap remains between Head Start children and their peers in schools with higher socio-economic standing. The article brings an interdisciplinary focus to the issue of how ‘peer’ is defined for disadvantaged children when examining achievement gaps and relative program effectiveness.


Evaluation & the Health Professions | 2001

Missing Predictors in Models of Effect Size

Therese D. Pigott

Missing data occur frequently in meta-analysis. Reviewers inevitably face decisions about how to handle missing data, especially when predictors in a model of effect size are missing from some of the identified studies. Commonly used methods for missing data such as complete case analysis and mean substitution often yield biased estimates. This article briefly reviews the particular problems missing predictors cause in a meta-analysis, discusses the properties of commonly used missing data methods, and provides suggestions for ways to handle missing predictors when estimating effect size models. Maximum likelihood methods for multivariate normal data and multiple imputation hold the most promise for handling missing predictors in meta-analysis. These two model-based methods apply to a broad set of data situations, are based on sound statistical theory, and utilize all information available to obtain efficient estimators.
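
The mechanics of the multiple-imputation route the article recommends can be sketched as: impute the missing predictor several times, refit the weighted effect-size model to each completed dataset, and pool with Rubin's rules. The sketch below uses crude marginal draws for the imputations purely to show the pooling arithmetic; a proper imputation model would condition on the effect sizes and other covariates. All data are simulated.

```python
# Sketch: multiple imputation of a missing moderator in a weighted
# (meta-regression-style) model, pooled with Rubin's rules. Simulated data;
# the marginal-normal imputation model is a deliberate simplification.
import numpy as np

rng = np.random.default_rng(1)
k = 40
x = rng.normal(0, 1, k)                          # study-level moderator
v = rng.uniform(0.02, 0.08, k)                   # sampling variances
y = 0.2 + 0.3 * x + rng.normal(0, np.sqrt(v))    # observed effect sizes
x_obs = x.copy()
x_obs[rng.random(k) < 0.3] = np.nan              # 30% of moderators missing

def wls_slope(xi):
    """Weighted least squares slope and its variance (weights = 1/v)."""
    w = 1 / v
    X = np.column_stack([np.ones(k), xi])
    XtWX = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (w * y))
    return beta[1], np.linalg.inv(XtWX)[1, 1]

m = 20
miss = np.isnan(x_obs)
mu, sd = np.nanmean(x_obs), np.nanstd(x_obs)
est, var = [], []
for _ in range(m):
    xi = x_obs.copy()
    xi[miss] = rng.normal(mu, sd, miss.sum())    # crude imputation draws
    b, vb = wls_slope(xi)
    est.append(b)
    var.append(vb)

# Rubin's rules: total variance = within + (1 + 1/m) * between.
qbar = np.mean(est)
t = np.mean(var) + (1 + 1 / m) * np.var(est, ddof=1)
print(f"pooled slope = {qbar:.2f}, SE = {np.sqrt(t):.2f}")
```

The between-imputation term is what distinguishes this from single-value imputation: it carries the extra uncertainty due to the missing predictor into the standard error.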


Research on Social Work Practice | 2012

Validation of the Employment Hope Scale: Measuring Psychological Self-Sufficiency Among Low-Income Jobseekers

Philip Young P. Hong; Joshua R. Polanin; Therese D. Pigott

Objectives: The Employment Hope Scale (EHS) was designed to measure the empowerment-based self-sufficiency (SS) outcome among low-income job-seeking clients. This measure captures the psychological SS dimension, as opposed to the more commonly used economic SS, in workforce development and employment support practice. The study validates the EHS and reports its psychometric properties. Method: An exploratory factor analysis (EFA) was conducted using agency data from the Cara Program in Chicago, United States. The principal axis factor extraction process was employed to identify the factor structure. Results: The EFA resulted in a 13-item, two-factor structure, with Factor 1 representing “Psychological Empowerment” and Factor 2 representing “Goal-Oriented Pathways.” Both factors had high internal consistency reliability and construct validity. Conclusions: While the findings may be preliminary, this study found the EHS to be a reliable and valid measure, demonstrating its utility in assessing psychological SS as an empowerment outcome among low-income jobseekers.
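
An EFA with principal axis extraction of the kind the abstract describes can be sketched as below. It assumes the third-party factor_analyzer package and a simulated respondents-by-items matrix with a planted two-factor structure; it is not the study's data or exact specification.

```python
# Sketch: exploratory factor analysis with principal axis extraction and an
# oblique (promax) rotation. Assumes the factor_analyzer package; data are
# simulated with two correlated latent factors.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(7)
n, items = 300, 13
# Two correlated latent factors drive the 13 items.
f = rng.normal(size=(n, 2)) @ np.array([[1.0, 0.4], [0.0, 0.9]])
loadings = np.zeros((2, items))
loadings[0, :7] = 0.7    # items 1-7 load on factor 1
loadings[1, 7:] = 0.7    # items 8-13 load on factor 2
X = f @ loadings + rng.normal(scale=0.5, size=(n, items))

fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(X)
print(np.round(fa.loadings_, 2))  # pattern matrix: items by factors
```

With a clean structure like this, each item shows a high loading on one factor and a near-zero loading on the other, which is the pattern the study reports for its two subscales.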


Research Synthesis Methods | 2010

An alternative to R² for assessing linear models of effect size

Ariel M. Aloe; Betsy Jane Becker; Therese D. Pigott

Reviewers often use regression models in meta-analysis (‘meta-regressions’) to examine the relationships between effect sizes and study characteristics. In this paper, we propose and illustrate the use of an index (R²) that expresses the amount of variance in the outcome that is explained by the meta-regression model. The values of R² obtained from the standard computer output for linear models of effect size in the meta-analysis context are typically too small, because the typical R² considers sampling variance to be unexplained, whereas in meta-analysis it can be quantified. Although the idea of removing the unexplainable variance from the estimator of variance accounted for in meta-analysis is not new (Cook et al., 1992; Raudenbush, 1994), we explicitly define four estimators of variance explained and illustrate via two examples that the typical R² obtained in a linear model of effect size is always lower than our indices. Thus, the typical R² underestimates the explanatory power of linear models of effect sizes. Our four estimators improve upon typical weighted R² values.
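
The core idea, removing the unexplainable sampling variance before computing variance accounted for, can be sketched with one common variant: the proportion of between-study heterogeneity explained, (tau²_total − tau²_residual) / tau²_total. This follows the paper's logic but is not necessarily the authors' exact estimator; the data are simulated.

```python
# Sketch: a meta-analytic R^2 that excludes sampling variance, computed from
# method-of-moments tau^2 estimates. Simulated data; one of several possible
# estimators of variance explained.
import numpy as np

rng = np.random.default_rng(3)
k = 60
x = rng.normal(0, 1, k)                      # study-level moderator
v = rng.uniform(0.02, 0.06, k)               # known sampling variances
theta = 0.1 + 0.25 * x + rng.normal(0, 0.2, k)   # true effects (tau^2 = 0.04)
y = theta + rng.normal(0, np.sqrt(v))        # observed effect sizes

def mom_tau2(X):
    """Method-of-moments tau^2 for a weighted linear model of effect size."""
    w = 1 / v
    WX = w[:, None] * X
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    e = y - X @ beta
    Q = np.sum(w * e ** 2)                   # weighted residual sum of squares
    c = np.sum(w) - np.trace(np.linalg.solve(X.T @ WX, X.T @ (w[:, None] * WX)))
    return max(0.0, (Q - (k - X.shape[1])) / c)

tau2_total = mom_tau2(np.ones((k, 1)))             # intercept-only model
tau2_res = mom_tau2(np.column_stack([np.ones(k), x]))  # with the moderator
print(f"meta-analytic R^2 = {1 - tau2_res / tau2_total:.2f}")
```

Because the denominator here is the estimable heterogeneity rather than the total observed variance, the index is larger than the R² a standard regression routine would print, which is the paper's central observation.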


Journal of Nursing Measurement | 2004

The Asthma Belief Survey: development and testing.

Barbara Velsor-Friedrich; Therese D. Pigott; Brenda Srof; Robin Froman

Accurate evaluation of asthma self-efficacy is essential to the effective management of asthma. This article describes the development and testing of the Asthma Belief Survey (ABS). The instrument is a 15-item tool that uses a 5-point self-report scale to measure asthma self-efficacy in relation to daily asthma maintenance and an asthma crisis. The instrument was tested with a sample of 79 African American schoolchildren attending eight inner-city elementary schools. The mean age of the sample was 11.05 years, with a range of 8 to 14 years. The majority of students had been diagnosed with asthma before the age of 5 years. The Asthma Belief Survey demonstrated good psychometric properties: a good Cronbach’s alpha reliability coefficient (.83), coherence as a single scale measuring children’s self-efficacy in treating their own asthma, and significant relationships with scales of asthma knowledge (r = .51, p < .001) and asthma self-care practices (r = .52, p < .001). The Asthma Belief Survey has sound reliability and validity evidence to support its use in measuring a child’s asthma self-management self-efficacy. Practitioners can use this instrument to assess a child’s self-efficacy in the areas of asthma health maintenance and avoidance of asthma episodes.
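
The reliability figure reported above (.83) comes from Cronbach's alpha, which has a simple closed form. A minimal sketch with a simulated respondents-by-items matrix matching the study's dimensions (79 children, 15 items on a 5-point scale); the data are not the study's.

```python
# Sketch: Cronbach's alpha for a 15-item, 5-point scale. Simulated responses.
import numpy as np

rng = np.random.default_rng(11)
n, items = 79, 15   # sample size and item count from the abstract
latent = rng.normal(size=(n, 1))
# Item responses: a shared latent trait plus item-level noise, rounded to 1-5.
X = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(n, items))), 1, 5)

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(X):.2f}")
```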

Collaboration


Dive into Therese D. Pigott's collaborations.

Top Co-Authors

Ryan T. Williams

American Institutes for Research


Jeffery J. Bulanda

Northeastern Illinois University
