
Publication


Featured research published by Jack L. Vevea.


Psychological Methods | 1998

Fixed- and Random-Effects Models in Meta-Analysis

Larry V. Hedges; Jack L. Vevea

There are 2 families of statistical procedures in meta-analysis: fixed- and random-effects procedures. They were developed for somewhat different inference goals: making inferences about the effect parameters in the studies that have been observed versus making inferences about the distribution of effect parameters in a population of studies from a random sample of studies. The authors evaluate the performance of confidence intervals and hypothesis tests when each type of statistical procedure is used for each type of inference and confirm that each procedure is best for making the kind of inference for which it was designed. Conditionally random-effects procedures (a hybrid type) are shown to have properties in between those of fixed- and random-effects procedures. The use of quantitative methods to summarize the results of several empirical research studies, or meta-analysis, is now widely used in psychology, medicine, and the social sciences. Meta-analysis usually involves describing the results of each study by means of a numerical index (an estimate of effect size, such as a correlation coefficient, a standardized mean difference, or an odds ratio) and then combining these estimates across studies to obtain a summary. Two somewhat different statistical models have been developed for inference about average effect size from a collection of studies, called the fixed-effects and random-effects models. (A third alternative, the mixed-effects model, arises in conjunction with analyses involving study-level covariates or moderator variables, which we do not consider in this article; see Hedges, 1992.) Fixed-effects models treat the effect-size parameters as fixed but unknown constants to be estimated and usually (but not necessarily) are used in conjunction with assumptions about the homogeneity of effect parameters (see, e.g., Hedges, 1982; Rosenthal & Rubin, 1982). Random-effects models treat the effect-size parameters as if they were a random sample from …
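The two estimators the abstract contrasts can be sketched in a few lines. This is a minimal illustration, not the authors' code: it uses the standard inverse-variance weighted mean for the fixed-effects estimate and the DerSimonian-Laird moment estimator of between-study variance for the random-effects estimate. The effect sizes and variances below are invented for illustration.

```python
import math

# Hypothetical standardized mean differences and their sampling variances
# (invented numbers, not from the article).
effects = [0.10, 0.55, -0.05, 0.60, 0.30]
variances = [0.04, 0.03, 0.06, 0.02, 0.05]

# Fixed-effects estimate: inverse-variance weighted mean.
w_fixed = [1.0 / v for v in variances]
mean_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)

# DerSimonian-Laird moment estimate of between-study variance (tau^2).
q = sum(w * (y - mean_fixed) ** 2 for w, y in zip(w_fixed, effects))
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects estimate: the weights incorporate tau^2, so the
# standard error is larger whenever the studies are heterogeneous.
w_rand = [1.0 / (v + tau2) for v in variances]
mean_rand = sum(w * y for w, y in zip(w_rand, effects)) / sum(w_rand)
se_rand = math.sqrt(1.0 / sum(w_rand))

print(f"fixed-effects mean:  {mean_fixed:.3f}")
print(f"tau^2:               {tau2:.3f}")
print(f"random-effects mean: {mean_rand:.3f} (SE {se_rand:.3f})")
```

With heterogeneous effects like these, tau^2 is positive and the random-effects interval is wider than the fixed-effects one, which is the practical difference between the two inference goals the article evaluates.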


Cognitive Psychology | 2010

Sources of Variability in Children’s Language Growth

Janellen Huttenlocher; Heidi Waterfall; Marina Vasilyeva; Jack L. Vevea; Larry V. Hedges

The present longitudinal study examines the role of caregiver speech in language development, especially syntactic development, using 47 parent-child pairs of diverse SES background from 14 to 46 months. We assess the diversity (variety) of words and syntactic structures produced by caregivers and children. We use lagged correlations to examine language growth and its relation to caregiver speech. Results show substantial individual differences among children, and indicate that diversity of earlier caregiver speech significantly predicts corresponding diversity in later child speech. For vocabulary, earlier child speech also predicts later caregiver speech, suggesting mutual influence. However, for syntax, earlier child speech does not significantly predict later caregiver speech, suggesting a causal flow from caregiver to child. Finally, demographic factors, notably SES, are related to language growth, and are, at least partially, mediated by differences in caregiver speech, showing the pervasive influence of caregiver speech on language growth.
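The lagged-correlation logic in the abstract (does earlier caregiver speech predict later child speech, and vice versa?) reduces to correlating measures taken at different sessions. The sketch below computes plain Pearson correlations on invented diversity scores for six hypothetical parent-child pairs; it shows the computation only, not the study's data or its partialing of earlier child speech.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical diversity scores (e.g., number of distinct syntactic
# structures per session), invented for illustration.
caregiver_t1 = [22, 35, 18, 41, 27, 30]   # caregiver speech, earlier session
child_t2     = [10, 19,  8, 24, 13, 15]   # child speech, later session
child_t1     = [ 6, 12,  5, 15,  9, 10]   # child speech, earlier session
caregiver_t2 = [24, 33, 20, 40, 28, 29]   # caregiver speech, later session

# Lagged correlations in each direction.
r_cg_to_child = pearson(caregiver_t1, child_t2)
r_child_to_cg = pearson(child_t1, caregiver_t2)
print(f"caregiver t1 -> child t2: r = {r_cg_to_child:.2f}")
print(f"child t1 -> caregiver t2: r = {r_child_to_cg:.2f}")
```

In the study itself, the comparison of these two directions (strong in both for vocabulary, asymmetric for syntax) is what supports the inference about causal flow from caregiver to child.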


Psychological Bulletin | 2003

Beyond the group mind: a quantitative review of the interindividual-intergroup discontinuity effect.

Tim Wildschut; Brad Pinter; Jack L. Vevea; Chester A. Insko; John Schopler

This quantitative review of 130 comparisons of interindividual and intergroup interactions in the context of mixed-motive situations reveals that intergroup interactions are generally more competitive than interindividual interactions. The authors identify 4 moderators of this interindividual-intergroup discontinuity effect, each based on the theoretical perspective that the discontinuity effect flows from greater fear and greed in intergroup relative to interindividual interactions. Results reveal that each moderator shares a unique association with the magnitude of the discontinuity effect. The discontinuity effect is larger when (a) participants interact with an opponent whose behavior is unconstrained by the experimenter or constrained by the experimenter to be cooperative rather than constrained by the experimenter to be reciprocal, (b) group members make a group decision rather than individual decisions, (c) unconstrained communication between participants is present rather than absent, and (d) conflict of interest is severe rather than mild.


Journal of Experimental Psychology: General | 2000

Why do categories affect stimulus judgment?

Janellen Huttenlocher; Larry V. Hedges; Jack L. Vevea

The authors tested a model of category effects on stimulus judgment. The model holds that the goal of stimulus judgment is to achieve high accuracy. For this reason, people place inexactly represented stimuli in the context of prior information, captured in categories, combining inexact fine-grain stimulus values with prior (category) information. This process can be likened to a Bayesian statistical procedure designed to maximize the average accuracy of estimation. If people follow the proposed procedure to maximize accuracy, their estimates should be affected by the distribution of instances in a category. In the present experiments, participants reproduced one-dimensional stimuli. Different prior distributions were presented. The experiments verified that people's stimulus estimates are affected by variations in a prior distribution in such a manner as to increase the accuracy of their stimulus reproductions.
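The Bayesian combination the abstract describes has a simple closed form in the Gaussian special case: the estimate is a weighted average of the noisy memory trace and the category mean, with weights set by the two variances. The sketch below shows only that textbook posterior-mean calculation, with invented numbers; the article's model is richer (e.g., non-Gaussian category distributions).

```python
def category_adjusted_estimate(m, sigma_m2, mu_c, sigma_c2):
    """Posterior mean for a Gaussian category prior (mean mu_c,
    variance sigma_c2) and a Gaussian memory trace m with noise
    variance sigma_m2: shrink the trace toward the category mean."""
    lam = sigma_c2 / (sigma_c2 + sigma_m2)   # weight on the memory trace
    return lam * m + (1.0 - lam) * mu_c

# A stimulus remembered as 7.0 units, category centered at 5.0:
est = category_adjusted_estimate(m=7.0, sigma_m2=1.0, mu_c=5.0, sigma_c2=3.0)
print(est)  # 6.5 -- pulled partway from 7.0 toward the category mean 5.0
```

The noisier the trace (larger `sigma_m2`) relative to the category spread, the stronger the pull toward the category mean, which is exactly why estimates should track the prior distribution of instances.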


Developmental Psychology | 2007

The varieties of speech to young children.

Janellen Huttenlocher; Marina Vasilyeva; Heidi Waterfall; Jack L. Vevea; Larry V. Hedges

This article examines caregiver speech to young children. The authors obtained several measures of the speech used to children during early language development (14-30 months). For all measures, they found substantial variation across individuals and subgroups. Speech patterns vary with caregiver education, and the differences are maintained over time. While there are distinct levels of complexity for different caregivers, there is a common pattern of increase across age within the range that characterizes each educational group. Thus, caregiver speech exhibits both long-standing patterns of linguistic behavior and adjustment for the interlocutor. This information about the variability of speech by individual caregivers provides a framework for systematic study of the role of input in language acquisition.


Psychometrika | 1995

A general linear model for estimating effect size in the presence of publication bias

Jack L. Vevea; Larry V. Hedges

When the process of publication favors studies with small p-values, and hence large effect estimates, combined estimates from many studies may be biased. This paper describes a model for estimation of effect size when there is selection based on one-tailed p-values. The model employs the method of maximum likelihood in the context of a mixed (fixed and random) effects general linear model for effect sizes. It offers a test for the presence of publication bias, and corrected estimates of the parameters of the linear model for effect magnitude. The model is illustrated using a well-known data set on the benefits of psychotherapy.


Journal of Educational and Behavioral Statistics | 1996

Estimating Effect Size under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model.

Larry V. Hedges; Jack L. Vevea

When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We investigate a selection model based on one-tailed p values in the context of a random effects model. The procedure both models the selection process and corrects for the consequences of selection on estimates of the mean and variance of effect parameters. A test of the statistical significance of selection is also provided. The small sample properties of the method are evaluated by means of simulations, and the asymptotic theory is found to be reasonably accurate under correct model specification and plausible conditions. The method substantially reduces bias due to selection when model specification is correct, but the variance of estimates is increased; thus mean squared error is reduced only when selection produces substantial bias. The robustness of the method to violations of assumptions about the form of the distribution of the random effects is also investigated via simulation, and the model-corrected estimates of the mean effect are generally found to be much less biased than the uncorrected estimates. The significance test for selection bias, however, is found to be highly nonrobust, rejecting at up to 10 times the nominal rate when there is no selection but the distribution of the effects is incorrectly specified.
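A small simulation makes the selection problem concrete. Below, hypothetical studies are "published" only when a one-tailed test is significant, and the mean of the published effects overshoots the true effect. All numbers are invented; this illustrates the bias that the selection model corrects, not the model itself.

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.2   # true standardized mean difference (hypothetical)
SE = 0.15           # common standard error across simulated studies
Z_CRIT = 1.645      # one-tailed 5% significance criterion

all_effects, published = [], []
for _ in range(20000):
    est = random.gauss(TRUE_EFFECT, SE)  # observed effect in one study
    all_effects.append(est)
    if est / SE > Z_CRIT:                # "published" only if one-tailed p < .05
        published.append(est)

print(f"mean of all studies:       {statistics.mean(all_effects):.3f}")
print(f"mean of published studies: {statistics.mean(published):.3f}")
```

Because only estimates exceeding the significance cutoff survive, the published mean is inflated well above the true effect, which is the bias the weight-function selection model is designed to remove.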


Journal of Personality and Social Psychology | 2002

The "I," the "we," and the "when": a meta-analysis of motivational primacy in self-definition.

Lowell Gaertner; Constantine Sedikides; Jack L. Vevea; Jonathan Iuzzini

What is the primary motivational basis of self-definition? The authors meta-analytically assessed 3 hypotheses: (a) The individual self is motivationally primary, (b) the collective self is motivationally primary, and (c) neither self is inherently primary; instead, motivational primacy depends on which self becomes accessible through contextual features. Results identified the individual self as the primary motivational basis of self-definition. People react more strongly to threat and enhancement of the individual than the collective self. Additionally, people more readily deny threatening information and more readily accept enhancing information when it pertains to the individual rather than the collective self, regardless of contextual influences. The individual self is the psychological home base, a stable system that can react flexibly to contextual influences.


Journal of Applied Behavior Analysis | 2016

A survey of publication practices of single-case design researchers when treatments have small or large effects.

William R. Shadish; Nicole A. M. Zelinsky; Jack L. Vevea; Thomas R. Kratochwill

The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.


Journal of Educational and Behavioral Statistics | 2006

An Empirical Bayes Approach to Subscore Augmentation: How Much Strength Can We Borrow?

Michael C. Edwards; Jack L. Vevea

This article examines a subscore augmentation procedure. The approach uses empirical Bayes adjustments and is intended to improve the overall accuracy of measurement when information is scant. Simulations examined the impact of the method on subscale scores in a variety of realistic conditions. The authors focused on two popular scoring methods: summed scores and item response theory scale scores for summed scores. Simulation conditions included number of subscales, length (hence, reliability) of subscales, and the underlying correlations between scales. To examine the relative performance of the augmented scales, the authors computed root mean square error, reliability, percentage correctly identified as falling within specific proficiency ranges, and the percentage of simulated individuals for whom the augmented score was closer to the true score than was the nonaugmented score. The general findings and limitations of the study are discussed and areas for future research are suggested.
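The empirical Bayes idea behind subscore augmentation is shrinkage: an unreliable observed subscore is pulled toward prior information in proportion to its unreliability. The sketch below shows only the simplest univariate case, Kelley's classic regressed score toward the group mean; the augmentation the article studies also borrows strength from correlated subscales. Numbers are invented.

```python
def kelley_regressed_score(observed, reliability, group_mean):
    """Kelley's estimate of true score: weight the observed subscore by
    its reliability and the group mean by the remaining weight."""
    return reliability * observed + (1.0 - reliability) * group_mean

# A short, unreliable subscale (rel = .55) vs a longer one (rel = .90),
# both with an observed score of 30 in a group averaging 25:
low_rel = kelley_regressed_score(30.0, 0.55, 25.0)
high_rel = kelley_regressed_score(30.0, 0.90, 25.0)
print(low_rel, high_rel)  # 27.75 29.5 -- the unreliable subscore shrinks more
```

This is why the question "how much strength can we borrow?" turns on subscale length and reliability: the less reliable the subscale, the more the augmented score leans on outside information.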

Collaboration


Dive into Jack L. Vevea's collaborations.

Top Co-Authors

Lowell Gaertner
University of North Carolina at Chapel Hill

Carol M. Woods
Washington University in St. Louis

Brad Pinter
Pennsylvania State University

Chester A. Insko
University of North Carolina at Chapel Hill