Publications


Featured research published by Jeffrey D. Kromrey.


Journal of Experimental Education | 1996

Determining the Efficacy of Intervention: The Use of Effect Sizes for Data Analysis in Single-Subject Research

Jeffrey D. Kromrey; Lynn Foster-Johnson

Use of the effect size as a descriptive statistic for single-subject research is presented. A brief review of visual and statistical analysis techniques commonly used in single-subject methods is provided, and the limitations of each are noted. Effect sizes are presented as statistics that can augment the interpretation of results as well as provide additional information about the effectiveness of interventions. Four types of treatment effects are presented, with corresponding case studies illustrating the computation and interpretation of the effect size for each. An appendix includes the case study data and a sample computer program for computing the effect sizes described.
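The abstract mentions a sample computer program for computing the effect sizes; as an illustration only, here is a minimal Python sketch of one common single-subject effect size (the mean difference between phases standardized by the baseline standard deviation). The function name and the AB-design data are hypothetical and are not taken from the article's appendix, and this variant is not necessarily one of the four treatment effects described there.

```python
import numpy as np

def phase_effect_size(baseline, intervention):
    """Standardized mean difference between an intervention (B) phase and a
    baseline (A) phase, scaled by the baseline standard deviation. One common
    single-subject effect size; not necessarily one of the article's four variants."""
    baseline = np.asarray(baseline, dtype=float)
    intervention = np.asarray(intervention, dtype=float)
    return (intervention.mean() - baseline.mean()) / baseline.std(ddof=1)

# Hypothetical AB-design data (not from the article's appendix).
a_phase = [12, 14, 13, 15, 14, 13]   # baseline observations
b_phase = [18, 20, 21, 19, 22, 23]   # intervention observations
print(f"Effect size: {phase_effect_size(a_phase, b_phase):.2f}")
```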


Educational and Psychological Measurement | 2003

Another Look at Technology Use in Classrooms: The Development and Validation of an Instrument To Measure Teachers' Perceptions.

Kristine Y. Hogarty; Thomas R. Lang; Jeffrey D. Kromrey

This article describes the development and initial validation of scores from a survey designed to measure teachers’ reported use of technology in their classrooms. Based on data obtained from a sample of approximately 2,000 practicing teachers, factor analytic and correlational methods were used to obtain evidence of the validity of scores derived from responses to the instrument. In addition, analyses of Web and paper versions of the survey suggest relatively minor differences in responses, although the response rates for the paper version were substantially higher. The results were interpreted in terms of the utility of the instrument for measuring the confluence of factors that are critical for inquiry related to technology use in classrooms.
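As a rough illustration of the factor analytic step described above, the sketch below fits an exploratory factor analysis to simulated Likert-type responses. The item count, factor count, and generated data are assumptions for demonstration only; they do not reflect the actual instrument or the sample of roughly 2,000 teachers.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate Likert-style responses (1-5) for 2,000 respondents on 12 items;
# the item count and two-factor structure are illustrative assumptions.
n_respondents, n_items, n_factors = 2000, 12, 2
loadings = rng.uniform(0.4, 0.8, size=(n_items, n_factors))
scores = rng.normal(size=(n_respondents, n_factors))
raw = scores @ loadings.T + rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = np.clip(np.round(3 + raw), 1, 5)

# Exploratory factor analysis on the simulated item responses.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(responses)
print("Estimated loadings (items x factors):")
print(np.round(fa.components_.T, 2))
```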


Multivariate Behavioral Research | 2010

A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology.

Guy Cafri; Jeffrey D. Kromrey; Michael T. Brannick

This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods of calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and level of heterogeneity of effect sizes. In most meta-analyses many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study-level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or difference between two correlations. The median Birge's (1932) ratio was larger than the convention of medium heterogeneity proposed by Hedges and Pigott (2001) and indicates that the typical meta-analysis shows variability in underlying effects well beyond that expected by sampling error alone. Fixed-effects models were used with greater frequency than random-effects models; however, random-effects models were used with increased frequency over time. Results related to model selection in this study are carefully compared with those from Schmidt, Oh, and Hayes (2009), who independently designed and produced a study similar to the one reported here. Recommendations for conducting future meta-analyses in light of the findings are provided.
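To make the prospective power idea concrete, here is a minimal sketch of an approximate power calculation for the z test of the mean standardized mean difference in a meta-analysis, using the usual large-sample variance of d and an optional between-study variance term. The formulas follow the standard Hedges-and-Pigott-style approximation as I understand it, not a procedure copied from the article, and the numbers are purely illustrative.

```python
from math import sqrt
from scipy.stats import norm

def meta_power(delta, k, n_per_group, tau2=0.0, alpha=0.05):
    """Approximate power for the z test of the mean standardized mean difference
    across k studies, each with two groups of n_per_group. tau2 is between-study
    variance (0 gives a fixed-effect analysis)."""
    v = 2.0 / n_per_group + delta**2 / (4.0 * n_per_group)  # large-sample variance of d
    var_mean = (v + tau2) / k                                # variance of the pooled effect
    z_crit = norm.ppf(1 - alpha / 2)
    lam = delta / sqrt(var_mean)                             # noncentrality of the z test
    return norm.sf(z_crit - lam) + norm.cdf(-z_crit - lam)

# Illustrative values only: power rises with k and n, falls with heterogeneity.
print(round(meta_power(delta=0.2, k=20, n_per_group=30), 3))
print(round(meta_power(delta=0.2, k=20, n_per_group=30, tau2=0.1), 3))
```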


Sex Roles | 1999

The Perception of Sexual Harassment in Higher Education: Impact of Gender and Attractiveness.

Michela A. LaRocca; Jeffrey D. Kromrey

This experimental study used an ambiguous sexual harassment scenario and manipulated gender and level of physical attractiveness within a perpetrator/victim dyad. The purpose of this study was to examine the perceptions of sexual harassment of male and female students as well as perceptions of perpetrator and victim character traits. Two hundred ninety-six male and 295 female undergraduate and graduate students at a large urban university were asked to read the scenario and describe behavior and character traits for perpetrator and victim using a seven-point semantic differential scale. Eighty-four percent (n = 495) of the sample were White, 5.3% (n = 31) were African American, 5.9% (n = 39) were of Hispanic origin, and 4.7% (n = 28) marked other for race/ethnicity. Results indicate that female students perceived the scenario as more sexually harassing than male students. However, both men and women judged female perpetrators less harshly than male perpetrators. Both men and women were influenced by perpetrator attractiveness: they perceived an attractive opposite-gender perpetrator as less harassing than a same-gender attractive perpetrator.


Journal of Experimental Education | 2003

The Functioning of Single-Case Randomization Tests with and without Random Assignment.

John M. Ferron; Lynn Foster-Johnson; Jeffrey D. Kromrey

The authors used Monte Carlo methods to examine the Type I error rates for randomization tests applied to single-case data arising from ABAB designs involving random, systematic, or response-guided assignment of interventions. Six randomization tests were examined (permuting blocks of 1, 2, 3, or 5 observations, and randomly selecting intervention triplets so that each phase has at least 3 or 5 observations). When the design included randomization, the Type I error rate was controlled. When the design was systematic or guided by the absolute value of the slope, the tests permuting blocks tended to be liberal with positive autocorrelation, whereas those based on the random selection of intervention triplets tended to be conservative across levels of autocorrelation.
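For readers unfamiliar with randomization tests in single-case research, the sketch below shows a simpler variant than those studied here: an AB design in which the intervention start point was randomly selected from a set of admissible sessions, with the phase-mean difference as the test statistic. The data and the admissible start points are hypothetical.

```python
import numpy as np

def ab_randomization_test(y, actual_start, possible_starts):
    """Randomization test for an AB single-case design in which the intervention
    start point was randomly selected from possible_starts. Test statistic:
    intervention-phase mean minus baseline-phase mean. Returns a one-sided p value."""
    y = np.asarray(y, dtype=float)

    def stat(start):
        return y[start:].mean() - y[:start].mean()

    observed = stat(actual_start)
    null_stats = np.array([stat(s) for s in possible_starts])
    # Proportion of admissible assignments with a statistic at least as large.
    return float(np.mean(null_stats >= observed))

# Hypothetical series: the intervention could have started at any session 5-15
# (0-indexed) and actually started at session 8.
y = [3, 4, 3, 5, 4, 4, 5, 6, 7, 8, 7, 9, 8, 9, 10, 9, 10, 11, 10, 11]
print(ab_randomization_test(y, actual_start=8, possible_starts=range(5, 16)))
```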


Educational and Psychological Measurement | 2006

On Knowing What We Do Not Know: An Empirical Comparison of Methods to Detect Publication Bias in Meta-Analysis

Jeffrey D. Kromrey; Gianna Rendina-Gobioff

The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg’s rank correlation, Egger’s regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each meta-analysis, sample sizes of primary studies, population variances in primary studies, magnitude of population effect size, and magnitude of selection bias. Results were evaluated in terms of Type I error control and statistical power. Results suggest poor Type I error control in many conditions for all of the methods examined. One exception was the Begg’s rank correlation method using sample size rather than the estimated variance. Statistical power was typically very low for conditions in which Type I error rates were adequately controlled, although power increased with larger sample sizes in the primary studies and larger numbers of studies in the meta-analysis.
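Of the four detection methods compared, Egger's regression is easy to state compactly: regress the standardized effect (effect divided by its standard error) on precision (one over the standard error) and test whether the intercept differs from zero. The sketch below is a minimal implementation of that idea on simulated, unbiased data; it is not the simulation code used in the study.

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression asymmetry test: regress effect/SE on 1/SE.
    A significant intercept is taken as evidence of funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    result = stats.linregress(1.0 / se, effects / se)
    t = result.intercept / result.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(se) - 2)
    return result.intercept, t, p

# Simulated meta-analytic data set with no selection bias (illustrative only).
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.4, size=30)
effects = rng.normal(loc=0.3, scale=se)
intercept, t, p = egger_test(effects, se)
print(f"Egger intercept = {intercept:.3f}, t = {t:.2f}, p = {p:.3f}")
```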


Journal of Experimental Education | 1995

Power and Type I Error Rates of New Pairwise Multiple Comparison Procedures Under Heterogeneous Variances

Jeffrey D. Kromrey; Michela A. La Rocca

The Type I error rates and statistical power of 9 selected multiple comparison procedures were compared in a Monte Carlo study. Data were generated for 3-, 4-, and 5-group ANOVA models, from simulated populations with both homogeneous and heterogeneous variances. A variety of patterns of population means were examined, including completely null, partial-null, and multiple-null patterns. None of the procedures was robust to violations of the assumption of variance homogeneity at nominal alpha levels lower than .10, even with equal sample sizes. The Dunn procedure and modified Bonferroni procedures showed better robustness properties than did the Tukey procedure and its recent modifications. Power comparisons, conducted at a nominal alpha level of .10, showed the Peritz, Ryan, and Fisher-Hayter tests to be consistently most powerful across the variance conditions examined. However, power differences among these three procedures were minimal.
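As one concrete point of reference, the Dunn (Bonferroni) approach mentioned above can be sketched as pairwise tests with Bonferroni-adjusted p values; the version below pairs it with Welch's t test so that equal variances are not assumed. This is an illustrative sketch, not the procedure set or simulation design used in the study.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def dunn_bonferroni_pairwise(groups, alpha=0.10):
    """Pairwise comparisons with a Dunn (Bonferroni) adjustment, using Welch's
    t test so equal variances are not assumed. Returns (pair, adjusted p, significant?)."""
    pairs = list(combinations(range(len(groups)), 2))
    results = []
    for i, j in pairs:
        t, p = stats.ttest_ind(groups[i], groups[j], equal_var=False)
        p_adj = min(1.0, p * len(pairs))   # Bonferroni adjustment
        results.append(((i, j), p_adj, p_adj < alpha))
    return results

# Simulated 3-group data with unequal variances (illustrative only).
rng = np.random.default_rng(2)
groups = [rng.normal(0, 1, 30), rng.normal(0.5, 2, 30), rng.normal(1.0, 3, 30)]
for pair, p_adj, sig in dunn_bonferroni_pairwise(groups):
    print(pair, round(p_adj, 4), "significant" if sig else "n.s.")
```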


Educational and Psychological Measurement | 2010

A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

Jeanine L. Romano; Jeffrey D. Kromrey; Susan T. Hibbard

The purpose of this research is to examine eight of the methods that have been proposed for computing confidence intervals around coefficient alpha, to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in the accuracy and precision of the eight methods examined were negligible in many conditions. For the breadth of conditions examined in this simulation study, the methods that proved to be the most accurate were those proposed by Bonett and Fisher. Larger sample sizes and larger coefficient alphas also resulted in better interval coverage, whereas smaller numbers of items resulted in poorer interval coverage.
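A minimal sketch of one of the better-performing intervals is given below: Cronbach's alpha plus an approximate confidence interval based on a log-complement transformation with variance 2k/((k - 1)(n - 2)). That variance expression is my recollection of Bonett's approximation and should be treated as an assumption to check against the original sources; the item data are simulated.

```python
import numpy as np
from scipy.stats import norm

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an n x k matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def log_complement_ci(items, conf=0.95):
    """Approximate CI for alpha via ln(1 - alpha) with variance
    2k / ((k - 1)(n - 2)) -- assumed form of Bonett's method, not verified."""
    items = np.asarray(items, dtype=float)
    n, k = items.shape
    a = cronbach_alpha(items)
    z = norm.ppf(1 - (1 - conf) / 2)
    se = np.sqrt(2 * k / ((k - 1) * (n - 2)))
    lo = 1 - np.exp(np.log(1 - a) + z * se)
    hi = 1 - np.exp(np.log(1 - a) - z * se)
    return a, lo, hi

# Simulated 5-item scale for 200 respondents (illustrative only).
rng = np.random.default_rng(3)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 5))
print(tuple(round(v, 3) for v in log_complement_ci(items)))
```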


Educational and Psychological Measurement | 2001

Initial Development and Score Validation of the Adolescent Anger Rating Scale

Deanna McKinnie Burney; Jeffrey D. Kromrey

The Adolescent Anger Rating Scale (AARS) was designed to (a) measure two distinct types of anger, instrumental and reactive, and (b) assist researchers and practitioners in identifying specific types of anger in adolescents. The present study investigated the construct validity of AARS scores. Seven hundred ninety-two 12- to 19-year-old adolescents in Grades 7 through 12 participated in the study. Factor analysis yielded three factors: Instrumental Anger, Reactive Anger, and Anger Control. Moderate to moderately high Cronbach alphas and test-retest reliability coefficients indicated that scores from the AARS are internally consistent and stable when measuring anger subtypes. Discriminant validity evidence supported the AARS scores' ability to measure specific types of anger distinct from the constructs of anger measured by the Multidimensional Anger Inventory (MAI).
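To illustrate the kinds of reliability and discriminant validity evidence summarized above, the short sketch below computes a test-retest correlation and a correlation with an unrelated construct on simulated scores. All values are fabricated for demonstration and bear no relation to the AARS or MAI data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300  # illustrative sample size, not the study's n = 792

# Simulated subscale scores at two occasions plus an unrelated measure.
time1 = rng.normal(size=n)
time2 = 0.8 * time1 + rng.normal(scale=0.6, size=n)   # stable trait -> high retest r
other_construct = rng.normal(size=n)                   # unrelated measure

test_retest_r = np.corrcoef(time1, time2)[0, 1]
discriminant_r = np.corrcoef(time1, other_construct)[0, 1]
print(f"test-retest r = {test_retest_r:.2f}")
print(f"correlation with an unrelated construct = {discriminant_r:.2f}")
```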


Journal of Research in Childhood Education | 1992

The Effect of Phonemic Awareness on the Literacy Development of First Grade Children in a Traditional or a Whole Language Classroom

Priscilla L. Griffith; Janell P. Klesius; Jeffrey D. Kromrey

This study examined the acquisition of decoding and spelling skills and writing fluency of children with various levels of beginning-of-the-year phonemic awareness. First grade children who began school high and low in phonemic awareness received either whole language or traditional basal instruction. The whole language curriculum included the shared-book experience and extensive writing activities; the traditional basal curriculum included explicit phonics instruction, but very little writing. Beginning-of-the-year level of phonemic awareness was more important than method of instruction in literacy acquisition. High phonemic awareness children outperformed low phonemic awareness children on all of the literacy measures. The role that writing using invented spelling may play in helping low phonemic awareness children understand the alphabetic principle is discussed.

Collaboration


Dive into Jeffrey D. Kromrey's collaborations.

Top Co-Authors

Melinda R. Hess, University of South Florida
John M. Ferron, University of South Florida
Ann E. Barron, University of South Florida
Constance V. Hines, University of South Florida
Thomas R. Lang, University of South Florida
Diep Nguyen, University of South Florida
Yi-Hsin Chen, University of South Florida
Amy Hilbelink, University of South Florida