Publication


Featured research published by Stephen Olejnik.


Psychological Methods | 2003

Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs

Stephen Olejnik; James Algina

The editorial policies of several prominent educational and psychological journals require that researchers report some measure of effect size along with tests for statistical significance. In analysis of variance contexts, this requirement might be met by using eta squared or omega squared statistics. Current procedures for computing these measures of effect often do not consider the effect that design features of the study have on the size of these statistics. Because research-design features can have a large effect on the estimated proportion of explained variance, the use of partial eta or omega squared can be misleading. The present article provides formulas for computing generalized eta and omega squared statistics, which provide estimates of effect size that are comparable across a variety of research designs.
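
The generalized statistics in this article adjust the denominator of the effect-size ratio according to which design factors are manipulated rather than measured. As a point of reference, the sketch below (plain Python, hypothetical sums of squares) shows the ordinary eta squared, partial eta squared, and omega squared computed from an ANOVA table, which is where the denominator choice the authors criticize comes into play; it does not reproduce the paper's generalized formulas.

```python
# A minimal sketch with hypothetical sums of squares; the generalized
# eta/omega squared of the article further adjust the denominator for
# manipulated versus measured factors and are not reproduced here.

def eta_squared(ss_effect, ss_total):
    # Proportion of total variance attributed to the effect.
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    # Effect variance relative to effect + error only; ignoring the other
    # design factors is what can make this value misleadingly large.
    return ss_effect / (ss_effect + ss_error)

def omega_squared(ss_effect, df_effect, ss_total, ms_error):
    # Less biased analogue of eta squared.
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Hypothetical ANOVA table for a two-factor between-subjects design.
ss_a, df_a = 120.0, 2
ss_b, ss_ab, ss_error, df_error = 60.0, 30.0, 390.0, 54
ss_total = ss_a + ss_b + ss_ab + ss_error
ms_error = ss_error / df_error

print(round(eta_squared(ss_a, ss_total), 3))                    # 0.2
print(round(partial_eta_squared(ss_a, ss_error), 3))            # 0.235
print(round(omega_squared(ss_a, df_a, ss_total, ms_error), 3))
```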


Review of Educational Research | 1998

Statistical Practices of Educational Researchers: An Analysis of their ANOVA, MANOVA, and ANCOVA Analyses

H. J. Keselman; Carl J. Huberty; Lisa M. Lix; Stephen Olejnik; Robert A. Cribbie; Barbara Donahue; Rhonda K. Kowalchuk; Laureen L. Lowman; Martha D. Petoskey; Joanne C. Keselman; Joel R. Levin

Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.


American Educational Research Journal | 2003

Vocabulary Tricks: Effects of Instruction in Morphology and Context on Fifth-Grade Students’ Ability to Derive and Infer Word Meanings

James F. Baumann; Elizabeth Carr Edwards; Eileen M. Boland; Stephen Olejnik; Edward J. Kameenui

This quasi-experimental study compared the effects of morphemic and contextual analysis instruction (MC) with the effects of textbook vocabulary instruction (TV) that was integrated into social studies textbook lessons. The participants were 157 students in eight fifth-grade classrooms. The results indicated that (a) TV students were more successful at learning textbook vocabulary; (b) MC students were more successful at inferring the meanings of novel affixed words; (c) MC students were more successful at inferring the meanings of morphologically and contextually decipherable words on a delayed test but not on an immediate test; and (d) the groups did not differ on a comprehension measure or a social studies learning measure. The results were interpreted as support for teaching specific vocabulary and morphemic analysis, with some evidence for the efficacy of teaching contextual analysis.


Journal of Educational and Behavioral Statistics | 1997

Multiple Testing and Statistical Power with Modified Bonferroni Procedures.

Stephen Olejnik; Jianmin Li; Suchada Supattathum; Carl J. Huberty

The difference in statistical power between the original Bonferroni and five modified Bonferroni procedures that control the overall Type I error rate is examined in the context of a correlation matrix where multiple null hypotheses, H₀: ρᵢⱼ = 0 for all i ≠ j, are tested. Using 50 real correlation matrices reported in educational and psychological journals, a difference in the number of hypotheses rejected of less than 4% was observed among the procedures. When simulated data were used, very small differences were found among the six procedures in detecting at least one true relationship, but in detecting all true relationships the power of the modified Bonferroni procedures exceeded that of the original Bonferroni procedure by at least .18 and by as much as .55 when all null hypotheses were false. The power difference decreased as the number of true relationships decreased. Power differences obtained for the average power were of a much smaller magnitude but still favored the modified Bonferroni procedures. For the five modified Bonferroni procedures, power differences less than .05 were typically observed; the Holm procedure had the lowest power, and the Rom procedure had the highest.
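
As an illustration of the kind of comparison reported above, the sketch below (Python with SciPy and statsmodels, simulated data) applies the original Bonferroni adjustment and the Holm step-down procedure to the p-values from all pairwise correlation tests in one data matrix; the Rom procedure studied in the paper is not available in statsmodels and is omitted.

```python
# A minimal sketch (not the paper's simulation) comparing the original
# Bonferroni adjustment with the Holm step-down procedure on p-values
# from testing every pairwise correlation in a simulated data matrix.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))   # 50 cases, 6 hypothetical variables
X[:, 1] += 0.8 * X[:, 0]           # plant one true relationship

# p-values for H0: rho_ij = 0 over all i < j
pvals = []
k = X.shape[1]
for i in range(k):
    for j in range(i + 1, k):
        _, p = stats.pearsonr(X[:, i], X[:, j])
        pvals.append(p)

reject_bonf, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
reject_holm, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(reject_bonf.sum(), reject_holm.sum())  # Holm rejects at least as many
```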


Journal of Experimental Education | 1984

Planning Educational Research: Determining the Necessary Sample Size.

Stephen Olejnik

In planning a research study, investigators are frequently uncertain regarding the minimal number of subjects needed to adequately test a hypothesis of interest. The present paper discusses the sample size problem and four factors which affect its solution: significance level, statistical power, analysis procedure, and effect size. The interrelationship between these factors is discussed and demonstrated by calculating minimal sample size requirements for a variety of research conditions.
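
A minimal illustration of the interplay the paper describes, using statsmodels' power routines rather than the paper's own calculations: holding the significance level and target power fixed, the required per-group sample size for a two-group mean comparison is driven by the assumed effect size.

```python
# A minimal sketch of the sample-size trade-off: required n per group for
# a two-group t test as a function of effect size (Cohen's d), at a fixed
# significance level and target power. Data-free; uses statsmodels only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small, medium, large effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"d = {d}: about {int(round(n))} subjects per group")
# Smaller effects inflate n rapidly: d = 0.2 needs roughly 394 per group
# at alpha = .05 and power = .80, versus roughly 26 per group at d = 0.8.
```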


Applied Psychological Measurement | 1997

The Power of Rasch Person-Fit Statistics in Detecting Unusual Response Patterns

Mao-neng Fred Li; Stephen Olejnik

Five Rasch person-fit indexes were compared on their ability to detect spuriously high and low nonsystematic response patterns. The moderating effects of test dimensionality, type of misfit, and test length were also investigated. Results indicated that (1) none of the fit indexes was significantly correlated with Rasch trait estimates; (2) their sampling distributions deviated significantly from the standard normal distribution; (3) using adjusted cutoff criteria to identify misfit, ECI2z, ECI4z, lz, and WSR-C performed equally well in the detection of misfit regardless of test dimensionality, type of misfit, and test length (however, only lz is recommended for spuriously high response patterns on a two-dimensional test); (4) the false positive rate for each index was less than the nominal .05 level; (5) Rasch person-fit indexes are more sensitive to spuriously high response patterns than to spuriously low response patterns on a two-dimensional test, but when the test is unidimensional they demonstrate equal sensitivity to both types of misfit; and (6) the detectability of misfit increases with test length.
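
For readers unfamiliar with these indexes, the sketch below computes the standardized log-likelihood statistic lz for a single examinee under the Rasch model, using hypothetical item difficulties and an aberrant response pattern; it is a generic illustration of the index, not the simulation design used in the study.

```python
# A minimal sketch of the lz person-fit index for the Rasch model, given a
# person's dichotomous responses, an ability estimate theta, and item
# difficulties b (all values here are hypothetical).
import numpy as np

def rasch_prob(theta, b):
    # P(correct) under the Rasch model.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def lz(responses, theta, b):
    p = rasch_prob(theta, np.asarray(b, dtype=float))
    x = np.asarray(responses, dtype=float)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))   # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # its variance
    return (l0 - e) / np.sqrt(v)

b = np.linspace(-2, 2, 20)             # 20 items, easy to hard
aberrant = (b > 0).astype(int)         # misses easy items, passes hard ones
print(lz(aberrant, theta=0.0, b=b))    # large negative value flags misfit
```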


Journal of Experimental Education | 1992

Identifying Latent Variables Measured by the Learning and Study Strategies Inventory (LASSI)

Stephen Olejnik; Sherrie L. Nist

The Learning and Study Strategies Inventory (LASSI) is examined through both exploratory and confirmatory factor analyses. Two independent samples of college freshmen completed the LASSI. Data from the first sample of 264 students were used for estimating reliability and for identifying the structural measurement model. The second sample of 143 students provided data to test the proposed model through a confirmatory factor analysis. A three-factor model was suggested in the first set of analyses, and evidence supporting the proposed model is provided in the confirmatory analysis. The three latent variables are labeled values-related activities, goal orientation, and cognitive activities. Interrelations among the latent variables are examined, and the usefulness of the LASSI for future studies testing adult learning models is discussed.
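
As a rough illustration of the exploratory step, the sketch below fits a three-factor model to simulated item scores with scikit-learn; the data and loading structure are hypothetical stand-ins for the LASSI scales, and the confirmatory step would ordinarily be carried out with a structural equation modeling package instead.

```python
# A minimal sketch of an exploratory factor analysis on simulated data
# standing in for LASSI item scores; not the instrument or model reported
# in the article.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_students, n_items = 264, 10

# Hypothetical loading structure: three latent variables, each driving a
# distinct block of items.
loadings = np.zeros((3, n_items))
loadings[0, :4] = 0.8
loadings[1, 4:7] = 0.8
loadings[2, 7:] = 0.8

latent = rng.standard_normal((n_students, 3))
X = latent @ loadings + 0.5 * rng.standard_normal((n_students, n_items))

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(X)
print(np.round(fa.components_.T, 2))   # item-by-factor loading matrix
```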


Multivariate Behavioral Research | 2003

Sample Size Tables for Correlation Analysis with Applications in Partial Correlation and Multiple Regression Analysis.

James Algina; Stephen Olejnik

Tables for selecting sample size in correlation studies are presented. Some of the tables allow selection of sample size so that r (or r², depending on the statistic the researcher plans to interpret) will be within a target interval around the population parameter with probability .95. The intervals are ±.05, ±.10, ±.15, and ±.20 around the population parameter. Other tables allow selection of sample size to meet a target for power when conducting a .05 test of the null hypothesis that a correlation coefficient is zero. Applications of the tables in partial correlation and multiple regression analyses are discussed. SAS and SPSS computer programs are made available to permit researchers to select sample size for levels of accuracy, probabilities, and parameter values and for Type I error rates other than those used in constructing the tables.
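
The tables themselves cover both accuracy-in-estimation and power targets; the sketch below reproduces only the simpler power case, using the standard Fisher z approximation for the sample size needed to detect a nonzero population correlation in a two-sided .05 test.

```python
# A minimal sketch of the power-based sample-size calculation via the
# Fisher z approximation; it does not reproduce the paper's tables or
# their accuracy-in-estimation targets for r and r^2.
import numpy as np
from scipy.stats import norm

def n_for_correlation_test(rho, alpha=0.05, power=0.80):
    # Approximate n for a two-sided test of H0: rho = 0.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) / np.arctanh(rho)) ** 2 + 3))

for rho in (0.1, 0.3, 0.5):
    print(rho, n_for_correlation_test(rho))
# Roughly 783, 85, and 30: required n grows rapidly as the target
# correlation shrinks.
```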


Journal of Educational and Behavioral Statistics | 1984

PARAMETRIC ANCOVA AND THE RANK TRANSFORM ANCOVA WHEN THE DATA ARE CONDITIONALLY NON-NORMAL AND HETEROSCEDASTIC

Stephen Olejnik; James Algina

Parametric analysis of covariance was compared to analysis of covariance with data transformed using ranks. Using a computer simulation approach, the two strategies were compared in terms of the proportion of Type I errors made and statistical power when the conditional distribution of errors was normal and homoscedastic, normal and heteroscedastic, non-normal and homoscedastic, and non-normal and heteroscedastic. The results indicated that parametric ANCOVA was robust to violations of either normality or homoscedasticity. However, when both assumptions were violated, the observed α levels underestimated the nominal α level when sample sizes were small and α = .05. Rank ANCOVA led to a slightly liberal test of the hypothesis when the covariate was non-normal, the sample size was small, and the errors were heteroscedastic. Practically significant power differences favoring the rank ANCOVA procedures were observed with moderate sample sizes and a variety of conditional distributions.
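
To make the two strategies concrete, the sketch below (simulated, conditionally non-normal data) fits a parametric ANCOVA with statsmodels and then refits the same model after replacing the outcome and covariate with their ranks; it illustrates the procedures being compared, not the simulation conditions of the study.

```python
# A minimal sketch of parametric ANCOVA versus rank-transform ANCOVA on
# simulated two-group data with a skewed (non-normal) error distribution.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n_per_group = 30
group = np.repeat(["control", "treatment"], n_per_group)
x = rng.standard_normal(2 * n_per_group)                 # covariate
y = (0.6 * x + 0.5 * (group == "treatment")
     + rng.standard_exponential(2 * n_per_group))        # skewed errors

df = pd.DataFrame({"y": y, "x": x, "group": group})
df["y_rank"] = rankdata(df["y"])                         # rank transform
df["x_rank"] = rankdata(df["x"])

parametric = smf.ols("y ~ C(group) + x", data=df).fit()
rank_based = smf.ols("y_rank ~ C(group) + x_rank", data=df).fit()
print(sm.stats.anova_lm(parametric, typ=2))
print(sm.stats.anova_lm(rank_based, typ=2))
```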


Journal of Educational and Behavioral Statistics | 1987

Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale

Stephen Olejnik; James Algina

Estimated Type I error rates and power are reported for the Brown-Forsythe, O’Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data, by using deviations from group means or group medians, is investigated for the latter two tests. Normal and non-normal distributions, equal and unequal sample-size combinations, and equal and unequal means are investigated for a two-group design. No test is robust and most powerful for all distributions; however, using O’Brien’s procedure avoids the possibility of a liberal test and provides power almost as large as would be obtained by choosing the most powerful test for each distribution type. Using the Brown-Forsythe procedure with heavy-tailed distributions and O’Brien’s procedure for other distributions will increase power modestly and maintain robustness. Using the mean-aligned Klotz test or the unaligned Klotz test with appropriate distributions can increase power, but only at the risk of increased Type I error rates if the tests are not accurately matched to the distribution type.
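
Of the procedures examined, the Brown-Forsythe test is readily available in SciPy as Levene's test with median centering; the sketch below applies it, alongside the mean-centered Levene test, to simulated heavy-tailed data with unequal scales. O'Brien's and the other procedures are not implemented in SciPy and are omitted.

```python
# A minimal sketch of two scale tests on simulated heavy-tailed two-group
# data: mean-centered Levene and the Brown-Forsythe (median-centered)
# variant, both exposed through scipy.stats.levene.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group1 = rng.standard_t(df=5, size=40)          # heavy-tailed distribution
group2 = 1.5 * rng.standard_t(df=5, size=40)    # same shape, larger scale

levene_mean = stats.levene(group1, group2, center="mean")
brown_forsythe = stats.levene(group1, group2, center="median")
print(levene_mean)
print(brown_forsythe)
```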

Collaboration


Dive into Stephen Olejnik’s collaborations.

Top Co-Authors

Wei Ming Luh

National Cheng Kung University
