Joanne C. Keselman
University of Manitoba
Publications
Featured research published by Joanne C. Keselman.
Review of Educational Research | 1998
H. J. Keselman; Carl J. Huberty; Lisa M. Lix; Stephen Olejnik; Robert A. Cribbie; Barbara Donahue; Rhonda K. Kowalchuk; Laureen L. Lowman; Martha D. Petoskey; Joanne C. Keselman; Joel R. Levin
Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.
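Two of the practices the survey found lacking, reporting effect size indices, are straightforward to compute. As an illustrative sketch (not taken from the article), here are two common indices for the between-subjects paradigms the authors examined: Cohen's d for a two-group contrast and eta-squared for a one-way design.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1)
                         + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

def eta_squared(*groups):
    """Eta-squared for a one-way design: SS_between / SS_total."""
    all_x = np.concatenate(groups)
    ss_total = np.sum((all_x - all_x.mean()) ** 2)
    ss_between = sum(len(g) * (np.mean(g) - all_x.mean()) ** 2
                     for g in groups)
    return ss_between / ss_total
```

Both functions take raw data arrays, so an effect size can be reported alongside any test statistic with no extra modeling.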
Review of Educational Research | 1996
Lisa M. Lix; Joanne C. Keselman; H. J. Keselman
The presence of variance heterogeneity and nonnormality in educational and psychological data may frequently invalidate the use of the analysis of variance (ANOVA) F test in one-way independent groups designs. This article offers recommendations to applied researchers on the use of various parametric and nonparametric alternatives to the F test under assumption violation conditions. Meta-analytic techniques were used to summarize the statistical robustness literature on the Type I error properties of the Brown-Forsythe (Brown & Forsythe, 1974), James (1951) second-order, Kruskal-Wallis (Kruskal & Wallis, 1952), and Welch (1951) tests. Two variables, based on the theoretical work of Box (1954), are shown to be highly effective in deciding when a particular alternative procedure should be adopted. Based on the meta-analysis findings, it is recommended that researchers gain a clear understanding of the nature of their data before conducting statistical analyses. Of all of the procedures, the James and Welch ...
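For readers who want to try the recommended heteroscedastic alternative, the following is a minimal sketch of Welch's (1951) one-way test, implemented directly from the standard published formulas (the function name and structure are my own, not from the article).

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's (1951) one-way ANOVA for heterogeneous variances.
    Returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                 # precision weights
    grand = np.sum(w * m) / np.sum(w)         # weighted grand mean
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    F = np.sum(w * (m - grand) ** 2) / (k - 1)
    F /= 1 + 2 * (k - 2) * tmp / (k ** 2 - 1)
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    return F, df1, df2, stats.f.sf(F, df1, df2)
```

With two groups the statistic reduces algebraically to the square of the Welch t statistic, which provides a convenient check against `scipy.stats.ttest_ind(..., equal_var=False)`.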
Psychological Bulletin | 1991
H. J. Keselman; Joanne C. Keselman; Juliet Popper Shaffer
Four pairwise multiple comparison procedures for achieving approximate familywise Type I error control were investigated when multisample sphericity was violated. In all cases, the test statistic was the corresponding sample mean difference divided by an estimate of its variance. Bonferroni, Studentized range, and Studentized maximum modulus critical values, each with Satterthwaite degrees of freedom, and an analog of the Cochran critical value were used with this test statistic.
Journal of Educational and Behavioral Statistics | 1988
H. J. Keselman; Joanne C. Keselman
Two Tukey multiple comparison procedures as well as a Bonferroni and multivariate approach were compared for their rates of Type I error and any-pairs power when multisample sphericity was not satisfied and the design was unbalanced. Pairwise comparisons of unweighted and weighted repeated measures means were computed. Results indicated that heterogeneous covariance matrices in combination with unequal group sizes resulted in substantially inflated rates of Type I error for all MCPs involving comparisons of unweighted means. For tests of weighted means, both the Bonferroni and a multivariate critical value limited the number of Type I errors; however, the Bonferroni procedure provided a more powerful test, particularly when the number of repeated measures treatment levels was large.
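The Bonferroni approach favored here is simple to apply. As an illustrative sketch (not the article's exact procedure, which used specialized critical values), Bonferroni-protected pairwise comparisons of repeated measures means can be run as paired t tests with adjusted p values:

```python
from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_pairwise_rm(data, alpha=0.05):
    """Bonferroni-protected pairwise paired t tests for an
    (n_subjects x k_levels) repeated measures array.
    Returns a list of (i, j, t, p_adjusted, reject) tuples."""
    k = data.shape[1]
    pairs = list(combinations(range(k), 2))
    results = []
    for i, j in pairs:
        t, p = stats.ttest_rel(data[:, i], data[:, j])
        p_adj = min(1.0, p * len(pairs))   # Bonferroni adjustment
        results.append((i, j, t, p_adj, p_adj < alpha))
    return results
```

Because each paired t test uses only the two levels it compares, the procedure does not pool variances across levels and so is not distorted by the heterogeneous covariance matrices the study examined.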
Psychological Bulletin | 1991
H. J. Keselman; Joanne C. Keselman; Paul A. Games
This article argues that the most reasonable and cautious definition of error rate in the multiple comparisons problem is the maximum familywise rate of Type I error (MFWER), that is, the maximum error rate attainable under all possible null hypotheses. It shows that the original formulations of Fisher's least significant difference (LSD) and the Newman-Keuls procedures, which define the error rate with respect to only the complete null hypothesis, do not limit the MFWER to the level of significance. Modified LSD and Newman-Keuls procedures that do limit the MFWER are presented. Finally, additional multiple comparison procedures that limit the MFWER and are more powerful than currently used tests are enumerated.
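The failure of the original LSD under a partial null is easy to demonstrate by simulation. The following sketch (my own illustration, not from the article; for simplicity it uses ordinary two-sample t tests at stage two rather than pooled-MSE tests) builds a configuration with three equal means plus one outlying mean, so the omnibus F rejects almost every time and the unadjusted pairwise tests among the three true-null groups inflate the familywise error:

```python
import numpy as np
from scipy import stats

def lsd_mfwer(reps=2000, n=20, alpha=0.05, seed=1):
    """Estimate the familywise Type I error rate of the original
    (unmodified) Fisher LSD under a partial null hypothesis."""
    rng = np.random.default_rng(seed)
    errors = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, 1.0, n) for _ in range(3)]
        groups.append(rng.normal(10.0, 1.0, n))  # the non-null group
        if stats.f_oneway(*groups)[1] >= alpha:
            continue  # LSD stops here, so no Type I error is possible
        # Stage two: unadjusted pairwise t tests among the three
        # true-null groups; any rejection is a familywise error.
        errors += any(
            stats.ttest_ind(groups[i], groups[j])[1] < alpha
            for i in range(3) for j in range(i + 1, 3)
        )
    return errors / reps
```

Because the outlying fourth group makes the complete-null protection of the omnibus F irrelevant, the estimated rate lands well above the nominal .05, which is exactly the MFWER inflation the article describes.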
Educational and Psychological Measurement | 1987
Joanne C. Keselman; H. J. Keselman
The power to detect main and interaction effects in a factorial design was determined when the Bonferroni method was used to control the overall rate of Type I error at a conventional five percent level. For sample sizes typical of educational research, the power of this procedure is shown to fall considerably below recommended standards. Alternative applications of the Bonferroni procedure are illustrated and discussed.
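The power loss from splitting alpha can be quantified with the noncentral t distribution. As an illustrative sketch (not the article's factorial computation): the power of a single two-sided two-sample t test when alpha is divided over m Bonferroni comparisons, for standardized effect size d and n subjects per group.

```python
import numpy as np
from scipy import stats

def bonferroni_power(d, n, m, alpha=0.05):
    """Power of one two-sided two-sample t test (effect size d,
    n per group) when alpha is split over m Bonferroni comparisons."""
    df = 2 * n - 2
    crit = stats.t.ppf(1 - (alpha / m) / 2, df)  # adjusted critical value
    ncp = d * np.sqrt(n / 2)                     # noncentrality parameter
    return stats.nct.sf(crit, df, ncp) + stats.nct.cdf(-crit, df, ncp)
```

For a medium effect (d = 0.5) and n = 30 per group, typical of the educational studies surveyed, power is already under .50 for a single test and drops steadily as m grows, illustrating the article's point.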
British Journal of Mathematical and Statistical Psychology | 1990
Joanne C. Keselman; H. J. Keselman
British Journal of Mathematical and Statistical Psychology | 1996
Joanne C. Keselman; Lisa M. Lix; H. J. Keselman
Statistics in Medicine | 1984
H. J. Keselman; Joanne C. Keselman
Psychophysiology | 1988
H. J. Keselman; Joanne C. Keselman