Rhonda K. Kowalchuk
University of Manitoba
Publications
Featured research published by Rhonda K. Kowalchuk.
Review of Educational Research | 1998
H. J. Keselman; Carl J. Huberty; Lisa M. Lix; Stephen Olejnik; Robert A. Cribbie; Barbara Donahue; Rhonda K. Kowalchuk; Laureen L. Lowman; Martha D. Petoskey; Joanne C. Keselman; Joel R. Levin
Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.
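The closing recommendations (report effect sizes and base sample sizes on power analyses) can be illustrated with a brief, hedged Python sketch. The data, the 0.5 target effect size, and the alpha/power targets below are illustrative assumptions, not values from the article.

```python
# Illustrative sketch only: a standardized effect size (Cohen's d) and an
# a priori power-based sample size for a two-group comparison.
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(42)
group1 = rng.normal(loc=0.0, scale=1.0, size=25)   # hypothetical data
group2 = rng.normal(loc=0.5, scale=1.0, size=25)

# Cohen's d with a pooled standard deviation
n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                     (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
d = (group2.mean() - group1.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# Sample size per group needed to detect d = 0.5 with alpha = .05 and
# power = .80 (conventional, illustrative targets).
n_required = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required n per group: {np.ceil(n_required):.0f}")
```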
Communications in Statistics - Simulation and Computation | 1998
H. J. Keselman; James Algina; Rhonda K. Kowalchuk; Russell D. Wolfinger
The mixed model approach to the analysis of repeated measurements allows users to model the covariance structure of their data. That is, rather than using a univariate or a multivariate test statistic for analyzing effects (tests that assume a particular form for the covariance structure), the mixed model approach allows the data to determine the appropriate structure. According to advocates of the mixed model approach, using the appropriate covariance structure should result in more powerful tests of the repeated measures effects. SAS's (SAS Institute, 1996) mixed model program, PROC MIXED, provides users with two information criteria for selecting the ‘best’ covariance structure, those of Akaike (1974) and Schwarz (1978). Our study compared these likelihood-based criteria to see how effective they would be at detecting various population covariance structures. In particular, the criteria were compared in nonspherical repeated measures designs having equal/unequal group sizes and covariance matrices when data were both ...
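A minimal Python sketch of the underlying idea, not the authors' simulation code or PROC MIXED itself: fit two candidate covariance structures to the same repeated measures data by maximum likelihood and compare Akaike's and Schwarz's criteria. The AR(1) data-generating structure and the two-candidate set are illustrative assumptions.

```python
# Sketch: choosing a covariance structure for repeated measures via AIC/BIC.
# Single group, complete data; only covariance parameters are counted, since
# the mean parameters are identical across candidates.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p = 60, 4                                    # subjects x occasions
ar1 = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
y = rng.multivariate_normal(np.zeros(p), ar1, size=n)

def neg_loglik(cov):
    mu = y.mean(axis=0)                         # profile out the mean
    return -multivariate_normal(mu, cov).logpdf(y).sum()

def fit_cs():
    """Compound symmetry: common variance and common positive correlation."""
    def nll(theta):
        s2 = np.exp(theta[0])
        rho = 1.0 / (1.0 + np.exp(-theta[1]))   # keep rho in (0, 1)
        cov = s2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
        return neg_loglik(cov)
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return res.fun, 2                           # nll, covariance parameter count

def fit_un():
    """Unstructured: the ML estimate is the sample covariance with divisor n."""
    resid = y - y.mean(axis=0)
    return neg_loglik(resid.T @ resid / n), p * (p + 1) // 2

for name, (nll, k) in {"CS": fit_cs(), "UN": fit_un()}.items():
    aic = 2 * nll + 2 * k
    bic = 2 * nll + k * np.log(n)
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```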
British Journal of Mathematical and Statistical Psychology | 1999
H. J. Keselman; James Algina; Rhonda K. Kowalchuk; Russell D. Wolfinger
Looney and Stanley's (1989) recommendations regarding analysis strategies for repeated measures designs containing between-subjects grouping variables and within-subjects repeated measures variables were re-examined and compared to more recent analysis strategies. That is, corrected degrees of freedom univariate tests, multivariate tests, mixed model tests, and tests due to Keselman, Carriere & Lix (1993), Algina (1994), Huynh (1978) and Lecoutre (1991) were compared for rates of Type I error in unbalanced non-spherical repeated measures designs having varied covariance structures and no missing data on the within-subjects variable. Heterogeneous within-subjects and heterogeneous within- and between-subjects structures were investigated along with multivariate non-normality. Results indicated that the tests due to Keselman et al. and to Algina, Huynh and Lecoutre provided effective Type I error control, whereas the default mixed model approach computed with PROC MIXED (SAS Institute, 1995) generally did not. Based on power differences, we recommend that applied researchers adopt the Welch-James type test described by Keselman et al.
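A brief sketch of one member of the "corrected degrees of freedom univariate tests" family discussed here: the Greenhouse-Geisser adjusted F for a single-group repeated measures design. The between-subjects grouping factor studied in the paper is omitted, and the simulated data are purely illustrative.

```python
# Sketch: Greenhouse-Geisser corrected univariate repeated measures F test
# (single group only; illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 30, 4
y = rng.normal(size=(n, p)) + np.array([0.0, 0.2, 0.4, 0.6])  # subjects x occasions

grand = y.mean()
subj_means = y.mean(axis=1, keepdims=True)
occ_means = y.mean(axis=0, keepdims=True)

ss_occ = n * ((occ_means - grand) ** 2).sum()
ss_err = ((y - subj_means - occ_means + grand) ** 2).sum()
df1, df2 = p - 1, (n - 1) * (p - 1)
F = (ss_occ / df1) / (ss_err / df2)

# Greenhouse-Geisser epsilon from the double-centred sample covariance matrix
S = np.cov(y, rowvar=False)
center = np.eye(p) - np.ones((p, p)) / p
S_star = center @ S @ center
eps = np.trace(S_star) ** 2 / ((p - 1) * np.trace(S_star @ S_star))

p_uncorrected = stats.f.sf(F, df1, df2)
p_corrected = stats.f.sf(F, eps * df1, eps * df2)   # corrected degrees of freedom
print(f"F = {F:.2f}, epsilon = {eps:.2f}, "
      f"p (uncorrected) = {p_uncorrected:.4f}, p (GG corrected) = {p_corrected:.4f}")
```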
Communications in Statistics-theory and Methods | 1999
H. J. Keselman; James Algina; Rhonda K. Kowalchuk; Russell D. Wolfinger
Mixed-model analysis is the newest approach to the analysis of repeated measurements. The approach is supposed to be advantageous (i.e., efficient and powerful) because it allows users to model the covariance structure of their data prior to assessing treatment effects. The statistics for this method are based on an F-distribution with degrees of freedom often just approximated by the residual degrees of freedom. However, previous results indicated that these statistics can produce biased Type I error rates under conditions believed to characterize behavioral science research. This study investigates a more complex degrees of freedom method based on Satterthwaite's technique of matching moments. The resulting mixed-model F-tests are compared with a Welch-James-type test which has been found to be generally robust to assumption violations. Simulation results do not universally favor one approach over the other, although additional considerations are discussed outlining the relative merits of each approach.
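Satterthwaite's moment-matching idea appears in its simplest form in the two-sample Welch test; the sketch below shows that special case on illustrative data, not the mixed-model Satterthwaite computation examined in the paper.

```python
# Sketch: Welch-Satterthwaite moment-matching degrees of freedom for the
# two-sample case (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(0.3, 2.0, size=40)    # unequal variances, unequal n

v1, v2 = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
t = (a.mean() - b.mean()) / np.sqrt(v1 + v2)

# Satterthwaite df: match the first two moments of the variance estimate
# to a scaled chi-square distribution.
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(a) - 1) + v2 ** 2 / (len(b) - 1))
p_value = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.3f}, Satterthwaite df = {df:.1f}, p = {p_value:.4f}")

# scipy's Welch test uses the same approximation
print(stats.ttest_ind(a, b, equal_var=False))
```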
Psychometrika | 1998
H. J. Keselman; Rhonda K. Kowalchuk; Lisa M. Lix
Three approaches to the analysis of main and interaction effect hypotheses in nonorthogonal designs were compared in a 2×2 design for data that were neither normal in form nor equal in variance. The approaches involved either least squares or robust estimators of central tendency and variability and/or a test statistic that either pools or does not pool sources of variance. Specifically, we compared the ANOVA F test which used trimmed means and Winsorized variances, the Welch-James test with the usual least squares estimators of central tendency and variability, and the Welch-James test using trimmed means and Winsorized variances. As hypothesized, we found that the latter approach provided excellent Type I error control, whereas the former two did not.
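The robust estimators used here are straightforward to compute; the sketch below shows a 20% trimmed mean and a 20% Winsorized variance on illustrative data (the 20% trimming level is a common choice in this literature, assumed here rather than taken from the article).

```python
# Sketch: 20% trimmed mean and 20% Winsorized variance for one group
# (illustrative heavy-tailed data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=20) + 1.0

trimmed_mean = stats.trim_mean(x, proportiontocut=0.2)
winsorized = np.asarray(stats.mstats.winsorize(x, limits=(0.2, 0.2)))
winsorized_var = winsorized.var(ddof=1)

print(f"mean = {x.mean():.3f}, 20% trimmed mean = {trimmed_mean:.3f}")
print(f"variance = {x.var(ddof=1):.3f}, 20% Winsorized variance = {winsorized_var:.3f}")
```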
British Journal of Mathematical and Statistical Psychology | 2000
H. J. Keselman; Rhonda K. Kowalchuk; James Algina; Lisa M. Lix; Rand R. Wilcox
Non-normality and covariance heterogeneity between groups affect the validity of the traditional repeated measures methods of analysis, particularly when group sizes are unequal. A non-pooled Welch-type statistic (WJ) and the Huynh Improved General Approximation (IGA) test generally have been found to be effective in controlling rates of Type I error in unbalanced non-spherical repeated measures designs even though data are non-normal in form and covariance matrices are heterogeneous. However, under some conditions of departure from multisample sphericity and multivariate normality their rates of Type I error have been found to be elevated. Westfall and Young's results suggest that Type I error control could be improved by combining bootstrap methods with methods based on trimmed means. Accordingly, in our investigation we examined four methods for testing for main and interaction effects in a between- by within-subjects repeated measures design: (a) the IGA and WJ tests with least squares estimators based on theoretically determined critical values; (b) the IGA and WJ tests with least squares estimators based on empirically determined critical values; (c) the IGA and WJ tests with robust estimators based on theoretically determined critical values; and (d) the IGA and WJ tests with robust estimators based on empirically determined critical values. We found that the IGA tests were always robust to assumption violations whether based on least squares or robust estimators or whether critical values were obtained through theoretical or empirical methods. The WJ procedure, however, occasionally resulted in liberal rates of error when based on least squares estimators but always proved robust when applied with robust estimators. Neither approach particularly benefited from adopting bootstrapped critical values. Recommendations are provided to researchers regarding when each approach is best.
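The "empirically determined critical values" are bootstrap critical values. A minimal sketch of that general idea for a simple two-group, trimmed-mean statistic follows; the paper applies it to the IGA and WJ tests in a between- by within-subjects design, and the statistic, trimming level, and data below are illustrative assumptions.

```python
# Sketch: a bootstrap (empirically determined) critical value for a Yuen-type
# trimmed-mean statistic. Groups are centred at their trimmed means so that
# resampling is done under the null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
g1 = rng.standard_t(df=3, size=20)
g2 = rng.standard_t(df=3, size=30) + 0.4

def tstat(a, b, trim=0.2):
    # Yuen-type statistic: trimmed means with Winsorized variance estimates
    return abs(stats.ttest_ind(a, b, equal_var=False, trim=trim).statistic)

observed = tstat(g1, g2)

c1 = g1 - stats.trim_mean(g1, 0.2)      # centre each group at its trimmed mean
c2 = g2 - stats.trim_mean(g2, 0.2)
boot = np.array([
    tstat(rng.choice(c1, size=len(c1), replace=True),
          rng.choice(c2, size=len(c2), replace=True))
    for _ in range(999)
])
critical = np.quantile(boot, 0.95)      # empirical .05-level critical value
print(f"observed = {observed:.2f}, bootstrap critical value = {critical:.2f}, "
      f"reject H0: {observed > critical}")
```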
Multivariate Behavioral Research | 2003
Rhonda K. Kowalchuk; H. J. Keselman; James Algina
The Welch-James (WJ) and the Huynh Improved General Approximation (IGA) tests for interaction were examined with respect to Type I error in a between- by within-subjects repeated measures design when data were non-normal, non-spherical and heterogeneous, particularly when group sizes were unequal. The tests were computed with aligned ranks and compared to the use of least squares and robust estimators (i.e., trimmed means and Winsorized variances/covariances). Critical values were either obtained theoretically or through a bootstrapping method. The IGA and WJ procedures based on aligned ranks always provided a valid test of a repeated measures interaction effect when group sizes were equal and covariance matrices across groups were homogeneous. On the other hand, the use of aligned ranks did not provide a valid test for a repeated measures interaction when covariance matrices were non-spherical with unequal variances across the levels of the repeated measures factor combined with unequal covariance matrices across the grouping factor. The IGA and WJ procedures based on robust estimators provided a valid test of the interaction across investigated conditions; however, under a heavy-tailed distribution, the IGA and WJ procedures based on least squares estimators showed better Type I error control than those based on robust estimators.
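A minimal sketch of the alignment step for a group x time interaction, following the general aligned-rank idea of removing the effects not under test before ranking; the specific alignment used in the article may differ in detail (e.g., it may also remove subject effects), and the two-group, four-occasion layout is an illustrative assumption.

```python
# Sketch: aligned ranks for a repeated measures interaction. Estimated grand,
# group, and occasion effects are removed from each observation, and the
# aligned scores are then ranked across the whole data set.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)
n_per_group, p = 15, 4
y = np.stack([rng.normal(size=(n_per_group, p)),          # group 1
              rng.normal(size=(n_per_group, p)) + 0.3])   # group 2; shape (2, n, p)

grand = y.mean()
group_eff = y.mean(axis=(1, 2), keepdims=True) - grand
occ_eff = y.mean(axis=(0, 1), keepdims=True) - grand

aligned = y - grand - group_eff - occ_eff      # interaction + error remain
ranks = rankdata(aligned).reshape(y.shape)     # overall ranks of the aligned scores

# The interaction test (e.g., IGA or WJ) would then be computed on `ranks`
# in place of the raw scores.
print(ranks[0, :3])                            # first three subjects, group 1
```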
British Journal of Mathematical and Statistical Psychology | 2000
H. J. Keselman; Rhonda K. Kowalchuk; Robert J. Boik
In a previous paper, Boik presented an empirical Bayes (EB) approach to the analysis of repeated measurements. The EB approach is a blend of the conventional univariate and multivariate approaches. Specifically, in the EB approach, the underlying covariance matrix is estimated by a weighted sum of the univariate and multivariate estimators. In addition to demonstrating that his approach controls test size and frequently is more powerful than either the epsilon-adjusted univariate or multivariate approaches, Boik showed how conventional multivariate software can be used to conduct EB analyses. Our investigation examined the Type I error properties of the EB approach when its derivational assumptions were not satisfied as well as when other factors known to affect the conventional tests of significance were varied. For comparative purposes we also investigated procedures presented by Huynh and by Keselman, Carriere, and Lix, procedures designed for non-spherical data and covariance heterogeneity, as well as an adjusted univariate and multivariate test statistic. Our results indicate that when the response variable is normally distributed and group sizes are equal, the EB approach was robust to violations of its derivational assumptions and therefore is recommended due to the power findings reported by Boik. However, we also found that both the EB approach and the adjusted univariate and multivariate procedures were prone to depressed or elevated rates of Type I error when data were non-normally distributed and covariance matrices and group sizes were either positively or negatively paired with one another. On the other hand, the Huynh and Keselman et al. procedures were generally robust to these same pairings of covariance matrices and group sizes.
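The abstract describes the EB estimator as a weighted sum of the univariate and multivariate covariance estimators. The sketch below shows only that blending idea: the weight in Boik's derivation is data-dependent, whereas the fixed weight here is a purely illustrative placeholder, as are the data and the spherical target.

```python
# Sketch of the blending idea only: combine a structured ("univariate"-style)
# and an unstructured ("multivariate") covariance estimate with a weight w.
# The actual empirical Bayes weight in Boik's approach is estimated from the
# data; the fixed w below is for illustration.
import numpy as np

rng = np.random.default_rng(6)
n, p = 25, 4
y = rng.multivariate_normal(np.zeros(p), np.diag([1.0, 1.5, 2.0, 2.5]), size=n)

S = np.cov(y, rowvar=False)             # multivariate (unstructured) estimate
sphere = np.eye(p) * np.trace(S) / p    # spherical target in the univariate spirit

w = 0.5                                 # illustrative weight, not Boik's
S_blend = w * sphere + (1 - w) * S
print(np.round(S_blend, 2))
```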
Biometrical Journal | 2002
H. J. Keselman; Rand R. Wilcox; Rhonda K. Kowalchuk; Stephen Olejnik
We compared three tests for mean equality: the Welch (1938) heteroscedastic statistic, the Zhou et al. (1997) test, derived for use with skewed lognormal data, and Yuen's (1974) procedure, which uses robust estimators of central tendency and variability with the Welch test in order to combat the combined effects of nonnormality and variance heterogeneity. Over the 162 conditions of nonnormality and variance heterogeneity we investigated, only the Yuen procedure reliably controlled its rate of Type I error.
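Yuen's (1974) procedure is available in SciPy (version 1.7 or later) through the trim argument of ttest_ind; the sketch below compares it with the Welch test on illustrative skewed, heteroscedastic data (the lognormal parameters and 20% trimming level are assumptions for the example).

```python
# Sketch: Welch's heteroscedastic test versus Yuen's trimmed test on skewed,
# heteroscedastic illustrative data (requires scipy >= 1.7 for `trim`).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.lognormal(mean=0.0, sigma=0.6, size=20)
b = rng.lognormal(mean=0.0, sigma=1.2, size=45)

welch = stats.ttest_ind(a, b, equal_var=False)           # Welch test
yuen = stats.ttest_ind(a, b, equal_var=False, trim=0.2)  # Yuen test, 20% trimming
print(f"Welch: t = {welch.statistic:.2f}, p = {welch.pvalue:.3f}")
print(f"Yuen : t = {yuen.statistic:.2f}, p = {yuen.pvalue:.3f}")
```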
Communications in Statistics - Simulation and Computation | 2000
H. J. Keselman; Rand R. Wilcox; Jason Taylor; Rhonda K. Kowalchuk
Tests for mean equality proposed by Weerahandi (1995) and Chen and Chen (1998), tests that do not require equality of population variances, were examined when data were not only heterogeneous but also nonnormal in unbalanced completely randomized designs. Furthermore, these tests were compared to a test examined by Lix and Keselman (1998), a test that uses a heteroscedastic statistic (i.e., Welch, 1951) with robust estimators (20% trimmed means and Winsorized variances). Our findings confirmed previously published results that the tests are indeed robust to variance heterogeneity when the data are obtained from normal populations. However, the Weerahandi (1995) and Chen and Chen (1998) tests were not found to be robust when data were obtained from nonnormal populations. Indeed, rates of Type I error were typically in excess of 10% and, at times, exceeded 50%. On the other hand, the statistic presented by Lix and Keselman (1998) was generally robust to variance heterogeneity and nonnormality.
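The heteroscedastic statistic referred to here is Welch's (1951) test for several means; the sketch below implements its least squares form directly on illustrative data. The robust variant examined by Lix and Keselman substitutes trimmed means and Winsorized variances, which is not shown.

```python
# Sketch: Welch's (1951) heteroscedastic one-way test for k means
# (least squares version; illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
groups = [rng.normal(0.0, 1.0, 20), rng.normal(0.2, 2.0, 35), rng.normal(0.5, 3.0, 15)]

k = len(groups)
n = np.array([len(g) for g in groups])
m = np.array([g.mean() for g in groups])
v = np.array([g.var(ddof=1) for g in groups])

w = n / v                                   # precision weights
grand = (w * m).sum() / w.sum()
A = (w * (m - grand) ** 2).sum() / (k - 1)
lam = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
B = 1 + 2 * (k - 2) / (k ** 2 - 1) * lam

F = A / B
df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
p_value = stats.f.sf(F, df1, df2)
print(f"Welch F = {F:.3f}, df = ({df1}, {df2:.1f}), p = {p_value:.4f}")
```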