
Publication


Featured research published by H. J. Keselman.


Review of Educational Research | 1998

Statistical Practices of Educational Researchers: An Analysis of their ANOVA, MANOVA, and ANCOVA Analyses

H. J. Keselman; Carl J. Huberty; Lisa M. Lix; Stephen Olejnik; Robert A. Cribbie; Barbara Donahue; Rhonda K. Kowalchuk; Laureen L. Lowman; Martha D. Petoskey; Joanne C. Keselman; Joel R. Levin

Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.


Review of Educational Research | 1996

Consequences of Assumption Violations Revisited: A Quantitative Review of Alternatives to the One-Way Analysis of Variance F Test

Lisa M. Lix; Joanne C. Keselman; H. J. Keselman

The presence of variance heterogeneity and nonnormality in educational and psychological data may frequently invalidate the use of the analysis of variance (ANOVA) F test in one-way independent groups designs. This article offers recommendations to applied researchers on the use of various parametric and nonparametric alternatives to the F test under assumption violation conditions. Meta-analytic techniques were used to summarize the statistical robustness literature on the Type I error properties of the Brown-Forsythe (Brown & Forsythe, 1974), James (1951) second-order, Kruskal-Wallis (Kruskal & Wallis, 1952), and Welch (1951) tests. Two variables, based on the theoretical work of Box (1954), are shown to be highly effective in deciding when a particular alternative procedure should be adopted. Based on the meta-analysis findings, it is recommended that researchers gain a clear understanding of the nature of their data before conducting statistical analyses. Of all of the procedures, the James and Welch ...
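
As a rough illustration of two of the alternatives reviewed in this paper, the sketch below implements the Welch (1951) heteroscedastic test directly from its textbook formula and runs SciPy's Kruskal-Wallis test alongside it. The three samples are simulated and purely illustrative.

    import numpy as np
    from scipy import stats

    def welch_anova(*groups):
        """Welch's (1951) heteroscedastic one-way test: weights each group by
        n / s^2 instead of pooling variances, with approximate denominator df."""
        k = len(groups)
        n = np.array([len(g) for g in groups], dtype=float)
        m = np.array([np.mean(g) for g in groups])
        v = np.array([np.var(g, ddof=1) for g in groups])
        w = n / v                                      # precision weights
        grand = np.sum(w * m) / np.sum(w)              # weighted grand mean
        num = np.sum(w * (m - grand) ** 2) / (k - 1)
        tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
        den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
        f, df1, df2 = num / den, k - 1, (k ** 2 - 1) / (3 * tmp)
        return f, df1, df2, stats.f.sf(f, df1, df2)

    # Illustrative samples with very different spreads and sizes
    rng = np.random.default_rng(1)
    g1, g2, g3 = rng.normal(0, 1, 20), rng.normal(0, 3, 25), rng.normal(0.5, 5, 15)
    print(welch_anova(g1, g2, g3))                     # Welch heteroscedastic test
    print(stats.kruskal(g1, g2, g3))                   # rank-based Kruskal-Wallis test

Unlike the ANOVA F test, the Welch statistic weights each group by n/s^2, so a group with a large variance contributes proportionately less to the between-group variability.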


Psychological Methods | 2003

Modern Robust Data Analysis Methods: Measures of Central Tendency

Rand R. Wilcox; H. J. Keselman

Various statistical methods, developed after 1970, offer the opportunity to substantially improve upon the power and accuracy of the conventional t test and analysis of variance methods for a wide range of commonly occurring situations. The authors briefly review some of the more fundamental problems with conventional methods based on means; provide some indication of why recent advances, based on robust measures of location (or central tendency), have practical value; and describe why modern investigations dealing with nonnormality find practical problems when comparing means, in contrast to earlier studies. Some suggestions are made about how to proceed when using modern methods.
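
A minimal sketch of the contrast the authors draw, assuming SciPy is available; the contaminated-normal mixture below is made up for illustration and shows how the 20% trimmed mean and median are far less sensitive to heavy tails than the ordinary mean.

    import numpy as np
    from scipy import stats

    # Hypothetical contaminated-normal sample: about 90% of observations come
    # from N(0, 1) and the rest from a heavy-tailed N(0, 10) component.
    rng = np.random.default_rng(0)
    x = np.where(rng.random(200) < 0.9, rng.normal(0, 1, 200), rng.normal(0, 10, 200))

    print("mean:         ", np.mean(x))
    print("20% trimmed:  ", stats.trim_mean(x, 0.2))   # averages the middle 60%
    print("median:       ", np.median(x))

    # Winsorized SD: the extreme 20% in each tail is pulled in to the nearest
    # retained value; this is the scale estimate usually paired with a trimmed mean.
    xw = np.asarray(stats.mstats.winsorize(x, limits=(0.2, 0.2)))
    print("Winsorized SD:", np.std(xw, ddof=1))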


Psychophysiology | 1998

Testing treatment effects in repeated measures designs: An update for psychophysiological researchers

H. J. Keselman

In 1987, Jennings enumerated data analysis procedures that authors must follow for analyzing effects in repeated measures designs when submitting papers to Psychophysiology. These prescriptions were intended to counteract the effects of nonspherical data, a condition known to produce biased tests of significance. Since this editorial policy was established, additional refinements to the analysis of these designs have appeared in print in a number of sources that are not likely to be routinely read by psychophysiological researchers. Accordingly, this paper includes additional procedures not previously enumerated in the editorial policy that can be used to analyze repeated measurements. Furthermore, I indicate how numerical solutions can easily be obtained.
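
One widely used response to nonsphericity, the Greenhouse-Geisser epsilon adjustment to the degrees of freedom, can be computed directly from the sample covariance matrix of the repeated measurements. The sketch below is a generic illustration with simulated data, not a reproduction of the procedures enumerated in this paper.

    import numpy as np

    def gg_epsilon(data):
        """Greenhouse-Geisser epsilon for a subjects-by-conditions matrix.
        Epsilon is 1 under sphericity; smaller values indicate stronger
        violations and shrink the usual (k-1, (k-1)(n-1)) degrees of freedom."""
        S = np.cov(data, rowvar=False)                 # k x k sample covariance
        k = S.shape[0]
        J = np.eye(k) - np.ones((k, k)) / k            # centering matrix
        D = J @ S @ J                                  # double-centered covariance
        return np.trace(D) ** 2 / ((k - 1) * np.trace(D @ D))

    # Hypothetical data: 12 subjects, 4 conditions with unequal variances,
    # a pattern that violates sphericity.
    rng = np.random.default_rng(2)
    y = rng.normal(size=(12, 4)) * np.array([1.0, 1.0, 2.0, 4.0]) + rng.normal(size=(12, 1))
    print(gg_epsilon(y))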


Communications in Statistics - Simulation and Computation | 1998

A comparison of two approaches for selecting covariance structures in the analysis of repeated measurements

H. J. Keselman; James Algina; Rhonda K. Kowalchuk; Russell D. Wolfinger

The mixed model approach to the analysis of repeated measurements allows users to model the covariance structure of their data. That is, rather than using a univariate or a multivariate test statistic for analyzing effects, tests that assume a particular form for the covariance structure, the mixed model approach allows the data to determine the appropriate structure. Using the appropriate covariance structure should result in more powerful tests of the repeated measures effects according to advocates of the mixed model approach. SAS’ (SAS Institute, 1996) mixed model program, PROC MIXED, provides users with two information criteria for selecting the ‘best’ covariance structure, Akaike (1974) and Schwarz (1978). Our study compared these likelihood-based criteria to see how effective they would be for detecting various population covariance structures. In particular, the criteria were compared in nonspherical repeated measures designs having equal/unequal group sizes and covariance matrices when data were both ...
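
A minimal sketch of how the two criteria order candidate covariance structures once maximized log-likelihoods and covariance-parameter counts are available from a mixed-model fit. The numbers below are invented for illustration and do not come from the study.

    import math

    def aic(loglik, n_params):
        return -2 * loglik + 2 * n_params

    def bic(loglik, n_params, n_obs):
        return -2 * loglik + n_params * math.log(n_obs)

    # Hypothetical maximized log-likelihoods for three candidate covariance
    # structures fitted to the same repeated measures data (n_obs = 240):
    candidates = {
        "compound symmetry": (-412.3, 2),   # (log-likelihood, covariance parameters)
        "AR(1)":             (-401.8, 2),
        "unstructured":      (-396.5, 10),
    }
    for name, (ll, p) in candidates.items():
        print(f"{name:18s}  AIC={aic(ll, p):7.1f}  BIC={bic(ll, p, 240):7.1f}")

Smaller values are better for both criteria; the Schwarz (BIC) penalty grows with the sample size, so it punishes the many extra parameters of an unstructured matrix more heavily than AIC does.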


British Journal of Mathematical and Statistical Psychology | 2002

Controlling the rate of Type I error over a large set of statistical tests.

H. J. Keselman; Robert A. Cribbie; Burt Holland

When many tests of significance are examined in a research investigation with procedures that limit the probability of making at least one Type I error--the so-called familywise techniques of control--the likelihood of detecting effects can be very low. That is, when familywise error controlling methods are adopted to assess statistical significance, the size of the critical value that must be exceeded in order to obtain statistical significance can be extremely large when the number of tests to be examined is also very large. In our investigation we examined three methods for increasing the sensitivity to detect effects when family size is large: the false discovery rate method of error control presented by Benjamini and Hochberg (1995), a modified false discovery rate presented by Benjamini and Hochberg (2000) which estimates the number of true null hypotheses prior to adopting false discovery rate control, and a familywise method modified to control the probability of committing two or more Type I errors in the family of tests examined--not one, as is the case with the usual familywise techniques. Our results indicated that the level of significance for the two or more familywise method of Type I error control varied with the testing scenario and needed to be set on occasion at values in excess of 0.15 in order to control the two or more rate at a reasonable value of 0.01. In addition, the false discovery rate methods typically resulted in substantially greater power to detect non-null effects even though their levels of significance were set at the standard 0.05 value. Accordingly, we recommend the Benjamini and Hochberg (1995, 2000) methods of Type I error control when the number of tests in the family is large.
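
A small sketch of the Benjamini and Hochberg (1995) step-up procedure discussed above; the family of p values is invented. For routine use, statsmodels' multipletests function with method="fdr_bh" implements the same procedure.

    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Benjamini-Hochberg (1995) step-up procedure: sort the m p values,
        find the largest i with p_(i) <= (i / m) * q, and reject every
        hypothesis whose p value is at or below that cutoff."""
        p = np.asarray(pvals, dtype=float)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= (np.arange(1, m + 1) / m) * q
        reject = np.zeros(m, dtype=bool)
        if below.any():
            cutoff = np.nonzero(below)[0].max()        # largest i satisfying the bound
            reject[order[:cutoff + 1]] = True
        return reject

    # Hypothetical family of ten tests
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.760]
    print(benjamini_hochberg(pvals))                   # rejects only the two smallest here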


American Educational Research Journal | 1977

Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

Joanne C. Rogan; H. J. Keselman

Numerous investigations have examined the effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test and the prevailing conclusion has been that when sample sizes are equal, the ANOVA is robust to variance heterogeneity. However, Box (1954) reported a Type I error rate of .12, for a 5% nominal level, when unequal variances were paired with equal sample sizes. The present paper explored this finding, examining varying degrees and patterns of variance heterogeneity for varying sample sizes and number of treatment groups. The data indicate that the rate of Type I error varies as a function of the degree of variance heterogeneity and, consequently, it should not be assumed that the ANOVA F-test is always robust to variance heterogeneity when sample sizes are equal.
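
The finding can be checked with a small Monte Carlo sketch: equal group sizes, one group with a much larger variance, and a count of how often the ANOVA F test rejects a true null hypothesis at the .05 level. The settings below are arbitrary and only illustrate the kind of check reported in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, reps, alpha = 10, 10000, 0.05
    sds = [1.0, 1.0, 1.0, 1.0, 3.0]     # equal n, one group with nine times the variance
    rejections = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, sd, n) for sd in sds]   # all population means equal: H0 is true
        rejections += stats.f_oneway(*groups)[1] < alpha
    print(rejections / reps)            # empirical Type I error; tends to drift above the nominal .05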


Psychological Methods | 2008

A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes.

H. J. Keselman; James Algina; Lisa M. Lix; Rand R. Wilcox; Kathleen N. Deering

Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. The authors describe a nonparametric bootstrap methodology that can provide improved Type I error control. In addition, the authors indicate how researchers can set robust confidence intervals around a robust effect size parameter estimate. In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods.
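
The two-group ingredient of this framework, comparing trimmed means with a Welch-type approximate-degrees-of-freedom statistic built from Winsorized variances, is usually attributed to Yuen (1974). The sketch below is a generic implementation of that statistic with simulated data, not the authors' SAS program.

    import numpy as np
    from scipy import stats

    def yuen(x, y, trim=0.2):
        """Yuen's (1974) test: compare two trimmed means using a Welch-type
        statistic whose standard error comes from Winsorized variances."""
        def parts(a):
            a = np.sort(np.asarray(a, dtype=float))
            n = len(a)
            g = int(np.floor(trim * n))
            h = n - 2 * g                                    # effective sample size
            w = np.clip(a, a[g], a[n - g - 1])               # Winsorized sample
            d = (n - 1) * np.var(w, ddof=1) / (h * (h - 1))  # squared SE contribution
            return a[g:n - g].mean(), d, h
        t1, d1, h1 = parts(x)
        t2, d2, h2 = parts(y)
        t = (t1 - t2) / np.sqrt(d1 + d2)
        df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
        return t, df, 2 * stats.t.sf(abs(t), df)

    rng = np.random.default_rng(4)
    print(yuen(rng.normal(0, 1, 30), rng.normal(0.8, 4, 30)))   # simulated unequal-variance groups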


Psychological Methods | 2005

An Alternative to Cohen's Standardized Mean Difference Effect Size: A Robust Parameter and Confidence Interval in the Two Independent Groups Case.

James Algina; H. J. Keselman; Randall D. Penfield

The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for confidence intervals for the new effect size measure. The confidence intervals were constructed by using the noncentral t distribution and the percentile bootstrap. Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval.
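
A sketch of the two pieces described above: a trimmed-mean, Winsorized-variance analogue of Cohen's d (rescaled by .642, the constant commonly used with 20% trimming so the index is comparable to Cohen's d under normality) and a percentile bootstrap interval around it. The pooling of the Winsorized variances and the sample data are illustrative assumptions, not a reproduction of the paper's computations.

    import numpy as np
    from scipy import stats

    def robust_d(x, y, trim=0.2, scale=0.642):
        """Robust standardized mean difference: difference in 20% trimmed means
        divided by a pooled 20% Winsorized standard deviation, rescaled so the
        estimate is on the familiar Cohen's d scale under normality."""
        tx, ty = stats.trim_mean(x, trim), stats.trim_mean(y, trim)
        wx = np.asarray(stats.mstats.winsorize(x, limits=(trim, trim)))
        wy = np.asarray(stats.mstats.winsorize(y, limits=(trim, trim)))
        pooled = ((len(x) - 1) * np.var(wx, ddof=1) + (len(y) - 1) * np.var(wy, ddof=1)) \
                 / (len(x) + len(y) - 2)
        return scale * (tx - ty) / np.sqrt(pooled)

    def percentile_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI: resample each group with replacement,
        recompute the robust effect size, and take the empirical quantiles."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        boots = [robust_d(rng.choice(x, len(x)), rng.choice(y, len(y)))
                 for _ in range(n_boot)]
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    rng = np.random.default_rng(5)
    a, b = rng.normal(0.5, 1, 40), rng.standard_t(3, 40)   # one heavy-tailed group
    print(robust_d(a, b), percentile_ci(a, b))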


Psychophysiology | 2003

A generally robust approach to hypothesis testing in independent and correlated groups designs

H. J. Keselman; Rand R. Wilcox; Lisa M. Lix

Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. These problems are vastly reduced when using a robust measure of location; incorporating bootstrap methods can result in additional benefits. This paper illustrates the use of trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. As well, we indicate when a bootstrap methodology can be effectively employed to provide improved Type I error control. We also illustrate, with examples from the psychophysiological literature, the use of a new computer program to obtain numerical results for these solutions.
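
The bootstrap component mentioned here can be sketched as a bootstrap-t calibration of the same Yuen-type statistic shown earlier: each group is centered at its own trimmed mean so that the null hypothesis holds in the bootstrap samples, and the observed statistic is then referred to the resulting bootstrap distribution rather than to a t distribution. The data and settings below are illustrative only.

    import numpy as np
    from scipy import stats

    def yuen_stat(x, y, trim=0.2):
        """Difference in trimmed means divided by its Winsorized standard error."""
        def parts(a):
            a = np.sort(np.asarray(a, dtype=float))
            n, g = len(a), int(np.floor(trim * len(a)))
            h = n - 2 * g
            w = np.clip(a, a[g], a[n - g - 1])
            return a[g:n - g].mean(), (n - 1) * np.var(w, ddof=1) / (h * (h - 1))
        (t1, d1), (t2, d2) = parts(x), parts(y)
        return (t1 - t2) / np.sqrt(d1 + d2)

    def bootstrap_t_pvalue(x, y, n_boot=1000, seed=0):
        """Center each group at its trimmed mean (so H0 is true), resample within
        groups, and compare the observed statistic with the bootstrap distribution."""
        rng = np.random.default_rng(seed)
        xc, yc = x - stats.trim_mean(x, 0.2), y - stats.trim_mean(y, 0.2)
        t_obs = yuen_stat(x, y)
        t_boot = np.array([yuen_stat(rng.choice(xc, len(xc)), rng.choice(yc, len(yc)))
                           for _ in range(n_boot)])
        return np.mean(np.abs(t_boot) >= abs(t_obs))

    rng = np.random.default_rng(6)
    print(bootstrap_t_pvalue(rng.standard_t(3, 25), rng.standard_t(3, 25) + 1.0))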

Collaboration


Dive into H. J. Keselman's collaborations.

Top Co-Authors

Rand R. Wilcox
University of Southern California

Lisa M. Lix
University of Manitoba

Paul A. Games
Pennsylvania State University