Publication


Featured research published by Carl J. Huberty.


Psychological Bulletin | 1989

Multivariate analysis versus multiple univariate analyses.

Carl J. Huberty; John D. Morris

The argument for preceding multiple analyses of variance (ANOVAs) with a multivariate analysis of variance (MANOVA) to control for Type I error is challenged. Several situations are discussed in which multiple ANOVAs might be conducted without the necessity of a preliminary MANOVA. Three reasons for considering a multivariate analysis are discussed: to identify outcome variable system constructs, to select variable subsets, and to determine the relative worth of variables. The analyses discussed in this article are those appropriate in research situations in which analysis of variance techniques are useful, that is, for studying the effects of treatment variables on outcome/response variables (in ex post facto as well as experimental studies). We speak of a univariate analysis of variance (ANOVA) when a single outcome variable is involved; when multiple outcome variables are involved, it is a multivariate analysis of variance (MANOVA). (Covariance analyses may also be included.) With multiple outcome variables, the typical analysis approach used in the group-comparison context, at least in the behavioral sciences, is either (a) to conduct multiple ANOVAs or (b) to conduct a MANOVA followed by multiple ANOVAs. That these are two popular choices may be concluded from a survey of some prominent behavioral science journals. The 1986 issues ...
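
By way of illustration, here is a minimal sketch of the two routes the abstract contrasts, run on simulated two-group data with two outcome variables; the variable names and the use of scipy and statsmodels are assumptions, not part of the original article:

```python
# Sketch: multiple univariate ANOVAs vs. a single MANOVA on the same data.
# Data are simulated; all names are illustrative.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], 30),
    "y1": np.concatenate([rng.normal(0, 1, 30), rng.normal(0.5, 1, 30)]),
    "y2": np.concatenate([rng.normal(0, 1, 30), rng.normal(0.3, 1, 30)]),
})

# Route (a): one univariate ANOVA per outcome variable.
for outcome in ["y1", "y2"]:
    groups = [g[outcome].to_numpy() for _, g in df.groupby("group")]
    f, p = stats.f_oneway(*groups)
    print(f"ANOVA on {outcome}: F = {f:.2f}, p = {p:.4f}")

# Route (b): a MANOVA treating y1 and y2 as one outcome-variable system.
manova = MANOVA.from_formula("y1 + y2 ~ group", data=df)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```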


Review of Educational Research | 1998

Statistical Practices of Educational Researchers: An Analysis of their ANOVA, MANOVA, and ANCOVA Analyses

H. J. Keselman; Carl J. Huberty; Lisa M. Lix; Stephen Olejnik; Robert A. Cribbie; Barbara Donahue; Rhonda K. Kowalchuk; Laureen L. Lowman; Martha D. Petoskey; Joanne C. Keselman; Joel R. Levin

Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. Recommendations are offered to rectify these shortcomings.
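
As a concrete companion to the survey's findings, the following sketch shows the three practices found to be rare (an assumption check, an effect size report, and a power-based sample size calculation) on simulated data; the libraries and numbers are illustrative assumptions:

```python
# Sketch of three practices the survey found lacking: a validity-assumption
# check, an effect size report, and an a priori power analysis. Data simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(0.5, 1.2, 40)

# (a) Validity assumption: homogeneity of variance (Levene's test).
print("Levene:", stats.levene(a, b))

# (b) Effect size: Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
d = (b.mean() - a.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# (c) Sample size chosen on power considerations: medium effect, alpha = .05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power: {n_per_group:.0f}")
```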


Handbook of Applied Multivariate Statistics and Mathematical Modeling | 2000

Multivariate Analysis of Variance and Covariance

Carl J. Huberty; Martha D. Petoskey

This chapter provides an overview of some of the conceptual details related to multivariate analysis of variance (MANOVA) and multivariate analysis of covariance (MANCOVA) using a design involving one or two grouping variables and a collection of response variables that may include some concomitant variables. The chapter begins with a discussion of the purposes of multivariate analyses and describes research situations found in the behavioral science literature that call for the use of a MANOVA or a MANCOVA; of the two, MANCOVA is the more involved from three standpoints: substantive theory, study design, and data analysis. Next, the chapter describes some MANOVA design aspects with a focus on the initial choice of a response variable system and on sampling. It also discusses at length a number of suggested guidelines for data analysis strategies and for reporting and interpreting MANOVA results, and illustrates these guidelines with a research example. The chapter concludes with some recommended practices regarding the typical use of MANOVA and MANCOVA by applied researchers.
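
A minimal sketch of the MANCOVA idea described here, namely a MANOVA with a concomitant variable added to the model, under simulated data and hypothetical variable names:

```python
# Sketch: MANCOVA as a MANOVA with a concomitant (covariate) variable added
# to the model formula. Data and variable names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], n // 2),
    "pretest": rng.normal(50, 10, n),  # concomitant variable
})
df["y1"] = 0.4 * df["pretest"] + rng.normal(0, 5, n) + (df["group"] == "B") * 3
df["y2"] = 0.3 * df["pretest"] + rng.normal(0, 5, n)

# Adding the covariate adjusts the group effect for pretest differences.
mancova = MANOVA.from_formula("y1 + y2 ~ group + pretest", data=df)
print(mancova.mv_test())
```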


Educational and Psychological Measurement | 2002

A History of Effect Size Indices

Carl J. Huberty

Depending on how one interprets what an effect size index is, it may be claimed that its history started around 1940, or about 100 years prior to that. An attempt is made in this article to trace histories of a variety of effect size indices. Effect size bases discussed pertain to (a) relationship, (b) group differences, and (c) group overlap. Multivariable as well as univariate indices are considered in reviewing the histories.
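
For a concrete anchor, here is one illustrative index from each of the three effect size bases the article traces, computed on simulated data; the closed-form overlap expression assumes two equal-variance normal populations, an assumption not taken from the article:

```python
# One illustrative index per effect size basis: (a) relationship,
# (b) group differences, (c) group overlap. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 100)
y = 0.5 * x + rng.normal(0, 1, 100)
a, b = rng.normal(0, 1, 50), rng.normal(0.8, 1, 50)

# (a) Relationship: Pearson correlation.
r, _ = stats.pearsonr(x, y)

# (b) Group difference: Cohen's d (variances equal by design here).
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

# (c) Overlap: for two equal-variance normals separated by d standard
#     deviations, the overlapping coefficient is OVL = 2 * Phi(-|d| / 2).
ovl = 2 * stats.norm.cdf(-abs(d) / 2)
print(f"r = {r:.2f}, d = {d:.2f}, OVL = {ovl:.2f}")
```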


Journal of Educational and Behavioral Statistics | 1997

Multiple Testing and Statistical Power with Modified Bonferroni Procedures.

Stephen Olejnik; Jianmin Li; Suchada Supattathum; Carl J. Huberty

The difference in statistical power between the original Bonferroni and five modified Bonferroni procedures that control the overall Type I error rate is examined in the context of a correlation matrix where multiple null hypotheses, H₀: ρᵢⱼ = 0 for all i ≠ j, are tested. Using 50 real correlation matrices reported in educational and psychological journals, a difference in the number of hypotheses rejected of less than 4% was observed among the procedures. When simulated data were used, very small differences were found among the six procedures in detecting at least one true relationship, but in detecting all true relationships the power of the modified Bonferroni procedures exceeded that of the original Bonferroni procedure by at least .18 and by as much as .55 when all null hypotheses were false. The power difference decreased as the number of true relationships decreased. Power differences obtained for the average power were of a much smaller magnitude but still favored the modified Bonferroni procedures. For the five modified Bonferroni procedures, power differences less than .05 were typically observed; the Holm procedure had the lowest power, and the Rom procedure had the highest.
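
A minimal sketch of two of the six procedures compared: the original Bonferroni rule and the Holm step-down modification, on a hypothetical set of p values. The other modified procedures (e.g., Rom's) are omitted for brevity:

```python
# Original Bonferroni vs. the Holm step-down procedure (the lowest-powered
# of the modified procedures in the study). P values below are hypothetical.

def bonferroni(pvals, alpha=0.05):
    """Reject H0_i iff p_i <= alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Step down through ordered p values; stop at the first non-rejection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):  # threshold loosens at each step
            reject[i] = True
        else:
            break
    return reject

pvals = [0.004, 0.011, 0.020, 0.040, 0.300]
print("Bonferroni:", bonferroni(pvals))  # rejects only p <= .01
print("Holm:      ", holm(pvals))        # additionally rejects p = .011
```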


Educational Researcher | 1987

On Statistical Testing

Carl J. Huberty

The statistical testing approaches of Significance Testing (Fisher) and Hypothesis Testing (Neyman-Pearson) are briefly reviewed. Highly related notions of alpha-values, P-values, and magnitude-of-effect are discussed. A statistical testing approach that is a hybrid of Significance Testing and Hypothesis Testing is then advanced. This approach takes into consideration the two decision-error probabilities as well as a characterization of the alternative hypothesis of interest.
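
To make the two decision-error probabilities concrete, here is a short sketch that fixes alpha, posits an alternative of interest, and computes the corresponding power and Type II error rate; the effect size, sample size, and use of statsmodels are assumptions for illustration:

```python
# Sketch of the hybrid idea: fix alpha, specify an alternative of interest
# (here an effect size of d = 0.5), and examine the Type II error rate too.
from statsmodels.stats.power import TTestIndPower

alpha = 0.05
power = TTestIndPower().power(effect_size=0.5, nobs1=30, alpha=alpha)
beta = 1 - power  # probability of missing the specified alternative
print(f"alpha = {alpha}, power = {power:.2f}, beta = {beta:.2f}")
```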


Journal of Experimental Education | 1993

Historical Origins of Statistical Testing Practices: The Treatment of Fisher versus Neyman-Pearson Views in Textbooks.

Carl J. Huberty

Textbook discussion of statistical testing is the topic of interest. Some 28 books published from 1910 to 1949, 19 books published from 1990 to 1992, plus five multiple-edition books were reviewed in terms of their presentations of statistical testing. It was of interest to discover textbook coverage of the P-value (i.e., Fisher) and fixed-alpha (i.e., Neyman-Pearson) approaches to statistical testing. Also of interest in the review were some issues and concerns related to the practice and teaching of statistical testing: (a) levels of significance, (b) importance of effects, (c) statistical power and sample size, and (d) multiple testing. It is concluded that it is not statistical testing itself that is at fault; rather, some textbook presentations, teaching practices, and journal editorial reviewing may be questioned.


Educational and Psychological Measurement | 2000

Group Overlap as a Basis for Effect Size.

Carl J. Huberty; Laureen L. Lowman

The research context of interest herein is the comparison of means. It is generally recognized that statistical test p values do not adequately reflect mean comparison assessments; what is desirable is some effect-size assessment. The typical effect-size indexes used in mean comparisons are restricted to the variance homogeneity condition. What is proposed here is the use of the group-overlap concept. Group overlap may be assessed via prediction of group assignment, that is, using predictive discriminant analysis. The effect-size index proposed is that of improvement-over-chance classification (I). The I index may be used in situations that are univariate or multivariate, homogeneous or heterogeneous, or any combination thereof. Some very tentative suggestions are made for cutoffs of I values to define index magnitude in some data situations.
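
A hedged sketch of how the I index might be computed, taking H as a leave-one-out hit rate from a predictive discriminant analysis and C as the largest group proportion (one common chance definition; the article itself does not prescribe this code):

```python
# Sketch of the improvement-over-chance index I = (H - C) / (1 - C), where H
# is the leave-one-out hit rate from a predictive discriminant analysis and
# C is a chance hit rate (here the largest group proportion). Data simulated;
# for heterogeneous covariances a quadratic rule could be swapped in.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(1, 1.5, (40, 3))])
y = np.repeat([0, 1], 40)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
hit_rate = (pred == y).mean()
chance = max(np.bincount(y)) / len(y)
I = (hit_rate - chance) / (1 - chance)
print(f"H = {hit_rate:.2f}, C = {chance:.2f}, I = {I:.2f}")
```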


Multivariate Behavioral Research | 1997

Behavioral Clustering of School Children.

Carl J. Huberty; Christine DiStefano; Randy W. Kamphaus

The intent of this article is to illustrate how a cluster analysis might be conducted, validated, and interpreted. Norming data for a 14-scale behavioral assessment instrument, collected on a sample drawn from a nationally representative pool of U.S. school children, were utilized. The analysis discussed covers the cluster method, cluster typology, cluster validity, cluster structure, and prediction of cluster membership.
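
A rough sketch of the workflow steps named above (clustering, a validity check, and prediction of cluster membership) on simulated stand-ins for the 14 scales; the specific methods (k-means, silhouette, LDA) are illustrative choices, not necessarily those of the article:

```python
# Workflow sketch: form clusters, check internal validity, then predict
# cluster membership from the scales. Data are simulated stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1, (100, 14)) for m in (0, 2, 4)])

# Cluster method / typology: k-means with k = 3.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Cluster validity: silhouette width as one internal criterion.
print("silhouette:", round(silhouette_score(X, labels), 2))

# Prediction of cluster membership: cross-validated hit rate via LDA.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
print("membership hit rate:", round(acc, 2))
```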


Measurement and Evaluation in Counseling and Development | 1989

An Introduction to Discriminant Analysis.

Carl J. Huberty; Richard M. Barton

An intent of this article is to provide an interpretation of “discriminant analysis” that includes two sets of techniques, predictive discriminant analysis and descriptive discriminant analysis. Th...
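
A brief sketch of the two sets of techniques the article distinguishes, using one fitted linear discriminant analysis for both purposes; the data and library choices are assumptions:

```python
# Predictive DA (classify cases into groups) vs. descriptive DA (interpret
# the linear discriminant function), from one fitted model. Data simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.repeat(["group1", "group2"], 50)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Predictive DA: assign cases to groups and report the hit rate.
print("hit rate:", lda.score(X, y))

# Descriptive DA: examine the discriminant function weights to see which
# variables separate the groups (sklearn exposes them as `scalings_`).
print("LDF weights:", lda.scalings_.ravel())
```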

Collaboration


Dive into Carl J. Huberty's collaboration.

Top Co-Authors

John D. Morris

Florida Atlantic University

Anne van Kleeck

University of Texas at Dallas
