Publications


Featured research published by Robin K. Henson.


Educational and Psychological Measurement | 2006

Use of Exploratory Factor Analysis in Published Research: Common Errors and Some Comment on Improved Practice

Robin K. Henson; J. Kyle Roberts

Given the proliferation of factor analysis applications in the literature, the present article examines the use of factor analysis in current published research across four psychological journals. Notwithstanding ease of analysis due to computers, the appropriate use of factor analysis requires a series of thoughtful researcher judgments. These judgments directly affect results and interpretations. The authors examine across studies (a) the decisions made while conducting exploratory factor analyses (N = 60) and (b) the information reported from the analyses. In doing so, they present a review of the current status of factor analytic practice, including comment on common errors in use and reporting. Recommendations are proffered for future practice as regards analytic decisions and reporting in empirical research.
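
The "thoughtful researcher judgments" the abstract refers to can be made concrete. Below is a minimal sketch, not drawn from the paper, of three such decisions (factor retention, rotation, and interpretation of loadings) using scikit-learn on invented data; the Kaiser eigenvalue-greater-than-one rule shown is only one of several retention criteria discussed in the EFA literature.

```python
# Hypothetical EFA walkthrough; data and all choices below are illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical data: 300 respondents x 12 survey items
X = rng.normal(size=(300, 12))

# Decision 1: how many factors to retain? One common (if crude) heuristic
# is the Kaiser criterion: eigenvalues of the correlation matrix > 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
n_factors = int(np.sum(eigvals > 1.0))

# Decision 2: extraction and rotation method (varimax shown; an oblique
# rotation may be preferable when factors are expected to correlate).
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(X)

# Decision 3: which loadings to report and interpret.
print("Retained factors:", n_factors)
print("Loadings:\n", np.round(fa.components_.T, 2))
```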


Journal of Personality Assessment | 2005

Conducting and interpreting canonical correlation analysis in personality research: a user-friendly primer.

Alissa Sherry; Robin K. Henson

The purpose of this article is to reduce potential statistical barriers and open doors to canonical correlation analysis (CCA) for applied behavioral scientists and personality researchers. CCA was selected for discussion, as it represents the highest level of the general linear model (GLM) and can be rather easily conceptualized as a method closely linked with the more widely understood Pearson r correlation coefficient. An understanding of CCA can lead to a more global appreciation of other univariate and multivariate methods in the GLM. We attempt to demonstrate CCA with basic language, using technical terminology only when necessary for understanding and use of the method. We present an entire example of a CCA analysis using SPSS (Version 11.0) with personality data.
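
As a rough companion to the article's SPSS walkthrough, the sketch below (hypothetical data, not the paper's personality dataset) runs a CCA in Python and verifies the point the article builds on: each canonical correlation is simply a Pearson r between paired canonical variates.

```python
# Minimal CCA sketch on invented data; variable names are hypothetical.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n = 200
# Hypothetical: 3 predictor variables and 2 criterion variables that
# share a common latent influence, so the sets are genuinely related.
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(size=n) for _ in range(3)])
Y = np.column_stack([latent + rng.normal(size=n) for _ in range(2)])

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Each canonical correlation is just the Pearson r between the paired
# canonical variates -- the link to the GLM the article emphasizes.
for i in range(2):
    r = np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
    print(f"Canonical function {i + 1}: Rc = {r:.3f}")
```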


Educational and Psychological Measurement | 2001

A Reliability Generalization Study of the Teacher Efficacy Scale and Related Instruments.

Robin K. Henson; Lori R. Kogan; Tammi Vacha-Haase

Teacher efficacy has proven to be an important variable in teacher effectiveness. It is consistently related to positive teaching behaviors and student outcomes. However, the measurement of this construct is the subject of current debate, which includes critical examination of predominant instruments used to assess teacher efficacy. The present study extends this critical evaluation and examines sources of measurement error variance in the Teacher Efficacy Scale (TES), historically the most frequently used instrument in the area. Reliability generalization was used to characterize the typical score reliability for the TES and potential sources of measurement error variance across studies. Other related instruments were also examined as regards measurement integrity.


Teaching and Teacher Education | 2001

The effects of participation in teacher research on teacher efficacy

Robin K. Henson

An academic year-long teacher research initiative was implemented in an alternative education school in a large school district in the southwest United States. Quantitative and qualitative methodologies were utilized to examine participatory teacher research as an active, collaborative means of professional development for teachers, including its effect on teacher efficacy and empowerment. Results indicated growth in both general and personal teaching efficacies from pre- to posttest. Collaboration was consistently related to general teaching efficacy. Perceptions of school climate were related to personal teaching efficacy at pretest. No relationship was observed between empowerment and efficacy.


Frontiers in Psychology | 2012

Tools to Support Interpreting Multiple Regression in the Face of Multicollinearity

Amanda Kraha; Heather Turner; Kim Nimon; Linda Reichwein Zientek; Robin K. Henson

While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
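
A minimal illustration of two of these indices, beta weights and structure coefficients, on deliberately collinear synthetic data (all names and values below are invented, not from the article):

```python
# Sketch: beta weights vs. structure coefficients under multicollinearity.
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Two deliberately collinear predictors plus noise
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # r(x1, x2) is roughly .8
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

# Standardize so the OLS coefficients are beta weights
Z = np.column_stack([x1, x2])
Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Zs, ys, rcond=None)

# Structure coefficients: correlation of each predictor with y-hat
y_hat = Zs @ beta
rs = [np.corrcoef(Zs[:, j], y_hat)[0, 1] for j in range(2)]

print("beta weights:    ", np.round(beta, 3))
print("structure coeffs:", np.round(rs, 3))
```

Comparing the two sets of numbers shows why the authors recommend multiple indices: a predictor can carry a modest beta weight yet correlate strongly with the predicted scores when it shares variance with another predictor.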


The Counseling Psychologist | 2006

Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research

Robin K. Henson

Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes that counseling psychologists use most frequently. To provide context, the author presents a brief history of statistical significance tests. Second, the author discusses the difference between statistical, practical, and clinical significance. Third, the author reviews and graphically demonstrates two common types of effect sizes, commenting on multivariate and corrected effect sizes. Fourth, the author emphasizes meta-analytic thinking and the potential role of confidence intervals around effect sizes. Finally, the author gives a hypothetical example of how to report and potentially interpret some effect sizes.
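
As a small worked example of the kind of reporting the article encourages, the sketch below computes Cohen's d with an approximate 95% confidence interval on invented two-group data; the standard error uses the common large-sample approximation rather than any formula specific to this article.

```python
# Hypothetical two-group effect size with an approximate confidence interval.
import numpy as np

rng = np.random.default_rng(7)
treatment = rng.normal(loc=0.5, scale=1.0, size=40)
control = rng.normal(loc=0.0, scale=1.0, size=40)

n1, n2 = len(treatment), len(control)
# Pooled standard deviation
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / sp

# Large-sample approximation for the standard error of d
se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```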


Educational and Psychological Measurement | 2002

Reliability generalization: Moving toward improved understanding and use of score reliability

Tammi Vacha-Haase; Robin K. Henson; John C. Caruso

Reliability generalization (RG) is a measurement meta-analytic method used to explore the variability in score reliability estimates and to characterize the possible sources of this variance. This article briefly summarizes some RG considerations. Included is a description of how reliability confidence intervals might be portrayed graphically. The article includes tabulations across various RG studies, including how frequently authors (a) report score reliabilities for their own data, (b) conduct reliability induction, or (c) do not even mention reliability.
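
The core RG move, treating reliability coefficients themselves as data to be summarized across studies, can be sketched briefly. The alpha values and sample sizes below are invented, and the simple n-weighted pooling shown is only one of the analytic choices an actual RG study would need to defend (published RG work often transforms alphas before pooling).

```python
# Toy reliability generalization summary; all inputs are invented.
import numpy as np

# Hypothetical: Cronbach's alpha reported by 8 studies, with sample sizes
alphas = np.array([0.72, 0.81, 0.68, 0.77, 0.85, 0.74, 0.79, 0.70])
ns     = np.array([120, 340, 95, 210, 500, 150, 260, 110])

# Weight each alpha by its sample size (one simple weighting choice)
mean_alpha = np.average(alphas, weights=ns)
sd_alpha = np.sqrt(np.average((alphas - mean_alpha) ** 2, weights=ns))

print(f"Weighted mean alpha = {mean_alpha:.3f}")
print(f"SD across studies   = {sd_alpha:.3f}")
print(f"Range               = [{alphas.min():.2f}, {alphas.max():.2f}]")
```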


Educational and Psychological Measurement | 2001

Measurement Error of Scores on the Mathematics Anxiety Rating Scale across Studies.

Mary Margaret Capraro; Robert M. Capraro; Robin K. Henson

The Mathematics Anxiety Rating Scale (MARS) was submitted to a reliability generalization analysis (RG) to characterize the variability of measurement error in MARS scores across administrations and identify possible study characteristics that are predictive of score reliability variations. In general, the MARS and its variants yielded scores with strong internal consistency and test-retest reliability estimates, although variation was observed. Adult samples were related to lower score reliability compared to other age groupings. Inclusion of total score standard deviation in the regression models resulted in roughly 25% increases in R² effects.
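
A toy version of the kind of model implied here (all numbers invented) compares R² for predicting alpha from a study feature with and without total score SD as a predictor:

```python
# Hypothetical regression of score reliability on study characteristics.
import numpy as np

rng = np.random.default_rng(3)
k = 30  # invented number of MARS administrations
score_sd = rng.uniform(10, 30, size=k)
adult = rng.integers(0, 2, size=k)          # 1 = adult sample
alpha = 0.6 + 0.01 * score_sd - 0.05 * adult + rng.normal(0, 0.03, size=k)

def r_squared(X, y):
    # OLS with intercept; returns the proportion of variance explained
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid.var() / y.var()

print("R2 without score SD:", round(r_squared(adult.reshape(-1, 1), alpha), 3))
print("R2 with score SD:   ",
      round(r_squared(np.column_stack([adult, score_sd]), alpha), 3))
```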


Educational and Psychological Measurement | 2002

Variability and Prediction of Measurement Error in Kolb’s Learning Style Inventory Scores: A Reliability Generalization Study

Robin K. Henson; Dae-Yeop Hwang

The Learning Style Inventory (LSI) is a commonly employed measure of learning styles based on Kolb’s Experiential Learning Model. Nevertheless, the psychometric soundness of LSI scores has historically been critiqued. The present article extends this critique by conducting a reliability generalization study across studies and versions of the test. Results indicated that internal consistency and test-retest reliabilities for LSI scores fluctuate considerably and contribute to deleterious cumulative measurement error. Reliability variation was predictable by test version and several study features.


Educational Researcher | 2010

Methodology in Our Education Research Culture: Toward a Stronger Collective Quantitative Proficiency

Robin K. Henson; Darrell M. Hull; Cynthia Williams

How doctoral programs train future researchers in quantitative methods has important implications for the quality of scientifically based research in education. The purpose of this article, therefore, is to examine how quantitative methods are used in the literature and taught in doctoral programs. Evidence points to deficiencies in quantitative training and application in several areas: (a) methodological reporting problems, (b) researcher misconceptions and inaccuracies, (c) overreliance on traditional methods, and (d) a lack of coverage of modern advances. An argument is made that a culture supportive of quantitative methods is not consistently available to many applied education researchers. Collective quantitative proficiency is defined as a vision for a culture representative of broader support for quantitative methodology (statistics, measurement, and research design).

Collaboration


Dive into Robin K. Henson's collaborations.

Top Co-Authors

J. Kyle Roberts, Baylor College of Medicine
Kim Nimon, University of Texas at Tyler
Alissa Sherry, University of Texas at Austin
Amanda Kraha, University of North Texas