
Publication


Featured research published by Gordon P. Brooks.


Educational and Psychological Measurement | 2010

Initial Scale Development: Sample Size for Pilot Studies

George A. Johanson; Gordon P. Brooks

Pilot studies are often recommended by scholars and consultants to address a variety of issues, including preliminary scale or instrument development. Specific concerns such as item difficulty, item discrimination, internal consistency, response rates, and parameter estimation in general are all relevant. Unfortunately, there is little discussion in the extant literature of how to determine appropriate sample sizes for these types of pilot studies. This article investigates the choice of sample size for pilot studies from a perspective particularly related to instrument development. Specific recommendations are made for researchers regarding how many participants they should use in a pilot study for initial scale development.


Western Journal of Communication | 1992

Functions of humor in conversation: Conceptualization and measurement

Elizabeth E. Graham; Michael J. Papa; Gordon P. Brooks

The purpose of this research was to explore humor from a functional perspective. Twenty‐four functions of humor were derived from prior literature. Items representing these 24 functions were subjected to factor analysis resulting in an 11‐item “Uses of Humor Index.” Three primary factors emerged from this analysis: positive affect, expressiveness, and negative affect. Initial validation of the Uses of Humor Index was achieved via a peer evaluation, a measure of sense of humor, and assessment of interpersonal competence in naturalistic conversations. The implications of this study for future research concerning the use of humor in social interaction and the influence of humor on perceptions of interpersonal competence are discussed.


Educational and Psychological Measurement | 2012

Item Discrimination and Type I Error in the Detection of Differential Item Functioning

Yanju Li; Gordon P. Brooks; George A. Johanson

In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel–Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the focal and reference groups varies from zero to one standard deviation. The findings can be used as guides for alpha adjustment when using MH or LR methods when impact exists. A second purpose is to better understand the conditions that cause Type I error rates to inflate. The results indicate that inflation can be controlled even in the presence of large ability differences and with large samples.
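The Mantel–Haenszel procedure discussed in this abstract amounts to pooling 2 (group) × 2 (correct/incorrect) tables across score strata. The sketch below is an illustration, not the authors' code: the function names are invented, and the small Rasch-style simulation only suggests how one might estimate rejection rates for a DIF-free item when impact (a group ability difference) is present.

```python
import numpy as np

def mantel_haenszel_chi2(correct, group, strata):
    """Continuity-corrected Mantel-Haenszel chi-square for DIF on one item.
    correct: 0/1 responses; group: 0 = reference, 1 = focal;
    strata: matching variable (typically total test score)."""
    num = 0.0   # sum over strata of (A_k - E[A_k]), A_k = reference-group correct
    var = 0.0   # sum over strata of Var(A_k)
    for s in np.unique(strata):
        m = strata == s
        n_r = np.sum(group[m] == 0)
        n_f = np.sum(group[m] == 1)
        r1 = np.sum(correct[m] == 1)
        r0 = np.sum(correct[m] == 0)
        n = n_r + n_f
        if n < 2 or r1 == 0 or r0 == 0 or n_r == 0 or n_f == 0:
            continue  # stratum carries no information
        a = np.sum((group[m] == 0) & (correct[m] == 1))
        num += a - n_r * r1 / n
        var += n_r * n_f * r1 * r0 / (n * n * (n - 1))
    if var == 0:
        return 0.0
    return (abs(num) - 0.5) ** 2 / var  # ~ chi-square(1) under no DIF

def rejection_rate(impact=1.0, reps=100, n=500, seed=0):
    """Estimate how often MH flags a DIF-free item when the focal group's
    mean ability sits `impact` SDs below the reference group's."""
    rng = np.random.default_rng(seed)
    b = rng.normal(size=20)                       # item difficulties, same for both groups
    group = np.repeat([0, 1], n)
    hits = 0
    for _ in range(reps):
        theta = np.concatenate([rng.normal(0, 1, n), rng.normal(-impact, 1, n)])
        p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))   # Rasch model
        x = (rng.random((2 * n, 20)) < p).astype(int)
        hits += mantel_haenszel_chi2(x[:, 0], group, x.sum(axis=1)) > 3.84
    return hits / reps
```

Comparing `rejection_rate(impact=0.0)` with `rejection_rate(impact=1.0)` at a fixed nominal alpha is the kind of comparison the study reports, though the study's actual design and conditions are more extensive.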


Applied Psychological Measurement | 2003

TAP: Test Analysis Program

Gordon P. Brooks; George A. Johanson

Courses in introductory educational measurement are often hampered by the lack of computer programs by which to analyze test data. To be sure, computer software exists (e.g., SPSS, SAS, Iteman) that performs these analyses; however, these programs come at a high cost and are not designed for instructional use. As a result, many practicing teachers who take the introductory educational measurement course learn about reliability and item analysis but are not able to continue to use these skills after the course ends. The Test Analysis Program (TAP), written in Borland Delphi Professional Version 6.0, performs classical test and item analyses under Windows 9x/NT/XP. In addition to performing test analyses, the TAP software includes certain features that will assist instructors of educational measurement in the classroom.
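The classical test and item analyses TAP performs (item difficulty, corrected point-biserial discrimination, KR-20 reliability) follow standard formulas. The sketch below is an illustration of those formulas, not TAP's actual code, and the function name is invented:

```python
import numpy as np

def item_analysis(scores):
    """Classical test theory statistics for a 0/1-scored item matrix
    (rows = examinees, columns = items)."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    total = scores.sum(axis=1)
    difficulty = scores.mean(axis=0)          # proportion correct per item
    # corrected point-biserial: item vs. total score with that item removed
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(n_items)
    ])
    # KR-20 (Cronbach's alpha for dichotomous items), population variance of totals
    p = difficulty
    kr20 = (n_items / (n_items - 1)) * (1 - np.sum(p * (1 - p)) / total.var())
    return difficulty, discrimination, kr20
```

In an instructional setting, low difficulty values flag hard items and near-zero or negative discriminations flag items that do not separate high and low scorers, which is the kind of feedback the program is described as giving.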


Journal of Mixed Methods Research | 2018

Mixing Interviews and Rasch Modeling: Demonstrating a Procedure Used to Develop an Instrument That Measures Trust

Shannon David; John H. Hitchcock; Brian G. Ragan; Gordon P. Brooks; Chad Starkey

Developing psychometrically sound instruments can be difficult, especially if little is known about the constructs of interest. When constructs of interest are unclear, a mixed methods approach can be useful. Qualitative inquiry can be used to explore a construct’s meaning in a way that informs item writing and allows the strengths of one analysis method to compensate for the weaknesses of the other. Mixed methods applications can be complex, however, and there are few examples in the literature pertaining to the mixing of interviews, Rasch modeling, and classical test theory. This article demonstrates how to mix qualitative inquiry with Rasch modeling (and classical test theory) in order to develop an instrument that measures a complex construct: patient trust.


Journal of Experimental Education | 2009

Power of Models in Longitudinal Study: Findings From a Full-Crossed Simulation Design

Hua Fang; Gordon P. Brooks; Maria L. Rizzo; Kimberly Andrews Espy; Robert S. Barcikowski

Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3 variance-covariance structures. The results from a full-crossed simulation design suggest that traditional repeated measures have significantly higher power than do hierarchical multivariate linear models for main effects, but they have significantly lower power for interaction effects in most situations. Significant power differences are also exhibited when power is compared across different covariance structures.
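A Monte Carlo power comparison of the kind the abstract describes can be illustrated on a small scale. The sketch below is not the authors' full-crossed design: it estimates power for only the occasion (time) main effect of a one-way repeated measures ANOVA under a compound-symmetry covariance structure, with invented function names and arbitrary defaults.

```python
import numpy as np
from scipy import stats

def rm_anova_p(x):
    """One-way repeated-measures ANOVA p-value for the time main effect,
    given an (n subjects) x (t occasions) data matrix."""
    n, t = x.shape
    grand = x.mean()
    ss_time = n * np.sum((x.mean(axis=0) - grand) ** 2)
    # subject-by-time residuals (error term for the within-subject effect)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ss_err = np.sum(resid ** 2)
    f = (ss_time / (t - 1)) / (ss_err / ((n - 1) * (t - 1)))
    return stats.f.sf(f, t - 1, (n - 1) * (t - 1))

def mc_power(effect, n=30, t=4, rho=0.5, reps=400, alpha=0.05, seed=0):
    """Monte Carlo power for a linear growth trend of slope `effect`
    per occasion, under compound symmetry with correlation `rho`."""
    rng = np.random.default_rng(seed)
    tau = effect * np.arange(t)                 # fixed time effects
    hits = 0
    for _ in range(reps):
        subj = rng.normal(0, np.sqrt(rho), size=(n, 1))        # shared subject effect
        eps = rng.normal(0, np.sqrt(1 - rho), size=(n, t))     # occasion-specific noise
        hits += rm_anova_p(subj + eps + tau) < alpha
    return hits / reps
```

Running `mc_power` across a grid of effect sizes, sample sizes, and covariance structures, and repeating the exercise with a hierarchical (mixed) model fit, is the general shape of the comparison the study carries out.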


Perceptual and Motor Skills | 2008

Differential person functioning applied to baseball.

George A. Johanson; Gordon P. Brooks

The question of whether a baseball player generally hits better against a left-handed or right-handed pitcher is difficult to answer since handedness is only one of many possible attributes of pitchers. The concept of differential functioning from psychometrics is applied, considering both the effect of the handedness of the pitcher and his earned run average (the mean number of runs scored against a pitcher per 9 innings pitched excluding runs due to errors). Two interesting cases are examined, a left-handed batter and a switch-hitter. Suggestions for further research are offered.


Applied Psychological Measurement | 2002

Computer Program Exchange: MNDG: Multivariate Normal Data Generator

Gordon P. Brooks

Statistics courses often contain components that require students to perform statistical analyses using a computer software package and then to interpret the results of these analyses. Often, data from actual research projects are not available for students to use in such projects, making it necessary to generate data in some fashion. In these circumstances, data generators become valuable tools that can be used in a variety of ways. For example, data can be generated to violate certain assumptions of statistical tests. Also, random data can be created with the same characteristics as those found in a research article (Barcikowski, Johanson, Rich, & Robey, 1989). Indeed, using realistic scenarios such as those provided by research papers has been advocated by Singer and Willett (1990). The students then analyze and interpret these data using a statistical computer package introduced in their course. The process enables the students to appreciate the complexity and appeal of practical, “real world” data analyses.
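A generator of the kind described, one that reproduces the means, standard deviations, and correlations reported in an article, can be sketched with NumPy. This is an illustration of the idea, not the MNDG program itself, and the function name and target values are invented:

```python
import numpy as np

def generate_mvn(n, means, sds, corr, seed=None):
    """Draw n cases from a multivariate normal with the given variable
    means, standard deviations, and correlation matrix."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    # rescale the correlation matrix to a covariance matrix
    cov = np.outer(sds, sds) * np.asarray(corr, dtype=float)
    return rng.multivariate_normal(means, cov, size=n)

# e.g., mimic summary statistics reported in an article:
# three variables, r = .30 between each pair
corr = np.full((3, 3), 0.30)
np.fill_diagonal(corr, 1.0)
data = generate_mvn(5000, means=[50, 100, 3.2], sds=[10, 15, 0.8],
                    corr=corr, seed=42)
```

Assumption-violating data (skewed errors, heterogeneous variances, and so on) can then be produced by transforming the generated columns before handing the file to students.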


Archive | 2012

The PEAR Method for Sample Sizes in Multiple Linear Regression

Gordon P. Brooks; Robert S. Barcikowski


Mid-Western Educational Researcher | 1996

Precision Power and Its Application to the Selection of Regression Sample Sizes

Gordon P. Brooks; Robert S. Barcikowski

Collaboration

Dive into Gordon P. Brooks's collaborations.

Top Co-Authors

Hua Fang

University of Massachusetts Medical School


Kimberly Andrews Espy

University of Nebraska–Lincoln


Maria L. Rizzo

Bowling Green State University
