Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where George A. Johanson is active.

Publication


Featured research published by George A. Johanson.


Educational and Psychological Measurement | 2010

Initial Scale Development: Sample Size for Pilot Studies

George A. Johanson; Gordon P. Brooks

Pilot studies are often recommended by scholars and consultants to address a variety of issues, including preliminary scale or instrument development. Specific concerns such as item difficulty, item discrimination, internal consistency, response rates, and parameter estimation in general are all relevant. Unfortunately, there is little discussion in the extant literature of how to determine appropriate sample sizes for these types of pilot studies. This article investigates the choice of sample size for pilot studies from a perspective particularly related to instrument development. Specific recommendations are made for researchers regarding how many participants they should use in a pilot study for initial scale development.
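The article's specific recommendations are not reproduced here, but the precision argument behind pilot sample-size choices can be sketched: under a simple binomial model, the standard error of an estimated proportion (e.g., an item-difficulty estimate) shrinks with the square root of the pilot sample size. A minimal illustration with illustrative sample sizes:

```python
import math

def proportion_se(p, n):
    """Standard error of a sample proportion under a binomial model."""
    return math.sqrt(p * (1 - p) / n)

# Half-width of an approximate 95% confidence interval for an
# item difficulty of p = 0.5 at several candidate pilot sizes.
for n in (10, 30, 100):
    half_width = 1.96 * proportion_se(0.5, n)
    print(n, round(half_width, 3))
```

Tripling the pilot from 10 to 30 participants narrows the interval by roughly the factor sqrt(3), which is the kind of trade-off a sample-size recommendation has to weigh.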


Assessment & Evaluation in Higher Education | 2004

Acquiescence as differential person functioning

George A. Johanson; Cynthia J. Osborn

Acquiescence, or acquiescent responding, is reviewed. A detection method using the concept of differential person functioning is illustrated with two data sets. The effects of acquiescence are shown to be different for each example. Implications for questionnaire and attitudinal scale development are discussed and an operational definition for acquiescent responding is suggested.


Journal of School Health | 2010

School-Based Screening of the Dietary Intakes of Third-Graders in Rural Appalachian Ohio

J. Hovland; Sara McLeod; Melani W. Duffrin; George A. Johanson; Darlene E. Berryman

BACKGROUND Children in Appalachia are experiencing high levels of obesity, in large measure because of inferior diets. This study screened the dietary intake of third graders residing in 3 rural Appalachian counties in Ohio and determined whether the Food, Math, and Science Teaching Enhancement Resource Initiative (FoodMASTER) curriculum improved their dietary intake. METHODS Dietary intake was measured for 238 third graders at the beginning of the 2007 to 2008 school year and for 224 third graders at the end of that year. The FoodMASTER curriculum was delivered to 204 students (test group). Intake was measured using the Block Food Frequency Questionnaire 2004. The final analysis included 138 students. RESULTS The FoodMASTER curriculum did not significantly affect the diets of the students in the test group, as no significant differences in intake of macronutrients, specific nutrients, or food groups were found between the test and control groups. Majorities of students did not meet the Recommended Dietary Allowance or Adequate Intakes for fiber, calcium, iron, vitamin A, and vitamin E. The students as a whole did not meet the MyPyramid recommendations for any food group, and nearly one fifth of their calories came from sweets. Significant differences in percentages of kilocalories from protein and sweets and in servings of fats, oils, and sweets were seen between groups of higher and lower socioeconomic status. CONCLUSIONS Energy-dense foods are replacing healthy foods in the diets of Ohio children living in rural Appalachia. The prevalence of poor dietary intake in Appalachia warrants further nutrition interventions involving programming for nutrition, such as future FoodMASTER curricula.


Educational and Psychological Measurement | 2012

Item Discrimination and Type I Error in the Detection of Differential Item Functioning

Yanju Li; Gordon P. Brooks; George A. Johanson

In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel–Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the focal and reference groups varies from zero to one standard deviation. The findings can be used as guides for alpha adjustment when using MH or LR methods when impact exists. A second purpose is to better understand the conditions that cause Type I error rates to inflate. The results indicate that inflation can be controlled even in the presence of large ability differences and with large samples.
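The logistic regression procedure referenced here tests whether group membership predicts an item response after conditioning on a matching variable such as total score. A minimal sketch of the uniform-DIF likelihood-ratio test on synthetic no-DIF data, with a from-scratch Newton-Raphson fit (data-generating choices and variable names are illustrative, not taken from the study):

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Newton-Raphson maximum-likelihood fit; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

def log_lik(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(0)
n = 400
group = np.repeat([0, 1], n // 2)            # reference vs. focal group
theta = rng.normal(0, 1, n)                  # latent ability, no impact
score = theta + rng.normal(0, 0.5, n)        # matching variable (proxy total score)
item = (rng.random(n) < 1 / (1 + np.exp(-theta))).astype(float)  # item with no DIF

X0 = np.column_stack([np.ones(n), score])          # conditioning-only model
X1 = np.column_stack([np.ones(n), score, group])   # adds group: uniform DIF term
lr_stat = 2 * (log_lik(X1, item, fit_logistic(X1, item))
               - log_lik(X0, item, fit_logistic(X0, item)))
# Compare lr_stat to a chi-square(1) critical value (3.84 at alpha = .05);
# the Type I error question is how often this exceeds 3.84 when no DIF exists.
print(round(lr_stat, 3))
```

Repeating this simulation many times while shifting the focal group's ability distribution is, in outline, how Type I error inflation under impact is studied.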


Assessment & Evaluation in Higher Education | 2003

An Analysis of Sex-related Differential Item Functioning in Attitude Assessment

Hamzeh Dodeen; George A. Johanson

This study analyzes and classifies items that display sex-related Differential Item Functioning (DIF) in attitude assessment. It applies the Educational Testing Service (ETS) procedure that is used for classifying DIF items in testing to classify sex-related DIF items in attitude scales. A total of 982 items that measure attitudes from 23 real data sets were used in the analysis. Results showed that sex DIF is common in attitude scales: more than 27% of items showed DIF related to sex, 15% of the items exhibited moderate to large DIF, and the magnitudes of DIF against males and females were not equal.
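The ETS procedure the authors apply classifies items by the Mantel-Haenszel statistic expressed on the delta scale. A minimal sketch of that computation, assuming the commonly cited thresholds (category A for |delta| < 1, category C for large and significant delta); the table counts are made up for illustration:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across 2x2 tables.

    Each stratum is (a, b, c, d): reference-group correct/incorrect,
    focal-group correct/incorrect, within one matched total-score level.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def ets_delta(or_mh):
    """MH D-DIF on the ETS delta scale; values near 0 mean little DIF."""
    return -2.35 * math.log(or_mh)

strata = [(20, 10, 20, 10), (30, 10, 30, 10)]  # identical group behaviour
print(ets_delta(mh_odds_ratio(strata)))        # near 0: no DIF
```

With attitude items, "correct/incorrect" becomes agreement above or below a cut point, which is how a testing procedure like this transfers to attitude scales.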


Educational and Psychological Measurement | 2002

Differential Person Functioning.

George A. Johanson; Abdalla Alsmadi

The definitions, methods, and interpretations of differential item functioning are extended to the transpose of the usual person-item matrices. The primary purpose is to enhance diagnostic assessment in which individual differences in scores between content domains are clarified by conditioning the scores on item difficulty. Three examples are used to illustrate this approach with data from the mathematics section of the California Achievement Test using the Mantel-Haenszel procedure. The term differential person functioning is suggested.
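The transposition idea can be sketched concretely: instead of comparing two groups of persons on one item while conditioning on ability, differential person functioning compares one person's two content domains while conditioning on item difficulty. A toy illustration (the counts are invented, and a real analysis would add the Mantel-Haenszel significance test):

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across 2x2 tables."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# One examinee's results on two content domains, stratified by item
# difficulty (easy / medium / hard) rather than by person ability:
# (domain-1 correct, domain-1 wrong, domain-2 correct, domain-2 wrong)
strata = [
    (9, 1, 5, 5),   # easy items
    (7, 3, 3, 7),   # medium items
    (5, 5, 2, 8),   # hard items
]
or_mh = mh_odds_ratio(strata)
print(or_mh > 1)  # True: domain 1 is stronger even after conditioning on difficulty
```

An odds ratio well above 1 flags a within-person domain difference that is not explained by the items' difficulty, which is the diagnostic signal the article is after.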


International Journal of Human-computer Interaction | 2013

Development of a Weighted Heuristic for Website Evaluation for Older Adults

Kyle R. Lynch; Diana J. Schwerha; George A. Johanson

Older adults are the fastest growing population of Internet users. As websites acquire a greater number of older visitors, it is vital that they are designed with this demographic in mind. Older users typically have different user characteristics than younger users; they may have changes in perceptual abilities, motor skills, cognitive abilities, mental models, and confidence in the use of technology. This research documents the development of a new weighted heuristic measure for evaluating the usability of websites for older adults and its validation with performance testing. Results from a repeated measures analysis of variance indicated that websites with different heuristic classifications were significantly different with respect to performance metrics and System Usability Scale ratings. Conclusions point to the need for web design that takes into account preferences and abilities of older web users.


Evaluation Practice | 1997

Differential Item Functioning in Attitude Assessment.

George A. Johanson

Differential item functioning (DIF) is not often seen in the literature on attitude assessment. A brief discussion of DIF and methods of implementation is followed by an illustrative example from a program evaluation, using an attitude-towards-science scale with 1550 children in grades one through six. An item exhibiting substantial DIF with respect to gender was detected using the Mantel-Haenszel procedure. In a second example, data from workshop evaluations with 1682 adults were recoded to a binary format, and it was found that an item suspected of functioning differentially with respect to age groups was, in fact, not doing so. Implications for evaluation practice are discussed.


Applied Psychological Measurement | 2012

Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression.

Brendan J. Morse; George A. Johanson; Rodger W. Griffeth

Recent simulation research has demonstrated that using simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response theory (IRT) model can mitigate this effect under similar conditions. However, this work has thus far been limited to dichotomous data. The purpose of this study was to extend this investigation to multicategory (polytomous) data using the graded response model (GRM). Consistent with previous studies, inflated Type I error rates were observed under some conditions when polytomous number-correct scores were used, and were mitigated when the data were rescaled with the GRM. These results support the proposition that IRT-derived scores are more robust to spurious interaction effects in moderated statistical models than simple raw scores under certain conditions.
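The graded response model used for the rescaling can be sketched briefly: the probability of responding in a given category is the difference between adjacent cumulative (boundary) probabilities. A self-contained illustration with made-up item parameters:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category response probabilities under Samejima's graded response model.

    thresholds: ordered between-category difficulties b_1 < ... < b_{m-1}
    for an item with m ordered categories; a is the discrimination.
    """
    def p_star(b):  # cumulative probability of responding above threshold b
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

probs = grm_category_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # four category probabilities summing to 1
```

Scoring respondents with theta estimates from this model, rather than with raw category sums, is the rescaling step the study evaluates.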


Applied Psychological Measurement | 2003

TAP: Test Analysis Program

Gordon P. Brooks; George A. Johanson

Courses in introductory educational measurement are often hampered by the lack of computer programs by which to analyze test data. To be sure, computer software exists (e.g., SPSS, SAS, Iteman) that performs these analyses; however, these programs come at a high cost and are not designed for instructional use. As a result, many practicing teachers who take the introductory educational measurement course learn about reliability and item analysis but are not able to continue to use these skills after the course ends. The Test Analysis Program (TAP), written in Borland Delphi Professional Version 6.0, performs classical test and item analyses under Windows 9x/NT/XP. In addition to performing test analyses, the TAP software includes certain features that will assist instructors of educational measurement in the classroom.
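The classical analyses a program like TAP performs are standard and easy to state: item difficulty is the proportion correct, discrimination is the item-total correlation, and KR-20 estimates reliability. A minimal sketch on toy data (TAP itself reads its own data files; nothing here reproduces its interface):

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation, used here as the point-biserial discrimination."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Scored 0/1 response matrix: rows are examinees, columns are items (toy data).
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
]
n_items = len(responses[0])
totals = [sum(row) for row in responses]

for j in range(n_items):
    item = [row[j] for row in responses]
    print(j, round(statistics.mean(item), 2), round(pearson(item, totals), 2))

# KR-20 reliability: k/(k-1) * (1 - sum of item variances / total-score variance).
p_values = [statistics.mean([row[j] for row in responses]) for j in range(n_items)]
kr20 = (n_items / (n_items - 1)) * (
    1 - sum(p * (1 - p) for p in p_values) / statistics.pvariance(totals)
)
print("KR-20:", round(kr20, 3))
```

These few lines cover the reliability and item-analysis concepts the abstract says students learn in the introductory measurement course.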

Collaboration


Dive into George A. Johanson's collaboration.

Top Co-Authors

Brendan J. Morse

Bridgewater State University
