
Publication


Featured research published by John E. Milholland.


Educational and Psychological Measurement | 1956

The assessment of partial knowledge

Clyde H. Coombs; John E. Milholland; Frank B. Womer

The general acceptance of the multiple-choice test item as the best one for objective measurement of aptitude or achievement does not imply that its merits are optimal. Any variation upon an already widely accepted and useful technique which shows promise of improved measurement deserves further investigation. A response method for multiple-choice items which has certain theoretical advantages over the conventional response method is considered here, and this study is an empirical investigation of some of its relative merits. The conventional response method (C method) for multiple-choice items requires selecting and marking the answer from among the choices offered; in this study the choice was one of four. The conventional item score in a power test is one point when the answer is chosen and zero when a distracter is chosen. Complete information leads to an item score of one and misinformation to a score of zero. Partial information may lead to a score of either one or zero. The inability of the
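As a concrete illustration of the conventional (C-method) scoring rule just described, the following sketch sums one point per item on which the keyed answer is marked and zero otherwise. The option and key encodings are hypothetical, not taken from the paper.

```python
def c_method_score(responses, key):
    """Score a power test under the conventional response method:
    one point when the keyed answer is chosen, zero for a distracter.

    responses: chosen option per item, e.g. "A".."D"
    key:       keyed (correct) option per item, same length
    """
    return sum(1 for chosen, correct in zip(responses, key) if chosen == correct)

# Example with four-option items: three of four answers match the key.
print(c_method_score(["A", "C", "D", "B"], ["A", "C", "B", "B"]))  # -> 3
```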


Educational and Psychological Measurement | 1955

Four Kinds of Reproducibility in Scale Analysis

John E. Milholland

a coefficient of reproducibility. It consists of the proportion of total responses which can be predicted from the scores, assuming that a unidimensional scale does in fact exist. For example, if there are 5 items, ordered 1, 2, 3, 4, 5, a person with a score of 4 should answer items 1, 2, 3, 4 positively; item 5, negatively. A pattern 1235 is considered as having one error, since changing the response to item 5 would change the pattern to 123, or changing the response to item 4 would change it to 12345, either of which is consistent with the scale. This method of counting errors gives a lower bound to the number of errors, since it is conceivable, in the example just given, that there may be errors in the responses to items 1, 2, and 3. What is done in every case to obtain an error score is to count the smallest number of responses which must be changed in order to make the pattern fit the scale. Reproducibility is then computed as the complement of the ratio of error score to total number of responses. In the example above, there is one error in five responses, so the reproducibility for a person giving the pattern is .80. For the group of individuals the computational process is similar, the error scores and total number of responses being summed over all individuals before the division is carried out. The kind of coefficient just described makes the response of
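The error count and the coefficient just described can be written out in a few lines. The sketch below is an illustration of that procedure, not code from the paper; it assumes items are already arranged in scale order and encodes a response pattern as 0/1 values (1 = positive).

```python
def errors(pattern):
    """Smallest number of responses that must be changed so the pattern
    fits a perfect cumulative (Guttman) scale pattern."""
    n = len(pattern)
    # Perfect patterns: the k easiest items positive, the rest negative.
    perfect = [tuple(1 if i < k else 0 for i in range(n)) for k in range(n + 1)]
    return min(sum(a != b for a, b in zip(pattern, p)) for p in perfect)

def reproducibility(patterns):
    """Group coefficient: error scores and response counts are summed over
    all individuals before the division is carried out."""
    total_errors = sum(errors(p) for p in patterns)
    total_responses = sum(len(p) for p in patterns)
    return 1 - total_errors / total_responses

# The five-item example from the abstract: pattern 1235 (item 4 negative)
# has one error, so an individual reproducibility of 1 - 1/5 = .80.
print(errors((1, 1, 1, 0, 1)))             # -> 1
print(reproducibility([(1, 1, 1, 0, 1)]))  # -> 0.8
```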


Journal of School Psychology | 1975

Rater agreements in assigning Stanford-Binet items to Guilford's Structure of Intellect Operations categories

Calvin O. Dyer; Cynthia Neigler; John E. Milholland

Summary: Nineteen school psychologists assigned the 142 items in Form L-M of the Stanford-Binet Intelligence Scale to the five Operations categories of Guilford's Structure of Intellect model, following flow charts prepared for this purpose by Meeker (1965). On the average, one rater agreed with another on about half the items, and their modal assignments agreed with Meeker's (1969) assignments on only 81 (57%) of the items. These levels of agreement are judged not to be high enough to justify classifying Stanford-Binet items in accordance with the Structure of Intellect Operations categories.

For years the Stanford-Binet Intelligence Scale has been widely used by school psychologists and others in the field of psychological testing. Because of its established reputation, it is commonly used as a measure of intellectual ability. However, some psychologists consider its single global measure of intelligence inadequate for the purpose of identifying and dealing with specific abilities in children, and look for ways to penetrate beyond the single score. By contrast, Guilford's Structure of Intellect (Guilford, 1967) offers a detailed partition of intellectual abilities, classifying them under the dimensions of Operations, Content, and Products. His model can accommodate 120 different abilities, each specified by a trigraph consisting of a letter for one Operation, one Content, and one Product. Some psychological testers have been using performance on individual Stanford-Binet test items to assess specific intellectual abilities, although Terman and Merrill (1960) specifically warned against such a practice in their manual for the 1960 (Form L-M) revision of the test. Ramsey and Vane (1970) investigated Valett's (1963) scheme of item classification on the basis of five factors. They, however, found seven factors and also concluded that, contrary to Valett's assertion, performance on subtest items does not depend on only one aspect of intelligence. Darrah (Note 1) likewise concluded that a group factor structure was indicated. He identified five factors: one verbal, two memory (recall and concentration), a judgment and reasoning factor, and one more which was complex and not interpreted. The memory and reasoning factors that Darrah obtained agreed well with similar factors reported by
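The two agreement figures reported in the summary (average pairwise agreement between raters, and agreement of the raters' modal assignments with a reference classification) can be illustrated with a short sketch. The data layout and category labels below are hypothetical toy values, not the study's data: assignments[r][i] is rater r's Operations category for item i, and reference[i] is the reference (e.g. Meeker's) assignment for item i.

```python
from collections import Counter
from itertools import combinations

def mean_pairwise_agreement(assignments):
    """Average, over all rater pairs, of the proportion of items on which
    the two raters assigned the same category."""
    pairs = list(combinations(range(len(assignments)), 2))
    def agree(r1, r2):
        a, b = assignments[r1], assignments[r2]
        return sum(x == y for x, y in zip(a, b)) / len(a)
    return sum(agree(r1, r2) for r1, r2 in pairs) / len(pairs)

def modal_agreement(assignments, reference):
    """Number of items whose modal (most frequent) rater assignment
    matches the reference classification."""
    n_items = len(reference)
    modal = [Counter(r[i] for r in assignments).most_common(1)[0][0]
             for i in range(n_items)]
    return sum(m == ref for m, ref in zip(modal, reference))

# Toy data: 3 raters, 4 items, illustrative category labels.
raters = [["C", "M", "E", "C"],
          ["C", "M", "C", "C"],
          ["M", "M", "E", "C"]]
print(mean_pairwise_agreement(raters))                 # -> 0.666...
print(modal_agreement(raters, ["C", "M", "E", "M"]))   # -> 3
```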


Journal of Educational Psychology | 1964

Dimensions of student evaluations of teaching.

Robert L. Isaacson; Wilbert J. McKeachie; John E. Milholland; Yi G. Lin; Margaret Hofeller; Karl L. Zinn


Journal of Educational Psychology | 1963

Correlation of teacher personality variables and student ratings.

Robert L. Isaacson; Wilbert J. McKeachie; John E. Milholland


Journal of Personality and Social Psychology | 1966

Student affiliation motives, teacher warmth, and academic achievement.

Wilbert J. McKeachie; Yi-Guang Lin; John E. Milholland; Robert L. Isaacson


Journal of Consulting and Clinical Psychology | 1968

Student achievement motives, achievement cues, and academic achievement.

Wilbert J. McKeachie; Robert L. Isaacson; John E. Milholland; Yi-Guang Lin


American Psychologist | 1958

Comment on "The Needless Assumption of Normality in Pearson's r".

John E. Milholland


Educational and Psychological Measurement | 1955

The Reliability of Test Discriminations

John E. Milholland


Archive | 1955

The assessment of partial knowledge in objective testing

Clyde H. Coombs; John E. Milholland; Frank B. Womer
