Publication


Featured researches published by Gerald J. Melican.


Educational and Psychological Measurement | 1989

Effects of Item Context on Intrajudge Consistency of Expert Judgments via the Nedelsky Standard Setting Method.

Barbara S. Plake; Gerald J. Melican

Judgmental methods for estimating passing scores are generally carried out by having testing specialists evaluate items in an intact test form. Alternatively, judges may rate individual items from an item pool, and cutscores are then determined from these ratings once items are chosen from the pool to form the operational test. This approach to establishing cutscores assumes (a) a stable definition of minimal competency from the time of review to the time of assembly into the operational form and (b) invariance of item rating judgments to test-form contextual variables. This study examined the impact of overall test length and difficulty on expert judgments of item performance made with the Nedelsky method. Results suggest that judges are fairly consistent in their ratings of items regardless of overall test length or difficulty.
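For readers unfamiliar with the Nedelsky method named above: for each multiple-choice item, a judge identifies the distractors a minimally competent candidate (MCC) could eliminate, the item's minimum pass level is the reciprocal of the number of remaining options, and the cut score is the sum of these levels across items. A minimal sketch of that arithmetic, with illustrative data not drawn from the study:

```python
def nedelsky_cutscore(items):
    """Compute a Nedelsky cut score.

    items: list of (n_options, n_eliminated) pairs, where n_eliminated
    is the number of distractors a judge believes an MCC can rule out.
    """
    cut = 0.0
    for n_options, n_eliminated in items:
        remaining = n_options - n_eliminated
        # The MCC is assumed to guess at random among the remaining
        # options, so the item's minimum pass level is 1 / remaining.
        cut += 1.0 / remaining
    return cut

# Three 4-option items: 2, 1, and 0 distractors eliminated.
items = [(4, 2), (4, 1), (4, 0)]
print(round(nedelsky_cutscore(items), 4))  # 1/2 + 1/3 + 1/4 -> 1.0833
```

The sum is taken over the operational test form, which is why the abstract's two assumptions (a stable definition of minimal competency and context-invariant item ratings) matter when items are rated out of context.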


Educational and Psychological Measurement | 1983

Differential Performance of Males and Females on Easy to Hard Item Arrangements: Influence of Feedback at the Item Level

Barbara S. Plake; Gerald J. Melican; Linda Carter; Michael Shaughnessy

Differential test performance by males and females was reported on a quantitative examination as a function of item arrangement (Plake, Ansorge, Parker, and Lowry, 1982): males performed at a higher level than females on tests whose items were arranged from easy to hard. Plake and Ansorge (1982) speculated that this male advantage may be a function of differential reinforcement in the backgrounds of males and females. Using a Latin square design, this study provided item-performance feedback on a nonquantitative examination. Significant sex-by-order effects did not occur, but the absence of this effect may have been due to the use of a nonquantitative examination (Plake and Ansorge, 1983). It remains to be seen whether, on a quantitative examination, differential effects of item feedback might explain the evidence of sex-by-order effects. Implications for the validity of test score interpretation are considered briefly.


Journal of Psychoeducational Assessment | 1985

Correction for Guessing and Nedelsky's Standard-Setting Method: Are they Compatible?

Gerald J. Melican; Barbara S. Plake

Applying the Livingston and Zieky (1982) correction to a cut score for an examination scored with a correction for guessing is reasonable only if minimally competent candidates (MCCs) omit exactly those items for which they cannot eliminate any of the options. Otherwise, the corrected cut score will be overly harsh, resulting in additional classification errors. In this study, omit behavior on a 48-item Mathematics Achievement Test by an empirically established group of MCCs was compared with the predictions of five expert judges. The results suggest that these examinees tended to respond to the items with an omit pattern similar to that predicted by the judges.
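Correction-for-guessing (formula) scoring, as referenced above, conventionally computes S = R - W/(k - 1), where R is the number right, W the number wrong, and k the options per item; omits contribute nothing, so a blind guesser gains nothing in expectation. A minimal sketch, with an illustrative item count and option count (the study's test had 48 items; its option count is not stated here):

```python
def corrected_score(rights, wrongs, n_options):
    """Formula-scored (correction-for-guessing) raw score.

    rights, wrongs: counts of correct and incorrect responses;
    omitted items contribute nothing. n_options: choices per item.
    """
    # Each wrong answer costs 1 / (n_options - 1) points, removing
    # the expected gain from blind guessing.
    return rights - wrongs / (n_options - 1)

# Illustrative 48-item, 5-option test: 30 right, 12 wrong, 6 omitted.
print(corrected_score(30, 12, 5))  # 30 - 12/4 -> 27.0
```

Under this scoring rule, an MCC who omits an item where one or more distractors could have been eliminated forgoes a positive expected gain, which is why a cut score corrected under the omit-only-when-clueless assumption can be overly harsh.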


Archive | 1987

Future of Educational Measurement

Barbara S. Plake; Gerald J. Melican

As a field, educational measurement has strong and essential ties to educational psychology. Practitioners in educational measurement, almost by definition, are closely linked to educational psychology for the basis of their theoretical and pedagogical principles. Researchers in educational measurement are often dependent on educational psychologists for the theoretical underpinning of the psychometric principles and applications they pursue in their scholarly endeavors. Reciprocally, educational psychologists rely on educational measurement for the essential psychometric tools for analyzing and applying their theories and practices. The details of this close and necessary linkage between educational psychology and educational measurement are addressed further in this chapter, which focuses on how the interrelationship may evolve and expand in the future.


Educational Measurement: Issues and Practice | 1991

Factors Influencing Intrajudge Consistency During Standard-Setting

Barbara S. Plake; Gerald J. Melican; Craig N. Mills


Educational Measurement: Issues and Practice | 1991

Defining Minimal Competence

Craig N. Mills; Gerald J. Melican; Nancy Thomas Ahluwalia


Applied Measurement in Education | 1988

Estimating and Adjusting Cutoff Scores: Features of Selected Methods

Craig N. Mills; Gerald J. Melican


Educational and Psychological Measurement | 1989

Accuracy of Item Performance Predictions Based on the Nedelsky Standard Setting Method

Gerald J. Melican; Craig N. Mills; Barbara S. Plake


ETS Research Report Series | 1995

Effects of Mode of Item Presentation on Standard Setting

Jane Faggen; Gerald J. Melican; Don Powers


Archive | 1985

Prediction of Item Performance by Expert Judges: A Methodology for Examining the Impact of Correction-for-Guessing Instructions on Test Taking Behavior.

Barbara S. Plake; Gerald J. Melican

Collaboration

Top co-authors of Gerald J. Melican:

Barbara S. Plake (University of Nebraska–Lincoln)
Craig N. Mills (Educational Testing Service)
Don Powers (Educational Testing Service)
Linda Carter (University of Nebraska–Lincoln)
Michael Shaughnessy (University of Nebraska–Lincoln)