
Publication


Featured research published by Eugene G. Johnson.


Journal of Educational Statistics | 1992

Scaling Procedures in NAEP

Robert J. Mislevy; Eugene G. Johnson; Eiji Muraki

Scale-score reporting is a recent innovation in the National Assessment of Educational Progress (NAEP). With scaling methods, the performance of a sample of students in a subject area or subarea can be summarized on a single scale even when different students have been administered different exercises. This article presents an overview of the scaling methodologies employed in the analyses of NAEP surveys beginning with 1984. The first section discusses the perspective on scaling from which the procedures were conceived and applied. The plausible values methodology developed for use in NAEP scale-score analyses is then described, in the contexts of item response theory and average response method scaling. The concluding section lists milestones in the evolution of the plausible values approach in NAEP and directions for further improvement.
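The plausible values idea described above can be illustrated with a toy sketch (made-up data, not NAEP's operational machinery): each student's proficiency is represented by several random draws, the statistic of interest is computed on each draw and averaged, and the reported uncertainty combines a sampling-variance term with the between-draw variance, Rubin-style. The simple-random-sampling variance below is a stand-in for NAEP's design-based estimate.

```python
import numpy as np

# Hypothetical data: 5 plausible values drawn for each of 200 students.
rng = np.random.default_rng(0)
true_theta = rng.normal(250, 35, size=200)               # latent proficiency
pvs = true_theta[:, None] + rng.normal(0, 10, (200, 5))  # 5 plausible values each

# Estimate the population mean from each plausible value, then average.
means = pvs.mean(axis=0)          # one estimate per plausible value
point_estimate = means.mean()

# Combine uncertainty: sampling variance plus (1 + 1/M) times the
# between-plausible-value (measurement) variance.
M = pvs.shape[1]
between = means.var(ddof=1)
sampling_var = pvs.var(ddof=1, axis=0).mean() / pvs.shape[0]  # SRS stand-in
total_var = sampling_var + (1 + 1 / M) * between
print(point_estimate, np.sqrt(total_var))
```

The point estimate equals the grand mean over all draws; the extra `(1 + 1/M)` factor keeps the standard error honest about using only a finite number of plausible values.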


Journal of Educational Statistics | 1992

Population Inferences and Variance Estimation for NAEP Data

Eugene G. Johnson; Keith F. Rust

In the National Assessment of Educational Progress (NAEP), population inferences and variance estimation are based on a randomization-based perspective where the link between the observed data and the population quantities of interest is given by the distribution of potential values of estimates over repeated samples from the same population using the identical sample design. Because NAEP uses a complex sample design, many of the assumptions underlying traditional statistical analyses are violated, and, consequently, analysis procedures must be adjusted to appropriately handle the structure of the sample. In this article, we discuss the use of sampling weights in deriving population estimates and consider the effect of nonresponse and undercoverage on those estimates. We also discuss the estimation of sampling variability from complex sample surveys, concentrating on the jackknife repeated replication procedure (the variance estimation procedure used by NAEP) and address the use of a simple approximation to sampling variability. Finally, we discuss measures of the stability of variance estimates.
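The jackknife repeated replication recipe can be sketched with toy data: for each pair of first-stage sampling units, drop one unit, double the weight of its pair, re-estimate, and sum the squared deviations of the replicate estimates from the full-sample estimate. This is an illustrative sketch of the general paired-PSU scheme, not NAEP's actual replicate-weight construction.

```python
import numpy as np

# Toy sample: 10 variance strata, 2 primary sampling units (PSUs) each,
# 30 students per PSU (all numbers invented for illustration).
rng = np.random.default_rng(1)
scores = rng.normal(250, 35, size=(10, 2, 30))   # stratum x PSU x student
weights = np.ones_like(scores)

full_est = np.average(scores, weights=weights)

# One replicate per stratum: zero out one PSU, double its paired PSU.
replicates = []
for h in range(scores.shape[0]):
    w = weights.copy()
    w[h, 0] = 0.0       # drop the first PSU in stratum h
    w[h, 1] *= 2.0      # double the weight of its pair
    replicates.append(np.average(scores, weights=w))

# JRR variance: sum of squared deviations of replicate estimates.
jrr_var = sum((r - full_est) ** 2 for r in replicates)
print(full_est, jrr_var)
```

Because each replicate perturbs only one stratum, the squared deviations decompose the variance contribution stratum by stratum, which is what makes the procedure work under a paired complex design.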


Journal of Educational and Behavioral Statistics | 1992

Chapter 3: Scaling Procedures in NAEP

Robert J. Mislevy; Eugene G. Johnson; Eiji Muraki



Journal of Educational Statistics | 1992

Sampling and Weighting in the National Assessment

Keith F. Rust; Eugene G. Johnson

This chapter describes procedures for obtaining the National Assessment of Educational Progress (NAEP) student samples used in the national and state assessments and for deriving survey weights for use in the analysis of the survey data. Following the description of general procedures, more detailed discussion is included about several issues that relate to the procedures used. In some cases, these involve procedures that NAEP is actively reviewing and investigating, with a view toward implementing improvements in the future. In other cases, the procedures, although well established in NAEP, involve technical aspects with interesting features not fully described in the available technical reports. Probability sampling techniques are used to select the student samples for the national assessment and for the Trial State Assessment program. The sample designs for these two types of samples depend on each other only in the implementation of procedures to minimize the overlap of the samples of participating schools. For both the national assessment and the Trial State Assessment, the goal of the sample design is to obtain samples of students from which estimates of subpopulation characteristics can be obtained with reasonably high precision while satisfying economic and operational constraints.
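The role of the survey weights the chapter derives can be shown with a small invented example: each student's base weight is the inverse of the selection probability, inflated by the inverse of a nonresponse-class response rate, and the weighted mean corrects the distortion that oversampling would otherwise introduce. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical two-stratum sample; the second stratum is oversampled.
scores   = np.array([210.0, 225.0, 240.0, 260.0, 275.0, 290.0])
sel_prob = np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05])  # selection probabilities
base_w   = 1.0 / sel_prob                                  # base weights

# Nonresponse adjustment: divide by the (illustrative) response rate
# of each student's nonresponse class.
resp_rate = np.array([0.8, 0.8, 0.8, 0.9, 0.9, 0.9])
weight = base_w / resp_rate

# The weighted mean corrects for the oversampled stratum; the
# unweighted mean would overrepresent it.
weighted_mean   = np.average(scores, weights=weight)
unweighted_mean = scores.mean()
print(weighted_mean, unweighted_mean)
```

Here the high-scoring stratum was sampled at 2.5 times the rate of the other, so the unweighted mean is pulled upward; weighting restores each stratum to its population share.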


Applied Psychological Measurement | 1997

Standard Errors of the Kernel Equating Methods Under the Common-Item Design

Michelle Liou; Philip E. Cheng; Eugene G. Johnson

Simplified equations are derived to compute the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (Holland, King, & Thayer, 1989; Holland & Thayer, 1987). The simplified equations can be used to equate both observed- and smoothed-score distributions (Rosenbaum & Thayer, 1987). Results from two empirical studies indicate that the simplified equations work reasonably well for moderate-size samples (e.g., 1,000 examinees).
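The Gaussian-kernel continuization step that these standard errors apply to can be illustrated simply: a discrete score distribution is smoothed by centering a Gaussian kernel at each score point, yielding a smooth, strictly increasing CDF on which equipercentile equating can operate. The score probabilities and bandwidth below are invented for illustration.

```python
import numpy as np
from math import erf, sqrt

# Toy discrete score distribution for an 11-point test (probabilities sum to 1).
scores = np.arange(11)
probs = np.array([.01, .03, .07, .12, .16, .22, .16, .12, .07, .03, .01])

def continuized_cdf(x, h=0.6):
    """Gaussian-kernel continuization: a mixture of N(score, h^2) CDFs,
    weighted by the discrete score probabilities."""
    return sum(p * 0.5 * (1 + erf((x - s) / (h * sqrt(2))))
               for s, p in zip(scores, probs))

# The result is smooth and strictly increasing, so every percentile rank
# corresponds to a unique score, as equipercentile equating requires.
print(continuized_cdf(5.0))
```

Because the toy distribution is symmetric about 5, the continuized CDF evaluates to exactly 0.5 there; the bandwidth `h` controls how much the discrete mass is smeared.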


Journal of Educational and Behavioral Statistics | 1992

Chapter 5: Population Inferences and Variance Estimation for NAEP Data

Eugene G. Johnson; Keith F. Rust



Journal of Educational Measurement | 1992

The Design of the National Assessment of Educational Progress

Eugene G. Johnson


Journal of Educational Measurement | 1992

Overview of the Scaling Methodology Used in the National Assessment

Albert E. Beaton; Eugene G. Johnson


ETS Research Report Series | 1987

Generalized Variance Functions for a Complex Sample Survey

Eugene G. Johnson; Benjamin F. King


Journal of Educational Measurement | 1990

The Differential Impact of Curriculum on Aptitude Test Scores

William H. Angoff; Eugene G. Johnson

Collaboration


Dive into Eugene G. Johnson's collaborations.

Top Co-Authors


Howard Wainer

National Board of Medical Examiners
