
Publication


Featured research published by Rebecca Zwick.


American Educational Research Journal | 2005

Predicting College Grades and Degree Completion Using High School Grades and SAT Scores: The Role of Student Ethnicity and First Language

Rebecca Zwick; Jeffrey C. Sklar

The degree to which SAT scores and high school grade-point average (GPA) predicted first-year college GPA (FGPA) and college graduation was examined for four groups: Hispanic students whose first language was Spanish and Hispanic, Black, and White students whose first language was English. The percentage of variance in FGPA jointly explained by high school GPA and SAT score varied from 7% to 20% across groups. Survival analyses showed that high school GPA had a statistically significant influence on graduation in the White/English group; SAT had a significant effect in the Hispanic/English and White/English groups. The regression and survival analyses revealed interesting differences in achievement patterns between the Hispanic/Spanish and Hispanic/English groups, demonstrating the value of taking language background into consideration in educational research.
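To make the variance-explained comparison concrete, here is a minimal sketch of the kind of within-group regression the abstract describes: for each group, first-year college GPA is regressed on high school GPA and SAT score and the R² is reported. All data, coefficients, and group sizes below are synthetic illustrations, not the study's data.

```python
# Sketch of the within-group regressions described in the abstract.
# Data, effect sizes, and group sizes are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

def r_squared(y, X):
    """R^2 from an OLS fit of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

groups = ["Hispanic/Spanish", "Hispanic/English", "Black/English", "White/English"]
for g in groups:
    n = 500
    hsgpa = rng.normal(3.0, 0.5, n)
    sat = rng.normal(1000, 150, n)
    # FGPA depends weakly on both predictors plus noise (assumed coefficients).
    fgpa = 0.4 * hsgpa + 0.001 * sat + rng.normal(0, 0.5, n)
    print(f"{g}: R^2 = {r_squared(fgpa, np.column_stack([hsgpa, sat])):.2f}")
```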


Journal of Educational and Behavioral Statistics | 1990

When Do Item Response Function and Mantel-Haenszel Definitions of Differential Item Functioning Coincide?

Rebecca Zwick

A test item is typically considered free of differential item functioning (DIF) if its item response function is the same across demographic groups. A popular means of testing for DIF is the Mantel-Haenszel (MH) approach. Holland and Thayer (1988) showed that, under the Rasch model, identity of item response functions across demographic groups implies that the MH null hypothesis will be satisfied when the MH matching variable is test score, including the studied item. This result, however, cannot be generalized to the class of items for which item response functions are monotonic and local independence holds. Suppose that all item response functions are identical across groups, but the ability distributions for the two groups are stochastically ordered. In general, the population MH result will show DIF favoring the higher group on some items and the lower group on others. If the studied item is excluded from the matching criterion under these conditions, the population MH result will always show DIF favoring the higher group.
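For readers unfamiliar with the MH setup, the sketch below shows how the Mantel-Haenszel common odds ratio pools the 2x2 group-by-response tables formed at each level of the matching test score; a pooled odds ratio near 1 corresponds to the MH null hypothesis of no DIF. The table counts are invented for illustration.

```python
# Minimal sketch of the Mantel-Haenszel pooling across score strata.
import numpy as np

# One 2x2 table per matching-score stratum k:
# [[a_k, b_k],   a = reference correct, b = reference incorrect
#  [c_k, d_k]]   c = focal correct,     d = focal incorrect
tables = [
    np.array([[30, 20], [25, 25]]),
    np.array([[45, 15], [40, 20]]),
    np.array([[60, 10], [55, 15]]),
]

def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio estimate across strata."""
    num = sum(t[0, 0] * t[1, 1] / t.sum() for t in tables)
    den = sum(t[0, 1] * t[1, 0] / t.sum() for t in tables)
    return num / den

print(f"MH odds ratio: {mh_odds_ratio(tables):.3f}")  # ~1 suggests no DIF
```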


Journal of Educational and Behavioral Statistics | 1996

Evaluating the Magnitude of Differential Item Functioning in Polytomous Items

Rebecca Zwick; Dorothy T. Thayer

Several recent studies have investigated the application of statistical inference procedures to the analysis of differential item functioning (DIF) in polytomous test items that are scored on an ordinal scale. Mantel’s extension of the Mantel-Haenszel test is one of several hypothesis-testing methods for this purpose. The development of descriptive statistics for characterizing DIF in polytomous test items has received less attention. As a step in this direction, two possible standard error formulas for the polytomous DIF index proposed by Dorans and Schmitt were derived. These standard errors, as well as associated hypothesis-testing procedures, were evaluated through application to simulated data. The standard error that performed better is based on Mantel’s hypergeometric model. The alternative standard error, based on a multinomial model, tended to yield values that were too small.
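As context for the descriptive index in question, here is a hedged sketch of a standardized mean difference for a polytomous item: the focal-group-weighted difference in mean item scores between focal and reference examinees matched on total score. The inputs are simulated, and the paper's standard error formulas are not reproduced here.

```python
# Sketch of a standardized mean difference (SMD) polytomous DIF index
# in the spirit of Dorans and Schmitt. Inputs are simulated.
import numpy as np

rng = np.random.default_rng(1)
strata = 5  # matching-score levels

def smd(ref_means, foc_means, foc_counts):
    """SMD = sum_k w_k * (focal mean_k - reference mean_k), focal-weighted."""
    w = foc_counts / foc_counts.sum()
    return float(np.sum(w * (foc_means - ref_means)))

ref_means = rng.uniform(1.0, 2.5, strata)             # mean item score, 0-3 scale
foc_means = ref_means + rng.normal(0, 0.1, strata)    # small simulated DIF
foc_counts = rng.integers(50, 200, strata)
print(f"SMD index: {smd(ref_means, foc_means, foc_counts):+.3f}")
```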


Journal of Educational Statistics | 1992

Chapter 1: Overview of the National Assessment of Educational Progress

Albert E. Beaton; Rebecca Zwick

This chapter gives an overview of the design and the statistical and psychometric analysis methods developed for use in the National Assessment of Educational Progress (NAEP). For more than 20 years, NAEP has provided information about the educational achievements of students in American schools. In recent years, NAEP has been gaining in prominence and has also been growing bigger and more complex. In 1990, an assessment of individual states was added to NAEP. Also, it is anticipated that the legislation that prohibits NAEP from reporting district and school results may be removed and that NAEP may return to annual rather than biennial assessments. In addition, future assessments will involve a larger number of innovative items, such as questions for which students must produce their own answers rather than selecting among specified options, tasks in which students are asked to read aloud, and portfolios that consist of classroom work produced over a period of time. NAEP’s never-ending growth and evolution continue to provide new technological challenges to its statisticians and psychometricians.


Journal of Educational and Behavioral Statistics | 2000

Using Loss Functions for DIF Detection: An Empirical Bayes Approach

Rebecca Zwick; Dorothy T. Thayer; Charles Lewis

We investigated a DIF flagging method based on loss functions. The approach builds on earlier research that involved the development of an empirical Bayes (EB) enhancement to Mantel-Haenszel (MH) DIF analysis. The posterior distribution of DIF parameters was estimated and used to obtain the posterior expected loss for the proposed approach and for competing classification rules. Under reasonable assumptions about the relative seriousness of Type I and Type II errors, the loss-function-based DIF detection rule was found to perform better than the commonly used A, B, and C DIF classification system, especially in small samples.
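The decision-theoretic idea can be illustrated with a toy rule (not the paper's exact loss functions): given a normal posterior for an item's DIF parameter, flag the item when the posterior expected loss of flagging is smaller than that of not flagging. The DIF threshold and the loss ratio below are assumptions chosen for illustration.

```python
# Toy loss-based DIF flagging rule under a normal posterior.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def flag_item(post_mean, post_sd, dif_threshold=1.0,
              type1_loss=1.0, type2_loss=4.0):
    """Flag if expected loss of flagging < expected loss of not flagging.

    p = posterior probability that |theta| exceeds dif_threshold.
    Flagging costs type1_loss if the item is truly non-DIF (prob 1 - p);
    not flagging costs type2_loss if the item truly has DIF (prob p).
    """
    p = (normal_cdf(-dif_threshold, post_mean, post_sd)
         + 1 - normal_cdf(dif_threshold, post_mean, post_sd))
    return type1_loss * (1 - p) < type2_loss * p

print(flag_item(post_mean=1.2, post_sd=0.5))  # likely flagged
print(flag_item(post_mean=0.2, post_sd=0.5))  # likely not flagged
```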


Journal of Educational Statistics | 1992

Chapter 7: Statistical and Psychometric Issues in the Measurement of Educational Achievement Trends: Examples From the National Assessment of Educational Progress

Rebecca Zwick

Like all studies involved in the assessment of trends in educational performance, the National Assessment of Educational Progress (NAEP) is confronted with an array of unresolved methodological and philosophical issues. One of the basic dilemmas faced by NAEP is how to measure performance change while remaining responsive to advances in curriculum and the technology of assessment. NAEP has become much more cautious about making seemingly insubstantial changes in the assessment because of the so-called NAEP reading anomaly—an apparently steep drop between 1984 and 1986 in estimated reading proficiency that was found to have resulted in part from changes in the order and context in which items appeared. Other issues that NAEP must consider in reporting performance trends are the effect of measurement scale indeterminacies and the ways in which interpretation of trend results can depend on the statistics that are selected for comparing proficiency distributions over time.


Applied Psychological Measurement | 2002

Application of an Empirical Bayes Enhancement of Mantel-Haenszel Differential Item Functioning Analysis to a Computerized Adaptive Test

Rebecca Zwick; Dorothy T. Thayer

This study used a simulation to investigate the applicability to computerized adaptive test data of a differential item functioning (DIF) analysis method developed by Zwick, Thayer, and Lewis. The approach involves an empirical Bayes (EB) enhancement of the popular Mantel-Haenszel (MH) DIF analysis method. Results showed the performance of the EB DIF approach to be quite promising, even in extremely small samples. In particular, the EB procedure was found to achieve roughly the same degree of stability for samples averaging 117 and 40 members in the two examinee groups as did the ordinary MH for samples averaging 240 in each of the two groups. Overall, the EB estimates tended to be closer to their target values than did the ordinary MH statistics in terms of root mean square residuals; the EB statistics were also more highly correlated with the target values than were the MH statistics. When combined with a loss-function-based decision rule, the EB method is better at detecting DIF than conventional approaches, but it has a higher Type I error rate.
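The shrinkage at the heart of the EB enhancement can be sketched with the standard normal-normal model: each observed MH statistic is pulled toward a prior mean, more strongly when its sampling variance is large, which is why the method stabilizes small-sample CAT data. In the sketch below the prior parameters are fixed constants for illustration; the method itself estimates them from the data (hence "empirical" Bayes).

```python
# Normal-normal empirical Bayes shrinkage of an MH DIF statistic.
# Prior parameters are assumed constants for illustration only.
def eb_posterior(y, s2, prior_mean=0.0, prior_var=0.25):
    """Posterior mean and variance of the DIF parameter given MH estimate y
    with sampling variance s2, under a normal prior."""
    w = prior_var / (prior_var + s2)   # shrinkage weight on the data
    post_mean = w * y + (1 - w) * prior_mean
    post_var = w * s2
    return post_mean, post_var

# Small samples -> large s2 -> heavy shrinkage toward the prior mean.
print(eb_posterior(y=1.5, s2=1.0))    # noisy estimate, strongly shrunk
print(eb_posterior(y=1.5, s2=0.05))   # precise estimate, barely shrunk
```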


Journal of Educational and Behavioral Statistics | 1997

Estimating the Importance of Differential Item Functioning

Tamás Rudas; Rebecca Zwick

Several methods have been proposed to detect differential item functioning (DIF), an item response pattern in which members of different demographic groups have different conditional probabilities of answering a test item correctly, given the same level of ability. In this article, the mixture index of fit, proposed by Rudas, Clogg, and Lindsay (1994), is used to estimate the fraction of the population for which DIF occurs, and this approach is compared to the Mantel-Haenszel (Mantel & Haenszel, 1959) test of DIF developed by Holland (1985; see Holland & Thayer, 1988). The proposed estimation procedure, which is noniterative, can provide information about which portions of the item response data appear to be contributing to DIF.
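A rough numerical illustration of the mixture index of fit for a single matched 2x2 table follows, based directly on its definition: the observed table is decomposed as a mixture of an independence (no-DIF) table and an arbitrary residual table, and pi* is the smallest possible residual fraction. The grid search is a crude stand-in for the paper's noniterative estimator, and the cell proportions are made up.

```python
# Numerical sketch of the mixture index of fit (pi*) for one 2x2 table.
# P = (1 - pi) * Phi + pi * Psi, with Phi an independence table and Psi
# arbitrary; equivalently pi* = 1 - max over Phi of min over cells P/Phi.
import numpy as np

P = np.array([[0.30, 0.20],
              [0.15, 0.35]])   # observed cell proportions (made up)

def pi_star(P, grid=200):
    best = 0.0
    for r in np.linspace(0.01, 0.99, grid):       # row margin of Phi
        for c in np.linspace(0.01, 0.99, grid):   # column margin of Phi
            phi = np.outer([r, 1 - r], [c, 1 - c])
            best = max(best, (P / phi).min())
    return 1 - min(best, 1.0)

print(f"pi* ~ {pi_star(P):.3f}")  # estimated fraction exhibiting DIF
```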


Educational and Psychological Measurement | 1997

The Effect of Adaptive Administration on the Variability of the Mantel-Haenszel Measure of Differential Item Functioning

Rebecca Zwick

The Mantel-Haenszel (MH) approach of Holland and Thayer is frequently used to assess differential item functioning (DIF). The formula for the variance of the MH DIF statistic is based on work by Phillips and Holland, and Robins, Breslow, and Greenland. Recent simulation studies showed that, for a given sample size, the MH variances tended to be larger when items were administered to examinees who were randomly selected from a population than when items were administered adaptively. An analytic perspective shed some light on this result. Although the general form of the MH variance is complex and does not provide an intuitive understanding of the phenomenon, application of certain Rasch model assumptions yields a simple expression that appears to explain the difference in variances for adaptive versus nonadaptive administration. The results suggest that adaptive testing may lead to more efficient application of MH DIF analyses.
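For reference, the Robins-Breslow-Greenland variance estimator for the log MH odds ratio, one component of the variance formula discussed here, can be sketched as follows. The stratum tables are invented, and the Rasch-based simplification derived in the paper is not shown.

```python
# Robins-Breslow-Greenland (RBG) variance of the log MH odds ratio.
# For stratum k with table [[a, b], [c, d]] and total n:
#   R = a*d/n, S = b*c/n, P = (a+d)/n, Q = (b+c)/n.
import numpy as np

tables = [np.array([[30, 20], [25, 25]]),
          np.array([[45, 15], [40, 20]]),
          np.array([[60, 10], [55, 15]])]   # made-up counts

def log_mh_or_variance(tables):
    R = np.array([t[0, 0] * t[1, 1] / t.sum() for t in tables])
    S = np.array([t[0, 1] * t[1, 0] / t.sum() for t in tables])
    Pk = np.array([(t[0, 0] + t[1, 1]) / t.sum() for t in tables])
    Qk = np.array([(t[0, 1] + t[1, 0]) / t.sum() for t in tables])
    Rs, Ss = R.sum(), S.sum()
    var = ((Pk * R).sum() / (2 * Rs**2)
           + (Pk * S + Qk * R).sum() / (2 * Rs * Ss)
           + (Qk * S).sum() / (2 * Ss**2))
    return np.log(Rs / Ss), var

log_or, var = log_mh_or_variance(tables)
print(f"log MH odds ratio = {log_or:.3f}, RBG variance = {var:.4f}")
```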



Collaboration


Dive into Rebecca Zwick's collaborations.

Top Co-Authors

Jeffrey C. Sklar, California Polytechnic State University
Alex Norman, University of California
Barbara G. Dodd, University of Texas at Austin
Douglas Folsom, University of California
Elliot M. Cramer, University of North Carolina at Chapel Hill