
Publication


Featured research published by Mary Pommerich.


Linking and Aligning Scores and Scales conference, June 2005, Princeton University, Princeton, NJ, US; the conference provided the raw material for this volume. | 2007

Linking and Aligning Scores and Scales

Neil J. Dorans; Mary Pommerich; Paul W. Holland

Contents:
Overview
Foundations: A Framework and History for Score Linking; Data Collection Designs and Linking Procedures
Equating: Equating: Best Practices and Challenges to Best Practices; Practical Problems in Equating Test Scores: A Practitioner's Perspective; Potential Solutions to Practical Equating Issues
Tests in Transition: Score Linking Issues Related to Test Content Changes; Linking Scores Derived Under Different Modes of Test Administration; Tests in Transition: Discussion and Synthesis
Concordance: Sizing Up Linkages; Concordance: The Good, the Bad, and the Ugly; Some Further Thoughts on Concordance
Vertical Scaling: Practical Issues in Vertical Scaling; Methods and Models for Vertical Scaling; Vertical Scaling and No Child Left Behind
Linking Group Assessments to Individual Assessments: Linking Assessments Based on Aggregate Reporting: Background and Issues; An Enhanced Method for Mapping State Standards onto the NAEP Scale; Using Aggregate-Level Linkages for Estimation and Validation: Comments on Thissen and Braun & Qian
Postscript


Applied Psychological Measurement | 2004

Issues in conducting linkages between distinct tests

Mary Pommerich; Bradley A. Hanson; Deborah J. Harris; James A. Sconing

Educational measurement practitioners are often asked to link scores on tests that are built to different content specifications. The goal in linking distinct tests is often similar to that for equating scores across different forms of the same test: to provide a set of comparable scores across the two measures. Traditional equating methods can be applied but results cannot be interpreted in the manner of equated scores. This article proposes a linkage process that consists of four stages to follow in linking distinct tests: choosing an appropriate linkage type and methodology, linking scores and computing summary measures, evaluating the quality of the linkage and determining what to report, and making recommendations for the interpretation and use of the linkage results. The complete linkage process is illustrated by discussing practices and decisions made at each stage for a linkage conducted between ACT Composite and SAT I V+M scores using equipercentile methods. Prediction is explored as an alternate form of linking in situations when use of the equipercentile results is not appropriate.
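
A minimal sketch of the equipercentile idea referenced above, assuming simple empirical-CDF inversion over simulated score samples in Python; the function name, the absence of smoothing, and the data are illustrative assumptions, not the authors' operational procedure.

import numpy as np

def equipercentile_link(scores_x, scores_y, x_points):
    """Map each value in x_points to the score in the y distribution
    that sits at the same percentile rank (empirical-CDF inversion)."""
    xs = np.sort(np.asarray(scores_x, dtype=float))
    ys = np.sort(np.asarray(scores_y, dtype=float))
    # Percentile rank of each x point within the x distribution.
    ranks = np.searchsorted(xs, x_points, side="right") / len(xs)
    # Invert the empirical CDF of y at those ranks.
    idx = np.clip((ranks * len(ys)).astype(int), 0, len(ys) - 1)
    return ys[idx]

# Simulated samples standing in for scores on two distinct tests.
rng = np.random.default_rng(0)
scores_a = np.clip(rng.normal(21, 5, 5000).round(), 1, 36)            # ACT-like scale
scores_b = np.clip(rng.normal(1000, 200, 5000).round(-1), 400, 1600)  # SAT-like scale
print(equipercentile_link(scores_a, scores_b, np.array([18, 21, 27, 33])))

In practice, equipercentile functions are usually smoothed and evaluated across the full reportable score range; the raw inversion above conveys only the core matching step.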


Applied Psychological Measurement | 2014

Using Multidimensional CAT to Administer a Short, Yet Precise, Screening Test

Lihua Yao; Mary Pommerich; Daniel O. Segall

Multidimensional computerized adaptive testing (MCAT) provides a mechanism by which the simultaneous goals of accurate prediction and minimal testing time for a screening test could both be met. This article demonstrates the use of MCAT to administer a screening test for the Computerized Adaptive Testing–Armed Services Vocational Aptitude Battery (CAT-ASVAB) under a variety of manipulated conditions. CAT-ASVAB is a test battery administered via unidimensional CAT (UCAT) that is used to qualify applicants for entry into the U.S. military and assign them to jobs. The primary research question being evaluated is whether the use of MCAT to administer a screening test can lead to significant reductions in testing time from the full-length selection test, without significant losses in score precision. Different stopping rules, item selection methods, content constraints, time constraints, and population distributions for the MCAT administration are evaluated through simulation, and compared with results from a regular full-length UCAT administration.
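
A deliberately simplified, unidimensional sketch of the variable-length stopping idea the article evaluates: select the most informative remaining item, update the ability estimate, and stop once the standard error drops below a threshold or an item budget runs out. The Rasch model, pool size, and thresholds below are illustrative assumptions, not CAT-ASVAB's actual configuration.

import numpy as np

rng = np.random.default_rng(1)
pool_b = rng.normal(0.0, 1.0, 300)   # hypothetical item difficulties
true_theta = 0.8                     # the simulee's true ability

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta, used, responses = 0.0, [], []
for step in range(40):               # hard cap on test length
    # Maximum-information selection: Rasch information peaks where b == theta.
    item = min((i for i in range(len(pool_b)) if i not in used),
               key=lambda i: abs(pool_b[i] - theta))
    used.append(item)
    responses.append(rng.random() < rasch_p(true_theta, pool_b[item]))
    # Newton steps toward the maximum-likelihood estimate of theta.
    for _ in range(10):
        p = rasch_p(theta, pool_b[used])
        theta = np.clip(theta + np.sum(np.array(responses) - p) / np.sum(p * (1 - p)),
                        -4.0, 4.0)
    p = rasch_p(theta, pool_b[used])
    se = 1.0 / np.sqrt(np.sum(p * (1 - p)))  # standard error from test information
    if step >= 4 and se < 0.35:              # precision-based stopping rule
        break

print(f"stopped after {len(used)} items, theta = {theta:.2f}, SE = {se:.2f}")

The full study pairs stopping rules like this with multidimensional scoring plus content and time constraints, all of which a sketch this small necessarily omits.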


Applied Psychological Measurement | 2004

Linking scores via concordance: Introduction to the special issue

Mary Pommerich; Neil J. Dorans

The measurement community is undergoing a shift in perspective with regard to the practice of relating test scores, a process referred to as linking. In the past, much of the research on linking test scores focused on linkages between parallel forms of the same test, and the topic of equating dominated the literature as the means of conducting such a linkage. Linkages between scores from distinct tests (i.e., tests built to different specifications) were discussed and other methods of linking studied, but not to the extent that equating was. Over time, public demand for linkages between distinct tests that do not meet the parallel-forms assumption that underlies equating has increased. As a result, the attention that was once devoted to research on equating is shifting to address the needs of more diverse linkage situations.

Linking is worthy of more comprehensive study. There are many different situations under which test scores might be linked. The characteristics of the tests and the intended uses of the test scores to be linked create these different situations. The classic linkage situation occurs when scores between parallel forms designed to measure the same construct are linked. Linking scores from distinct tests that measure related but different constructs provides an example of a subtly different linkage situation. A linkage between scores on tests that are designed to measure the same construct but in different languages yields yet another linkage situation. Scores may also be linked across tests that measure the same construct but are administered via different modes. Likewise, scores on tests that are designed to measure the same construct but at varying levels of difficulty, such as across grades, may be linked. Linkages between scores on the National Assessment of Educational Progress (NAEP) and other tests present a unique linkage situation because NAEP does not report individual examinee scores. Additional situations not outlined here may also fall under the scope of linking.

This special issue addresses only linkages of the type in which scores from distinct tests that measure related but different constructs are linked. Score distributions are matched across the two tests to create related or concordant score points. This type of linkage is typically referred to as concordance. Concordance has a close relationship with equating because methods used to equate parallel forms of a test are commonly used to conduct concordances. Yet the linkage situations are very different across concordance and equating, so the practices that are typically followed in conducting and using results from an equating are not necessarily appropriate for a concordance.

Our interest in concordance stemmed from a linkage conducted between ACT and SAT I scores (initially reported in Dorans, Lyu, Pommerich, & Houston, 1997). ACT and SAT I scores are highly related, but the tests are built to different specifications and measure different constructs. Throughout the entire process of conducting the linkage, many questions were raised about
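
The distribution matching that defines concordance can be made concrete in a few lines: for each reportable score on one test, find the score on the other test at the same percentile rank, then round to that test's reported scale. All distributions below are simulated placeholders, not actual ACT or SAT I data.

import numpy as np

rng = np.random.default_rng(2)
# Simulated placeholder score distributions for two distinct tests.
test_a = np.sort(np.clip(rng.normal(21, 5, 5000).round(), 1, 36))
test_b = np.sort(np.clip(rng.normal(1000, 200, 5000).round(-1), 400, 1600))

for a in range(11, 37):  # hypothetical reportable scores on test A
    rank = np.searchsorted(test_a, a, side="right") / len(test_a)
    idx = min(int(rank * len(test_b)), len(test_b) - 1)
    concordant = int(round(test_b[idx] / 10.0) * 10)  # round to B's 10-point scale
    print(f"A {a:2d} -> B {concordant}")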


College and University | 1997

Concordance Between ACT Assessment and Recentered SAT I Sum Scores.

Neil J. Dorans; C. Felicia Lyu; Mary Pommerich; Walter M. Houston


Archive | 1998

Estimating average domain scores

Mary Pommerich; W. Alan Nicewander; Bradley A. Hanson


Journal of Educational Measurement | 2008

Local Dependence in an Operational CAT: Diagnosis and Implications

Mary Pommerich; Daniel O. Segall


Archive | 1999

Issues in Creating and Reporting Concordance Results Based on Equipercentile Methods.

Mary Pommerich; Bradley A. Hanson; Deborah J. Harris; James A. Sconing


Archive | 2003

Context Effects in Pretesting: Impact on Item Statistics and Examinee Scores.

Mary Pommerich; Deborah J. Harris


The Journal of Technology, Learning and Assessment | 2007

The Effect of Using Item Parameters Calibrated from Paper Administrations in Computer Adaptive Test Administrations.

Mary Pommerich

Collaboration


Dive into Mary Pommerich's collaborations.

Top Co-Authors

Daniel O. Segall (Defense Manpower Data Center)

Bradley A. Hanson (The American College of Financial Services)

Lihua Yao (Defense Manpower Data Center)