Publication


Featured research published by Edward Curley.


Journal of Educational Measurement | 2004

Impact of Fewer Questions per Section on SAT I Scores.

Brent Bridgeman; Catherine Trapani; Edward Curley

The impact of allowing more time for each question on the SAT I: Reasoning Test scores was estimated by embedding sections with a reduced number of questions into the standard 30-minute equating section of two national test administrations. Thus, for example, questions were deleted from a verbal section that contained 35 questions to produce forms that contained 27 or 23 questions. Scores on the 23-question section could then be compared to scores on the same 23 questions when they were embedded in a section that contained 27 or 35 questions. Similarly, questions were deleted from a 25-question math section to form sections of 20 and 17 questions. Allowing more time per question had a minimal impact on verbal scores, producing gains of less than 10 points on the 200–800 SAT scale. Gains for the math score were less than 30 points. High-scoring students tended to benefit more than lower-scoring students, with extra time creating no increase in scores for students with SAT scores of 400 or lower. Ethnic/racial and gender differences were neither increased nor reduced with extra time.


Educational and Psychological Measurement | 2011

Observed Score Equating Using a Mini-Version Anchor and an Anchor with Less Spread of Difficulty: A Comparison Study

Jinghua Liu; Sandip Sinharay; Paul W. Holland; Miriam Feigenbaum; Edward Curley

Two different types of anchors are investigated in this study: a mini-version anchor and an anchor with less spread of difficulty than the tests to be equated. The latter is referred to as a midi anchor. The impact of these two types of anchors on observed score equating is evaluated and compared with respect to systematic error (bias), random equating error (SEE), and total equating error (RMSE) using SAT operational data. The results suggest that the overall bias, SEE, and RMSE when the midi anchor is used are either smaller than or very similar to those when the mini anchor test is used. The findings suggest that a midi anchor test would be preferred to a mini anchor test if equating accuracy at the ends of the score scale is not a primary concern.
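The error measures named in this abstract follow standard definitions in the equating literature. As an illustration only (the data, criterion function, and replication setup below are invented, not taken from the study), a minimal sketch of how bias, SEE, and RMSE are typically computed at each score point and then summarized:

```python
import numpy as np

# Hypothetical sketch of the error measures compared in the study.
# Suppose e_hat[r, x] is the equated score at raw score point x from
# replication r of an anchor-based equating, and e_true[x] is the
# criterion (population) equating function. Standard definitions:
#   bias(x) = mean over r of e_hat[r, x], minus e_true[x]
#   SEE(x)  = standard deviation over r of e_hat[r, x]
#   RMSE(x) = sqrt(bias(x)^2 + SEE(x)^2)

rng = np.random.default_rng(0)
scores = np.arange(200, 810, 10)          # SAT-like scale, illustrative only
e_true = scores + 5.0                     # toy criterion equating function

# 100 simulated replications with a small systematic shift (+2 points)
# and random noise (SD = 3 points) -- purely invented numbers
e_hat = e_true + 2.0 + rng.normal(0.0, 3.0, size=(100, scores.size))

bias = e_hat.mean(axis=0) - e_true        # conditional bias at each score
see = e_hat.std(axis=0)                   # standard error of equating
rmse = np.sqrt(bias**2 + see**2)          # total equating error

# Overall summaries averaged across the score scale, as in the comparison
print(np.abs(bias).mean(), see.mean(), rmse.mean())
```

With this setup the summaries recover roughly the built-in shift (about 2 points of bias) and noise level (about 3 points of SEE), which is how the study's overall mini-versus-midi comparisons are read.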


ETS Research Report Series | 2003

Effect of Fewer Questions per Section on SAT® I Scores

Brent Bridgeman; Catherine Trapani; Edward Curley



Journal of Educational Measurement | 2011

Test Score Equating Using a Mini-Version Anchor and a Midi Anchor: A Case Study Using SAT Data

Jinghua Liu; Sandip Sinharay; Paul W. Holland; Edward Curley; Miriam Feigenbaum


ETS Research Report Series | 2009

The Effects of Different Types of Anchor Tests on Observed Score Equating

Jinghua Liu; Sandip Sinharay; Paul W. Holland; Miriam Feigenbaum; Edward Curley


ETS Research Report Series | 2009

A Scale Drift Study

Jinghua Liu; Edward Curley; Albert Low


ETS Research Report Series | 2014

Test Score Equating Using Discrete Anchor Items Versus Passage‐Based Anchor Items: A Case Study Using SAT® Data (ETS RR-14-14)

Jinghua Liu; Jiyun Zu; Edward Curley; Jill Carey


ETS Research Report Series | 2012

The Stability of the Score Scales for the SAT Reasoning Test™ from 2005 to 2010 (ETS RR-12-15)

Hongwen Guo; Jinghua Liu; Edward Curley; Neil J. Dorans


Collaboration


Edward Curley's top co-authors include Jiyun Zu (Princeton University).