Frederick Cline
Princeton University
Publications
Featured research published by Frederick Cline.
Applied Measurement in Education | 2010
Elizabeth Stone; Linda L. Cook; Cara Cahalan Laitusis; Frederick Cline
This validity study examined differential item functioning (DIF) results on large-scale state standards-based English-language arts assessments at grades 4 and 8 for students without disabilities taking the test under standard conditions and students who are blind or visually impaired taking the test with either a large print or braille form. Using the Mantel-Haenszel method, only one item at each grade was flagged as displaying large DIF, in each case favoring students without disabilities. Additional items were flagged as exhibiting intermediate DIF, with some items found to favor each group. A priori hypothesis coding and attempts to predict the effects of large print or braille accommodations on DIF were not found to have a relationship with the actual flagging of items, although some a posteriori explanations could be made. The results are seen as supporting the accessibility and validity of the current test for students who are blind or visually impaired while also identifying areas for improvement consisting mainly of attention to formatting and consistency.
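The Mantel-Haenszel procedure referenced in this abstract pools 2x2 tables (group by correct/incorrect) across total-score strata into a common odds ratio, which ETS rescales to a delta metric for flagging. A minimal sketch follows; the function name and table layout are illustrative, not the study's actual code:

```python
import numpy as np

def mantel_haenszel_dif(tables):
    """Mantel-Haenszel common odds ratio and ETS delta for one item.

    tables: sequence of (A, B, C, D) per total-score stratum, where
    A = reference-group correct, B = reference-group incorrect,
    C = focal-group correct,     D = focal-group incorrect.
    """
    t = np.asarray(tables, dtype=float)
    A, B, C, D = t[:, 0], t[:, 1], t[:, 2], t[:, 3]
    N = A + B + C + D
    # Common odds ratio pooled across strata
    alpha = (A * D / N).sum() / (B * C / N).sum()
    # ETS delta scale: negative values indicate DIF against the focal group
    delta = -2.35 * np.log(alpha)
    return alpha, delta
```

Under the usual ETS classification, items with |delta| < 1 are treated as negligible DIF, roughly 1 to 1.5 as intermediate, and |delta| >= 1.5 (when statistically significant) as large, which corresponds to the "large" and "intermediate" categories in the abstract.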
Applied Measurement in Education | 2004
Brent Bridgeman; Frederick Cline; James Hessinger
The Graduate Record Examination General Test (GRE) is a measure of academic reasoning abilities that is intended to be a power test in which speed of responding plays at most a minor role. To test this assumption, we experimentally administered both the verbal and quantitative sections of the GRE with standard time limits and with 1.5 times the standard time limit (e.g., 45 min for a 30-min section). Participants volunteered to take an extra section with the experimental timing at the end of their regular GRE test; their incentive was eligibility for a cash payment if they did as well on the experimental section as on their operational sections. Usable data were obtained from 15,948 examinees. Results indicated that extra time added about 7 points to verbal scores and 7 points to quantitative scores (on the 200-800 score scale). Results were comparable across gender and ethnic groups, but quantitative score gains were slightly larger for lower-ability examinees.
Applied Measurement in Education | 2010
Linda L. Cook; Daniel R. Eignor; Yasuyo Sawaki; Jonathan Steinberg; Frederick Cline
This study compared the underlying factors measured by a state standards-based grade 4 English-Language Arts (ELA) assessment given to several groups of students. The focus of the research was to gather evidence regarding whether or not the tests measured the same construct or constructs for students without disabilities who took the test under standard conditions, students with learning disabilities who took the test under standard conditions, students with learning disabilities who took the test with accommodations as specified in their Individualized Education Program (IEP) or 504 plan, and students with learning disabilities who took the test with a read-aloud accommodation/modification. The ELA assessment contained both reading and writing portions. A total of 75 multiple-choice items were analyzed. A series of nested hypotheses was tested to determine whether the ELA assessment measured the same factors for students with disabilities who took the assessment with and without accommodations and students without disabilities who took the test without accommodations. The results of these analyses, although not conclusive, indicated that the assessment had a similar factor structure for all groups included in the study.
Applied Measurement in Education | 2009
Brent Bridgeman; Nancy Burton; Frederick Cline
Descriptions of validity results based solely on correlation coefficients or percent of the variance accounted for are not merely difficult to interpret, they are likely to be misinterpreted. Predictors that apparently account for a small percent of the variance may actually be highly important from a practical perspective. This study combined two existing data sets to demonstrate alternative methods of showing the value of the Graduate Record Examination General Test (GRE) as an indicator of first-year graduate grades. The combined data sets contained 4,451 students in six graduate fields: biology, chemistry, education, English, experimental psychology, and clinical psychology. Students within a department were divided into quartiles based on GRE scores and on undergraduate grade point average (UGPA), and the percent of students in the top and bottom quartiles earning a 3.8 or higher GPA in their first year of graduate study was noted. Even after controlling for undergraduate GPA quartiles (i.e., looking at GRE quartile differences within GPA quartiles), substantial differences related to GRE quartile remained.
Journal of Educational Measurement | 2004
Brent Bridgeman; Frederick Cline
ETS Research Report Series | 2000
Brent Bridgeman; Frederick Cline
Journal of Applied Testing Technology | 2014
Linda L. Cook; Daniel R. Eignor; Jonathan Steinberg; Yasuyo Sawaki; Frederick Cline
Journal of Applied Testing Technology | 2009
Jonathan Steinberg; Frederick Cline; Guangming Ling; Linda L. Cook; Namrata Tognatta
Studies in Educational Evaluation | 2001
Carol M. Myford; Frederick Cline
ETS Research Report Series | 2001
Brent Bridgeman; Nancy Burton; Frederick Cline