Jodi M. Casabianca
Princeton University
Publications
Featured research published by Jodi M. Casabianca.
Educational and Psychological Measurement | 2013
Jodi M. Casabianca; Daniel F. McCaffrey; Drew H. Gitomer; Courtney A. Bell; Bridget K. Hamre; Robert C. Pianta
Classroom observation of teachers is a significant part of educational measurement; measurements of teacher practice are being used in teacher evaluation systems across the country. This research investigated whether observations made live in the classroom and from video recordings of the same lessons yielded similar inferences about teaching. Using scores on the Classroom Assessment Scoring System–Secondary (CLASS-S) from 82 algebra classrooms, we explored the effect of observation mode on inferences about the level or ranking of teaching in a single lesson or in a classroom for a year. We estimated the correlation between scores from the two observation modes and tested for mode differences in the distribution of scores, the sources of variance in scores, and the reliability of scores, using generalizability and decision studies for the latter comparisons. Inferences about teaching in a classroom for a year were relatively insensitive to observation mode. However, time trends in the raters’ use of the score scale were significant for two CLASS-S domains, leading to mode differences in reliability and in the inferences drawn from individual lessons. Implications for different modes of classroom observation with the CLASS-S are discussed.
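For context on the decision-study projections mentioned above: in generalizability theory, reliability for a design in which teachers are crossed with lessons and raters is typically summarized by a generalizability coefficient of the form below. This is a generic textbook form with notation assumed for illustration; the paper's actual design and facets may differ.

\[
E\rho^2 \;=\; \frac{\sigma^2_{t}}{\sigma^2_{t} + \sigma^2_{tl}/n_l + \sigma^2_{tr}/n_r + \sigma^2_{tlr,e}/(n_l\, n_r)}
\]

Here \(\sigma^2_{t}\) is the between-teacher (true-score) variance, the remaining terms are interaction and residual error variances, and \(n_l\) and \(n_r\) are the numbers of lessons and raters averaged over; a decision study varies \(n_l\) and \(n_r\) to project reliability under alternative designs.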
Behavior Research Methods | 2018
Rose E. Stafford; Christopher R. Runyon; Jodi M. Casabianca; Barbara G. Dodd
An important consideration in any computer adaptive testing (CAT) program is the criterion used for ending item administration: the stopping rule, which ensures that all examinees are assessed to the same standard. Although various stopping rules exist, none have been compared under the generalized partial credit model (Muraki, Applied Psychological Measurement, 16, 159–176, 1992). In this simulation study we compared the performance of three variable-length stopping rules, standard error (SE), minimum information (MI), and change in theta (CT), both in isolation and in combination with minimum and maximum test-length requirements, as well as a fixed-length stopping rule. Each stopping rule was examined under two termination criteria: a more lenient requirement (SE = 0.35, MI = 0.56, CT = 0.05) and a more stringent one (SE = 0.30, MI = 0.42, CT = 0.02). The simulation design also included content balancing and exposure control, aspects of CAT that previous research comparing variable-length stopping rules has excluded. The minimum-information stopping rule produced biased theta estimates and varied greatly in measurement quality across the theta distribution. The change-in-theta stopping rule performed well when paired with a lower criterion and a minimum test length. The standard-error stopping rule consistently provided the best balance of measurement precision and operational efficiency, requiring the fewest administered items to obtain accurate and precise theta estimates, particularly when paired with a maximum-number-of-items stopping rule.
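As a concrete illustration of the standard-error rule evaluated above, here is a minimal simulation sketch of a variable-length CAT under the generalized partial credit model. All names (gpcm_probs, eap, simulate_cat), the item bank, and the EAP scoring choice are illustrative assumptions rather than the authors' code, and the content-balancing and exposure controls used in the study are omitted.

import numpy as np

def gpcm_probs(theta, a, b):
    # Category probabilities under the generalized partial credit
    # model; b holds the step parameters (first step fixed at 0).
    steps = np.concatenate(([0.0], np.asarray(b)))
    cum = np.cumsum(a * (theta - steps))
    ez = np.exp(cum - cum.max())          # stabilized softmax
    return ez / ez.sum()

def gpcm_info(theta, a, b):
    # Fisher information for the GPCM: a^2 * Var(category score).
    p = gpcm_probs(theta, a, b)
    k = np.arange(len(p))
    return a**2 * ((k**2 * p).sum() - ((k * p).sum())**2)

def eap(responses, items, grid=np.linspace(-4, 4, 81)):
    # EAP estimate of theta and its posterior SD (used as the SE),
    # with a standard normal prior over a quadrature grid.
    logpost = -0.5 * grid**2
    for x, (a, b) in zip(responses, items):
        logpost += np.log([gpcm_probs(t, a, b)[x] for t in grid])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    mean = (grid * post).sum()
    return mean, np.sqrt(((grid - mean)**2 * post).sum())

def simulate_cat(bank, true_theta, se_target=0.30,
                 min_items=4, max_items=30, seed=0):
    # Variable-length CAT with a standard-error (SE) stopping rule:
    # administer the most informative remaining item, rescore, and
    # stop once SE <= se_target (subject to test-length guards).
    rng = np.random.default_rng(seed)
    used, responses, items = set(), [], []
    theta, se = 0.0, float("inf")
    for n in range(max_items):
        i = max((j for j in range(len(bank)) if j not in used),
                key=lambda j: gpcm_info(theta, *bank[j]))
        used.add(i)
        a, b = bank[i]
        p = gpcm_probs(true_theta, a, b)
        x = rng.choice(len(p), p=p)       # simulate a response
        responses.append(x)
        items.append((a, b))
        theta, se = eap(responses, items)
        if n + 1 >= min_items and se <= se_target:
            break
    return theta, se, len(responses)

# Hypothetical bank of 60 three-category items.
rng = np.random.default_rng(1)
bank = [(rng.uniform(0.8, 2.0), rng.normal(0.0, 1.0, size=2))
        for _ in range(60)]
print(simulate_cat(bank, true_theta=0.5))

With the stringent criterion (SE = 0.30), the loop simply runs until the posterior SD of theta drops below the target or the maximum test length is reached, which is exactly the precision-versus-test-length trade-off the study quantifies.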
Archive | 2013
Jodi M. Casabianca; Brian W. Junker
This research provides a framework for specifying the latent trait distribution using the family of loglinear smoothing models. The framework begins by connecting loglinear smoothing models to the standard approaches: the standard normal distribution, the normal distribution, and direct estimation of the distribution. This connection is important because it allows parsimonious (smoothing) models that adequately represent the distribution to be identified. Future extensions will include more complex models for estimating the latent trait distribution.
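The core of the framework can be stated compactly. With the latent trait discretized onto quadrature points \(\theta_1, \dots, \theta_Q\), a loglinear smoothing model of order M specifies (notation assumed for illustration)

\[
\log f(\theta_q) \;=\; \beta_0 + \sum_{m=1}^{M} \beta_m\, \theta_q^{m}, \qquad q = 1, \dots, Q.
\]

Setting M = 2 matches the first two moments and reproduces a discretized normal (a standard normal if the moments are fixed at 0 and 1), while letting M grow toward Q - 1 approaches direct, unconstrained estimation of the Q probabilities; intermediate values of M give the parsimonious smoothing models the framework is designed to identify.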
Annals of Internal Medicine | 2008
Paul L. Hebert; Jane E. Sisk; Jason J. Wang; Leah Tuzzio; Jodi M. Casabianca; Mark R. Chassin; Carol R. Horowitz; Mary Ann McLaughlin
ETS Research Report Series | 2006
Alina A. von Davier; Paul W. Holland; Samuel A. Livingston; Jodi M. Casabianca; Mary C. Grant; Kathleen Martin
Foreign Language Annals | 2004
Deborah Lokai Bischof; David I. Baum; Jodi M. Casabianca; Rick Morgan; Kathleen A. Rabitea; Krishna Tateneni
ETS Research Report Series | 2004
Tim Moses; Alina A. von Davier; Jodi M. Casabianca
Archive | 2016
Jodi M. Casabianca; Brian W. Junker
Archive | 2016
Jodi M. Casabianca; Brian W. Junker; Richard J. Patz
Society for Research on Educational Effectiveness | 2013
Daniel F. McCaffrey; Jodi M. Casabianca