Martina Frebort
University of Vienna
Publications
Featured research published by Martina Frebort.
International Journal of Selection and Assessment | 2010
Klaus D. Kubinger; Stefana Holocher-Ertl; Manuel Reif; Christine Hohensinn; Martina Frebort
Multiple-choice response formats are troublesome, as an item is often scored as solved simply because the examinee may be lucky at guessing the correct option. Instead of pertinent Item Response Theory models, which take guessing effects into account, this paper considers a psycho-technological approach to re-conceptualizing multiple-choice response formats. The free-response format is compared with two different multiple-choice formats: a traditional format with a single correct response option and five distractors (‘1 of 6’), and another with five response options, three of them being distractors and two of them being correct (‘2 of 5’). For the latter format, an item is scored as mastered only if both correct response options and none of the distractors are marked. After the exclusion of a few items, the Rasch model analyses revealed appropriate fit for 188 items altogether. The resulting item-difficulty parameters were used for comparison. The multiple-choice format ‘1 of 6’ differs significantly from the multiple-choice format ‘2 of 5’, while the latter does not differ significantly from the free-response format. The lower difficulty of items ‘1 of 6’ suggests guessing effects.
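A back-of-envelope illustration (not taken from the abstract) of why the '2 of 5' scoring rule curbs lucky guessing, under the simplifying assumption that a guessing examinee marks the required number of options completely at random:

```python
from math import comb

# Probability that blind guessing scores an item as solved, assuming the
# examinee marks the required number of options purely at random
# (an illustrative assumption, not data from the study).

p_1_of_6 = 1 / 6           # one correct option out of six: 1/6 ~= 0.167
p_2_of_5 = 1 / comb(5, 2)  # exactly the two correct options out of five: 1/10 = 0.100

print(f"'1 of 6': {p_1_of_6:.3f}   '2 of 5': {p_2_of_5:.3f}")
```

Under this assumption, blind guessing succeeds on roughly 1 in 6 items in the '1 of 6' format but only 1 in 10 in the '2 of 5' format, which is consistent with the lower item difficulty reported for '1 of 6'.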
Educational Research and Evaluation | 2011
Klaus D. Kubinger; Christine Hohensinn; Sandra Hofer; Lale Khorramdel; Martina Frebort; Stefana Holocher-Ertl; Manuel Reif; Philipp Sonnleitner
In large-scale assessments, it usually does not occur that every item of the applicable item pool is administered to every examinee. Within item response theory (IRT), in particular the Rasch model (1960), this is not really a problem because item calibration works nevertheless. The different test booklets only need to be conceptualized according to a connected incomplete block design. Yet, connectedness of such a design is best fulfilled severalfold, since deletion of some items during the item pool's IRT calibration may become necessary. The real challenge, however, is to meet constraints determined by numerous moderator variables such as different response formats and several content topics – all the more so if several ability dimensions are under consideration, the testing duration is strongly limited, or individual scoring and feedback is an issue. In this article, we offer a report of how to deal with the resulting problems. Experience is based on the governmental project of the Austrian Educational Standards (Kubinger et al., 2007).
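A minimal sketch (not from the paper) of what connectedness of such a booklet design means operationally, using hypothetical block labels and a cyclic booklet layout chosen purely for illustration: two item blocks are linked when they share a booklet, and the design is connected when every block can be reached from every other through such links, ideally even after a block has to be dropped during calibration.

```python
from itertools import combinations

# Hypothetical item blocks (each would hold several items in practice).
blocks = ["A", "B", "C", "D", "E", "F", "G"]

# A simple cyclic design: booklet i contains blocks i, i+1, and i+3 (mod 7),
# so each block co-occurs with several others across booklets.
booklets = [
    {blocks[i % 7], blocks[(i + 1) % 7], blocks[(i + 3) % 7]}
    for i in range(7)
]

def is_connected(booklets):
    """Check whether all blocks are linked through shared booklets."""
    # Build an undirected adjacency list: blocks are linked if they
    # appear together in at least one booklet.
    adjacency = {}
    for booklet in booklets:
        for a, b in combinations(booklet, 2):
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)
    # Depth-first search from an arbitrary block.
    start = next(iter(adjacency))
    seen, stack = {start}, [start]
    while stack:
        current = stack.pop()
        for neighbour in adjacency[current]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    # Connected if the search reaches every block in the design.
    return seen == set().union(*booklets)

print(is_connected(booklets))                        # True
print(is_connected([b - {"A"} for b in booklets]))   # still True: the design
                                                     # survives dropping block A
```

The second check mirrors the "severalfold" point: the linking structure should remain connected even if one block's items must be discarded during the IRT calibration.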
Psychology Science | 2008
Christine Hohensinn; Klaus D. Kubinger; Manuel Reif; Stefana Holocher-Ertl; Lale Khorramdel; Martina Frebort
European Journal of Psychological Assessment | 2011
Lale Khorramdel; Martina Frebort
Archive | 2012
Klaus D. Kubinger; Lisbeth Weitensfelder; Martina Frebort; Philipp Sonnleitner
Archive | 2011
Martina Frebort; Michaela M. Wagner-Menghin
Archive | 2009
Philipp Sonnleitner; Klaus D. Kubinger; Martina Frebort
Testing International | 2008
Klaus D. Kubinger; Martina Frebort; Lale Khorramdel; Lisbeth Weitensfelder; Philipp Sonnleitner; Christine Hohensinn; Manuel Reif; Kathrin Gruber; Stefana Holocher-Ertl
Archive | 2008
Christine Hohensinn; Klaus D. Kubinger; Stefana Holocher-Ertl; Manuel Reif; Lale Khorramdel; Philipp Sonnleitner; Kathrin Gruber; Martina Frebort
Archive | 2007
Martina Frebort; Lale Khorramdel; Stefana Holocher-Ertl; Philipp Sonnleitner; Lisbeth Weitensfelder; Klaus D. Kubinger