Publication


Featured research published by Christine Hohensinn.


Educational and Psychological Measurement | 2011

Applying Item Response Theory Methods to Examine the Impact of Different Response Formats

Christine Hohensinn; Klaus D. Kubinger

In aptitude and achievement tests, different response formats are typically used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats by applying traditional statistical approaches, but these influences can also be studied with methods of item response theory that can deal with incomplete data. Response formats can influence item attributes in two ways: different response formats could cause items to measure different latent traits, or they could contribute differently to item difficulty. In contrast to previous research, the present study examines the impact of response formats on item attributes of a language awareness test by applying different item response theory models. Results indicate that although the language awareness test contains items with different response formats, only one latent trait is measured; no format-specific dimensions were found. Response formats do, however, have a distinct impact on the difficulty of the items. In addition to the effects of the three administered item types, a fourth component that makes items more difficult was identified.
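The abstract distinguishes two possibilities: response formats defining separate latent dimensions, or response formats merely shifting item difficulty. The second possibility can be formalized with a Rasch model whose item difficulty is decomposed into format components, in the style of the linear logistic test model. The notation below is illustrative and is not taken from the paper:

```latex
% Rasch model with an LLTM-style decomposition of item difficulty;
% q_{ij} = 1 if format component j applies to item i, 0 otherwise.
P(X_{vi} = 1 \mid \theta_v) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
\beta_i = \sum_{j=1}^{k} q_{ij}\, \eta_j
```

Here \theta_v is the ability of person v, \beta_i the difficulty of item i, and \eta_j the difficulty contribution of format component j. Unidimensionality corresponds to a single \theta_v sufficing for all items regardless of their format; format effects on difficulty correspond to nonzero \eta_j.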


Educational Research and Evaluation | 2011

Analysing item position effects due to test booklet design within large-scale assessment

Christine Hohensinn; Klaus D. Kubinger; Manuel Reif; Eva Schleicher; Lale Khorramdel

For large-scale assessments, usually booklet designs administering the same item at different positions within a booklet are used. Therefore, the occurrence of position effects influencing the difficulty of the item is a crucial issue. Not taking learning or fatigue effects into account would result in a bias of estimated item difficulty. The occurrence of position effects is examined for a 4th-grade mathematical competence test of the Austrian Educational Standards by means of the linear logistic test model (LLTM). A small simulation study assesses the test power for this model. Overall, the LLTM without a modelled position effect yielded a good model fit. Therefore, no relevant global item position effect could be found for the analysed mathematical competence test.
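The concern described above, namely that an item becomes effectively harder when it is administered later in a booklet, can be made concrete with a small simulation. The sketch below is purely illustrative: it generates Rasch-model responses with a hypothetical linear fatigue effect (fatigue_per_pos is an assumed value, not an estimate from the study) and shows how the proportion of correct answers to one item drifts with its booklet position.

```python
import numpy as np

rng = np.random.default_rng(1)

n_persons = 5000                          # simulated examinees per condition
theta = rng.normal(0.0, 1.0, n_persons)   # person abilities
beta_item = 0.0                           # base difficulty of the focal item
fatigue_per_pos = 0.03                    # assumed linear position effect on difficulty

def p_correct(theta, beta):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

for position in (1, 20, 40):
    beta_eff = beta_item + fatigue_per_pos * (position - 1)
    responses = rng.random(n_persons) < p_correct(theta, beta_eff)
    print(f"item at position {position:2d}: proportion correct = {responses.mean():.3f}")
```

In an LLTM analysis such a position effect would show up as an additional basic parameter; the study's finding of no relevant global position effect corresponds to that parameter being negligibly small.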


International Journal of Selection and Assessment | 2010

On Minimizing Guessing Effects on Multiple-Choice Items: Superiority of a Two Solutions and Three Distractors Item Format to a One Solution and Five Distractors Item Format

Klaus D. Kubinger; Stefana Holocher-Ertl; Manuel Reif; Christine Hohensinn; Martina Frebort

Multiple-choice response formats are troublesome, as an item is often scored as solved simply because the examinee may be lucky at guessing the correct option. Instead of pertinent Item Response Theory models, which take guessing effects into account, this paper considers a psycho-technological approach to re-conceptualizing multiple-choice response formats. The free-response format is compared with two different multiple-choice formats: a traditional format with a single correct response option and five distractors (‘1 of 6’), and another with five response options, three of them being distractors and two of them being correct (‘2 of 5’). For the latter format, an item is scored as mastered only if both correct response options and none of the distractors are marked. After the exclusion of a few items, the Rasch model analyses revealed appropriate fit for 188 items altogether. The resulting item-difficulty parameters were used for comparison. The multiple-choice format ‘1 of 6’ differs significantly from the multiple-choice format ‘2 of 5’, while the latter does not differ significantly from the free-response format. The lower difficulty of items ‘1 of 6’ suggests guessing effects.
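The advantage claimed for the '2 of 5' format follows directly from the probability of solving an item by blind guessing. The short sketch below assumes an examinee who marks options completely at random, which is of course a simplification of real test-taking behaviour:

```python
from math import comb

# Probability of solving an item by pure random guessing under each format.
p_1_of_6 = 1 / 6            # one correct option among six -> 1/6 ≈ 0.167
p_2_of_5 = 1 / comb(5, 2)   # exactly the two correct of five options -> 1/10
# ('2 of 5' is scored as solved only if both correct options and no
#  distractor are marked, hence a single favourable combination out of 10.)

print(f"'1 of 6' guessing probability: {p_1_of_6:.3f}")
print(f"'2 of 5' guessing probability: {p_2_of_5:.3f}")
```

The lower guessing probability of the '2 of 5' format is consistent with the reported result that its item difficulties lie closer to those of the free-response format.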


Kindheit Und Entwicklung | 2008

Hochbegabungsdiagnostik: HAWIK-IV oder AID 2 [Giftedness assessment: HAWIK-IV or AID 2]

Stefana Holocher-Ertl; Klaus D. Kubinger; Christine Hohensinn

The two intelligence test batteries HAWIK-IV and AID 2 are compared with respect to the assessment of intellectual giftedness. The comparison starts from two models of giftedness assessment: on the one hand the traditional approach, according to which (cognitive) giftedness is present at an IQ > 130, and on the other hand the "Viennese Diagnostic Model of High Achievement Potential" ("Wiener Diagnosemodell zum Hochleistungspotenzial"). Following the "Munich Model of Giftedness" ("Münchner Hochbegabungsmodell"), the latter postulates, in addition to ability factors such as intelligence in particular, certain personality and environmental characteristics as moderators of the manifestation of achievement. The discussion of the HAWIK-IV and the AID 2 shows that neither test battery does justice to both models equally; rather, the HAWIK-IV is more suitable for traditional giftedness assessment, whereas the AID 2 is particularly well suited to support-oriented assessment in the sense of the "Viennese Diagnostic Model of High Achievement Potential". In practice, one must therefore first decide which model one is committed to...


Psychological Reports | 2014

Persian Adaptation of Foreign Language Reading Anxiety Scale: A Psychometric Analysis

Purya Baghaei; Christine Hohensinn; Klaus D. Kubinger

The validity and psychometric properties of a new Persian adaptation of the Foreign Language Reading Anxiety Scale were investigated. The scale was translated into Persian and administered to 160 undergraduate students (131 women, 29 men; M age = 23.4 yr., SD = 4.3). Rasch model analysis of the scale's original 20 items revealed that the data do not fit the partial credit model. Principal components analysis identified three factors: one related to feelings of anxiety about reading, a second reflecting the reverse-worded items, and a third related to general ideas about reading in a foreign language. In a re-analysis, the 12 items that loaded on the first factor showed a good fit with the partial credit model.
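For reference, the partial credit model mentioned in the abstract has the following standard form (the notation follows common usage and is not quoted from the paper):

```latex
% Partial credit model: probability of scoring in category x on item i,
% with item thresholds \delta_{ik} and the convention that a sum over an
% empty index range equals zero.
P(X_{vi} = x \mid \theta_v) =
  \frac{\exp\left( \sum_{k=1}^{x} (\theta_v - \delta_{ik}) \right)}
       {\sum_{l=0}^{m_i} \exp\left( \sum_{k=1}^{l} (\theta_v - \delta_{ik}) \right)},
  \qquad x = 0, \dots, m_i
```

Here \theta_v is the person parameter, \delta_{ik} the k-th threshold of item i, and m_i the highest response category of item i.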


Frontiers in Psychology | 2017

A Method of Q-Matrix Validation for the Linear Logistic Test Model

Purya Baghaei; Christine Hohensinn

The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of the LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and the LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, the LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article, we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than the correlations derived from simulated weight matrices.
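A minimal sketch of the proposed benchmarking idea is given below, with all numbers invented. It uses a least-squares reconstruction of item parameters from a weight matrix as a simple stand-in for an LLTM fit, and compares the correlation obtained with a "theoretical" Q-matrix against the distribution of correlations obtained from randomly generated Q-matrices of the same size:

```python
import numpy as np

rng = np.random.default_rng(42)

def reconstruction_corr(beta, Q):
    """Correlation between item parameters and their reconstruction from a
    weight matrix Q via least squares (a stand-in for an LLTM fit)."""
    eta, *_ = np.linalg.lstsq(Q, beta, rcond=None)
    return np.corrcoef(beta, Q @ eta)[0, 1]

# Hypothetical setup: 20 items, 4 cognitive operations, binary weights.
n_items, n_ops = 20, 4
Q_theory = rng.integers(0, 2, size=(n_items, n_ops)).astype(float)
eta_true = rng.normal(0.0, 1.0, n_ops)
beta_rm = Q_theory @ eta_true + rng.normal(0.0, 0.3, n_items)  # "Rasch" difficulties with noise

r_theory = reconstruction_corr(beta_rm, Q_theory)

# Benchmark distribution from random Q-matrices of the same dimensions.
r_random = [
    reconstruction_corr(beta_rm, rng.integers(0, 2, size=(n_items, n_ops)).astype(float))
    for _ in range(500)
]
cutoff = np.quantile(r_random, 0.95)

print(f"correlation with theoretical Q-matrix: {r_theory:.3f}")
print(f"95th percentile of random Q-matrices:  {cutoff:.3f}")
print("Q-matrix supported" if r_theory > cutoff else "Q-matrix not supported")
```

In a real application the item parameters would be Rasch estimates obtained from data and the reconstruction would come from an actual LLTM estimation; the simulated benchmark replaces the missing fixed cut-off value for the correlation coefficient.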


Algorithms from and for Nature and Life | 2013

Detecting Person Heterogeneity in a Large-Scale Orthographic Test Using Item Response Models

Christine Hohensinn; Klaus D. Kubinger; Manuel Reif

Achievement tests for students are constructed with the aim of measuring a specific competency uniformly for all examinees. This requires students to work on the items in a homogeneous way. The dichotomous logistic Rasch model is the model of choice for assessing these assumptions during test construction. However, it is also possible that various subgroups of the population either apply different strategies for solving the items or make specific types of mistakes, or that different items measure different latent traits. These assumptions can be evaluated with extensions of the Rasch model or other item response models. In this paper, the test construction of a new large-scale German orthographic test for eighth-grade students is presented. In the process of test construction and calibration, a pilot version was administered to 3,227 students in Austria. In the first step of analysis, the items yielded poor model fit to the dichotomous logistic Rasch model. Further analyses found homogeneous subgroups in the sample which were characterized by different orthographic error patterns.
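One extension of the kind referred to above is a mixture (mixed) Rasch model, in which each latent class has its own item parameters. The abstract does not state which specific model was used, so the display below is only an illustration in common notation:

```latex
% Mixture Rasch model: G latent classes with class sizes \pi_g and
% class-specific person and item parameters.
P(X_{vi} = 1) = \sum_{g=1}^{G} \pi_g \,
  \frac{\exp(\theta_{vg} - \beta_{ig})}{1 + \exp(\theta_{vg} - \beta_{ig})},
  \qquad \sum_{g=1}^{G} \pi_g = 1
```

Subgroups with different orthographic error patterns would appear as latent classes g whose item difficulties \beta_{ig} are ordered differently across items.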


Educational Research and Evaluation | 2011

Designing the test booklets for Rasch model calibration in a large-scale assessment with reference to numerous moderator variables and several ability dimensions

Klaus D. Kubinger; Christine Hohensinn; Sandra Hofer; Lale Khorramdel; Martina Frebort; Stefana Holocher-Ertl; Manuel Reif; Philipp Sonnleitner

In large-scale assessments, it usually does not occur that every item of the applicable item pool is administered to every examinee. Within item response theory (IRT), in particular the Rasch model (Rasch, 1960), this is not really a problem because item calibration works nevertheless. The different test booklets only need to be conceptualized according to a connected incomplete block design. Yet, connectedness of such a design should ideally be fulfilled several times over, since deletion of some items in the course of the item pool's IRT calibration may become necessary. The real challenge, however, is to meet constraints determined by numerous moderator variables such as different response formats and several topics of content – all the more so if several ability dimensions are under consideration, the testing duration is strongly limited, or individual scoring and feedback are an issue. In this article, we offer a report of how to deal with the resulting problems. Experience is based on the governmental project of the Austrian Educational Standards (Kubinger et al., 2007).
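Connectedness of a booklet design can be checked mechanically: every item must be reachable from every other item through chains of booklets that share items. The sketch below is a minimal check on a hypothetical design; the "several times over" requirement mentioned above would additionally demand that the design stay connected after deleting items, which this simple check does not cover.

```python
from itertools import combinations

def is_connected(booklets):
    """Return True if the booklet design links all items into a single
    component, a prerequisite for joint Rasch calibration."""
    items = set().union(*booklets)
    adjacency = {item: set() for item in items}
    for booklet in booklets:
        for a, b in combinations(booklet, 2):
            adjacency[a].add(b)
            adjacency[b].add(a)
    # simple graph traversal starting from an arbitrary item
    start = next(iter(items))
    seen, stack = {start}, [start]
    while stack:
        current = stack.pop()
        for neighbour in adjacency[current] - seen:
            seen.add(neighbour)
            stack.append(neighbour)
    return seen == items

# Hypothetical designs over items 1..9
print(is_connected([{1, 2, 3, 4}, {4, 5, 6, 7}, {7, 8, 9, 1}]))  # True: booklets linked via shared items
print(is_connected([{1, 2, 3}, {4, 5, 6}]))                      # False: two disjoint blocks
```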


Psychology Science | 2008

Examining Item-Position Effects in Large-Scale Assessment Using the Linear Logistic Test Model

Christine Hohensinn; Klaus D. Kubinger; Manuel Reif; Stefana Holocher-Ertl; Lale Khorramdel; Martina Frebort


Psychology Science | 2008

Identifying Children Who May Be Cognitively Gifted: The Gap between Practical Demands and Scientific Supply

Stefana Holocher-Ertl; Klaus D. Kubinger; Christine Hohensinn
