
Publication


Featured research published by Iris J. L. Egberink.


Assessment | 2011

An item response theory analysis of Harter’s self-perception profile for children or why strong clinical scales should be distrusted

Iris J. L. Egberink; Rob R. Meijer

The authors investigated the psychometric properties of the subscales of the Self-Perception Profile for Children with item response theory (IRT) models using a sample of 611 children. Results from a nonparametric Mokken analysis and a parametric IRT approach for boys (n = 268) and girls (n = 343) were compared. The authors found that most subscales formed weak scales and that measurement precision was relatively low and only present for latent trait values indicating low self-perception. The subscales Physical Appearance and Global Self-Worth formed one strong scale. Children seem to interpret Global Self-Worth items as if they measure Physical Appearance. Furthermore, the authors found that strong Mokken scales (such as Global Self-Worth) consisted mostly of items that repeat the same item content. They conclude that researchers should be very careful in interpreting the total scores on the different Self-Perception Profile for Children scales. Finally, implications for further research are discussed.


Educational and Psychological Measurement | 2012

Investigating Invariant Item Ordering in Personality and Clinical Scales: Some Empirical Findings and a Discussion

Rob R. Meijer; Iris J. L. Egberink

In recent studies, different methods were proposed to investigate invariant item ordering (IIO), but practical IIO research is an unexploited field in questionnaire construction and evaluation. In the present study, the authors explored the usefulness of different IIO methods to analyze personality scales and clinical scales. From the authors’ analyses, it was clear that for clinical scales consisting of items that cover a limited range of “symptoms,” the IIO property is an unrealistic assumption. For scales that consist of items that cover a broader range of item severity, IIO research can provide useful information. However, removing an item because it violates the assumption of IIO may be problematic because it can affect the construct that is measured. Finally, the authors advise researchers to always use plots of item rest-score regressions to interpret IIO results.


International Journal for the Psychology of Religion | 2016

An Item Response Theory Analysis of The Questionnaire of God Representations

Hanneke Schaap-Jonker; Iris J. L. Egberink; Arjan W. Braam; Jozef Corveleyn

The Dutch Questionnaire of God Representations (QGR) was investigated by means of item response theory (IRT) modeling in a clinical (n = 329) and a nonclinical sample (n = 792). Through a graded response model and IRT-based differential functioning techniques, detailed item-level analyses and information about measurement invariance between the clinical and nonclinical sample were obtained. On the basis of the results of the IRT analyses, a shortened version of the QGR (S-QGR) was constructed, consisting of 22 items, which functions in the same way in both the clinical and the nonclinical sample. Results indicated that the QGR consists of strong and reliable scales which are able to differentiate among persons. Psychometric characteristics of the S-QGR were adequate.


Educational and Psychological Measurement | 2015

Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

Iris J. L. Egberink; Rob R. Meijer; Jorge N. Tendeiro

A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often reported only in terms of statistical significance, and researchers have proposed different methods to empirically select anchor items. It is unclear, however, how many anchor items should be selected and which method will provide the "best" results with empirical data. In the present study, we examined the impact of using different numbers of anchor items on effect size indices when investigating measurement invariance on a personality questionnaire in two different assessment situations. Results suggested that the effect size indices were not influenced by using different numbers of anchor items. The values were comparable across different numbers of anchor items and were small, which indicates that the effect of differential functioning at the item and test level is very small, if not negligible. Practical implications are discussed, including the use of anchor items and effect size indices in practice.


Health Psychology and Behavioral Medicine | 2018

What are the minimal sample size requirements for Mokken scaling? An empirical example with the Warwick-Edinburgh Mental Well-Being Scale

Roger Watson; Iris J. L. Egberink; Lisa Kirke; Jorge N. Tendeiro; Frank Doyle

Purpose: Sample size in Mokken scales is mostly studied on simulated data, reflected in the lack of consideration of sample size in most Mokken scaling studies. Recently, [Straat, J. H., van der Ark, L. A., & Sijtsma, K. (2014). Minimum sample size requirements for Mokken scale analysis. Educational and Psychological Measurement, 74, 809–822] provided minimum sample size requirements for Mokken scale analysis based on simulation. Our study uses real data from the Warwick-Edinburgh Mental Well-Being Scale (N = 8463) to assess whether these hold. Methods: We use per-element accuracy to evaluate the impact of sample size, with scaling coefficients and confidence intervals around scale, item, and item-pair scalability coefficients. Results: Per-element accuracy, scalability coefficients, and confidence intervals around scalability coefficients are sensitive to sample size. The results from Straat et al. were not replicated; depending on the main goal of the research, sample sizes ranging from > 250 to > 1000 are needed. Conclusions: Using our pragmatic approach, some practical recommendations are made regarding sample sizes for studies of Mokken scaling.
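The scalability coefficients discussed in this abstract are Loevinger's H, the central quantity in Mokken scale analysis. As a minimal sketch (not the authors' code; real analyses would use the R `mokken` package, which also supplies the confidence intervals mentioned above), the scale-level H for dichotomous items can be computed as the ratio of summed inter-item covariances to their maxima given the item marginals. The function name `mokken_h` is my own:

```python
import numpy as np

def mokken_h(X):
    """Loevinger's scale-level H for dichotomous (0/1) items.

    X: (n_persons, n_items) array of 0/1 scores.
    H = sum of inter-item covariances / sum of maximum covariances
    attainable given the item popularities (proportions of 1s).
    H = 1 for a perfect Guttman pattern; values near 0 indicate
    no scalability.
    """
    X = np.asarray(X, dtype=float)
    n_items = X.shape[1]
    p = X.mean(axis=0)  # item popularities
    num = den = 0.0
    for j in range(n_items):
        for k in range(j + 1, n_items):
            cov = np.mean(X[:, j] * X[:, k]) - p[j] * p[k]
            lo, hi = sorted((p[j], p[k]))
            # Maximum covariance given the marginals: joint proportion
            # capped at the smaller popularity.
            cov_max = lo * (1.0 - hi)
            num += cov
            den += cov_max
    return num / den

# A perfect Guttman pattern: every person who endorses a harder
# item also endorses all easier ones.
X = [[0, 0, 0],
     [1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
print(mokken_h(X))  # → 1.0
```

A commonly cited rule of thumb treats H >= 0.3 as a minimally useful scale and H >= 0.5 as a strong one; the sensitivity of such coefficients (and their confidence intervals) to sample size is exactly what this paper examines.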


Journal of Personality Assessment | 2008

Detection and validation of unscalable item score patterns using item response theory: An illustration with Harter's Self-Perception Profile for Children

Rob R. Meijer; Iris J. L. Egberink; Wilco H. M. Emons; Klaas Sijtsma


Journal of Research in Personality | 2010

Conscientiousness in the workplace: Applying mixture IRT to investigate scalability and predictive validity

Iris J. L. Egberink; Rob R. Meijer; Bernard P. Veldkamp


Archives of Physical Medicine and Rehabilitation | 2015

Dutch Multifactor Fatigue Scale: A New Scale to Measure the Different Aspects of Fatigue After Acquired Brain Injury

Annemarie C. Visser-Keizer; Antoinette Hogenkamp; Herma J. Westerhof-Evers; Iris J. L. Egberink; Jacoba M. Spikman


Personality and Individual Differences | 2010

Detection of aberrant item score patterns in computerized adaptive testing: An empirical example using the CUSUM

Iris J. L. Egberink; Rob R. Meijer; Bernard P. Veldkamp; Lolle Schakel; Nico G. Smid


Gedrag & Organisatie | 2012

Het nut van item respons theorie bij de constructie en evaluatie van niet-cognitieve instrumenten voor selectie en assessment binnen organisaties (The usefulness of item response theory for the construction and evaluation of noncognitive tests in personnel selection and assessment)

Iris J. L. Egberink; Rob R. Meijer

Collaboration


Top co-authors of Iris J. L. Egberink.


Annemarie C. Visser-Keizer, University Medical Center Groningen

Antoinette Hogenkamp, University Medical Center Groningen

Arjan W. Braam, VU University Medical Center