

Publication


Featured research published by Judith M. Conijn.


Applied Psychological Measurement | 2014

Statistic lz-Based Person-Fit Methods for Noncognitive Multiscale Measures

Judith M. Conijn; Wilco H. M. Emons; Klaas Sijtsma

Most person-fit statistics require long tests to reliably detect aberrant item-score vectors and are not readily applicable to noncognitive measures that consist of multiple short subscales. The authors propose combining subscale person-fit information to detect aberrant item-score vectors on noncognitive multiscale measures. They used a simulation study and three empirical personality and psychopathology test datasets to assess five multiscale person-fit methods based on the lz person-fit statistic with respect to (a) identifying aberrant item-score vectors, (b) improving accuracy of research results, and (c) understanding causes of aberrant responding. Simulated data analysis showed that the person-fit methods had good detection rates for substantially misfitting item-score vectors. Real-data person-fit analyses identified 4% to 17% misfitting item-score vectors. Removal of these vectors did little to improve model fit and test-score validity. The person-fit methods helped to understand causes of aberrant responding after controlling for response style on the explanatory variables. More real-data analyses are needed to demonstrate the usefulness of multiscale person-fit methods for noncognitive multiscale measures.
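The lz statistic at the core of this work is the standardized log-likelihood person-fit statistic of Drasgow, Levine, and Williams (1985). The sketch below, a minimal illustration rather than the authors' implementation (which combines lz information across subscales), shows how lz is computed for one respondent on dichotomous items, given model-implied response probabilities from a fitted IRT model:

```python
import math

def lz_statistic(responses, probs):
    """Standardized log-likelihood person-fit statistic lz for
    dichotomous items (Drasgow, Levine, & Williams, 1985).

    responses -- list of 0/1 item scores for one respondent
    probs     -- model-implied success probabilities P_i(theta) for that
                 respondent, e.g., from a fitted 2PL model
    """
    # Observed log-likelihood of the response pattern
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    # Expectation and variance of the log-likelihood under the model
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p)
                   for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2
                   for p in probs)
    return (l0 - expected) / math.sqrt(variance)
```

Large negative lz values flag item-score vectors that are unlikely under the measurement model; a vector that matches the model-implied probabilities yields a value near or above zero.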


Assessment | 2015

Detecting and Explaining Aberrant Responding to the Outcome Questionnaire–45

Judith M. Conijn; Wilco H. M. Emons; Kim de Jong; Klaas Sijtsma

We applied item response theory based person-fit analysis (PFA) to data of the Outcome Questionnaire-45 (OQ-45) to investigate the prevalence and causes of aberrant responding in a sample of Dutch clinical outpatients. The lz person-fit statistic was used to detect misfitting item-score patterns, and the standardized residual statistic to identify the source of the misfit in the item-score patterns identified as misfitting. Logistic regression analysis was used to predict person misfit from clinical diagnosis, OQ-45 total score, and Global Assessment of Functioning code. The lz statistic classified 12.6% of the item-score patterns as misfitting. Person misfit was positively related to the severity of psychological distress. Furthermore, patients with psychotic disorders, somatoform disorders, or substance-related disorders were more likely to show misfit than the baseline group of patients with mood and anxiety disorders. The results suggest that general outcome measures such as the OQ-45 are not equally appropriate for patients with different disorders. Our study emphasizes the importance of person-misfit detection in clinical practice.


Multivariate Behavioral Research | 2013

Explanatory, Multilevel Person-Fit Analysis of Response Consistency on the Spielberger State-Trait Anxiety Inventory

Judith M. Conijn; Wilco H. M. Emons; Marcel A.L.M. van Assen; Susanne S. Pedersen; Klaas Sijtsma

Self-report measures are vulnerable to concentration and motivation problems, leading to responses that may be inconsistent with the respondent's latent trait value. We investigated response consistency in a sample (N = 860) of cardiac patients with an implantable cardioverter defibrillator and their partners who completed the Spielberger State-Trait Anxiety Inventory on five measurement occasions. For each occasion and for both the state and trait subscales, we used the lz person-fit statistic to assess response consistency. We used multilevel analysis to model the between-person and within-person differences in the repeated observations of response consistency using time-dependent (e.g., mood states) and time-invariant explanatory variables (e.g., demographic characteristics). Respondents with lower education, undergoing psychological treatment, and with more post-traumatic stress disorder symptoms tended to respond less consistently. The percentages of explained variance in response consistency were small. Hence, we conclude that the results give insight into the causes of response inconsistency but that the identified explanatory variables are of limited practical value for identifying respondents at risk of producing invalid test results. We discuss explanations for the small percentage of explained variance and suggest alternative methods for studying causes of response inconsistency.


Psychological Assessment | 2017

Psychometric Properties of the Leiden Index of Depression Sensitivity (LEIDS).

Ericka Solis; Niki Antypa; Judith M. Conijn; Henk Kelderman; Willem Van der Does

The Leiden Index of Depression Sensitivity (LEIDS; Van der Does, 2002a) is a self-report measure of cognitive reactivity (CR) to sad mood. The LEIDS and its revised version, LEIDS-R (Van der Does & Williams, 2003), reliably distinguish between depression-vulnerable and healthy populations. They also correlate with other markers of depression vulnerability, but little is known about the other psychometric properties. Our aim was to examine the factor structure and validity of the LEIDS-R. We used data from the Netherlands Study of Depression and Anxiety (NESDA; N = 1,696) and a student sample (N = 811) for exploratory and confirmatory factor analysis (EFA and CFA, respectively). CFA showed that model fit of the 6-factor structure was satisfactory in the NESDA sample, but some factors were highly correlated. After removing 4 poor items, EFA yielded an alternative 5-factor structure and could not replicate the original 6-factor model. Testing for measurement invariance across recruitment groups of NESDA showed support for strong invariance. Due to high interfactor correlations, a bifactor model with 1 general factor and 5 specific factors was fitted in 2 samples. This model supported use of a general factor, but high factor loadings in specific factors supported retaining a 5-subscale structure. Higher scores on the general factor were associated with a history of depression, especially in participants with a history of comorbid anxiety. We concluded that the LEIDS-R has good psychometric properties. A modified version, LEIDS-RR, comprised of 5 subscales and a total CR score, is recommended for future research. One of the subscales is suitable as a short form.


Multivariate Behavioral Research | 2011

On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis

Judith M. Conijn; Wilco H. M. Emons; Marcel A.L.M. van Assen; Klaas Sijtsma

The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed using the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this multilevel framework. An advantage of the multilevel framework is that it allows relating person fit to explanatory variables for person misfit/fit. We critically discuss Reise's approach. First, we argue that often the interpretation of the PRF slope as an indicator of person misfit is incorrect. Second, we show that the multilevel logistic regression model and the logistic PRF model are incompatible, resulting in a multilevel person-fit framework that grossly violates the bivariate normality assumption for residuals in the multilevel model. Third, we use a Monte Carlo study to show that in the multilevel logistic regression framework estimates of distribution parameters of PRF intercepts and slopes are biased. Finally, we discuss the implications of these results and suggest an alternative multilevel regression approach to explanatory person-fit analysis. We illustrate the alternative approach using empirical data on repeated anxiety measurements of cardiac arrhythmia patients who had a cardioverter-defibrillator implanted.
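The PRF discussed here can be sketched as a logistic curve over item locations with a person-specific intercept and slope; this minimal illustration assumes that common parameterization and omits the multilevel estimation machinery in Reise (2000):

```python
import math

def person_response_function(item_location, intercept, slope):
    """Logistic person response function (PRF): the probability that a
    given person answers an item correctly, as a function of the item's
    location (difficulty). For a typical respondent the slope is
    negative (harder items are answered correctly less often); Reise
    (2000) proposed reading a flat or positive slope as person misfit.
    """
    return 1.0 / (1.0 + math.exp(-(intercept + slope * item_location)))
```

For example, a respondent with slope -1.5 has a markedly lower success probability on an item at location +2 than on one at -2, whereas a respondent with slope 0 responds at the same rate regardless of item difficulty.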


Applied Psychological Measurement | 2016

Identifying Person-Fit Latent Classes, and Explanation of Categorical and Continuous Person Misfit

Judith M. Conijn; Klaas Sijtsma; Wilco H. M. Emons

Latent class (LC) cluster analysis of a set of subscale lz person-fit statistics was proposed to explain person misfit on multiscale measures. The proposed explanatory LC person-fit analysis was used to analyze data of students (N = 91,648) on the nine-subscale School Attitude Questionnaire Internet (SAQI). Inspection of the class-specific lz mean and variance structure combined with explanatory analysis of class membership showed that the data included a poor-fit class, a class showing good fit combined with social desirability bias, a good-fit class, and two classes that were more difficult to interpret. A comparison of multinomial logistic regression predicting class membership and multiple regression predicting continuous person fit showed that LC cluster analysis provided information about aberrant responding unattainable by means of linear multiple regression. It was concluded that LC person-fit analysis has added value to common approaches to explaining aberrant responding to multiscale measures.


Quality of Life Research | 2018

Measurement versus prediction in the construction of patient-reported outcome questionnaires: can we have our cake and eat it?

Niels Smits; Judith M. Conijn

Background: Two important goals when using questionnaires are (a) measurement: the questionnaire is constructed to assign numerical values that accurately represent the test taker's attribute, and (b) prediction: the questionnaire is constructed to give an accurate forecast of an external criterion. Construction methods aimed at measurement prescribe that items should be reliable. In practice, this leads to questionnaires with high inter-item correlations. By contrast, construction methods aimed at prediction typically prescribe that items have a high correlation with the criterion and low inter-item correlations. The latter approach has often been said to produce a paradox concerning the relation between reliability and validity [1–3], because it is often assumed that good measurement is a prerequisite of good prediction.

Objective: To answer four questions: (1) Why are measurement-based methods suboptimal for questionnaires that are used for prediction? (2) How should one construct a questionnaire that is used for prediction? (3) Do questionnaire-construction methods that optimize measurement and prediction lead to the selection of different items in the questionnaire? (4) Is it possible to construct a questionnaire that can be used for both measurement and prediction?

Illustrative example: An empirical data set consisting of scores of 242 respondents on questionnaire items measuring mental health is used to select items by means of two methods: a method that optimizes the predictive value of the scale (i.e., forecasting a clinical diagnosis), and a method that optimizes the reliability of the scale. We show that the two scales select different sets of items and that a scale constructed to meet the one goal does not show optimal performance with respect to the other goal.

Discussion: The answers are as follows: (1) Because measurement-based methods tend to maximize inter-item correlations, which reduces predictive validity. (2) By selecting items that correlate highly with the criterion and weakly with the remaining items. (3) Yes, these methods may lead to different item selections. (4) For a single questionnaire: Yes, but it is problematic because reliability cannot be estimated accurately. For a test battery: Yes, but it is very costly. Implications for the construction of patient-reported outcome questionnaires are discussed.
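The prediction-oriented construction principle described in this abstract (pick items that correlate highly with the criterion and weakly with each other) can be sketched as a greedy forward selection. This is a hypothetical illustration of the idea, not the procedure used in the paper:

```python
import numpy as np

def select_predictive_items(item_scores, criterion, k):
    """Greedy forward item selection for prediction: at each step, add
    the item whose inclusion most increases the correlation between the
    sum score of the selected items and the external criterion.
    Redundant items add little to that correlation, so low inter-item
    correlation is rewarded implicitly.

    item_scores -- (n_respondents, n_items) array of item scores
    criterion   -- (n_respondents,) array with the external criterion
    k           -- number of items to select
    """
    n_items = item_scores.shape[1]
    selected, remaining = [], list(range(n_items))
    for _ in range(k):
        best_item, best_r = None, -np.inf
        for j in remaining:
            total = item_scores[:, selected + [j]].sum(axis=1)
            r = np.corrcoef(total, criterion)[0, 1]
            if r > best_r:
                best_item, best_r = j, r
        selected.append(best_item)
        remaining.remove(best_item)
    return selected
```

A reliability-oriented method would instead score candidate items by their contribution to internal consistency (e.g., item-rest correlations), which is why the two approaches can end up with different item sets.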


Assessment | 2018

Response Inconsistency of Patient-Reported Symptoms as a Predictor of Discrepancy Between Patient and Clinician-Reported Depression Severity

Judith M. Conijn; Wilco H. M. Emons; Bethan F. Page; Klaas Sijtsma; Willem Van der Does; Ingrid V. E. Carlier; Erik J. Giltay

The aim of this study was to assess the extent to which discrepancy between self-reported and clinician-rated severity of depression is due to inconsistent self-reports. Response inconsistency threatens the validity of the test score. We used data from a large sample of outpatients (N = 5,959) who completed the self-report Beck Depression Inventory–II (BDI-II) and the clinician-rated Montgomery–Åsberg Depression Rating Scale (MADRS). We used item response theory based person-fit analysis to quantify the inconsistency of the self-report item scores. Inconsistency was weakly positively related to patient–clinician discrepancy (i.e., higher BDI-II scores relative to MADRS scores). The mediating effect of response inconsistency in the relationship between discrepancy and demographic (e.g., ethnic origin) and clinical variables (e.g., cognitive problems) was negligible. The small direct and mediating effects of response inconsistency suggest that inaccurate patient self-reports are not a major cause of patient–clinician discrepancy in outpatient samples. Future research should investigate the role of clinician biases in explaining clinician–patient discrepancy.


International Journal of Methods in Psychiatric Research | 2017

Person misfit on the Inventory of Depressive Symptomatology: Low quality self-report or true atypical symptom profile?

Judith M. Conijn; Philip Spinhoven; Rob R. Meijer; Femke Lamers

Person misfit on a self-report measure refers to a response pattern that is unlikely given a theoretical measurement model. Person misfit may reflect low-quality self-report data, for example due to random responding or misunderstanding of items. However, recent research in the context of psychopathology suggests that person misfit may reflect atypical symptom profiles that have implications for diagnosis or treatment. We followed up on Wanders et al. (Journal of Affective Disorders, 180, 36–43, 2015), who investigated person misfit on the Inventory of Depressive Symptomatology (IDS) in the Netherlands Study of Depression and Anxiety (n = 2,981). Our goal was to investigate the extent to which misfit on the IDS reflects low-quality self-report patterns and the extent to which it reflects true atypical symptom profiles. Regression analysis showed that person misfit related more strongly to self-report quality indicators than to variables quantifying theoretically derived atypical symptom profiles. A data-driven atypical symptom profile explained most variance in person misfit, suggesting that person misfit on the IDS mainly reflects a sample- and questionnaire-specific atypical symptom profile. We concluded that person-fit statistics are useful for detecting IDS scores that may not be valid. Further research is necessary to support the interpretation of person misfit as reflecting a meaningful atypical symptom combination.


Assessment | 2017

Satisficing in Mental Health Care Patients: The Effect of Cognitive Symptoms on Self-Report Data Quality

Judith M. Conijn; Philip Spinhoven

Respondents may use satisficing (i.e., nonoptimal) strategies when responding to self-report questionnaires. These satisficing strategies become more likely with decreasing motivation and/or cognitive ability (Krosnick, 1991). Considering that cognitive deficits are characteristic of depressive and anxiety disorders, depressed and anxious patients may be prone to satisficing. Using data from the Netherlands Study of Depression and Anxiety (N = 2,945), we studied the relationship between depression and anxiety, cognitive symptoms, and satisficing strategies on the NEO Five-Factor Inventory. Results showed that respondents with either an anxiety disorder or a comorbid anxiety and depression disorder used satisficing strategies substantially more often than healthy respondents. Cognitive symptom severity partly mediated the effect of anxiety disorder and comorbid anxiety disorder on satisficing. The results suggest that depressed and anxious patients produce relatively low-quality self-report data, partly due to cognitive symptoms. Future research should investigate the degree of satisficing across different mental health care assessment contexts.

Collaboration


Dive into Judith M. Conijn's collaboration.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Erik J. Giltay

Leiden University Medical Center


Femke Lamers

VU University Medical Center
