Laszlo A. Erdodi
University of Windsor
Publications
Featured research published by Laszlo A. Erdodi.
Child Neuropsychology | 2017
Jonathan D. Lichtenstein; Laszlo A. Erdodi; Kate Linnea
ABSTRACT The importance of performance validity tests (PVTs) is increasingly recognized in pediatric neuropsychology. To date, research has focused on investigating whether PVTs designed for adults function similarly in children. The downward extension of adult cutoffs is counter-intuitive considering the robust effect of age-related changes in basic cognitive skills in children and adolescents. The purpose of this study was to examine the signal detection properties of a forced-choice recognition trial (FCR-C) for the California Verbal Learning Test – Children’s Version. A total of 72 children aged 6–15 years (M = 11.1, SD = 2.6) completed the FCR-C as part of a larger neuropsychological assessment battery. Cross-validation analyses revealed that the FCR-C had good signal detection performance against reference PVTs. The first level of failure (≤14/15) produced the best combination of overall sensitivity (.31) and specificity (.87). A more conservative FCR-C cutoff (≤13) resulted in a predictable trade-off between sensitivity (.15) and specificity (.94), but also a net loss in discriminant power. Lowering the cutoff to ≤12 resulted in a slight improvement in specificity (.97) but further deterioration in sensitivity (.14). These preliminary findings suggest that the FCR-C has the potential to become the newest addition to a growing arsenal of pediatric PVTs.
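The cutoff analysis summarized above follows standard signal detection arithmetic: each candidate FCR-C cutoff is scored against a reference PVT, and sensitivity and specificity are read off the resulting 2x2 table. The minimal sketch below illustrates that arithmetic; the scores, reference-PVT labels, and the evaluate_cutoff helper are invented for illustration and are not the study's data or procedure.

```python
# Minimal sketch of evaluating a candidate PVT cutoff against a reference PVT.
# All scores and labels below are invented for illustration only.

def evaluate_cutoff(scores, reference_credible, cutoff):
    """Treat scores <= cutoff as a failure and compute sensitivity and
    specificity against a binary reference PVT (True = credible)."""
    tp = sum(1 for s, ok in zip(scores, reference_credible) if s <= cutoff and not ok)
    fn = sum(1 for s, ok in zip(scores, reference_credible) if s > cutoff and not ok)
    tn = sum(1 for s, ok in zip(scores, reference_credible) if s > cutoff and ok)
    fp = sum(1 for s, ok in zip(scores, reference_credible) if s <= cutoff and ok)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical FCR-C scores (maximum 15) and reference-PVT outcomes.
fcr_c_scores = [15, 15, 14, 13, 15, 12, 14, 15, 11, 15]
reference_credible = [True, True, False, False, True, False, True, True, False, True]

for cutoff in (14, 13, 12):
    sens, spec = evaluate_cutoff(fcr_c_scores, reference_credible, cutoff)
    print(f"<= {cutoff}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

In this framing, lowering the cutoff trades sensitivity for specificity, which is the pattern reported above for the ≤14, ≤13, and ≤12 thresholds.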
Applied Neuropsychology | 2017
Laszlo A. Erdodi; Robert M. Roth
ABSTRACT Complex Ideational Material (CIM) is a sentence comprehension task designed to detect pathognomonic errors in receptive language. Nevertheless, patients with apparently intact language functioning occasionally score in the impaired range. If these instances reflect poor test-taking effort, CIM has potential as a performance validity test (PVT). Indeed, in 68 adults medically referred for neuropsychological assessment, CIM was a reliable marker of psychometrically defined invalid responding. A raw score ≤9 or T-score ≤29 achieved acceptable combinations of sensitivity (.34–.40) and specificity (.82–.90) against two reference PVTs, and produced a zero overall false positive rate when scores on all available PVTs were considered. More conservative cutoffs (≤8 / ≤23) with higher specificity (.95–1.00) but lower sensitivity (.14–.17) may be warranted in patients with longstanding, documented neurological deficits. Overall, results indicate that in the absence of overt aphasia, poor performance on CIM is more likely to reflect invalid responding than true language impairment. Implications for the clinical interpretation of CIM are discussed.
Clinical Neuropsychologist | 2017
Kelly Y. An; Kristen Kaploun; Laszlo A. Erdodi; Christopher A. Abeare
Abstract Objective: This study compared failure rates on performance validity tests (PVTs) across liberal and conservative cutoffs in a sample of undergraduate students participating in academic research. Method: Participants (n = 120) were administered four free-standing PVTs (Test of Memory Malingering, Word Memory Test, Rey 15-Item Test, Hiscock Forced-Choice Procedure) and three embedded PVTs (Digit Span, letter and category fluency). Participants also reported their perceived level of effort during testing. Results: At liberal cutoffs, 36.7% of the sample failed ≥1 PVTs, 6.7% failed ≥2, and .8% failed 3. At conservative cutoffs, 18.3% of the sample failed ≥1 PVTs, 2.5% failed ≥2, and .8% failed 3. Participants were 3 to 5 times more likely to fail embedded (15.8–30.8%) compared to free-standing PVTs (3.3–10.0%). There was no significant difference in failure rates between native and non-native English-speaking participants at either liberal or conservative cutoffs. Additionally, there was no relation between self-reported effort and PVT failure rates. Conclusions: Although PVT failure rates varied as a function of PVTs and cutoffs, between a third and a fifth of the sample failed ≥1 PVTs, consistent with high initial estimates of invalid performance in this population. Embedded PVTs had notably higher failure rates than free-standing PVTs. Assuming optimal effort in research using students as participants without a formal assessment of performance validity introduces a potentially significant confound in the study design.
Child Neuropsychology | 2018
Jonathan D. Lichtenstein; Laszlo A. Erdodi; Jaspreet K. Rai; Anya Mazur-Mosiewicz; Lloyd Flaro
ABSTRACT Past studies have examined the ability of the Wisconsin Card Sorting Test (WCST) to discriminate valid from invalid performance in adults using both individual embedded validity indicators (EVIs) and multivariate approaches. This study is designed to investigate whether the two most stable of these indicators—failures to maintain set (FMS) and the logistic regression equation S-BLRE—can be extended to pediatric populations. The classification accuracy for FMS and S-BLRE was examined in a mixed clinical sample of 226 children aged 7 to 17 years (64.6% male, MAge = 13.6 years) against a combination of established performance validity tests (PVTs). The results show that at adult cutoffs, FMS and S-BLRE produce an unacceptably high failure rate (33.2% and 45.6%) and low specificity (.55–.72), but an upward adjustment in cutoffs significantly improves classification accuracy. Defining Pass as <2 and Fail as ≥4 on FMS results in consistently good specificity (.89–.92) but low and variable sensitivity (.00–.33). Similarly, cutting the S-BLRE distribution at 3.68 produces good specificity (.90–.92) but variable sensitivity (.06–.38). Passing or failing FMS or S-BLRE is unrelated to age, gender, and IQ. The data from this study suggest that in a pediatric sample, adjusted cutoffs on the FMS and S-BLRE ensure good specificity, but with low or variable sensitivity. Thus, they should not be used in isolation to determine the credibility of a response set. At the same time, they can make valuable contributions to pediatric neuropsychology by providing empirically supported, expedient and cost-effective indicators to enhance performance validity assessment.
Clinical Neuropsychologist | 2017
Laszlo A. Erdodi; Jonathan D. Lichtenstein
Abstract Objective: Embedded validity indicators (EVIs) are cost-effective psychometric tools to identify non-credible response sets during neuropsychological testing. As research on EVIs expands, assessors are faced with an emerging contradiction: the range of credible impairment disappears between the ‘normal’ and ‘invalid’ range of performance. We labeled this phenomenon as the invalid-before-impaired paradox. This study was designed to explore the origin of this psychometric anomaly, subject it to empirical investigation, and generate potential solutions. Method: Archival data were analyzed from a mixed clinical sample of 312 (MAge = 45.2; MEducation = 13.6) patients medically referred for neuropsychological assessment. The distribution of scores on eight subtests of the third and fourth editions of the Wechsler Adult Intelligence Scale (WAIS) was examined in relation to the standard normal curve and two performance validity tests (PVTs). Results: Although WAIS subtests varied in their sensitivity to non-credible responding, they were all significant predictors of performance validity. While subtests previously identified as EVIs (Digit Span, Coding, and Symbol Search) were comparably effective at differentiating credible and non-credible response sets, their classification accuracy was driven by their base rate of low scores, requiring different cutoffs to achieve comparable specificity. Conclusions: Invalid performance had a global effect on WAIS scores. Genuine impairment and non-credible performance can co-exist, are often intertwined, and may be psychometrically indistinguishable. A compromise between the alpha and beta bias on PVTs based on a balanced, objective evaluation of the evidence that requires concessions from both sides is needed to maintain/restore the credibility of performance validity assessment.
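The finding that classification accuracy was driven by each subtest's base rate of low scores implies that cutoffs must be calibrated subtest by subtest to reach a common specificity target. The sketch below shows one way such a calibration could be carried out; the score distributions, the .90 target, and the cutoff_for_target_specificity helper are assumptions for illustration, not the study's method.

```python
# Sketch: calibrate a per-subtest cutoff to a common specificity target (.90 here),
# using the score distribution of a credible (Pass) group. Scores are invented.

def cutoff_for_target_specificity(credible_scores, target=0.90):
    """Return the highest cutoff (score <= cutoff counts as a failure) at which
    at least `target` proportion of credible examinees still pass."""
    best = None
    for cutoff in sorted(set(credible_scores)):
        specificity = sum(s > cutoff for s in credible_scores) / len(credible_scores)
        if specificity >= target:
            best = cutoff
    return best

# Hypothetical age-corrected scaled scores for two subtests in a credible group.
digit_span_credible = [5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11]
coding_credible = [4, 5, 6, 7, 7, 8, 8, 8, 9, 9, 10, 10]

print(cutoff_for_target_specificity(digit_span_credible))  # one cutoff here ...
print(cutoff_for_target_specificity(coding_credible))      # ... a different one here
```

Because the two invented distributions differ, the same specificity target yields different cutoffs, which mirrors the point that comparable specificity requires subtest-specific cutoffs.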
Applied Neuropsychology: Child | 2017
Laszlo A. Erdodi; Jonathan D. Lichtenstein; Jaspreet K. Rai; Lloyd Flaro
ABSTRACT In previous research, several subscales of Conners’ CPT-II were found to be useful as performance validity tests (PVTs) when administered to adults with traumatic brain injury (TBI). Furthermore, invalid response sets were associated with inflated scores on several CPT-II scales. The present study proposed to investigate whether these findings would replicate in a pediatric sample. The analyses were based on archival data from 15 children with TBI. The Omissions, Hit RT, Perseverations, and Hit RT BC scales proved effective at differentiating valid and invalid response sets. However, Commission errors were unrelated to scores on PVTs. A composite measure based on these four scores was a superior and more stable validity indicator than individual scales. Two or more T-scores >65 on any of these scales resulted in acceptable overall specificity (.86–1.00) and variable sensitivity (.00–1.00). Scores on CPT-II scales were generally higher among those who failed the reference PVTs. Results suggest that embedded CPT-II validity indices developed in adult TBI samples function similarly in children with TBI, with some notable exceptions. Although the use of adult PVT cutoffs in pediatric assessment is a common practice, and broadly supported by the present findings, there remains a clear need for the independent empirical validation of adult PVTs in children.
Brain Injury | 2017
Laszlo A. Erdodi; Jaspreet K. Rai
ABSTRACT Objective: This study investigated the potential of alternative, more liberal cutoffs on Trial 2 of the Test of Memory Malingering (TOMM) to improve classification accuracy relative to the standard cutoff (≤44). Method: The sample consisted of 152 patients (49.3% male) with psychiatric conditions (PSY) and traumatic brain injury (TBI) referred for neuropsychological assessment in a medico-legal setting (MAge = 44.4, MEducation = 11.9 years). Classification accuracy for various TOMM Trial 2 cutoffs was computed against three criterion measures. Results: Patients with TBI failed TOMM Trial 2 cutoffs at higher rates than patients with PSY. Trial 2 ≤49 achieved acceptable combinations of sensitivity (0.38–0.67) and specificity (0.89–0.96) in all but one comparison group. Trial 2 ≤48 improved specificity (0.94–0.98) with minimal loss in sensitivity. The standard cutoff (≤44) disproportionately traded sensitivity (0.15–0.50) for specificity (0.96–1.00). Conclusions: One error on TOMM Trial 2 constitutes sufficient evidence to question the credibility of a response set. However, the confidence in classifying a score as invalid continues to increase with each additional error. Even at the most liberal conceivable cutoff (≤49), the TOMM detected only about half of the patients who failed other criterion measures. Therefore, it should never be used in isolation to determine performance validity.
Clinical Neuropsychologist | 2017
Laszlo A. Erdodi; Katherine Jongsma; Meriam Issa
Abstract Objective: The present study was designed to examine the potential of the Boston Naming Test – Short Form (BNT-15) to provide an objective estimate of English proficiency. A secondary goal was to examine the effect of limited English proficiency (LEP) on neuropsychological test performance. Method: A brief battery of neuropsychological tests was administered to 79 bilingual participants (40.5% male, MAge = 26.9, MEducation = 14.2). The majority (n = 56) were English dominant (EN), and the rest were Arabic dominant (AR). The BNT-15 was further reduced to 10 items that best discriminated between EN and AR (BNT-10). Participants were divided into low, intermediate, and high English proficiency subsamples based on BNT-10 scores (≤6, 7–8, and ≥9). Performance across groups was compared on neuropsychological tests with high and low verbal mediation. Results: The BNT-15 and BNT-10 correctly identified 89% and 90% of EN and AR participants, respectively. Level of English proficiency had a large effect (partial η2 = .12–.34; Cohen’s d = .67–1.59) on tests with high verbal mediation (animal fluency, sentence comprehension, word reading), but no effect on tests with low verbal mediation (auditory consonant trigrams, clock drawing, digit-symbol substitution). Conclusions: The BNT-15 and BNT-10 can function as indices of English proficiency and predict the deleterious effect of LEP on neuropsychological tests with high verbal mediation. Interpreting low scores on such measures as evidence of impairment in examinees with LEP would likely overestimate deficits.
Child Neuropsychology | 2017
Jaspreet K. Rai; Maurissa Abecassis; Joseph E. Casey; Lloyd Flaro; Laszlo A. Erdodi; Robert M. Roth
ABSTRACT Aboriginal children in Canada are at high risk of fetal alcohol spectrum disorder (FASD) but there is little research on the cognitive impact of prenatal alcohol exposure (PAE) in this population. This paper reviews the literature on parent report of executive functioning in children with FASD that used the Behavior Rating Inventory of Executive Function (BRIEF). New data on the BRIEF is then reported in a sample of 52 Aboriginal Canadian children with FASD for whom a primary caregiver completed the BRIEF. The children also completed a battery of neuropsychological tests. The results reveal mean scores in the impaired range for all three BRIEF index scores and seven of the eight scales, with the greatest difficulties found on the Working Memory, Inhibit and Shift scales. The majority of the children were reported as impaired on the index scores and scales, with Working Memory being the most commonly impaired scale. On the performance-based tests, Trails B and Letter Fluency are most often reported as impaired, though the prevalence of impairment is greater for parent ratings than test performance. No gender difference is noted for the parent report, but the boys had slightly lower intellectual functioning and were more perseverative than the girls on testing. The presence of psychiatric comorbidity is unrelated to either BRIEF or test scores. These findings are generally consistent with prior studies indicating that parents observe considerable executive dysfunction in children with FASD, and that children with FASD may have more difficulty with executive functions in everyday life than is detected by laboratory-based tests alone.
Applied Neuropsychology | 2017
Laszlo A. Erdodi
ABSTRACT This study was designed to examine the “domain specificity” hypothesis in performance validity tests (PVTs) and the epistemological status of an “indeterminate range” when evaluating the credibility of a neuropsychological profile using a multivariate model of performance validity assessment. While previous research suggests that aggregating PVTs produces superior classification accuracy compared to individual instruments, the effect of the congruence between the criterion and predictor variable on signal detection and the issue of classifying borderline cases remain understudied. Data from a mixed clinical sample of 234 adults referred for cognitive evaluation (MAge = 46.6; MEducation = 13.5) were collected. Two validity composites were created: one based on five verbal PVTs (EI-5VER) and one based on five nonverbal PVTs (EI-5NV), both of which were compared against several other PVTs. Overall, language-based tests of cognitive ability were more sensitive to elevations on the EI-5VER compared to visual-perceptual tests, whereas the opposite was observed with the EI-5NV. However, the match between predictor and criterion variable had a more complex relationship with classification accuracy, suggesting the confluence of multiple factors (sensory modality, cognitive domain, testing paradigm). An “indeterminate range” of performance validity emerged that was distinctly different from both the Pass and the Fail group. Trichotomized criterion PVTs (Pass-Borderline-Fail) had a negative linear relationship with performance on tests of cognitive ability, providing further support for an “in-between” category separating the unequivocal Pass and unequivocal Fail classification range. The choice of criterion variable can influence classification accuracy in PVT research. Establishing a Borderline range between Pass and Fail more accurately reflected the distribution of scores on multiple PVTs. The traditional binary classification system imposes an artificial dichotomy on PVTs that was not fully supported by the data. Accepting “indeterminate” as a legitimate third outcome of performance validity assessment has the potential to improve the clinical utility of PVTs and defuse debates regarding “near-Passes” and “soft Fails.”
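The multivariate logic described here, aggregating several PVTs into a composite and allowing a Borderline band between Pass and Fail, can be illustrated with a short sketch. The component counts, band boundaries, and the composite_classification helper below are hypothetical and do not reproduce the EI-5 scoring rules.

```python
# Illustrative sketch of a trichotomized validity composite (Pass / Borderline / Fail).
# Component indicators and band boundaries are hypothetical, not the published EI-5 rules.

def composite_classification(component_failures, borderline_band=(2, 3)):
    """Sum failed component PVTs (0 = pass, 1 = fail) and map the total onto
    Pass / Borderline / Fail rather than a binary outcome."""
    total = sum(component_failures)
    low, high = borderline_band
    if total < low:
        return "Pass"
    if total <= high:
        return "Borderline"
    return "Fail"

print(composite_classification([0, 0, 1, 0, 0]))  # Pass
print(composite_classification([1, 1, 0, 0, 0]))  # Borderline
print(composite_classification([1, 1, 1, 1, 0]))  # Fail
```

The trichotomy simply reserves a middle band of the composite for "indeterminate" outcomes instead of forcing every profile into Pass or Fail.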