Publication


Featured research published by John F. Linck.


Archives of Clinical Neuropsychology | 2014

The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms

Daniel Proto; Nicholas J. Pastorek; Brian I. Miller; Jennifer Romesser; Anita H. Sim; John F. Linck

Evaluating performance validity is important in any neuropsychological assessment, and prior research recommends a threshold for invalid performance of two or more performance validity test (PVT) failures. However, extant findings also indicate that failing a single PVT is associated with significant changes in neuropsychological performance. The current study sought to determine if there is an appreciable difference in neuropsychological testing results between individuals failing different numbers of PVTs. In a sample of veterans with reported histories of mild traumatic brain injury (mTBI; N = 178), analyses revealed that individuals failing only one PVT performed significantly worse than individuals failing no PVTs on measures of verbal learning and memory, processing speed, and cognitive flexibility. Additionally, individuals failing one versus two PVTs significantly differed only on delayed free recall scores. The current findings suggest that failure of even one PVT should elicit consideration of performance invalidity, particularly in individuals with histories of mTBI.


Clinical Neuropsychologist | 2014

PTSD and Cognitive Functioning: Importance of Including Performance Validity Testing

Nick M. Wisdom; Nicholas J. Pastorek; Brian I. Miller; Jane E. Booth; Jennifer Romesser; John F. Linck; Anita H. Sim

Many studies have observed an association between post-traumatic stress disorder (PTSD) and cognitive deficits across several domains including memory, attention, and executive functioning. The inclusion of response bias measures in these studies, however, remains largely unaddressed. The purpose of this study was to identify possible cognitive impairments correlated with PTSD in returning OEF/OIF/OND veterans after excluding individuals failing a well-validated performance validity test. Participants included 126 men and 8 women with a history of mild traumatic brain injury (TBI) referred for a comprehensive neuropsychological evaluation as part of a consortium of five Veterans Affairs hospitals. The PTSD Checklist (PCL) and Word Memory Test (WMT) were used to establish symptoms of PTSD and invalid performance, respectively. Groups were categorized as follows: Control (PCL < 50, pass WMT), PTSD-pass (PCL ≥ 50, pass WMT), and PTSD-fail (PCL ≥ 50, fail WMT). As hypothesized, failure on the WMT was associated with significantly poorer performance on almost all cognitive tests administered; however, no significant differences were detected between individuals with and without PTSD symptoms after separating out veterans failing the WMT. These findings highlight the importance of assessing respondent validity in future research examining cognitive functioning in psychiatric illness and warrant further consideration of prior studies reporting PTSD-associated cognitive deficits.


Applied Neuropsychology | 2014

Olfactory deficits in frontotemporal dementia as measured by the Alberta Smell Test.

Daniel J. Heyanka; Charles J. Golden; Robert McCue; David M. Scarisbrick; John F. Linck; Nancy I. Zlatkin

The study of olfaction in neurodegeneration has primarily focused on Alzheimer's disease. Olfaction in frontotemporal dementia (FTD), by contrast, has generally not been empirically studied. The current study compared olfaction in FTD to major depressive disorder (MDD) using the Alberta Smell Test (AST). Independent-samples t test results suggested olfaction in FTD was impaired when compared with participants diagnosed with MDD. The AST Total score (out of 20 trials) significantly predicted the diagnostic group and accounted for 40% of the variance in diagnostic group status with an odds ratio of 20.08. Results suggested that a cutoff of ≤2/20 differentiated FTD from MDD with 94% accuracy (91% sensitivity, 97% specificity) and a cutoff of ≤1/20 differentiated the groups with a 95.5% hit rate (91% sensitivity, 100% specificity). Results confirmed olfactory identification deficits in FTD and suggested that the AST is an effective tool for the demarcation of FTD from MDD. This is especially important due to the potential for significant overlap in the behavioral/emotional phenotype and cognitive deficits between the two disorders when presented with early stages of FTD.
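The sensitivity, specificity, and hit-rate figures above follow from standard confusion-matrix arithmetic applied to a score cutoff. A minimal sketch of that calculation is below; the `cutoff_metrics` helper and all scores and labels are invented for illustration and are not data from the study.

```python
# Hypothetical sketch of cutoff-based classification metrics
# (sensitivity, specificity, overall hit rate), as reported for the
# Alberta Smell Test. All data below are invented example values.

def cutoff_metrics(scores, labels, cutoff):
    """Classify a case as positive (e.g., FTD) when score <= cutoff.

    labels: 1 = target class (FTD), 0 = comparison class (MDD).
    Returns (sensitivity, specificity, hit_rate).
    """
    tp = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s > cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s <= cutoff and y == 0)
    sensitivity = tp / (tp + fn)        # proportion of true positives detected
    specificity = tn / (tn + fp)        # proportion of true negatives retained
    hit_rate = (tp + tn) / len(scores)  # overall classification accuracy
    return sensitivity, specificity, hit_rate

# Invented example: four low-scoring FTD-like cases, four higher-scoring
# MDD-like cases, evaluated at a cutoff of <=2.
scores = [0, 1, 2, 3, 5, 8, 12, 15]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(cutoff_metrics(scores, labels, cutoff=2))  # → (0.75, 1.0, 0.875)
```

Varying `cutoff` and recomputing these three quantities is how competing thresholds (such as ≤2/20 versus ≤1/20) are compared.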


Archives of Clinical Neuropsychology | 2015

A Factor Analytic Approach to the Validation of the Word Memory Test and Test of Memory Malingering as Measures of Effort and Not Memory

Daniel J. Heyanka; Nicholas S. Thaler; John F. Linck; Nicholas J. Pastorek; Brian I. Miller; Jennifer Romesser; Anita H. Sim

Research has demonstrated the utility of performance validity tests (PVTs) as a method of determining adequate effort during a neuropsychological evaluation. Although some studies affirm that forced-choice PVTs measure effort rather than memory, doubts remain in the literature. The purpose of the current study was to evaluate the relationship between effort and memory variables in a mild traumatic brain injury (TBI) sample (n = 160) by separating memory and effort as distinct factors while statistically controlling for the shared covariance between the variables. A two-factor solution was extracted such that the five PVT variables loaded on Factor 1 and the four memory variables loaded on Factor 2. The pattern matrix, which controls for the covariance between variables, provided clear support of two highly distinct factors with minimal cross-loadings. Our findings support assertions that PVTs measure effort independent of memory in veterans with mild TBI.


Archives of Clinical Neuropsychology | 2012

Brief Report: The Temporal Stability of the Repeatable Battery for the Assessment of Neuropsychological Status Effort Index in Geriatric Samples

Kerry M. O'Mahar; Kevin Duff; James G. Scott; John F. Linck; Russell L. Adams; James W. Mold

The Effort Index (EI) of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) was developed to identify inadequate effort. Although researchers have examined its validity, the reliability of the EI has not been evaluated. The current study examined the temporal stability of the EI across 1 year in two independent samples of older adults. One sample consisted of 445 cognitively intact older adults (mean age = 72.89; 59% having 12-15 years of education) and the second sample consisted of 51 individuals diagnosed with amnestic Mild Cognitive Impairment (mean age = 82.41; 41% having 12-15 years of education). For both samples, the EI was found to have low stability (Spearman's ρ = .32-.36). When participants were divided into those whose EI stayed stable or improved versus those whose EI worsened (i.e., declining effort) on retesting, it was observed that individuals with lower baseline RBANS Total scores tended to worsen on the EI across time. Overall, the findings suggest low temporal stability of the EI in two geriatric samples. In particular, individuals with poorer cognition at baseline could present with poorer effort across time. These findings also suggest the need to further examine the temporal stability of other effort measures.
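The temporal-stability statistic reported above, Spearman's ρ, is the Pearson correlation computed on the ranks of the baseline and retest scores. A minimal pure-Python sketch is below; it assumes no tied scores, and the `baseline`/`retest` values are invented illustrations, not data from the study.

```python
# Hypothetical sketch of the test-retest stability statistic used above:
# Spearman's rho between baseline and 1-year scores. No tie handling;
# all score values below are invented example data.

def ranks(values):
    """Rank values 1..n, smallest first (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Pearson correlation of the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

baseline = [8, 3, 5, 9, 1, 7]  # invented effort scores at time 1
retest = [6, 4, 2, 9, 3, 8]    # invented effort scores at time 2
print(round(spearman_rho(baseline, retest), 3))  # prints 0.771
```

Values of ρ in the .32-.36 range, as the study reports, indicate that an examinee's rank ordering at baseline is only weakly preserved at retest.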


Clinical Neuropsychologist | 2016

Base rate comparison of suboptimal scores on the RBANS effort scale and effort index in Parkinson’s disease

Kirstine R. Carter; James G. Scott; Russell L. Adams; John F. Linck

Objective: The effort index (EI) and the effort scale are commonly used embedded effort indicators on the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). This investigation examined the rates of suboptimal scores on the EI and effort scale in a Parkinson's disease (PD) sample. Method: One hundred and sixty-three participants who had been diagnosed with PD by a board-certified neurologist were included in the study. The base rate of suboptimal scores on the EI and effort scale was calculated for the entire group. Results: On average, participants were 66.8 years of age (SD = 9.5) and had a mean education of 13.5 years (SD = 2.79). The mean Mini-Mental State Examination score was 27.0 (SD = 3.1). Overall, 8% of participants scored below the cut-off for optimal performance on the EI while 62.6% performed in the suboptimal range for the effort scale. Conclusion: The utility of the EI and the effort scale in PD populations warrants further examination. Additionally, results demonstrate the need for validation of embedded RBANS effort measures in various disease populations.


Clinical Neuropsychologist | 2010

Assessment of the RBANS Visual and Verbal Indices in a Sample of Neurologically Impaired Elderly Participants

Darci R. Morgan; John F. Linck; James G. Scott; Russell L. Adams; James W. Mold

With increases in the older adult population, brief assessments sensitive to dementia are essential. This study assessed the effectiveness of the verbal memory and visual processing indices proposed by Duff et al. (2009) to differentiate participants with neurological disorders. Participants included individuals diagnosed with mild cognitive impairment (MCI; n = 38), Alzheimer's disease (AD; n = 100), or Parkinson's disease (PD; n = 35), with ages ranging from 65–93 years. In addition, normal control participants (n = 100) within the same age range were used for comparison. ANOVA and post hoc analyses revealed that the normal control and AD groups were significantly different from all groups for Verbal and Visual Indices. However, the MCI and PD groups did not differ from each other. Predictive discriminant analysis (PDA) assessed classification rates of the groups, and the normal participants were classified best (63% to 92%). The AD group followed with percentages ranging from 64% to 76%. Specifically, when classifying the normal and AD groups using both Verbal and Visual Indices of the RBANS together, sensitivity was 92.0% (n = 92) and specificity was 79.0% (n = 79). Overall classification rates for this analysis were 85.5%. Overall, the RBANS Verbal and Visual Indices may provide additional information when working with neurologically impaired older adults, with overall classification rates ranging from 61.5% to 85.5%.


Journal of Head Trauma Rehabilitation | 2015

Clinician versus Veteran ratings on the Mayo-Portland Participation Index in veterans with a history of mild traumatic brain injury.

Katie McCulloch; Nicholas J. Pastorek; Brian I. Miller; Jennifer Romesser; John F. Linck; Anita H. Sim; Maya Troyanskaya; Kacey Little Maestas

Background: The Department of Veterans Affairs is encouraging administration of the Mayo-Portland Adaptability Inventory–4 Participation Index (M2PI) to identify long-term psychosocial outcomes of Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) Veterans with a history of traumatic brain injury (TBI). Objective: To evaluate clinician and Veteran interrater reliability and how response validity influences M2PI item ratings. Participants: A total of 122 OEF/OIF/OND Veterans who reported a history consistent with mild TBI during deployment and were referred for neuropsychological evaluation following Comprehensive TBI Evaluation. Design: Interrater reliability study. Main Measures: M2PI; Minnesota Multiphasic Personality Inventory–2 Symptom Validity Scale (FBS). Results: Veterans reported greater perceived restrictions than clinicians across all M2PI items and total score. Interrater correlations ranged from rs = 0.27 (residence) to rs = 0.58 (money management) across items, with a total score correlation of rs = 0.60. When response bias was indicated, both Veteran and clinician ratings reflected greater participation restrictions than those reported for Veterans without evidence of response bias. Conclusion: Low interrater correlation is consistent with previous findings. As ratings of clinicians and Veterans should not be interpreted as equivalent, documenting the rater's identity is important for interpretation. Using objective indicators of functional outcome may assist clinician raters, particularly when self-report may be biased.


Archives of Clinical Neuropsychology | 2013

Heterogeneity in Trail Making Test Performance in OEF/OIF/OND Veterans with Mild Traumatic Brain Injury

Nicholas S. Thaler; John F. Linck; Daniel J. Heyanka; Nicholas J. Pastorek; Brian I. Miller; Jennifer Romesser; Anita Sim; Daniel N. Allen

This study used cluster analysis to examine variability in Trail Making Test (TMT) performance in a sample of Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn (OEF/OIF/OND) veterans referred for mild traumatic brain injury (mTBI). Three clusters were extracted, two of which were characterized by level of performance and the third with a unique performance pattern characterized by slow performance on the TMT B (Low B). Clusters did not differ on demographic or psychiatric variables. The Above Average cluster had better performance on measures of processing speed, working memory, and phonemic fluency compared with the Low B cluster. Results suggest that a subset of patients with mTBI perform poorly on TMT B, which subsequently predicts poorer cognitive functioning on several other neuropsychological measures. This subset may be vulnerable to cognitive changes in the context of mTBI and multiple comorbidities while a number of other patients remain cognitively unaffected under the same circumstances.


Archives of Clinical Neuropsychology | 2014

Ecological Validity of Performance Validity Testing

Sara M. Lippa; Nicholas J. Pastorek; Jennifer Romesser; John F. Linck; Anita H. Sim; Nick M. Wisdom; Brian I. Miller

Collaboration

Top co-authors of John F. Linck:

Brian I. Miller, University of Mississippi Medical Center
James G. Scott, University of Oklahoma Health Sciences Center
Russell L. Adams, University of Oklahoma Health Sciences Center
Daniel J. Heyanka, University of Oklahoma Health Sciences Center
David M. Scarisbrick, University of Oklahoma Health Sciences Center
James W. Mold, University of Oklahoma Health Sciences Center
Asim A. Shah, Baylor College of Medicine
Britta Ostermeyer, Baylor College of Medicine
Christopher T. Copeland, University of Oklahoma Health Sciences Center