
Publication


Featured research published by Ryan W. Schroeder.


Assessment | 2012

Reliable Digit Span: a systematic review and cross-validation study.

Ryan W. Schroeder; Philip Twumasi-Ankrah; Lyle E. Baade; Paul S. Marshall

Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these limitations, thus increasing the measure’s clinical utility. Sensitivity and specificity rates were calculated for the ≤6 and ≤7 cutoffs when data were globally combined and divided by clinical groups. The cross-validation of specific diagnostic groups was consistent with the data reported in the literature. Overall, caution should be used when utilizing the ≤7 cutoff in all clinical groups and when utilizing the ≤6 cutoff in the following groups: cerebrovascular accident, severe memory disorders, mental retardation, borderline intellectual functioning, and English as a second language. Additional limitations and cautions are provided.


Clinical Neuropsychologist | 2015

Neuropsychologists' Validity Testing Beliefs and Practices: A Survey of North American Professionals.

Phillip K. Martin; Ryan W. Schroeder; Anthony P. Odland

Objective: The current study investigated changes in neuropsychologists’ validity testing beliefs and practices since publication of the last North American survey targeting these issues in 2007 and explored emerging issues in validity testing that had not been previously addressed in the professional survey literature. Methods: Licensed North American neuropsychologists (n = 316), who primarily evaluate adults, were surveyed in regard to the following topics: (1) comparison of objective validity testing, qualitative data, and clinical judgment; (2) approaches to validity test administration; (3) formal communication in cases of suspected malingering; (4) reporting of validity test results; (5) suspected causes of invalidity; (6) integration of stand-alone, embedded, and symptom-report validity measures; (7) multiple performance validity test interpretation; (8) research practices; and (9) popularity of specific validity instruments. Results: Overall, findings from the current survey indicated that all but a small minority of respondents routinely utilize validity testing in their examinations. Furthermore, nearly all neuropsychologists surveyed believed formal validity testing to be mandatory in forensic evaluations and at least desirable in clinical evaluations. While results indicated general agreement among neuropsychologists across many aspects of validity testing, responses regarding some facets of validity test implementation, interpretation, and reporting were more variable. Validity testing utilization generally did not differ according to level of forensic involvement but did vary in respect to respondent literature consumption. Conclusions: Study findings differ significantly from past professional surveys and indicate an increased utilization of validity testing, suggesting a pronounced paradigm shift in neuropsychology validity testing beliefs and practices.


Clinical Neuropsychologist | 2012

Validation of MMPI-2-RF validity scales in criterion group neuropsychological samples.

Ryan W. Schroeder; Lyle E. Baade; Caleb P. Peck; Emanuel J. VonDran; Callie J. Brockman; Blake K. Webster; Robin J. Heinrichs

This study utilized multiple criterion group neuropsychological samples to evaluate the “over-reporting” and “under-reporting” MMPI-2-RF validity scales. The five criterion groups included in this study were (1) litigating traumatic brain injury patients who failed Slick et al. criteria for probable malingering, (2) litigating traumatic brain injury patients who passed Slick et al. criteria, (3) mixed neuropsychological outpatients who passed SVTs and were diagnosed with primary neurological conditions, (4) mixed neuropsychological outpatients who passed SVTs and were diagnosed with primary psychiatric conditions, and (5) epileptic seizure disorder inpatients who were diagnosed via video-EEG. Using the data from these groups, cumulative percentages for all possible T-scores and sensitivity and specificity rates for optimal cutoff scores were determined. When specificity rates were set at 90% across all non-malingering neurological condition groups, sensitivity rates ranged from 48% (FBS-r) to 10% (K-r).


Clinical Neuropsychologist | 2011

Evaluation of the Appropriateness of Multiple Symptom Validity Indices in Psychotic and Non-Psychotic Psychiatric Populations

Ryan W. Schroeder; Paul S. Marshall

Although it is recognized that significant cognitive deficits are inherent in many psychiatric disorders, there is minimal research on whether these deficits can cause a failing score on symptom validity tests (SVTs). The performances of 104 patients with psychotic disorders and 178 patients with non-psychotic psychiatric disorders on seven SVTs were examined. Analyses indicate that most of these SVTs have specificity rates of 90% or better for both clinical groups. Further, only 7% of patients in the psychotic group and 5% of patients in the non-psychotic psychiatric group produced false-positive classifications based on malingering criteria similar to those suggested by Slick et al. (i.e., failure of two or more SVTs or failure of one SVT at statistically significantly worse than chance rates). Consequently, this research indicates that psychiatric disorders typically do not adversely affect SVT performance.


Archives of Clinical Neuropsychology | 2013

Efficacy of Test of Memory Malingering Trial 1, Trial 2, the Retention Trial, and the Albany Consistency Index in a Criterion Group Forensic Neuropsychological Sample

Ryan W. Schroeder; W. H. Buddin; D. D. Hargrave; E. J. VonDran; E. B. Campbell; Callie J. Brockman; Robin J. Heinrichs; Lyle E. Baade

The Test of Memory Malingering is one of the most popular and heavily researched validity tests available for use in neuropsychological evaluations. Recent research has suggested, however, that the original indices and cutoffs may require modifications to increase sensitivity rates. Some of these modifications lack cross-validation and no study has examined all indices in a single sample. This study compares Trial 1, Trial 2, the Retention Trial, and the newly created Albany Consistency Index in a criterion group forensic neuropsychological sample. Findings lend support for the newly created indices and cutoff scores. Implications and cautionary statements are provided and discussed.


Clinical Neuropsychologist | 2016

Expert beliefs and practices regarding neuropsychological validity testing

Ryan W. Schroeder; Phillip K. Martin; Anthony P. Odland

Objective: The current study investigated expert beliefs and practices as they relate to neuropsychological validity testing. Methods: North American neuropsychologists with expertise in neuropsychological validity testing (n = 24) were surveyed on numerous items related to validity testing. Results were analyzed and compared to findings from a prior expert survey and a recent survey of a general sample of neuropsychological practitioners. Results: Responses varied among experts on some items, indicating that experts have differences of opinion and practice regarding certain validity testing topics. However, expert opinion converged on a number of topics central to validity testing, particularly those highlighting the need for and importance of validity testing in neuropsychological assessment. Notably, expert responses on these topics often agreed with responses obtained from a prior expert sample and a general sample of neuropsychological practitioners. Conclusions: The results allow practitioners to see the range of validity testing beliefs and practices among current experts. Especially in those areas where consensus emerged, the results provide a way for practitioners to determine if their practices are consistent with those of their expert colleagues.


Clinical Neuropsychologist | 2013

Differences in MMPI-2 FBS and RBS Scores in Brain Injury, Probable Malingering, and Conversion Disorder Groups: A Preliminary Study

C. P. Peck; Ryan W. Schroeder; Robin J. Heinrichs; E. J. VonDran; Callie J. Brockman; B. K. Webster; Lyle E. Baade

This study examined differences in raw scores on the Symptom Validity Scale and Response Bias Scale (RBS) from the Minnesota Multiphasic Personality Inventory-2 in three criterion groups: (i) valid traumatic brain injured, (ii) invalid traumatic brain injured, and (iii) psychogenic non-epileptic seizure disorders. Results indicate that a >30 raw score cutoff for the Symptom Validity Scale accurately identified 50% of the invalid traumatic brain injured group, while misclassifying none of the valid traumatic brain injured group and 6% of the psychogenic non-epileptic seizure disorder group. Using a >15 RBS raw cutoff score accurately classified 50% of the invalid traumatic brain injured group and misclassified fewer than 10% of the valid traumatic brain injured and psychogenic non-epileptic seizure disorder groups. These cutoff scores used conjunctively did not misclassify any members of the psychogenic non-epileptic seizure disorder or valid traumatic brain injured groups, while accurately classifying 44% of the invalid traumatic brain injured individuals. Findings from this preliminary study suggest that the conjunctive use of the Symptom Validity Scale and the RBS from the Minnesota Multiphasic Personality Inventory-2 may be useful in differentiating probable malingering from individuals with brain injuries and conversion disorders.


Archives of Clinical Neuropsychology | 2015

Does True Neurocognitive Dysfunction Contribute to Minnesota Multiphasic Personality Inventory-2nd Edition-Restructured Form Cognitive Validity Scale Scores?

Phillip K. Martin; Ryan W. Schroeder; Robin J. Heinrichs; Lyle E. Baade

Previous research has demonstrated that RBS and FBS-r identify non-credible reporters of cognitive symptoms, but the extent to which these scales might be influenced by true neurocognitive dysfunction has not been previously studied. The present study examined the relationship between these cognitive validity scales and neurocognitive performance across seven domains of cognitive functioning, both before and after controlling for PVT status in 120 individuals referred for neuropsychological evaluations. Variance in RBS, but not FBS-r, was significantly accounted for by neurocognitive test performance across most cognitive domains. After controlling for PVT status, however, relationships between neurocognitive test performance and validity scales were no longer significant for RBS, and remained non-significant for FBS-r. Additionally, PVT failure accounted for a significant proportion of the variance in both RBS and FBS-r. Results support both the convergent and discriminant validity of RBS and FBS-r. As neither scale was impacted by true neurocognitive dysfunction, these findings provide further support for the use of RBS and FBS-r in neuropsychological evaluations.


Clinical Neuropsychologist | 2014

An Examination of the Frequency of Invalid Forgetting on the Test of Memory Malingering

W. Howard Buddin; Ryan W. Schroeder; David D. Hargrave; Emmanuel J. Von Dran; Elizabeth B. Campbell; Callie J. Brockman; Robin J. Heinrichs; Lyle E. Baade

The Test of Memory Malingering (TOMM) is the most widely used performance validity test in neuropsychology, but it does not measure response consistency, which is central to the measurement of credible presentation. Gunner, Miele, Lynch, and McCaffrey (2012) developed the Albany Consistency Index (ACI) to address this need. The ACI consistency measurement, however, may penalize examinees, resulting in suboptimal accuracy. The Invalid Forgetting Frequency Index (IFFI), created for the present study, utilizes an algorithm to identify and differentiate learning and inconsistent response patterns across TOMM trials. The purpose of this study was to assess the diagnostic accuracy of the ACI and IFFI against a reference test (Malingered Neurocognitive Dysfunction criteria), and to compare both to the standard TOMM indices. This retrospective case-control study used 59 forensic cases from an outpatient clinic in southern Kansas. Results indicated that the sensitivity, negative predictive value, and overall accuracy of the IFFI were superior to both the standard TOMM indices and the ACI. Logistic regression odds ratios were similar for TOMM Trial 2, Retention, and the IFFI (1.25, 1.24, and 1.25, respectively), with the ACI somewhat lower (1.18). The IFFI had the highest rate of correct group membership predictions (79.7%). Implications and limitations of the present study are discussed.


Cognitive and Behavioral Neurology | 2012

The Coin-in-the-Hand Test and dementia: more evidence for a screening test for neurocognitive symptom exaggeration.

Ryan W. Schroeder; Caleb P. Peck; William H. Buddin; Robin J. Heinrichs; Lyle E. Baade

Background: The Coin-in-the-Hand Test was developed to help clinicians distinguish patients who are neurocognitively impaired from patients who are exaggerating or feigning memory complaints. Previous findings have shown that participants asked to feign memory problems and patients suspected of malingering performed worse on the test than patients with genuine neurocognitive dysfunction. Objective: We reviewed the literature on the Coin-in-the-Hand Test and evaluated test performance by 45 hospitalized patients who had dementia with moderately to severely impaired cognition. Methods: We analyzed Coin-in-the-Hand Test scores, neuropsychological findings, and other data to determine whether demographic or neurocognitive variables affected Coin-in-the-Hand Test scores. We also calculated base rates of these scores and provided cutoff ranges for clinical use. Results: Coin-in-the-Hand Test scores were independent of neurocognitive functioning, age, education level, and type of dementia. Base rates of scores suggest that a low cutoff can help differentiate between patients with true neurocognitive impairments and those exaggerating or feigning memory complaints. Conclusions: Both the literature and our findings show the Coin-in-the-Hand Test to have potential as a quick and easy screening tool to detect neurocognitive symptom exaggeration. This test could effectively supplement commonly used neurocognitive screens such as the Mini-Mental State Examination, the Saint Louis University Mental Status Examination, and the Montreal Cognitive Assessment.

Collaboration


Ryan W. Schroeder's top co-authors and their affiliations.

Anthony P. Odland

Rush University Medical Center

Paul S. Marshall

Hennepin County Medical Center
