
Publication


Featured research published by Douglas M. Whiteside.


Applied Neuropsychology | 2016

Verbal Fluency: Language or Executive Function Measure?

Douglas M. Whiteside; Tammy Kealey; Matthew Semla; Hien Luu; Linda Rice; Michael R. Basso; Brad L. Roper

Measures of phonemic and semantic verbal fluency, such as FAS and Animal Fluency (Benton, Hamsher, & Sivan, 1989), are often thought to be measures of executive functioning (EF). However, some studies (Henry & Crawford, 2004a, 2004b, 2004c) have noted there is also a language component to these tasks. The current exploratory factor-analytic study examined the underlying cognitive structure of verbal fluency. Participants were administered language and EF measures, including the Controlled Oral Word Association Test (FAS version), Animal Fluency, Boston Naming Test (BNT), Vocabulary (Wechsler Adult Intelligence Scale-III), Wisconsin Card-Sorting Test (WCST, perseverative responses), and Trail-Making Test-Part B (TMT-B). A two-factor solution was found, with the first factor, language, having significant loadings for BNT and Vocabulary, while the second factor was labeled EF because of significant loadings from the WCST and TMT-B. Surprisingly, FAS and Animal Fluency loaded exclusively onto the language factor and not EF. The current results do not exclude EF as a determinant of verbal fluency, but they do suggest that language processing is the critical component for this task, even in the absence of significant aphasic symptoms. Thus, the results indicated that both letter (phonemic) and category (semantic) fluency are related to language, while a relationship to EF was not supported.


Clinical Neuropsychologist | 2012

Detecting Suboptimal Cognitive Effort: Classification Accuracy of the Conners' Continuous Performance Test-II, Brief Test of Attention, and Trail Making Test

Michelle Busse; Douglas M. Whiteside

Many cognitive measures have been studied for their ability to detect suboptimal cognitive effort; however, attention measures have not been extensively researched. The current study evaluated the classification accuracy of commonly used attention/concentration measures: the Brief Test of Attention (BTA), Trail Making Test (TMT), and Conners' Continuous Performance Test-II (CPT-II). Participants included 413 consecutive patients who completed a comprehensive neuropsychological evaluation. Participants were separated into two groups, identified as either unbiased responders or biased responders as determined by performance on the Test of Memory Malingering (TOMM). Based on Mann-Whitney U results, the two groups differed significantly on all attentional measures. Classification accuracy of the BTA (.83), CPT-II omission errors (OE; .76), and TMT B (.75) was acceptable; however, classification accuracy of CPT-II commission errors (CE; .64) and TMT A (.62) was poor. Combining variables did not significantly increase sensitivity. Results indicated that, at optimal cut-off scores, sensitivity ranged from 48% to 64% when specificity was at least 85%. Given that sensitivity rates were not adequate, there remains a need to utilize highly sensitive measures in addition to these embedded measures. Results were discussed within the context of research promoting the need for multiple measures of cognitive effort.
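The cut-off analyses described above can be sketched in a few lines: scan candidate cut-offs and keep the one that maximizes sensitivity while holding specificity at or above a floor (here 85%, as in the study). The scores below are invented for illustration; they are not the study's data, and lower scores are simply assumed to indicate worse performance.

```python
# Hypothetical sketch of choosing a cut-off at a fixed specificity floor.
# A score at or below the cut-off is flagged as biased responding.

def cutoff_at_specificity(unbiased, biased, min_spec=0.85):
    """Return (cutoff, sensitivity, specificity) for the best cut-off."""
    best = None
    for cut in sorted(set(unbiased) | set(biased)):
        sens = sum(s <= cut for s in biased) / len(biased)     # flagged biased cases
        spec = sum(s > cut for s in unbiased) / len(unbiased)  # cleared unbiased cases
        if spec >= min_spec and (best is None or sens > best[1]):
            best = (cut, sens, spec)
    return best

# Invented raw scores for the two TOMM-defined groups
unbiased = [14, 15, 16, 16, 17, 18, 18, 19, 19, 20]
biased = [8, 9, 11, 12, 13, 14, 15, 16]
print(cutoff_at_specificity(unbiased, biased))  # (14, 0.75, 0.9)
```

Raising `min_spec` trades sensitivity for specificity, which is why the abstract reports sensitivity conditional on a specificity floor rather than a single accuracy figure.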


Clinical Neuropsychologist | 2009

Relationship between TOMM performance and PAI validity scales in a mixed clinical sample

Douglas M. Whiteside; Philip Dunbar-Mayer; Dana Waters

This study addressed the relationship between Personality Assessment Inventory (PAI) validity indicators and cognitive effort measures on the Test of Memory Malingering (TOMM). Significant correlations were found between the TOMM and some PAI validity scales. Factor analysis found separate cognitive and personality components, but the Negative Impression Management (NIM) scale, a measure of response bias, had loadings on both the cognitive and the personality components. Follow-up hierarchical multiple regression and t-test analyses generally confirmed this result and found that NIM and the Infrequency (INF) scale have significant relationships with the TOMM. The results indicate that individuals with elevations on the PAI's INF and NIM scales often display decreased cognitive effort on the TOMM. The current results support the hypothesis that personality assessment validity indicators have a modest but significant relationship with poor cognitive effort.


Journal of Clinical and Experimental Neuropsychology | 2015

Language-based embedded performance validity measures in traumatic brain injury

Douglas M. Whiteside; Julia Kogan; Lydia Wardin; Derek Phillips; M. Graciela Franzwa; Linda Rice; Michael R. Basso; Brad L. Roper

No studies to date have investigated the Boston Naming Test (BNT) as an embedded performance validity test (PVT). This study investigated the classification accuracy of the BNT and the Verbal Fluency Test (FAS and Animal Fluency) as embedded PVTs in a compensation-seeking mild traumatic brain injury (MTBI) sample (N = 57) compared to a non-compensation-seeking moderate-to-severe TBI (STBI) sample (N = 61). Participants in the MTBI sample who failed two or more PVTs were included, as were STBI participants who passed all PVTs. The classification accuracy of the individual tests and a logistically derived combined measure (LANGPVT) were studied. Results showed significant group differences (p < .05) on BNT, Animal Fluency, and LANGPVT between the MTBI and STBI groups. However, receiver operating characteristic (ROC) analyses indicated that only LANGPVT had acceptable classification accuracy (area under the curve > .70). Setting specificity at approximately .90, the recommended LANGPVT cutoff score had sensitivity of .26. Results indicated that, similar to other embedded PVTs, these measures had low sensitivity when adequate specificity levels were maintained. However, extremely low scores on these measures are unlikely to occur in non-compensation-seeking, non-language-impaired STBI cases.


Clinical Neuropsychologist | 2011

Classification Accuracy of Multiple Visual Spatial Measures in the Detection of Suspect Effort

Douglas M. Whiteside; Danielle Wald; Michelle Busse

A wide variety of cognitive measures, particularly memory measures, have been studied for their ability to detect suspect effort, or biased responding, on neuropsychological assessment instruments. However, visual spatial measures have received less attention. The purpose of this study was to evaluate the classification accuracy of several commonly used visual spatial measures, including the Judgment of Line Orientation Test, the Benton Facial Recognition Test, the Hooper Visual Organization Test, and the Rey Complex Figure Test-Copy and Recognition trials. Participants included 491 consecutive referrals who participated in a comprehensive neuropsychological assessment and met study criteria. Participants were divided into two groups, identified as either unbiased responding (UR, N = 415) or biased responding (BR, N = 30), based on their performance on two measures of effort. The remaining participants (N = 46) had discrepant performance on the symptom validity measures and were excluded from further analysis. The groups differed significantly on all measures. Additionally, receiver operating characteristic (ROC) analysis indicated all of the measures had acceptable classification accuracy, while a measure combining scores from all of the measures had excellent classification accuracy. Results indicated that various cut-off scores on the measures could be used depending on the context of the evaluation. Suggested cut-off scores for the individual measures had sensitivity levels of approximately 32–46% when specificity was at least 87%. When the measures were combined, sensitivity at the suggested cut-off score increased to 57% while maintaining the same level of specificity (87%). The results were discussed in the context of research advocating the use of multiple measures of effort.


Journal of Clinical and Experimental Neuropsychology | 2012

Differential response patterns on the Personality Assessment Inventory (PAI) in compensation-seeking and non-compensation-seeking mild traumatic brain injury patients

Douglas M. Whiteside; Jennifer Galbreath; Michelle Brown; Jane Turnbull

There is relatively little research on the Personality Assessment Inventory (PAI) with mild traumatic brain injury (MTBI) populations. There is also little research on how compensation-seeking status affects personality assessment results in MTBI patients. The current study examined the PAI scales and subscales in two MTBI groups, one composed of compensation-seeking MTBI patients and the other consisting of non-compensation-seeking MTBI patients. Results indicated significant differences on several scales and subscales between the two groups, with the compensation-seeking MTBI patients having significantly higher elevations on scales related to somatic preoccupation (Somatic Complaint Scale, SOM), emotional distress (Anxiety Scale, ANX; Anxiety Related Disorders Scale, ARD; Depression Scale, DEP), and the Negative Impression Management (NIM) validity scale. All the SOM subscales and the Anxiety Cognitive (ANX-C) and Anxiety Affective (ANX-A) subscales were also elevated in the compensation-seeking group. Results indicated that several scales on the PAI were sensitive to group differences in compensation-seeking status in MTBI patients.


Clinical Neuropsychologist | 2010

Relationship between suboptimal cognitive effort and the clinical scales of the Personality Assessment Inventory

Douglas M. Whiteside; Courtney Clinton; Christina Diamonti; Julie Stroemel; Claire White; Anya Zimberoff; Dana Waters

Little research has examined the relationship between the Personality Assessment Inventory (PAI) and cognitive effort. The current study extends the research on personality assessment and suboptimal cognitive effort by evaluating the relationship between the PAI clinical scales and the Test of Memory Malingering (TOMM) in a neuropsychological population. Utilizing corrections for multiple comparisons, rank-order correlations between TOMM Trial 2 (T2) and the PAI clinical scales indicated a significant relationship with the Somatic Complaints (SOM) scale (rho = −.26, p < .001), with additional scales (SCZ, ANX, and DEP) trending toward significance. Analysis of SOM subscales indicated a significant relationship between SOM-C and T2 as well. To further explore the relationship between SOM and the TOMM, ANOVA results indicated that individuals scoring within normal limits on SOM had higher mean TOMM scores than those with extremely elevated SOM. Additional analyses indicated that the cut-off for extreme responding on the SOM scale (T > 87) had adequate sensitivity (93%) and specificity (76%) in predicting TOMM performance, with a positive predictive power of 54% and a negative predictive power of 97%, resulting in a 91% correct classification rate. Thus, the evidence suggests that extreme scores on SOM should prompt careful evaluation for suboptimal cognitive effort.
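Unlike sensitivity and specificity, positive and negative predictive power depend on the base rate of the condition in the sample, which is why the abstract reports them alongside a classification rate. A minimal sketch of that arithmetic, using the abstract's sensitivity and specificity but a purely hypothetical base rate (the study's own base rate is not given here, so these outputs are illustrative only and will not match the reported PPV/NPV):

```python
# Sketch: predictive power from sensitivity, specificity, and base rate.

def predictive_values(sens, spec, base_rate):
    tp = sens * base_rate              # true positives
    fn = (1 - sens) * base_rate        # false negatives
    fp = (1 - spec) * (1 - base_rate)  # false positives
    tn = spec * (1 - base_rate)        # true negatives
    ppv = tp / (tp + fp)               # positive predictive power
    npv = tn / (tn + fn)               # negative predictive power
    accuracy = tp + tn                 # overall correct classification
    return ppv, npv, accuracy

# SOM T > 87 characteristics from the abstract, with an assumed 15% base
# rate of suboptimal effort:
ppv, npv, acc = predictive_values(sens=0.93, spec=0.76, base_rate=0.15)
print(ppv, npv, acc)
```

Re-running with different base rates shows PPV falling sharply as the condition becomes rarer, the usual caution when transporting cut-offs to settings with different referral mixes.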


Clinical Neuropsychologist | 1996

Cognitive screening with the neurobehavioral cognitive status examination in a chronic liver disease population

Douglas M. Whiteside; Marjorie A. Padula; Louise K. Jeffrey; Rowen K. Zetterman

The Neurobehavioral Cognitive Status Examination (NCSE) was developed as a brief neuropsychological screening instrument. To date there has been minimal research investigating the usefulness of the NCSE. The purpose of this study is to analyze the factor structure of the NCSE and investigate the relationship between the NCSE and the Trail Making Test as a criterion measure of cognitive dysfunction in patients with chronic liver disease. Exploratory factor analysis suggests that an attention-based, general cognitive functioning factor is present. Correlational and regression analyses suggest that the Memory and Construction subtests are most strongly related to cognitive dysfunction secondary to hepatic encephalopathy. Finally, preliminary norms for the NCSE are presented utilizing this population. Results suggest that the NCSE is relatively free of age and education effects, is an adequate screen for general cognitive dysfunction, and is a viable alternative to other cognitive screening instruments.


Clinical Neuropsychologist | 2015

Derivation of a Cross-Domain Embedded Performance Validity Measure in Traumatic Brain Injury

Douglas M. Whiteside; Owen J. Gaasedelen; Amanda E. Hahn-Ketter; Hien Luu; Michelle L. Miller; Virginia Persinger; Linda Rice; Michael R. Basso

Objective: Performance validity assessment is increasingly considered standard practice in neuropsychological evaluations. The current study extended research on logistically derived performance validity tests (PVTs) by utilizing neuropsychological measures from multiple cognitive domains instead of a single measure or a single cognitive domain. Method: A logistically derived PVT was calculated using several measures from multiple cognitive domains, including verbal memory (California Verbal Learning Test-II Trial 5, Total Hits, and False Positives), attention (Brief Test of Attention Total score), and language (Boston Naming Test T-score and Animal Fluency T-score). Due to its cross-domain nature, this embedded PVT was hypothesized to have excellent classification accuracy for non-credible performance. Participants included 224 patients who completed all measures: moderate to severe traumatic brain injury (STBI) patients (N = 66), possible mild TBI patients who failed at least two independent PVTs (MTBI-FAIL; N = 67), and possible mild TBI patients who passed all PVTs (MTBI-PASS; N = 91). Logistic regression and ROC analyses were conducted on the MTBI-FAIL and STBI groups. Results: Multivariate analysis of variance indicated that the MTBI-FAIL group scored significantly lower on all measures than the MTBI-PASS and STBI groups. In the logistic regression, CVLT Total Hits, BTA, and CVLT False Positives best differentiated between the MTBI-FAIL and STBI groups. The logistically derived PVT had excellent classification accuracy (area under the curve [AUC] = .84), with sensitivity of .54 when specificity was set at .90, higher than any individual variable. Conclusions: Findings support the use of this logistically derived variable as an embedded PVT and support further research with this type of methodology.
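The core idea behind a cross-domain combined PVT is that measures that each separate the groups only modestly can separate them better when combined. A tiny illustration with invented scores, where a simple sum of measures stands in for the paper's fitted logistic-regression weights (a deliberate simplification), and AUC is computed directly as the probability that a random case scores below a random control:

```python
# Sketch: AUC of individual measures vs. a combined score.

def auc(cases, controls):
    """AUC: probability that a randomly chosen case scores lower than a
    randomly chosen control, with ties counted as half. Lower scores here
    mean worse (less credible) performance."""
    pairs = [(c, k) for c in cases for k in controls]
    wins = sum(1.0 if c < k else 0.5 if c == k else 0.0 for c, k in pairs)
    return wins / len(pairs)

# Two invented measures for an MTBI-FAIL group (cases) and an STBI
# comparison group (controls)
fail_m1, fail_m2 = [3, 7, 4], [5, 2, 6]
stbi_m1, stbi_m2 = [6, 5, 8], [4, 7, 7]

print(auc(fail_m1, stbi_m1))  # measure 1 alone
print(auc(fail_m2, stbi_m2))  # measure 2 alone

combined_fail = [a + b for a, b in zip(fail_m1, fail_m2)]
combined_stbi = [a + b for a, b in zip(stbi_m1, stbi_m2)]
print(auc(combined_fail, combined_stbi))  # combined score: higher AUC
```

In this toy data each measure alone reaches an AUC of about .78, while the combined score reaches about .94, mirroring the paper's finding that the cross-domain composite outperformed any individual variable.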


Clinical Neuropsychologist | 2014

Exploring the reliability and component structure of the Personality Assessment Inventory in a neuropsychological sample

Michelle Busse; Douglas M. Whiteside; Dana Waters; Jared R. Hellings; Peter Ji

The current study was designed to advance general research investigating the Personality Assessment Inventory (PAI) by examining whether the psychometric properties of the PAI would generalize to a sample differing from the original standardization sample. Specifically, the reliability and factor structure of the PAI were examined in a mixed neuropsychological sample. Full scale reliability coefficients ranged from .72 to .94, and subscale coefficients ranged from .60 to .90. Confirmatory factor analysis (CFA) was conducted to test Morey's original four-factor model (for all 22 PAI scales) and three-factor model (for the 11 clinical scales). CFA results indicated that Morey's original factor solutions were not a good fit. Thus, following Morey's original methodology, principal components analyses (PCA) were conducted on all 22 PAI scales and on the 11 PAI clinical scales; the results indicated evidence for a five-component solution (for all 22 scales) and a two-component solution (for the 11 clinical scales). Overall, while results indicated some relatively subtle differences between the original standardization sample and the current sample, they still supported the notion that the PAI is a reliable and valid measure when used in a neuropsychological sample. This study expands upon the existing literature related to the clinical utility of the PAI in specialized samples.
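Scale reliability coefficients of the kind reported above are commonly Cronbach's alpha (the abstract does not name the coefficient, so that is an assumption). A minimal computation on invented item responses:

```python
# Sketch: Cronbach's alpha from raw item scores.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three hypothetical items answered by four respondents
items = [
    [2, 4, 3, 5],
    [3, 4, 2, 5],
    [2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Alpha rises when items covary (their shared variance inflates the total-score variance relative to the summed item variances), which is why internally consistent scales land in the .72-.94 range reported for the PAI full scales.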

Collaboration


Dive into Douglas M. Whiteside's collaboration.

Top Co-Authors

Linda Rice (Rehabilitation Institute of Chicago)

Amanda E. Hahn-Ketter (Icahn School of Medicine at Mount Sinai)

Brad L. Roper (Rush University Medical Center)

Dennis R. Combs (University of Texas at Tyler)

Hien Luu (Adler School of Professional Psychology)

Christine Paprocki (University of North Carolina at Chapel Hill)