Publication


Featured research published by Wesley R. Cole.


Journal of Head Trauma Rehabilitation | 2015

The experience, expression, and control of anger following traumatic brain injury in a military sample.

Jason M. Bailie; Wesley R. Cole; Brian J. Ivins; Cynthia M. Boyd; Steven Lewis; John Neff; Karen Schwab

Objective: To investigate the impact of traumatic brain injury (TBI) on the experience and expression of anger in a military sample. Participants: A total of 661 military personnel with a history of TBI and 1204 military personnel with no history of TBI. Design: Cross-sectional, between-group design, using multivariate analysis of variance. Main Measure: State-Trait Anger Expression Inventory-2 (STAXI-2). Results: Participants with a history of TBI had higher scores on the STAXI-2 than controls and were 2 to 3 times more likely than participants in the control group to have at least 1 clinically significant elevation on the STAXI-2. Results suggested that greater time since injury (i.e., months between TBI and assessment) was associated with lower scores on the STAXI-2 State Anger scale. Conclusion: Although the results do not take into account confounding psychiatric conditions and cannot address causality, they suggest that a history of TBI increases the risk of problems with the experience, expression, and control of anger. This bolsters the need for proper assessment of anger when evaluating TBI in a military cohort.
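
The "2 to 3 times more likely" finding is a relative risk comparison between the two groups. A minimal sketch of that arithmetic, using hypothetical cell counts (the abstract does not report the underlying counts):

```python
# Risk ratio for having at least one clinically significant STAXI-2
# elevation, TBI group versus controls. All counts are hypothetical
# placeholders, not the study's data.
tbi_elevated, tbi_total = 240, 661           # hypothetical
control_elevated, control_total = 170, 1204  # hypothetical

risk_tbi = tbi_elevated / tbi_total
risk_control = control_elevated / control_total
risk_ratio = risk_tbi / risk_control

print(f"Risk (TBI history): {risk_tbi:.3f}")      # 0.363
print(f"Risk (controls):    {risk_control:.3f}")  # 0.141
print(f"Risk ratio:         {risk_ratio:.2f}")    # ~2.6, in the 2-3x range
```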


Journal of the Neurological Sciences | 2016

Assessment of the King-Devick® (KD) test for screening acute mTBI/concussion in warfighters

David V. Walsh; José E. Capó-Aponte; Thomas Beltran; Wesley R. Cole; Ashley Ballard; Joseph Y. Dumayas

OBJECTIVES: The Department of Defense reported that 344,030 cases of traumatic brain injury (TBI) were clinically confirmed from 2000 to 2015, with mild TBI (mTBI) accounting for 82.3% of all cases. Unfortunately, warfighters with TBI are often identified only when moderate or severe head injuries have occurred, leaving more subtle mTBI cases undiagnosed. This study aims to identify and validate an eye-movement visual test for screening acute mTBI. METHODS: Two hundred active duty military personnel were recruited to perform the King-Devick® (KD) test. Subjects were equally divided into two groups: those with diagnosed acute mTBI (≤72 h) and age-matched controls. The KD test was administered twice for test-retest reliability, and the outcome measure was total cumulative time to complete each test. RESULTS: The mTBI group's mean performance time was approximately 36% slower, with significant differences between the groups (p < 0.001) on both tests. There were significant differences between the two KD test administrations in each group; however, a strong correlation was observed between the administrations. CONCLUSIONS: Significant differences in KD test performance were seen between the acute mTBI and control groups. The results suggest the KD test can be utilized for screening acute mTBI. A validated and rapidly administered mTBI screening test with results that are easily interpreted by providers is essential in making return-to-duty decisions for the injured warfighter.
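
The two headline statistics here, mean percent slowdown and test-retest association, reduce to simple computations. A minimal sketch with made-up timing data (not the study's measurements); note that `statistics.correlation` requires Python 3.10+:

```python
import statistics

# Hypothetical KD completion times in seconds; not the study's data.
control_times = [42.1, 45.0, 39.8, 44.3]
mtbi_times = [57.9, 60.2, 55.4, 61.0]

mean_control = statistics.mean(control_times)
mean_mtbi = statistics.mean(mtbi_times)
slowdown = (mean_mtbi - mean_control) / mean_control * 100
print(f"Mean slowdown: {slowdown:.0f}%")  # ~37% with these made-up values

# Test-retest reliability as the Pearson r between two administrations.
first_admin = [42.1, 57.9, 45.0, 60.2, 39.8, 55.4]
second_admin = [43.0, 56.5, 44.1, 61.3, 40.9, 54.0]
r = statistics.correlation(first_admin, second_admin)  # Python 3.10+
print(f"Test-retest r: {r:.2f}")
```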


Contemporary Clinical Trials | 2015

Concussion treatment after combat trauma: Development of a telephone based, problem solving intervention for service members

Kathleen R. Bell; Jo Ann Brockway; Jesse R. Fann; Wesley R. Cole; Jef St. De Lore; Nigel Bush; Ariel J. Lang; Tessa Hart; Michael Warren; Sureyya Dikmen; Nancy Temkin; Sonia Jain; Rema Raman; Murray B. Stein

Military service members (SMs) and veterans who sustain mild traumatic brain injuries (mTBI) during combat deployments often have co-morbid conditions but are reluctant to seek out therapy in medical or mental health settings. Efficacious methods of intervention that are patient-centered and adaptable to a mobile and often difficult-to-reach population would be useful in improving quality of life. This article describes a new protocol developed as part of a randomized clinical trial of a telephone-mediated program for SMs with mTBI. The 12-session program combines problem solving training (PST) with embedded modules targeting depression, anxiety, insomnia, and headache. The rationale and development of this behavioral intervention for implementation with persons with multiple co-morbidities is described along with the proposed analysis of results. In particular, we provide details regarding the creation of a treatment that is manualized yet flexible enough to address a wide variety of problems and symptoms within a standard framework. The methods involved in enrolling and retaining an often hard-to-study population are also highlighted.


Journal of Neurotrauma | 2017

Telephone Problem Solving for Service Members with Mild Traumatic Brain Injury: A Randomized Clinical Trial.

Kathleen R. Bell; Jesse R. Fann; Jo Ann Brockway; Wesley R. Cole; Nigel Bush; Sureyya Dikmen; Tessa Hart; Ariel J. Lang; Gerald A. Grant; Gregory A. Gahm; Mark A. Reger; Jef St. De Lore; Joanie Machamer; Karin Ernstrom; Rema Raman; Sonia Jain; Murray B. Stein; Nancy Temkin

Mild traumatic brain injury (mTBI) is a common injury for service members in recent military conflicts. There is insufficient evidence of how best to treat the consequences of mTBI. In a randomized clinical trial, we evaluated the efficacy of telephone-delivered problem-solving treatment (PST) on psychological and physical symptoms in 356 post-deployment active duty service members from Joint Base Lewis-McChord, Washington, and Fort Bragg, North Carolina. Members with medically confirmed mTBI sustained during deployment to Iraq and Afghanistan within the previous 24 months received PST or education-only (EO) interventions. The PST group received up to 12 biweekly telephone calls from a counselor for subject-selected problems. Both groups received 12 educational brochures describing common mTBI and post-deployment problems, with follow-up for all at 6 months (end of PST) and at 12 months. At 6 months, the PST group significantly improved on a measure of psychological distress (Brief Symptom Inventory; BSI-18) compared to the EO group (p = 0.005), but not on post-concussion symptoms (Rivermead Post-Concussion Symptoms Questionnaire [RPQ]; p = 0.19), the two primary endpoints. However, these effects did not persist at 12-month follow-up (BSI, p = 0.54; RPQ, p = 0.45). The PST group also had significant short-term improvement on secondary endpoints, including sleep (p = 0.01), depression (p = 0.03), post-traumatic stress disorder (p = 0.04), and physical functioning (p = 0.03). Participants preferred PST over EO (p < 0.001). Telephone-delivered PST appears to be a well-accepted treatment that offers promise for reducing psychological distress after combat-related mTBI and could be a useful adjunct treatment post-mTBI. Further studies are required to determine how to sustain its effects. (Trial registration: ClinicalTrials.gov identifier NCT01387490, https://clinicaltrials.gov).


Journal of Clinical and Experimental Neuropsychology | 2018

Clinical utility of the mBIAS and NSI validity-10 to detect symptom over-reporting following mild TBI: A multicenter investigation with military service members

Patrick Armistead-Jehle; Douglas B. Cooper; Chad E. Grills; Wesley R. Cole; S Lippa; Robert L. Stegman; Rael T. Lange

Self-report measures are commonly relied upon in military healthcare environments to assess service members following a mild traumatic brain injury (mTBI). However, such instruments are susceptible to over-reporting and rarely include validity scales. This study evaluated the utility of the mild Brain Injury Atypical Symptoms scale (mBIAS) and the Neurobehavioral Symptom Inventory Validity-10 scale to detect symptom over-reporting. A total of 359 service members with a reported history of mTBI were separated into two symptom reporting groups based on MMPI-2-RF validity scales (i.e., non-over-reporting versus symptom over-reporting). The clinical utility of the mBIAS and Validity-10 as diagnostic indicators and screens of symptom over-reporting was evaluated by calculating sensitivity, specificity, positive test rate, positive predictive power (PPP), and negative predictive power (NPP) values. An mBIAS cut score of ≥10 was optimal as a diagnostic indicator, resulting in high specificity and PPP but low sensitivity. The utility of the mBIAS as a screening instrument was limited. A Validity-10 cut score of ≥33 was optimal as a diagnostic indicator, resulting in very high specificity and PPP but low sensitivity. A Validity-10 cut score of ≥7 was considered optimal as a screener, resulting in moderate sensitivity, specificity, and NPP, but relatively low PPP. Owing to low sensitivity, the current data suggest that both the mBIAS and Validity-10 are insufficient as stand-alone measures of symptom over-reporting. However, Validity-10 scores above the identified cut-off of ≥7 should be taken as an indication that further evaluation to rule out symptom over-reporting is necessary.
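
The diagnostic-utility statistics named in this abstract all derive from a 2 × 2 table crossing the cut score against the MMPI-2-RF criterion. A minimal sketch with hypothetical counts (chosen only to sum to the study's N = 359, not taken from the paper):

```python
# 2x2 classification of a validity-scale cut score against the
# MMPI-2-RF over-reporting criterion. Counts are hypothetical,
# chosen only to sum to the study's N = 359.
tp, fp = 18, 4    # scored above the cut: over-reporters / non-over-reporters
fn, tn = 52, 285  # scored below the cut

sensitivity = tp / (tp + fn)               # hits among over-reporters
specificity = tn / (tn + fp)               # correct rejections
ppp = tp / (tp + fp)                       # positive predictive power
npp = tn / (tn + fn)                       # negative predictive power
positive_rate = (tp + fp) / (tp + fp + fn + tn)

print(f"Sensitivity: {sensitivity:.2f}")   # low, as reported for the high cuts
print(f"Specificity: {specificity:.2f}")   # high
print(f"PPP: {ppp:.2f}  NPP: {npp:.2f}")
print(f"Positive test rate: {positive_rate:.2f}")
```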


Journal of Clinical and Experimental Neuropsychology | 2017

The impact of administration order in studies of computerized neurocognitive assessment tools (NCATs)

Wesley R. Cole; J Arrieux; Elizabeth M. Dennison; Brian J. Ivins

Computerized neurocognitive assessment tools (NCATs) have become a common way to assess postconcussion symptoms. As there is increasing research directly comparing multiple NCATs to each other, it is important to consider the impact that order of test administration may have on the integrity of the results. This study investigates the impact of administration order in a study of four different NCATs: Automated Neuropsychological Assessment Metrics (ANAM4), CNS Vital Signs (CNS-VS), CogState, and Immediate Post-Concussion Assessment and Cognitive Test (ImPACT). A total of 272 healthy active duty service members were enrolled in this study. All participants were randomly assigned to take two of the four NCATs, with order of administration counterbalanced. Analyses investigated the effect of administration order alone (e.g., Time 1 versus Time 2), the effect of administration order combined with the impact of the specific NCAT received at Time 1, and the impact of the Time 1 NCAT alone on Time 2 score variability. Specifically, independent samples t tests were used to compare Time 1 and Time 2 scores within each NCAT. Additional t tests compared Time 1 to Time 2 scores with Time 2 scores grouped by the NCAT received at Time 1. One-way analysis of variance (ANOVA) was used to compare an NCAT's Time 2 scores grouped by the NCAT received at Time 1. Cohen's d effect sizes were calculated for all comparisons. The results revealed statistically significant order effects for CogState and CNS-VS, though with effect sizes generally indicating minimal practical value, and marginal or absent order effects for ANAM4 and ImPACT with no clinically meaningful implications. Despite finding minimal order effects, clinicians should be mindful of the impact of administering multiple NCATs in a single session. Future studies should continue to be designed to minimize the potential effect of test administration order.
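
Cohen's d, the effect size used for these Time 1 versus Time 2 comparisons, is the mean difference scaled by a pooled standard deviation. A minimal sketch with made-up scores (not the study's data):

```python
import statistics

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical Time 1 vs. Time 2 scores on one NCAT index.
time1 = [98, 102, 95, 101, 99, 103]
time2 = [100, 104, 97, 102, 101, 106]
d = cohens_d(time1, time2)
# Conventional benchmarks: |d| of 0.2 is small, 0.5 medium, 0.8 large.
print(f"d = {d:.2f}")
```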


Archives of Clinical Neuropsychology | 2015

Using Base Rates of Low Scores to Interpret the ANAM4 TBI-MIL Battery Following Mild Traumatic Brain Injury

Brian J. Ivins; Rael T. Lange; Wesley R. Cole; Robert Kane; Karen Schwab; Grant L. Iverson

Base rates of low ANAM4 TBI-MIL scores were calculated in a convenience sample of 733 healthy male active duty soldiers using available military reference values for the following cutoffs: ≤2nd percentile (2 SDs), ≤5th percentile, <10th percentile, and <16th percentile (1 SD). Rates of low scores were also calculated in 56 active duty male soldiers who had sustained an mTBI an average of 23 days (SD = 36.1) earlier. In the healthy sample, 22.0% had two or more scores below 1 SD (i.e., the 16th percentile), compared with 51.8% of the mTBI sample; 18.8% of the healthy sample and 44.6% of the mTBI sample had one or more scores ≤5th percentile. Rates of low scores in the healthy sample were influenced by cutoffs and race/ethnicity. Importantly, some healthy soldiers obtain at least one low score on ANAM4. These base rate analyses can improve the methodology for interpreting ANAM4 performance in clinical practice and research.
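
The base-rate computation here counts, per soldier, how many battery scores fall below a cutoff and then reports the proportion of the sample meeting a threshold count. A minimal sketch with hypothetical percentile ranks (not the study's data):

```python
# Each inner list holds one person's percentile ranks across the
# battery's subtests. Data are hypothetical, not the study's.
sample = [
    [55, 60, 12, 48, 71, 33, 50],
    [22, 14, 9, 40, 5, 18, 30],
    [80, 65, 70, 90, 60, 75, 85],
    [45, 15, 50, 62, 13, 55, 40],
]

def base_rate(sample, cutoff, min_low):
    """Proportion of people with at least `min_low` scores below `cutoff`."""
    flagged = sum(
        1 for scores in sample
        if sum(s < cutoff for s in scores) >= min_low
    )
    return flagged / len(sample)

# Share of the sample with two or more scores below the 16th percentile.
print(f"{base_rate(sample, cutoff=16, min_low=2):.0%}")  # 50% here
```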


Concussion | 2017

A review of the validity of computerized neurocognitive assessment tools in mild traumatic brain injury assessment

J Arrieux; Wesley R. Cole; Angelica P Ahrens

Computerized neurocognitive assessment tools (NCATs) offer potential advantages over traditional neuropsychological tests in postconcussion assessments. However, their psychometric properties and clinical utility are still questionable. The body of research on the validity and clinical utility of NCATs offers some support for aspects of validity (e.g., convergent validity) and some ability to distinguish between concussed individuals and controls, though questions remain about the validity of these tests and their clinical utility, especially outside of the acute injury timeframe. In this paper, we provide a comprehensive summary of the existing validity literature for four commonly used and studied NCATs (Automated Neuropsychological Assessment Metrics, CNS Vital Signs, CogState, and Immediate Post-Concussion Assessment and Cognitive Testing) and lay the groundwork for future investigations.


Archives of Clinical Neuropsychology | 2018

Performance and Symptom Validity Testing as a Function of Medical Board Evaluation in U.S. Military Service Members with a History of Mild Traumatic Brain Injury

Patrick Armistead-Jehle; Wesley R. Cole; Robert L. Stegman

Objective: The study was designed to replicate and extend previous findings demonstrating high rates of invalid neuropsychological testing in military service members (SMs) with a history of mild traumatic brain injury (mTBI) assessed in the context of a medical evaluation board (MEB). Method: Two hundred thirty-one active duty SMs (61 of whom were undergoing an MEB) underwent neuropsychological assessment. Performance validity (Word Memory Test) and symptom validity (MMPI-2-RF) test data were compared across those evaluated within disability (MEB) and clinical contexts. Results: As in previous studies, significantly more individuals in the MEB context failed performance validity testing (MEB = 57%, non-MEB = 31%) and symptom validity testing (MEB = 57%, non-MEB = 22%), and performance validity test failure had a notable effect on cognitive test scores. Performance and symptom validity test failure rates did not vary as a function of the reason for disability evaluation when divided into behavioral versus physical health conditions. Conclusions: These data are consistent with past studies and extend them by including symptom validity testing and investigating the effect of the reason for MEB. This and previous studies demonstrate that more than 50% of SMs seen in the context of an MEB will fail performance validity tests and over-report on symptom validity measures. These results emphasize the importance of using both performance and symptom validity testing when evaluating SMs with a history of mTBI, especially if they are being seen for disability evaluations, in order to ensure the accuracy of cognitive and psychological test data.


Archives of Clinical Neuropsychology | 2018

A Comparison of Four Computerized Neurocognitive Assessment Tools to a Traditional Neuropsychological Test Battery in Service Members with and without Mild Traumatic Brain Injury

Wesley R. Cole; J Arrieux; Brian J. Ivins; Karen Schwab; Felicia M. Qashu

Objective: Computerized neurocognitive assessment tools (NCATs) are often used as screening tools to identify cognitive deficits after mild traumatic brain injury (mTBI). However, differing methodology across studies makes it difficult to reach a consensus regarding the validity of NCATs. Thus, studies in which multiple NCATs are administered to the same sample using the same methodology are warranted. Method: We investigated the validity of four NCATs: the ANAM4, CNS-VS, CogState, and ImPACT. Two randomly assigned NCATs and a battery of traditional neuropsychological (NP) tests were administered to healthy control active duty service members (n = 272) and to service members within 7 days of an mTBI (n = 231). Analyses included correlations between NCAT and NP test scores to investigate convergent and discriminant validity, and regression analyses to identify the unique variance in NCAT and NP scores attributable to group status. Effect sizes (Cohen's f2) were calculated to guide interpretation of the data. Results: Only 37 (0.6%) of the 5,655 correlations calculated between NCATs and NP tests are large (i.e., r ≥ 0.50). The majority of correlations are small (i.e., 0.30 > r ≥ 0.10), with no clear patterns suggestive of convergent or discriminant validity between the NCATs and NP tests. Though there are statistically significant group differences across most NCAT and NP test scores, the unique variance accounted for by group status is minimal (i.e., semipartial R2 ≤ 0.033, 0.024, 0.062, and 0.011 for ANAM4, CNS-VS, CogState, and ImPACT, respectively), with effect sizes indicating small to no meaningful effect. Conclusion: Though the results are not overly promising for the validity of the four NCATs we investigated, traditional methods of investigating psychometric properties may not be appropriate for computerized tests. We offer several conceptual and methodological considerations for future studies regarding the validity of NCATs.
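
Cohen's f2 for the unique contribution of group status can be derived from the model R2 with and without the group term. A minimal sketch of that conversion (the R2 values below are illustrative, not the paper's):

```python
def cohens_f2(r2_full, r2_reduced):
    """Cohen's f^2 for the terms added to the reduced model."""
    return (r2_full - r2_reduced) / (1 - r2_full)

# Illustrative R^2 values for models with and without group status
# (mTBI vs. control); the difference is the semipartial R^2.
r2_full, r2_reduced = 0.10, 0.07
f2 = cohens_f2(r2_full, r2_reduced)
# Conventional benchmarks: 0.02 small, 0.15 medium, 0.35 large.
print(f"f2 = {f2:.3f}")  # 0.033: at the "small" threshold
```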

Collaboration


Dive into Wesley R. Cole's collaborations.

Top Co-Authors

J Arrieux
Womack Army Medical Center

Karen Schwab
Walter Reed Army Medical Center

Grant L. Iverson
Spaulding Rehabilitation Hospital

Rael T. Lange
Walter Reed National Military Medical Center

Noah D. Silverberg
University of British Columbia

Ann I. Scher
Uniformed Services University of the Health Sciences

Ariel J. Lang
University of California

Jesse R. Fann
University of Washington