Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where J Arrieux is active.

Publication


Featured research published by J Arrieux.


Journal of Clinical and Experimental Neuropsychology | 2017

The impact of administration order in studies of computerized neurocognitive assessment tools (NCATs)

Wesley R. Cole; J Arrieux; Elizabeth M. Dennison; Brian J. Ivins

Computerized neurocognitive assessment tools (NCATs) have become a common way to assess postconcussion symptoms. As there is increasing research directly comparing multiple NCATs to each other, it is important to consider the impact that order of test administration may have on the integrity of the results. This study investigates the impact of administration order in a study of four different NCATs: Automated Neuropsychological Assessment Metrics (ANAM4), CNS Vital Signs (CNS-VS), CogState, and Immediate Post-Concussion Assessment and Cognitive Test (ImPACT). A total of 272 healthy active duty Service Members were enrolled in this study. All participants were randomly assigned to take two of the four NCATs, with order of administration counterbalanced. Analyses investigated the effect of administration order alone (e.g., Time 1 versus Time 2), the effect of administration order combined with the impact of the specific NCAT received at Time 1, and the impact of the Time 1 NCAT alone on Time 2 score variability. Specifically, independent samples t tests were used to compare Time 1 and Time 2 scores within each NCAT. Additional t tests compared Time 1 to Time 2 scores with Time 2 scores grouped by the NCAT received at Time 1. One-way analysis of variance (ANOVA) was used to compare an NCAT's Time 2 scores grouped by the NCAT received at Time 1. Cohen's d effect sizes were calculated for all comparisons. The results revealed statistically significant order effects for CogState and CNS-VS, though with effect sizes generally indicating minimal practical value, and marginal or absent order effects for ANAM4 and ImPACT, with no clinically meaningful implications. Despite finding minimal order effects, clinicians should be mindful of the impact of administering multiple NCATs in a single session. Future studies should continue to be designed to minimize the potential effect of test administration order.
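The Cohen's d values reported in this abstract are standard two-sample effect sizes. As a minimal sketch of the pooled-SD version of that statistic (this is not the authors' code, and the example scores below are made up for illustration, not study data):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 in the denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical Time 1 vs. Time 2 scores (placeholders, not data from the study)
time1 = [95.0, 102.0, 98.0, 110.0, 105.0]
time2 = [100.0, 108.0, 101.0, 114.0, 109.0]
d = cohens_d(time1, time2)
```

By the usual rules of thumb, |d| around 0.2 is small and 0.8 large, which is the scale on which the abstract's "minimal practical value" judgment rests.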


Concussion | 2017

A review of the validity of computerized neurocognitive assessment tools in mild traumatic brain injury assessment

J Arrieux; Wesley R. Cole; Angelica P Ahrens

Computerized neurocognitive assessment tools (NCATs) offer potential advantages over traditional neuropsychological tests in postconcussion assessments. However, their psychometric properties and clinical utility remain questionable. The body of research regarding the validity and clinical utility of NCATs suggests some support for aspects of validity (e.g., convergent validity) and some ability to distinguish between concussed individuals and controls, though questions remain regarding the validity of these tests and their clinical utility, especially outside of the acute injury timeframe. In this paper, we provide a comprehensive summary of the existing validity literature for four commonly used and studied NCATs (Automated Neuropsychological Assessment Metrics, CNS Vital Signs, CogState, and Immediate Post-Concussion Assessment and Cognitive Testing) and lay the groundwork for future investigations.


Archives of Clinical Neuropsychology | 2018

A Comparison of Four Computerized Neurocognitive Assessment Tools to a Traditional Neuropsychological Test Battery in Service Members with and without Mild Traumatic Brain Injury

Wesley R. Cole; J Arrieux; Brian J. Ivins; Karen Schwab; Felicia M. Qashu

Objective: Computerized neurocognitive assessment tools (NCATs) are often used as a screening tool to identify cognitive deficits after mild traumatic brain injury (mTBI). However, differing methodology across studies makes it difficult to reach a consensus regarding the validity of NCATs. Thus, studies in which multiple NCATs are administered to the same sample using the same methodology are warranted. Method: We investigated the validity of four NCATs: the ANAM4, CNS-VS, CogState, and ImPACT. Two randomly assigned NCATs and a battery of traditional neuropsychological (NP) tests were administered to healthy control active duty service members (n = 272) and to service members within 7 days of an mTBI (n = 231). Analyses included correlations between NCAT and NP test scores to investigate convergent and discriminant validity, and regression analyses to identify the unique variance in NCAT and NP scores attributed to group status. Effect sizes (Cohen's f²) were calculated to guide interpretation of the data. Results: Only 37 (0.6%) of the 5,655 correlations calculated between NCATs and NP tests were large (i.e., r ≥ 0.50). The majority of correlations were small (i.e., 0.30 > r ≥ 0.10), with no clear patterns suggestive of convergent or discriminant validity between the NCATs and NP tests. Though there were statistically significant group differences across most NCAT and NP test scores, the unique variance accounted for by group status was minimal (i.e., semipartial R² ≤ 0.033, 0.024, 0.062, and 0.011 for ANAM4, CNS-VS, CogState, and ImPACT, respectively), with effect sizes indicating small to no meaningful effect. Conclusion: Though the results are not overly promising for the validity of the four NCATs we investigated, traditional methods of investigating psychometric properties may not be appropriate for computerized tests. We offer several conceptual and methodological considerations for future studies regarding the validity of NCATs.
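Cohen's f² for the unique contribution of a predictor can be derived from a squared semipartial correlation, since the squared semipartial correlation equals the increment in model R² attributable to that predictor: f² = sR² / (1 − R²_full). A minimal sketch of that conversion (the sR² figures below are taken from the abstract; the full-model R² values are hypothetical placeholders, not figures from the paper):

```python
def f_squared(sr2, r2_full):
    """Cohen's f^2 for a predictor's unique contribution, treating the
    squared semipartial correlation (sr2) as the increment in R^2."""
    return sr2 / (1.0 - r2_full)

# sR^2 values for group status reported in the abstract, paired with
# placeholder full-model R^2 values chosen purely for illustration
examples = {
    "ANAM4": (0.033, 0.20),
    "CNS-VS": (0.024, 0.20),
}
effects = {name: f_squared(sr2, r2) for name, (sr2, r2) in examples.items()}
# Cohen's conventional benchmarks: f^2 = 0.02 small, 0.15 medium, 0.35 large
```

With these inputs the resulting f² values sit near the "small" benchmark, consistent with the abstract's conclusion of small to no meaningful effect.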


Clinical Neuropsychologist | 2016

Interpreting Change on the Neurobehavioral Symptom Inventory and the PTSD Checklist in Military Personnel

Heather G. Belanger; Rael T. Lange; Jason M. Bailie; Grant L. Iverson; J Arrieux; Brian J. Ivins; Wesley R. Cole

Objective: The purpose of this study was to examine the prevalence and stability of symptom reporting in a healthy military sample and to develop reliable change indices for two commonly used self-report measures in the military health care system. Participants and method: Participants were 215 U.S. active duty service members recruited from Fort Bragg, NC as normal controls as part of a larger study. Participants completed the Neurobehavioral Symptom Inventory (NSI) and the PTSD Checklist (PCL) twice, separated by approximately 30 days. Results: Depending on the endorsement level used (i.e., ratings of ‘mild’ or greater vs. ratings of ‘moderate’ or greater), approximately 2–15% of this sample met DSM-IV symptom criteria for Postconcussional Disorder across time points, while 1–6% met DSM-IV symptom criteria for Posttraumatic Stress Disorder. Effect sizes for change from Time 1 to Time 2 on individual symptoms were small (Cohen’s d = .01 to .13). The test–retest reliability was r = .78 for the NSI total score and r = .70 for the PCL. An eight-point change in symptom reporting represented reliable change on the NSI total score, with a seven-point change needed on the PCL. Conclusions: Postconcussion-like symptoms are not unique to mild TBI and are commonly reported in a healthy soldier sample. It is important for clinicians to use normative data when evaluating a service member or veteran and when evaluating the likelihood that a change in symptom reporting is reliable and clinically meaningful.
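One standard way to compute a reliable change threshold of the kind reported here is the Jacobson–Truax-style formula: SEM = SD·√(1 − r), Sdiff = √2·SEM, threshold = z·Sdiff. A minimal sketch, using the NSI retest reliability reported in the abstract (r = .78) but a placeholder baseline SD, since the paper's standard deviations are not reproduced here:

```python
import math

def reliable_change_threshold(sd_baseline, retest_r, z=1.96):
    """Smallest score change exceeding measurement error at ~95% confidence.
    SEM = SD * sqrt(1 - r); Sdiff = sqrt(2) * SEM; threshold = z * Sdiff."""
    sem = sd_baseline * math.sqrt(1.0 - retest_r)
    s_diff = math.sqrt(2.0) * sem
    return z * s_diff

# r = .78 for the NSI total score (from the abstract); the SD of 6.0 is a
# placeholder, not a value from the paper. With these inputs the threshold
# works out to roughly 7.8 points.
nsi_threshold = reliable_change_threshold(sd_baseline=6.0, retest_r=0.78)
```

Any change smaller than the threshold is plausibly measurement noise; a change at or beyond it is unlikely to occur by chance under the model's assumptions.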


Archives of Clinical Neuropsychology | 2013

Test–Retest Reliability of Four Computerized Neurocognitive Assessment Tools in an Active Duty Military Population

Wesley R. Cole; J Arrieux; Karen Schwab; Brian J. Ivins; Felicia M. Qashu; Steven Lewis


Journal of The International Neuropsychological Society | 2017

Intraindividual Cognitive Variability: An Examination of ANAM4 TBI-MIL Simple Reaction Time Data from Service Members with and without Mild Traumatic Brain Injury

Wesley R. Cole; Emma Gregory; J Arrieux; F. Jay Haran


Archives of Physical Medicine and Rehabilitation | 2018

Differences in Reaction Time Latency Error on the ANAM4 Across Three Computer Platforms

Brittney Roberson; J Arrieux; Katie Russell; Wesley R. Cole


Archives of Physical Medicine and Rehabilitation | 2018

A Psychometric Comparison of Performance Validity Measures on NCATs Following mTBI

Brittney Roberson; J Arrieux; Katie Russell; Wesley R. Cole


Archives of Clinical Neuropsychology | 2018

B-54: Agreement Between Brief Computerized Neurocognitive Assessment Tools and a Traditional Measure of Executive Function at Clinically Meaningful Performance Levels

Brian J. Ivins; J Arrieux; Karen Schwab; Wesley R. Cole


Archives of Clinical Neuropsychology | 2018

Assessment-2: What Are Computerized Neurocognitive Assessment Tools (NCATs) Actually Measuring? Using Principal Component Analyses to Compare NCATs to Traditional Neuropsychological Tests

Wesley R. Cole; A Ahrens; J Arrieux; B Roberson; K Russell; B Ivins

Collaboration


Dive into J Arrieux's collaborations.

Top Co-Authors

Wesley R. Cole (Womack Army Medical Center)

Karen Schwab (Walter Reed Army Medical Center)

Grant L. Iverson (Spaulding Rehabilitation Hospital)

Rael T. Lange (Walter Reed National Military Medical Center)

Angelica P Ahrens (Womack Army Medical Center)

Steven Lewis (Womack Army Medical Center)