
Publication


Featured research published by Tara L. Victor.


Clinical Neuropsychologist | 2009

Interpreting the Meaning of Multiple Symptom Validity Test Failure

Tara L. Victor; Kyle Brauer Boone; J. Greg Serpa; Jody Buehler; Elizabeth Ziegler

While it is recommended that judgments regarding the credibility of test performance be based on the results of more than one effort indicator, and recent efforts have been made to improve interpretation of multiple effort test failure, the field currently lacks adequate guidelines for using multiple measures of effort in concert with one another. A total of 103 patients were referred for outpatient neuropsychological evaluation, which included multiple measures of negative response bias embedded in standard test batteries. Using any pairwise failure combination to predict diagnostic classification was superior (sensitivity = 83.8%, specificity = 93.9%, overall hit rate = 90.3%) to using any one test by itself and to using any three-test failure combination. Further, these results were comparable to those of logistic regression analyses using the embedded indicators as continuous predictors. Given its parsimony and clinical utility, the pairwise failure model is therefore a recommended criterion for identifying non-credible performance, although other important contextual factors and influences, which are also discussed, must be considered.
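The pairwise failure criterion described above amounts to a simple decision rule: flag a protocol as non-credible when the examinee fails any two (or more) embedded effort indicators, then evaluate the rule against a criterion classification. The sketch below is purely illustrative — the indicator data and the resulting values are hypothetical, not drawn from the study:

```python
# Illustrative sketch (not the authors' code) of a pairwise-failure rule:
# a protocol is flagged as non-credible when two or more embedded effort
# indicators are failed. All protocol data below are hypothetical.

def noncredible_by_pairwise_failure(failed_indicators):
    """Flag a protocol when at least one pair of effort indicators is failed."""
    return sum(failed_indicators) >= 2

def sensitivity_specificity(flags, criterion):
    """Compare rule flags against a criterion classification.

    criterion: True = non-credible according to the external criterion.
    Sensitivity = true positives / all criterion-positive cases;
    specificity = true negatives / all criterion-negative cases.
    """
    tp = sum(f and c for f, c in zip(flags, criterion))
    tn = sum((not f) and (not c) for f, c in zip(flags, criterion))
    fp = sum(f and (not c) for f, c in zip(flags, criterion))
    fn = sum((not f) and c for f, c in zip(flags, criterion))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical protocols: each lists pass/fail (True = failed) on four
# embedded indicators, paired with the criterion classification.
protocols = [
    ([True, True, False, False], True),    # 2 failures, non-credible
    ([True, False, False, False], False),  # 1 isolated failure, credible
    ([False, False, False, False], False), # no failures, credible
    ([True, True, True, False], True),     # 3 failures, non-credible
]
flags = [noncredible_by_pairwise_failure(p) for p, _ in protocols]
sens, spec = sensitivity_specificity(flags, [c for _, c in protocols])
```

On this toy data the rule classifies every protocol correctly; the study's reported 83.8% sensitivity and 93.9% specificity reflect real clinical data, where single isolated failures in credible patients make a one-test criterion too loose and a three-test criterion too strict.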


Clinical Neuropsychologist | 2009

Dementia and Effort Test Performance

Andy C. Dean; Tara L. Victor; Kyle Brauer Boone; Linda Philpott; Ryan A. Hess

Research on the performance of patients with dementia on tests of effort is particularly limited. We examined archival data from 214 non-litigating patients with dementia on 18 effort indices derived from 12 tests (WAIS-III/WAIS-R Digit Span and Vocabulary, Dot Counting Test, Warrington Recognition Memory Test–Words, WMS-III Logical Memory, Rey Word Recognition Memory Test, Finger Tapping, b-Test, Rey 15-Item, Test of Memory Malingering, Rey Auditory Verbal Learning Test, and Rey Complex Figure Test). Results indicated that recommended cut-offs for Digit Span indicators (Vocabulary Minus Digit Span and four-digit forward span time score) provided ≥90% specificity across participants, while the majority of other effort tests displayed specificities in the 30–70% range. Analyses of test specificity as a function of Mini-Mental State Examination (MMSE) score and specific dementia diagnosis are provided, as well as adjustments to cut-offs to maintain specificity where feasible.


Clinical Neuropsychologist | 2008

The Relationship of IQ to Effort Test Performance

Andy C. Dean; Tara L. Victor; Kyle Brauer Boone; Ginger Arnold

The relationship between IQ and nine effort indicators was examined in a sample of 189 neuropsychology clinic outpatients who were not in litigation or attempting to obtain disability. Participants with the lowest IQ (50–59) failed approximately 60% of the effort tests, while patients with an IQ of 60 to 69 failed 44% of effort indicators, and individuals with borderline IQ (70 to 79) exhibited a 17% failure rate. All patients with IQ < 70 failed at least one effort test. Cutoffs for the Warrington Recognition Memory Test (Words) and Finger Tapping maintained the highest specificities in low IQ samples.


Clinical Neuropsychologist | 2010

Examination of various WMS-III logical memory scores in the assessment of response bias.

Kirsty E. Bortnik; Kyle Brauer Boone; Sarah D. Marion; Stacy Amano; Elizabeth A. Ziegler; Tara L. Victor; Michelle A. Zeller

The assessment of response validity during neuropsychological evaluation is an integral part of the testing process. Research has increasingly focused on the use of “embedded” effort measures (derived from standard neuropsychological tasks) because they do not require additional administration time and are less likely to be identified as effort indicators by test takers because of their primary focus as measures of cognitive function. The current study examined the clinical utility of various WMS-III Logical Memory scores in detecting response bias, as well as the Rarely Missed Index, an embedded effort indicator derived from the WMS-III Logical Memory Delayed Recognition subtest. The Rarely Missed Index cut-off only identified 24.1% of 63 non-credible participants (at ≥90% specificity in 125 credible patients), and cut-offs for other Logical Memory variables were in fact found to be more sensitive to non-credible performance. A new indicator, consisting of the weighted combination of the two most sensitive Logical Memory subtest scores (Logical Memory II raw score and Logical Memory Delayed Recognition raw score), was associated with 53% to 60% sensitivity, and thus may be an effective adjunct when utilized in conjunction with other validated effort indicators and collateral information in identifying non-credible performance.


Archives of Clinical Neuropsychology | 2013

Cross-validation of the Lu and colleagues (2003) Rey-Osterrieth Complex Figure Test effort equation in a large known-group sample

Seaaira D. Reedy; Kyle Brauer Boone; Maria E. Cottingham; Debra F. Glaser; Po H. Lu; Tara L. Victor; Elizabeth Ziegler; Michelle A. Zeller; Mathew J. Wright

A Rey-Osterrieth Complex Figure Test (ROCFT) equation incorporating copy and recognition was found to be useful in detecting negative response bias in neuropsychological assessments (ROCFT Effort Equation; Lu, P. H., Boone, K. B., Cozolino, L., & Mitchell, C. (2003). Effectiveness of the Rey-Osterrieth Complex Figure Test and the Meyers and Meyers recognition trial in the detection of suspect effort. Clinical Neuropsychologist, 17, 426-440). In the current cross-validation of this equation, the credible patient group (n = 146; 124 with equation data) outperformed the noncredible group (n = 157; 115 with equation data) on copy, 3-min recall, total recognition correct, and the Effort Equation, but the latter was most effective in classifying subjects. A cut-off of ≤50 maintained specificity of 90% and achieved sensitivity of 80%. Results of the current cross-validation provide corroboration that the ROCFT Effort Equation is an effective measure of neurocognitive response bias.


Clinical Neuropsychologist | 2014

Apparent effect of type of compensation seeking (disability versus litigation) on performance validity test scores may be due to other factors.

Maria E. Cottingham; Tara L. Victor; Kyle Brauer Boone; Elizabeth Ziegler; Michelle A. Zeller

Neuropsychologists use performance validity tests (PVTs; Larrabee, 2012) to ensure that results of testing are reflective of the test taker’s true neurocognitive ability, and their use is recommended in all compensation-seeking settings. However, whether the type of compensation context (e.g., personal injury litigation versus disability seeking) impacts the nature and extent of neurocognitive symptom feigning has not been adequately investigated. PVT performance was compared in an archival data set of noncredible individuals in either a personal injury litigation (n = 163) or a disability-seeking context (n = 201). Individuals were deemed noncredible based on meeting Slick, Sherman, and Iverson’s (1999) criteria including failure on at least two PVTs and a lack of congruency between their low cognitive scores and normal function in activities of daily living (ADLs). In general, disability seekers tended to perform in a less sophisticated manner than did litigants (i.e., they failed more indicators and did so more extensively). Upon further investigation, these differences were in part accounted for by type of diagnoses feigned; those seeking compensation for mental health diagnoses were more likely to feign or exaggerate a wide variety of cognitive deficits, whereas those with claimed medical diagnoses (i.e., traumatic brain injury) were more targeted in their attempts to feign and/or exaggerate neurocognitive compromise.


Clinical Neuropsychologist | 2010

Use of the WAIS-III picture completion subtest as an embedded measure of response bias.

Ryan E. Solomon; Kyle Brauer Boone; Deborah S. Miora; Sherry Skidmore; Maria E. Cottingham; Tara L. Victor; Elizabeth A. Ziegler; Michelle A. Zeller

In the present study a large sample of credible patients (n = 172) scored significantly higher than a large sample of noncredible participants (n = 195) on several WAIS-III Picture Completion variables: Age Adjusted Scaled Score, raw score, a “Rarely Missed” index (the nine items least often missed by credible participants), a “Rarely Correct” index (nine items correct <26% of the time in noncredible participants and with at least a 25 percentage-point lower endorsement rate as compared to credible participants), and a “Most Discrepant” index (the six items that were the most discrepant in correct endorsement between groups—at least a 40 percentage point difference). Comparison of the various scores showed that the “Most Discrepant” index outperformed all the others in identifying response bias (nearly 65% sensitivity at 92.8% specificity as compared to at most 59% sensitivity for the other scores). While no differences in Picture Completion scores were observed between less-educated (<12 years) and better-educated (≥12 years) credible participants, noncredible participants with <12 years of education scored significantly poorer than noncredible participants with 12 or more years of education. On the “Most Discrepant” index, 76.7% of less-educated noncredible participants were detected as compared to 58.3% of better-educated noncredible participants. Results of the current study suggest that the Picture Completion subtest of the WAIS-III is an effective measure of response bias, and that it may have a unique role in identifying suboptimal effort in less-educated test takers.


Clinical Neuropsychologist | 2008

Examination of the Impact of Ethnicity on the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Fake Bad Scale

Andy C. Dean; Kyle Brauer Boone; Michelle S. Kim; Ashley R. Curiel; David J. Martin; Tara L. Victor; Michelle A. Zeller; Yoshado K. Lang

The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Fake Bad Scale (FBS; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) has been shown to be sensitive to somatic over-endorsement. However, the impact of ethnicity has not been examined on the FBS, which is of concern given some studies that show increased rates of somatic endorsement in particular ethnic groups. We evaluated the FBS performance of 190 Caucasian American, Hispanic, and African American outpatients who were obtained from two different clinical settings, excluding those who were applying for disability or in litigation. We failed to find significant ethnic differences in mean FBS performance or in cut-off specificity rates. We did find evidence of a gender effect, supporting continued use of gender-specific FBS cutoffs.


Psychology, Learning and Teaching | 2012

The Impact of Mindful Awareness Practices on College Student Health, Well-Being, and Capacity for Learning: A Pilot Study

Kiyomi Yamada; Tara L. Victor

This preliminary study examined the feasibility and potential utility of mindful awareness practices (MAPs) in terms of enhancing student learning in the college classroom, as well as improving psychological well-being. One of two identical undergraduate psychology sections included a 10-minute MAP at the beginning of every class (mindfulness group n = 37; control group n = 23). Primary learning and secondary self-report outcomes were obtained. Controlling for significant demographic covariates, students in the mindfulness group demonstrated significant increases in mindful awareness traits and reductions in rumination and state anxiety compared with controls. While mindfulness intervention did not lead to significant improvement in academic performance across the semester, 81% of students self-reported positive effects of MAPs on their learning. It is concluded that it is feasible to incorporate MAPs into a regular college classroom. MAPs may help improve student psychological well-being. Although students perceived themselves to benefit from their mindfulness practice, further research is needed to examine the effects of MAPs on student academic performance.


Clinical Neuropsychologist | 2014

Comparison of Credible Patients of Very Low Intelligence and Non-Credible Patients on Neurocognitive Performance Validity Indicators

Klayton Smith; Kyle Brauer Boone; Tara L. Victor; Deborah S. Miora; Maria E. Cottingham; Elizabeth Ziegler; Michelle A. Zeller; Matthew J. Wright

The purpose of this archival study was to identify performance validity tests (PVTs) and standard IQ and neurocognitive test scores, which singly or in combination, differentiate credible patients of low IQ (FSIQ ≤ 75; n = 55) from non-credible patients. We compared the credible participants against a sample of 74 non-credible patients who appeared to have been attempting to feign low intelligence specifically (FSIQ ≤ 75), as well as a larger non-credible sample (n = 383) unselected for IQ. The entire non-credible group scored significantly higher than the credible participants on measures of verbal crystallized intelligence/semantic memory and manipulation of overlearned information, while the credible group performed significantly better on many processing speed and memory tests. Additionally, credible women showed faster finger-tapping speeds than non-credible women. The credible group also scored significantly higher than the non-credible subgroup with low IQ scores on measures of attention, visual perceptual/spatial tasks, processing speed, verbal learning/list learning, and visual memory, and credible women continued to outperform non-credible women on finger tapping. When cut-offs were selected to maintain approximately 90% specificity in the credible group, sensitivity rates were highest for verbal and visual memory measures (i.e., TOMM trials 1 and 2; Warrington Words correct and time; Rey Word Recognition Test total; RAVLT Effort Equation, Trial 5, total across learning trials, short delay, recognition, and RAVLT/RO discriminant function; and Digit Symbol recognition), followed by select attentional PVT scores (i.e., b Test omissions and time to recite four digits forward). When failure rates were tabulated across the seven most sensitive scores, a cut-off of ≥2 failures was associated with 85.4% specificity and 85.7% sensitivity, while a cut-off of ≥3 failures resulted in 95.1% specificity and 66.0% sensitivity. Results are discussed in light of extant literature and directions for future research.

Collaboration


Dive into Tara L. Victor's collaborations.

Top Co-Authors


Kyle Brauer Boone

Alliant International University


Elizabeth Ziegler

United States Department of Veterans Affairs


Deborah S. Miora

Alliant International University


Norman Abeles

Michigan State University


Annette Ermshar

Alliant International University
