Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John E. Meyers is active.

Publication


Featured research published by John E. Meyers.


Clinical Neuropsychologist | 1995

Rey complex figure test under four different administration procedures

John E. Meyers; Kelly R. Meyers

This study was undertaken to identify the relationship between the raw scores obtained on the Rey Complex Figure Test (CFT) under four different administration procedures; additionally, the effects of the administration procedures on the Recognition Trial (Meyers & Meyers, 1994) were examined. The Recognition Trial is a new instrument developed to assess recognition of various parts of the CFT. Many authors have presented a variety of administration procedures; however, no studies have examined the relationships among these procedures. The administration procedures used were as follows: (1) Copy, Immediate recall, 30-min recall, and Recognition Trial; (2) Copy, 3-min recall, 30-min recall, Recognition Trial; (3) Copy, Immediate recall, 3-min recall, 30-min recall, Recognition Trial; (4) Copy, 30-min recall, Recognition Trial. The results of the study indicate no significant difference in the 30-min recall score or on the Recognition Trial if an immediate/short-term reca...


Clinical Neuropsychologist | 2003

Neuropsychological impairment following traumatic brain injury: a dose-response analysis.

Martin L. Rohling; John E. Meyers; Scott R. Millis

Dikmen, Machamer, Winn, and Temkin (1995) administered the Halstead-Reitan Battery (HRB) to a sample of TBI patients. Similar patients were obtained from the second author (JEM) for two main purposes. First, we wished to determine whether there is a dose-response relationship between TBI severity and residual cognitive deficit. Second, we wished to determine whether Dikmen et al.'s results generalize to other TBI samples. Analyses of the Meyers sample replicated the analyses of the Dikmen sample. A significant dose-response relationship between loss of consciousness (LOC) and cognitive impairment was found using effect sizes for the Dikmen sample, as well as regression-based normative T scores for the Meyers sample. The two methods were highly correlated with one another: using mean scores for the six LOC-severity groups in the two samples yielded a correlation coefficient of r = .97, p < .0001. Results are presented for clinicians to use when assessing individual patients.


Archives of Clinical Neuropsychology | 2011

Embedded Symptom Validity Tests and Overall Neuropsychological Test Performance

John E. Meyers; Marie Volbrecht; Bradley N. Axelrod; Lorrie Reinsch-Boothby

A sample of 314 consecutive clinical and forensic referrals with mild traumatic brain injury was evaluated using the Meyers Neuropsychological Battery (MNB). Test performance was compared with performance on the embedded Symptom Validity Tests (SVTs), controlling for multicollinearity. Using the nine embedded SVTs in the MNB, the incidence of poor effort was 26% of the total sample. Involvement in litigation was related to more failures on the individual SVTs. The correlation between failed effort measures and the Overall Test Battery Mean (OTBM) was consistently negative, regardless of litigation status, in that more failures were associated with lower OTBM scores. The correlation between the number of SVTs failed and the OTBM was -.77. Our results are similar to those presented by Green, Rohling, Lees-Haley, and Allen (2001), who reported a correlation of .73 between failure on the Word Memory Test and performance on the OTBM. The results of the current study also indicate that 50% of the variance in neuropsychological testing can be accounted for by failures on internal SVTs.


Applied Neuropsychology | 2000

Assessment of Malingering in Chronic Pain Patients Using Neuropsychological Tests

John E. Meyers; Anh Diep

In recent years, validity checks built into neuropsychological tests have been successful at detecting malingering in litigating patients with mild brain injury. This study expanded on these findings and examined whether 6 neuropsychological tests could be used to detect malingering in litigant (n = 55) and nonlitigant (n = 53) patients claiming cognitive deficits due to chronic pain. The findings were encouraging. When patients were matched on age, gender, racial or ethnic background, years of education, and time postinjury, almost one third (29%) of patients in the litigant group failed 2 or more validity checks in these 6 neuropsychological tests, versus none (0%) of the patients in the nonlitigant group. This result calls into question the validity of the cognitive-deficit complaints of some litigant patients with chronic pain. Furthermore, the findings suggest that neuropsychological tests can be used as part of the assessment of chronic pain complainants. Further investigation of the validity markers in these 6 neuropsychological tests is recommended.


Clinical Neuropsychologist | 2009

40 Plus or Minus 10, a New Magical Number: Reply to Russell

Glenn J. Larrabee; Scott R. Millis; John E. Meyers

Russell (2009, this issue) has criticized our recently published investigation (Larrabee, Millis, & Meyers, 2008) comparing the diagnostic discrimination of an ability-focused neuropsychological battery (AFB) to that of the Halstead-Reitan Battery (HRB). He contended that our symptom validity test (SVT) screening, which excluded 43% of brain-dysfunction patients and 15% of control patients using computations based on Digit Span, inappropriately excluded patients with brain damage, due to the correlation of Digit Span with the Average Index Score (AIS). Our exclusion of 43% of brain-dysfunction participants matches the 40-50% or higher frequency of invalid neuropsychological data reported by numerous studies across a wide range of settings with external incentive. Moreover, our study was not an investigation of malingering; rather, we screened our data to ensure that only valid data remained, for the most meaningful comparison of the AFB to the HRB. Russell's argument that Digit Span is correlated with brain damage confounds the criterion, AIS (a composite cognitive score), with the predictor, Digit Span (another cognitive score), rather than employing a truly independent neurologic criterion. The fact that Digit Span is notoriously insensitive to brain dysfunction underscores the robustness of our findings, for if we inappropriately excluded brain-damaged patients for low Digit Span, as Russell claimed, then our sample reflected a more subtle degree of brain dysfunction, and the superiority of the AFB over the HRB was demonstrated under the most challenging of discriminative conditions.


Assessment | 2008

Classification accuracy of MMPI-2 validity scales in the detection of pain-related malingering: a known-groups study.

Kevin J. Bianchini; Joseph L. Etherton; Kevin W. Greve; Matthew T. Heinly; John E. Meyers

The purpose of this study was to determine the accuracy of Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain, using a hybrid clinical-known-groups/simulator design. The sample consisted of patients without financial incentive (n = 23), nonmalingering patients with financial incentive (n = 34), patients definitively determined to be malingering based on published criteria (n = 32), and college students asked to simulate pain-related disability (n = 26). The MMPI-2 validity scales differentiated malingerers from nonmalingerers with a high degree of accuracy. Hypochondriasis and Hysteria were also effective. For all variables except Scale L, more extreme scores were associated with higher specificity. This study demonstrates that the MMPI-2 is capable of differentiating intentional exaggeration from the effects on symptom report of chronic pain, genuine psychological disturbance, and the concurrent stress associated with pursuing a claim in a medico-legal context.


Clinical Neuropsychologist | 2008

Sensitivity to Brain Dysfunction of the Halstead-Reitan vs an Ability-Focused Neuropsychological Battery

Glenn J. Larrabee; Scott R. Millis; John E. Meyers

We compared the sensitivity to brain dysfunction of an ability-focused neuropsychological battery (AFB), as a proxy for the core of a flexible battery, to that of the Halstead-Reitan Battery (HRB). The AFB was designed to represent constructs of language function, fine motor skill, working memory, processing speed, verbal and visual memory, and verbal and visual abstraction and problem solving. Receiver operating characteristic (ROC) analysis yielded an area under the curve (AUC) of .86 for the AFB versus .83 for the HRB (p = .50) for discriminating 54 patients with brain dysfunction due to various etiologies from 69 non-neurologic medical controls. Additionally, Bayesian Model Averaging selected four tests from the combined set of AFB and HRB subtests, plus Trail Making B, that optimally discriminated the brain-dysfunction patients from the medical control patients: H-Words, Grooved Pegboard, Finger Tapping, and Trail Making B. These data support the current mainstream practice in neuropsychology of using an AFB (flexible battery) to assess brain dysfunction. In particular, tests involving processing speed appear to be among the most sensitive measures of brain dysfunction. The data do not support the superiority of the HRB over AFB approaches.


Archives of Clinical Neuropsychology | 2002

Dichotic listening: expanded norms and clinical application

John E. Meyers; Richard J. Roberts; John D. Bayless; Kurt Volkert; Paul E. Evitts

The objective of this study was to provide an expanded normative base for the Dichotic Word Listening Test (DWLT), with particular emphasis on the performance of older individuals. The normative sample consisted of 336 community-dwelling volunteers. These new norms were used to compare several groups of neurologically impaired patients. The DWLT was found to be sensitive to the presence of brain injury, and also to the degree of acute injury as measured by loss of consciousness. The short-form version of the DWLT showed 100% specificity, with sensitivity ranging from 60% for mildly brain-injured patients to 80% for more severely brain-injured patients. The respective sensitivities for left CVA and right CVA were 55% and 88%. The present findings suggest that the DWLT is a valid and easy-to-use clinical tool.


Clinical Neuropsychologist | 1994

Recognition Subtest for the Complex Figure

John E. Meyers; Donald Lange

This is a series of three related studies using the Rey-Osterrieth Complex Figure (CFT) and a new Recognition Subtest for the CFT. A comparison of scores across three matched groups of 30 subjects each (Brain-Injured, Psychiatric, and Normal) showed that the Recognition Subtest was useful in discriminating between the groups, and a discriminant analysis was computed. Next, a preliminary normative group of 208 normal subjects was collected for the CFT and the Recognition Subtest. Finally, a study comparing subjects with minor brain injury (no loss of consciousness) to those who were more impaired (with loss of consciousness) was conducted. This experiment demonstrated that the discriminant function was most effective in discriminating brain-injured subjects from normals and from subjects with minor brain injuries. The inclusion of the Recognition Subtest with the CFT increases the breadth and discriminant ability of the CFT and provides a dimension not previously available with the CFT alone.


Clinical Neuropsychologist | 2014

Finger Tapping Test Performance as a Measure of Performance Validity

Bradley N. Axelrod; John E. Meyers; Jeremy J. Davis

The Finger Tapping Test (FTT), administered in most standard neuropsychological evaluations, has been presented as an embedded measure of performance validity. The present study evaluated the utility of three different scoring systems intended to detect invalid performance on the FTT. The scoring systems were evaluated in neuropsychology cases from clinical and independent practices, in which credible or non-credible performance was determined by passing all performance validity measures or failing two or more validity indices, respectively. Each FTT scoring method showed a specificity rate of approximately 90% and a sensitivity of slightly more than 40%. When suboptimal performance was defined as failure on any of the three scoring methods, specificity was unchanged and sensitivity improved to 50%. The results are discussed in terms of the utility of combining multiple scoring measures for the same test, as well as the benefits of embedded measures administered over the duration of the evaluation.

Collaboration


Dive into John E. Meyers's collaborations.

Top Co-Authors

Marie Volbrecht, University of South Dakota
Martin L. Rohling, University of South Alabama
Amy Junghyun Lee, Brigham Young University–Hawaii
Zachary W. Rupp, Brigham Young University–Hawaii
Kevin W. Greve, University of New Orleans