Publications

Featured research published by George K. Henry.


Clinical Neuropsychologist | 2010

Official position of the American Academy of Clinical Neuropsychology on serial neuropsychological assessments: the utility and challenges of repeat test administrations in clinical and forensic contexts

Robert L. Heilbronner; Jerry J. Sweet; Deborah K. Attix; Kevin R. Krull; George K. Henry; Robert P. Hart

Serial assessments are now common in neuropsychological practice, and have a recognized value in numerous clinical and forensic settings. These assessments can aid in differential diagnosis, tracking neuropsychological strengths and weaknesses over time, and managing various neurologic and psychiatric conditions. This document provides a discussion of the benefits and challenges of serial neuropsychological testing in the context of clinical and forensic assessments. Recommendations regarding the use of repeated testing in neuropsychological practice are provided.


Clinical Neuropsychologist | 2005

Probable malingering and performance on the Test of Variables of Attention

George K. Henry

Fifty subjects with mild head injury involved in personal injury litigation and two subjects referred for evaluation of their disability status underwent comprehensive neuropsychological examination including the Test of Variables of Attention (TOVA). Group status was determined by performance on symptom validity testing. Twenty-six subjects who failed symptom validity testing formed the probable malingering (PM) group, while 26 subjects who passed symptom validity testing comprised the not malingering (NM) group. Subjects in the PM group performed significantly worse on all TOVA variables relative to subjects in the NM group. Discriminant function analyses revealed that a TOVA omission-error score of ≥ 3 was the best predictor of group status. Malingering research employing a group of probable clinical malingerers has direct generalizability to real-world settings.


Clinical Neuropsychologist | 2008

Comparison of the Lees-Haley Fake Bad Scale, Henry-Heilbronner Index, and Restructured Clinical Scale 1 in Identifying Noncredible Symptom Reporting

George K. Henry; Robert L. Heilbronner; Wiley Mittenberg; Craig K. Enders; Shianna R. Stanczak

A known groups design investigated the comparative predictive validity of the 27-item MMPI-2 Restructured Scale 1 (RC1), the 43-item Lees-Haley Fake Bad Scale (FBS), and the 15-item Henry-Heilbronner Index (HHI) to identify noncredible symptom response sets in 63 personal injury litigants and disability claimants compared to 77 non-litigating head-injured controls. Logistic regression analyses revealed that the HHI and FBS were better predictors of group membership than the RC1. Results suggest that the FBS, HHI, and RC1 may be measuring different constructs. The HHI and FBS reflect an exaggeration of disability or illness-related behavior. Differences in scale construction are discussed. The RC1 may have greater relevance under external incentive conditions involving chronic pain patients, or clinical patients with no external incentive to exaggerate their symptom presentation.


Clinical Neuropsychologist | 1996

Verbal fluency task equivalence

Maureen Lacy; Paul A. Gore; Neil H. Pliskin; George K. Henry; Robert L. Heilbronner; Darryl P. Hamer

The research and clinical use of the verbal fluency paradigm has been hindered by the paucity of information on the equivalency of the various versions of this measure. Currently, the comparability of the two most commonly used forms of the letter fluency task, the “FAS” and “CFL” tests, is uncertain. The equivalence of these versions was investigated by examining their consistency across and within settings and disease processes. The two verbal fluency tasks were administered to 287 patients at two separate sites as part of a neuropsychological evaluation. Results showed that the CFL and FAS verbal fluency paradigms were equivalent across both settings and diagnostic groups with correlations ranging from .87 to .94. These findings may be useful for both researchers and clinicians who require equivalent measures for repeated testing. Furthermore, the demonstrated equivalency of the two paradigms may facilitate interpretation of research findings across laboratories.


American Journal of Sports Medicine | 1991

Neuropsychologic test performance in amateur boxers

Robert L. Heilbronner; George K. Henry; Martiece Carson-Brewer

Cognitive functions of 23 amateur boxers were assessed immediately before and after an amateur boxing event. A range of cognitive measures was employed, including tasks of verbal, figural, and incidental memory, motor functions, attention and concentration, and information processing speed. Compared to their prefight performance, boxers demonstrated impairments in verbal and incidental memory, but enhanced executive and motor functions postfight. There were no observed differences between winners and losers on any of the measures. The results are compared to other studies that have shown only minor changes in cognitive functions in amateur boxers compared to controls.


Psychopharmacology | 1988

Effects of protriptyline on vigilance and information processing in narcolepsy.

George K. Henry; Robert P. Hart; Joseph A. Kwentus; M. Jean Sicola

Vigilance, memory function, and response latency on the Sternberg short-term memory scanning task were examined in eight narcoleptic patients on and off medication. Off medication, half of the patients demonstrated reduced vigilance and all displayed diminished automatic memory encoding and longer response latencies on the Sternberg memory scanning procedure relative to the treated condition. Protriptyline normalized vigilance in half of the patients, while response latency and automatic information processing significantly improved in all. These findings are discussed with regard to the potential effect of the medication on central nervous system arousal.


Clinical Neuropsychologist | 2008

Empirical Derivation of a New MMPI-2 Scale for Identifying Probable Malingering in Personal Injury Litigants and Disability Claimants: The 15-Item Malingered Mood Disorder Scale (MMDS)

George K. Henry; Robert L. Heilbronner; Wiley Mittenberg; Craig K. Enders; Darci M. Roberts

A new 15-item MMPI-2 subscale, the Malingered Mood Disorder Scale (MMDS), was empirically derived from the original 32-item Malingered Depression Scale (MDS) of Steffan, Clopton, and Morgan (2003). The MMDS was superior to the original MDS in identification of symptom exaggeration in personal injury litigants and disability claimants compared to non-litigating head-injured controls. Logistic regression revealed that a cut score of ≥ 7 on the MMDS produced good specificity (93.4%) with an associated sensitivity of 54.8%. An MMDS score of ≥ 8 was associated with 100% positive predictive power, i.e., no false positive errors. These results suggest that the MMDS may be useful in identifying personal injury litigants and disability claimants who exaggerate emotional disturbance on the MMPI-2.


Clinical Neuropsychologist | 2009

Comparison of the MMPI-2 Restructured Demoralization Scale, Depression Scale, and Malingered Mood Disorder Scale in Identifying Non-credible Symptom Reporting in Personal Injury Litigants and Disability Claimants

George K. Henry; Robert L. Heilbronner; Wiley Mittenberg; Craig K. Enders; Kristen Domboski

A known groups design compared the ability of the 24-item MMPI-2 Restructured Clinical Demoralization Scale (RCd), the 57-item Depression Scale (Scale 2), and the 15-item Malingered Mood Disorder Scale (MMDS) to identify non-credible symptom response sets in 84 personal injury litigants and disability claimants compared to 77 non-litigating head-injured controls. All three scales showed large effect sizes (>0.80). Scale 2 was associated with the largest effect size (2.19), followed by the MMDS (1.65), and the RCd (0.85). Logistic regression analyses revealed that a cut score of ≥28 on the 57-item Scale 2 was associated with high specificity (96.1%) and sensitivity (76.2%), while a cut score of ≥16 on the 24-item RCd was less accurate (87% specificity and 50% sensitivity). Cut scores for the MMDS were not calculated as they were reported in a previous study. Results indicated that like the 15-item MMDS, the 57-item MMPI-2 Scale 2 may provide another empirically derived index with known error rates upon which examiners may rely to investigate hypotheses relative to exaggeration of illness-related behavior and impression management in forensic contexts involving PI litigants and disability claimants.


Clinical Neuropsychologist | 2013

Derivation of the MMPI-2-RF Henry-Heilbronner Index-r (HHI-r) scale.

George K. Henry; Robert L. Heilbronner; James Algina; Yasemin Kaya

The 15-item Henry-Heilbronner Index (HHI) was published in 2006 as an MMPI-2 embedded measure of psychological response validity. When the MMPI-2 was revised in 2008, only 11 of the 15 original HHI items were retained on the MMPI-2-RF, prohibiting use of the HHI as an embedded validity indicator on the MMPI-2-RF. Using the original HHI sample, an 11-item version of the HHI, the HHI-r, was evaluated for use as an embedded measure of psychological response validity for the MMPI-2-RF. The 11-item HHI-r was very similar to the HHI in classification accuracy. An HHI-r cutoff score of ≥7 was associated with a classification accuracy rate of 84.0%, good sensitivity (68.9%), and high specificity (93.2%) in identifying symptom exaggeration in personal injury and disability litigants versus non-litigating head-injured patients. These preliminary results suggest the HHI-r functions in a manner similar to the original HHI as a measure of psychological response validity, and may be used by psychologists and neuropsychologists as an MMPI-2-RF embedded validity indicator.


Applied Neuropsychology | 2011

Noncredible performance in individuals with external incentives: empirical derivation and cross-validation of the Psychosocial Distress Scale (PDS).

George K. Henry; Robert L. Heilbronner; Wiley Mittenberg; Craig K. Enders; Abigail Stevens; Moira Dux

Using a known groups design, a new Minnesota Multiphasic Personality Inventory (MMPI-2) subscale, the 20-item Psychosocial Distress Scale (PDS), was empirically derived and cross-validated. The PDS demonstrated good classification accuracy between subjects under external incentive vs. no incentive conditions. In the initial calibration sample (N = 84) a cut score of ≥10 on the PDS was associated with good classification accuracy (85.7%), high specificity (90.0%), and adequate sensitivity (81.8%). Under cross-validation conditions (N = 83) a cut score of ≥10 on the PDS was also associated with nearly identical classification accuracy (86.5%), specificity (91.89%), and sensitivity (82.61%). A cut score of ≥12 was associated with 100% positive predictive power; that is, no false-positive errors in both the initial calibration sample and the subsequent cross-validation sample. The current study suggests that in addition to noncredible cognitive performance, civil litigants and disability claimants may overreport psychosocial complaints that can be identified and that the scale may generalize to other settings and patient groups.

Collaboration

George K. Henry's top co-authors and their affiliations:

Wiley Mittenberg (Nova Southeastern University)
Paul Buck (University of Oklahoma Health Sciences Center)
Russell L. Adams (University of Oklahoma Health Sciences Center)
Abigail Stevens (Nova Southeastern University)
Allison Myers (Nova Southeastern University)
Darci M. Roberts (Nova Southeastern University)