Julie K. Lynch
State University of New York System
Publications
Featured research published by Julie K. Lynch.
Assessment | 2007
Lyndsey Bauer; Sid E. O'Bryant; Julie K. Lynch; Robert J. McCaffrey; Jerid M. Fisher
Assessing effort level during neuropsychological evaluations is critical to support the accuracy of cognitive test scores. Many instruments are designed to measure effort, yet they are not routinely administered in neuropsychological assessments. The Test of Memory Malingering (TOMM) and the Word Memory Test (WMT) are commonly administered symptom validity tests with sound psychometric properties. This study examines the use of TOMM Trial 1 and WMT Immediate Recognition (IR) trial scores as brief screening tools for insufficient effort through an archival analysis of a combined sample of mild head-injury litigants (N = 105) who were assessed in forensic private practices. Results show that both trials demonstrate strong diagnostic accuracy; calculations of positive and negative predictive power are presented for a range of base rates. These results support the utility of Trial 1 of the TOMM and the WMT IR trial as screening methods for the assessment of insufficient effort in neuropsychological assessments.
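The positive and negative predictive power values reported across base rates follow directly from Bayes' theorem. A minimal sketch in Python, using hypothetical sensitivity, specificity, and base-rate values for illustration (not the study's actual figures):

```python
def predictive_power(sensitivity, specificity, base_rate):
    """Positive and negative predictive power via Bayes' theorem."""
    tp = sensitivity * base_rate                # true positives
    fp = (1 - specificity) * (1 - base_rate)    # false positives
    fn = (1 - sensitivity) * base_rate          # false negatives
    tn = specificity * (1 - base_rate)          # true negatives
    ppp = tp / (tp + fp)   # P(insufficient effort | positive screen)
    npp = tn / (tn + fn)   # P(sufficient effort | negative screen)
    return ppp, npp

# Illustrative values: a screen with .90 sensitivity and .95 specificity
# at a .30 base rate of insufficient effort
ppp, npp = predictive_power(0.90, 0.95, 0.30)
```

As the base rate drops, positive predictive power falls while negative predictive power rises, which is why such tables are reported for a range of base rates rather than a single value.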
Archives of Clinical Neuropsychology | 2012
Andrea S. Miele; Jessica H. Gunner; Julie K. Lynch; Robert J. McCaffrey
Symptom validity assessment is an important part of neuropsychological evaluation. There are currently several free-standing symptom validity tests (SVTs), as well as a number of empirically derived embedded validity indices, that have been developed to assess whether an examinee is putting forth an optimal level of effort during testing. The use of embedded validity indices is attractive because they do not increase overall testing time and may also be less vulnerable to coaching. In addition, there are some instances where embedded validity indices are the only tool available to the neuropsychological practitioner for assessing an examinee's level of effort. As with free-standing measures, the sensitivity and specificity of embedded validity indices to suboptimal effort vary. The present study evaluated the diagnostic validity of 17 embedded validity indices by comparing performance on these indices to performance on combinations of free-standing SVTs. Results from the current medico-legal sample revealed that, of the embedded validity indices, Reliable Digit Span had the best classification accuracy; however, the findings do not support the use of this embedded validity index in the absence of free-standing SVTs.
Clinical Neuropsychologist | 1996
Robert J. McCaffrey; Jerid M. Fisher; Barry A. Gold; Julie K. Lynch
State and federal laws and court decisions that address requests for the presence or absence of third-party observers during forensic evaluations are reviewed, as are the legal arguments for both their inclusion and exclusion. Potential sources of interference created by an observer's presence during the neuropsychological evaluation are outlined with reference to the Ethical Principles of Psychologists and Code of Conduct of the American Psychological Association, the Specialty Guidelines for Forensic Psychologists of the Committee on Ethical Guidelines for Forensic Psychologists, and the Standards for Educational and Psychological Testing. The relevant empirical literature dealing with the phenomenon of social facilitation is also presented. Guidelines are offered for use by the neuropsychologist who receives a request for observation by a third party.
Archives of Clinical Neuropsychology | 2012
Jessica H. Gunner; Andrea S. Miele; Julie K. Lynch; Robert J. McCaffrey
The determination of examinee effort is an important component of a neuropsychological evaluation and relies heavily on the use of symptom validity tests (SVTs) such as the Test of Memory Malingering (TOMM) and the Word Memory Test (WMT). Diagnostic utility of SVTs varies. The sensitivity of traditional TOMM criteria to suboptimal effort is low. An index of response consistency across three trials of the TOMM was developed, denoted the Albany Consistency Index (ACI). This index identified a large proportion of examinees classified as optimal effort using traditional TOMM interpretive guidelines but suboptimal effort using the WMT profile analysis. In addition, previous research was extended, demonstrating a relationship between examinee performance on SVTs and neuropsychological tests. Effort classification using the ACI predicted the performance on the Global Memory Index from the Memory Assessment Scales. In conclusion, the ACI was a more sensitive indicator of suboptimal effort than traditional TOMM interpretive guidelines.
Applied Neuropsychology | 2013
Jessica H. Stenclik; Andrea S. Miele; Graham M. Silk-Eglit; Julie K. Lynch; Robert J. McCaffrey
Accurate determination of performance validity is paramount in any neuropsychological assessment. Numerous freestanding symptom validity tests, like the Test of Memory Malingering (TOMM), have been developed to assist in this process; however, research and clinical experience have suggested that each may not function with the same classification accuracy. In an effort to increase the TOMM's ability to accurately classify performance validity, recent research has investigated the use of nonstandard cutoff scores. The purpose of this study was to evaluate two nonstandard cutoff scores (<49 on Trial 2 or the Retention Trial, or ≤39 on Trial 1) applied to the TOMM in a medicolegal sample of mild traumatic brain injury litigants. Both descriptive and inferential statistics found that the cutoff of <49 on Trial 2 or the Retention Trial was the most sensitive to invalid performance as compared with both the standard TOMM criteria and the cutoff of ≤39. These findings support the use of nonstandard cutoffs to increase the TOMM's classification accuracy.
Applied Neuropsychology | 2015
Graham M. Silk-Eglit; Jessica H. Stenclik; Andrea S. Miele; Julie K. Lynch; Robert J. McCaffrey
Several studies have documented improvements in the classification accuracy of performance validity tests (PVTs) when they are combined to form aggregated models. Fewer studies have evaluated the impact of aggregating additional PVTs and changing the classification threshold within these models. A recent Monte Carlo simulation demonstrated that to maintain a false-positive rate (FPR) of ≤.10, only 1, 4, 8, 10, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. The current study sought to evaluate these findings with embedded PVTs in a sample of real-life litigants and to highlight a potential danger in analytic flexibility with embedded PVTs. Results demonstrated that to maintain an FPR of ≤.10, only 3, 7, 10, 14, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. Analyzing more than these numbers of PVTs resulted in a dramatic increase in the FPR. In addition, in the most extreme case, flexibility in analyzing and reporting embedded PVTs increased the FPR by 67%. Given these findings, a more objective approach to analyzing and reporting embedded PVTs should be introduced.
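The inflation of the family-wise false-positive rate (FPR) as more PVTs are analyzed can be illustrated with a simple binomial model. This is only a sketch under the same idealized assumptions the cited simulation work uses: independent tests sharing a common per-test false-positive rate (the .10 value here is illustrative):

```python
from math import comb

def family_fpr(n_tests, k_threshold, per_test_fpr):
    """P(a valid performer fails at least k_threshold of n_tests PVTs),
    assuming independent tests with a common per-test false-positive rate."""
    p = per_test_fpr
    return sum(comb(n_tests, k) * p**k * (1 - p)**(n_tests - k)
               for k in range(k_threshold, n_tests + 1))

# Flagging at >=2 failures: with 4 PVTs the family-wise FPR stays near .05,
# but with 10 PVTs it climbs above .25
fpr_4 = family_fpr(4, 2, 0.10)
fpr_10 = family_fpr(10, 2, 0.10)
```

This is why analyzing "just a few more" embedded indices at a fixed failure threshold can quietly push the FPR well past the conventional .10 ceiling.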
Archives of Clinical Neuropsychology | 2008
Julie E. Horwitz; Julie K. Lynch; Robert J. McCaffrey; Jerid M. Fisher
This study examined the utility of a screening battery developed by Reitan and Wolfson (2006) for predicting neuropsychological impairment on the Halstead-Reitan Neuropsychological Test Battery for adults. Using archival neuropsychological data from 69 litigants seen in a private practice setting, the Pearson correlation between the General Neuropsychological Deficit Scale (GNDS) score and the total Neuropsychological Deficit Scale (NDS) score from the screening battery (SBNDS) was .82. ROC curve analysis determined that the AUC was .88. Using a cutoff score of 9, as recommended by Reitan and Wolfson, the screening battery had excellent specificity but only fair sensitivity for identifying individuals with neuropsychological impairment on the Halstead-Reitan battery. Using a cutoff score of 8, the sensitivity and specificity of the screening battery were comparable to the findings of Reitan and Wolfson. The findings from this study indicate that the optimal cutoff score for the screening battery may vary across populations. The positive predictive power (PPP) and negative predictive power (NPP) were calculated at various base rates for cut scores with both sensitivity and specificity greater than .600, and this information is provided.
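The trade-off between sensitivity and specificity at different cut scores, which drives the choice between a cutoff of 8 versus 9 here, reduces to confusion-matrix arithmetic. A minimal sketch with entirely hypothetical deficit scores and impairment labels (the function name and data are illustrative, not the study's):

```python
def screen_accuracy(scores, impaired, cutoff):
    """Sensitivity and specificity of a deficit-style screening score,
    where scores >= cutoff are classified as impaired (higher = worse)."""
    tp = sum(1 for s, imp in zip(scores, impaired) if imp and s >= cutoff)
    fn = sum(1 for s, imp in zip(scores, impaired) if imp and s < cutoff)
    tn = sum(1 for s, imp in zip(scores, impaired) if not imp and s < cutoff)
    fp = sum(1 for s, imp in zip(scores, impaired) if not imp and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening scores and criterion-battery impairment labels
scores = [3, 5, 8, 6, 12, 4, 10, 9]
impaired = [False, False, True, True, True, False, True, False]
sens, spec = screen_accuracy(scores, impaired, cutoff=8)
```

Sweeping the cutoff over the observed score range and plotting sensitivity against 1 − specificity is exactly what the reported ROC/AUC analysis does.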
Neuropsychology Review | 1992
Robert J. McCaffrey; Julie K. Lynch
The literature purporting to demonstrate that clinical neuropsychology is of limited validity in the forensic setting is critically reviewed, and alternative interpretations are discussed. The methodological, procedural, conceptual, data-analytic, and survey/research-design limitations of this literature are evaluated.
Archives of Clinical Neuropsychology | 2012
Jessica H. Gunner; Andrea S. Miele; Julie K. Lynch; Robert J. McCaffrey
There is currently no standard criterion for determining abnormal test scores in neuropsychology; thus, a number of different criteria are commonly used. We investigated base rates of abnormal scores in healthy older adults using raw and T-scores from indices of the Wisconsin Card Sorting Test and Stroop Color-Word Test. Abnormal scores were examined cumulatively at seven cutoffs: >1.0, >1.5, >2.0, >2.5, and >3.0 standard deviations (SD) from the mean, as well as below the 10th and 5th percentiles. In addition, the number of abnormal scores at each of the seven cutoffs was also examined. Results showed that, for raw scores, ∼15% of individuals obtained scores >1.0 SD from the mean, around 10% scored below the 10th percentile, and 5% fell >1.5 SD from the mean or below the 5th percentile. Using T-scores, approximately 15%-20% and 5%-10% of scores were >1.0 and >1.5 SD from the mean, respectively, and roughly 15% and 5% fell below the 10th and 5th percentiles, respectively. Both raw and T-scores >2.0 SD from the mean were infrequent. Although the presence of a single abnormal score at 1.0 or 1.5 SD from the mean or at the 10th or 5th percentile was not unusual, the presence of ≥2 abnormal scores under any criterion was uncommon. Consideration of base rate data regarding the percentage of healthy individuals scoring in the abnormal range should help avoid classifying normal variability as neuropsychological impairment.
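The empirical base rates reported here can be compared against what a simple normal model predicts. A sketch assuming scores are normally distributed and, unrealistically, independent across indices (real battery indices are correlated, so these figures are only a rough guide):

```python
from math import erf, sqrt, comb

def p_abnormal(z):
    """One-tailed probability of scoring more than z SDs below the mean
    under a standard normal model: P(Z < -z)."""
    return 0.5 * (1 - erf(z / sqrt(2)))

def p_at_least(k, n, p):
    """P(at least k of n independent scores fall in the abnormal range)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# A normal model puts ~15.9% of scores more than 1.0 SD below the mean,
# close to the ~15% observed empirically for raw scores
p_one = p_abnormal(1.0)
# At the stricter 1.5 SD cutoff, two or more abnormal scores among,
# say, five indices is rare under independence
p_two_of_five = p_at_least(2, 5, p_abnormal(1.5))
```

The model reproduces the paper's qualitative point: a single "abnormal" score at a lenient cutoff is expected in healthy people, while multiple abnormal scores are far less likely.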
Archives of Clinical Neuropsychology | 1996
Robert J. McCaffrey; Julie K. Lynch
This report is a replication of a survey conducted a decade ago that examined the graduate training and postdoctoral experiences of the instructors of clinical neuropsychology within American Psychological Association-accredited doctoral programs in clinical psychology. The original findings indicated that the formal background and clinical experiences of most instructors of clinical neuropsychology fell short of the minimum level of training proposed in the guidelines of the International Neuropsychological Society (INS) Task Force on Education, Accreditation, and Credentialing. In the present survey, data from 61 doctoral training programs revealed that instructors more closely approximate the minimal educational standards than they did a decade earlier. Specifically, respondents to the current survey reported more training in both the neurosciences and clinical neuropsychology, and involvement in more research activities, than did respondents to the past survey. Despite the significant improvements in the background training of current instructors, many still do not report educational and clinical experiences consistent with the INS minimum training guidelines.