
Publications


Featured research published by Gregory J. Meyer.


American Psychologist | 2001

Psychological testing and psychological assessment. A review of evidence and issues.

Gregory J. Meyer; Stephen E. Finn; Lorraine D. Eyde; Gary G. Kay; Kevin L. Moreland; Robert R. Dies; Elena J. Eisman; Tom Kubiszyn; Geoffrey M. Reed

This article summarizes evidence and issues associated with psychological assessment. Data from more than 125 meta-analyses on test validity and 800 samples examining multimethod assessment suggest 4 general conclusions: (a) Psychological test validity is strong and compelling, (b) psychological test validity is comparable to medical test validity, (c) distinct assessment methods provide unique sources of information, and (d) clinicians who rely exclusively on interviews are prone to incomplete understandings. Following principles for optimal nomothetic research, the authors suggest that a multimethod assessment battery provides a structured means for skilled clinicians to maximize the validity of individualized assessments. Future investigations should move beyond an examination of test scales to focus more on the role of psychologists who use tests as helpful tools to furnish patients and referral sources with professional consultation.


Psychological Assessment | 2003

The incremental validity of psychological testing and assessment: conceptual, methodological, and statistical issues.

John Hunsley; Gregory J. Meyer

There has been insufficient effort in most areas of applied psychology to evaluate incremental validity. To further this kind of validity research, the authors examined applicable research designs, including those to assess the incremental validity of test instruments, of test-informed clinical inferences, and of newly developed measures. The authors also considered key statistical and measurement issues that can influence incremental validity findings, including the entry order of predictor variables, how to interpret the size of a validity increment, and possible artifactual effects in the criteria selected for incremental validity research. The authors concluded by suggesting steps for building a cumulative research base concerning incremental validity and by describing challenges associated with applying nomothetic research findings to individual clinical cases.
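
The entry-order point can be made concrete with a small sketch. The data and variable names (`interview`, `test`) below are simulated and purely illustrative: because the two predictors share variance, the apparent validity increment of each one depends on which enters the regression first.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated, illustrative data: an "interview" score and a "test" score
# that share variance, plus a criterion influenced by both.
interview = rng.normal(size=n)
test = 0.6 * interview + 0.8 * rng.normal(size=n)
outcome = 0.4 * interview + 0.3 * test + rng.normal(size=n)

def r_squared(y, *predictors):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# The shared variance is credited to whichever predictor enters first,
# so the apparent validity increment depends on entry order.
print("test after interview:",
      r_squared(outcome, interview, test) - r_squared(outcome, interview))
print("interview after test:",
      r_squared(outcome, test, interview) - r_squared(outcome, test))
```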


Psychological Methods | 2006

When effect sizes disagree: the case of r and d.

Robert E. McGrath; Gregory J. Meyer

The increased use of effect sizes in single studies and meta-analyses raises new questions about statistical inference. Choice of an effect-size index can have a substantial impact on the interpretation of findings. The authors demonstrate the issue by focusing on two popular effect-size measures, the correlation coefficient and the standardized mean difference (e.g., Cohen's d or Hedges's g), both of which can be used when one variable is dichotomous and the other is quantitative. Although the indices are often practically interchangeable, differences in sensitivity to the base rate or variance of the dichotomous variable can alter conclusions about the magnitude of an effect depending on which statistic is used. Because neither statistic is universally superior, researchers should explicitly consider the importance of base rates to formulate correct inferences and justify the selection of a primary effect-size statistic.
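
The base-rate sensitivity described here can be illustrated with the standard conversion between d and the point-biserial r, r = d / sqrt(d^2 + 1/(pq)), where p and q are the two group proportions. A minimal sketch with illustrative values:

```python
import math

def d_to_r(d, p):
    """Point-biserial r implied by a standardized mean difference d
    when the dichotomous variable has base rate p, via the common
    conversion r = d / sqrt(d^2 + 1/(p*q))."""
    q = 1 - p
    return d / math.sqrt(d**2 + 1 / (p * q))

# The same d maps to progressively smaller r values as the base rate
# departs from .50, which is exactly the disagreement at issue.
for p in (0.50, 0.25, 0.10, 0.01):
    print(f"d = 0.8, base rate {p:.2f} -> r = {d_to_r(0.8, p):.3f}")
```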


Psychological Bulletin | 2013

The Validity of Individual Rorschach Variables: Systematic Reviews and Meta-Analyses of the Comprehensive System

Joni L. Mihura; Gregory J. Meyer; Nicolae Dumitrascu; George Bombel

We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = .27 (k = 770) as compared to r = .08 (k = 386) across 42 meta-analyses examining variables against introspectively assessed criteria (e.g., self-report). Using Hemphill's (2003) data-driven guidelines for interpreting the magnitude of assessment effect sizes with only externally assessed criteria, we found 13 variables had excellent support (r ≥ .33, p < .001; FSN > 50), 17 had good support (r ≥ .21, p < .05, FSN ≥ 10), 10 had modest support (p < .05 and either r ≥ .21, FSN < 10, or r = .15-.20, FSN ≥ 10), 13 had little (p < .05 and either r < .15 or FSN < 10) or no support (p > .05), and 12 had no construct-relevant validity studies. The variables with the strongest support were largely those that assess cognitive and perceptual processes (e.g., Perceptual-Thinking Index, Synthesized Response); those with the least support tended to be very rare (e.g., Color Projection) or some of the more recently developed scales (e.g., Egocentricity Index, Isolation Index). Our findings are less positive, more nuanced, and more inclusive than those reported in the CS test manual. We discuss study limitations and the implications for research and clinical practice, including the importance of using different methods in order to improve our understanding of people.
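
FSN here is the fail-safe N. One common formulation is Rosenthal's (1979) file-drawer statistic, which estimates how many unpublished null studies would be needed to raise a combined one-tailed p above .05; the exact FSN procedure used in the article is not specified here, and the Z values below are hypothetical.

```python
def rosenthal_fsn(z_values):
    """Rosenthal's (1979) fail-safe N: the number of unpublished null
    results needed to push the combined one-tailed p above .05.
    The constant 2.706 is 1.645 squared."""
    k = len(z_values)
    return (sum(z_values) ** 2) / 2.706 - k

# Hypothetical Z statistics from k = 5 validity studies.
print(rosenthal_fsn([2.1, 2.8, 1.9, 3.2, 2.4]))
```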


Journal of Personality Assessment | 2002

An Examination of Interrater Reliability for Scoring the Rorschach Comprehensive System in Eight Data Sets

Gregory J. Meyer; Mark J. Hilsenroth; Dirk Baxter; John E. Exner; J. Christopher Fowler; Craig Piers; Justin Resnick

In this article, we describe interrater reliability for the Comprehensive System (CS; Exner, 1993) in 8 relatively large samples, including (a) students, (b) experienced researchers, (c) clinicians, (d) clinicians and then researchers, (e) a composite clinical sample (i.e., a to d), and 3 samples in which randomly generated erroneous scores were substituted for (f) 10%, (g) 20%, or (h) 30% of the original responses. Across samples, 133 to 143 statistically stable CS scores had excellent reliability, with median intraclass correlations of .85, .96, .97, .95, .93, .95, .89, and .82, respectively. We also demonstrate that reliability findings from this study closely match the results derived from a synthesis of prior research, that CS summary scores are more reliable than scores assigned to individual responses, that small samples are more likely to generate unstable and lower reliability estimates, and that Meyer's (1997a) procedures for estimating response segment reliability were accurate. The CS can be scored reliably, but because scoring quality depends on coder skill, clinicians must conscientiously monitor their accuracy.
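
As a minimal sketch of how such intraclass correlations are computed, here is one standard estimator, ICC(2,1) from Shrout and Fleiss (1979), calculated from a targets-by-raters score matrix. The specific ICC model used in the article may differ, and the scores below are hypothetical.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rater (Shrout & Fleiss, 1979). scores: n_targets x n_raters."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-protocol means
    col_means = scores.mean(axis=0)   # per-rater means
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical summary scores from two coders across five protocols.
print(round(icc_2_1([[10, 11], [14, 15], [8, 8], [20, 18], [12, 13]]), 3))
```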


Journal of Personality Assessment | 2007

Toward International Normative Reference Data for the Comprehensive System

Gregory J. Meyer; Philip Erdberg; Thomas W. Shaffer

We build on the work of all the authors contributing to this Special Supplement by summarizing findings across their samples of data, and we also draw on samples published elsewhere. Using 21 samples of adult data from 17 countries, we create a composite set of internationally based reference means and standard deviations from which we compute T-scores for each sample. Figures illustrate how the scores in each sample are distributed and how the samples compare across variables in eight Rorschach Comprehensive System (CS; Exner, 2003) clusters. The adult samples from around the world are generally quite similar, and thus we encourage clinicians to integrate the composite international reference values into their clinical interpretation of protocols. However, the 31 child and adolescent samples from 5 countries produce unstable and often quite extreme values on many scores. Until the factors contributing to the variability among these samples are more fully understood, we discourage clinicians from using many CS scores to make nomothetic, score-based inferences about psychopathology in children and adolescents.
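
The T-scores mentioned here are a linear rescaling of raw scores against a reference distribution to a mean of 50 and a standard deviation of 10. A minimal sketch; the numbers are illustrative, not published reference values:

```python
def t_score(raw, ref_mean, ref_sd):
    """Linear T-score against a reference distribution: mean 50, SD 10."""
    return 50 + 10 * (raw - ref_mean) / ref_sd

# Hypothetical: a sample value scored against a composite international
# reference mean and SD for some CS variable.
print(t_score(raw=4.2, ref_mean=3.5, ref_sd=1.4))  # -> 55.0
```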


Journal of Personality Assessment | 2003

Refinements in the Rorschach Ego Impairment Index Incorporating the Human Representational Variable

Donald J. Viglione; William Perry; Gregory J. Meyer

The Ego Impairment Index (EII; Perry & Viglione, 1991) is a composite measure of psychological impairment and thought disturbance developed from the empirical and theoretical literature on the Rorschach. In this article, we summarize reliability and validity data regarding the EII. Our major goal was to present the rationale and empirical basis for recent refinements in the EII. Among the subcomponents of the original EII was the Human Experience variable (HEV), which has recently been revised and replaced with the Human Representational variable (HRV; Viglione, Perry, Jansak, Meyer, & Exner, 2003). In this study, we replaced the HEV with the HRV to create the EII-2. This was accomplished by recalculating the factor coefficients with a sample of 363 Rorschach protocols. We present additional validity data for the new EII-2. Research recommendations and interpretive guidelines are also presented.
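
Structurally, an index like the EII-2 is a weighted composite of component scores, with the weights obtained from the recalculated factor coefficients. The sketch below shows only that general form; the component values and weights are placeholders, not the published coefficients.

```python
import numpy as np

def composite_index(components, weights, intercept=0.0):
    """General form of a composite index such as the EII-2:
    an intercept plus a weighted sum of component scores."""
    return intercept + float(np.dot(weights, components))

# Placeholder values only; the published EII-2 coefficients were derived
# by factor analysis of 363 Rorschach protocols and are not shown here.
components = np.array([3.0, 12.0, 1.0, 2.0])
weights = np.array([0.25, 0.10, 0.30, -0.20])
print(composite_index(components, weights))
```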


Psychological Assessment | 1997

Assessing Reliability: Critical Corrections for a Critical Examination of the Rorschach Comprehensive System.

Gregory J. Meyer

Wood, Nezworski, and Stejskal (1996a, 1996b) argued that the Rorschach Comprehensive System (CS) lacked many essential pieces of reliability data and that the available evidence indicated that scoring reliability may be little better than chance. Contrary to their assertions, the author suggests why rater agreement should focus on responses rather than summary scores, how field reliability moves away from testing CS scoring principles, and how no psychometric distinction exists between a percentage correct and a percentage agreement index. Also, after reviewing problematic qualities of kappa, a meta-analysis of published data is presented indicating that the CS has excellent chance-corrected interrater reliability (estimated kappa, M = .86, range = .72-.96). Finally, the author notes that Wood et al. ignored at least 17 CS studies of test-retest reliability that contain many of the important data they said were missing. The author concluded that Wood et al.'s erroneous assertions about the more elementary topic of reliability make suspect their assertions about the more complex topic of validity.
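
The distinction between raw agreement and chance-corrected agreement is easy to show with Cohen's kappa, which discounts the agreement two coders would reach by chance given their marginal code frequencies. A minimal sketch with hypothetical codes:

```python
import numpy as np

def percent_agreement(a, b):
    """Raw proportion of responses coded identically."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each coder's marginal frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    codes = np.union1d(a, b)
    p_obs = np.mean(a == b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in codes)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical codes from two raters: raw agreement looks high, but a
# skewed base rate produces much of that agreement by chance alone.
r1 = ["F", "F", "F", "F", "F", "F", "F", "F", "M", "C"]
r2 = ["F", "F", "F", "F", "F", "F", "F", "M", "M", "C"]
print(percent_agreement(r1, r2), round(cohens_kappa(r1, r2), 3))
```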


Journal of Personality Assessment | 2000

A replication of Rorschach and MMPI-2 convergent validity.

Gregory J. Meyer; Robert J. Riethmiller; Regina D. Brooks; William A. Benoit; Leonard Handler

We replicated prior research on Rorschach and MMPI-2 convergent validity by testing 8 hypotheses in a new sample of patients. We also extended prior research by developing criteria to include more patients and by applying the same procedures to 2 self-report tests: the MMPI-2 and the MCMI-II. Results supported our hypotheses and paralleled the prior findings. Furthermore, 3 different tests for methodological artifacts could not account for the results. Thus, the convergence of Rorschach and MMPI-2 constructs seems to be partially a function of how patients interact with the tests. When patients approach each test with a similar style, conceptually aligned constructs tend to correlate. Although this result is less robust, when patients approach each test in an opposing manner, conceptually aligned constructs tend to be negatively correlated. When test interaction styles are ignored, MMPI-2 and Rorschach constructs tend to be uncorrelated, unless a sample just happens to possess a correlation between Rorschach and MMPI-2 stylistic variables. Remaining ambiguities and suggestions for further advances are discussed.
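
The moderation pattern described here, near-zero pooled correlations that split into positive and negative correlations within style-defined subgroups, can be illustrated with simulated data. Everything below is hypothetical and only mimics the reported structure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical data: a style moderator flips the sign of the association
# between two conceptually aligned scores, so the pooled correlation ~ 0.
style_similar = rng.random(n) < 0.5      # similar vs. opposing approach
rorschach = rng.normal(size=n)
sign = np.where(style_similar, 1.0, -1.0)
mmpi = sign * 0.5 * rorschach + rng.normal(size=n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("pooled:  ", round(corr(rorschach, mmpi), 2))   # near zero
print("similar: ", round(corr(rorschach[style_similar],
                               mmpi[style_similar]), 2))
print("opposing:", round(corr(rorschach[~style_similar],
                               mmpi[~style_similar]), 2))
```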


Journal of Personality Assessment | 2000

Incremental Validity of the Rorschach Prognostic Rating Scale Over the MMPI Ego Strength Scale and IQ

Gregory J. Meyer

A recent meta-analysis found that the Rorschach Prognostic Rating Scale (RPRS) had a strong ability to predict subsequent outcome (r = .44, N = 783; Meyer & Handler, 1997, this issue). However, that review did not directly address questions of incremental validity. This article focuses on the ability of the RPRS to predict outcome after taking into account other sources of data. Across studies that examined both the RPRS and the MMPI Ego Strength scale, the RPRS had a strong ability to predict outcome (r = .40, N = 187), whereas the MMPI scale did not (r = .02, N = 280). Nine studies examined the RPRS along with an intelligence test and allowed direct numerical estimates of incremental validity to be calculated. Across studies, the RPRS demonstrated strong incremental validity after controlling for intelligence (incremental r = .36, N = 358). It is clear that the Rorschach can make unique contributions to understanding clinically relevant processes in ways that self-reports or measured intelligence cannot. Contemporary Rorschach scales should continue to be evaluated for their distinctive and incremental contribution to clinical practice.
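
One plausible reading of an "incremental r" is the square root of the gain in R-squared when the predictor is added over the covariate, i.e., a semipartial correlation; the article's exact estimator may differ. A minimal sketch with simulated, illustrative data:

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def incremental_r(y, covariate, predictor):
    """sqrt(delta R^2) from adding the predictor over the covariate,
    i.e. a semipartial correlation (one reading of 'incremental r')."""
    r2_base = r_squared(y, np.atleast_2d(covariate).T)
    r2_full = r_squared(y, np.column_stack([covariate, predictor]))
    return np.sqrt(max(r2_full - r2_base, 0.0))

# Hypothetical data: an outcome, an IQ covariate, and an RPRS-like score.
rng = np.random.default_rng(2)
iq = rng.normal(size=400)
rprs = 0.3 * iq + rng.normal(size=400)
outcome = 0.1 * iq + 0.4 * rprs + rng.normal(size=400)
print(round(incremental_r(outcome, iq, rprs), 2))
```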

Collaboration


Dive into Gregory J. Meyer's collaboration.

Top Co-Authors

Donald J. Viglione, Alliant International University
George Bombel, University of Texas Health Science Center at San Antonio
William Perry, University of California
Gary G. Kay, Georgetown University Medical Center
Lorraine D. Eyde, United States Office of Personnel Management
Philip Erdberg, Alliant International University