Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James M. Conway is active.

Publication


Featured research published by James M. Conway.


Organizational Research Methods | 2003

A Review and Evaluation of Exploratory Factor Analysis Practices in Organizational Research

James M. Conway; Allen I. Huffcutt

The authors surveyed exploratory factor analysis (EFA) practices in three organizational journals from 1985 to 1999 to investigate purposes for conducting EFA and to update and extend Ford, MacCallum, and Tait’s (1986) review. Ford et al. surveyed the same journals from 1975 to 1984, concluding that researchers often applied EFA poorly (e.g., relying too heavily on principal components analysis [PCA], eigenvalues greater than 1 to choose the number of factors, and orthogonal rotations). Fabrigar, Wegener, MacCallum, and Strahan (1999) reached a similar conclusion based on a much smaller sample of studies. This review of 371 studies shows reason for greater optimism. The tendency to use multiple number-of-factors criteria and oblique rotations has increased somewhat. Most important, the authors find that researchers tend to make better decisions when EFA plays a more consequential role in the research. They stress the importance of careful and thoughtful analysis, including decisions about whether and how EFA should be used.
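
To make the contrast concrete, here is a minimal sketch in Python, assuming the third-party factor_analyzer package and simulated data (an illustration of the practices under review, not the authors' analysis): it counts eigenvalues greater than 1, the criterion the review cautions against relying on alone, then fits a common factor model with an oblique (oblimin) rotation, which allows the factors to correlate.

import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)

# Simulate 300 respondents on 9 items loading on 3 correlated factors.
loadings = np.zeros((9, 3))
loadings[0:3, 0] = 0.7
loadings[3:6, 1] = 0.7
loadings[6:9, 2] = 0.7
factor_cov = np.array([[1.0, 0.4, 0.4],
                       [0.4, 1.0, 0.4],
                       [0.4, 0.4, 1.0]])
factors = rng.multivariate_normal(np.zeros(3), factor_cov, size=300)
items = factors @ loadings.T + rng.normal(scale=0.5, size=(300, 9))

# The eigenvalues-greater-than-1 rule applied to the correlation matrix
# (the criterion the review cautions against using by itself).
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print("eigenvalues > 1:", int(np.sum(eigenvalues > 1)))

# Common factor analysis (minres) with an oblique rotation, which lets
# the recovered factors correlate as organizational constructs often do.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
fa.fit(items)
print(np.round(fa.loadings_, 2))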


Journal of Applied Psychology | 1999

Distinguishing contextual performance from task performance for managerial jobs.

James M. Conway

The purpose of this study was to extend previous research on the contextual and task performance distinction to managerial jobs. It was hypothesized that, unlike results for nonmanagerial work, the job dedication facet of contextual performance would contribute uniquely to overall managerial performance. The interpersonal facilitation facet of contextual performance was expected to be redundant with leadership task performance and therefore not to make a unique contribution. A multitrait-multirater correlation matrix was developed on the basis of meta-analysis. Structural equation modeling results generally supported the hypotheses, although there was some evidence of a unique contribution by interpersonal facilitation. Results also suggested that peers paid more attention to interpersonal facilitation when making overall performance ratings, whereas supervisors paid more attention to task performance.


Journal of Applied Psychology | 2001

Identification and meta-analytic assessment of psychological constructs measured in employment interviews.

Allen I. Huffcutt; James M. Conway; Philip L. Roth; Nancy J. Stone

There has been a growing interest in understanding what constructs are assessed in the employment interview and the properties of those assessments. To address these issues, the authors developed a comprehensive taxonomy of 7 types of constructs that the interview could assess. Analysis of 338 ratings from 47 actual interview studies indicated that basic personality and applied social skills were the most frequently rated constructs in this taxonomy, followed by mental capability and job knowledge and skills. Further analysis suggested that high- and low-structure interviews tend to focus on different constructs. Taking both frequency and validity results into consideration, the findings suggest that at least part of the reason structured interviews tend to have higher validity is that they focus more on constructs that have a stronger relationship with job performance. Limitations and directions for future research are discussed.


Journal of Applied Psychology | 1995

A Meta-Analysis of Interrater and Internal Consistency Reliability of Selection Interviews

James M. Conway; Robert A. Jako; Deborah F. Goodman

A meta-analysis of 111 interrater reliability coefficients and 49 coefficient alphas from selection interviews was conducted. Moderators of interrater reliability included study design, interviewer training, and 3 dimensions of interview structure (standardization of questions, of response evaluation, and of combining multiple ratings). Interactions showed that standardizing questions had a stronger moderating effect on reliability when coefficients were from separate (rather than panel) interviews, and multiple ratings were useful when combined mechanically (there was no evidence of usefulness when combined subjectively). Average correlations (derived from alphas) between ratings were moderated by standardization of questions and number of ratings made. Upper limits of validity were estimated to be .67 for highly structured interviews and .34 for unstructured interviews.
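
For readers unfamiliar with the two kinds of coefficients being pooled, the sketch below (simulated ratings, not the study's data) computes an interrater correlation between two interviewers and a coefficient alpha across multiple ratings, then applies the classical test theory bound behind "upper limits of validity": an observed validity coefficient cannot exceed the square root of the ratings' reliability.

import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(size=200)                  # candidates' true standing
rater_a = true_score + rng.normal(scale=0.8, size=200)
rater_b = true_score + rng.normal(scale=0.8, size=200)

# Interrater reliability: correlation between two independent interviewers.
interrater = np.corrcoef(rater_a, rater_b)[0, 1]

def cronbach_alpha(ratings):
    """Coefficient alpha for a persons-by-ratings matrix."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Internal consistency across several ratings from the same interview.
ratings = np.column_stack(
    [true_score + rng.normal(scale=0.8, size=200) for _ in range(4)]
)
alpha = cronbach_alpha(ratings)

# Classical test theory: observed validity <= sqrt(reliability).
print(f"interrater r = {interrater:.2f}, alpha = {alpha:.2f}")
print(f"validity ceiling implied by alpha: {np.sqrt(alpha):.2f}")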


Teaching of Psychology | 2009

Teaching and Learning in the Social Context: A Meta-Analysis of Service Learning's Effects on Academic, Personal, Social, and Citizenship Outcomes

James M. Conway; Elise L. Amel; Daniel P. Gerwien

Service learning places teaching and learning in a social context, facilitating socially responsive knowledge. The purposes of this meta-analysis were to summarize evidence on (a) extent and types of change in participants in service learning programs, (b) specific program elements (moderators) that affect the amount of change in participants, and (c) generalizability of results across educational levels and curricular versus noncurricular service. We included 103 samples and found positive changes for all types of outcomes. Changes were moderate for academic outcomes, small for personal outcomes and citizenship outcomes, and in between for social outcomes. Programs with structured reflection showed larger changes and effects generalized across educational levels. We call for psychologists to increase their use of service learning, and we discuss resources for doing so.
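
The abstract does not spell out the authors' pooling procedure, so the sketch below shows only the generic machinery for this kind of summary, with hypothetical numbers: standardized mean differences combined by inverse-variance weighting in a fixed-effect model.

import numpy as np

# Hypothetical per-sample effect sizes (d) and their sampling variances.
d = np.array([0.45, 0.30, 0.12, 0.25, 0.51])
v = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

weights = 1.0 / v                           # inverse-variance weights
d_bar = np.sum(weights * d) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))         # standard error of pooled d
ci = (d_bar - 1.96 * se, d_bar + 1.96 * se)
print(f"pooled d = {d_bar:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")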


Organizational Research Methods | 2010

What is method variance and how can we cope with it? A panel discussion

Michael T. Brannick; David Chan; James M. Conway; Charles E. Lance; Paul E. Spector

A panel of experts describes the nature of, and remedies for, method variance. To help the reader understand the nature of method variance, the authors describe their experiences with method variance on both the giving and receiving ends of the editorial review process, as well as their interpretation of other reviewers' comments. They then describe methods of data analysis and research design that have been used to detect and eliminate the effects of method variance. Most methods have some utility, but none prevent the researcher from making faulty inferences. The authors conclude with suggestions for resolving disputes about method variance.


Journal of Applied Psychology | 2003

Profiling active and passive nonrespondents to an organizational survey.

Steven G. Rogelberg; James M. Conway; Matthew E. Sederburg; Christiane Spitzmüller; Shahnaz Aziz; William E. Knight

In this field study (N = 405), population profiling was introduced to examine general and specific classes of nonresponse (active vs. passive) to a satisfaction survey. The active nonrespondent group (i.e., purposeful nonresponders) was relatively small (approximately 15%). Active nonrespondents, in comparison with respondents, were less satisfied with the entity sponsoring the survey and were less conscientious. Passive nonrespondents (e.g., those who forgot), who represented the majority of nonrespondents, were attitudinally similar to respondents but differed with regard to personality. Nonresponse bias does not appear to be a substantive concern for satisfaction-type variables, the typical core of an organizational survey. If the survey concerns topics strongly related to Conscientiousness and Agreeableness, the respondent sample may not be representative of the population.


Journal of Applied Psychology | 2004

Revised Estimates of Dimension and Exercise Variance Components in Assessment Center Postexercise Dimension Ratings

Charles E. Lance; Tracy A. Lambert; Amanda G. Gewin; Filip Lievens; James M. Conway

The authors reanalyzed assessment center (AC) multitrait-multimethod (MTMM) matrices containing correlations among postexercise dimension ratings (PEDRs) reported by F. Lievens and J. M. Conway (2001). Unlike F. Lievens and J. M. Conway, who used a correlated dimension-correlated uniqueness model, the authors used a different set of confirmatory-factor-analysis-based models (1-dimension-correlated exercise and 1-dimension-correlated uniqueness models) to estimate dimension and exercise variance components in AC PEDRs. Results of the reanalyses suggest that, consistent with previous narrative reviews, exercise variance components dominate dimension variance components after all. Implications for AC construct validity and possible redirections of research on the validity of ACs are discussed.


Journal of Applied Psychology | 2001

Dimension and exercise variance in assessment center scores: A large-scale evaluation of multitrait-multimethod studies

Filip Lievens; James M. Conway

This study addresses 3 questions regarding assessment center construct validity: (a) Are assessment center ratings best thought of as reflecting dimension constructs (dimension model), exercises (exercise model), or a combination? (b) To what extent do dimensions or exercises account for variance? (c) Which design characteristics increase dimension variance? To this end, a large set of multitrait-multimethod studies (N = 34) was analyzed, showing that assessment center ratings were best represented (i.e., in terms of fit and admissible solutions) by a model with correlated dimensions and exercises specified as correlated uniquenesses. In this model, dimension variance equals exercise variance. Significantly more dimension variance was found when fewer dimensions were used and when assessors were psychologists. Use of behavioral checklists, a lower dimension-exercise ratio, and similar exercises also increased dimension variance.
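
As a rough illustration of what dimension and exercise variance components mean here (a simulation, not the confirmatory factor models the study fits), each postexercise dimension rating can be generated as a dimension effect plus an exercise effect plus error; ratings sharing a dimension then correlate through dimension variance, and ratings sharing an exercise correlate through exercise variance.

import numpy as np

rng = np.random.default_rng(2)
n, n_dims, n_exercises = 500, 3, 3
var_dim, var_ex, var_err = 1.0, 1.0, 1.0   # equal shares, as in the model

dims = rng.normal(scale=np.sqrt(var_dim), size=(n, n_dims))
exercises = rng.normal(scale=np.sqrt(var_ex), size=(n, n_exercises))

# ratings[i, d, e] = dimension effect + exercise effect + error
ratings = (
    dims[:, :, None]
    + exercises[:, None, :]
    + rng.normal(scale=np.sqrt(var_err), size=(n, n_dims, n_exercises))
)

# Same dimension, different exercises (monotrait-heteromethod).
r_dim = np.corrcoef(ratings[:, 0, 0], ratings[:, 0, 1])[0, 1]
# Same exercise, different dimensions (heterotrait-monomethod).
r_ex = np.corrcoef(ratings[:, 0, 0], ratings[:, 1, 0])[0, 1]
print(f"same-dimension r ~ {r_dim:.2f}, same-exercise r ~ {r_ex:.2f}")
# Both approach var_component / (var_dim + var_ex + var_err) = 1/3 here.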


Journal of Management | 1996

Analysis and Design of Multitrait-Multirater Performance Appraisal Studies

James M. Conway

Becker and Cote (1994) found that the correlated uniqueness model outperformed the confirmatory factor analysis and direct product models for multitrait-multimethod data. The present study analyzed 20 multitrait-multirater performance appraisal matrices. The correlated uniqueness model was appropriate significantly more often than in Becker and Cote's study, and the other two models performed poorly. The proportions of trait and method variance in ratings were related to several rating system characteristics, such as opportunity for raters to observe ratees and basing rating dimensions on a job analysis. Performance of all three models was better with larger proportions of trait variance and smaller proportions of method variance.

Collaboration


Dive into James M. Conway's collaborations.

Top Co-Authors

Steven G. Rogelberg
University of North Carolina at Charlotte

Adrian Goh
University of North Carolina at Charlotte

Joseph A. Allen
University of Nebraska Omaha

Lamarra Currie
University of North Carolina at Charlotte

C. Charles Mate-Kole
Central Connecticut State University