Publication


Featured research published by Stephen Stark.


Multivariate Behavioral Research | 2001

Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.

Oleksandr S. Chernyshenko; Stephen Stark; Kim Yin Chan; Fritz Drasgow; Bruce Williams

The present study compared the fit of several IRT models to two personality assessment instruments. Data from 13,059 individuals responding to the US-English version of the Fifth Edition of the Sixteen Personality Factor Questionnaire (16PF) and 1,770 individuals responding to Goldberg's 50-item Big Five Personality measure were analyzed. Various issues pertaining to the fit of the IRT models to personality data were considered. We examined two of the most popular parametric models designed for dichotomously scored items (i.e., the two- and three-parameter logistic models) and a parametric model for polytomous items (Samejima's graded response model). Also examined were Levine's nonparametric maximum likelihood formula scoring models for dichotomous and polytomous data, which were previously found to provide good fits to several cognitive ability tests (Drasgow, Levine, Tsien, Williams, & Mead, 1995). The two- and three-parameter logistic models fit some scales reasonably well but not others; the graded response model generally did not fit well. The nonparametric formula scoring models provided the best fit of the models considered. Several implications of these findings for personality measurement and personnel selection were described.
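
As background for the parametric models compared above, a minimal sketch of their item response functions may help: the three-parameter logistic model (with the two-parameter model as the special case c = 0) and Samejima's graded response model. The nonparametric formula scoring models have no simple closed form and are omitted; all parameter values below are illustrative, not estimates from the study.

import numpy as np

def p_3pl(theta, a, b, c):
    # Three-parameter logistic model: probability of endorsing a dichotomous
    # item; the lower asymptote c handles guessing (c = 0 gives the 2PL).
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def p_graded(theta, a, thresholds):
    # Samejima's graded response model: category probabilities built from
    # cumulative 2PL-type curves P(X >= k), with P(X >= 0) = 1 at one end
    # and P(X >= K + 1) = 0 at the other.
    star = [1.0] + [1.0 / (1.0 + np.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return -np.diff(star)  # adjacent differences give category probabilities

# Illustrative parameters at a trait level of 0.5.
print(p_3pl(0.5, a=1.2, b=0.0, c=0.2))
print(p_graded(0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0]))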


Journal of Applied Psychology | 2001

Effects of the testing situation on item responding: cause for concern.

Stephen Stark; Oleksandr S. Chernyshenko; Kim Yin Chan; Wayne C. Lee; Fritz Drasgow

The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.
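
The article applies several IRT methods for detecting differential item/test functioning; as one concrete illustration (not necessarily the authors' exact procedure), the signed area between item response functions calibrated separately in two groups is a common DIF index. A sketch with hypothetical 2PL parameters:

import numpy as np

def irf_2pl(theta, a, b):
    # Two-parameter logistic item response function.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def signed_area(a_ref, b_ref, a_foc, b_foc, lo=-4.0, hi=4.0, n=2001):
    # Signed area between reference- and focal-group IRFs over a theta grid
    # (trapezoidal rule); an area far from zero flags the item for DIF.
    theta = np.linspace(lo, hi, n)
    gap = irf_2pl(theta, a_ref, b_ref) - irf_2pl(theta, a_foc, b_foc)
    return np.sum((gap[:-1] + gap[1:]) / 2.0) * (theta[1] - theta[0])

# Hypothetical calibrations: applicants (focal group) endorse the item more
# readily than nonapplicants (reference group), as faking would produce.
print(signed_area(a_ref=1.0, b_ref=0.5, a_foc=1.0, b_foc=-0.3))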


Military Psychology | 2002

Toward Standardized Measurement of Sexual Harassment: Shortening the SEQ-DoD Using Item Response Theory

Stephen Stark; Oleksandr S. Chernyshenko; Anita R. Lancaster; Fritz Drasgow; Louise F. Fitzgerald

Historically, the U.S. Department of Defense (DoD) has been one of the leaders in researching sexual harassment. Documentation and results of these studies are routinely available through DoD technical reports and publications and through public use data sets to the nonmilitary research community. However, a major shortcoming of both DoD's research and that of the civilian sector is the absence of a standard method of assessing sexual harassment. This article describes how item response theory procedures were applied to shorten one of the most frequently used measures of sexual harassment—the 23-item Sexual Experiences Questionnaire-Department of Defense (SEQ-DoD; Bastian, Lancaster, & Reyst, 1996), which was included in the Status of the Armed Forces Survey: 1995 Form B-Gender Issues (U.S. Department of Defense, 1995). The resulting 16-item measure, titled the SEQ-DoD-s, provides a shortened, standardized measure of sexual harassment for use by military and civilian researchers.
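
The shortening logic rests on item information: items that contribute little measurement precision can be dropped with minimal loss. A hedged sketch of retaining the most informative items under a 2PL model (the actual SEQ-DoD item parameters and selection criteria are not reproduced here):

import numpy as np

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item at trait level theta.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def shorten(pool, keep, grid=np.linspace(-3.0, 3.0, 61)):
    # Keep the `keep` items whose information, summed over a theta grid, is
    # largest; a simple stand-in for IRT-guided scale shortening.
    totals = sorted(((np.sum(info_2pl(grid, a, b)), i)
                     for i, (a, b) in enumerate(pool)), reverse=True)
    return sorted(i for _, i in totals[:keep])

# Hypothetical item pool of (discrimination, location) pairs.
pool = [(1.4, 0.0), (0.6, 1.0), (1.1, -0.5), (0.4, 2.0), (1.7, 0.3)]
print(shorten(pool, keep=3))  # indices of the retained items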


Applied Psychological Measurement | 2002

An EM Approach to Parameter Estimation for the Zinnes and Griggs Paired Comparison IRT Model

Stephen Stark; Fritz Drasgow

Borman et al. recently proposed a computer adaptive performance appraisal system called CARS II that utilizes paired comparison judgments of behavioral stimuli. To implement this approach, the paired comparison ideal point model developed by Zinnes and Griggs was selected. In this article, the authors describe item response and information functions for the Zinnes and Griggs model and present procedures for estimating stimulus and person parameters. Monte Carlo simulations were conducted to assess the accuracy of the parameter estimation procedures. The results indicated that at least 400 ratees (i.e., ratings) are required to obtain reasonably accurate estimates of the stimulus parameters and their standard errors. In addition, latent trait estimation improves as test length increases. The implications of these results for test construction are also discussed.
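
In the Zinnes and Griggs ideal point model, the person's standing and both stimulus locations are perturbed by independent standard normal errors, and the stimulus perceived as closer to the perceived ideal point is preferred. A Monte Carlo sketch of the implied preference probability (the article derives the closed-form response function, which is omitted here):

import numpy as np

rng = np.random.default_rng(0)

def p_prefer_s(theta, mu_s, mu_t, n_draws=200_000):
    # Monte Carlo estimate of the Zinnes-Griggs probability that a person at
    # ideal point theta prefers stimulus s (location mu_s) to stimulus t
    # (location mu_t): each latent value gets independent N(0, 1) noise, and
    # the stimulus whose perceived value lies closer to the perceived ideal
    # point wins the comparison.
    y_theta = theta + rng.standard_normal(n_draws)
    y_s = mu_s + rng.standard_normal(n_draws)
    y_t = mu_t + rng.standard_normal(n_draws)
    return np.mean(np.abs(y_theta - y_s) < np.abs(y_theta - y_t))

# Hypothetical stimuli: s sits nearer the ideal point than t, so the
# preference probability should exceed one half.
print(p_prefer_s(theta=0.0, mu_s=0.5, mu_t=2.0))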


Educational and Psychological Measurement | 2001

Investigating the Hierarchical Factor Structure of the Fifth Edition of the 16PF: An Application of the Schmid-Leiman Orthogonalization Procedure

Oleksandr S. Chernyshenko; Stephen Stark; Kim Yin Chan

Two issues regarding the factor structure of the Sixteen Personality Factor Questionnaire (16PF) Fifth Edition were investigated: (a) unidimensionality of the 16 noncognitive scales and (b) hierarchical factor structure of the inventory. The Schmid and Leiman orthogonalization procedure was applied to obtain a hierarchical factor solution in which multi-item composites were mapped directly onto first- and second-order orthogonal factors while simultaneously illustrating the relative positions of factors in the hierarchy. Unidimensionality was assessed using modified parallel analysis. The results indicated that the noncognitive multi-item composites could be factored into 16 first-order and 5 second-order orthogonal factors.
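
The Schmid-Leiman transformation converts a higher-order solution into an orthogonal hierarchical one: item loadings on the second-order factors are the product of the first- and second-order loading matrices, and the first-order loadings are residualized by the square roots of the first-order factors' uniquenesses. A minimal sketch with made-up loading matrices (not the 16PF solution):

import numpy as np

def schmid_leiman(lambda1, gamma):
    # Schmid-Leiman orthogonalization of a two-level higher-order solution.
    # lambda1: items x first-order-factors loading matrix.
    # gamma:   first-order x second-order loading matrix.
    # Returns item loadings on the orthogonal second-order (general) factors
    # and residualized loadings on the first-order (group) factors.
    u = np.sqrt(1.0 - np.sum(gamma**2, axis=1))  # first-order uniquenesses
    return lambda1 @ gamma, lambda1 * u

# Made-up 4-item, 2-first-order, 1-second-order example.
lambda1 = np.array([[0.7, 0.0], [0.6, 0.0], [0.0, 0.8], [0.0, 0.5]])
gamma = np.array([[0.6], [0.4]])
general, group = schmid_leiman(lambda1, gamma)
print(general.ravel())  # loadings on the general factor
print(group)            # loadings on the residualized group factors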


Handbook of Cultural Intelligence: Theory, Measurement and Applications; New York: M. E. Sharpe, Inc. | 2008

Cultural intelligence as a mediator of relationships between Openness to Experience and Adaptive Performance

T. Oolders; Oleksandr S. Chernyshenko; Stephen Stark


Archive | 2012

Development of the Tailored Adaptive Personality Assessment System (TAPAS) to Support Army Personnel Selection and Classification Decisions

Fritz Drasgow; Stephen Stark; Oleksandr S. Chernyshenko; Christopher D. Nye; Charles L. Hulin; Leonard A. White


Handbook of Structural Equation Modeling; New York: Guilford Press | 2012

Graphical representation of structural equation models using path diagrams

Moon-Ho R. Ho; Stephen Stark; Oleksandr S. Chernyshenko


Archive | 2018

New Scale Development for Enhanced Suitability Screening

Christopher D. Nye; Rabiah S. Muhammad; Heather M. Wolters; Fritz Drasgow; Oleksandr S. Chernyshenko; Stephen Stark


Archive | 2017

Moderators of the Tailored Adaptive Personality Assessment System Validity

Stephen Stark; Oleksandr S. Chernyshenko; Christopher D. Nye; Fritz Drasgow; Leonard A. White

Collaboration


Dive into Stephen Stark's collaboration.

Top Co-Authors

Kim Yin Chan

United Kingdom Ministry of Defence

Anita R. Lancaster

Defense Manpower Data Center
