Stephen D. Short
College of Charleston
Publications
Featured research published by Stephen D. Short.
Journal of Sex Research | 2017
John Kitchener Sakaluk; Stephen D. Short
Sexuality researchers frequently use exploratory factor analysis (EFA) to illuminate the distinguishable theoretical constructs assessed by a set of variables. EFA requires a substantial number of analytic decisions, including sample size determination and how factors are extracted, rotated, and retained. The available analytic options, however, are not all equally empirically rigorous. We discuss the commonly available options for conducting EFA and which of them constitute best practices. We also present the results of a methodological review of the analytic options for EFA used by sexuality researchers in more than 200 EFAs, published in more than 160 articles and chapters from 1974 to 2014, in a sample of sexuality research journals. Our review reveals that best practices for EFA are those least frequently used by sexuality researchers. We introduce freely available analytic resources to make it easier for sexuality researchers to adhere to best practices when conducting EFAs in their own research.
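One widely recommended best practice for the factor-retention decision this abstract discusses is Horn's parallel analysis: retain only factors whose observed eigenvalues exceed those obtained from comparably sized random data. The sketch below is illustrative only, not the authors' freely available resource; the function name, iteration count, and percentile threshold are assumptions.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: count factors whose observed
    correlation-matrix eigenvalues exceed the chosen percentile of
    eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted largest first.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues from n_iter random data sets of the same dimensions.
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    # Retain factors whose eigenvalue beats the random-data threshold.
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))
```

On data generated from two clearly separated factors, this procedure recovers two factors, whereas the eigenvalue-greater-than-one rule often over-extracts.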
The Journal of Psychology | 2016
Lisa Thomson Ross; Stephen D. Short; Marina Garofano
Experiencing unpredictability in the environment has a variety of negative outcomes. However, these are difficult to ascertain due to the lack of a psychometrically sound measure of unpredictability beliefs. This article summarizes the development of the Scale of Unpredictability Beliefs (SUB), which assesses perceptions about unpredictability in one's life, in other people, and in the world. In Study I, college students (N = 305) responded to 68 potential items as well as other scales. Exploratory factor analysis yielded three internally consistent subscales (Self, People, and World; 16 items total). Higher SUB scores correlated with more childhood family unpredictability, greater likelihood of parental alcohol abuse, stronger causal uncertainty, and lower self-efficacy. In Study II, a confirmatory factor analysis supported the three-factor solution (N = 186 college students). SUB scores correlated with personality, childhood family unpredictability, and control beliefs. In most instances the SUB predicted family unpredictability and control beliefs beyond existing unpredictability measures. Study III confirmed the factor structure and replicated family unpredictability associations in an adult sample (N = 483). This article provides preliminary support for this new multidimensional, self-report assessment of unpredictability beliefs, and ideas for future research are discussed.
Springer Proceedings in Mathematics & Statistics | 2016
Terrence D. Jorgensen; Benjamin A. Kite; Po-Yi Chen; Stephen D. Short
In multigroup factor analysis, configural measurement invariance is accepted as tenable when researchers either (a) fail to reject the null hypothesis of exact fit using a χ² test or (b) conclude that a model fits approximately well enough, according to one or more alternative fit indices (AFIs). These criteria fail for two reasons. First, the test of perfect fit confounds model fit with group equivalence, so rejecting the null hypothesis of perfect fit does not imply that the null hypothesis of configural invariance should be rejected. Second, treating common rules of thumb as critical values for judging approximate fit yields inconsistent results across conditions because fixed cutoffs ignore sampling variability of AFIs. As a solution, we propose replacing χ² and fixed AFI cutoffs with permutation tests. Iterative permutation of group assignment yields an empirical distribution of any fit measure under the null hypothesis of invariance. Simulations show the permutation test of configural invariance controls Type I error rates better than χ² or AFIs when a model has parsimony error (i.e., negligible misspecification) but the factor structure is equivalent across groups (i.e., the null hypothesis is true).
Psychological Methods | 2017
Terrence D. Jorgensen; Benjamin A. Kite; Po-Yi Chen; Stephen D. Short
In multigroup factor analysis, different levels of measurement invariance are accepted as tenable when researchers observe a nonsignificant (Δ)χ² test after imposing certain equality constraints across groups. Large samples yield high power to detect negligible misspecifications, so many researchers prefer alternative fit indices (AFIs). Fixed cutoffs have been proposed for evaluating the effect of invariance constraints on change in AFIs (e.g., Chen, 2007; Cheung & Rensvold, 2002; Meade, Johnson, & Braddy, 2008). We demonstrate that all of these cutoffs have inconsistent Type I error rates. As a solution, we propose replacing χ² and fixed AFI cutoffs with permutation tests. Randomly permuting group assignment results in average between-groups differences of zero, so iterative permutation yields an empirical distribution of any fit measure under the null hypothesis of invariance across groups. Our simulations show that the permutation test of configural invariance controls Type I error rates better than χ² or AFIs when the model contains parsimony error (i.e., negligible misspecification) but the factor structure is equivalent across groups (i.e., the null hypothesis is true). For testing metric and scalar invariance, Δχ² and permutation yield similar power and nominal Type I error rates, whereas ΔAFIs yield inflated errors in smaller samples. Permuting the maximum modification index among equality constraints controls familywise Type I error rates when testing multiple indicators for lack of invariance, while providing power similar to a Bonferroni adjustment. An applied example and syntax for software are provided.
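The permutation logic these two abstracts describe — randomly reassigning group labels to build a null distribution of a fit measure under the hypothesis of group equivalence — can be sketched generically. The article supplies syntax for SEM software; the sketch below is a simplified stand-in, and the Frobenius-distance statistic replaces what would, in a real application, be refitting the multigroup factor model and recording a fit index. All names here are illustrative assumptions.

```python
import numpy as np

def permutation_pvalue(data, groups, stat_fn, n_perm=1000, seed=0):
    """Empirical p-value: permute group labels to build the null
    distribution of stat_fn under the hypothesis of group equivalence."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(data, groups)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Reassigning labels at random makes group differences zero on average.
        null[i] = stat_fn(data, rng.permutation(groups))
    # Add-one correction keeps the p-value strictly positive.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

def corr_diff(data, groups):
    """Stand-in fit statistic: Frobenius distance between the two
    groups' correlation matrices."""
    a = np.corrcoef(data[groups == 0], rowvar=False)
    b = np.corrcoef(data[groups == 1], rowvar=False)
    return np.linalg.norm(a - b)
```

Because the reference distribution is generated from the data at hand, this avoids the fixed AFI cutoffs whose Type I error rates the simulations show to be inconsistent.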
Psychological Reports | 2015
Thomas Ross; Lisa Thomson Ross; Stephen D. Short; Shayla Cataldo
This study examined the psychometric equivalence of Forms A and B of the Multidimensional Health Locus of Control Scale in a sample of college students (N = 370; M = 19.5 yr.; 318 Caucasians; 281 women). Given the dearth of studies that address the issue of form equivalence directly, this study sought to ascertain whether these forms could be used interchangeably by researchers. Subscales on the two forms had fairly high correlations (range of r = .77–.81), and similar alpha and omega reliability coefficients. Additionally, confirmatory factor analysis revealed both forms fit a three-factor model well. However, paired-sample t tests yielded significant mean differences for all three subscales. Furthermore, the two forms yielded inconsistent associations with relevant measures. Although the observed pattern of associations with social desirability and safe swimming behaviors were similar for Forms A and B, the pattern of differences was not identical for smoking groups and bicycle helmet use groups between forms. Overall, these results suggested that Forms A and B do not meet the strict criteria for parallel forms, but instead should be considered alternative forms.
Evolutionary Psychology | 2015
Stephen D. Short; Patricia H. Hawley
Journal of Social and Clinical Psychology | 2016
Lisa Thomson Ross; Caitlyn O. Hood; Stephen D. Short
Journal of Adolescence | 2015
Nicole Campione-Barr; Anna K. Lindell; Stephen D. Short; Kelly Bassett Greer; Scott D. Drotar
Archive | 2015
John Kitchener Sakaluk; Stephen D. Short; MaRSS Lab