Jeffrey R. Spence
University of Guelph
Publications
Featured research published by Jeffrey R. Spence.
Perspectives on Psychological Science | 2014
David J. Stanley; Jeffrey R. Spence
Failures to replicate published psychological research findings have contributed to a “crisis of confidence.” Several reasons for these failures have been proposed, the most notable being questionable research practices and data fraud. We examine replication from a different perspective and illustrate that current intuitive expectations for replication are unreasonable. We used computer simulations to create thousands of ideal replications, with the same participants, wherein the only difference across replications was random measurement error. In the first set of simulations, study results differed substantially across replications as a result of measurement error alone. This raises questions about how researchers should interpret failed replication attempts, given the large impact that even modest amounts of measurement error can have on observed associations. In the second set of simulations, we illustrated the difficulties that researchers face when trying to interpret and replicate a published finding. We also assessed the relative importance of both sampling error and measurement error in producing variability in replications. Conventionally, replication attempts are viewed through the lens of verifying or falsifying published findings. We suggest that this is a flawed perspective and that researchers should adjust their expectations concerning replications and shift to a meta-analytic mind-set.
Journal of Management | 2012
D. Lance Ferris; Jeffrey R. Spence; Douglas J. Brown; Daniel Heller
The authors integrated predictions from the group value model of justice with an esteem threat framework of deviance to examine the within-person relation between interpersonal justice and workplace deviance. Using a moderated-mediation approach, they predicted that daily interpersonal injustice would lower daily self-esteem; daily self-esteem would in turn mediate the effect of daily interpersonal injustice and interact with trait self-esteem in predicting daily workplace deviance. Using 1,088 daily diary recordings from 100 employees across various industries, the results generally support the hypothesized model linking daily interpersonal justice and daily workplace deviance, even when the effects of previously established mediators (i.e., affect and job satisfaction) were controlled for. Theoretical and practical implications of the findings are discussed.
Journal of Cross-Cultural Psychology | 2013
Wendi L. Adair; Ivona Hideg; Jeffrey R. Spence
This study examines how the cultural heterogeneity of work teams moderates the way in which team cultural intelligence (CQ) affects the development of team shared values. Utilizing the four-factor model of CQ, we predict how each facet of CQ will impact the development of shared values in relatively early stages of team development differently for culturally homogeneous versus culturally heterogeneous work teams. We operationalize team shared values as the degree to which a broad set of cultural values are similarly endorsed by team members as guiding principles when working in their team. Results show that behavioral and metacognitive CQ had a positive effect on shared values in culturally heterogeneous teams; however, motivational and metacognitive CQ had a negative effect on shared values in culturally homogeneous teams. All effects were observed in the early stages of team development. Having uncovered positive and negative effects of CQ for shared values in work teams, we discuss implications for theory and practice around this form of cultural competence.
Human Performance | 2013
H. A. MacDonald; Lorne M. Sulsky; Jeffrey R. Spence; Douglas J. Brown
We examined differences in the motivation to directly seek performance feedback between Canadian (n = 72) and Chinese (n = 64) participants using a policy-capturing methodology. Results generally support the premise that the motivation to seek performance feedback varies as a function of national culture. Image-defense motivation was more predictive of (a) the importance placed on the feedback source and (b) whether feedback was sought in public for feedback-seeking decisions among Chinese participants than among Canadians. Ego-defense motivation was more predictive of the importance placed on feedback valence for feedback-seeking decisions among Canadian participants than among those from China. We discuss the implications of the study findings and consider future research directions.
Organizational Psychology Review | 2013
Jeffrey R. Spence; Lisa M. Keeping
Employee performance appraisals are complex events in organizations. They occur in contextually rich environments and have implications for careers, training opportunities, remuneration, and interpersonal relationships. For years, the study of performance appraisals has mirrored this complexity and has revealed a multitude of variables that can influence the accuracy of performance ratings. Of late, the importance of managers’ intentions as a determinant of performance ratings has gained prominence. What is less understood is where these intentions come from and what determines their relative strength or weakness. In the current paper, we present a model that explains the simultaneous presence and strength of multiple rating intentions that managers can have when rating employee performance.
PLOS ONE | 2016
Jeffrey R. Spence; David J. Stanley
A challenge when interpreting replications is determining whether the results of a replication “successfully” replicate the original study. Looking for consistency between two studies is challenging because individual studies are susceptible to many sources of error that can cause study results to deviate from each other and the population effect in unpredictable directions and magnitudes. In the current paper, we derive methods to compute a prediction interval, a range of results that can be expected in a replication due to chance (i.e., sampling error), for means and commonly used indexes of effect size: correlations and d-values. The prediction interval is calculable based on objective study characteristics (i.e., effect size of the original study and sample sizes of the original study and planned replication) even when sample sizes across studies are unequal. The prediction interval provides an a priori method for assessing whether the difference between an original and replication result is consistent with what can be expected due to sampling error alone. We provide open-source software tools that allow researchers, reviewers, replicators, and editors to easily calculate prediction intervals.
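For correlations, a prediction interval of this kind is commonly built on the Fisher z transformation. The sketch below assumes that construction (it is an illustration of the general technique, not the authors' published software, and the function name is ours):

```python
import math

def correlation_prediction_interval(r_orig, n_orig, n_rep, z_crit=1.96):
    """Approximate 95% prediction interval for a replication correlation.

    Transforms the original r to Fisher z, combines the sampling
    variances of both studies, then back-transforms the endpoints.
    """
    z = math.atanh(r_orig)
    se = math.sqrt(1 / (n_orig - 3) + 1 / (n_rep - 3))
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = correlation_prediction_interval(r_orig=0.5, n_orig=100, n_rep=100)
print(f"[{lo:.3f}, {hi:.3f}]")
```

A replication correlation falling inside the interval is consistent with the original result given sampling error alone; note that a smaller planned replication sample widens the interval, which matches the paper's emphasis on setting expectations before running the replication.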
Archive | 2015
Patricia L. Baratta; Jeffrey R. Spence
The multidimensional structure of boredom poses unique measurement challenges related to scale length and statistical modeling. We systematically address these concerns in two studies. In Study 1, we use item response theory to shorten the 29-item Multidimensional State Boredom Scale (MSBS) (Fahlman et al., 2013). In Study 2, we use structural equation modeling to compare two theoretically consistent multidimensional structures of boredom (superordinate and multivariate) with the most commonly used, yet theoretically inconsistent, structure in boredom research (unidimensional parallel model). Our findings provide support for modeling boredom as multidimensional and demonstrate the impact of model selection on effect sizes and significance.
European Journal of Work and Organizational Psychology | 2018
Patricia L. Baratta; Jeffrey R. Spence
For centuries, scholars have positioned state boredom as an impediment to organizational productivity and performance given its unpleasant and distracting qualities. However, research on state boredom has been impeded by a lack of definitional consensus and measurement issues. In this article, we sought to advance organizational research on state boredom by developing the State Boredom Inventory (SBI), an 11-item measure grounded in a theoretically derived definition of state boredom. Across five studies using 10 independent samples, we develop the SBI and provide validity evidence for our measure, including content validity and convergent and discriminant validity. Our data support the conceptualization of state boredom as a higher-order multidimensional construct with three underlying dimensions: disengagement, unpleasant low arousal, and inattention.
Advances in Methods and Practices in Psychological Science | 2018
David J. Stanley; Jeffrey R. Spence
Growing awareness of how susceptible research is to errors, coupled with well-documented replication failures, has caused psychological researchers to move toward open science and reproducible research. In this Tutorial, to facilitate reproducible psychological research, we present a tool that creates reproducible tables that follow the American Psychological Association’s (APA’s) style. Our tool, apaTables, automates the creation of APA-style tables for commonly used statistics and analyses in psychological research: correlations, multiple regressions (with and without blocks), standardized mean differences, N-way independent-groups analyses of variance (ANOVAs), within-subjects ANOVAs, and mixed-design ANOVAs. All tables are saved as Microsoft Word documents, so they can be readily incorporated into manuscripts without manual formatting or transcription of values.
Human Resource Management Review | 2011
Jeffrey R. Spence; Lisa M. Keeping