Network


External collaboration at the country level.

Hotspot


Research topics in which David J. Stanley is active.

Publication


Featured research published by David J. Stanley.


Journal of Applied Psychology | 2007

Assessing dissimilarity relations under missing data conditions: Evidence from computer simulations

Natalie J. Allen; David J. Stanley; Helen M. Williams; Sarah J. Ross

The extensive research examining relations between group member dissimilarity and outcome measures has yielded inconsistent results. In the present research, the authors used computer simulations to examine the impact that a methodological feature of such research, participant nonresponse, can have on dissimilarity-outcome relations. Results suggest that using only survey responders to calculate dissimilarity typically results in underestimation of true dissimilarity effects and that these effects can occur even when response rates are high.
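The attenuation mechanism examined in this simulation work can be illustrated with a brief sketch (hypothetical parameters, with the within-group standard deviation used as a simple dissimilarity index; this is not the authors' simulation code): true dissimilarity is computed from all group members, observed dissimilarity only from survey responders, and both are correlated with a group outcome.

    import numpy as np

    rng = np.random.default_rng(1)
    n_groups, group_size, response_rate = 500, 10, 0.6

    true_d, obs_d, outcome = [], [], []
    for _ in range(n_groups):
        scores = rng.normal(size=group_size)              # members' attribute scores
        responded = rng.random(group_size) < response_rate
        true_sd = scores.std(ddof=1)                      # dissimilarity from all members
        obs_sd = scores[responded].std(ddof=1) if responded.sum() > 1 else np.nan
        true_d.append(true_sd)
        obs_d.append(obs_sd)
        outcome.append(0.5 * true_sd + rng.normal())      # outcome driven by true dissimilarity

    true_d, obs_d, outcome = map(np.array, (true_d, obs_d, outcome))
    keep = ~np.isnan(obs_d)
    print("r(true dissimilarity, outcome):", np.corrcoef(true_d[keep], outcome[keep])[0, 1])
    print("r(responder-based dissimilarity, outcome):", np.corrcoef(obs_d[keep], outcome[keep])[0, 1])

In runs of this sketch, the responder-based correlation is noticeably smaller than the correlation computed from complete groups, which is the underestimation pattern the abstract describes.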


Perspectives on Psychological Science | 2014

Expectations for Replications: Are Yours Realistic?

David J. Stanley; Jeffrey R. Spence

Failures to replicate published psychological research findings have contributed to a “crisis of confidence.” Several reasons for these failures have been proposed, the most notable being questionable research practices and data fraud. We examine replication from a different perspective and illustrate that current intuitive expectations for replication are unreasonable. We used computer simulations to create thousands of ideal replications, with the same participants, wherein the only difference across replications was random measurement error. In the first set of simulations, study results differed substantially across replications as a result of measurement error alone. This raises questions about how researchers should interpret failed replication attempts, given the large impact that even modest amounts of measurement error can have on observed associations. In the second set of simulations, we illustrated the difficulties that researchers face when trying to interpret and replicate a published finding. We also assessed the relative importance of both sampling error and measurement error in producing variability in replications. Conventionally, replication attempts are viewed through the lens of verifying or falsifying published findings. We suggest that this is a flawed perspective and that researchers should adjust their expectations concerning replications and shift to a meta-analytic mind-set.
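A hypothetical sketch of the kind of "ideal replication" described here (same participants, with only the measurement error redrawn; the reliability and effect-size values are illustrative, not figures from the article) shows how much observed correlations can vary from measurement error alone:

    import numpy as np

    rng = np.random.default_rng(7)
    n, true_r, reliability = 100, 0.30, 0.80

    # Fixed true scores for the same participants across every replication.
    x_true = rng.normal(size=n)
    y_true = true_r * x_true + np.sqrt(1 - true_r**2) * rng.normal(size=n)

    err_var = (1 - reliability) / reliability  # error variance given unit true-score variance
    obs_rs = []
    for _ in range(1000):                      # ideal replications: only measurement error differs
        x_obs = x_true + rng.normal(scale=np.sqrt(err_var), size=n)
        y_obs = y_true + rng.normal(scale=np.sqrt(err_var), size=n)
        obs_rs.append(np.corrcoef(x_obs, y_obs)[0, 1])

    print("observed r across replications (2.5th, 50th, 97.5th percentiles):",
          np.percentile(obs_rs, [2.5, 50, 97.5]))

Even with identical participants and a fixed underlying effect, the observed correlations spread over a wide range, which is the point the simulations in the article make.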


Organizational Research Methods | 2007

Assessing the Impact of Nonresponse on Work Group Diversity Effects

Natalie J. Allen; David J. Stanley; Helen M. Williams; Sarah J. Ross

Research examining relations between work group diversity and outcome measures often relies on diversity scores that are calculated on the basis of individual responses to organizational surveys. When employees fail to respond to a survey, however, the resultant diversity score representing their group will be somewhat distorted. The authors conducted a series of computer simulations to examine the extent to which correlations between group diversity scores (derived from continuous or categorical variables) and outcome variables were attenuated by various forms of random and systematic participant nonresponse. Results indicate that random nonresponse, and many forms of systematic nonresponse, substantially attenuate mean observed correlations.


Emotion | 2009

Two-dimensional affective space: a new approach to orienting the axes.

David J. Stanley; John P. Meyer

What are the constructs that underlie affective experiences? Some authors have suggested Valence and Activation, whereas others have suggested Positive Activation and Negative Activation; both approaches are represented by different axis orientations in traditional two-mode (People x Adjectives) factor analysis. The authors provide new evidence for this debate by using three-mode (People x Adjectives x Occasions) parallel factor (PARAFAC) analysis to determine the appropriate axes (and hence constructs) for representing affective experiences. Unlike traditional factor analysis, with PARAFAC different orientations of the axes fit the data differently, so it is possible to determine the best-fitting axes. In Study 1, the authors assessed the extent to which the PARAFAC procedure was able to recover the axes defining a two-dimensional factor space under different conditions. In both Study 2 (N = 112) and Study 3 (N = 349), undergraduate students rated their emotional states on a variety of occasions. The best-fitting axes for two-dimensional affective space were Valence and Activation in both studies. Exploration of higher-dimensional solutions in Study 3 revealed a three-factor solution that, in addition to an activation factor, supported the separation of positive and negative emotions.
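For reference, the trilinear PARAFAC model for a People x Adjectives x Occasions data array has the standard textbook form (this notation is not taken from the article itself):

    x_{ijk} = \sum_{f=1}^{F} a_{if} \, b_{jf} \, c_{kf} + e_{ijk}

where a_{if}, b_{jf}, and c_{kf} are the person, adjective, and occasion loadings on factor f. Under mild conditions this decomposition is unique up to scaling and permutation of the factors, which is why, unlike two-mode factor analysis, different axis orientations produce different fit and a best-fitting orientation can be identified.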


PLOS ONE | 2016

Prediction Interval: What to Expect When You’re Expecting … A Replication

Jeffrey R. Spence; David J. Stanley

A challenge when interpreting replications is determining whether the results of a replication “successfully” replicate the original study. Looking for consistency between two studies is challenging because individual studies are susceptible to many sources of error that can cause study results to deviate from each other, and from the population effect, in unpredictable directions and magnitudes. In the current paper, we derive methods to compute a prediction interval, a range of results that can be expected in a replication due to chance (i.e., sampling error), for means and commonly used indexes of effect size: correlations and d-values. The prediction interval is calculable from objective study characteristics (i.e., the effect size of the original study and the sample sizes of the original study and planned replication) even when sample sizes across studies are unequal. The prediction interval provides an a priori method for assessing whether the difference between an original and replication result is consistent with what can be expected due to sampling error alone. We provide open-source software tools that allow researchers, reviewers, replicators, and editors to easily calculate prediction intervals.
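A minimal sketch of a prediction interval for a replication correlation, assuming the common Fisher z approach to combining the sampling error of the original and replication studies (the function below is purely illustrative and is not the authors' released software):

    import numpy as np
    from scipy import stats

    def correlation_prediction_interval(r_orig, n_orig, n_rep, level=0.95):
        """Approximate prediction interval for a replication correlation,
        accounting for sampling error in both studies (Fisher z approach)."""
        z = np.arctanh(r_orig)                                # Fisher z of the original r
        se = np.sqrt(1.0 / (n_orig - 3) + 1.0 / (n_rep - 3))  # combined sampling error
        crit = stats.norm.ppf(1 - (1 - level) / 2)
        return np.tanh(z - crit * se), np.tanh(z + crit * se)

    # Example: original study r = .30 with N = 100; planned replication with N = 150.
    print(correlation_prediction_interval(0.30, 100, 150))

The interval is noticeably wider than a naive reader might expect, which is why the authors argue for judging replication results against such intervals rather than against the original point estimate.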


Behavior Research Methods | 2011

Examining workgroup diversity effects: does playing by the (group-retention) rules help or hinder?

David J. Stanley; Natalie J. Allen; Helen M. Williams; Sarah J. Ross

Group diversity researchers are often faced with the problem of calculating diversity indices for groups that are incomplete due to participant nonresponse. Because participant nonresponse may attenuate the correlations that are observed between group diversity scores and outcome variables, some researchers use group-retention rules based on within-group response rates. With this approach, only those groups that have a within-group response rate at, or higher than, the rate prescribed by the group-retention rule are retained for subsequent analyses. We conducted two sets of experiments using computer simulations to determine the usefulness of group-retention rules. We found that group-retention rules are not a substitute for a high response rate and may decrease the accuracy of observed relations, and consequently, we advise against their use in diversity research.
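For concreteness, a group-retention rule of the kind evaluated here simply screens groups on their within-group response rate before diversity scores are calculated (hypothetical data and cutoff):

    # Hypothetical illustration of a group-retention rule: keep a group for
    # analysis only if its within-group response rate meets a cutoff (e.g., 80%).
    groups = [
        {"id": 1, "size": 10, "responders": 9},
        {"id": 2, "size": 8,  "responders": 4},
        {"id": 3, "size": 12, "responders": 10},
    ]

    cutoff = 0.80
    retained = [g for g in groups if g["responders"] / g["size"] >= cutoff]
    print([g["id"] for g in retained])   # groups 1 and 3 are retained; group 2 is dropped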


Scalable Uncertainty Management | 2011

Indirect elicitation of NIN-AND trees in causal model acquisition

Yang Xiang; Minh Truong; Jingyu Zhu; David J. Stanley; Blair Nonnecke

To specify a Bayes net, a conditional probability table, often of an effect conditioned on its n causes, needs to be assessed for each node. Its complexity is generally exponential in n, and hence scalability is an important concern for knowledge engineering. The non-impeding noisy-AND (NIN-AND) tree causal model reduces the complexity to linear while explicitly expressing both reinforcing and undermining interactions among causes. The key challenge in acquiring such a model from an expert is the elicitation of the NIN-AND tree topology. In this work, we propose and empirically evaluate two methods that indirectly acquire the tree topology through a small subset of elicited multi-causal probabilities. We demonstrate the effectiveness of the methods in both human-based experiments and simulation-based studies.
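As a point of reference for how such causal-interaction models keep elicitation linear in the number of causes, the sketch below implements the classic noisy-OR combination; it covers only reinforcing interactions, whereas the NIN-AND tree model described above also expresses undermining interactions, and the single-cause probabilities here are made up:

    from itertools import product

    # Single-cause probabilities p_i = P(effect | only cause i is present).
    # A noisy-OR model builds the full CPT from these n parameters,
    # instead of eliciting all 2**n conditional probabilities directly.
    p_single = [0.7, 0.5, 0.4]   # hypothetical values for three causes

    def noisy_or(active):
        """P(effect | the given subset of causes is active), noisy-OR combination."""
        prob_all_fail = 1.0
        for p, on in zip(p_single, active):
            if on:
                prob_all_fail *= (1.0 - p)
        return 1.0 - prob_all_fail

    # Enumerate the full conditional probability table from only n elicited numbers.
    for combo in product([0, 1], repeat=len(p_single)):
        print(combo, round(noisy_or(combo), 3))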


Canadian Journal of Behavioural Science | 2018

Meta-analysis of the relation between interview anxiety and interview performance.

Deborah M. Powell; David J. Stanley; Kayla N. Brown

We conducted a meta-analysis to estimate the effect of self-reported interview anxiety on job candidates’ interview performance. Correspondingly, we examined the extent to which this relation was moderated by anxiety measurement approaches, type of interview (mock vs. real), timing of the anxiety measurement (before vs. after the interview), age, and gender. The overall meta-analytic correlation of −.19 was moderated by measurement approach and type of interview. Additionally, we evaluated the contributing studies with respect to power/sample size and provide sample size guidance for future research. The overall negative relation of −.19 (a medium effect size in this research area) indicates that anxiety may have a meaningful impact on hiring decisions in competitive situations through a decrease in interview performance.


Advances in Methods and Practices in Psychological Science | 2018

Reproducible Tables in Psychology Using the apaTables Package

David J. Stanley; Jeffrey R. Spence

Growing awareness of how susceptible research is to errors, coupled with well-documented replication failures, has caused psychological researchers to move toward open science and reproducible research. In this Tutorial, to facilitate reproducible psychological research, we present a tool that creates reproducible tables that follow the American Psychological Association’s (APA’s) style. Our tool, apaTables, automates the creation of APA-style tables for commonly used statistics and analyses in psychological research: correlations, multiple regressions (with and without blocks), standardized mean differences, N-way independent-groups analyses of variance (ANOVAs), within-subjects ANOVAs, and mixed-design ANOVAs. All tables are saved as Microsoft Word documents, so they can be readily incorporated into manuscripts without manual formatting or transcription of values.


Organization Management Journal | 2011

Interpreting organizational survey results: a critical application of the self-serving bias

Peter A. Hausdorf; Stephen D. Risavy; David J. Stanley

Surveys are used extensively by researchers and practitioners in organizations to measure employee attitudes and assess organizational health. Survey items can reflect a wide range of topics, including employee attitudes, perceptions of management, and organizational culture. Surprisingly, the issue of whether employee-focused items produce more positive employee responses (vis-à-vis manager- or organization-focused items) has received little attention. Specifically, there may be self-serving biases in organizational survey responses that may lead to inaccurate diagnosis of organizational problems. We assess the impact of self-serving biases on the pattern of employee responses to organizational surveys. Results from two studies suggest that employees respond more positively to items that are self-focused and less positively to items that are other-focused. Therefore, to the extent that surveys contain both types of items, these biases may influence the diagnosis of organizational problems. In addition, results from the second study suggest that employees glorify themselves for both self-enhancement and social desirability reasons. Implications are discussed.

Collaboration


Top co-authors of David J. Stanley and their affiliations:

John P. Meyer, University of Western Ontario
Natalie J. Allen, University of Western Ontario
Sarah J. Ross, University of Western Ontario
Elyse R. Maltin, University of Western Ontario
Kate J. McInnis, University of Western Ontario
Leah D. Sheppard, University of Western Ontario