Publications


Featured research published by Charles R. Ebersole.


Psychological Science | 2014

The Rules of Implicit Evaluation by Race, Religion, and Age

Jordan Axt; Charles R. Ebersole; Brian A. Nosek

The social world is stratified. Social hierarchies are known but often disavowed as anachronisms or unjust. Nonetheless, hierarchies may persist in social memory. In three studies (total N > 200,000), we found evidence of social hierarchies in implicit evaluation by race, religion, and age. Participants implicitly evaluated their own racial group most positively and the remaining racial groups in accordance with the following hierarchy: Whites > Asians > Blacks > Hispanics. Similarly, participants implicitly evaluated their own religion most positively and the remaining religions in accordance with the following hierarchy: Christianity > Judaism > Hinduism or Buddhism > Islam. In a final study, participants of all ages implicitly evaluated age groups following this rule: children > young adults > middle-aged adults > older adults. These results suggest that the rules of social evaluation are pervasively embedded in culture and mind.


Proceedings of the National Academy of Sciences of the United States of America | 2018

The preregistration revolution

Brian A. Nosek; Charles R. Ebersole; Alexander DeHaven; David Mellor

Progress in science relies in part on generating hypotheses with existing observations and testing hypotheses with new observations. This distinction between postdiction and prediction is appreciated conceptually but is not respected in practice. Mistaking the generation of postdictions for the testing of predictions reduces the credibility of research findings. However, ordinary biases in human reasoning, such as hindsight bias, make it hard to avoid this mistake. An effective solution is to define the research questions and analysis plan before observing the research outcomes—a process called preregistration. Preregistration distinguishes analyses and outcomes that result from predictions from those that result from postdictions. A variety of practical strategies are available to make the best possible use of preregistration in circumstances that fall short of the ideal application, such as when the data are preexisting. Services are now available for preregistration across all disciplines, facilitating a rapid increase in the practice. Widespread adoption of preregistration will increase the distinctiveness between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
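To make the idea concrete (an illustration, not material from the paper): a preregistered confirmatory analysis behaves like a function whose hypothesis, test, alpha level, and exclusion rules are frozen before any data exist. A minimal Python sketch, with all names hypothetical:

    # Hypothetical sketch of a preregistered analysis plan; everything here
    # would be written and timestamped before any data are collected.
    from scipy import stats

    ALPHA = 0.05  # significance threshold, fixed in advance

    def preregistered_analysis(control, treatment):
        """Pre-specified confirmatory test: Welch's t-test on the outcome.

        Exclusion rule, also fixed in advance: drop values below 0.3.
        """
        control = [x for x in control if x >= 0.3]
        treatment = [x for x in treatment if x >= 0.3]
        result = stats.ttest_ind(treatment, control, equal_var=False)
        return {"t": result.statistic, "p": result.pvalue,
                "reject_null": result.pvalue < ALPHA}

    # Data are collected only after the plan above is registered:
    # print(preregistered_analysis(control_data, treatment_data))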


Social Psychological and Personality Science | 2016

The Effects of Disgust on Moral Judgments: Testing Moderators

David J. Johnson; Jessica Wortman; Felix Cheung; Megan Hein; Richard E. Lucas; M. Brent Donnellan; Charles R. Ebersole; Rachel K. Narr

There is evidence that inducing feelings of disgust increases the severity of moral judgments, but the size of this association has been questioned by a recent meta-analysis. Based on prior research and theory, we tested whether the effects of disgust on moral judgments might be moderated by sensitivity to bodily states (Studies 1 and 2) and the accessibility of mood (Study 2) in two large samples (total N = 1,412). We did not find that disgust directly increased the severity of moral judgments, nor did we find evidence that these moderators influenced the effect of disgust. Thus, the current studies do not support large effects for induced disgust or for the two presumed moderators of these effects.
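For readers unfamiliar with the method (a generic sketch, not the authors' code): a moderator is conventionally tested as an interaction term in a regression, and a near-zero interaction coefficient is what "no evidence of moderation" looks like. The column names below are hypothetical stand-ins for the study variables:

    # Generic moderation test: does `sensitivity` change the effect of
    # `disgust` on `judgment`? The interaction coefficient carries the answer.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame standing in for the study variables.
    df = pd.DataFrame({
        "disgust":     [0, 1, 0, 1, 0, 1, 0, 1],   # induced-disgust condition
        "sensitivity": [2, 5, 3, 4, 1, 5, 2, 3],   # bodily-state sensitivity
        "judgment":    [3, 4, 2, 5, 3, 4, 2, 4],   # severity of moral judgment
    })

    # `disgust * sensitivity` expands to both main effects plus the interaction.
    model = smf.ols("judgment ~ disgust * sensitivity", data=df).fit()
    print(model.summary())  # a small disgust:sensitivity term = no moderation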


PLOS Biology | 2016

Scientists’ Reputations Are Based on Getting It Right, Not Being Right

Charles R. Ebersole; Jordan Axt; Brian A. Nosek

Replication is vital for increasing the precision and accuracy of scientific claims. However, when replications “succeed” or “fail,” they could have reputational consequences for the claim’s originators. Surveys of United States adults (N = 4,786), undergraduates (N = 428), and researchers (N = 313) showed that reputational assessments of scientists were based more on how they pursue knowledge and respond to replication evidence than on whether their initial results were true. When comparing one scientist who produced boring but certain results with another who produced exciting but uncertain results, opinion favored the former, despite researchers’ belief that the latter would be more rewarded. Considering idealized views of scientific practices offers an opportunity to address incentives to reward both innovation and verification.


Proceedings of the National Academy of Sciences of the United States of America | 2018

Reply to Ledgerwood: Predictions without analysis plans are inert

Brian A. Nosek; Charles R. Ebersole; Alexander DeHaven; David Mellor

Ledgerwood (1) argues that there are two independent uses of preregistration that are conflated in Nosek et al. (2) and elsewhere: “Preregistering theoretical predictions enables theory falsifiability. Preregistering analysis plans enables type I error control.” We appreciate that the comment elevates the complementary roles of prediction and analysis plans in preregistration. We disagree that they are conflated in the sense of being “two types of preregistration.” To enable theory falsification, we agree that a preregistration should offer a prediction derived from theory and provide the theoretical context. However, a prediction without an analysis plan is inert for falsification. An analysis plan is necessary to specify how the prediction will be tested with …


Advances in Methods and Practices in Psychological Science | 2018

The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network

Hannah Moshontz; Lorne Campbell; Charles R. Ebersole; Hans IJzerman; Heather L. Urry; Patrick S. Forscher; Jon Grahe; Randy J. McCarthy; Erica D. Musser; John Protzko; et al.

Concerns about the veracity of psychological research have been growing. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions or replicate prior research in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time limited), efficient (in that structures and principles are reused for different projects), decentralized, diverse (in both subjects and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside the network). The PSA and other approaches to crowdsourced psychological science will advance understanding of mental processes and behaviors by enabling rigorous research and systematic examination of its generalizability.
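The power problem motivating the PSA is easy to quantify. As a rough illustration (not from the paper), the standard normal-approximation formula for a two-group comparison shows why small effects demand samples beyond a single lab:

    # Approximate per-group n for a two-sample t-test (normal approximation):
    # n = 2 * ((z_{1 - alpha/2} + z_{power}) / d)^2
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)
        z_power = norm.ppf(power)
        return 2 * ((z_alpha + z_power) / d) ** 2

    # A "small" effect (d = 0.2) at 80% power needs roughly 392 participants
    # per group, far beyond many single-lab samples; hence multi-site networks.
    print(round(n_per_group(0.2)))  # -> 392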


Journal of Experimental Social Psychology | 2016

Many Labs 3: Evaluating participant pool quality across the academic semester via replication

Charles R. Ebersole; Olivia E. Atherton; Aimee L. Belanger; Hayley M Skulborstad; Jill Allen; Jonathan B. Banks; Erica Baranski; Michael J. Bernstein; Diane B. V. Bonfiglio; Leanne Boucher; Elizabeth R. Brown; Nancy I. Budiman; Athena H. Cairo; Colin A. Capaldi; Christopher R. Chartier; Joanne M. Chung; David C. Cicero; Jennifer A. Coleman; John G. Conway; William E. Davis; Thierry Devos; Melody M. Fletcher; Komi German; Jon Grahe; Anthony D. Hermann; Joshua A. Hicks; Nathan Honeycutt; Brandon Thomas Humphrey; Matthew Janus; David J. Johnson; et al.


Archive | 2016

An Unintentional, Robust, and Replicable Pro-Black Bias in Social Judgment

Jordan Axt; Charles R. Ebersole; Brian A. Nosek


Journal of Experimental Social Psychology | 2017

Observe, hypothesize, test, repeat: Luttrell, Petty and Xu (2017) demonstrate good science

Charles R. Ebersole; Ravin Alaei; Olivia E. Atherton; Michael J. Bernstein; Mitch Brown; Christopher R. Chartier; Lisa Y. Chung; Anthony D. Hermann; Jennifer A. Joy-Gaba; Marsha J. Line; Nicholas O. Rule; Donald F. Sacco; Leigh Ann Vaughn; Brian A. Nosek


Archive | 2012

A Meta-Analysis of Procedures to Change Implicit Measures

Patrick S. Forscher; Calvin Lai; Jordan Axt; Charles R. Ebersole; Michelle Herman; Patricia G. Devine; Brian A. Nosek

Collaboration


Dive into Charles R. Ebersole's collaborations.

Top Co-Authors

Jordan Axt, University of Virginia

Calvin Lai, University of Virginia

İlker Dalğar, Middle East Technical University