Publication


Featured research published by Uri Simonsohn.


Psychological Science | 2011

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

Joseph P. Simmons; Leif D. Nelson; Uri Simonsohn

In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
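
The core demonstration is easy to reproduce. Below is a minimal sketch, not the authors' original simulation code, of two of the degrees of freedom the article examines: testing two correlated dependent variables and adding observations after an interim look at the data. The cell sizes, the correlation between the DVs, and the single top-up are illustrative assumptions; both groups are drawn from the same null distribution, yet the chance of obtaining at least one p < .05 lands well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def flexible_study(n1=20, n_extra=10, r=0.5):
    """One null study with two researcher degrees of freedom:
    two correlated DVs, plus optional stopping (one top-up of
    subjects if the first look is not significant)."""
    cov = [[1.0, r], [r, 1.0]]
    a = rng.multivariate_normal([0, 0], cov, size=n1 + n_extra)  # control
    b = rng.multivariate_normal([0, 0], cov, size=n1 + n_extra)  # treatment
    for n in (n1, n1 + n_extra):      # interim look, then final look
        for dv in (0, 1):             # either dependent variable counts
            if stats.ttest_ind(a[:n, dv], b[:n, dv]).pvalue < .05:
                return True           # "significant" despite no true effect
    return False

sims = 10_000
rate = sum(flexible_study() for _ in range(sims)) / sims
print(f"false-positive rate with flexibility: {rate:.3f}")  # well above .05
```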


Science | 2015

Promoting an open research culture

Brian A. Nosek; George Alter; George C. Banks; Denny Borsboom; Sara Bowman; S. J. Breckler; Stuart Buck; Christopher D. Chambers; G. Chin; Garret Christensen; M. Contestabile; A. Dafoe; E. Eich; J. Freese; Rachel Glennerster; D. Goroff; Donald P. Green; B. Hesse; Macartan Humphreys; John Ishiyama; Dean Karlan; A. Kraut; Arthur Lupia; P. Mabry; T. Madon; Neil Malhotra; E. Mayo-Wilson; M. McNutt; Edward Miguel; E. Levy Paluck

Author guidelines for journals could help to promote transparency, openness, and reproducibility. Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).


Nature Human Behaviour | 2017

A manifesto for reproducible science

Marcus R. Munafò; Brian A. Nosek; Dorothy V. M. Bishop; Katherine S. Button; Christopher D. Chambers; Nathalie Percie du Sert; Uri Simonsohn; Eric-Jan Wagenmakers; Jennifer J. Ware; John P. A. Ioannidis

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.


Psychological Science | 2015

Small Telescopes: Detectability and the Evaluation of Replication Results

Uri Simonsohn

This article introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are consistent with an effect size big enough to have been detectable in the original study. The approach is demonstrated by examining replications of three well-known findings. Its benefits include the following: (a) differentiating “unsuccessful” replication attempts (i.e., studies yielding p > .05) that are too noisy from those that actively indicate the effect is undetectably different from zero, (b) “protecting” true findings from underpowered replications, and (c) arriving at intuitively compelling inferences in general and for the revisited replications in particular.
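
The benchmark is straightforward to compute for simple designs. Below is a minimal sketch, not the paper's own code, assuming a two-cell design with equal cell sizes and Cohen's d as the effect size: d33 is the effect the original study would have detected with 33% power, and the replication estimate is tested one-sided against it with the noncentral t distribution. The numbers in the example call are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def d33(n_per_cell):
    """Cohen's d that a two-cell study with this many subjects per cell
    would detect with 33% power (two-tailed alpha = .05)."""
    df = 2 * n_per_cell - 2
    tcrit = stats.t.ppf(0.975, df)
    def power(d):
        ncp = d * np.sqrt(n_per_cell / 2)
        return 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)
    return brentq(lambda d: power(d) - 1 / 3, 1e-6, 3.0)

def small_telescope_test(d_rep, n_rep, n_orig):
    """One-sided p-value that the replication effect is smaller than the
    original study's d33; a small p says the effect, even if nonzero, is
    too small for the original study to have detected."""
    benchmark = d33(n_orig)
    df = 2 * n_rep - 2
    t_obs = d_rep * np.sqrt(n_rep / 2)   # convert the replication d to a t
    ncp = benchmark * np.sqrt(n_rep / 2)
    return benchmark, stats.nct.cdf(t_obs, df, ncp)

# hypothetical numbers: original n = 30/cell, big replication finds d = 0.10
bench, p = small_telescope_test(d_rep=0.10, n_rep=200, n_orig=30)
print(f"d33 = {bench:.2f}; p(replication < d33) = {p:.3f}")
```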


Perspectives on Psychological Science | 2014

P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results

Uri Simonsohn; Leif D. Nelson; Joseph P. Simmons

Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
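
The estimation idea fits in a few lines. The sketch below is an illustration restricted to equal-n two-cell t-tests, not the authors' implementation: for a candidate effect size, each significant t is converted into its probability conditional on significance (its "pp-value"), and the estimate is the d under which those pp-values look uniform, scored here with a Kolmogorov-Smirnov loss. The example t values and degrees of freedom are made up.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def pcurve_dhat(t_values, dfs, alpha=0.05):
    """Estimate the true effect size from significant two-cell t-tests
    only: find the d whose implied p-curve best fits the observed one,
    using a KS loss on the conditional 'pp-values'."""
    t_values = np.asarray(t_values, float)
    dfs = np.asarray(dfs, float)
    n_per_cell = (dfs + 2) / 2                 # df = 2n - 2 for each test
    tcrit = stats.t.ppf(1 - alpha / 2, dfs)

    def loss(d):
        ncp = d * np.sqrt(n_per_cell / 2)
        # prob. of a t this extreme, conditional on having been significant;
        # under the true d these are uniform on [0, 1]
        pp = (1 - stats.nct.cdf(t_values, dfs, ncp)) / \
             (1 - stats.nct.cdf(tcrit, dfs, ncp))
        return stats.kstest(pp, "uniform").statistic

    return minimize_scalar(loss, bounds=(0, 2), method="bounded").x

# made-up significant results: (t, df) pairs from three published studies
print(pcurve_dhat(t_values=[2.3, 2.8, 2.1], dfs=[38, 58, 48]))
```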


Science | 2014

Promoting Transparency in Social Science Research

Edward Miguel; Colin F. Camerer; Katherine Casey; Jacob Cohen; Kevin M. Esterling; Alan S. Gerber; Rachel Glennerster; Donald P. Green; Macartan Humphreys; Guido W. Imbens; David D. Laitin; T. Madon; Leif D. Nelson; Brian A. Nosek; Maya L. Petersen; R. Sedlmayr; Joseph P. Simmons; Uri Simonsohn; M. J. van der Laan

Social scientists should adopt higher transparency standards to improve the quality and credibility of research. There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. Changes have been particularly pronounced in development economics, where hundreds of randomized trials have been carried out over the last decade. When experimentation is difficult or impossible, researchers are using quasi-experimental designs. Governments and advocacy groups display a growing appetite for evidence-based policy-making. In 2005, Mexico established an independent government agency to rigorously evaluate social programs, and in 2012, the U.S. Office of Management and Budget advised federal agencies to present evidence from randomized program evaluations in budget requests (1, 2).


Psychological Science | 2013

Just Post It: The Lesson from Two Cases of Fabricated Data Detected by Statistics Alone

Uri Simonsohn

I argue that requiring authors to post the raw data supporting their published results has the benefit, among many others, of making fraud much less likely to go undetected. I illustrate this point by describing two cases of suspected fraud I identified exclusively through statistical analysis of reported means and standard deviations. Analyses of the raw data behind these published results provided invaluable confirmation of the initial suspicions, ruling out benign explanations (e.g., reporting errors, unusual distributions), identifying additional signs of fabrication, and also ruling out one of the suspected fraudster’s explanations for his anomalous results. If journals, granting agencies, universities, or other entities overseeing research promoted or required data posting, it seems inevitable that fraud would be reduced.
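
One tell-tale pattern in such cases is a set of means from independent conditions that are more similar to one another than sampling error permits. The sketch below illustrates only that core logic; it is not the analysis from the article, which conditioned on further information and drew on the raw data where available. The reported means, standard deviation, and cell size in the example are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def excessive_similarity(means, sd, n, sims=100_000):
    """How often would k independent condition means be at least as
    similar (low-variance) as the reported ones, given ordinary
    sampling error?  A tiny probability flags 'too good to be true'."""
    means = np.asarray(means, float)
    grand, k = means.mean(), len(means)
    observed_spread = means.std(ddof=1)
    # simulate k condition means under ordinary sampling error
    sim_means = rng.normal(grand, sd / np.sqrt(n), size=(sims, k))
    sim_spread = sim_means.std(axis=1, ddof=1)
    return (sim_spread <= observed_spread).mean()

# hypothetical report: ten condition means that barely differ
reported = [5.1, 5.0, 5.2, 5.1, 5.0, 5.1, 5.2, 5.0, 5.1, 5.1]
print(excessive_similarity(reported, sd=2.0, n=15))
```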


Psychological Science | 2011

Round Numbers as Goals: Evidence From Baseball, SAT Takers, and the Lab

Devin G. Pope; Uri Simonsohn

Where do people’s reference points come from? We conjectured that round numbers in performance scales act as reference points and that individuals exert effort to perform just above rather than just below such numbers. In Study 1, we found that professional baseball players modify their behavior as the season is about to end, seeking to finish with a batting average just above rather than below .300. In Study 2, we found that high school students are more likely to retake the SAT after obtaining a score just below rather than above a round number. In Study 3, we conducted an experiment employing hypothetical scenarios and found that participants reported a greater desire to exert more effort when their performance was just short of rather than just above a round number.
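
Study 1's bunching pattern amounts to a simple check: compare how many season-final averages land in a narrow bin just above a round threshold with how many land just below it. The sketch below is illustrative only; it runs the comparison on smooth simulated data rather than the paper's baseball records, as a placebo in which no asymmetry should appear.

```python
import numpy as np

rng = np.random.default_rng(2)

def just_above_vs_below(avgs, threshold=0.300, width=0.005):
    """Count values landing in a narrow bin just above vs. just below a
    round-number threshold; without goal-directed effort the two bins
    should be about equally full."""
    avgs = np.asarray(avgs)
    above = ((avgs >= threshold) & (avgs < threshold + width)).sum()
    below = ((avgs >= threshold - width) & (avgs < threshold)).sum()
    return int(above), int(below)

# placebo run on smooth simulated averages: counts come out roughly equal
fake_seasons = rng.normal(0.27, 0.03, size=5_000)
print(just_above_vs_below(fake_seasons))
```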


Collaboration


Dive into Uri Simonsohn's collaborations.

Top Co-Authors

Edward Miguel
University of California

Rachel Glennerster
Massachusetts Institute of Technology

T. Madon
University of California