Gabor Simonovits
Stanford University
Publications
Featured research published by Gabor Simonovits.
Science | 2014
Annie Franco; Neil Malhotra; Gabor Simonovits
The file drawer is full. Should we worry? Experiments that produce null results face a higher barrier to publication than those that yield statistically significant differences. Whether this is a problem depends on how many null but otherwise valid results might be trapped in the file drawer. Franco et al. use a Time-sharing Experiments in the Social Sciences archive of nearly 250 peer-reviewed proposals of social science experiments conducted on nationally representative samples. They find that only 10 out of 48 null results were published, whereas 56 out of 91 studies with strongly significant results made it into a journal. Science, this issue p. 1502

Fully half of peer-reviewed and implemented social science experiments are not published. We studied publication bias in the social sciences by analyzing a known population of conducted studies—221 in total—in which there is a full accounting of what is published and unpublished. We leveraged Time-sharing Experiments in the Social Sciences (TESS), a National Science Foundation–sponsored program in which researchers propose survey-based experiments to be run on representative samples of American adults. Because TESS proposals undergo rigorous peer review, the studies in the sample all exceed a substantial quality threshold. Strong results are 40 percentage points more likely to be published than are null results and 60 percentage points more likely to be written up. We provide direct evidence of publication bias and identify the stage of research production at which publication bias occurs: Authors do not write up and submit null findings.
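The roughly 40-percentage-point gap follows directly from the counts quoted in the summary. A minimal sketch (not from the paper's replication materials) checking that arithmetic:

```python
# Counts quoted in the abstract: published / total, by result type.
null_published, null_total = 10, 48
strong_published, strong_total = 56, 91

null_rate = null_published / null_total        # share of null results published
strong_rate = strong_published / strong_total  # share of strong results published
gap = 100 * (strong_rate - null_rate)          # gap in percentage points

print(f"null results published:   {null_rate:.1%}")
print(f"strong results published: {strong_rate:.1%}")
print(f"gap: {gap:.0f} percentage points")
```

The gap comes out at about 41 percentage points, consistent with the "40 percentage points more likely to be published" claim.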
Social Psychological and Personality Science | 2016
Annie Franco; Neil Malhotra; Gabor Simonovits
Many scholars have raised concerns about the credibility of empirical findings in psychology, arguing that the proportion of false positives reported in the published literature dramatically exceeds the rate implied by standard significance levels. A major contributor to false positives is the practice of reporting only a subset of the potentially relevant statistical analyses pertaining to a research project. This study is the first to provide direct evidence of selective underreporting in psychology experiments. To overcome the problem that the complete experimental design and full set of measured variables are not accessible for most published research, we identify a population of published psychology experiments from a competitive grant program for which questionnaires and data are made publicly available because of an institutional rule. We find that about 40% of studies fail to fully report all experimental conditions and about 70% of studies do not report all outcome variables included in the questionnaire. Reported effect sizes are about twice as large as unreported effect sizes and are about 3 times more likely to be statistically significant.
Journal of Experimental Political Science | 2017
Annie Franco; Neil Malhotra; Gabor Simonovits; L. J. Zigerell
Weighting techniques are employed to generalize results from survey experiments to populations of theoretical and substantive interest. Although weighting is often viewed as a second-order methodological issue, these adjustment methods invoke untestable assumptions about the nature of sample selection and potential heterogeneity in the treatment effect. Therefore, although weighting is a useful technique in estimating population quantities, it can introduce bias and also be used as a researcher degree of freedom. We review survey experiments published in three major journals from 2000 to 2015 and find that there are no standard operating procedures for weighting survey experiments. We argue that all survey experiments should report the sample average treatment effect (SATE). Researchers seeking to generalize to a broader population can weight to estimate the population average treatment effect (PATE), but should discuss the construction and application of weights in a detailed and transparent manner given the possibility that weighting can introduce bias.
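The SATE/PATE distinction the abstract draws can be illustrated in a few lines: the SATE is an unweighted difference in means among the sample, while a PATE estimate reweights respondents by survey weights. The sketch below uses entirely hypothetical simulated data and weights (nothing here is from the paper), only to show the mechanics:

```python
import numpy as np

# Hypothetical simulated survey experiment (illustration only, not the paper's data).
rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)                # random treatment assignment
weight = rng.uniform(0.5, 2.0, n)            # hypothetical survey weights
outcome = 1.0 * treat + rng.normal(0, 1, n)  # simulated outcome, true effect = 1.0

# SATE: simple (unweighted) difference in means within the sample.
sate = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# PATE estimate: difference in weighted means, using the survey weights.
def wmean(y, w):
    return np.sum(w * y) / np.sum(w)

pate = (wmean(outcome[treat == 1], weight[treat == 1])
        - wmean(outcome[treat == 0], weight[treat == 0]))

print(f"SATE estimate: {sate:.2f}")
print(f"PATE estimate: {pate:.2f}")
```

Here the weights are independent of the outcome, so both estimates land near the true effect; the paper's point is that when weights correlate with effect heterogeneity, the two quantities can diverge, and the weighting choice becomes a researcher degree of freedom.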
The Journal of Politics | 2018
Erik Peterson; Gabor Simonovits
What happens after issue frames shape public opinion? We offer an account of the downstream effects of issue frames on candidate choice. We then use three studies combining issue framing experiments with conjoint candidate choice experiments to directly assess these downstream effects. Despite an ideal setting for elite influence on public opinion, we find that frames ultimately have modest effects on how the public later evaluates politicians. Our theoretical framework highlights two sources of this disconnect. Frame-induced opinion change is only one component, often outweighed by other factors, in candidate choice, and the issues most amenable to framing are the least relevant for evaluating candidates. This introduces a new consideration into debates about the political consequences of issue frames. Even after they change the public’s policy opinions, issue frames may still have limited implications for other political outcomes.
Journal of Experimental Political Science | 2017
Erik Peterson; Gabor Simonovits
Can politicians use targeted messages to offset position taking that would otherwise reduce their public support? We examine the effect of a politician’s justification for their tax policy stance on public opinion and identify limits on the ability of justifications to generate leeway for incongruent position taking on this issue. We draw on political communication research to establish expectations about the heterogeneous effects of justifications that employ either evidence or values based on whether or not constituents agree with the position a politician takes. In two survey experiments, we find small changes in support in response to these types of messages among targeted groups, but rule out large benefits for politicians to selectively target policy justifications toward subsets of the public. We also highlight a potential cost to selective messaging by showing that when these targeted messages reach unintended audiences they can backfire and reduce a candidate’s support.
Electoral Studies | 2012
Gabor Simonovits
Electoral Studies | 2014
Eric Chen; Gabor Simonovits; Jon A. Krosnick; Josh Pasek
Political Analysis | 2015
Annie Franco; Neil Malhotra; Gabor Simonovits
Public Choice | 2014
Aron Kiss; Gabor Simonovits
Political Behavior | 2015
Gabor Simonovits