Emma Heikensten
Stockholm School of Economics
Publications
Featured research published by Emma Heikensten.
Science | 2016
Colin F. Camerer; Anna Dreber; Eskil Forsell; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Johan Almenberg; Adam Altmejd; Taizan Chan; Emma Heikensten; Felix Holzmeister; Taisuke Imai; Siri Isaksson; Gideon Nave; Thomas Pfeiffer; Michael Razen; Hang Wu
Another social science looks at itself: experimental economists have joined the reproducibility discussion by replicating selected published experiments from two top-tier journals in economics. Camerer et al. found that two-thirds of the 18 studies examined yielded replicable estimates of effect size and direction. This proportion is somewhat lower than unaffiliated experts were willing to bet in an associated prediction market, but roughly in line with expectations from sample sizes and P values. By several metrics, economics experiments do replicate, although not as often as predicted.

The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and all had a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
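The abstract's power requirement has a concrete arithmetic reading. Below is a minimal sketch, not the paper's code: it assumes a two-sided two-sample t-test and takes the original study's standardized effect size d as input, using the standard normal approximation for the per-group sample size.

```python
# Minimal sketch, not the paper's code: approximate per-group sample size
# for a two-sided two-sample t-test with 90% power at the 5% level, via
#   n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
# where d is the original study's standardized effect size (assumed input).
import math
from statistics import NormalDist

def required_n_per_group(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~1.28 for power = 0.90
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# An original effect of d = 0.5 would call for roughly 85 subjects per group.
print(required_n_per_group(0.5))  # -> 85
```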
Nature Human Behaviour | 2018
Colin F. Camerer; Anna Dreber; Felix Holzmeister; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Gideon Nave; Brian A. Nosek; Thomas Pfeiffer; Adam Altmejd; Nick Buttrick; Taizan Chan; Yiling Chen; Eskil Forsell; Anup Gampa; Emma Heikensten; Lily Hummer; Taisuke Imai; Siri Isaksson; Dylan Manfredi; Julia Rose; Eric-Jan Wagenmakers; Hang Wu
Being able to replicate scientific findings is crucial for scientific progress [1–15]. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 [16–36]. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times larger than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Camerer et al. carried out replications of 21 Science and Nature social science experiments, successfully replicating 13 out of 21 (62%). Effect sizes of replications were about half the size of the originals.
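For intuition on why the paper reports several complementary indicators rather than a single rate, here is an illustrative sketch (using only the counts quoted in the abstract, not the paper's analysis) of how uncertain a replication rate estimated from 21 studies is:

```python
# Illustrative only, using counts quoted in the abstract: 13 of 21
# replications were significant in the original direction. A normal-
# approximation 95% confidence interval shows how noisy a rate
# estimated from 21 studies is.
import math
from statistics import NormalDist

successes, n = 13, 21
p_hat = successes / n
z = NormalDist().inv_cdf(0.975)
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"replication rate {p_hat:.2f}, "
      f"95% CI {p_hat - half_width:.2f} to {p_hat + half_width:.2f}")
# -> replication rate 0.62, 95% CI 0.41 to 0.83
```

The interval is wide, which is one reason the paper also reports complementary indicators and a Bayesian true-positive analysis alongside the headline rate.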
Royal Society Open Science | 2015
Marcus R. Munafò; Thomas Pfeiffer; Adam Altmejd; Emma Heikensten; Johan Almenberg; Alexander J Bird; Yiling Chen; Brad Wilson; Magnus Johannesson; Anna Dreber
The 2014 Research Excellence Framework (REF2014) was conducted to assess the quality of research carried out at higher education institutions in the UK over a six-year period. However, the process was criticized for being expensive and bureaucratic, and it was argued that similar information could be obtained more simply from various existing metrics. We were interested in whether a prediction market on the outcome of REF2014 for 33 chemistry departments in the UK would provide information similar to that obtained during the REF2014 process. Prediction markets have become increasingly popular as a means of capturing what is colloquially known as the ‘wisdom of crowds’, and enable individuals to trade ‘bets’ on whether a specific outcome will occur or not. They have been shown to predict various outcomes successfully in a number of domains (e.g. sport, entertainment and politics), but have rarely been tested against outcomes based on expert judgements such as those that formed the basis of REF2014.
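To make the trading mechanism concrete, below is a minimal sketch of a logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets. The paper does not state that it used LMSR; the function names, liquidity parameter b, and traded quantities here are illustrative assumptions.

```python
# Minimal sketch of a logarithmic market scoring rule (LMSR), a common
# automated market maker for prediction markets. The paper does not say
# it used LMSR; the liquidity parameter b and quantities below are
# illustrative assumptions.
import math

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    # Cost function C(q); a trade moving holdings from q to q' costs C(q') - C(q).
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    # Instantaneous YES price, interpretable as the market's implied probability.
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(lmsr_price(0, 0))                    # 0.5 before any trades
cost = lmsr_cost(50, 0) - lmsr_cost(0, 0)  # price of buying 50 YES shares
print(f"cost of 50 YES shares: {cost:.2f}")
print(lmsr_price(50, 0))                   # ~0.62: implied probability rises
```

The price after each trade is the market's aggregate belief, which is what the paper compares against the REF2014 expert judgements.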
Archive | 2016
Eskil Forsell; Magnus Johannesson; Colin F. Camerer; Adam Altmejd; Siri Isaksson; Emma Heikensten; Anna Dreber Almenberg; Gideon Nave; Thomas Pfeiffer; Taisuke Imai
Archive | 2016
Felix Holzmeister; Anna Dreber Almenberg; Magnus Johannesson; Adam Altmejd; Emma Heikensten; Siri Isaksson