
Publications


Featured research published by Siri Isaksson.


Science | 2016

Evaluating replicability of laboratory experiments in economics

Colin F. Camerer; Anna Dreber; Eskil Forsell; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Johan Almenberg; Adam Altmejd; Taizan Chan; Emma Heikensten; Felix Holzmeister; Taisuke Imai; Siri Isaksson; Gideon Nave; Thomas Pfeiffer; Michael Razen; Hang Wu

Another social science looks at itself: Experimental economists have joined the reproducibility discussion by replicating selected published experiments from two top-tier journals in economics. Camerer et al. found that two-thirds of the 18 studies examined yielded replicable estimates of effect size and direction. This proportion is somewhat lower than unaffiliated experts were willing to bet in an associated prediction market, but roughly in line with expectations from sample sizes and P values. By several metrics, economics experiments do replicate, although not as often as predicted.

The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
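The design target mentioned above (at least 90% power to detect the original effect size at the 5% significance level) translates directly into a minimum sample size. A minimal sketch, assuming a two-sample t-test and a placeholder effect size of Cohen's d = 0.5 (the actual pre-registered plans used each study's own original estimate):

```python
# Illustrative replication power analysis: smallest sample size per group
# for a two-sided two-sample t-test to reach 90% power at alpha = 0.05.
# Cohen's d = 0.5 is a placeholder, not a value from any replicated study.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # assumed original standardized effect size
    alpha=0.05,             # significance level used in the replications
    power=0.90,             # minimum power targeted by the analysis plans
    alternative='two-sided',
)
print(f"required n per group: {n_per_group:.0f}")  # roughly 85 per group
```

Smaller original effects push the required sample size up roughly quadratically, which is why replications of weak effects need far larger samples than the original studies.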


Proceedings of the National Academy of Sciences of the United States of America | 2015

Using prediction markets to estimate the reproducibility of scientific research

Anna Dreber; Thomas Pfeiffer; Johan Almenberg; Siri Isaksson; Brad Wilson; Yiling Chen; Brian A. Nosek; Magnus Johannesson

Significance: There is increasing concern about the reproducibility of scientific research. For example, the costs associated with irreproducible preclinical research alone have recently been estimated at US$28 billion a year in the United States. However, there are currently no mechanisms in place to quickly identify findings that are unlikely to replicate. We show that prediction markets are well suited to bridge this gap. Prediction markets set up to estimate the reproducibility of 44 studies published in prominent psychology journals and replicated in The Reproducibility Project: Psychology predict the outcomes of the replications well and outperform a survey of individual forecasts.

Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants' individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a "statistically significant" finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.
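The interplay between low priors and significant findings described above can be made concrete with Bayes' rule. A minimal sketch, assuming the 9% median prior reported in the paper together with illustrative power values (80% for an original study, 90% for a replication) and a 5% false-positive rate; the power and alpha values are assumptions for illustration, not estimates from the paper:

```python
# Posterior probability that a tested hypothesis is true after observing
# significant results, via Bayes' rule:
#   P(true | sig) = P(sig | true) * P(true) / P(sig)
# The 9% prior is the paper's reported median; the power values and the
# false-positive rate are illustrative assumptions.
def posterior(prior: float, power: float, alpha: float) -> float:
    return power * prior / (power * prior + alpha * (1.0 - prior))

p0 = 0.09                                   # median prior probability of a true hypothesis
p1 = posterior(p0, power=0.80, alpha=0.05)  # after one significant original finding
p2 = posterior(p1, power=0.90, alpha=0.05)  # after a confirming well-powered replication
print(f"after original: {p1:.2f}; after replication: {p2:.2f}")
# -> after original: 0.61; after replication: 0.97
```

Under these assumptions a single significant result leaves the hypothesis only about 61% likely to be true, while a confirming well-powered replication pushes it above 95%, which is the paper's point that significant findings need confirmation.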


Nature Human Behaviour | 2018

Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

Colin F. Camerer; Anna Dreber; Felix Holzmeister; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Gideon Nave; Brian A. Nosek; Thomas Pfeiffer; Adam Altmejd; Nick Buttrick; Taizan Chan; Yiling Chen; Eskil Forsell; Anup Gampa; Emma Heikensten; Lily Hummer; Taisuke Imai; Siri Isaksson; Dylan Manfredi; Julia Rose; Eric-Jan Wagenmakers; Hang Wu

Being able to replicate scientific findings is crucial for scientific progress [1–15]. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 [16–36]. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Camerer et al. carried out replications of 21 Science and Nature social science experiments, successfully replicating 13 out of 21 (62%). Effect sizes of replications were about half of the size of the originals.
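A back-of-envelope check (a simplification of the paper's full Bayesian analysis, not a reproduction of it) shows how a 67% true-positive rate squares with the observed 62% replication rate:

```python
# Expected replication rate under a simple mixture model: true effects
# replicate with some average power, while false positives come out
# significant in the original direction only by chance.
# The 0.67 true-positive rate is from the paper; the 0.88 power is an
# illustrative assumption (actual power fell short of the planned 90%+
# because true effects were smaller than the original estimates).
tpr = 0.67          # estimated true-positive rate (reported in the paper)
power_true = 0.88   # assumed average power against the true effect sizes
alpha_dir = 0.025   # same-direction significance by chance for a false positive
expected_rate = tpr * power_true + (1.0 - tpr) * alpha_dir
print(f"expected replication rate: {expected_rate:.2f}")  # ~0.60, near the observed 0.62
```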


Archive | 2016

Replication of Duffy & Puzzello 2014

Eskil Forsell; Magnus Johannesson; Colin F. Camerer; Adam Altmejd; Siri Isaksson; Emma Heikensten; Anna Dreber Almenberg; Gideon Nave; Thomas Pfeiffer; Taisuke Imai



Archive | 2016

Replication of Chen & Chen 2011

Eskil Forsell; Magnus Johannesson; Colin F. Camerer; Adam Altmejd; Siri Isaksson; Emma Heikensten; Anna Dreber Almenberg; Gideon Nave; Thomas Pfeiffer; Taisuke Imai


Archive | 2016

Replication of Kidd and Castano (2013)

Felix Holzmeister; Anna Dreber Almenberg; Magnus Johannesson; Adam Altmejd; Emma Heikensten; Siri Isaksson


Archive | 2016

Replication of Rand et al. (2012)

Felix Holzmeister; Anna Dreber Almenberg; Magnus Johannesson; Adam Altmejd; Emma Heikensten; Siri Isaksson


Archive | 2016

Replication of Kuziemko et al. 2014

Eskil Forsell; Magnus Johannesson; Colin F. Camerer; Adam Altmejd; Siri Isaksson; Emma Heikensten; Anna Dreber Almenberg; Gideon Nave; Thomas Pfeiffer; Taisuke Imai


Archive | 2016

Replication of Abeler et al. (AER 2011)

Eskil Forsell; Magnus Johannesson; Colin F. Camerer; Adam Altmejd; Siri Isaksson; Emma Heikensten; Anna Dreber Almenberg; Gideon Nave; Thomas Pfeiffer; Taisuke Imai


Archive | 2016

Replication of Hauser et al. (2014)

Felix Holzmeister; Anna Dreber Almenberg; Magnus Johannesson; Adam Altmejd; Emma Heikensten; Siri Isaksson

Collaboration


Dive into Siri Isaksson's collaborations.

Top Co-Authors

Adam Altmejd, Stockholm School of Economics
Emma Heikensten, Stockholm School of Economics
Colin F. Camerer, California Institute of Technology
Gideon Nave, University of Pennsylvania
Taisuke Imai, California Institute of Technology
Eskil Forsell, Stockholm School of Economics
Anna Dreber, Stockholm School of Economics
Johan Almenberg, Stockholm School of Economics