Felix Holzmeister
University of Innsbruck
Publications
Featured research published by Felix Holzmeister.
Science | 2016
Colin F. Camerer; Anna Dreber; Eskil Forsell; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Johan Almenberg; Adam Altmejd; Taizan Chan; Emma Heikensten; Felix Holzmeister; Taisuke Imai; Siri Isaksson; Gideon Nave; Thomas Pfeiffer; Michael Razen; Hang Wu
Another social science looks at itself

Experimental economists have joined the reproducibility discussion by replicating selected published experiments from two top-tier journals in economics. Camerer et al. found that two-thirds of the 18 studies examined yielded replicable estimates of effect size and direction. This proportion is somewhat lower than unaffiliated experts were willing to bet in an associated prediction market, but roughly in line with expectations from sample sizes and P values. Science, this issue p. 1433

By several metrics, economics experiments do replicate, although not as often as predicted.

The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
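The design requirement quoted above, at least 90% power to detect the original effect size at the 5% significance level, is a standard sample-size calculation. A minimal sketch of that arithmetic for a two-sample comparison, using the normal approximation (the function name and setup are illustrative, not taken from the paper's analysis plans):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sided, two-sample
    test to detect a standardized effect size d (normal approximation
    to the power of a t-test)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value, approx. 1.96 for alpha = 0.05
    z_power = z(power)          # approx. 1.28 for 90% power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Smaller original effects demand disproportionately larger replication samples:
for d in (0.8, 0.5, 0.2):
    print(d, n_per_group(d))
```

This is why replications of studies with small original effects need samples several times larger than the originals.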
Nature Human Behaviour | 2018
Colin F. Camerer; Anna Dreber; Felix Holzmeister; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Gideon Nave; Brian A. Nosek; Thomas Pfeiffer; Adam Altmejd; Nick Buttrick; Taizan Chan; Yiling Chen; Eskil Forsell; Anup Gampa; Emma Heikensten; Lily Hummer; Taisuke Imai; Siri Isaksson; Dylan Manfredi; Julia Rose; Eric-Jan Wagenmakers; Hang Wu
Being able to replicate scientific findings is crucial for scientific progress (refs 1–15). We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 (refs 16–36). The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Camerer et al. carried out replications of 21 Science and Nature social science experiments, successfully replicating 13 out of 21 (62%). Effect sizes of replications were about half of the size of the originals.
Advances in Methods and Practices in Psychological Science | 2018
Bruno Verschuere; Ewout H. Meijer; Ariane Jim; Katherine Hoogesteyn; Robin Orthey; Randy J. McCarthy; John J. Skowronski; Oguz Ali Acar; Balazs Aczel; Bence E. Bakos; Fernando Barbosa; Ernest Baskin; Laurent Bègue; Gershon Ben-Shakhar; Angie R. Birt; Lisa Blatz; Steve D. Charman; Aline Claesen; Samuel L. Clay; Sean P. Coary; Jan Crusius; Jacqueline R. Evans; Noa Feldman; Fernando Ferreira-Santos; Matthias Gamer; Sara Gomes; Marta González-Iraizoz; Felix Holzmeister; Juergen Huber; Andrea Isoni
The self-concept maintenance theory holds that many people will cheat in order to maximize self-profit, but only to the extent that they can do so while maintaining a positive self-concept. Mazar, Amir, and Ariely (2008, Experiment 1) gave participants an opportunity and incentive to cheat on a problem-solving task. Prior to that task, participants either recalled the Ten Commandments (a moral reminder) or recalled 10 books they had read in high school (a neutral task). Results were consistent with the self-concept maintenance theory. When given the opportunity to cheat, participants given the moral-reminder priming task reported solving 1.45 fewer matrices than did those given a neutral prime (Cohen’s d = 0.48); moral reminders reduced cheating. Mazar et al.’s article is among the most cited in deception research, but their Experiment 1 has not been replicated directly. This Registered Replication Report describes the aggregated result of 25 direct replications (total N = 5,786), all of which followed the same preregistered protocol. In the primary meta-analysis (19 replications, total n = 4,674), participants who were given an opportunity to cheat reported solving 0.11 more matrices if they were given a moral reminder than if they were given a neutral reminder (95% confidence interval = [−0.09, 0.31]). This small effect was numerically in the opposite direction of the effect observed in the original study (Cohen’s d = −0.04).
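The effects above are reported as Cohen's d, the mean difference between conditions divided by a pooled standard deviation. A minimal sketch of that computation (not the Registered Replication Report's actual analysis code; the group sizes in the example are hypothetical):

```python
def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled_sd = (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled_sd

# Illustration: the original study's 1.45-matrix difference corresponds to
# d = 0.48 when the pooled SD is about 3 matrices (hypothetical n = 100/group).
d = cohens_d(1.45, 0.0, 3.02, 3.02, 100, 100)
```

On the same scale, the replication estimate of 0.11 matrices (d = −0.04 in the direction of the original prediction) is an order of magnitude smaller.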
Advances in Methods and Practices in Psychological Science | 2018
Randy J. McCarthy; John J. Skowronski; Bruno Verschuere; Ewout H. Meijer; Ariane Jim; Katherine Hoogesteyn; Robin Orthey; Oguz Ali Acar; Balazs Aczel; Bence E. Bakos; Fernando Barbosa; Ernest Baskin; Laurent Bègue; Gershon Ben-Shakhar; Angie R. Birt; Lisa Blatz; Steve D. Charman; Aline Claesen; Samuel L. Clay; Sean P. Coary; Jan Crusius; Jacqueline R. Evans; Noa Feldman; Fernando Ferreira-Santos; Matthias Gamer; Coby Gerlsma; Sara Gomes; Marta González-Iraizoz; Felix Holzmeister; Juergen Huber
Srull and Wyer (1979) demonstrated that exposing participants to more hostility-related stimuli caused them subsequently to interpret ambiguous behaviors as more hostile. In their Experiment 1, participants descrambled sets of words to form sentences. In one condition, 80% of the descrambled sentences described hostile behaviors, and in another condition, 20% described hostile behaviors. Following the descrambling task, all participants read a vignette about a man named Donald who behaved in an ambiguously hostile manner and then rated him on a set of personality traits. Next, participants rated the hostility of various ambiguously hostile behaviors (all ratings on scales from 0 to 10). Participants who descrambled mostly hostile sentences rated Donald and the ambiguous behaviors as approximately 3 scale points more hostile than did those who descrambled mostly neutral sentences. This Registered Replication Report describes the results of 26 independent replications (N = 7,373 in the total sample; k = 22 labs and N = 5,610 in the primary analyses) of Srull and Wyer’s Experiment 1, each of which followed a preregistered and vetted protocol. A random-effects meta-analysis showed that the protagonist was seen as 0.08 scale points more hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% confidence interval, CI = [0.004, 0.16]). The ambiguously hostile behaviors were seen as 0.08 points less hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% CI = [−0.18, 0.01]). Although the confidence interval for one outcome excluded zero and the observed effect was in the predicted direction, these results suggest that the currently used methods do not produce an assimilative priming effect that is practically and routinely detectable.
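The primary analysis pools the labs' estimates with a random-effects meta-analysis, which weights each lab by its precision while allowing for between-lab variation in the true effect. A minimal sketch of the standard DerSimonian–Laird estimator (not the report's actual analysis script; inputs are per-lab effect estimates and their sampling variances):

```python
from statistics import NormalDist

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                    # between-lab variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    mu = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    z = NormalDist().inv_cdf(0.975)
    return mu, (mu - z * se, mu + z * se)
```

A pooled estimate of 0.08 with a confidence interval barely excluding zero, as reported for the Donald ratings, is exactly the kind of small, fragile effect this procedure makes visible.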
PLOS ONE | 2018
Matthias Stefan; Felix Holzmeister; Alexander Müllauer; Michael Kirchler
The integration of ethnic minorities has been a hotly discussed topic in the political, societal, and economic debate. Persistent discrimination against ethnic minorities can hinder successful integration. Given that unequal access to investment and financing opportunities can cause social and economic disparities due to inferior economic prospects, we conducted a field experiment on ethnic discrimination in the finance sector with 1,218 banks in seven European countries. We contacted banks via e-mail from senders with either domestic- or Arabic-sounding names, asking for contact details only. We find pronounced discrimination in the form of a substantially lower response rate to e-mails from Arabic-sounding senders. Remarkably, the observed discrimination effect is robust across loan- and investment-related requests, across rural and urban bank locations, and across countries.
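The headline result is a gap in response rates between the two sender groups. A minimal sketch of how such a gap can be tested, using a standard two-proportion z-test (the counts in the example are illustrative, not the study's data):

```python
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in response rates.
    x = number of replies received, n = number of e-mails sent."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

# Hypothetical counts: domestic-name senders vs Arabic-name senders
diff, z, p = two_proportion_ztest(400, 609, 300, 609)
```

With roughly 600 e-mails per group, as in a study of this size, even moderate differences in reply rates are detectable at conventional significance levels.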
Journal of Behavioral and Experimental Finance | 2016
Felix Holzmeister; Armin Pfurtscheller
Journal of Behavioral and Experimental Finance | 2017
Felix Holzmeister
Archive | 2018
Felix Holzmeister; Colin F. Camerer; Anna Dreber Almenberg; Teck-Hua Ho; Juergen Huber; Magnus Johannesson; Michael Kirchler; Brian A. Nosek; Johan Almenberg; Adam Altmejd
Archive | 2016
Felix Holzmeister; Colin F. Camerer; Taisuke Imai; Dylan Manfredi; Gideon Nave
Archive | 2016
Felix Holzmeister; Anna Dreber Almenberg; Magnus Johannesson; Adam Altmejd; Emma Heikensten; Siri Isaksson