Publication


Featured research published by Robbie C. M. van Aert.


Science | 2016

Response to Comment on "Estimating the reproducibility of psychological science"

Christopher Jon Anderson; Štěpán Bahník; Michael Barnett-Cowan; Frank A. Bosco; Jesse Chandler; Christopher R. Chartier; Felix Cheung; Cody D. Christopherson; Andreas Cordes; Edward Cremata; Nicolás Della Penna; Vivien Estel; Anna Fedor; Stanka A. Fitneva; Michael C. Frank; James A. Grange; Joshua K. Hartshorne; Fred Hasselman; Felix Henninger; Marije van der Hulst; Kai J. Jonas; Calvin Lai; Carmel A. Levitan; Jeremy K. Miller; Katherine Sledge Moore; Johannes Meixner; Marcus R. Munafò; Koen Ilja Neijenhuijs; Gustav Nilsonne; Brian A. Nosek

Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted.


PLOS ONE | 2014

Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results

Marcel A.L.M. van Assen; Robbie C. M. van Aert; Michèle B. Nuijten; Jelte M. Wicherts

Background: De Winter and Happee [1] examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that “selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective” (p. 4). Methods and Findings: Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Conclusion: Publishing everything is more effective than only reporting significant outcomes.
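
The contrast drawn here can be illustrated with a small simulation in the same spirit (a hypothetical sketch, not the authors' simulation code): with a true null effect, the meta-analytic average over all studies is close to zero, while the average over only the statistically significant, positive results is badly inflated.

```python
# Hypothetical illustration of publication bias under a true null effect;
# not the simulation code used by van Assen et al.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_studies = 25, 10_000
true_d = 0.0  # null effect in the population

estimates, pvals = [], []
for _ in range(n_studies):
    a = rng.normal(true_d, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(a, b)
    # Cohen's d recovered from the two-sample t statistic (equal group sizes)
    d = t * np.sqrt(1 / n_per_group + 1 / n_per_group)
    estimates.append(d)
    pvals.append(p)

estimates, pvals = np.array(estimates), np.array(pvals)
sig = (pvals < .05) & (estimates > 0)  # only "positive" significant results get published

print("mean estimate, publishing everything:", estimates.mean().round(3))
print("mean estimate, selective publishing: ", estimates[sig].mean().round(3))
```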


Frontiers in Psychology | 2016

Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking

Jelte M. Wicherts; Coosje Lisabet Sterre Veldkamp; Hilde Augusteijn; Marjan Bakker; Robbie C. M. van Aert; Marcel A.L.M. van Assen

The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting of psychological research. The list can be used in research methods education, and as a checklist to assess the quality of preregistrations and to determine the potential for bias due to (arbitrary) choices in unregistered studies.
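
One of the listed degrees of freedom, deciding after interim analyses whether to collect more data, can be made concrete with a short simulation. The sketch below is a generic illustration of optional stopping with arbitrary sample sizes and numbers of interim looks; it is not an example taken from the checklist itself.

```python
# Generic illustration of how optional stopping (one researcher degree of
# freedom) inflates the false positive rate; parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, looks = 5_000, (20, 30, 40, 50)  # test after 20, 30, 40, 50 subjects per group

false_pos = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, max(looks))  # true effect is zero
    b = rng.normal(0, 1, max(looks))
    for n in looks:
        if stats.ttest_ind(a[:n], b[:n]).pvalue < .05:
            false_pos += 1
            break  # stop data collection as soon as p < .05

print(f"nominal alpha = .05, actual false positive rate = {false_pos / n_sims:.3f}")
```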


Perspectives on Psychological Science | 2016

Conducting Meta-Analyses Based on p Values: Reservations and Recommendations for Applying p-Uniform and p-Curve

Robbie C. M. van Aert; Jelte M. Wicherts; Marcel A.L.M. van Assen

Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform.
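
The estimation idea behind p-uniform can be sketched as follows: conditional on being statistically significant, p-values are uniformly distributed under the true effect size, so the estimator looks for the effect size at which the conditional p-values of the significant studies behave like a uniform sample. The code below is a simplified, hypothetical one-sample z-test version written only for illustration; in practice the R code and web application mentioned in the abstract should be used.

```python
# Simplified, hypothetical sketch of the idea behind p-uniform for one-sample
# z-tests; not the authors' implementation.
import numpy as np
from scipy import stats, optimize

def conditional_q(delta, y, se, alpha=0.025):
    """P(Y > y_obs | Y significant) for each study, given candidate effect delta."""
    crit = stats.norm.isf(alpha) * se          # significance threshold on the effect scale
    num = stats.norm.sf((y - delta) / se)      # P(Y > y_obs)
    den = stats.norm.sf((crit - delta) / se)   # P(Y > threshold)
    return num / den

def puniform_like_estimate(y, se, alpha=0.025):
    """Effect size at which -sum(log q_i) equals its expectation k (i.e., uniform q's)."""
    k = len(y)
    f = lambda d: -np.sum(np.log(conditional_q(d, y, se, alpha))) - k
    return optimize.brentq(f, -5, 5)

# toy data: observed (statistically significant) effect sizes and standard errors
y  = np.array([0.55, 0.48, 0.61, 0.70, 0.52])
se = np.array([0.20, 0.22, 0.25, 0.28, 0.21])
print("bias-adjusted estimate: ", round(puniform_like_estimate(y, se), 3))
print("naive fixed-effect mean:", round(np.average(y, weights=1 / se**2), 3))
```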


PeerJ | 2016

Distributions of p-values smaller than .05 in psychology: What is going on?

C.H.J. Hartgerink; Robbie C. M. van Aert; Michèle B. Nuijten; Jelte M. Wicherts; Marcel A.L.M. van Assen

Previous studies provided mixed findings on peculiarities in p-value distributions in psychology. This paper examined 258,050 test results across 30,710 articles from eight high-impact journals to investigate the existence of a peculiar prevalence of p-values just below .05 (i.e., a bump) in the psychological literature, and a potential increase thereof over time. We indeed found evidence for a bump just below .05 in the distribution of exactly reported p-values in the journals Developmental Psychology, Journal of Applied Psychology, and Journal of Personality and Social Psychology, but the bump did not increase over the years and disappeared when using recalculated p-values. We found clear and direct evidence for the questionable research practice (QRP) of “incorrect rounding of p-value” (John, Loewenstein & Prelec, 2012) in all psychology journals. Finally, we also investigated monotonic excess of p-values, an effect of certain QRPs that has been neglected in previous research, and developed two measures to detect this by modeling the distributions of statistically significant p-values. Using simulations and applying the two measures to the retrieved test results, we argue that, although one of the measures suggests the use of QRPs in psychology, it is difficult to draw general conclusions concerning QRPs based on modeling of p-value distributions.
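
A crude version of such a bump check can be written as a caliper-style comparison of the count of p-values just below .05 with the count in the adjacent lower bin. The sketch below is a hypothetical illustration of that general idea; it is not one of the two measures developed in this paper.

```python
# Hypothetical caliper-style check for a bump of p-values just below .05;
# not the measures developed by Hartgerink et al.
import numpy as np
from scipy import stats

def bump_check(pvalues, width=0.005):
    """Compare counts just below .05 with the adjacent lower bin of equal width."""
    p = np.asarray(pvalues)
    near = np.sum((p >= .05 - width) & (p < .05))                # e.g. [.045, .05)
    below = np.sum((p >= .05 - 2 * width) & (p < .05 - width))   # e.g. [.040, .045)
    # Under a smoothly decreasing p-value distribution we expect near <= below;
    # a one-sided binomial test asks whether "near" is over-represented.
    test = stats.binomtest(int(near), int(near + below), p=0.5, alternative="greater")
    return near, below, test.pvalue

# toy data: significant p-values with an artificial excess just under .05
rng = np.random.default_rng(3)
p = np.concatenate([rng.uniform(0, .05, 900), rng.uniform(.045, .05, 100)])
print(bump_check(p))
```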


PLOS ONE | 2017

Bayesian evaluation of effect size after replicating an original study

Robbie C. M. van Aert; Marcel A.L.M. van Assen

The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then apply the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of the included studies, especially in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of a replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method.
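
The core computation, posterior model probabilities for a few fixed candidate effect sizes where the original study's likelihood is conditioned on its statistical significance, can be sketched as below. This is a simplified hypothetical version for normally distributed effect size estimates with known standard errors and equal prior model probabilities; it is not the authors' snapshot hybrid implementation (see the web application linked above).

```python
# Simplified, hypothetical sketch of posterior model probabilities for a few
# fixed effect sizes ("snapshots"), conditioning the original study's likelihood
# on its statistical significance; not the authors' implementation.
import numpy as np
from scipy import stats

def snapshot_probabilities(y_orig, se_orig, y_rep, se_rep,
                           snapshots=(0.0, 0.2, 0.5, 0.8), alpha=0.025):
    crit = stats.norm.isf(alpha) * se_orig          # one-sided significance threshold
    probs = []
    for theta in snapshots:
        # original study: normal density truncated to the significant region (y > crit)
        lik_orig = (stats.norm.pdf(y_orig, theta, se_orig)
                    / stats.norm.sf(crit, theta, se_orig))
        # replication: ordinary normal density (no selection on significance)
        lik_rep = stats.norm.pdf(y_rep, theta, se_rep)
        probs.append(lik_orig * lik_rep)            # equal prior model probabilities
    probs = np.array(probs)
    return dict(zip(snapshots, probs / probs.sum()))

# toy numbers: a significant original study and a smaller replication estimate
print(snapshot_probabilities(y_orig=0.60, se_orig=0.25, y_rep=0.15, se_rep=0.18))
```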


Proceedings of the National Academy of Sciences of the United States of America | 2014

Standard analyses fail to show that US studies overestimate effect sizes in softer research

Michèle B. Nuijten; Marcel A.L.M. van Assen; Robbie C. M. van Aert; Jelte M. Wicherts

Fanelli and Ioannidis (1) have recently hypothesized that scientific biases are worsened by the relatively high publication pressures in the United States and by the use of “softer” methodologies in much of the behavioral sciences. The authors analyzed nearly 1,200 studies from 82 meta-analyses and found more extreme effect sizes in studies from the United States, and when using soft behavioral (BE) versus less-soft biobehavioral (BB) and nonbehavioral (NB) methods. Their results are based on nonstandard analyses, with the deviation |d_ij − d̄_j| as the dependent variable, where d_ij is the effect size (log of the odds ratio) of study i in meta-analysis j, and d̄_j is the summary effect size of …


Statistics in Medicine | 2018

Multistep estimators of the between-study variance: The relationship with the Paule-Mandel estimator

Robbie C. M. van Aert; Dan Jackson

A wide variety of estimators of the between-study variance are available in random-effects meta-analysis. Many, but not all, of these estimators are based on the method of moments. The DerSimonian-Laird estimator is widely used in applications, but the Paule-Mandel estimator is an alternative that is now recommended. Recently, DerSimonian and Kacker have developed two-step moment-based estimators of the between-study variance. We extend these two-step estimators so that multiple (more than two) steps are used. We establish the surprising result that the multistep estimator tends towards the Paule-Mandel estimator as the number of steps becomes large. Hence, the iterative scheme underlying our new multistep estimator provides a hitherto unknown relationship between two-step estimators and the Paule-Mandel estimator. Our analysis suggests that two-step estimators are not necessarily distinct estimators in their own right; instead, they are quantities that are closely related to the usual iterative scheme that is used to calculate the Paule-Mandel estimate. The relationship that we establish between the multistep and Paule-Mandel estimators is another justification for the use of the latter estimator. Two-step and multistep estimators are perhaps best conceptualized as approximate Paule-Mandel estimators.
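
The convergence result described here can be checked numerically. Below is a minimal sketch, assuming made-up toy data and a generic generalized method-of-moments update with weights 1/(v_i + tau^2) recomputed at each step; it is not the authors' code. Iterating the update and comparing it with the direct solution of the Paule-Mandel equation Q(tau^2) = k − 1 shows the two agreeing.

```python
# Hypothetical numerical illustration that iterating a moment-based estimator of
# the between-study variance approaches the Paule-Mandel estimate; not the
# authors' code, and the data below are made up.
import numpy as np
from scipy import optimize

y = np.array([0.32, 0.10, 0.45, -0.05, 0.60, 0.25])   # study effect sizes (toy data)
v = np.array([0.04, 0.09, 0.02, 0.06, 0.03, 0.05])    # within-study variances (toy data)
k = len(y)

def mm_step(tau2):
    """One generalized method-of-moments update using weights 1/(v + tau2)."""
    w = 1 / (v + tau2)
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    num = q - (np.sum(w * v) - np.sum(w**2 * v) / np.sum(w))
    den = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, num / den)

# multistep estimator: start from tau2 = 0 (a DerSimonian-Laird-type first step)
tau2 = 0.0
for _ in range(20):
    tau2 = mm_step(tau2)

# Paule-Mandel: solve the generalized Q equation Q(tau2) = k - 1 directly
def generalized_q(tau2):
    w = 1 / (v + tau2)
    ybar = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - ybar) ** 2) - (k - 1)

tau2_pm = optimize.brentq(generalized_q, 0.0, 10.0) if generalized_q(0.0) > 0 else 0.0

print(f"multistep estimate after 20 steps: {tau2:.6f}")
print(f"Paule-Mandel estimate:             {tau2_pm:.6f}")
```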


Behavior Research Methods | 2018

Examining reproducibility in psychology: A hybrid method for combining a statistically significant original study and a replication

Robbie C. M. van Aert; Marcel A.L.M. van Assen

The unrealistically high rate of positive results within psychology has increased attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter problems when using traditional meta-analysis techniques. The original study’s effect size is most probably overestimated because it is statistically significant, and this bias is not taken into consideration in traditional meta-analysis. We have developed a hybrid method that does take the statistical significance of an original study into account and enables (a) accurate effect size estimation, (b) estimation of a confidence interval, and (c) testing of the null hypothesis of no effect. We analytically approximate the performance of the hybrid method and describe its statistical properties. By applying the hybrid method to data from the Reproducibility Project: Psychology (Open Science Collaboration, 2015), we demonstrate that the conclusions based on the hybrid method are often in line with those of the replication, suggesting that many published psychological studies have smaller effect sizes than those reported in the original study, and that some effects may even be absent. We offer hands-on guidelines for how to statistically combine an original study and replication, and have developed a web-based application (https://rvanaert.shinyapps.io/hybrid) for applying the hybrid method.
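
A generic version of the underlying idea, combining a joint likelihood in which the significant original study contributes a truncated density and the replication an ordinary one, can be sketched as follows. This is a hypothetical illustration of the general principle of taking the original study's significance into account; it is not the authors' hybrid estimator, and the toy numbers are made up.

```python
# Generic maximum-likelihood sketch of combining a statistically significant
# original study (truncated likelihood) with a replication (ordinary likelihood);
# not the authors' hybrid method.
import numpy as np
from scipy import stats, optimize

def combined_estimate(y_orig, se_orig, y_rep, se_rep, alpha=0.025):
    crit = stats.norm.isf(alpha) * se_orig   # the original had to exceed this to be significant
    def neg_loglik(theta):
        ll_orig = (stats.norm.logpdf(y_orig, theta, se_orig)
                   - np.log(stats.norm.sf(crit, theta, se_orig)))  # truncation correction
        ll_rep = stats.norm.logpdf(y_rep, theta, se_rep)
        return -(ll_orig + ll_rep)
    res = optimize.minimize_scalar(neg_loglik, bounds=(-3, 3), method="bounded")
    return res.x

# toy numbers: naive averaging vs. the significance-adjusted estimate
y_o, se_o, y_r, se_r = 0.60, 0.25, 0.15, 0.18
naive = np.average([y_o, y_r], weights=[1 / se_o**2, 1 / se_r**2])
print("naive fixed-effect estimate:   ", round(naive, 3))
print("significance-adjusted estimate:", round(float(combined_estimate(y_o, se_o, y_r, se_r)), 3))
```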


Archive | 2017

Preprint: The Effect of Publication Bias on the Assessment of Heterogeneity

Hilde Augusteijn; Robbie C. M. van Aert; Marcel A.L.M. van Assen

Collaboration


Dive into Robbie C. M. van Aert's collaborations.

Top Co-Authors

Fred Hasselman

Radboud University Nijmegen

Kai J. Jonas

University of Amsterdam


Marije van der Hulst

Erasmus University Rotterdam
