
Publications


Featured research published by Michèle B. Nuijten.


Behavior Research Methods | 2016

The prevalence of statistical reporting errors in psychology (1985-2013).

Michèle B. Nuijten; C.H.J. Hartgerink; Marcel A.L.M. van Assen; Sacha Epskamp; Jelte M. Wicherts

This study documents reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals from 1985 until 2013, using the new R package “statcheck.” statcheck retrieved null-hypothesis significance testing (NHST) results from over half of the articles from this period. In line with earlier research, we found that half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. In contrast to earlier findings, we found that the average prevalence of inconsistent p-values has been stable over the years or has declined. The prevalence of gross inconsistencies was higher in p-values reported as significant than in p-values reported as nonsignificant. This could indicate a systematic bias in favor of significant results. Possible solutions for the high prevalence of reporting inconsistencies could be to encourage sharing data, to let co-authors check results in a so-called “co-pilot model,” and to use statcheck to flag possible inconsistencies in one’s own manuscript or during the review process.
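The core check that statcheck automates can be illustrated with a small sketch: recompute the p-value from the reported test statistic and degrees of freedom, and flag results whose reported p-value does not match. statcheck itself is an R package and additionally accounts for rounding of reported values; the Python function, tolerance, and example below are illustrative assumptions only.

# Minimal sketch of the consistency check described above, not statcheck itself.
# A result is "inconsistent" if the recomputed p-value does not match the
# reported one, and "grossly inconsistent" if the mismatch changes the
# significance decision at alpha = .05.
from scipy import stats

def check_t_result(t_value, df, reported_p, tol=0.0005, alpha=0.05):
    """Recompute a two-tailed p-value for a t test and compare it with the reported p."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    inconsistent = abs(recomputed_p - reported_p) > tol
    gross = inconsistent and ((recomputed_p < alpha) != (reported_p < alpha))
    return recomputed_p, inconsistent, gross

# Example: a reported result "t(28) = 2.20, p = .03" recomputes to p ≈ .036,
# so it is inconsistent but not grossly inconsistent.
print(check_t_result(t_value=2.20, df=28, reported_p=0.03))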


PLOS ONE | 2014

Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results

Marcel A.L.M. van Assen; Robbie C. M. van Aert; Michèle B. Nuijten; Jelte M. Wicherts

Background: De Winter and Happee [1] examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that “selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective” (p. 4). Methods and Findings: Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing. Conclusion: Publishing everything is more effective than only reporting significant outcomes.
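The contrast the authors draw can be conveyed with a minimal simulation sketch. This is not their actual simulation; the two-group design, sample sizes, and one-sided publication-bias rule below are assumptions made for illustration. With a true null effect, averaging all studies recovers the effect, while averaging only the studies significant in the expected direction badly overestimates it.

# Minimal simulation sketch of the argument above (not the authors' exact setup).
# Compare the average effect estimate when everything is published with the
# average when only results significant in the expected direction are published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.0, 20, 10_000   # null-effect scenario

d_hat = np.empty(n_studies)
p_val = np.empty(n_studies)
for i in range(n_studies):
    treat = rng.normal(true_d, 1, n_per_group)
    control = rng.normal(0.0, 1, n_per_group)
    d_hat[i] = treat.mean() - control.mean()        # crude effect estimate (sd = 1)
    p_val[i] = stats.ttest_ind(treat, control).pvalue

published = (p_val < 0.05) & (d_hat > 0)            # assumed one-sided file-drawer rule
print("publish everything :", d_hat.mean())             # close to the true effect of 0
print("significant only   :", d_hat[published].mean())  # substantially overestimates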


Nature | 2017

Five ways to fix statistics

Jeff Leek; Blakeley B. McShane; Andrew Gelman; David Colquhoun; Michèle B. Nuijten; Steven N. Goodman

As debate rumbles on about how and how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science. The common theme? The problem is not our maths, but ourselves.


Behavior Research Methods | 2015

A default Bayesian hypothesis test for mediation

Michèle B. Nuijten; Ruud Wetzels; Dora Matzke; Conor V. Dolan; Eric-Jan Wagenmakers

In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301–322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys–Zellner–Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
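The mediation structure described here can be made concrete with a short sketch. It uses frequentist point estimates only, so it illustrates the indirect effect being tested rather than the paper's default Bayesian test (which is implemented in the R package BayesMed); the simulated variables and coefficients are illustrative assumptions.

# Sketch of the mediation structure described above. Path a: effect of the
# independent variable X on the mediator M; path b: effect of M on the outcome Y
# controlling for X; the indirect (mediated) effect is a * b.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # e.g., classroom instruction
m = 0.5 * x + rng.normal(size=n)              # e.g., knowledge of a healthy diet
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # e.g., consumption of fruits and vegetables

a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
b = np.linalg.lstsq(np.column_stack([np.ones(n), m, x]), y, rcond=None)[0][1]

print("indirect effect a*b ≈", a * b)         # close to 0.5 * 0.4 = 0.2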


PLOS ONE | 2014

Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

Coosje Lisabet Sterre Veldkamp; Michèle B. Nuijten; Linda Dominguez-Alvarez; Marcel A.L.M. van Assen; Jelte M. Wicherts

Statistical analysis is error-prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting of statistical analysis and reporting of results is quite uncommon among psychologists, while data sharing among co-authors appears fairly common, though not universal. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. The overall probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.


PeerJ | 2016

Distributions of p-values smaller than .05 in psychology: What is going on?

C.H.J. Hartgerink; Robbie C. M. van Aert; Michèle B. Nuijten; Jelte M. Wicherts; Marcel A.L.M. van Assen

Previous studies provided mixed findings on peculiarities in p-value distributions in psychology. This paper examined 258,050 test results across 30,710 articles from eight high-impact journals to investigate the existence of a peculiar prevalence of p-values just below .05 (i.e., a bump) in the psychological literature, and a potential increase thereof over time. We indeed found evidence for a bump just below .05 in the distribution of exactly reported p-values in the journals Developmental Psychology, Journal of Applied Psychology, and Journal of Personality and Social Psychology, but the bump did not increase over the years and disappeared when using recalculated p-values. We found clear and direct evidence for the questionable research practice (QRP) of “incorrect rounding of p-values” (John, Loewenstein & Prelec, 2012) in all psychology journals. Finally, we also investigated monotonic excess of p-values, an effect of certain QRPs that has been neglected in previous research, and developed two measures to detect this by modeling the distributions of statistically significant p-values. Using simulations and applying the two measures to the retrieved test results, we argue that, although one of the measures suggests the use of QRPs in psychology, it is difficult to draw general conclusions concerning QRPs based on modeling of p-value distributions.
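One simple way to probe for such a bump is a caliper-style comparison of how many reported p-values fall just below .05 versus in the adjacent lower bin. This is only an illustration; it is not one of the two measures developed in the paper, and the bin boundaries and example data below are assumptions.

# Illustrative caliper-style check for a bump just below .05 (not the paper's measures).
# Under a smoothly decreasing distribution of significant p-values, the bin
# (.045, .050] should not contain more values than the bin (.040, .045].
import numpy as np
from scipy import stats

def bump_check(p_values):
    p = np.asarray(p_values)
    n_low = int(np.sum((p > 0.040) & (p <= 0.045)))
    n_high = int(np.sum((p > 0.045) & (p <= 0.050)))
    # One-sided binomial test for an excess of p-values just below .05
    test = stats.binomtest(n_high, n_high + n_low, 0.5, alternative="greater")
    return n_low, n_high, test.pvalue

# Hypothetical reported p-values
print(bump_check([0.049, 0.048, 0.047, 0.046, 0.044, 0.041, 0.049, 0.046]))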


Review of General Psychology | 2015

The Replication Paradox: Combining Studies can Decrease Accuracy of Effect Size Estimates

Michèle B. Nuijten; Marcel A.L.M. van Assen; Coosje Lisabet Sterre Veldkamp; Jelte M. Wicherts

Replication is often viewed as the demarcation between science and nonscience. However, contrary to the commonly held view, we show that in the current (selective) publication system replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in estimated population effect size as a function of publication bias and the studies’ sample size or power. We analytically show that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We discuss the implications of our findings for interpreting results of published and unpublished studies, and for conducting and interpreting results of meta-analyses. We also discuss solutions for the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
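The mechanism behind this paradox can be illustrated with a minimal simulation sketch, not the authors' analytical derivation; the effect size, sample sizes, and one-sided publication-bias rule below are assumptions. An original study is published only if it is significant in the expected direction, its replication is always published, and pooling the two still leaves a clearly inflated estimate.

# Minimal simulation sketch of the replication paradox described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n = 0.2, 25                      # small true effect, 25 per group
pooled = []
while len(pooled) < 2_000:
    # Original study: published only if significant in the expected direction
    t1, c1 = rng.normal(true_d, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(t1, c1).pvalue >= 0.05 or t1.mean() <= c1.mean():
        continue                         # stays in the file drawer
    # Replication: always published, regardless of outcome
    t2, c2 = rng.normal(true_d, 1, n), rng.normal(0, 1, n)
    # Equal sample sizes, so pooling is a simple average of the two estimates
    pooled.append(((t1.mean() - c1.mean()) + (t2.mean() - c2.mean())) / 2)

print("true effect:", true_d, "  pooled estimate:", np.mean(pooled))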


Proceedings of the National Academy of Sciences of the United States of America | 2014

Standard analyses fail to show that US studies overestimate effect sizes in softer research

Michèle B. Nuijten; Marcel A.L.M. van Assen; Robbie C. M. van Aert; Jelte M. Wicherts

Fanelli and Ioannidis (1) have recently hypothesized that scientific biases are worsened by the relatively high publication pressures in the United States and by the use of “softer” methodologies in much of the behavioral sciences. The authors analyzed nearly 1,200 studies from 82 meta-analyses and found more extreme effect sizes in studies from the United States, and when using soft behavioral (BE) versus less-soft biobehavioral (BB) and nonbehavioral (NB) methods. Their results are based on nonstandard analyses, with $ES_{ij} - \overline{ES}_j$ as the dependent variable, where $ES_{ij}$ is the effect size (log of the odds ratio) of study i in meta-analysis j, and $\overline{ES}_j$ is the summary effect size of meta-analysis j …


PLOS ONE | 2018

Statistical reporting inconsistencies in experimental philosophy

Matteo Colombo; Georgi Duev; Michèle B. Nuijten; Jan Sprenger

Experimental philosophy (x-phi) is a young field of research at the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, researchers have raised questions about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.


Archive | 2017

Psychologische stoornissen als complexe netwerken [Psychological disorders as complex networks]

Gabriela Lunansky; Michèle B. Nuijten; Marie K. Deserno; Angélique O. J. Cramer; Denny Borsboom

This chapter gives an overview of the theory and methods associated with the network perspective. It first considers the latent variable model and how it differs from the network perspective on psychopathology, discussing the theoretical differences between the two perspectives. It then describes the architecture of networks, addressing what the entities in a network represent and how they should be interpreted. Next, it reviews the latest findings on how a psychopathological network develops over time and discusses how networks for individual patients can be estimated. Finally, it considers possible new treatment strategies by examining the implications of the network perspective for clinical practice.
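A common way to make such a symptom network concrete is to estimate edges as partial correlations between symptoms. The sketch below illustrates only this general idea and is not the chapter's specific estimation procedure; the simulated data and dependence between symptoms are assumptions.

# Illustrative symptom network: edges are partial correlations between symptoms,
# obtained from the inverse of the covariance (precision) matrix.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(200, 4))          # hypothetical: 200 people, 4 symptoms
data[:, 1] += 0.6 * data[:, 0]            # make symptom 1 depend on symptom 0

precision = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)       # no self-loops

print(np.round(partial_corr, 2))          # nonzero off-diagonal entries = network edges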
