Publication


Featured research published by Jelte M. Wicherts.


American Psychologist | 2006

The poor availability of psychological research data for reanalysis.

Jelte M. Wicherts; Denny Borsboom; Judith Kats; Dylan Molenaar

The origin of the present comment lies in a failed attempt to obtain, through e-mailed requests, data reported in 141 empirical articles recently published by the American Psychological Association (APA). Our original aim was to reanalyze these data sets to assess the robustness of the research findings to outliers. We never got that far. In June 2005, we contacted the corresponding author of every article that appeared in the last two 2004 issues of four major APA journals. Because their articles had been published in APA journals, we were certain that all of the authors had signed the APA Certification of Compliance With APA Ethical Principles, which includes the principle on sharing data for reanalysis. Unfortunately, 6 months later, after writing more than 400 e-mails (and sending some corresponding authors detailed descriptions of our study aims, approvals of our ethical committee, signed assurances not to share data with others, and even our full resumes), we ended up with a meager 38 positive reactions and the actual data sets from 64 studies (25.7% of the total number of 249 data sets). This means that 73% of the authors did not share their data.
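
As a quick sanity check on the figures above (a sketch; all counts are taken from the abstract itself), note that the two percentages use different denominators: data sets received out of all reported data sets, and authors responding positively out of all articles contacted:

```python
# Sanity check of the quoted percentages (counts from the abstract).
articles_contacted = 141   # corresponding authors e-mailed in June 2005
datasets_total = 249       # data sets reported across those articles
positive_reactions = 38    # authors who reacted positively
datasets_received = 64     # data sets actually obtained

print(f"data sets received: {datasets_received / datasets_total:.1%}")            # 25.7%
print(f"authors not sharing: {1 - positive_reactions / articles_contacted:.1%}")  # 73.0%
```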


Psychological Review | 2006

A dynamical model of general intelligence: The positive manifold of intelligence by mutualism

Han L. J. van der Maas; Conor V. Dolan; Raoul P. P. P. Grasman; Jelte M. Wicherts; Hilde M. Huizenga; Maartje E. J. Raijmakers

Scores on cognitive tasks used in intelligence tests correlate positively with each other, that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biological process or capacity. In this article, a new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role. It is shown that the positive manifold emerges purely from positive beneficial interactions between cognitive processes during development. A single underlying g factor plays no role in the model. The model offers explanations of important findings in intelligence research, such as the hierarchical factor structure of intelligence, the low predictability of intelligence from early childhood performance, the integration/differentiation effect, the increase in heritability of g, and the Jensen effect, and is consistent with current explanations of the Flynn effect.
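
A minimal simulation sketch of a mutualism-style model (parameter values and implementation details are illustrative assumptions, not the authors' code): each cognitive process grows logistically toward a person-specific capacity while also benefiting from the current levels of the other processes. Across simulated persons, the equilibrium levels then correlate positively, yielding a positive manifold without any underlying g factor:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subj, n_proc = 500, 8              # simulated persons, cognitive processes
a = 0.1                              # growth rate (illustrative)
M = np.full((n_proc, n_proc), 0.05)  # positive mutualistic couplings
np.fill_diagonal(M, 0.0)

# Person-specific limited capacities K drive individual differences
K = rng.uniform(5.0, 15.0, size=(n_subj, n_proc))
x = np.ones((n_subj, n_proc))        # low starting levels

# Euler integration of dx_i/dt = a*x_i*(1 - x_i/K_i) + a*x_i*(sum_j M_ij*x_j)/K_i
dt = 0.1
for _ in range(5000):
    growth = a * x * (1.0 - x / K)
    mutual = a * x * (x @ M.T) / K
    x += dt * (growth + mutual)

# All pairwise correlations of the end states come out positive
R = np.corrcoef(x, rowvar=False)
off_diag = R[~np.eye(n_proc, dtype=bool)]
print(f"correlations: min {off_diag.min():.2f}, mean {off_diag.mean():.2f}")
```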


European Journal of Personality | 2013

Recommendations for increasing replicability in psychology

Jens B. Asendorpf; Mark Conner; Filip De Fruyt; Jan De Houwer; Jaap J. A. Denissen; Klaus Fiedler; Susann Fiedler; David C. Funder; Reinhold Kliegl; Brian A. Nosek; Marco Perugini; Brent W. Roberts; Manfred Schmitt; Marcel A. G. van Aken; Hannelore Weber; Jelte M. Wicherts

Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology towards concrete recommendations for improvement. We focus on research practices but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations. The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward.


Netherlands Journal of Psychology | 2007

What is intelligence? Beyond the Flynn effect

Jelte M. Wicherts

Around their 18th birthday, basically all Dutch males born between 1934 and 1964 unknowingly took part in a study of the malleability of intelligence. When these young men appeared before the Dutch military draft board, they took a non-verbal IQ test based on Raven’s (1960) Progressive Matrices. With a little help from Piet Vroon, James Flynn (1987) discovered that those born in 1934 (cohort of 1952) scored on average 20 IQ points lower on the test than those born in 1964 (cohort of 1982). This suggested that in only 30 years, the Dutch male population had shown an increase of more than one standard deviation in average IQ. Flynn (1987) also documented this gain in average IQ in 13 other countries over the course of the 20th century and the effect is now commonly known as the Flynn effect. The Flynn effect raises many questions: How can IQ be substantially heritable, yet show such strong gains that appear to be due to environmental factors? Were Dutch males in 1982 so much smarter than Dutch males in 1952?
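
For concreteness, the quoted gain expressed in standard-deviation units, assuming the conventional IQ metric (mean 100, SD 15):

```python
# 20 IQ points on a scale with SD 15 amounts to ~1.33 SD over 30 years.
gain_points, iq_sd = 20, 15
print(f"{gain_points / iq_sd:.2f} SD")  # 1.33
```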


Perspectives on Psychological Science | 2012

The Rules of the Game Called Psychological Science

Marjan Bakker; Annette van Dijk; Jelte M. Wicherts

If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results but that the typical studies are insufficiently powerful for such a track record. We explain this paradox by showing that the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample. Publication bias and the most efficient strategy lead to inflated effects and high rates of false positives, especially when researchers also resort to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of biases and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and changes in journal policies.
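
A stripped-down sketch of this strategy comparison (the effect size, budget, and number of studies are illustrative assumptions, and the questionable research practices the authors also simulate are omitted): with a small true effect, splitting a fixed participant budget across several underpowered studies and counting a "hit" whenever any of them reaches p < .05 can beat spending the whole budget on one well-powered study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d = 0.15          # assumed small true standardized effect
budget = 200      # participants per group available in total
n_sims = 5000

def hit(n_per_group):
    """Run one two-sample study; return True if the two-tailed p < .05."""
    x = rng.normal(d, 1.0, n_per_group)
    y = rng.normal(0.0, 1.0, n_per_group)
    return stats.ttest_ind(x, y).pvalue < 0.05

# Strategy A: one large study using the whole budget.
one_large = np.mean([hit(budget) for _ in range(n_sims)])

# Strategy B: five small studies; success if at least one reaches p < .05.
five_small = np.mean([any(hit(budget // 5) for _ in range(5))
                      for _ in range(n_sims)])

print(f"one large study (n = {budget}/group):        P(p < .05) ~ {one_large:.2f}")
print(f"best of five small (n = {budget // 5}/group): P(p < .05) ~ {five_small:.2f}")
```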


Behavior Research Methods | 2011

The (mis)reporting of statistical results in psychology journals

Marjan Bakker; Jelte M. Wicherts

In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers’ expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
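
A minimal sketch of this kind of consistency check (hypothetical reported values; real checks must also handle one-tailed tests and other reporting conventions): recompute the p value from the reported test statistic and degrees of freedom, flag mismatches beyond rounding, and call a mismatch gross when it flips significance:

```python
from scipy import stats

def check_report(t_value, df, reported_p, decimals=2, alpha=0.05):
    """Recompute a two-tailed p from a reported t(df), compare it with the
    reported p allowing for rounding, and flag 'gross' inconsistencies
    that change the significance decision at alpha."""
    recomputed = 2 * stats.t.sf(abs(t_value), df)
    inconsistent = round(recomputed, decimals) != round(reported_p, decimals)
    gross = inconsistent and ((recomputed < alpha) != (reported_p < alpha))
    return recomputed, inconsistent, gross

# Hypothetical reports: the first is a simple inconsistency, the second
# is gross (the recomputed p is no longer significant).
for t, df, p_rep in [(2.20, 28, 0.03), (1.70, 28, 0.04)]:
    p, bad, gross = check_report(t, df, p_rep)
    print(f"t({df}) = {t}, reported p = {p_rep}: recomputed p = {p:.3f}, "
          f"inconsistent: {bad}, gross: {gross}")
```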


Journal of Sex Research | 2008

Women's Scores on the Sexual Inhibition/Sexual Excitation Scales (SIS/SES): Gender Similarities and Differences

Deanna Carpenter; Erick Janssen; Cynthia A. Graham; Harrie C. M. Vorst; Jelte M. Wicherts

The Sexual Inhibition/Sexual Excitation Scales (SIS/SES) assess individual propensities to become sexually aroused and to inhibit arousal. Prior analyses of men's SIS/SES data (Janssen, Vorst, Finn, & Bancroft, 2002a) yielded one excitation factor (SES) and two inhibitory factors (SIS1/Threat of Performance Failure and SIS2/Threat of Performance Consequences). The current study utilized a dataset of 2,045 undergraduates (1,067 women and 978 men) to examine the psychometric properties of women's SIS/SES scores. Women scored higher on sexual inhibition and lower on sexual excitation compared with men. The convergent/discriminant validity of women's SIS/SES scores globally resembled men's, but showed stronger associations with other sexuality-related measures and less pronounced relationships with measures of general behavioral approach/avoidance. The test–retest reliability of men's and women's SIS/SES scores was similar, but individual items exhibited differential relevance to men's and women's arousal. An exploratory factor analysis of women's scores was utilized to further examine shared and unshared themes.


PLOS ONE | 2015

The ordinal effects of ostracism: A meta-analysis of 120 Cyberball studies

C.H.J. Hartgerink; Ilja van Beest; Jelte M. Wicherts; Kipling D. Williams

We examined 120 Cyberball studies (N = 11,869) to determine the effect size of ostracism and conditions under which the effect may be reversed, eliminated, or small. Our analyses showed that (1) the average ostracism effect is large (d > |1.4|) and (2) generalizes across structural aspects (number of players, ostracism duration, number of tosses, type of needs scale), sampling aspects (gender, age, country), and types of dependent measure (interpersonal, intrapersonal, fundamental needs). Further, we test Williams’s (2009) proposition that the immediate impact of ostracism is resistant to moderation, but that moderation is more likely to be observed in delayed measures. Our findings suggest that (3) both first and last measures are susceptible to moderation and (4) time passed since being ostracized does not predict effect sizes of the last measure. Thus, support for this proposition is tenuous and we suggest modifications to the temporal need-threat model of ostracism.
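
For illustration, a standard DerSimonian-Laird random-effects pooling of per-study effects (the study values here are hypothetical, and the article does not state that this exact estimator was used):

```python
import numpy as np

def random_effects_meta(d, v):
    """DerSimonian-Laird random-effects pooling of effect sizes d
    with sampling variances v."""
    d, v = np.asarray(d), np.asarray(v)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)        # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_pooled, se, tau2

# Hypothetical per-study ostracism effects (Cohen's d) and variances
est, se, tau2 = random_effects_meta([1.2, 1.6, 1.4, 1.9, 1.1],
                                    [0.05, 0.08, 0.04, 0.10, 0.06])
print(f"pooled d = {est:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```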


Journal of Personality and Social Psychology | 2005

Stereotype Threat and Group Differences in Test Performance: A Question of Measurement Invariance

Jelte M. Wicherts; Conor V. Dolan; David J. Hessen

Studies into the effects of stereotype threat (ST) on test performance have shed new light on race and sex differences in achievement and intelligence test scores. In this article, the authors relate ST theory to the psychometric concept of measurement invariance and show that ST effects may be viewed as a source of measurement bias. As such, ST effects are detectable by means of multi-group confirmatory factor analysis. This enables research into the generalizability of ST effects to real-life or high-stakes testing. The modeling approach is described in detail and applied to 3 experiments in which the amount of ST for minorities and women was manipulated. Results indicate that ST results in measurement bias of intelligence and mathematics tests.
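
A toy data-generating sketch of the core idea (this is not the authors' multi-group confirmatory factor analysis, and all numbers are illustrative): two groups share the same latent ability distribution and factor loadings, but stereotype threat shifts the intercept of one subtest. The resulting observed group difference is measurement bias rather than a latent difference, which is exactly what invariance tests on a factor model are designed to detect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000                                    # examinees per group
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # identical in both groups
bias = np.array([0.0, 0.0, 0.0, -0.5])      # ST lowers one subtest's intercept

def simulate(intercept_shift):
    theta = rng.normal(0.0, 1.0, n)         # same latent ability distribution
    noise = rng.normal(0.0, 0.5, (n, 4))
    return theta[:, None] * loadings + intercept_shift + noise

control = simulate(np.zeros(4))
threat = simulate(bias)

# Only the biased subtest shows a mean difference despite equal latent means
print("mean differences (control - threat):",
      np.round(control.mean(axis=0) - threat.mean(axis=0), 2))
```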


Behavior Research Methods | 2016

The prevalence of statistical reporting errors in psychology (1985-2013).

Michèle B. Nuijten; C.H.J. Hartgerink; Marcel A.L.M. van Assen; Sacha Epskamp; Jelte M. Wicherts

This study documents reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals from 1985 until 2013, using the new R package “statcheck.” statcheck retrieved null-hypothesis significance testing (NHST) results from over half of the articles from this period. In line with earlier research, we found that half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. In contrast to earlier findings, we found that the average prevalence of inconsistent p-values has been stable over the years or has declined. The prevalence of gross inconsistencies was higher in p-values reported as significant than in p-values reported as nonsignificant. This could indicate a systematic bias in favor of significant results. Possible solutions for the high prevalence of reporting inconsistencies could be to encourage sharing data, to let co-authors check results in a so-called “co-pilot model,” and to use statcheck to flag possible inconsistencies in one’s own manuscript or during the review process.
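
A minimal, statcheck-inspired extraction sketch in Python (the real statcheck is an R package with far more complete parsing of APA-style results; the text and pattern here are illustrative): pull "t(df) = ..., p = ..." patterns from a string and recompute the two-tailed p:

```python
import re
from scipy import stats

# Matches e.g. "t(28) = 2.20, p = .015" (a simplified APA-style pattern)
PATTERN = re.compile(
    r"t\s*\(\s*(\d+)\s*\)\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*([<=>])\s*(0?\.\d+)")

text = "The effect was reliable, t(28) = 2.20, p = .015, as predicted."

for df, t_val, rel, p_rep in PATTERN.findall(text):
    recomputed = 2 * stats.t.sf(abs(float(t_val)), int(df))
    # Only 'p =' reports are checked here; '<' and '>' need range logic.
    if rel == "=":
        flag = "OK" if round(recomputed, 3) == float(p_rep) else "inconsistent"
        print(f"t({df}) = {t_val}, p = {p_rep}: "
              f"recomputed {recomputed:.3f} -> {flag}")
```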
