Coosje Lisabet Sterre Veldkamp
Tilburg University
Publications
Featured research published by Coosje Lisabet Sterre Veldkamp.
Frontiers in Psychology | 2016
Jelte M. Wicherts; Coosje Lisabet Sterre Veldkamp; Hilde Augusteijn; Marjan Bakker; Robbie C. M. van Aert; Marcel A.L.M. van Assen
The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses and in designing, running, analyzing, and reporting psychological research. The list can be used in research methods education, as a checklist to assess the quality of preregistrations, and to determine the potential for bias due to (arbitrary) choices in unregistered studies.
PLOS ONE | 2014
Coosje Lisabet Sterre Veldkamp; Michèle B. Nuijten; Linda Dominguez-Alvarez; Marcel A.L.M. van Assen; Jelte M. Wicherts
Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
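The automated check described in this abstract boils down to recomputing a p-value from the reported test statistic and degrees of freedom and comparing it with the reported p. A minimal sketch of that idea, assuming a two-tailed z test for simplicity (the actual study covered t, F, and other APA-reported statistics; `check_reported_p` and its rounding tolerance are hypothetical choices, not the authors' procedure):

```python
from statistics import NormalDist

def check_reported_p(z, reported_p, alpha=0.05, tol=0.005):
    """Recompute a two-tailed p-value from a z statistic and flag mismatches.

    A simplified, hypothetical version of the kind of consistency check the
    abstract describes; `tol` allows for rounding in the reported p-value.
    """
    recomputed = 2 * (1 - NormalDist().cdf(abs(z)))
    inconsistent = abs(recomputed - reported_p) > tol
    # A "gross" inconsistency: reported and recomputed p fall on opposite
    # sides of the significance threshold, so the mismatch may have
    # affected decisions about statistical significance.
    gross = inconsistent and ((reported_p < alpha) != (recomputed < alpha))
    return recomputed, inconsistent, gross

# z = 2.05 with reported p = .04: recomputed p ≈ .040, consistent
print(check_reported_p(2.05, 0.04))
# z = 1.20 with reported p = .03: recomputed p ≈ .230, gross inconsistency
print(check_reported_p(1.20, 0.03))
```

The second example illustrates the 20% of articles mentioned above: the recomputed p crosses the .05 threshold, so the reporting error could have changed a significance decision.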
PLOS ONE | 2017
Franca Agnoli; Jelte M. Wicherts; Coosje Lisabet Sterre Veldkamp; Paolo Albiero; Roberto Cubelli
A survey in the United States revealed that an alarmingly large percentage of university psychologists admitted having used questionable research practices that can contaminate the research literature with false positive and biased findings. We conducted a replication of this study among Italian research psychologists to investigate whether these findings generalize to other countries. All the original materials were translated into Italian, and members of the Italian Association of Psychology were invited to participate via an online survey. The percentages of Italian psychologists who admitted to having used ten questionable research practices were similar to the results obtained in the United States, although there were small but significant differences in self-admission rates for some QRPs. Nearly all researchers (88%) admitted using at least one of the practices, and researchers generally considered a practice possibly defensible if they admitted using it, but Italian researchers were much less likely than US researchers to consider a practice defensible. Participants’ estimates of the percentage of researchers who have used these practices were greater than the self-admission rates, and participants estimated that researchers would be unlikely to admit it. In written responses, participants argued that some of these practices are not questionable and that they had used some practices because reviewers and journals demand it. The similarity of results obtained in the United States, this study, and a related study conducted in Germany suggests that adoption of these practices is an international phenomenon, likely due to systemic features of the international research and publication processes.
Review of General Psychology | 2015
Michèle B. Nuijten; Marcel A.L.M. van Assen; Coosje Lisabet Sterre Veldkamp; Jelte M. Wicherts
Replication is often viewed as the demarcation between science and nonscience. However, contrary to the commonly held view, we show that in the current (selective) publication system replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in estimated population effect size as a function of publication bias and the studies’ sample size or power. We analytically show that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We will discuss the implications of our findings for interpreting results of published and unpublished studies, and for conducting and interpreting results of meta-analyses. We also discuss solutions for the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
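The core claim above, that selective publication inflates published effect size estimates regardless of replication, can be illustrated with a toy simulation (a hypothetical sketch, not the paper's analytical model; `simulate_publication_bias` and all parameter values are illustrative assumptions):

```python
import random
from statistics import mean

def simulate_publication_bias(true_d=0.2, n_per_group=20,
                              n_studies=10_000, seed=1):
    """Toy illustration of publication bias: each study estimates a
    standardized effect with sampling error, and only statistically
    significant positive results are 'published'."""
    rng = random.Random(seed)
    se = (2 / n_per_group) ** 0.5  # approximate standard error of Cohen's d
    estimates = [rng.gauss(true_d, se) for _ in range(n_studies)]
    # Keep only estimates that reach one-sided significance at alpha = .025.
    published = [d for d in estimates if d / se > 1.96]
    return mean(estimates), mean(published)

all_mean, pub_mean = simulate_publication_bias()
print(f"mean of all estimates:       {all_mean:.2f}")  # close to true d = 0.20
print(f"mean of published estimates: {pub_mean:.2f}")  # substantially inflated
```

Because only estimates that clear the significance threshold survive, the published mean sits far above the true effect, and averaging in published (equally selected) replications does not remove this truncation bias.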
Psychometrika | 2016
Klaas Sijtsma; Coosje Lisabet Sterre Veldkamp; Jelte M. Wicherts
We respond to the commentaries that Waldman and Lilienfeld (Psychometrika, 2015) and Wigboldus and Dotsch (Psychometrika, 2015) provided in response to Sijtsma’s discussion article on questionable research practices (Psychometrika, 2015). Specifically, we discuss the fear of an increased dichotomy between substantive and statistical aspects of research that may arise when the latter are laid entirely in the hands of a statistician, remedies for false positives and replication failure, and the status of data exploration, and we provide a redefinition of the concept of questionable research practices.
PLOS ONE | 2016
Joeri K. Tijdink; L.M. Bouter; Coosje Lisabet Sterre Veldkamp; Peter M. van de Ven; Jelte M. Wicherts; Yvo M. Smulders
Background: Personality influences decision making and ethical considerations. Its influence on the occurrence of research misbehavior has never been studied. This study aims to determine the association between personality traits and self-reported questionable research practices and research misconduct. We hypothesized that narcissistic, Machiavellian, and psychopathic traits, as well as self-esteem, are associated with research misbehavior.
Methods: This cross-sectional study included 535 Dutch biomedical scientists (response rate 65%) from all hierarchical layers of four university medical centers in the Netherlands. We used validated personality questionnaires, including the Dark Triad (narcissism, psychopathy, and Machiavellianism), Rosenberg’s Self-Esteem Scale, and the Publication Pressure Questionnaire (PPQ), as well as demographic and job-specific characteristics, to investigate the association of personality traits with a composite research misbehavior severity score.
Findings: Machiavellianism was positively associated with self-reported research misbehavior (beta 1.28, CI 1.06–1.53), while narcissism, psychopathy, and self-esteem were not. Exploratory analysis revealed that narcissism and research misconduct were more severe among persons in higher academic ranks (i.e., professors) (p<0.01 and p<0.001, respectively), while self-esteem scores and publication pressure were lower (p<0.001 and p<0.01, respectively) compared with postgraduate PhD fellows.
Conclusions: Machiavellianism may be a risk factor for research misbehavior. Narcissism and research misbehavior were more prevalent among biomedical scientists in higher academic positions. These results suggest that personality has an impact on research behavior and should be taken into account in fostering responsible conduct of research.
Accountability in Research | 2017
Coosje Lisabet Sterre Veldkamp; C.H.J. Hartgerink; Marcel A.L.M. van Assen; Jelte M. Wicherts
Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the “storybook image of the scientist” is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly-educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between different categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and Ph.D. students, and higher levels to Ph.D. students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one’s own group than to people in other groups may decrease scientists’ willingness to adopt recently proposed practices to reduce error, bias and dishonesty in science.
The Lancet | 2016
Sven Stringer; Coosje Lisabet Sterre Veldkamp
…disappears. However, if unhappiness were to affect mortality, it would probably do so by first affecting health. As the authors discuss, health and happiness are strongly correlated. Since the two variables were measured at the same timepoint, what drives this strong correlation is impossible to tell from the reported data. In other words, are unhealthy people unhappy because they are unhealthy, or can unhappiness also decrease health? Adjustment for health does not answer this fundamental question. For example, health has a decreased effect on mortality after correction for happiness. This result simply reflects the strong correlation between health and happiness and would not warrant the conclusion that the effect of health on mortality is smaller than previously thought. To answer this question, health and happiness should be recorded at several timepoints, and whether unhappy people tend to become unhealthy after adjustment for unhealthy behaviour should be tested. Therefore, to interpret the interesting results of Liu and colleagues as a definitive contradiction of previous results suggesting that happiness can affect health and mortality would be premature.
Collabra: Psychology | 2017
Michèle B. Nuijten; Jeroen Borghuis; Coosje Lisabet Sterre Veldkamp; Linda Dominguez-Alvarez; Marcel A.L.M. van Assen; Jelte M. Wicherts
Archive | 2017
Michèle B. Nuijten; Jeroen Borghuis; Coosje Lisabet Sterre Veldkamp; Linda Dominguez Alvarez; Marcel A.L.M. van Assen; Jelte M. Wicherts