Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Denny Borsboom is active.

Publications


Featured research published by Denny Borsboom.


Science | 2015

Promoting an open research culture

Brian A. Nosek; George Alter; George C. Banks; Denny Borsboom; Sara Bowman; S. J. Breckler; Stuart Buck; Christopher D. Chambers; G. Chin; Garret Christensen; M. Contestabile; A. Dafoe; E. Eich; J. Freese; Rachel Glennerster; D. Goroff; Donald P. Green; B. Hesse; Macartan Humphreys; John Ishiyama; Dean Karlan; A. Kraut; Arthur Lupia; P. Mabry; T. Madon; Neil Malhotra; E. Mayo-Wilson; M. McNutt; Edward Miguel; E. Levy Paluck

Author guidelines for journals could help to promote transparency, openness, and reproducibility. Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).


Psychological Review | 2004

The concept of validity

Denny Borsboom; Gideon J. Mellenbergh; Jaap van Heerden

This article advances a simple conception of test validity: A test is valid for measuring an attribute if (a) the attribute exists and (b) variations in the attribute causally produce variation in the measurement outcomes. This conception is shown to diverge from current validity theory in several respects. In particular, the emphasis in the proposed conception is on ontology, reference, and causality, whereas current validity theory focuses on epistemology, meaning, and correlation. It is argued that the proposed conception is not only simpler but also theoretically superior to the position taken in the existing literature. Further, it has clear theoretical and practical implications for validation research. Most important, validation research must not be directed at the relation between the measured attribute and other attributes but at the processes that convey the effect of the measured attribute on the test scores.
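
To make the two conditions concrete, here is a minimal simulation sketch (illustrative, not from the article; the loadings and noise level are assumptions) in which an attribute exists and variation in it causally produces variation in the measurement outcomes:

```python
# Minimal sketch of the proposed validity conception (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 5

# (a) the attribute exists: every person has a true attribute level
attribute = rng.normal(size=n_persons)

# (b) variation in the attribute causally produces variation in the
# measurement outcomes: each item score is a noisy function of the attribute
loadings = np.array([0.8, 0.7, 0.9, 0.6, 0.75])  # assumed item sensitivities
scores = attribute[:, None] * loadings + rng.normal(
    scale=0.5, size=(n_persons, n_items))

# Under this data-generating process the test is valid for the attribute:
# intervening on the attribute would change the expected scores.
for j in range(n_items):
    r = np.corrcoef(attribute, scores[:, j])[0, 1]
    print(f"item {j + 1}: corr(attribute, score) = {r:.2f}")
```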


American Psychologist | 2006

The poor availability of psychological research data for reanalysis.

Jelte M. Wicherts; Denny Borsboom; Judith Kats; Dylan Molenaar

The origin of the present comment lies in a failed attempt to obtain, through e-mailed requests, data reported in 141 empirical articles recently published by the American Psychological Association (APA). Our original aim was to reanalyze these data sets to assess the robustness of the research findings to outliers. We never got that far. In June 2005, we contacted the corresponding author of every article that appeared in the last two 2004 issues of four major APA journals. Because their articles had been published in APA journals, we were certain that all of the authors had signed the APA Certification of Compliance With APA Ethical Principles, which includes the principle on sharing data for reanalysis. Unfortunately, 6 months later, after writing more than 400 e-mails--and sending some corresponding authors detailed descriptions of our study aims, approvals of our ethical committee, signed assurances not to share data with others, and even our full resumes--we ended up with a meager 38 positive reactions and the actual data sets from 64 studies (25.7% of the total number of 249 data sets). This means that 73% of the authors did not share their data.
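
As a quick arithmetic check of the proportions reported above (all figures taken from the abstract itself):

```python
# Figures from the abstract: 249 requested data sets, 64 obtained.
datasets_total = 249
datasets_obtained = 64
print(f"{datasets_obtained / datasets_total:.1%} of data sets obtained")  # 25.7%
# The abstract's complementary 73% figure is reported at the author level
# (authors who did not share), not at the data-set level.
```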


Psychological Review | 2003

The theoretical status of latent variables

Denny Borsboom; Gideon J. Mellenbergh; Jaap van Heerden

This article examines the theoretical status of latent variables as used in modern test theory models. First, it is argued that a consistent interpretation of such models requires a realist ontology for latent variables. Second, the relation between latent variables and their indicators is discussed. It is maintained that this relation can be interpreted as a causal one but that in measurement models for interindividual differences the relation does not apply to the level of the individual person. To substantiate intraindividual causal conclusions, one must explicitly represent individual level processes in the measurement model. Several research strategies that may be useful in this respect are discussed, and a typology of constructs is proposed on the basis of this analysis. The need to link individual processes to latent variable models for interindividual differences is emphasized.
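
The between- versus within-person distinction can be made concrete with a small simulation (a sketch under assumed values, not the article's model): a trait that differs across persons drives all items, while each person's fluctuations over time are independent:

```python
# Sketch: a latent variable model that holds between persons but not within.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_time, n_items = 200, 100, 3

trait = rng.normal(size=n_persons)          # interindividual latent variable
loadings = np.array([0.9, 0.8, 0.85])       # assumed item loadings

# Each person's item scores fluctuate independently around trait-driven means,
# so there is no common factor operating at the level of the individual.
data = (trait[:, None, None] * loadings
        + rng.normal(scale=0.5, size=(n_persons, n_time, n_items)))

# Between persons the items correlate strongly (shared cause: the trait) ...
print(np.corrcoef(data.mean(axis=1), rowvar=False).round(2))
# ... but within one person over time they are essentially uncorrelated.
print(np.corrcoef(data[0], rowvar=False).round(2))
```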


Annual Review of Clinical Psychology | 2013

Network Analysis: An Integrative Approach to the Structure of Psychopathology

Denny Borsboom; Angélique O. J. Cramer

In network approaches to psychopathology, disorders result from the causal interplay between symptoms (e.g., worry → insomnia → fatigue), possibly involving feedback loops (e.g., a person may engage in substance abuse to forget the problems that arose due to substance abuse). The present review examines methodologies suited to identify such symptom networks and discusses network analysis techniques that may be used to extract clinically and scientifically useful information from such networks (e.g., which symptom is most central in a person's network). The authors also show how network analysis techniques may be used to construct simulation models that mimic symptom dynamics. Network approaches naturally explain the limited success of traditional research strategies, which are typically based on the idea that symptoms are manifestations of some common underlying factor, while offering promising methodological alternatives. In addition, these techniques may offer possibilities to guide and evaluate therapeutic interventions.
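
As an illustrative sketch of such an analysis (the symptoms and edges below are hypothetical, not an estimated network), symptoms can be treated as nodes and centrality indices computed over the resulting graph:

```python
# Hypothetical symptom network; edges stand for assumed causal interplay.
import networkx as nx

edges = [
    ("worry", "insomnia"),
    ("insomnia", "fatigue"),
    ("fatigue", "concentration problems"),
    ("worry", "concentration problems"),
    ("fatigue", "low mood"),
    ("low mood", "worry"),
]
G = nx.Graph(edges)

# Degree centrality is one simple index of how central a symptom is.
for symptom, c in sorted(nx.degree_centrality(G).items(),
                         key=lambda kv: -kv[1]):
    print(f"{symptom}: {c:.2f}")
```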


Structural Equation Modeling | 2009

[Review of: R. L. Brennan (Ed.) (2006). Educational Measurement (4th ed.)]

Denny Borsboom

Not counting its editor, R. L. Brennan, who has earned my eternal admiration for successfully managing what must have been a monstrous project, I am probably the fourth person alive—with Wainer (2007), Green (2008), and Cizek (2008)—to have read the fourth edition of Educational Measurement from cover to cover. Given the late date of this book review, I expect this to remain the case, as this book is unlikely to be read in its entirety except by those intent on reviewing it. On the back flap, Educational Measurement is marketed as “the bible in its field,” and as far as readability goes, the comparison to the Holy Scripture is quite apt. Educational Measurement is not a page turner. Its limited readability need not be an insurmountable problem for the book, however, because over the years the editions of Educational Measurement have grown to become reference texts more than anything else. In that respect, the book definitely stands its ground. In fact, there is nothing that compares to it. If Educational Measurement contains a chapter on what you are looking for, call it A, then you can be sure (with one or two exceptions) that the relevant chapter will (a) provide an exhaustive list of the many ways in which A has been done in the past, (b) discuss the manifold of problems people encountered when doing A, (c) review a dozen or more empirical studies designed to figure out the best way of doing A (usually inconclusive), and (d) urge researchers to do more in the way of validation research with respect to A. Although, at times, the uniform adoption of the laundry list model makes reading this book feel like a mild form of torture, it works well in terms of organization, and I have to say that I did learn very much about educational testing, as most chapters are of high quality in terms of scholarship and content. However, despite the many topics that receive detailed attention in the book and its considerable size (779 pages; Wainer, 2007, estimates its weight at 4.5 pounds), some of the most interesting facts about this monumental volume concern the things it does not contain. There are some glaring omissions that, as an outsider (not an American, not an educational tester, not a construct validity enthusiast), stared me quite directly in the face, but appear to have gone unnoticed.


Behavioral and Brain Sciences | 2010

Comorbidity: A network perspective

Angélique O. J. Cramer; Lourens J. Waldorp; Han L. J. van der Maas; Denny Borsboom

The pivotal problem of comorbidity research lies in the psychometric foundation it rests on, that is, latent variable theory, in which a mental disorder is viewed as a latent variable that causes a constellation of symptoms. From this perspective, comorbidity is a (bi)directional relationship between multiple latent variables. We argue that such a latent variable perspective encounters serious problems in the study of comorbidity, and offer a radically different conceptualization in terms of a network approach, where comorbidity is hypothesized to arise from direct relations between symptoms of multiple disorders. We propose a method to visualize comorbidity networks and, based on an empirical network for major depression and generalized anxiety, we argue that this approach generates realistic hypotheses about pathways to comorbidity, overlapping symptoms, and diagnostic boundaries, that are not naturally accommodated by latent variable models: Some pathways to comorbidity through the symptom space are more likely than others; those pathways generally have the same direction (i.e., from symptoms of one disorder to symptoms of the other); overlapping symptoms play an important role in comorbidity; and boundaries between diagnostic categories are necessarily fuzzy.
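
The hypothesized structure is easy to sketch (hypothetical symptoms and edges; the overlapping symptom is an assumption for illustration): two disorder clusters joined only through a shared symptom, which then acts as a bridge on pathways to comorbidity:

```python
# Two hypothetical symptom clusters joined by one overlapping symptom.
import networkx as nx

depression_edges = [("low mood", "anhedonia"), ("anhedonia", "fatigue"),
                    ("low mood", "sleep problems")]
anxiety_edges = [("worry", "restlessness"), ("restlessness", "irritability"),
                 ("worry", "sleep problems")]  # "sleep problems" overlaps

G = nx.Graph(depression_edges + anxiety_edges)

# Betweenness centrality picks out nodes lying on paths between the clusters;
# the overlapping symptom should emerge as the bridge to comorbidity.
symptom, score = max(nx.betweenness_centrality(G).items(),
                     key=lambda kv: kv[1])
print(f"bridge symptom: {symptom} (betweenness = {score:.2f})")
```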


Journal of Personality and Social Psychology | 2011

Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011).

Eric-Jan Wagenmakers; Ruud Wetzels; Denny Borsboom; Han L. J. van der Maas

Does psi exist? D. J. Bem (2011) conducted 9 studies with over 1,000 participants in an attempt to demonstrate that future events retroactively affect people's responses. Here we discuss several limitations of Bem's experiments on psi; in particular, we show that the data analysis was partly exploratory and that one-sided p values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem's data with a default Bayesian t test and show that the evidence for psi is weak to nonexistent. We argue that in order to convince a skeptical audience of a controversial claim, one needs to conduct strictly confirmatory studies and analyze the results with statistical tests that are conservative rather than liberal. We conclude that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.
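
The default Bayesian t test referred to here is the JZS Bayes factor of Rouder et al. (2009); the one-sample version can be sketched as follows (input values are illustrative, not Bem's data):

```python
# JZS default Bayes factor for a one-sample t test (sketch; r = Cauchy scale).
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=1.0):
    """BF10 for a one-sample t statistic from n observations, with a
    Cauchy(0, r) prior on effect size under H1 (Rouder et al., 2009)."""
    nu = n - 1
    # Marginal likelihood under H0 (zero effect), up to a shared constant.
    h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    # Under H1, mix over g (inverse-gamma representation of the Cauchy prior).
    def integrand(g):
        k = 1 + n * g * r**2
        return (k ** -0.5
                * (1 + t**2 / (k * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    h1, _ = integrate.quad(integrand, 0, np.inf)
    return h1 / h0

# A "just significant" t in a sizable sample yields only weak evidence for H1:
print(f"BF10 = {jzs_bf10(t=2.0, n=100):.2f}")
```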


Perspectives on Psychological Science | 2012

An Agenda for Purely Confirmatory Research

Eric-Jan Wagenmakers; Ruud Wetzels; Denny Borsboom; Han L. J. van der Maas; Rogier A. Kievit

The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine-tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine-tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid. Other analyses can be carried out, but these should be labeled “exploratory.” We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.


Psychometrika | 2006

The attack of the psychometricians

Denny Borsboom

This paper analyzes the theoretical, pragmatic, and substantive factors that have hampered the integration between psychology and psychometrics. Theoretical factors include the operationalist mode of thinking which is common throughout psychology, the dominance of classical test theory, and the use of “construct validity” as a catch-all category for a range of challenging psychometric problems. Pragmatic factors include the lack of interest in mathematically precise thinking in psychology, inadequate representation of psychometric modeling in major statistics programs, and insufficient mathematical training in the psychological curriculum. Substantive factors relate to the absence of psychological theories that are sufficiently strong to motivate the structure of psychometric models. Following the identification of these problems, a number of promising recent developments are discussed, and suggestions are made to further the integration of psychology and psychometrics.

Collaboration


Dive into Denny Borsboom's collaborations.

Top Co-Authors

Angélique O. J. Cramer

University Medical Center Groningen


Rogier A. Kievit

Cognition and Brain Sciences Unit


Kenneth S. Kendler

Virginia Commonwealth University


Francis Tuerlinckx

Katholieke Universiteit Leuven


Robert A. Schoevers

University Medical Center Groningen


Steven H. Aggen

Virginia Commonwealth University
