Matthijs J. Warrens
Leiden University
Publication
Featured research published by Matthijs J. Warrens.
Journal of Classification | 2008
Matthijs J. Warrens
It is shown that the Hubert-Arabie adjusted Rand index can be calculated in two steps: first form the fourfold contingency table counting the pairs of objects that were placed in the same cluster in both partitions, in the same cluster in one partition but in different clusters in the other, and in different clusters in both; then compute Cohen’s κ on this fourfold table.
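The identity can be checked directly with a short Python sketch (the function names are mine; the fourfold table is formed over all pairs of objects):

```python
from itertools import combinations

def pair_table(part1, part2):
    """Fourfold table over all object pairs: a = same cluster in both
    partitions, b / c = same cluster in one partition but different
    clusters in the other, d = different clusters in both."""
    a = b = c = d = 0
    for i, j in combinations(range(len(part1)), 2):
        same1 = part1[i] == part1[j]
        same2 = part2[i] == part2[j]
        if same1 and same2:
            a += 1
        elif same1:
            b += 1
        elif same2:
            c += 1
        else:
            d += 1
    return a, b, c, d

def cohen_kappa_2x2(a, b, c, d):
    """Cohen's kappa on a 2x2 table with cells a, b, c, d."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def adjusted_rand(part1, part2):
    """Hubert-Arabie adjusted Rand index via the kappa identity."""
    return cohen_kappa_2x2(*pair_table(part1, part2))
```

For identical partitions the index is 1; for the fully crossed partitions `[0,0,1,1]` and `[0,1,0,1]` both routes give -0.5, matching the standard combinatorial formula for the adjusted Rand index.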
Psychometrika | 2008
Matthijs J. Warrens
This paper studies correction for chance in coefficients that are linear functions of the observed proportion of agreement. The paper unifies and extends various results on correction for chance in the literature. A specific class of coefficients is used to illustrate the results derived in this paper. Coefficients in this class, e.g. the simple matching coefficient and the Dice/Sørensen coefficient, become equivalent after correction for chance, irrespective of what expectation is used. The coefficients become either Cohen’s kappa, Scott’s pi, Mak’s rho, Goodman and Kruskal’s lambda, or Hamann’s eta, depending on what expectation is considered appropriate. Both a multicategorical generalization and a multivariate generalization are discussed.
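A minimal Python sketch of the linear correction, using an illustrative 2×2 agreement table: the simple matching coefficient (the observed proportion of agreement) becomes Cohen’s kappa or Scott’s pi depending on which expectation is plugged in.

```python
def corrected_for_chance(s, e):
    """Linear correction for chance: (S - E(S)) / (1 - E(S))."""
    return (s - e) / (1 - e)

# Illustrative 2x2 agreement table (counts): rows = judge 1, cols = judge 2.
table = [[20, 5], [10, 15]]
n = 50
p_obs = (table[0][0] + table[1][1]) / n  # simple matching / observed agreement

# Marginal proportions of category 1 for each judge.
p_row = (table[0][0] + table[0][1]) / n  # judge 1
p_col = (table[0][0] + table[1][0]) / n  # judge 2

# Cohen-type expectation (product of the judges' marginals) -> kappa.
e_cohen = p_row * p_col + (1 - p_row) * (1 - p_col)
kappa = corrected_for_chance(p_obs, e_cohen)

# Scott-type expectation (squared averaged marginals) -> pi.
p_avg = (p_row + p_col) / 2
e_scott = p_avg ** 2 + (1 - p_avg) ** 2
pi = corrected_for_chance(p_obs, e_scott)
```

The two corrected values differ only because the expectations differ; the observed coefficient and the functional form of the correction are identical.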
Journal of Classification | 2010
Matthijs J. Warrens
Suppose two judges each classify a group of objects into one of several nominal categories. It has been observed in the literature that, for fixed observed agreement between the judges, Cohen’s kappa penalizes judges with similar marginals compared to judges who produce different marginals. This paper presents a formal proof of this phenomenon.
Journal of Classification | 2008
Matthijs J. Warrens
Bounds of association coefficients for binary variables are derived using the arithmetic-geometric-harmonic mean inequality. More precisely, it is shown which presence/absence coefficients are bounds with respect to each other. Using the new bounds, it is investigated whether a coefficient is, in general, closer to its upper or its lower bound.
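One well-known instance of such an ordering (not necessarily the paper’s main example): the Kulczynski, Ochiai, and Dice coefficients are, respectively, the arithmetic, geometric, and harmonic means of the two conditional proportions a/(a+b) and a/(a+c), so the mean inequality makes each an upper or lower bound of the others. A sketch in Python, where a = joint presences and b, c = presences unique to either variable:

```python
from math import sqrt

def kulczynski(a, b, c):
    """Arithmetic mean of the proportions a/(a+b) and a/(a+c)."""
    return (a / (a + b) + a / (a + c)) / 2

def ochiai(a, b, c):
    """Geometric mean of the same two proportions."""
    return a / sqrt((a + b) * (a + c))

def dice(a, b, c):
    """Harmonic mean of the same two proportions: 2a / (2a + b + c)."""
    return 2 * a / (2 * a + b + c)
```

By the arithmetic-geometric-harmonic mean inequality, kulczynski ≥ ochiai ≥ dice holds for all positive a, b, c.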
Journal of Classification | 2008
Matthijs J. Warrens
Many similarity coefficients for binary data are defined as fractions. For certain resemblance measures the denominator may become zero, in which case the value of the coefficient is indeterminate. It is shown that the seriousness of the indeterminacy problem differs across resemblance measures. Following Batagelj and Bren (1995), we remove the indeterminacies by defining appropriate values in the critical cases.
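For example, the Jaccard coefficient has denominator a + b + c, which is zero when neither object has any attribute present (only joint absences). A sketch of the approach; the specific value assigned in the critical case is one possible convention, not necessarily the one the paper settles on:

```python
def jaccard(a, b, c):
    """Jaccard coefficient a / (a + b + c) for binary data, where
    a = joint presences and b, c = presences unique to either object.
    When a + b + c == 0 the fraction is 0/0 and thus indeterminate;
    following the approach of Batagelj and Bren (1995), a value is
    assigned in this critical case. Here both objects lack every
    attribute, so 1.0 (identical) is a natural, though conventional,
    choice."""
    if a + b + c == 0:
        return 1.0
    return a / (a + b + c)
```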
Quarterly Journal of Experimental Psychology | 2006
Lorenza S. Colzato; Matthijs J. Warrens; Bernhard Hommel
Individual performance was compared across three different tasks that tap into the binding of stimulus features in perception, the binding of action features in action planning, and the emergence of stimulus–response bindings (“event files”). Within a task, correlations between the sizes of binding effects were found within visual perception (e.g., the strength of shape–location binding correlated positively with the strength of shape–colour binding), but not between perception and action planning, suggesting distinct, domain-specific binding mechanisms. To some degree, binding strength was predicted by priming effects of the respective features, especially if these features varied on a dimension that matched the current attentional set.
Journal of Classification | 2009
Matthijs J. Warrens
k-Adic formulations (for groups of objects of size k) of a variety of 2-adic similarity coefficients (for pairs of objects) for binary (presence/absence) data are presented. The formulations are not functions of 2-adic similarity coefficients. Instead, the main objective of the paper is to present k-adic formulations that reflect certain basic characteristics of, and have a similar interpretation as, their 2-adic versions. Two major classes are distinguished. The first class, referred to as Bennani-Heiser similarity coefficients, contains all coefficients that can be defined using just the matches, the number of attributes that are present and that are absent in k objects, and the total number of attributes. The coefficients in the second class can be formulated as functions of Dice’s association indices.
Journal of Remote Sensing | 2015
Matthijs J. Warrens
An important task in remote sensing is accuracy assessment of classified imagery. The error matrix is a widely used approach for expressing the accuracy. In the remote-sensing literature, various accuracy measures have been developed for summarizing the information in an error matrix. Two relatively new measures are the so-called quantity disagreement and allocation disagreement. Before the new measures can become standard tools in accuracy assessment, it is important that their properties are understood. In this note, it is shown how the map-level disagreement measures are related to the corresponding category-level disagreement measures.
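The measures in question are due to Pontius and Millones (2011), and the map-level/category-level relation can be sketched in Python; the error matrix below is illustrative, given as proportions with rows as the mapped categories and columns as the reference categories:

```python
def disagreement(p):
    """Quantity and allocation disagreement from an error matrix p of
    proportions (rows = mapped category, columns = reference category).
    Category-level quantity disagreement for category g is the absolute
    difference between its row and column totals; category-level
    allocation disagreement is twice the smaller of its off-diagonal
    row and column sums. The map-level measures are half the sums of
    the category-level measures."""
    k = len(p)
    row = [sum(p[g]) for g in range(k)]
    col = [sum(p[i][g] for i in range(k)) for g in range(k)]
    q = [abs(row[g] - col[g]) for g in range(k)]
    a = [2 * min(row[g] - p[g][g], col[g] - p[g][g]) for g in range(k)]
    Q = sum(q) / 2  # map-level quantity disagreement
    A = sum(a) / 2  # map-level allocation disagreement
    return Q, A

# Illustrative 2-category error matrix; total disagreement
# (1 - overall accuracy) decomposes as Q + A.
Q, A = disagreement([[0.30, 0.10],
                     [0.05, 0.55]])
```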
Journal of Classification | 2015
Matthijs J. Warrens
Cronbach’s alpha is an estimate of the reliability of a test score if the items are essentially tau-equivalent. Several authors have derived results that provide alternative interpretations of alpha. These interpretations are also valid if essential tau-equivalency does not hold. For example, alpha is the mean of all split-half reliabilities if the test is split into two halves that are equal in size. This note presents several connections between Cronbach’s alpha and the Spearman-Brown formula. The results provide new interpretations of Cronbach’s alpha, the stepped down alpha, and standardized alpha that are also valid when essential tau-equivalency or parallel equivalency does not hold. The main result is that the stepped down alpha is a weighted average of the alphas of all subtests of a specific size, where the weights are the denominators of the subtest alphas. Thus, the stepped down alpha can be interpreted as an average subtest alpha. Furthermore, we may calculate the stepped down alpha without using the Spearman-Brown formula.
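The split-half interpretation mentioned above can be checked numerically. The sketch below uses the Flanagan-Rulon split-half coefficient 4·Cov(A, B)/Var(A + B) and an illustrative 4-item covariance matrix; the mean over all equal-size splits reproduces alpha:

```python
from itertools import combinations

def alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix."""
    k = len(cov)
    total = sum(cov[i][j] for i in range(k) for j in range(k))
    diag = sum(cov[i][i] for i in range(k))
    return k / (k - 1) * (1 - diag / total)

def split_half(cov, half):
    """Flanagan-Rulon split-half reliability 4*Cov(A, B)/Var(A + B)
    for the split into the item set `half` and its complement."""
    k = len(cov)
    other = [i for i in range(k) if i not in half]
    total = sum(cov[i][j] for i in range(k) for j in range(k))
    cross = sum(cov[i][j] for i in half for j in other)
    return 4 * cross / total

# Illustrative 4-item covariance matrix.
cov = [[1.0, 0.6, 0.4, 0.2],
       [0.6, 1.0, 0.5, 0.3],
       [0.4, 0.5, 1.0, 0.4],
       [0.2, 0.3, 0.4, 1.0]]

# All splits into two halves of equal size (each split counted once,
# by fixing item 0 in the first half).
halves = [h for h in combinations(range(4), 2) if 0 in h]
mean_split = sum(split_half(cov, h) for h in halves) / len(halves)
# mean_split equals alpha(cov)
```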
Advanced Data Analysis and Classification | 2013
Matthijs J. Warrens
Cohen’s weighted kappa is a popular descriptive statistic for summarizing interrater agreement on an ordinal scale. An agreement table with