Geoffrey J. Iverson
University of California, Irvine
Publications
Featured research published by Geoffrey J. Iverson.
Psychonomic Bulletin & Review | 2009
Jeffrey N. Rouder; Paul L. Speckman; Dongchu Sun; Richard D. Morey; Geoffrey J. Iverson
Progress in science often comes from discovering invariances in relationships among variables; these invariances often correspond to null hypotheses. As is commonly known, it is not possible to state evidence for the null hypothesis in conventional significance testing. Here we highlight a Bayes factor alternative to the conventional t test that will allow researchers to express preference for either the null hypothesis or the alternative. The Bayes factor has a natural and straightforward interpretation, is based on reasonable assumptions, and has better properties than other methods of inference that have been advocated in the psychological literature. To facilitate use of the Bayes factor, we provide an easy-to-use, Web-based program that performs the necessary calculations.
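The Bayes factor described in this abstract, the JZS Bayes factor for the one-sample t test, reduces to a one-dimensional integral over the scale of the prior on effect size. A minimal pure-Python sketch, assuming the default Cauchy prior with scale 1 (the function name and the midpoint-rule integration are illustrative choices, not the authors' implementation):

```python
import math

def jzs_bf01(t, n, steps=20000):
    """Numerically approximate the JZS Bayes factor BF01 (null over
    alternative) for a one-sample t statistic with sample size n.
    The Cauchy prior on effect size is expressed hierarchically: a
    normal prior with variance g, with g ~ inverse-gamma(1/2, 1/2).
    """
    nu = n - 1
    # marginal likelihood of the data under the null (effect size 0)
    null_lik = (1.0 + t * t / nu) ** (-(nu + 1) / 2.0)

    def integrand(g):
        # marginal likelihood under the alternative for a fixed g ...
        lik = ((1.0 + n * g) ** -0.5
               * (1.0 + t * t / ((1.0 + n * g) * nu)) ** (-(nu + 1) / 2.0))
        # ... weighted by the inverse-gamma(1/2, 1/2) prior density on g
        prior = ((2.0 * math.pi) ** -0.5) * g ** -1.5 * math.exp(-1.0 / (2.0 * g))
        return lik * prior

    # midpoint rule after mapping g in (0, inf) to u in (0, 1) via g = u/(1-u)
    alt_lik = 0.0
    for i in range(steps):
        u = (i + 0.5) / steps
        g = u / (1.0 - u)
        alt_lik += integrand(g) / (1.0 - u) ** 2
    alt_lik /= steps
    return null_lik / alt_lik
```

A BF01 above 1 favors the null; for example, `jzs_bf01(0.0, 50)` exceeds 1, and the support for the null at t = 0 grows with sample size, which is exactly the ability to state evidence for the null that conventional significance testing lacks.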
Perspectives on Psychological Science | 2011
Ruud Wetzels; Dora Matzke; Michael D. Lee; Jeffrey N. Rouder; Geoffrey J. Iverson; Eric-Jan Wagenmakers
Statistical inference in psychology has traditionally relied heavily on p-value significance testing. This approach to drawing conclusions from data, however, has been widely criticized, and two types of remedies have been advocated. The first proposal is to supplement p values with complementary measures of evidence, such as effect sizes. The second is to replace inference with Bayesian measures of evidence, such as the Bayes factor. The authors provide a practical comparison of p values, effect sizes, and default Bayes factors as measures of statistical evidence, using 855 recently published t tests in psychology. The comparison yields two main results. First, although p values and default Bayes factors almost always agree about what hypothesis is better supported by the data, the measures often disagree about the strength of this support; for 70% of the data sets for which the p value falls between .01 and .05, the default Bayes factor indicates that the evidence is only anecdotal. Second, effect sizes can provide additional evidence to p values and default Bayes factors. The authors conclude that the Bayesian approach is comparatively prudent, preventing researchers from overestimating the evidence in favor of an effect.
Psychological Review | 1993
Murray Glanzer; John K. Adams; Geoffrey J. Iverson; Kisok Kim
Three regularities in recognition memory are described with supporting data: the mirror effect, the order of receiver operating characteristic slopes, and the symmetry of movement of underlying distributions. The derivation of these regularities from attention/likelihood theory is demonstrated. The theory's central concept, which distinguishes it from other theories, is the following: Ss make recognition decisions by combining information about new and old items, the combination made in the form of likelihood ratios. The central role of the likelihood ratios extends the implications of signal detection theory for recognition memory. Attention/likelihood theory is fitted to data of 2 series of experiments. One series involves yes-no tests and confidence ratings, the other forced-choice experiments. It is argued that the regularities require a revision of most current theories of recognition memory.
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1993
Michael D'Zmura; Geoffrey J. Iverson
Changing a scene's illuminant causes the chromatic properties of reflected lights to change. This change in the lights from surfaces provides spectral information about surface reflectances and illuminants. We examine conditions under which these properties may be recovered by using bilinear models. Necessary conditions that follow from comparing the number of equations and the number of unknowns in the recovery procedure are not sufficient for unique recovery. Necessary and sufficient conditions follow from demanding a one-to-one relationship between quantum catch data and sets of lit surfaces. We present an algorithm for determining whether spectral descriptions of lights and surfaces can be recovered uniquely from reflected lights.
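The necessary counting condition mentioned in this abstract (at least as many measurement equations as unknown model coefficients) can be sketched as follows. The function name, the parameterization, and the subtraction of a single overall scale ambiguity are illustrative assumptions, not the paper's notation:

```python
def counting_condition(n_lights, n_surfaces, n_sensors, dim_illum, dim_refl):
    """Necessary (but, as the paper shows, not sufficient) condition
    for unique recovery in a bilinear model: the number of quantum-catch
    measurements must be at least the number of unknown coefficients.
    One unknown is subtracted for the overall intensity ambiguity
    (scaling illuminants up and reflectances down leaves data unchanged).
    """
    equations = n_lights * n_surfaces * n_sensors
    unknowns = n_lights * dim_illum + n_surfaces * dim_refl - 1
    return equations >= unknowns
```

For instance, with a trichromatic observer, two illuminants, three surfaces, and three-dimensional models for both lights and reflectances, the count gives 18 equations against 14 unknowns, so counting alone does not rule out recovery; the abstract's point is that passing this test still does not guarantee uniqueness.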
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1993
Michael D'Zmura; Geoffrey J. Iverson
Our analysis of color constancy in a companion paper [J. Opt. Soc. Am. A 10, 2148 (1993)] provided an algorithm that lets one test how well linear color constancy schemes work. Here we present the results of applying the algorithm to a large parametric class of color constancy problems involving bilinear models that relate photoreceptoral spectral sensitivities, surface reflectance functions, and illuminant spectral power distributions. These results, supported by simulation and further analysis, provide a detailed classification of two-stage linear methods for recovering the spectral properties of reflectances and illuminants from reflected lights.
Statistics for Social and Behavioral Sciences | 2008
Eric-Jan Wagenmakers; Michael D. Lee; Tom Lodewyckx; Geoffrey J. Iverson
Throughout this book, the topic of order-restricted inference is dealt with almost exclusively from a Bayesian perspective. Some readers may wonder why the other main school for statistical inference – frequentist inference – has received so little attention here. Isn’t it true that in the field of psychology, almost all inference is frequentist inference?
Journal of Experimental Psychology: Learning, Memory and Cognition | 1991
Murray Glanzer; John K. Adams; Geoffrey J. Iverson
The mirror effect is a strong regularity in recognition memory: If there are two conditions, A and B, with A giving higher recognition accuracy, then old items in A are recognized as old better than old items in B, and also new items in A are recognized as new better than new items in B. The mirror effect is explained by attention/likelihood theory, which also makes several new, counterintuitive predictions. One is that any variable, such as forgetting, that affects recognition changes the responses to new as well as old stimuli. In terms of underlying distributions, forgetting produces concentering, the bilateral movement of distributions, both new (noise) and old (signal), toward a midpoint. Data from two forced-choice experiments are reported that support the prediction of concentering and other predictions drawn from the theory. It is argued that current theories of memory, which are strength theories, cannot handle these regularities.
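The mirror effect defined above can be illustrated with a generic equal-variance signal detection sketch (this is an illustration in signal-detection terms, not attention/likelihood theory itself; the distribution placements and the zero criterion are assumptions made for the example):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def recognition_rates(d_prime, criterion=0.0):
    """Equal-variance signal detection model of recognition:
    new items ~ N(-d'/2, 1), old items ~ N(+d'/2, 1); the subject
    responds "old" when the evidence exceeds the criterion."""
    hit = 1.0 - phi(criterion - d_prime / 2.0)          # old item called "old"
    correct_rejection = phi(criterion + d_prime / 2.0)  # new item called "new"
    return hit, correct_rejection

# condition A supports higher accuracy (larger d') than condition B
hit_a, cr_a = recognition_rates(2.0)
hit_b, cr_b = recognition_rates(1.0)
```

With the distributions placed symmetrically about the criterion, the easier condition A yields both more hits on old items and more correct rejections of new items than condition B, which is the mirror pattern the abstract describes.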
Psychonomic Bulletin & Review | 2009
Geoffrey J. Iverson; Michael D. Lee; Eric-Jan Wagenmakers
The probability of “replication,” p_rep, has been proposed as a means of identifying replicable and reliable effects in the psychological sciences. We conduct a basic test of p_rep that reveals that it misestimates the true probability of replication, especially for small effects. We show how these general problems with p_rep play out in practice, when it is applied to predict the replicability of observed effects over a series of experiments. Our results show that, over any plausible series of experiments, the true probabilities of replication will be very different from those predicted by p_rep. We discuss some basic problems in the formulation of p_rep that are responsible for its poor performance, and conclude that p_rep is not a useful statistic for psychological science.
Psychological Methods | 2010
Geoffrey J. Iverson; Eric-Jan Wagenmakers; Michael D. Lee
The purpose of the recently proposed p_rep statistic is to estimate the probability of concurrence, that is, the probability that a replicate experiment yields an effect of the same sign (Killeen, 2005a). The influential journal Psychological Science endorses p_rep and recommends its use over that of traditional methods. Here we show that p_rep overestimates the probability of concurrence. This is because p_rep was derived under the assumption that all effect sizes in the population are equally likely a priori. In many situations, however, it is advisable also to entertain a null hypothesis of no or approximately no effect. We show how the posterior probability of the null hypothesis is sensitive to a priori considerations and to the evidence provided by the data; and the higher the posterior probability of the null hypothesis, the smaller the probability of concurrence. When the null hypothesis and the alternative hypothesis are equally likely a priori, p_rep may overestimate the probability of concurrence by 30% and more. We conclude that p_rep provides an upper bound on the probability of concurrence, a bound that brings with it the danger of having researchers believe that their experimental effects are much more reliable than they actually are.
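The overestimation described in this abstract can be demonstrated in a small simulation. The sketch below assumes a simple z-test setting, in which the statistic takes the form Phi(|z|/sqrt(2)); the fifty-fifty mixture of exactly-null and N(0, 1) true effects is an illustrative assumption standing in for the abstract's "equally likely a priori" hypotheses:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_rep(z_obs):
    """Predicted probability that a replicate shows an effect of the
    same sign, derived under a flat prior on the true effect: in the
    z-test setting this is Phi(|z_obs| / sqrt(2))."""
    return phi(abs(z_obs) / math.sqrt(2.0))

random.seed(1)
n_sims = 20000
preps, agreements = [], []
for _ in range(n_sims):
    # half the true effects are exactly null, half drawn from N(0, 1)
    delta = 0.0 if random.random() < 0.5 else random.gauss(0.0, 1.0)
    z_obs = delta + random.gauss(0.0, 1.0)  # original experiment
    z_rep = delta + random.gauss(0.0, 1.0)  # replicate experiment
    preps.append(p_rep(z_obs))
    agreements.append(1.0 if (z_obs > 0) == (z_rep > 0) else 0.0)

predicted = sum(preps) / n_sims     # mean predicted concurrence
actual = sum(agreements) / n_sims   # observed concurrence rate
```

In this mixture the mean predicted concurrence comes out well above the rate at which replicates actually agree in sign, because whenever the null is true the genuine concurrence probability is only one half; this is the flat-prior overestimation the abstract analyzes.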
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994
Geoffrey J. Iverson; Michael D'Zmura
We examine conditions under which the spectral properties of lights and surfaces may be recovered by a trichromatic visual system that uses bilinear models. We derive criteria for perfect recovery, formulated in terms of invariant properties of model matrices, for situations in which either two or three lights are shone sequentially on a set of surfaces.