Geoff Cumming
La Trobe University
Publications
Featured research published by Geoff Cumming.
Psychological Science | 2014
Geoff Cumming
We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
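For a concrete starting point, the estimation approach can be as simple as reporting a point estimate with its confidence interval instead of a p value. A minimal Python sketch, using hypothetical data rather than anything from the article:

```python
# A minimal sketch of estimation-based reporting: point estimate and
# 95% CI for a mean difference between two independent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(52.0, 10.0, size=30)  # hypothetical scores
control = rng.normal(48.0, 10.0, size=30)

diff = treatment.mean() - control.mean()
n1, n2 = len(treatment), len(control)
# Pooled variance and standard error for the equal-variance t approach.
sp2 = ((n1 - 1) * treatment.var(ddof=1)
       + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)

print(f"Mean difference = {diff:.2f}, "
      f"95% CI [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
```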
American Psychologist | 2005
Geoff Cumming; Sue Finch
Wider use in psychology of confidence intervals (CIs), especially as error bars in figures, is a desirable development. However, psychologists seldom use CIs and may not understand them well. The authors discuss the interpretation of figures with error bars and analyze the relationship between CIs and statistical significance testing. They propose 7 rules of eye to guide the inferential use of figures with error bars. These include general principles: Seek bars that relate directly to effects of interest, be sensitive to experimental design, and interpret the intervals. They also include guidelines for inferential interpretation of the overlap of CIs on independent group means. Wider use of interval estimation in psychology has the potential to improve research communication substantially.
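One of the rules of eye concerns the overlap of 95% CIs on two independent group means: overlap of no more than about half the average margin of error corresponds roughly to p ≤ .05. A small sketch of that check, with hypothetical summary statistics:

```python
# A sketch of the overlap rule of eye for 95% CIs on independent means.
from scipy import stats

def ci_95(mean, sd, n):
    """Return the 95% CI limits and margin of error for a group mean."""
    moe = stats.t.ppf(0.975, df=n - 1) * sd / n ** 0.5
    return mean - moe, mean + moe, moe

lo1, hi1, moe1 = ci_95(mean=50.0, sd=10.0, n=30)
lo2, hi2, moe2 = ci_95(mean=55.0, sd=10.0, n=30)

overlap = hi1 - lo2                 # positive value: the intervals overlap
half_avg_moe = (moe1 + moe2) / 4    # half of the average margin of error
print(f"Overlap = {overlap:.2f}, half the average MOE = {half_avg_moe:.2f}")
# Rule of eye: overlap of at most about half the average MOE suggests p <~ .05.
```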
Educational and Psychological Measurement | 2001
Geoff Cumming; Sue Finch
Reform of statistical practice in the social and behavioral sciences requires wider use of confidence intervals (CIs), effect size measures, and meta-analysis. The authors discuss four reasons for promoting use of CIs: They (a) are readily interpretable, (b) are linked to familiar statistical significance tests, (c) can encourage meta-analytic thinking, and (d) give information about precision. The authors discuss calculation of CIs for a basic standardized effect size measure, Cohen’s δ (also known as Cohen’s d), and contrast these with the familiar CIs for original score means. CIs for δ require use of noncentral t distributions, which the authors apply also to statistical power and simple meta-analysis of standardized effect sizes. They provide the ESCI graphical software, which runs under Microsoft Excel, to illustrate the discussion. Wider use of CIs for δ and other effect size measures should help promote highly desirable reform of statistical practice in the social sciences.
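Because CIs for standardized effect sizes require noncentral t distributions, the limits cannot come from a simple plus-or-minus formula; they are found by searching for noncentrality parameters. A Python sketch of that computation for two independent groups (values hypothetical; the paper itself provides the ESCI software for this):

```python
# A sketch of a noncentral-t 95% CI for Cohen's d, two independent groups.
from scipy import optimize, stats

n1, n2, d = 30, 30, 0.50            # hypothetical group sizes and observed d
factor = (n1 * n2 / (n1 + n2)) ** 0.5
t_obs = d * factor                  # observed t statistic implied by d
df = n1 + n2 - 2

# Find noncentrality parameters that place t_obs at the 97.5th and 2.5th
# percentiles of the noncentral t distribution.
ncp_lo = optimize.brentq(lambda ncp: stats.nct.cdf(t_obs, df, ncp) - 0.975,
                         -10, 10)
ncp_hi = optimize.brentq(lambda ncp: stats.nct.cdf(t_obs, df, ncp) - 0.025,
                         -10, 10)

# Convert the noncentrality limits back to the effect size scale.
print(f"d = {d:.2f}, 95% CI [{ncp_lo / factor:.2f}, {ncp_hi / factor:.2f}]")
```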
Journal of Cell Biology | 2007
Geoff Cumming; Fiona Fidler; David L. Vaux
Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
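The distinction the article draws, that standard deviation, standard error, and CI bars answer different questions, can be seen by computing all three from one sample. A short sketch with hypothetical data:

```python
# Three common error bar types computed from the same sample:
# they differ in width and, more importantly, in meaning.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(100.0, 15.0, size=20)     # hypothetical measurements

n = len(x)
sd = x.std(ddof=1)                       # descriptive: spread of the data
se = sd / np.sqrt(n)                     # inferential: precision of the mean
moe = stats.t.ppf(0.975, df=n - 1) * se  # half-width of the 95% CI

print(f"mean = {x.mean():.1f}")
print(f"SD bar: +/- {sd:.1f}   SE bar: +/- {se:.1f}   95% CI bar: +/- {moe:.1f}")
```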
Risk Analysis | 2010
Andrew Speirs-Bridge; Fiona Fidler; Marissa F. McBride; Louisa Flander; Geoff Cumming; Mark A. Burgman
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21, and Study 2, n = 24: epidemiologists and public health experts) or of marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]).
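The overconfidence measure used here is the gap between the assigned confidence level and the hit rate experts actually achieve. A toy sketch of that scoring, with invented intervals and outcomes:

```python
# Scoring elicited intervals for overconfidence: hit rate is the share of
# true values falling inside the experts' intervals; overconfidence is the
# stated confidence level minus the hit rate. All data here are invented.
import numpy as np

# Each row: (lower limit, upper limit) elicited at 80% confidence,
# paired with the value later observed.
intervals = np.array([(2.0, 9.0), (1.5, 4.0), (10.0, 25.0), (0.5, 3.0)])
observed = np.array([5.0, 6.5, 12.0, 2.2])

hits = (observed >= intervals[:, 0]) & (observed <= intervals[:, 1])
hit_rate = hits.mean()
overconfidence = 0.80 - hit_rate

print(f"Hit rate = {hit_rate:.0%}, overconfidence = {overconfidence:+.0%}")
```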
Clinical & Experimental Allergy | 1971
J. Morrison Smith; L. K. Harding; Geoff Cumming
A study of the prevalence of asthma in schoolchildren in Birmingham, first carried out in 1956–57, was repeated in 1968–69. There was an increase in the prevalence of definitely diagnosed asthma from 1.8% to 2.3%, not including an even higher proportion of children (3.2%) with wheezing. A considerably higher prevalence in boys than in girls was again found, both for definite asthma and for wheezing, but the tendency to recovery in boys with definite asthma was slight, whereas there was a marked recovery in cases of wheezing, which almost certainly represented mild asthma.
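As an aside on the reported change, a 95% CI for the difference between the two prevalence estimates could be sketched as below; the abstract does not report the survey sample sizes, so the n values are placeholders only:

```python
# A Wald 95% CI for the change in asthma prevalence (1.8% to 2.3%).
# Sample sizes are hypothetical placeholders, not from the paper.
import math

p1, n1 = 0.018, 10000   # 1956-57 survey (hypothetical n)
p2, n2 = 0.023, 10000   # 1968-69 survey (hypothetical n)

diff = p2 - p1
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"Prevalence change = {diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```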
Psychological Science | 2007
Geoff Cumming; Fiona Fidler; Martine Louise Leonard; Pavel Kalinowski; Ashton Dayne Christiansen; Anita Mary Kleinig; Jessica Lo; Natalie McMenamin; Sarah Wilson
We investigated whether statistical practices in psychology have been changing since 1998. Early in this period, the American Psychological Association (APA) Task Force on Statistical Inference (TFSI; Wilkinson & TFSI, 1999) advocated improved statistical practices, including reporting effect sizes and confidence intervals (CIs). The APA (2001) Publication Manual included expanded advice on effect sizes and stated that CIs are "in general, the best reporting strategy" (p. 22). Any changes since 1998 may, of course, have had many causes other than those two sources of advice.
Psychological Methods | 2006
Geoff Cumming; Robert Maillardet
Confidence intervals (CIs) give information about replication, but many researchers have misconceptions about this information. One problem is that the percentage of future replication means captured by a particular CI varies markedly, depending on where in relation to the population mean that CI falls. The authors investigated the distribution of this percentage for σ known and unknown, for various sample sizes, and for robust CIs. The distribution has strong negative skew: Most 95% CIs will capture around 90% or more of replication means, but some will capture a much lower proportion. On average, a 95% CI will include just 83.4% of future replication means. The authors present figures designed to assist understanding of what CIs say about replication, and they also extend the discussion to explain how p values give information about replication.
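The 83.4% average capture rate can be checked by simulation: draw many initial sample means and, for each, compute the probability that an independent replication mean falls inside its 95% CI. A sketch of that check (σ known here, for simplicity):

```python
# Simulating the capture percentage of replication means by one 95% CI.
# The average is about 83.4%, with strong negative skew across samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 30
se = sigma / np.sqrt(n)
moe = 1.96 * se                           # half-width of a z-based 95% CI

# For an initial mean m, the chance a replication mean lands in m +/- moe
# is Phi((m + moe - mu)/se) - Phi((m - moe - mu)/se).
first_means = rng.normal(mu, se, size=100_000)
capture = (stats.norm.cdf((first_means + moe - mu) / se)
           - stats.norm.cdf((first_means - moe - mu) / se))

print(f"Mean capture = {capture.mean():.1%}, "
      f"median = {np.median(capture):.1%}")
```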
Psychonomic Bulletin & Review | 2013
Robert B. Michael; Eryn J. Newman; Matti Vuorre; Geoff Cumming; Maryanne Garry
The persuasive power of brain images has captivated scholars in many disciplines. Like others, we too were intrigued by the finding that a brain image makes accompanying information more credible (McCabe & Castel in Cognition 107:343-352, 2008). But when our attempts to build on this effect failed, we instead ran a series of systematic replications of the original study—comprising 10 experiments and nearly 2,000 subjects. When we combined the original data with ours in a meta-analysis, we arrived at a more precise estimate of the effect, determining that a brain image exerted little to no influence. The persistent meme of the influential brain image should be viewed with a critical eye.
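The pooling step described here is standard inverse-variance meta-analysis. A fixed-effect sketch in Python, with hypothetical effect sizes and standard errors rather than the authors' data:

```python
# Fixed-effect inverse-variance meta-analysis: each study is weighted by
# the inverse of its sampling variance. Values below are hypothetical.
import numpy as np

effects = np.array([0.28, 0.05, -0.03, 0.10, 0.02])  # hypothetical d values
ses = np.array([0.12, 0.09, 0.08, 0.10, 0.07])       # hypothetical SEs

weights = 1 / ses ** 2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"Pooled effect = {pooled:.3f}, "
      f"95% CI [{pooled - 1.96 * pooled_se:.3f}, "
      f"{pooled + 1.96 * pooled_se:.3f}]")
```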
Journal of Consulting and Clinical Psychology | 2005
Fiona Fidler; Geoff Cumming; Neil Thomason; Dominique Pannuzzo; Julian Smith; Penny Fyffe; Holly Edmonds; Claire Harrington; Rachel Schmitt
Philip Kendall's (1997) editorial encouraged authors in the Journal of Consulting and Clinical Psychology (JCCP) to report effect sizes and clinical significance. The present authors assessed the influence of that editorial, and of other American Psychological Association initiatives to improve statistical practices, by examining 239 JCCP articles published from 1993 to 2001. For analysis of variance, reporting of means and standardized effect sizes increased over that period, but the rate of effect size reporting for other types of analyses surveyed remained low. Confidence interval reporting increased little, reaching 17% in 2001. By 2001, the percentage of articles considering clinical (not only statistical) significance was 40%, compared with 36% in 1996. In a follow-up survey of JCCP authors (N = 62), many expressed positive attitudes toward statistical reform. Substantially improving statistical practices may require stricter editorial policies and further guidance for authors on reporting and interpreting measures.