Publications


Featured research published by Fiona Fidler.


Journal of Cell Biology | 2007

Error bars in experimental biology

Geoff Cumming; Fiona Fidler; David L. Vaux

Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
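
A minimal sketch of the point about different error-bar types, using invented example data and Python/matplotlib; the data and code are illustrative assumptions, not material from the article:

```python
# Sketch: the same sample plotted with three kinds of error bars
# (invented example values; not data from the paper).
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=20)   # hypothetical measurements

n = sample.size
mean = sample.mean()
sd = sample.std(ddof=1)                    # standard deviation: spread of the data
sem = sd / np.sqrt(n)                      # standard error of the mean
ci95 = stats.t.ppf(0.975, df=n - 1) * sem  # half-width of the 95% confidence interval

labels = ["SD", "SEM", "95% CI"]
errors = [sd, sem, ci95]

fig, ax = plt.subplots()
ax.bar(labels, [mean] * 3, yerr=errors, capsize=6)
ax.set_ylabel("Measured value (arbitrary units)")
ax.set_title("Same data, three different error bars")
plt.show()
```

The figure makes the article's point directly: the same sample yields visibly different bars depending on which quantity is plotted, so the legend must say which one it is.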


Conservation Biology | 2012

Eliciting Expert Knowledge in Conservation Science

Tara G. Martin; Mark A. Burgman; Fiona Fidler; Petra M. Kuhnert; Samantha Low-Choy; Marissa F. McBride; Kerrie Mengersen

Expert knowledge is used widely in the science and practice of conservation because of the complexity of problems, relative lack of data, and the imminent nature of many conservation decisions. Expert knowledge is substantive information on a particular topic that is not widely known by others. An expert is someone who holds this knowledge and who is often deferred to in its interpretation. We refer to predictions by experts of what may happen in a particular context as expert judgments. In general, an expert-elicitation approach consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly to make decisions. This last step is known as encoding. Some of the considerations in eliciting expert knowledge include determining how to work with multiple experts and how to combine multiple judgments, minimizing bias in the elicited information, and verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgment and ensure uncertainty is captured accurately. We suggest that four aspects of an expert-elicitation exercise be examined to determine its comprehensiveness and effectiveness: study design and context, elicitation design, elicitation method, and elicitation output. Just as the reliability of empirical data depends on the rigor with which it was acquired, so too does that of expert knowledge.
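
As a hedged illustration of the encoding step described above, the sketch below converts a hypothetical expert's (lower, best, upper) judgment into a probability distribution using the beta-PERT convention; the distributional choice and all numbers are assumptions, not the authors' protocol:

```python
# Sketch of "encoding": turning an expert's (lower, best, upper) judgment into
# a probability distribution. Beta-PERT is one common choice, not necessarily
# the authors' method.
from scipy import stats

def encode_pert(lower, best, upper, lam=4.0):
    """Return a scipy beta distribution scaled to [lower, upper] (beta-PERT)."""
    span = upper - lower
    a = 1.0 + lam * (best - lower) / span
    b = 1.0 + lam * (upper - best) / span
    return stats.beta(a, b, loc=lower, scale=span)

# Hypothetical elicited judgment about a population proportion
dist = encode_pert(lower=0.10, best=0.25, upper=0.60)
print(dist.mean(), dist.interval(0.90))   # summaries usable in a model or decision
```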


Risk Analysis | 2010

Reducing overconfidence in the interval judgments of experts.

Andrew Speirs-Bridge; Fiona Fidler; Marissa F. McBride; Louisa Flander; Geoff Cumming; Mark A. Burgman

Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]).
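
The overconfidence measure used here is simply the assigned confidence level minus the observed hit rate; the sketch below illustrates the calculation with invented intervals and true values, not the study's data:

```python
# Sketch: overconfidence = assigned confidence - hit rate.
# Intervals and true values are invented for illustration only.
intervals = [(5, 15), (20, 60), (1, 3), (10, 12), (0, 100)]   # (lower, upper)
true_values = [12, 75, 2, 30, 40]
assigned_confidence = 0.80   # experts intended these as 80% intervals

hits = sum(lo <= truth <= hi for (lo, hi), truth in zip(intervals, true_values))
hit_rate = hits / len(intervals)
overconfidence = assigned_confidence - hit_rate

print(f"hit rate = {hit_rate:.1%}, overconfidence = {overconfidence:.1%}")
# With the paper's numbers: 0.80 - 0.681 = 0.119, i.e. 11.9% overconfidence.
```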


Nature Human Behaviour | 2018

Redefine Statistical Significance

Daniel J. Benjamin; James O. Berger; Magnus Johannesson; Brian A. Nosek; Eric-Jan Wagenmakers; Richard A. Berk; Kenneth A. Bollen; Björn Brembs; Lawrence D. Brown; Colin F. Camerer; David Cesarini; Christopher D. Chambers; Merlise A. Clyde; Thomas D. Cook; Paul De Boeck; Zoltan Dienes; Anna Dreber; Kenny Easwaran; Charles Efferson; Ernst Fehr; Fiona Fidler; Andy P. Field; Malcolm R. Forster; Edward I. George; Richard Gonzalez; Steven N. Goodman; Edwin J. Green; Donald P. Green; Anthony G. Greenwald; Jarrod D. Hadfield

We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
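
A minimal sketch of what the proposed threshold change means for individual results; the p-values are hypothetical, and the "suggestive" label follows the proposal's treatment of results falling between 0.005 and 0.05:

```python
# Sketch: how claims of new discoveries shift under the proposed 0.005 threshold.
# The p-values are hypothetical.
p_values = [0.049, 0.03, 0.012, 0.004, 0.0009]

for p in p_values:
    old = "significant" if p < 0.05 else "not significant"
    new = "new discovery" if p < 0.005 else "suggestive only"
    print(f"p = {p}: 0.05 rule -> {old}; proposed 0.005 rule -> {new}")
```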


PLOS ONE | 2011

Expert status and performance.

Mark A. Burgman; Marissa F. McBride; Raquel Ashton; Andrew Speirs-Bridge; Louisa Flander; Bonnie C. Wintle; Fiona Fidler; Libby Rumpff; Charles Twardy

Expert judgements are essential when time and resources are stretched or we face novel dilemmas requiring fast solutions. Good advice can save lives and large sums of money. Typically, experts are defined by their qualifications, track record and experience [1], [2]. The social expectation hypothesis argues that more highly regarded and more experienced experts will give better advice. We asked experts to predict how they will perform, and how their peers will perform, on sets of questions. The results indicate that the way experts regard each other is consistent, but unfortunately, ranks are a poor guide to actual performance. Expert advice will be more accurate if technical decisions routinely use broadly-defined expert groups, structured question protocols and feedback.
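
One hedged way to illustrate the gap between peer regard and actual performance is a rank correlation between peer-assigned rank and accuracy on test questions; all numbers below are invented, and this is not the study's analysis:

```python
# Sketch: Spearman rank correlation between peer-assigned expert rank and
# actual performance on test questions (all numbers invented for illustration).
from scipy import stats

peer_rank = [1, 2, 3, 4, 5, 6, 7, 8]          # 1 = most highly regarded
accuracy = [0.62, 0.71, 0.55, 0.74, 0.60, 0.69, 0.58, 0.73]  # proportion correct

rho, p = stats.spearmanr(peer_rank, accuracy)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
# A rho near zero would illustrate "ranks are a poor guide to performance".
```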


Psychological Science | 2007

Statistical Reform in Psychology: Is Anything Changing?

Geoff Cumming; Fiona Fidler; Martine Louise Leonard; Pavel Kalinowski; Ashton Dayne Christiansen; Anita Mary Kleinig; Jessica Lo; Natalie McMenamin; Sarah Wilson

We investigated whether statistical practices in psychology have been changing since 1998. Early in this period, the American Psychological Association (APA) Task Force on Statistical Inference (TFSI; Wilkinson & TFSI, 1999) advocated improved statistical practices, including reporting effect sizes and confidence intervals (CIs). The APA (2001) Publication Manual included expanded advice on effect sizes and stated that CIs are "in general, the best reporting strategy" (p. 22). Any changes since 1998 may, of course, have had many causes other than those two sources of advice.


Educational and Psychological Measurement | 2001

Computing Correct Confidence Intervals for ANOVA Fixed- and Random-Effects Effect Sizes.

Fiona Fidler; Bruce Thompson

Most textbooks explain how to compute confidence intervals for means, correlation coefficients, and other statistics using “central” test distributions (e.g., t, F) that are appropriate for such statistics. However, few textbooks explain how to use “noncentral” test distributions (e.g., noncentral t, noncentral F) to evaluate power or to compute confidence intervals for effect sizes. This article illustrates the computation of confidence intervals for effect sizes for some ANOVA applications; the use of intervals invoking noncentral distributions is made practical by newer software. Greater emphasis on both effect sizes and confidence intervals was recommended by the APA Task Force on Statistical Inference and is consistent with the editorial policies of the 17 journals that now explicitly require effect size reporting.
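
A hedged sketch of the noncentral-distribution approach the article describes, here for a confidence interval on eta-squared from a one-way fixed-effects ANOVA; the inversion routine and the example F value are illustrative assumptions, not the article's worked example:

```python
# Sketch: confidence interval for eta-squared by inverting the noncentral F CDF.
# A generic "CI by noncentrality inversion" sketch; values are hypothetical.
from scipy import stats, optimize

def eta_squared_ci(F, df1, df2, conf=0.95):
    """Return an approximate CI for eta-squared given an observed F statistic."""
    N = df1 + df2 + 1                      # total sample size in a one-way ANOVA
    alpha = 1.0 - conf

    def ncp_for(prob):
        # Noncentrality parameter at which the observed F sits at cumulative
        # probability `prob` of the noncentral F distribution.
        if stats.f.cdf(F, df1, df2) < prob:    # even ncp ~ 0 leaves F below prob
            return 0.0
        g = lambda ncp: stats.ncf.cdf(F, df1, df2, ncp) - prob
        return optimize.brentq(g, 1e-9, 10000.0)

    ncp_lo = ncp_for(1.0 - alpha / 2.0)    # smaller ncp -> lower limit
    ncp_hi = ncp_for(alpha / 2.0)          # larger ncp -> upper limit
    to_eta2 = lambda ncp: ncp / (ncp + N)  # convert noncentrality to eta-squared
    return to_eta2(ncp_lo), to_eta2(ncp_hi)

# Hypothetical one-way ANOVA result: F(2, 27) = 5.0
print(eta_squared_ci(F=5.0, df1=2, df2=27))
```

This is exactly the kind of computation the article argues newer software makes practical: the interval limits come from the noncentral F distribution rather than from a symmetric normal approximation.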


Educational and Psychological Measurement | 2002

The Fifth Edition of the APA Publication Manual: Why Its Statistics Recommendations Are So Controversial

Fiona Fidler

The fifth edition of the Publication Manual of the American Psychological Association (APA) draws on recommendations for improving statistical practices made by the APA Task Force on Statistical Inference (TFSI). The manual now acknowledges the controversy over null hypothesis significance testing (NHST) and includes both a stronger recommendation to report effect sizes and a new recommendation to report confidence intervals. Drawing on interviews with some critics and other interested parties, the present review identifies a number of deficiencies in the new manual. These include lack of follow-through with appropriate explanations and examples of how to report statistics that are now recommended. At this stage, the discipline would be well served by a response to these criticisms and a debate over needed modifications.


Organization Science | 2011

Perspective: Researchers Should Make Thoughtful Assessments Instead of Null-Hypothesis Significance Tests

Andreas Schwab; Eric Abrahamson; William H. Starbuck; Fiona Fidler

Null-hypothesis significance tests (NHSTs) have received much criticism, especially during the last two decades. Yet many behavioral and social scientists are unaware that NHSTs have drawn increasing criticism, so this essay summarizes key criticisms. The essay also recommends alternative ways of assessing research findings. Although these recommendations are not complex, they do involve ways of thinking that many behavioral and social scientists find novel. Instead of making NHSTs, researchers should adapt their research assessments to specific contexts and specific research goals, and then explain their rationales for selecting assessment indicators. Researchers should show the substantive importance of findings by reporting effect sizes and should acknowledge uncertainty by stating confidence intervals. By comparing data with naive hypotheses rather than with null hypotheses, researchers can challenge themselves to develop better theories. Parsimonious models are easier to understand, and they generalize more reliably. Robust statistical methods tolerate deviations from assumptions about samples.
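
A brief sketch of the recommended reporting style, an effect size with a confidence interval rather than a bare significance test; the data and the large-sample CI formula for Cohen's d are illustrative assumptions:

```python
# Sketch: report Cohen's d with a 95% CI instead of only an NHST verdict.
# The two samples are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
treatment = rng.normal(10.5, 2.0, size=30)
control = rng.normal(10.0, 2.0, size=30)

n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

# Approximate large-sample standard error of d (one common formula).
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
ci = (d - 1.96 * se_d, d + 1.96 * se_d)
print(f"Cohen's d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```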


Journal of Consulting and Clinical Psychology | 2005

Toward Improved Statistical Reporting in the Journal of Consulting and Clinical Psychology.

Fiona Fidler; Geoff Cumming; Neil Thomason; Dominique Pannuzzo; Julian Smith; Penny Fyffe; Holly Edmonds; Claire Harrington; Rachel Schmitt

Philip Kendall's (1997) editorial encouraged authors in the Journal of Consulting and Clinical Psychology (JCCP) to report effect sizes and clinical significance. The present authors assessed the influence of that editorial, and of other American Psychological Association initiatives to improve statistical practices, by examining 239 JCCP articles published from 1993 to 2001. For analysis of variance, reporting of means and standardized effect sizes increased over that period, but the rate of effect size reporting for other types of analyses surveyed remained low. Confidence interval reporting increased little, reaching 17% in 2001. By 2001, the percentage of articles considering clinical (not only statistical) significance was 40%, compared with 36% in 1996. In a follow-up survey of JCCP authors (N = 62), many expressed positive attitudes toward statistical reform. Substantially improving statistical practices may require stricter editorial policies and further guidance for authors on reporting and interpreting measures.

Collaboration


Fiona Fidler's top co-authors include:

Shinichi Nakagawa (University of New South Wales)

Yung En Chee (University of Melbourne)