Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where David R. Fox is active.

Publication


Featured research published by David R. Fox.


Water Resources Research | 1999

A Bayesian approach to parameter estimation and pooling in nonlinear flood event models

Edward P. Campbell; David R. Fox; Bryson C. Bates

A Bayesian procedure is presented for parameter estimation in nonlinear flood event models. We derive a pooling diagnostic using Bayes factors to identify when it is reasonable to pool model parameters across storm events. A case study involving a quasi-distributed, nonlinear flood event model and five watersheds in the southwest of Western Australia is presented to illustrate the capabilities and utility of the procedure. The results indicate that Markov chain Monte Carlo methods based on the Metropolis-Hastings algorithm are useful tools for parameter estimation. We find that pooling is not justified for the model and data at hand. This suggests that current practices in nonlinear flood event modeling may be in need of urgent review.
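As a loose illustration of the random-walk Metropolis-Hastings sampling mentioned in the abstract, here is a minimal sketch that draws from the posterior of a single mean parameter given synthetic data. The data, model, and tuning settings are invented for illustration and are not taken from the paper, which uses a far richer nonlinear flood event model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy data (not from the paper): 50 noisy observations.
y = rng.normal(loc=2.0, scale=0.5, size=50)
sigma = 0.5  # observation noise treated as known for simplicity

def log_post(mu):
    # Flat prior on mu; Gaussian likelihood => log posterior up to a constant.
    return -0.5 * np.sum((y - mu) ** 2) / sigma**2

# Random-walk Metropolis-Hastings: propose a local move, accept with
# probability min(1, posterior ratio).
n_iter, step = 5000, 0.2
mu = 0.0
samples = []
for _ in range(n_iter):
    prop = mu + rng.normal(0.0, step)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop  # accept the proposal
    samples.append(mu)

# Discard burn-in; the retained draws approximate the posterior.
burned = np.array(samples[1000:])
print(round(float(burned.mean()), 2))
```

With a flat prior and known noise, the posterior mean should track the sample mean of the data, which gives a quick sanity check on the chain.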


Environmental Science and Pollution Research | 2014

Revisions to the derivation of the Australian and New Zealand guidelines for toxicants in fresh and marine waters.

M. St. J. Warne; Graeme E. Batley; O. Braga; John Chapman; David R. Fox; Christopher W. Hickey; J.L. Stauber; R. Van Dam

The Australian and New Zealand Guidelines for Fresh and Marine Water Quality are a key document in the Australian National Water Quality Management Strategy. These guidelines, released in 2000, are currently being reviewed and updated. The revision is being co-ordinated by the Australian Department of Sustainability, Environment, Water, Population and Communities, while technical matters are dealt with by a series of Working Groups. The revision will be evolutionary in nature, reflecting the latest scientific developments and a range of stakeholder desires. Key changes will be: increasing the types and sources of data that can be used; working collaboratively with industry to permit the use of commercial-in-confidence data; increasing the minimum data requirements; including a measure of the uncertainty of the trigger value; improving the software used to calculate trigger values; increasing the rigour of site-specific trigger values; improving the method for assessing the reliability of the trigger values; and providing guidance on measures of toxicity and toxicological endpoints that may, in the near future, be appropriate for trigger value derivation. These changes will markedly improve the number and quality of the trigger values that can be derived and will increase end-users' ability to understand and implement the guidelines in a scientifically rigorous manner.


Ecotoxicology and Environmental Safety | 2010

A Bayesian approach for determining the no effect concentration and hazardous concentration in ecotoxicology

David R. Fox

This paper describes a Bayesian modeling approach to the estimation of the no effect concentration (NEC) and the hazardous concentration (HC(x)) as an alternative to conventional methods based on NOECs - the no observed effect concentration. The advantage of the proposed method is that it combines a plausible model for dose-response data with prior information or belief about the model's parameters to generate posterior distributions for the parameters - one of those being the NEC. The posterior distribution can be used to derive point and interval estimates for the NEC, as well as providing uncertainty bounds when used in the development of a species sensitivity distribution (SSD). This latter feature is particularly attractive and overcomes a recognized deficiency of the NOEC-based approach. Examples using previously published data sets are provided that illustrate how the NEC/HC(x) estimation problem is re-cast and solved in this Bayesian framework.
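The idea of a posterior distribution for the NEC can be sketched numerically. The toy example below invents a threshold dose-response data set (constant response up to the NEC, exponential decline beyond it) and approximates the posterior of the NEC alone on a grid, with the other parameters held at their true values purely for brevity; this is an illustrative simplification of the paper's approach, which treats all parameters jointly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dose-response data (illustrative, not from the paper):
# flat response up to the NEC, exponential decay above it.
conc = np.linspace(0, 10, 40)
a_true, b_true, nec_true, noise = 1.0, 0.4, 3.0, 0.05
mean_resp = np.where(conc <= nec_true, a_true,
                     a_true * np.exp(-b_true * (conc - nec_true)))
y = conc * 0 + mean_resp + rng.normal(0, noise, conc.size)

# Grid approximation to the posterior over the NEC (flat prior on the grid;
# a, b treated as known here for brevity).
nec_grid = np.linspace(0.5, 8.0, 400)

def log_lik(nec):
    mu = np.where(conc <= nec, a_true,
                  a_true * np.exp(-b_true * (conc - nec)))
    return -0.5 * np.sum((y - mu) ** 2) / noise**2

ll = np.array([log_lik(n) for n in nec_grid])
post = np.exp(ll - ll.max())
post /= post.sum()

# Point estimate and 95% credible interval from the posterior.
post_mean = float(np.sum(nec_grid * post))
cdf = np.cumsum(post)
lo, hi = nec_grid[np.searchsorted(cdf, [0.025, 0.975])]
print(round(post_mean, 2), (round(float(lo), 2), round(float(hi), 2)))
```

The interval (lo, hi) is the kind of uncertainty bound the abstract describes feeding into a species sensitivity distribution, in place of a single NOEC value.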


Oecologia | 1995

Tests for density dependence revisited

David R. Fox; James Ridsdill-Smith

We have examined a number of statistical issues associated with methods for evaluating different tests of density dependence. The lack of definitive standards and benchmarks for conducting simulation studies makes it difficult to assess the performance of various tests. The biological researcher has a bewildering choice of statistical tests for testing density dependence, and the list is growing. The most recent additions have been based on computationally intensive methods such as permutation tests and bootstrapping. We believe the computational effort and time involved will preclude their widespread adoption until: (1) these methods have been fully explored under a wide range of conditions and shown to be demonstrably superior to other, simpler methods, and (2) general purpose software is made available for performing the calculations. We have advocated the use of Bulmer's (first) test as a de facto standard for comparative studies on the grounds of its simplicity, applicability, and satisfactory performance under a variety of conditions. We show that, in terms of power, Bulmer's test is robust to certain departures from normality although, as noted by other authors, it is affected by temporal trends in the data. We are not convinced that the reported differences in power between Bulmer's test and the randomisation test of Pollard et al. (1987) justify the adoption of the latter. Nor do we believe a compelling case has been established for the parametric bootstrap likelihood ratio test of Dennis and Taper (1994). Bulmer's test is essentially a test of the serial correlation in the (log) abundance data and is affected by the presence of autocorrelated errors. In such cases the test cannot distinguish between the autoregressive effect in the errors and a true density dependent effect in the time series data. We suspect other tests may be similarly affected, although this is an area for further research.
We have also noted that in the presence of autocorrelation, the type I error rates can be substantially different from the assumed level of significance, implying that in such cases the test is based on a faulty significance region. We have indicated both qualitatively and quantitatively how autoregressive error terms can affect the power of Bulmer's test, although we suggest that more work is required in this area. These apparent inadequacies of Bulmer's test should not be interpreted as a failure of the statistical procedure, since the test was not intended to be used with autocorrelated error terms.
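A Bulmer-style test can be sketched as a variance-ratio statistic compared against a simulated density-independence null. The statistic form below (variance of the log-abundance series over the summed squared first differences, with small values suggesting a series that keeps returning toward its mean) and the Gaussian random-walk null are common textbook choices, assumed here for illustration and not necessarily the exact procedure evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def bulmer_R(x):
    # Variance-ratio statistic on a (log) abundance series:
    # small R => strong mean reversion => evidence of density dependence.
    x = np.asarray(x, float)
    return np.sum((x - x.mean()) ** 2) / np.sum(np.diff(x) ** 2)

def bulmer_test(x, n_sim=2000):
    # Null of density independence modelled as a Gaussian random walk
    # (an illustrative choice). p-value = fraction of simulated R values
    # at or below the observed one.
    x = np.asarray(x, float)
    r_obs = bulmer_R(x)
    steps = rng.normal(0, 1, size=(n_sim, x.size - 1))
    sims = np.cumsum(np.hstack([np.zeros((n_sim, 1)), steps]), axis=1)
    r_null = np.array([bulmer_R(s) for s in sims])
    return r_obs, float(np.mean(r_null <= r_obs))

# A strongly mean-reverting AR(1) series (density-dependent by construction)
# should yield a small R and a small p-value against the random-walk null.
dd = [0.0]
for _ in range(49):
    dd.append(0.2 * dd[-1] + rng.normal(0, 1))
r, p = bulmer_test(np.array(dd))
print(round(r, 2), p)
```

The abstract's caveat about autocorrelated errors applies directly here: an AR effect in the errors and true density dependence both shrink R, so the statistic alone cannot separate them.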


Integrated Environmental Assessment and Management | 2012

What to do with NOECs/NOELs: prohibition or innovation?

David R. Fox; Elise Billoir; Sandrine Charles; Marie Laure Delignette-Muller; Christelle Lopes

This study attempted to be as realistic as possible when evaluating third-hand PAH residues resulting from 1 cigarette. Smokers were not asked to change their smoking habits, except to continuously hold the cigarette in 1 hand during the entire duration of the cigarette's burning. Hand size (that is, the adsorptive surface area), duration of smoking, and environmental conditions such as wind, temperature, and humidity, in addition to other factors, may potentially influence PAH concentration. We conducted our third-hand smoke studies outdoors under environmental conditions, and therefore hypothesize that a similar study conducted in the more stable conditions of an indoor environment may reveal higher levels of contaminant residues on surfaces and smokers' bodies. Moir et al. (2008) quantified PAH concentrations in second-hand tobacco smoke, defined as environmental tobacco smoke that is inhaled involuntarily or passively by someone who is not smoking. Using their study and our data set, we carried out a "back of the envelope" calculation to estimate the percentage of sidestream smoke (i.e., second-hand smoke) that becomes third-hand smoke. We conclude that the PAH inventory on 1 hand of a smoker represents 0.1% to 6% of that emitted from sidestream smoke. Third-hand PAH residues on a smoker's hand represent only a fraction of the total PAH reservoir for a smoker (compared to residues on all exposed skin and clothing). We have begun to quantify this load of chemicals as the first step in assessing the potential for smokers to act as vectors for impairment of indoor air quality. To completely capture the health risk posed by third-hand smoke, further studies from our research group and others need to address the off-gassing or desorption potential of these compounds and more fully evaluate the significance of third-hand smoke residues in impairing indoor air quality and/or increasing PAH exposure to subpopulations such as children.
A thorough ranking of the importance of this exposure route compared to other exposure modes (e.g., release of PAHs from cooking methods such as open fires, incense burning, indoor tobacco smoking, etc.) also remains to be quantified.


Human and Ecological Risk Assessment | 2006

Statistical Issues in Ecological Risk Assessment

David R. Fox

ABSTRACT Ecological risk assessment (ERA) is concerned with making decisions about the natural environment under uncertainty. Statistical methodology provides a natural framework for risk characterization and manipulation with many quantitative ERAs relying heavily on Neyman-Pearson hypothesis testing and other frequentist modes of inference. Bayesian statistical methods are becoming increasingly popular in ERA as they are seen to provide legitimate ways of incorporating subjective belief or expert opinion in the form of prior probability distributions. This article explores some of the concepts, strengths and weaknesses, and difficulties associated with both paradigms. The main points are illustrated with an example of setting a risk-based “trigger” level for uranium concentrations in the Magela Creek catchment of the Northern Territory of Australia.


Integrated Environmental Assessment and Management | 2012

Response to Landis and Chapman (2011).

David R. Fox

Congratulations to the authors for their concise summary of the flaws, frailties, and limitations of ANOVA-based toxicity metrics. Having made similar calls for a transition to model-based inference, I am naturally fully supportive of the Landis and Chapman "fatwa" on bankrupt statistical methods in ecotoxicology. This timely article should, and no doubt will, generate much discussion among ecotoxicologists. We should expect pockets of resistance to emerge. Strident supporters of current practice will no doubt appeal to the long history of "achievement" that has accompanied the use of NOECs, NOELs, and LOELs, whereas xenophobia may generate some "push-back" and inertia to change. Although there should be open discussion of the merits of various modeling and data analysis techniques, I hope we can move beyond some of the age-old debates such as the subjectiveness of choosing a prior distribution in a Bayesian analysis. Statisticians have devoted many years and countless journal pages to this and other modeling issues and, while not diminishing the importance of those discussions, I do not believe the practice of ecotoxicology will be well-served by resurrecting them. Another often-cited criticism of model-based approaches to the derivation of toxicity measures is the claimed arbitrariness of model selection. Although it is true that there is a plethora of candidate mathematical functions to represent a concentration-response (C-R) curve, since when did scientists find that problematic? Indeed, anyone who has carried out a bivariate regression of "y on x" will have been confronted with issues of nonlinearity, heteroscedasticity, nonnormal residuals, and lack-of-fit. We deal with these "problems" in a variety of ways - we either choose to ignore them or we do something about it by transforming the data and/or using a different functional form.
The art of modeling and inference is parsimony - to achieve a good representation of the data at hand using a model that is both plausible and simple. Although the model-fitting process can be usefully guided by so-called goodness-of-fit statistics such as error sums of squares, deviance, and Akaike's Information Criterion, the bottom line is that the modeler drives the process and makes many and varied decisions as to how to proceed. To paraphrase eminent statistician George Box, "all models are wrong, it's just that some are useful." Landis and Chapman (2011) have been quite blunt: "we call on the Editors-in-Chief of the 2 SETAC journals to ban statistical hypothesis tests for the reporting of exposure-response from their journals," and they were no doubt encouraged by the infamous case of a similar ban by the editor of the American Journal of Epidemiology. What was not acknowledged, however, was the ensuing uproar and a retreat from that position. In a recent article on the future of statisticians and statistical science, I noted that "the philosophical debates about null hypothesis significance testing (NHST) have been with us for many years and the attempts of a single misguided journal editor to deny the existence of a well-established mode of statistical inference were inevitably doomed from the beginning" (Fox 2010). So although I agree wholeheartedly with the present call to elevate the statistical rigor in ecotoxicology, I find myself in disagreement with calls for an outright ban on some modes of statistical analysis. I do not believe forcing one particular mode of thinking over another is the way to proceed. The critical issue as I see it is one of education rather than regulation.


Integrated Environmental Assessment and Management | 2010

Statistics and ecotoxicology: Shotgun marriage or enduring partnership?

David R. Fox

There are cultures in which people believe that some objects have magical powers; anthropologists call these objects fetishes. In our society, statistics are a sort of fetish. ... Statistics direct our concern; they show us what we ought to worry about and how much we ought to worry. In a sense, the social problem becomes the statistic and, because we treat statistics as true and incontrovertible, they achieve a kind of fetishlike, magical control over how we view social problems. We think of statistics as facts that we discover, not numbers we create. (Best 2001)


The American Statistician | 2010

Desired and Feared—Quo vadis or Quid agis?

David R. Fox

The recent article by Meng (2009a) continues a long tradition of articles in this journal dealing with the future of statistics and statisticians. For over 25 years many of these accounts painted varying shades of the same grim picture—that our continued existence is under threat; the challenges are great; respect has been in short supply; and our future is bleak. In this article I suggest we spend less time scanning the cross-disciplinary borders for new intrusions and rather than shoring up the fortress, we open up the borders. I share Meng’s upbeat enthusiasm for a bright future while recognizing much remains to be done to increase our relevance and effectiveness.


Computers and Electronics in Agriculture | 2016

A scoping study to assess the precision of an automated radiolocation animal tracking system

Don Menzies; Kym P. Patison; David R. Fox; Dave Swain

Highlights: the mean spatial precision for the ARATS ear tags was ±22 m; signal propagation effects and meteorological parameters affected spatial precision; the time between transmissions showed no effect on the spatial precision.
The spatial precision of a new automated radiolocation animal tracking system (ARATS) was studied in a small-scale (~5 ha) trial site. Twelve static tags, in a four-by-three grid, transmitted for 28 days. The 12 tags recorded 36,452 transmissions, with a mean of 3037 transmissions per tag. Each transmission included the tag number, date and time, and the calculated longitude and latitude. The mean location, and then the Euclidean distance from the mean location for each tag, were calculated in order to derive location precision per tag. The overall precision for the 12 tags was ±22 m with a SD of 49 m, with the most and least precise tags having precisions of ±8 m and ±51 m, respectively. As with other geolocation technologies, it would appear that structures in the environment cause signal propagation effects, including multipath and non-line-of-sight, which result in errors in the derived locations. The distance-from-the-mean data were log transformed (log10) and summarised in order to present all data over a 24-h period. There was a statistically significant decrease in precision between 11:00 and 17:00 h. These data were correlated with meteorological parameters for the period of the trial, again summarised over 24 h, with temperature, humidity, wind speed and pressure all having significant correlations with the precision data. The variance between individual tag transmissions was compared to see whether the distance between derived locations increased as the time between transmissions increased. The means for each tag showed the same variance as the mean precision values; that is, the more precise tags had lower means and the less precise tags had higher means.
However, no tags showed a trend towards an increase in the distance between locations as the time between transmissions increased. In order to assess whether there was any spatial variability in the derived locations, the variability in distance between tags was compared for all tag combinations. Tags that were proximal to each other had shorter distances between the mean derived locations and less variance, whereas tags farther apart had large distances and large variance in the mean derived locations. The ARATS assessed in this static evaluation showed a lower level of spatial precision than commercially available global positioning systems. However, the system could still have application when used to derive proximal associations between animals in low stocking-rate, extensive grazing situations such as are present in northern Australia.
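The per-tag precision computation described in the abstract (Euclidean distance of each fix from the tag's mean derived location) can be sketched as follows. The coordinates are invented values on a local metre grid; the actual trial logged longitude and latitude, which would first need projecting to a planar coordinate system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fixes for one static tag, in metres on a local grid
# (illustrative values; scale chosen arbitrarily, not from the trial).
fixes = rng.normal(loc=[100.0, 200.0], scale=20.0, size=(3000, 2))

# Precision per tag = mean Euclidean distance of each fix from the
# tag's mean derived location.
mean_loc = fixes.mean(axis=0)
dists = np.linalg.norm(fixes - mean_loc, axis=1)
precision = float(dists.mean())
print(round(precision, 1))  # mean radial error, in metres
```

Repeating this per tag and averaging across the 12 tags gives an overall figure comparable in spirit to the reported ±22 m.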

Collaboration


Dive into David R. Fox's collaboration.

Top Co-Authors

Kerrie Mengersen (Queensland University of Technology)
Murray Logan (Australian Institute of Marine Science)
Sandra Johnson (Queensland University of Technology)
Terry Walshe (University of Melbourne)
Wayne G. Landis (Western Washington University)
Ruth Beilin (University of Melbourne)