Andrew C. Parnell
University College Dublin
Publication
Featured research published by Andrew C. Parnell.
PLOS ONE | 2010
Andrew C. Parnell; Richard Inger; Stuart Bearhop; Andrew L. Jackson
Background: Stable isotope analysis is increasingly being utilised across broad areas of ecology and biology. Key to much of this work is the use of mixing models to estimate the proportion of sources contributing to a mixture, such as in diet estimation.

Methodology: By accurately reflecting natural variation and uncertainty to generate robust probability estimates of source proportions, the application of Bayesian methods to stable isotope mixing models promises to enable researchers to address an array of new questions, and to approach current questions with greater insight and honesty.

Conclusions: We outline a framework that builds on recently published Bayesian isotopic mixing models and present a new open-source R package, SIAR. The formulation in R will allow for continued and rapid development of this core model into an all-encompassing single analysis suite for stable isotope research.
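The mass-balance idea behind such mixing models can be sketched in a few lines: each isotope gives one linear equation in the source proportions, and the proportions sum to one. The sketch below is the fully determined special case (two isotopes, three sources, no uncertainty), not the Bayesian SIAR model itself, and all source and mixture values are hypothetical.

```python
import numpy as np

# Illustrative two-isotope (d13C, d15N), three-source mixing problem.
# Rows of `sources` are source mean signatures; values are hypothetical.
sources = np.array([[-24.0,  4.0],
                    [-18.0, 12.0],
                    [-12.0,  6.0]])
mixture = np.array([-19.8, 6.8])   # observed mixture signature

# Mass balance: sum_i p_i * source_i = mixture, with sum_i p_i = 1.
# Two isotope equations plus the unit-sum constraint give a 3x3 system.
A = np.vstack([sources.T, np.ones(3)])
b = np.concatenate([mixture, [1.0]])
p = np.linalg.solve(A, b)
print(p)  # -> [0.5, 0.3, 0.2], the proportional contribution of each source
```

SIAR generalises this: with more sources than equations the proportions are under-determined, which is exactly where a Bayesian treatment with a Dirichlet prior and residual error terms earns its keep.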
Journal of Animal Ecology | 2011
Andrew L. Jackson; Richard Inger; Andrew C. Parnell; Stuart Bearhop
1. The use of stable isotope data to infer characteristics of community structure and niche width of community members has become increasingly common. Although these developments have provided ecologists with new perspectives, their full impact has been hampered by an inability to statistically compare individual communities using descriptive metrics.

2. We solve these issues by reformulating the metrics in a Bayesian framework. This reformulation takes account of uncertainty in the sampled data and naturally incorporates error arising from the sampling process, propagating it through to the derived metrics.

3. Furthermore, we develop novel multivariate ellipse-based metrics as an alternative to the currently employed Convex Hull methods when applied to single community members. We show that unlike Convex Hulls, the ellipses are unbiased with respect to sample size, and their estimation via Bayesian inference allows robust comparison to be made among data sets comprising different sample sizes.

4. These new metrics, which we call SIBER (Stable Isotope Bayesian Ellipses in R), open up more avenues for direct comparison of isotopic niches across communities. The computational code to calculate the new metrics is implemented in the free-to-download package Stable Isotope Analysis for the R statistical environment.
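For a single group, the ellipse-based metric reduces to the area of the standard ellipse fitted to the bivariate isotope data, which is pi times the square root of the determinant of the covariance matrix. A minimal point-estimate sketch on simulated data (not SIBER's Bayesian estimator; all values are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated bivariate isotope data (d13C, d15N) for one group
true_cov = np.array([[1.0, 0.3],
                     [0.3, 0.5]])
data = rng.multivariate_normal([-18.0, 10.0], true_cov, size=500)

# Standard ellipse area: pi * sqrt(det(covariance)), i.e. pi times the
# product of the ellipse semi-axes (square roots of the eigenvalues)
cov = np.cov(data.T)
sea = np.pi * np.sqrt(np.linalg.det(cov))
```

The true area here is pi * sqrt(0.41), roughly 2.0. SIBER's contribution is to place this quantity in a Bayesian framework so that groups with different sample sizes can be compared with honest uncertainty.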
Ecology Letters | 2009
Andrew L. Jackson; Richard Inger; Stuart Bearhop; Andrew C. Parnell
The application of Bayesian methods to stable isotopic mixing problems, including inference of diet, has the potential to revolutionise ecological research. Using simulated data, we show that a recently published model, MixSIR, fails to correctly identify the true underlying dietary proportions more than 50% of the time, and fails with increasing frequency as additional unquantified error is added. While the source of the fundamental failure remains elusive, mitigating solutions are suggested for dealing with additional unquantified variation. Moreover, MixSIR uses a formulation for a prior distribution that results in an opaque and unintuitive covariance structure.
Geology | 2009
Andrew C. Kemp; Benjamin P. Horton; Stephen J. Culver; D. Reide Corbett; Orson van de Plassche; W. Roland Gehrels; Bruce C. Douglas; Andrew C. Parnell
We provide records of relative sea level since A.D. 1500 from two salt marshes in North Carolina to complement existing tide-gauge records and to determine when recent rates of accelerated sea-level rise commenced. Reconstructions were developed using foraminifera-based transfer functions and composite chronologies, which were validated against regional twentieth century tide-gauge records. The measured rate of relative sea-level rise in North Carolina during the twentieth century was 3.0–3.3 mm/a, consisting of a background rate of ~1 mm/a, plus an abrupt increase of 2.2 mm/a, which began between A.D. 1879 and 1915. This acceleration is broadly synchronous with other studies from the Atlantic coast. The magnitude of the acceleration at both sites is larger than at sites farther north along the U.S. and Canadian Atlantic coast and may be indicative of a latitudinal trend.
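The inferred structure, a roughly constant background rate with an abrupt rate increase at an estimated date, can be illustrated with a toy broken-stick fit on synthetic data. This is a simple grid search for pedagogy only, not the transfer-function and chronology methodology of the paper; all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1800, 2001)

# Synthetic relative sea level (mm): ~1 mm/yr background rate plus an
# extra 2.2 mm/yr after a breakpoint in 1890, with observation noise
true_break = 1890
rsl = (1.0 * (years - 1800)
       + 2.2 * np.clip(years - true_break, 0, None)
       + rng.normal(0.0, 5.0, years.size))

# Grid search over candidate breakpoints, least-squares broken-stick fit
best_sse, best_break = np.inf, None
for b in range(1820, 1981):
    X = np.column_stack([np.ones(years.size),
                         (years - 1800).astype(float),
                         np.clip(years - b, 0, None).astype(float)])
    beta, *_ = np.linalg.lstsq(X, rsl, rcond=None)
    sse = float(((rsl - X @ beta) ** 2).sum())
    if sse < best_sse:
        best_sse, best_break = sse, b
```

With a clear 2.2 mm/yr slope change, the recovered breakpoint lands close to 1890, mirroring the A.D. 1879-1915 onset window the reconstruction identifies.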
Proteomics | 2014
Belinda Hernández; Andrew C. Parnell; Stephen R. Pennington
Proteomic biomarker discovery has led to the identification of numerous potential candidates for disease diagnosis, prognosis, and prediction of response to therapy. However, very few of these identified candidate biomarkers reach clinical validation and go on to be routinely used in clinical practice. One particular issue with biomarker discovery is the identification of significantly changing proteins in the initial discovery experiment that do not validate when subsequently tested on separate patient sample cohorts. Here, we seek to highlight some of the statistical challenges surrounding the analysis of LC-MS proteomic data for biomarker candidate discovery. We show that common statistical algorithms run on data with low sample sizes can overfit and yield misleading misclassification rates and AUC values. A common solution to this problem is to prefilter variables (via, e.g., ANOVA, and/or correction methods such as Bonferroni or false discovery rate adjustment) to give a smaller dataset and reduce the size of the apparent statistical challenge. However, we show that this exacerbates the problem, yielding even higher performance metrics while reducing the predictive accuracy of the biomarker panel. To illustrate some of these limitations, we have run simulation analyses with known biomarkers. For our chosen algorithm (random forests), we show that the above problems are substantially reduced if a sufficient number of samples are analyzed and the data are not prefiltered. Our view is that LC-MS proteomic biomarker discovery data should be analyzed without prefiltering and that increasing the sample size in biomarker discovery experiments should be a very high priority.
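The prefiltering pitfall can be demonstrated on pure noise: select features by their correlation with the labels, then evaluate on the same samples, and the apparent accuracy looks impressive even though no real biomarker exists. A minimal sketch (a simple threshold rule stands in for the random forests used in the paper; all data are simulated noise):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 20, 1000
X = rng.normal(size=(n, p))            # pure-noise "protein intensities"
y = np.array([0] * 10 + [1] * 10)      # two groups, no real difference

# Improper prefilter: pick the 10 features most correlated with the labels,
# using the SAME samples that are later used for evaluation
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
top = np.argsort(-np.abs(corr))[:10]

# Composite "biomarker score" aligned with the selected correlation signs
score = (X[:, top] * np.sign(corr[top])).mean(axis=1)

# Apparent accuracy of a median-threshold classifier on the same data:
# far above the 0.5 expected for noise, purely through selection bias
acc = ((score > np.median(score)).astype(int) == y).mean()
```

The selected panel would collapse to chance performance on an independent cohort, which is precisely the discovery-to-validation failure mode the abstract describes; honest evaluation requires the filtering to happen inside the cross-validation loop, or no prefiltering at all.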
Science | 2017
Jeremy S. Hoffman; Peter U. Clark; Andrew C. Parnell; Feng He
Sea surface temperatures of the past

Understanding how warm intervals affected sea level in the past is vital for projecting how human activities will affect it in the future. Hoffman et al. compiled estimates of sea surface temperatures during the last interglacial period, which lasted from about 129,000 to 116,000 years ago. The global mean annual values were ∼0.5°C warmer than they were 150 years ago and indistinguishable from the 1995–2014 mean. This is a sobering point, because sea levels during the last interglacial period were 6 to 9 m higher than they are now. Science, this issue p. 276

Sea surface temperatures during the last interglaciation were like those of today. The last interglaciation (LIG, 129 to 116 thousand years ago) was the most recent time in Earth’s history when global mean sea level was substantially higher than it is at present. However, reconstructions of LIG global temperature remain uncertain, with estimates ranging from no significant difference to nearly 2°C warmer than present-day temperatures. Here we use a network of sea-surface temperature (SST) records to reconstruct spatiotemporal variability in regional and global SSTs during the LIG. Our results indicate that peak LIG global mean annual SSTs were 0.5 ± 0.3°C warmer than the climatological mean from 1870 to 1889 and indistinguishable from the 1995 to 2014 mean. LIG warming in the extratropical latitudes occurred in response to boreal insolation and the bipolar seesaw, whereas tropical SSTs were slightly cooler than the 1870 to 1889 mean in response to reduced mean annual insolation.
PLOS ONE | 2013
Valentina Franco-Trecu; Massimiliano Drago; Federico G. Riet-Sapriza; Andrew C. Parnell; Rosina Frau; Pablo Inchausti
There is no “universal method” for determining the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, Bayesian stable isotope mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs, with and without informative priors, in studying predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs with uninformative priors (SIMM-UP), and assess whether informative priors derived from the scat analysis (SIMM-IP) improve the estimated diet composition relative to SIMM-UP. According to the SIMM-UP, pelagic species dominated the fur seal’s diet, while no prey clearly dominated the sea lion’s. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced the SIMMs’ estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing bias in the estimates. If prey isotope data allow the contributions of individual prey to be discriminated, informative priors should lead to more precise yet unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are interpreted critically because of their biases, care must be exercised when interpreting diet compositions obtained by SIMM-IP.
The best approach to obtaining a near-complete view of a predator’s diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species, so as to reliably ascertain and weight the information yielded by each method.
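The effect of an informative prior can be illustrated with the conjugate Dirichlet-multinomial analogue of estimating diet proportions. This is a deliberate simplification of a SIMM (no isotope likelihood at all), and every count below is hypothetical.

```python
import numpy as np

# Hypothetical counts of three prey taxa recovered from scats
counts = np.array([30, 15, 5])

# Uninformative (flat) vs informative Dirichlet prior parameters;
# the informative prior encodes a prior belief that prey 1 dominates
prior_up = np.ones(3)
prior_ip = np.array([14.0, 4.0, 2.0])

# Posterior mean diet proportions under each prior (conjugate update)
post_up = (prior_up + counts) / (prior_up + counts).sum()
post_ip = (prior_ip + counts) / (prior_ip + counts).sum()
```

The informative prior pulls the posterior toward the prior composition and concentrates it (larger total Dirichlet mass means a narrower posterior), which is exactly the precision-versus-bias trade-off the abstract highlights: a prior in conflict with the isotope data would drag the estimate away from the truth while making it look more certain.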
Geology | 2014
David De Vleeschouwer; Andrew C. Parnell
Dealing with uncertainties is inherent to the scientific process. In the process of building geologic time scales, the reported uncertainties are at least as important as the estimates of the numerical ages. Currently all time scales for the Devonian are based on conventional age-depth models, constructed by linear or cubic interpolation between different dated positions. Unfortunately, such models tend to produce overoptimistic confidence intervals. In this study we apply Bayesian statistics to the Devonian time scale to better incorporate stratigraphic and radioisotopic uncertainty. This approach yields a Devonian time scale characterized by increasing uncertainty with growing stratigraphic distance from a radioisotopically dated sample. This feature is absent from The Geologic Time Scale 2012; therefore, that time scale is overoptimistic. We further constrain the obtained time scale by incorporating astrochronological duration estimates for the Givetian and Frasnian stages. The combination of radioisotopic dating and astrochronology results in a reduction of the uncertainty on the numerical age of the stage boundaries concerned by several million years. For example, we estimate the age of the Frasnian-Famennian boundary at 373.9 ± 1.4 Ma.
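The key qualitative feature, uncertainty that grows with stratigraphic distance from dated levels, can be sketched with a Monte Carlo age-depth model: the dated tie points carry radiometric error, and a Brownian-bridge term adds stratigraphic wiggle between them. This toy ignores the monotonicity constraint real Bayesian age-depth models enforce, and all ages and error scales are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_depth = 2000, 101
t = np.linspace(0.0, 1.0, n_depth)   # normalized depth between dated levels

# Radioisotopically dated tie points (Ma) with 1-sigma errors (illustrative)
age_top = rng.normal(372.0, 0.5, size=(n_sim, 1))
age_base = rng.normal(382.0, 0.5, size=(n_sim, 1))

# Brownian bridge: pinned to zero at both tie points, so its variance
# is largest mid-section, farthest from the dated samples
dt = t[1] - t[0]
steps = rng.normal(0.0, np.sqrt(dt), size=(n_sim, n_depth - 1))
W = np.concatenate([np.zeros((n_sim, 1)), np.cumsum(steps, axis=1)], axis=1)
bridge = W - t * W[:, -1:]

sigma = 2.0  # stratigraphic uncertainty scale (Myr), illustrative
ages = age_top + (age_base - age_top) * t + sigma * bridge

width = ages.std(axis=0)  # age uncertainty at each depth
```

Linear interpolation between dated levels has zero extra variance everywhere, which is why conventional age-depth models look overoptimistic; here `width` is smallest at the tie points and largest in between.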
Nature Communications | 2017
Charles M. Rubin; Benjamin P. Horton; Kerry Sieh; Jessica E. Pilarczyk; Patrick Daly; Nazli Ismail; Andrew C. Parnell
The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here we present an extraordinary 7,400-year stratigraphic sequence of prehistoric tsunami deposits from a coastal cave in Aceh, Indonesia. This record demonstrates that at least 11 prehistoric tsunamis struck the Aceh coast between 7,400 and 2,900 years ago. The average time period between tsunamis is about 450 years, with intervals ranging from a long dormant period of over 2,000 years to multiple tsunamis within the span of a century. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami.
The Annals of Applied Statistics | 2015
Niamh Cahill; Andrew C. Kemp; Benjamin P. Horton; Andrew C. Parnell
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The input data to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. These data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. The model we propose places a Gaussian process prior on the rate of sea-level change, which is then integrated and set in an errors-in-variables framework to take account of age uncertainty. The resulting model captures the continuous and dynamic evolution of sea-level change with full consideration of all sources of uncertainty. We demonstrate the performance of our model using two real (and previously published) example data sets. The global tide-gauge data set indicates that sea-level rise increased from a rate with a posterior mean of 1.13 mm/yr in 1880 AD (0.89 to 1.28 mm/yr 95% credible interval for the posterior mean) to a posterior mean rate of 1.92 mm/yr in 2009 AD (1.84 to 2.03 mm/yr 95% credible interval for the posterior mean). The proxy reconstruction from North Carolina (USA), after correction for land-level change, shows the 2000 AD rate of rise to have a posterior mean of 2.44 mm/yr (1.91 to 3.01 mm/yr 95% credible interval). This is unprecedented in at least the last 2000 years.
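The core construction, a Gaussian process prior on the rate that is integrated to give sea level, can be sketched by drawing one rate realization from a squared-exponential GP and integrating it with the trapezoid rule. Kernel hyperparameters here are illustrative, and the errors-in-variables machinery for age uncertainty is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(1880.0, 2009.0, 130)   # annual grid over the tide-gauge era

# Squared-exponential covariance for the rate process (mm/yr);
# length-scale and amplitude are illustrative, not fitted values
ell, tau = 40.0, 0.5
K = tau ** 2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)
rate = 1.5 + rng.multivariate_normal(np.zeros(t.size), K + 1e-8 * np.eye(t.size))

# Integrate the rate (trapezoid rule) to obtain relative sea level (mm),
# anchored at zero in 1880; differentiating recovers the rate exactly
dt = np.diff(t)
sea_level = np.concatenate([[0.0],
                            np.cumsum(0.5 * (rate[1:] + rate[:-1]) * dt)])
```

Modelling the rate (rather than the level) and integrating is what lets the paper report credible intervals directly on rates of change, the quantity of scientific interest, while fitting the level observations.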