Publications


Featured research published by Ilaria Prosdocimi.


Water Resources Research | 2015

Detection and attribution of urbanization effect on flood extremes using nonstationary flood‐frequency models

Ilaria Prosdocimi; Thomas R. Kjeldsen; James Miller

This study investigates whether long‐term changes in observed series of high flows can be attributed to changes in land use via nonstationary flood‐frequency analyses. A point process characterization of threshold exceedances is used, which allows for direct inclusion of covariates in the model, as well as a nonstationary model for block maxima series. In particular, changes in annual, winter, and summer block maxima and peaks over threshold extracted from gauged instantaneous flow records in two hydrologically similar catchments located in proximity to one another in northern England are investigated. The study catchment is characterized by large increases in urbanization levels in recent decades, while the paired control catchment has remained undeveloped during the study period (1970–2010). To avoid the potential confounding effect of natural variability, a covariate which summarizes key climatological properties is included in the flood‐frequency model. A significant effect of the increasing urbanization levels on high flows is detected, in particular in the summer season. Point process models appear to be superior to block maxima models in their ability to detect the effect of the increase in urbanization levels on high flows.
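
To make the block-maxima side of this concrete, the sketch below fits a GEV distribution whose location parameter depends linearly on an urbanization covariate, by direct maximum likelihood. It is a minimal illustration of the idea, not the study's model or data; the covariate values, starting values, and synthetic series are invented.

```python
# Minimal sketch (not the authors' code): GEV for annual maxima with a
# location parameter that varies linearly with an urbanization covariate.
# Data, covariate values, and starting values are invented for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(42)
n = 41                                    # e.g. water years 1970-2010
urb = np.linspace(0.05, 0.35, n)          # hypothetical urban fraction
amax = genextreme.rvs(c=-0.1, loc=100 + 80 * urb, scale=20,
                      size=n, random_state=rng)

def nll(theta, x, z):
    """Negative log-likelihood; note scipy's shape c is minus the usual xi."""
    b0, b1, log_scale, c = theta
    return -genextreme.logpdf(x, c=c, loc=b0 + b1 * z,
                              scale=np.exp(log_scale)).sum()

fit = minimize(nll, x0=[100.0, 0.0, np.log(20.0), -0.1],
               args=(amax, urb), method="Nelder-Mead")
b0, b1, log_scale, c = fit.x
print(f"location = {b0:.1f} + {b1:.1f} * urb")
# A likelihood-ratio test against the stationary fit (b1 fixed at 0) would
# then indicate whether the urbanization effect is significant.
```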


Biometrics | 2012

Robust Estimation of Mean and Dispersion Functions in Extended Generalized Additive Models

Christophe Croux; Irène Gijbels; Ilaria Prosdocimi

Generalized linear models are a widely used method to obtain parametric estimates for the mean function. They have been further extended to allow the relationship between the mean function and the covariates to be more flexible via generalized additive models. However, the fixed variance structure can in many cases be too restrictive. The extended quasi-likelihood (EQL) framework allows for estimation of both the mean and the dispersion/variance as functions of covariates. As with other maximum likelihood methods, though, EQL estimates are not resistant to outliers: methods are needed to obtain robust estimates for both the mean and the dispersion function. In this article, we obtain functional estimates for the mean and the dispersion that are both robust and smooth. The performance of the proposed method is illustrated via a simulation study and some real data examples.
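
The robustness idea can be sketched as an iteratively reweighted fit in which observations with large standardized residuals receive Huber-type downweights. The toy linear-model version below illustrates only the principle; the paper's estimator targets smooth mean and dispersion functions, which this sketch does not attempt.

```python
# Toy illustration of robust mean estimation via iteratively reweighted
# least squares with Huber weights; not the paper's estimator.
import numpy as np

def huber_weights(r, k=1.345):
    """Weight 1 inside [-k, k], k/|r| outside: large residuals count less."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

def robust_fit(X, y, n_iter=20):
    w = np.ones_like(y)
    for _ in range(n_iter):
        # weighted least squares with effective weights w**2
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
        resid = y - X @ beta
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD
        w = np.sqrt(huber_weights(resid / scale))
    return beta

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2 + 3 * x + rng.normal(0, 0.2, size=100)
y[::17] += 5                                    # inject gross outliers
X = np.column_stack([np.ones_like(x), x])
print(robust_fit(X, y))                         # close to [2, 3]
```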


Water Resources Research | 2015

A bivariate extension of the Hosking and Wallis goodness-of-fit measure for regional distributions

Thomas R. Kjeldsen; Ilaria Prosdocimi

This study presents a bivariate extension of the goodness-of-fit measure for regional frequency distributions developed by Hosking and Wallis [1993] for use with the method of L-moments. Utilising the approximate joint normal distribution of the regional L-skewness and L-kurtosis, a graphical representation of the confidence region on the L-moment diagram can be constructed as an ellipsoid. Candidate distributions can then be accepted where the corresponding theoretical relationship between the L-skewness and L-kurtosis intersects the confidence region, and the chosen distribution would be the one that minimises the Mahalanobis distance measure. Based on a set of Monte Carlo simulations, it is demonstrated that the new bivariate measure generally selects the true population distribution more frequently than the original method. Results are presented to show that the new measure remains robust when applied to regions where the level of inter-site correlation is at a level found in real-world regions. Finally, the method is applied to two different case studies involving annual maximum peak flow data from Italian and British catchments to identify suitable regional frequency distributions.
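
A minimal sketch of the acceptance rule: compute the Mahalanobis distance between each candidate distribution's theoretical (L-skewness, L-kurtosis) point and the regional average, and compare it with a chi-squared critical value defining the confidence ellipse. The covariance matrix and the candidates' theoretical L-kurtosis values below are rough placeholders, not numbers from the paper.

```python
# Sketch of the bivariate acceptance rule. t_region is the regional average
# (L-skewness, L-kurtosis); cov and the theoretical points are placeholders.
import numpy as np
from scipy.stats import chi2

t_region = np.array([0.21, 0.17])
cov = np.array([[2e-4, 1e-4],
                [1e-4, 2e-4]])

def mahalanobis2(t_theory, t_obs, cov):
    d = np.asarray(t_theory) - t_obs
    return float(d @ np.linalg.solve(cov, d))

# theoretical (t3, t4) points at t3 = 0.21 (approximate values)
candidates = {"GEV": (0.21, 0.167), "GLO": (0.21, 0.204), "PE3": (0.21, 0.136)}
crit = chi2.ppf(0.90, df=2)            # 90% confidence ellipse
for name, point in candidates.items():
    d2 = mahalanobis2(point, t_region, cov)
    print(f"{name}: d2 = {d2:.2f} -> {'accept' if d2 <= crit else 'reject'}")
```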


Water Resources Research | 2017

Statistical distributions for monthly aggregations of precipitation and streamflow in drought indicator applications

Cecilia Svensson; Jamie Hannaford; Ilaria Prosdocimi

Drought indicators are used as triggers for action and so are the foundation of drought monitoring and early warning. The computation of drought indicators like the standardized precipitation index (SPI) and standardized streamflow index (SSI) requires a statistical probability distribution to be fitted to the observed data. Both precipitation and streamflow have a lower bound at zero, and their empirical distributions tend to have positive skewness. For deriving the SPI, the Gamma distribution has therefore often been a natural choice. The concept of the SSI is newer and there is no consensus regarding the choice of distribution. In the present study, twelve different probability distributions are fitted to streamflow and catchment average precipitation for four durations (1, 3, 6, and 12 months), for 121 catchments throughout the United Kingdom. The more flexible three- and four-parameter distributions generally do not have a lower bound at zero, and hence may attach some probability to values below zero. As a result, there is a censoring of the possible values of the calculated SPIs and SSIs. This can be avoided by using one of the bounded distributions, such as the reasonably flexible three-parameter Tweedie distribution, which has a lower bound (and potentially mass) at zero. The Tweedie distribution has only recently been applied to precipitation data, and only for a few sites. We find it fits both precipitation and streamflow data nearly as well as the best of the traditionally used three-parameter distributions, and should improve the accuracy of drought indices used for monitoring and early warning.
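
For orientation, the snippet below shows the standard SPI construction with a Gamma fit: cumulative probabilities from the fitted distribution are mapped to standard-normal quantiles. The Tweedie distribution discussed in the paper is not available in scipy.stats, so the familiar Gamma version is shown, on synthetic data.

```python
# Standard SPI construction with a Gamma fit on synthetic monthly totals.
# (The Tweedie distribution is not in scipy.stats, hence the Gamma here.)
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=480)   # 40 years of months

a, loc, scale = gamma.fit(precip, floc=0)   # keep the lower bound at zero
spi = norm.ppf(gamma.cdf(precip, a, loc=loc, scale=scale))
print(spi[:6].round(2))   # values around -1.5 or below often flag drought
```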


Journal of Flood Risk Management | 2018

Assessing the element of surprise of record‐breaking flood events

Thomas R. Kjeldsen; Ilaria Prosdocimi

The occurrence of record-breaking flood events continues to cause damage and disruption despite significant investments in flood defences, suggesting that these events are in some sense surprising. This study develops a new statistical test to help assess whether a flood event can be considered surprising or not. The test statistic is derived from annual maximum series (AMS) of extreme events, and Monte Carlo simulations were used to derive critical values for a range of significance levels based on a Generalized Logistic distribution. The method is tested on a national dataset of AMS of peak flow from the United Kingdom, and is found to correctly identify recent large events that have been identified elsewhere as causing a significant change in UK flood management policy. No temporal trend in the frequency or magnitude of surprising events was identified, and no link could be established between the occurrence of surprising events and large-scale drivers. Finally, the implications of the findings for future research into the most extreme flood events are discussed.
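
The Monte Carlo ingredient can be sketched as follows: simulate many annual-maximum series from a Generalized Logistic (GLO) distribution and tabulate the upper quantiles of a standardized largest observation. The specific standardization below is an assumption made for illustration; the paper's actual test statistic may differ.

```python
# Sketch of deriving Monte Carlo critical values from a GLO parent.
# The standardization of the maximum is an assumed, illustrative statistic.
import numpy as np

def glo_ppf(F, xi=0.0, alpha=1.0, kappa=-0.1):
    """GLO quantile function in Hosking's parameterization (kappa != 0)."""
    return xi + alpha / kappa * (1.0 - ((1.0 - F) / F) ** kappa)

rng = np.random.default_rng(7)
n_years, n_sim = 50, 100_000
samples = glo_ppf(rng.uniform(size=(n_sim, n_years)))
stat = (samples.max(axis=1) - np.median(samples, axis=1)) / samples.std(axis=1)
print("95% critical value:", np.quantile(stat, 0.95).round(2))
```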


Stochastic Environmental Research and Risk Assessment | 2018

German tanks and historical records: the estimation of the time coverage of ungauged extreme events

Ilaria Prosdocimi

The use of historical data can significantly reduce the uncertainty around estimates of the magnitude of rare events obtained with extreme value statistical models. For historical data to be included in the statistical analysis, a number of their properties, e.g. their number and magnitude, need to be known with a reasonable level of confidence. Another key aspect of the historical data which needs to be known is the coverage period of the historical information, i.e. the period of time over which it is assumed that all large events above a certain threshold are known. It may be the case, though, that information on the coverage period cannot easily be retrieved with sufficient confidence, and it therefore needs to be estimated. In this paper, methods to perform such estimation are introduced and evaluated. The statistical definition of the problem corresponds to estimating the size of a population for which only a few data points are available. This problem is generally referred to as the German tanks problem, which arose during the Second World War, when statistical estimates of the number of tanks available to the German army were obtained. Different estimators can be derived using different statistical estimation approaches, with the maximum spacing estimator being the minimum-variance unbiased estimator. The properties of three estimators are investigated by means of a simulation study, both for the simple estimation of the historical coverage and for the estimation of the extreme value statistical model. The maximum spacing estimator is confirmed to be a good approach to the estimation of the historical period coverage for practical use, and its application to a case study in Britain is presented.
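
A minimal sketch of the coverage-period estimate, assuming the k dated historical events are uniformly scattered over an unknown period of length theta: the maximum spacing estimator is then theta_hat = (1 + 1/k) * max(event ages), the classic German tanks formula for the continuous case. The event dates below are invented for illustration.

```python
# "German tanks" estimate of an unknown coverage period: assume k dated
# events fall uniformly on (0, theta); the maximum spacing estimator is
# (1 + 1/k) * max. Event ages below are invented for illustration.
import numpy as np

ages = np.array([12.0, 35.0, 58.0, 84.0, 97.0])   # years before gauged record
k = ages.size
theta_hat = (1.0 + 1.0 / k) * ages.max()
print(f"estimated coverage period: {theta_hat:.1f} years")   # 116.4
```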


Journal of Applied Statistics | 2011

Smooth estimation of mean and dispersion function in extended generalized additive models with application to Italian induced abortion data

Irène Gijbels; Ilaria Prosdocimi

We analyse data on the abortion rate (AR) in Italy, with a particular focus on behavioural differences between Italian regions. The aim is to reveal the relationship between the AR and several covariates that describe, in some way, the modernity of the region and the condition of women there. The data are mostly underdispersed, and the degree of underdispersion also varies with the covariates. To analyse these data, recent techniques for flexible modelling of a mean and dispersion function in a double exponential family framework are further developed in a generalized additive model context to deal with the multivariate set-up. The appealing unified framework even allows for semi-parametric modelling of the covariates without any additional effort. The methodology is illustrated on ozone-level data and leads to interesting findings in the Italian abortion data.


Natural Hazards and Earth System Sciences | 2017

Developing drought impact functions for drought risk management

Sophie Bachmair; Cecilia Svensson; Ilaria Prosdocimi; Jamie Hannaford; Kerstin Stahl

Drought management frameworks are dependent on methods for monitoring and prediction, but quantifying the hazard alone is arguably not sufficient; the negative consequences that may arise from a lack of precipitation must also be predicted if droughts are to be better managed. However, the link between drought intensity, expressed by some hydrometeorological indicator, and the occurrence of drought impacts has only recently begun to be addressed. One challenge is the paucity of information on ecological and socio-economic consequences of drought. This study tests the potential for developing empirical “drought impact functions” based on drought indicators (Standardized Precipitation Index and Standardized Precipitation Evaporation Index) as predictors, and text-based reports on drought impacts as a surrogate variable for drought damage. While there have been studies exploiting textual evidence of drought impacts, a systematic assessment of the effect of the impact quantification method and of different functional relationships for modeling drought impacts is missing. Using South-East England as a case study, we tested the potential of three different data-driven models for predicting drought impacts quantified from text-based reports: logistic regression, zero-altered negative binomial regression (“hurdle model”), and an ensemble regression tree approach (“random forest”). The logistic regression model can only be applied to a binary impact/no-impact time series, whereas the other two models can additionally predict the full counts of impact occurrence at each time point. While modeling binary data results in the lowest prediction uncertainty, modeling the full counts has the advantage of also providing a measure of impact severity, and the counts were found to be predictable within reasonable limits. However, there were noticeable differences in skill between modeling methodologies. For binary data, the logistic regression and the random forest model performed similarly well based on leave-one-out cross-validation. For count data, the random forest outperformed the hurdle model. The between-model differences occurred for total drought impacts as well as for two subsets of impact categories (water supply and freshwater ecosystem impacts). In addition, different ways of defining the impact counts were investigated, and were found to have little influence on the prediction skill. For all models we found a positive effect of including impact information of the preceding month as a predictor in addition to the hydrometeorological indicators. We conclude that, although having some limitations, text-based reports on drought impacts can be used to develop drought impact functions for drought risk management.
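
A hedged sketch of the binary-impact comparison: logistic regression versus a random forest classifier, scored by leave-one-out cross-validation as in the paper. The data are synthetic stand-ins for the indicator series and text-report impacts; the hurdle model is omitted as it has no direct scikit-learn equivalent.

```python
# Synthetic stand-in for the binary impact/no-impact comparison: logistic
# regression vs. random forest, leave-one-out cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n = 120                                          # e.g. 10 years of months
indicators = rng.normal(size=(n, 2))             # stand-ins for SPI, SPEI
p = 1 / (1 + np.exp(2.0 + 1.5 * indicators[:, 0]))  # impacts when SPI is low
impact = rng.binomial(1, p)                      # binary impact series

for model in (LogisticRegression(),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    acc = cross_val_score(model, indicators, impact, cv=LeaveOneOut()).mean()
    print(type(model).__name__, round(acc, 2))
```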


Hydrological Sciences Journal - Journal des Sciences Hydrologiques | 2017

On the use of a four-parameter kappa distribution in regional frequency analysis

Thomas R. Kjeldsen; Hyunjun Ahn; Ilaria Prosdocimi

New developments are presented enabling the use of a four-parameter kappa distribution with the widely used regional goodness-of-fit methods as part of an index flood regional frequency analysis based on the method of L-moments. The framework was successfully applied to 564 pooling groups and was found to significantly improve the probabilistic description of British flood flows compared to existing procedures. Based on results from an extensive data analysis, it is argued that the successful application of the kappa distribution renders the use of the traditional three-parameter distributions such as the generalized extreme value (GEV) and generalized logistic (GLO) distributions obsolete, except for large and relatively dry catchments. The importance of these findings is discussed in terms of the sensitivity of design floods to distribution choice.
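
As it happens, scipy.stats ships the four-parameter kappa distribution as kappa4 (shape parameters h and k), so a per-site version of the comparison can be sketched directly. Note that this toy example fits by maximum likelihood on synthetic data, whereas the paper's framework fits the distribution by regional L-moments.

```python
# Per-site sketch: fit kappa4 and GEV to a synthetic annual-maximum series
# and compare 100-year quantiles. The paper fits by regional L-moments;
# plain maximum likelihood is used here only for brevity.
import numpy as np
from scipy.stats import genextreme, kappa4

rng = np.random.default_rng(11)
amax = genextreme.rvs(c=-0.15, loc=120, scale=35, size=60, random_state=rng)

h, k, loc, scale = kappa4.fit(amax)
q100_kap = kappa4.ppf(0.99, h, k, loc=loc, scale=scale)
q100_gev = genextreme.ppf(0.99, *genextreme.fit(amax))
print(f"100-year flood: kappa {q100_kap:.0f}, GEV {q100_gev:.0f}")
```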


Communications in Statistics - Theory and Methods | 2012

Flexible Mean and Dispersion Function Estimation in Extended Generalized Additive Models

Irène Gijbels; Ilaria Prosdocimi

Real data may exhibit larger (or smaller) variability than assumed in exponential family modeling, the basis of generalized linear and additive models. To analyse such data, smooth estimation of the mean and the dispersion function has been introduced in extended generalized additive models using P-spline techniques. This methodology is explored further here by allowing some of the covariates to be modeled parametrically and others nonparametrically. The main contribution of this article is a simulation study investigating the finite-sample performance of the P-spline estimation technique in these extended models, including comparisons with a standard generalized additive modeling approach, as well as with a hierarchical modeling approach.
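
A minimal sketch of the P-spline machinery in the Eilers and Marx style: a rich B-spline basis combined with a second-order difference penalty on the coefficients, solved as penalized least squares. Only the mean function is shown; in the extended models the dispersion function is smoothed analogously, for example on squared residuals, which this sketch does not attempt.

```python
# P-spline sketch (Eilers & Marx): rich B-spline basis plus a second-order
# difference penalty, solved as penalized least squares. Mean function only.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=200)

deg = 3
knots = np.concatenate([[0.0] * deg, np.linspace(0, 1, 20), [1.0] * deg])
B = BSpline.design_matrix(x, knots, deg).toarray()
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)    # second-order differences
lam = 1.0                                       # smoothing parameter
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
print((B @ beta)[:5].round(2))                  # smoothed mean at first x's
```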

Collaboration


Dive into Ilaria Prosdocimi's collaboration.

Top Co-Authors

Lisa Stewart (George Mason University)

Irène Gijbels (Katholieke Universiteit Leuven)

Rob Lamb (Lancaster University)