Anne-Catherine Favre
Centre national de la recherche scientifique
Publications
Featured research published by Anne-Catherine Favre.
Computational Statistics & Data Analysis | 2006
David Huard; Guillaume Évin; Anne-Catherine Favre
In recent years, the use of copulas has grown extremely fast and with it, the need for a simple and reliable method to choose the right copula family. Existing methods pose numerous difficulties and none is entirely satisfactory. We propose a Bayesian method to select the most probable copula family among a given set. The copula parameters are treated as nuisance variables, and hence do not have to be estimated. Furthermore, by parameterizing the copula density in terms of Kendall's τ, the prior on the parameter is replaced by a prior on τ, which is conceptually more meaningful. The prior on τ, common to all families in the set of tested copulas, serves as a basis for their comparison. Using simulated data sets, we study the reliability of the method and observe the following: (1) the frequency of successful identification approaches 100% as the sample size increases, and (2) for weakly correlated variables, larger samples are necessary for reliable identification.
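A minimal sketch of the selection idea, not the authors' implementation: each candidate copula is reparameterized through Kendall's τ, its likelihood is averaged over a common prior on τ (uniform on (0, 1) here, assuming positive dependence), and the families are compared through the resulting prior-averaged likelihoods. The Clayton and Gaussian families, the grid integration, and all function names below are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def clayton_logpdf(u, v, tau):
    # Clayton parameter from Kendall's tau: theta = 2*tau/(1 - tau), theta > 0
    theta = 2.0 * tau / (1.0 - tau)
    return (np.log1p(theta)
            - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u**(-theta) + v**(-theta) - 1.0))

def gaussian_logpdf(u, v, tau):
    # Gaussian copula parameter from Kendall's tau: rho = sin(pi*tau/2)
    rho = np.sin(np.pi * tau / 2.0)
    x, y = norm.ppf(u), norm.ppf(v)
    return (-0.5 * np.log1p(-rho**2)
            - (rho**2 * (x**2 + y**2) - 2.0 * rho * x * y) / (2.0 * (1.0 - rho**2)))

def copula_posterior(u, v, families, n_grid=200):
    """Posterior probability of each copula family, with a uniform prior on
    Kendall's tau in (0, 1) and equal prior weight on the families."""
    taus = np.linspace(0.01, 0.99, n_grid)          # grid for numerical integration
    logmarg = {}
    for name, logpdf in families.items():
        loglik = np.array([np.sum(logpdf(u, v, t)) for t in taus])
        m = loglik.max()
        logmarg[name] = m + np.log(np.mean(np.exp(loglik - m)))  # prior-averaged likelihood (log)
    shift = max(logmarg.values())
    post = {name: np.exp(lm - shift) for name, lm in logmarg.items()}
    total = sum(post.values())
    return {name: p / total for name, p in post.items()}

# Example: pseudo-observations in (0, 1), e.g. ranks/(n+1) of two hydrological variables
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=300)
u, v = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
print(copula_posterior(u, v, {"clayton": clayton_logpdf, "gaussian": gaussian_logpdf}))
```

Because every family is judged against the same prior on τ, no family-specific parameter priors need to be elicited, which is the practical appeal of the τ parameterization described in the abstract.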
Computational Statistics & Data Analysis | 2006
Salaheddine El Adlouni; Anne-Catherine Favre; Bernard Bobée
One major challenge in modeling complex problems with Markov chain Monte Carlo (MCMC) methods is determining the length of chain needed to reach convergence. This paper is devoted to empirical parametric methods for testing stationarity. We compare the methods of Gelman and Rubin, Yu and Mykland, Raftery and Lewis, and Geweke, as well as Riemann sums and subsampling. These methods are tested on three examples: the simple case of generating a normal random variable, a bivariate mixture of normal models, and a practical case taken from hydrology, namely the shifting level model. Results show that no method works in every case; we therefore suggest a joint use of these techniques. The importance of carefully determining the burn-in period is also highlighted.
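As an illustration of one of the diagnostics compared here, the sketch below computes the Gelman and Rubin potential scale reduction factor from parallel chains started at overdispersed values; values close to 1 are consistent with, but do not prove, convergence. The random-walk Metropolis example, the burn-in fraction, and the chain lengths are illustrative assumptions.

```python
import numpy as np

def gelman_rubin(chains, burn_in=0.5):
    """Potential scale reduction factor R-hat for one scalar parameter.

    chains: array of shape (m, n) with m parallel chains of length n,
            started from overdispersed initial values.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chains = chains[:, int(burn_in * n):]           # discard the burn-in period
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)                 # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()           # within-chain variance
    var_plus = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(var_plus / W)

# Example: two chains targeting N(0, 1) via a crude random-walk Metropolis sampler
rng = np.random.default_rng(1)

def random_walk_chain(x0, n=5000, step=0.5):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        prop = x[i - 1] + step * rng.standard_normal()
        accept = np.log(rng.uniform()) < 0.5 * (x[i - 1]**2 - prop**2)
        x[i] = prop if accept else x[i - 1]
    return x

print(gelman_rubin([random_walk_chain(-10.0), random_walk_chain(10.0)]))
```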
Journal of Climate | 2014
Jérémy Chardon; Benoît Hingray; Anne-Catherine Favre; Philemon Autin; Joël Gailhard; Isabella Zin; Charles Obled
High-resolution weather scenarios generated for climate change impact studies from the output of climate models must be spatially consistent. Analog models (AMs) offer a high potential for the generation of such scenarios. For each prediction day, the scenario they provide is the weather observed for days in a historical archive that are analogous according to different predictors. When the same “analog date” is chosen for a prediction at several sites, spatial consistency is automatically satisfied. The optimal predictors, and consequently the optimal analog dates, are however expected to depend on the location for which the prediction is to be made. In the present work, the predictor (1000- and 500-hPa geopotential heights) domain of a benchmark AM is optimized for the probabilistic daily prediction of 8981 local precipitation “stations” over France. The corresponding 8981 locally domain-optimized AMs are used to explore the spatial transferability and similarity of the optimal analog dates obtai...
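A toy sketch of a single analog prediction step, under simple assumptions (hypothetical arrays, standardized Euclidean distance on the geopotential predictors, a fixed number of analog dates); the paper's AMs optimize the predictor domain separately for each station, which this sketch does not attempt.

```python
import numpy as np

def analog_precipitation(pred_day, archive_pred, archive_precip, n_analogs=25):
    """Probabilistic precipitation scenario for one prediction day.

    pred_day       : (p,) predictor vector (e.g. 1000- and 500-hPa geopotential
                     heights over the predictor domain) for the target day
    archive_pred   : (T, p) same predictors for the historical archive
    archive_precip : (T, s) observed precipitation at the s stations on those days
    """
    # standardize predictors so that all grid points contribute comparably
    mu, sd = archive_pred.mean(axis=0), archive_pred.std(axis=0)
    d = np.linalg.norm((archive_pred - mu) / sd - (pred_day - mu) / sd, axis=1)
    analog_dates = np.argsort(d)[:n_analogs]        # indices of the most similar days
    # the observed fields on the analog dates form the ensemble; using the same
    # analog dates at every station keeps the scenario spatially consistent
    return analog_dates, archive_precip[analog_dates]
```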
Water Resources Research | 2017
Manuela I. Brunner; Daniel Viviroli; Anna E. Sikorska; Olivier Vannier; Anne-Catherine Favre; Jan Seibert
Accurate estimates of flood peaks, corresponding volumes, and hydrographs are required to design safe and cost-effective hydraulic structures. In this paper, we propose a statistical approach for estimating the design variables peak and volume by constructing synthetic design hydrographs for different flood types such as flash floods, short-rain floods, long-rain floods, and rain-on-snow floods. Our approach relies on fitting probability density functions to observed flood hydrographs of a certain flood type and accounts for the dependence between peak discharge and flood volume. It makes use of the statistical information contained in the data and retains the process information of the flood type. The method was tested on data from 39 mesoscale catchments in Switzerland and provides catchment-specific and flood-type-specific synthetic design hydrographs for all of these catchments. We demonstrate that flood-type-specific synthetic design hydrographs are meaningful in flood-risk management when combined with knowledge of the seasonality and frequency of the different flood types.
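A rough sketch of the general construction, under stated assumptions: observed hydrographs of one flood type are first normalized by duration and volume, a probability density function (a lognormal here, purely as an example) is fitted to the mean normalized shape, and a design hydrograph is then rescaled from a chosen design peak and volume. The paper's treatment of the dependence between peak discharge and flood volume is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

def fit_shape(norm_hydrographs, s):
    """Fit a lognormal density to the mean normalized hydrograph shape.

    norm_hydrographs: (k, n) array of duration- and volume-normalized hydrographs
    of one flood type, evaluated on a common dimensionless time grid s in (0, 1].
    """
    mean_shape = norm_hydrographs.mean(axis=0)
    f = lambda x, sigma, scale: lognorm.pdf(x, sigma, loc=0, scale=scale)
    (sigma, scale), _ = curve_fit(f, s, mean_shape, p0=[0.5, 0.4])
    return lambda x: lognorm.pdf(x, sigma, loc=0, scale=scale)

def design_hydrograph(shape_pdf, design_peak, design_volume, n=200):
    """Rescale the fitted dimensionless shape to a given design peak and volume."""
    s = np.linspace(1e-3, 1, n)
    h = shape_pdf(s)                                 # unit-volume shape, peak = h.max()
    a = design_peak / h.max()                        # vertical scaling to the design peak
    duration = design_volume / a                     # horizontal scaling from the design volume
    return s * duration, a * h                       # time axis and discharge ordinates
```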
Water Resources Research | 2017
Eric Deal; Anne-Catherine Favre; Jean Braun
Rainfall is an important driver of erosion processes. The mean rainfall rate is often used to account for the erosive impact of a particular climate. However, for some erosion processes the erosion rate is a nonlinear function of rainfall, for example because of a threshold for erosion. When this is the case, it is important to take into account the full distribution of rainfall, not just its mean. In light of this, we have characterized the variability of daily rainfall over the Himalayan orogen using rainfall data sets with high spatial and temporal resolution. We find significant variations in rainfall variability across the Himalayan orogen, with variability increasing to the west and north of the orogen. By taking rainfall variability into account in addition to the mean rainfall rate, we find a pattern of rainfall that, from a geomorphological perspective, is significantly different from the mean rainfall rate alone. Using these findings, we argue that short-term rainfall variability may help explain observed short- and long-term erosion rates in the Himalayan orogen.
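One simple way to quantify daily rainfall variability beyond the mean, sketched below assuming a daily series per grid cell, is to fit a gamma distribution to wet-day amounts and report its shape parameter (a small shape indicates highly variable rainfall for the same mean) together with the wet-day frequency; this illustrates the general idea rather than the paper's exact parameterization.

```python
import numpy as np
from scipy.stats import gamma

def rainfall_statistics(daily, wet_threshold=1.0):
    """Summarize a daily rainfall series (mm/day) for one grid cell.

    Returns the mean rate, the wet-day frequency, and the gamma shape parameter
    of wet-day amounts; a small shape parameter indicates high day-to-day
    variability for the same mean rainfall.
    """
    daily = np.asarray(daily, dtype=float)
    wet = daily[daily >= wet_threshold]
    shape, _, scale = gamma.fit(wet, floc=0)         # fit wet-day amounts only
    return {
        "mean_rate": daily.mean(),                   # mm/day, what is usually mapped
        "wet_day_frequency": wet.size / daily.size,
        "gamma_shape": shape,                        # dimensionless variability measure
    }
```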
Water Resources Research | 2015
Patricia Tencaliec; Anne-Catherine Favre; Clémentine Prieur; Thibault Mathevet
River discharge is one of the most important quantities in hydrology, providing fundamental records for water resources management and climate change monitoring. Even short gaps in these records can lead to markedly different analysis outputs, so reconstructing the missing values of incomplete data sets is an important, and challenging, step for environmental modeling, engineering, and research applications. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when only daily streamflow records are available. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model exploits the linear relationship with neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series from the Durance River watershed showed that it yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
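A minimal sketch of a dynamic regression (regression with ARIMA errors) used for gap filling, assuming statsmodels is available, a single complete neighbor series, strictly positive discharge, and an illustrative ARIMA order; it is not the exact model configuration of the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fill_gaps(target: pd.Series, neighbor: pd.Series, order=(1, 0, 1)) -> pd.Series:
    """Reconstruct missing values of `target` (daily discharge with NaNs) from a
    complete, correlated `neighbor` series via regression with ARIMA errors.
    The log transform stabilizes the variance (assumes positive discharge)."""
    y = np.log(target)
    x = np.log(neighbor)
    # SARIMAX with an exogenous regressor fits y_t = beta * x_t + eta_t,
    # where eta_t follows the chosen ARIMA structure; missing y_t are handled
    # by the Kalman filter and predicted from x_t and the error model.
    model = SARIMAX(y, exog=x, order=order)
    res = model.fit(disp=False)
    fitted = np.exp(res.fittedvalues)                # back-transform to discharge
    return target.fillna(fitted)                     # only the gaps are replaced
```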
Water Resources Research | 2018
Manuela I. Brunner; Anna E. Sikorska; Reinhard Furrer; Anne-Catherine Favre
Design hydrographs described by peak discharge, hydrograph volume, and hydrograph shape are essential for engineering tasks involving storage. Such design hydrographs are inherently uncertain, as are classical flood estimates focusing on peak discharge only. Various sources of uncertainty contribute to the total uncertainty of synthetic design hydrographs for gauged and ungauged catchments. These comprise model uncertainties, sampling uncertainty, and uncertainty due to the choice of a regionalization method. A quantification of the uncertainties associated with flood estimates is essential for reliable decision making and allows for the identification of important uncertainty sources. We therefore propose an uncertainty assessment framework for the quantification of the uncertainty associated with synthetic design hydrographs. The framework is based on bootstrap simulations and consists of three levels of complexity. On the first level, we assess the uncertainty due to individual uncertainty sources. On the second level, we quantify the total uncertainty of design hydrographs for gauged catchments and the total uncertainty of regionalizing them to ungauged catchments, but independently from the construction uncertainty. On the third level, we assess the coupled uncertainty of synthetic design hydrographs in ungauged catchments, jointly considering construction and regionalization uncertainty. We find that the most important sources of uncertainty in design hydrograph construction are the record length and the choice of the flood sampling strategy. The total uncertainty of design hydrographs in ungauged catchments depends on the catchment properties and is not negligible in our case.
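The first level of the framework, uncertainty from a single source, can be illustrated with a nonparametric bootstrap of the flood record: resample the observed events with replacement, re-estimate the design quantity each time, and read the spread of the estimates as sampling uncertainty. The Gumbel fit and the 100-year return period below are illustrative choices, not the paper's full setup.

```python
import numpy as np
from scipy.stats import gumbel_r

def sampling_uncertainty(annual_peaks, return_period=100, n_boot=1000, seed=0):
    """Bootstrap interval for a design peak, reflecting sampling uncertainty only."""
    rng = np.random.default_rng(seed)
    peaks = np.asarray(annual_peaks, dtype=float)
    p = 1.0 - 1.0 / return_period                    # non-exceedance probability
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(peaks, size=peaks.size, replace=True)
        loc, scale = gumbel_r.fit(resample)          # refit the flood peak distribution
        estimates[b] = gumbel_r.ppf(p, loc=loc, scale=scale)
    # central estimate from the original sample and a 95% bootstrap interval
    loc, scale = gumbel_r.fit(peaks)
    return gumbel_r.ppf(p, loc=loc, scale=scale), np.percentile(estimates, [2.5, 97.5])
```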
Journal of Hydrometeorology | 2016
Jérémy Chardon; Anne-Catherine Favre; Benoît Hingray
The effects of spatial aggregation on the skill of downscaled precipitation predictions obtained over an 8 × 8 km² grid from circulation analogs for metropolitan France are explored. The Safran precipitation reanalysis and an analog approach are used to downscale the precipitation, with predictors taken from the 40-yr ECMWF Re-Analysis (ERA-40). Prediction skill, characterized by the continuous ranked probability score (CRPS), its skill score, and its decomposition, is generally found to increase continuously with spatial aggregation. The increase is also greater when the spatial correlation of precipitation is lower. This effect is shown with an empirical experiment carried out on a fully uncorrelated dataset, generated from a space-shake experiment in which the precipitation time series of each grid cell is randomly assigned to another grid cell. The underlying mechanisms of this effect are further highlighted with synthetic predictions simulated using a stochastic spatiotemporal generator...
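For an ensemble prediction x_1, ..., x_m and an observation y, the sample CRPS can be computed as mean|x_i - y| - 0.5 mean|x_i - x_j|. The sketch below applies this score after spatially averaging the predicted and observed fields over a block of grid cells; the array shapes and the equal-weight aggregation are illustrative assumptions.

```python
import numpy as np

def crps_ensemble(ensemble, obs):
    """Sample CRPS for one ensemble prediction (1-D array) and a scalar observation."""
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(ensemble - obs))
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]))
    return term1 - term2

def crps_aggregated(ensembles, observations):
    """Mean CRPS after spatial aggregation: average the predicted and observed
    fields over the grid cells first, then score the aggregated series.

    ensembles    : (T, m, c) array, m members over c grid cells for T days
    observations : (T, c) array of observed values
    """
    agg_pred = ensembles.mean(axis=2)                # (T, m) aggregated ensemble
    agg_obs = observations.mean(axis=1)              # (T,) aggregated observation
    return np.mean([crps_ensemble(agg_pred[t], agg_obs[t]) for t in range(agg_obs.size)])
```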
Computational Statistics & Data Analysis | 2004
Anne-Catherine Favre; Alina Matei; Yves Tillé
The Cox algorithm makes it possible to round a table of real numbers randomly and without bias while preserving the marginal totals. One possible use of this method is the random imputation of a qualitative variable in survey sampling. A modification of the Cox algorithm is proposed in order to take into account a weighting system, which is commonly used in survey sampling. This new method makes it possible to construct a controlled imputation method that reduces the imputation variance.
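The elementary building block, unbiased random rounding of a single cell, is easy to sketch: round up with probability equal to the fractional part, so the expected value of the rounded entry equals the original one. The Cox algorithm, and the weighted modification proposed here, additionally couples these roundings along rows and columns so that the marginal totals are preserved; that coupling is not reproduced in this sketch.

```python
import numpy as np

def random_round(values, rng=None):
    """Unbiased random rounding: each value is rounded up with probability equal
    to its fractional part, so E[rounded] = value. Applied cell by cell this does
    NOT preserve the marginal totals; the Cox algorithm adds that constraint."""
    rng = np.random.default_rng() if rng is None else rng
    values = np.asarray(values, dtype=float)
    floor = np.floor(values)
    frac = values - floor
    return floor + (rng.uniform(size=values.shape) < frac)

# Example: expectation check on a small table of real numbers
rng = np.random.default_rng(0)
table = np.array([[1.3, 2.7], [0.4, 3.6]])
rounded = np.mean([random_round(table, rng) for _ in range(10000)], axis=0)
print(rounded)   # close to the original table, illustrating unbiasedness
```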
Water Resources Research | 2018
Manuela I. Brunner; Daniel Viviroli; Reinhard Furrer; Jan Seibert; Anne-Catherine Favre
Flood hydrograph shapes contain valuable information on the flood-generation mechanisms of a catchment. To make good use of this information, we express flood hydrograph shapes as continuous functions using a functional data approach. We propose a functional-data clustering approach for flood hydrograph shapes to identify a set of representative hydrograph shapes at the catchment scale, and we use these catchment-specific sets of representative hydrographs to establish regions of catchments with similar flood reactivity at the regional scale. We applied this approach to flood samples of 163 medium-size Swiss catchments. The results indicate that three representative hydrograph shapes sufficiently describe the hydrograph shape variability within a catchment and can therefore be used as a proxy for the flood behavior of a catchment. These catchment-specific sets of three hydrographs were used to group the catchments into three reactivity regions of similar flood behavior. These regions were characterized not only by similar hydrograph shapes and reactivity but also by similar event magnitudes and triggering conditions. We envision these regions to be useful in regionalization studies and regional flood frequency analyses, and to allow for the construction of synthetic design hydrographs in ungauged catchments. The functional-data clustering approach that establishes these regions is very flexible and has the potential to be extended to other geographical regions or to be used in climate impact studies.
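A simplified sketch of such a pipeline, under stated assumptions: normalized hydrographs are represented by the coefficients of a shared B-spline basis (the functional data step), the coefficient vectors are clustered with k-means, and the observed hydrograph closest to each cluster centre serves as the representative shape. The basis size, the number of clusters, and the use of SciPy and scikit-learn are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline
from sklearn.cluster import KMeans

def spline_coefficients(norm_hydrographs, s, n_basis=8, k=3):
    """Represent each duration- and volume-normalized hydrograph (rows of
    norm_hydrographs, evaluated on the common grid s in [0, 1]) by the
    coefficients of a shared cubic B-spline basis."""
    interior = np.linspace(0, 1, n_basis - k + 1)[1:-1]
    knots = np.r_[np.zeros(k + 1), interior, np.ones(k + 1)]
    return np.array([make_lsq_spline(s, q, knots, k=k).c for q in norm_hydrographs])

def representative_shapes(norm_hydrographs, s, n_clusters=3):
    """Cluster the spline coefficients and return, for each cluster, the observed
    hydrograph closest to the cluster centre (a medoid-like representative)."""
    coef = spline_coefficients(norm_hydrographs, s)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coef)
    reps = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(coef[idx] - km.cluster_centers_[c], axis=1)
        reps.append(norm_hydrographs[idx[np.argmin(d)]])
    return np.array(reps)
```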