
Publications


Featured research published by Gerrit Schoups.


Water Resources Research | 2010

A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non‐Gaussian errors

Gerrit Schoups; Jasper A. Vrugt

Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance. Here we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. A formal generalized likelihood function is presented, which extends the applicability of previously used likelihood functions to situations where residual errors are correlated, heteroscedastic, and non-Gaussian with varying degrees of kurtosis and skewness. The approach focuses on a correct statistical description of the data and the total model residuals, without separating out various error sources. Application to Bayesian uncertainty analysis of a conceptual rainfall-runoff model simultaneously identifies the hydrologic model parameters and the appropriate statistical distribution of the residual errors. When applied to daily rainfall-runoff data from a humid basin we find that (1) residual errors are much better described by a heteroscedastic, first-order, autocorrelated error model with a Laplacian distribution function characterized by heavier tails than a Gaussian distribution; and (2) compared to a standard least-squares approach, proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and different parameter uncertainty estimates that are less sensitive to the particular time period used for inference. Application to daily rainfall-runoff data from a semiarid basin with more significant residual errors and systematic underprediction of peak flows shows that (1) multiplicative bias factors can be used to compensate for some of the largest errors and (2) a skewed error distribution yields improved estimates of predictive uncertainty in this semiarid basin with near-zero flows. We conclude that the presented methodology provides improved estimates of parameter and total prediction uncertainty and should be useful for handling complex residual errors in other hydrologic regression models as well.
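The error model described in the abstract can be sketched in a few lines. Below is a minimal, hypothetical Python parameterization of a heteroscedastic, lag-1 autocorrelated Laplace log-likelihood; the variable names and the linear error-scale model are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def laplace_loglik(residuals, sigma0, sigma1, phi, sim):
    """Log-likelihood for AR(1)-correlated, heteroscedastic Laplace errors.

    Hypothetical parameterization (not the paper's exact form):
      sigma_t = sigma0 + sigma1 * sim_t    heteroscedastic error scale
      a_t     = e_t - phi * e_(t-1)        lag-1 decorrelated innovations
      a_t ~ Laplace(0, b_t), with b_t = sigma_t / sqrt(2)
    """
    sigma = sigma0 + sigma1 * np.asarray(sim)   # error std. dev. grows with flow
    a = np.array(residuals, dtype=float)
    a[1:] = a[1:] - phi * a[:-1]                # remove first-order autocorrelation
    b = sigma / np.sqrt(2.0)                    # Laplace scale for a given std. dev.
    return np.sum(-np.log(2.0 * b) - np.abs(a) / b)

# Zero residuals with unit error scale give the maximum attainable value:
ll = laplace_loglik(np.zeros(3), sigma0=1.0, sigma1=0.0, phi=0.0, sim=np.zeros(3))
```

Setting phi = 0, sigma1 = 0 and swapping the Laplace density for a Gaussian recovers the standard least-squares case the paper compares against.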


Proceedings of the National Academy of Sciences of the United States of America | 2005

Sustainability of irrigated agriculture in the San Joaquin Valley, California

Gerrit Schoups; Jan W. Hopmans; Chuck Young; Jasper A. Vrugt; Wesley W. Wallender; Ken K. Tanji; Sorab Panday

The sustainability of irrigated agriculture in many arid and semiarid areas of the world is at risk because of a combination of several interrelated factors, including lack of fresh water, lack of drainage, the presence of high water tables, and salinization of soil and groundwater resources. Nowhere in the United States are these issues more apparent than in the San Joaquin Valley of California. A solid understanding of salinization processes at regional spatial and decadal time scales is required to evaluate the sustainability of irrigated agriculture. A hydro-salinity model was developed to integrate subsurface hydrology with reactive salt transport for a 1,400-km2 study area in the San Joaquin Valley. The model was used to reconstruct historical changes in salt storage by irrigated agriculture over the past 60 years. We show that patterns in soil and groundwater salinity were caused by spatial variations in soil hydrology, the change from local groundwater to snowmelt water as the main irrigation water supply, and by occasional droughts. Gypsum dissolution was a critical component of the regional salt balance. Although results show that the total salt input and output were about equal for the past 20 years, the model also predicts salinization of the deeper aquifers, thereby questioning the sustainability of irrigated agriculture.
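The regional salt balance the model tracks can be summarized as a simple mass balance, including the gypsum dissolution term the study found critical. The sketch below uses entirely hypothetical figures to show the bookkeeping.

```python
# Annual change in regional salt storage (all figures hypothetical, kt/yr):
#   d(storage)/dt = inputs (irrigation water) - outputs (drainage, pumping)
#                   + internal sources (gypsum dissolution)
salt_in = 500.0             # salt imported with the irrigation supply
salt_out = 480.0            # salt exported via drainage and groundwater pumping
gypsum_dissolution = 30.0   # salt released internally by gypsum dissolution
d_storage = salt_in - salt_out + gypsum_dissolution  # net accumulation, kt/yr
```

Even with inputs and outputs nearly balanced, a positive internal source term yields ongoing accumulation, which is the mechanism behind the predicted salinization of the deeper aquifers.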


Water Resources Research | 2010

Gamma distribution models for transit time estimation in catchments: physical interpretation of parameters and implications for time-variant transit time assessment.

Markus Hrachowitz; Chris Soulsby; Doerthe Tetzlaff; I. A. Malcolm; Gerrit Schoups

In hydrological tracer studies, the gamma distribution can serve as an appropriate transit time distribution (TTD) as it allows more flexibility to account for nonlinearities in the behavior of catchment systems than the more commonly used exponential distribution. However, it is unclear which physical interpretation can be ascribed to its two parameters (α, β). In this study, long-term tracer data from three contrasting catchments in the Scottish Highlands were used for a comparative assessment of interannual variability in TTDs and resulting mean transit times (MTT = αβ) inferred by the gamma distribution model. In addition, spatial variation in the long-term average TTDs from these and six additional catchments was also assessed. The temporal analysis showed that the α parameter was controlled by precipitation intensities above catchment-specific thresholds. In contrast, the β parameter, which showed little temporal variability and no relationship with precipitation intensity, was found to be closely related to catchment landscape organization, notably the hydrological characteristics of the dominant soils and the drainage density. The relationship between α and precipitation intensity was used to express α as a time-varying function within the framework of lumped convolution integrals to examine the nonstationarity of TTDs. The resulting time-variant TTDs provided more detailed and potentially useful information about the temporal dynamics and the timing of solute fluxes. It was shown that in the wet, cool climatic conditions of the Scottish Highlands, the transit times from the time-variant TTD were roughly consistent with the variations of MTTs revealed by interannual analysis.
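As a quick numerical illustration of the gamma TTD and the mean-transit-time relation MTT = αβ, here is a short Python sketch; the parameter values are invented for illustration and are not taken from the study.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_ttd(t, alpha, beta):
    """Gamma transit time distribution with shape alpha and scale beta
    (time units). The mean transit time is MTT = alpha * beta; alpha = 1
    recovers the simpler exponential TTD."""
    return t ** (alpha - 1.0) * np.exp(-t / beta) / (beta ** alpha * gamma_fn(alpha))

# Illustrative parameters (not values from the paper): alpha = 0.5, beta = 600 d,
# so MTT = 300 d. Check the relation by numerical integration (midpoint rule).
dt = 0.01
t = np.arange(dt / 2.0, 20000.0, dt)        # time since input, days
pdf = gamma_ttd(t, alpha=0.5, beta=600.0)
mtt = np.sum(t * pdf) * dt                  # numerical mean, ~ alpha * beta
```

Shape parameters alpha < 1 put extra weight on both very short and very long transit times, which is the added flexibility over the exponential model that the abstract refers to.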


Proceedings of the National Academy of Sciences of the United States of America | 2010

Reclaiming freshwater sustainability in the Cadillac Desert

John L. Sabo; Tushar Sinha; Laura C. Bowling; Gerrit Schoups; Wesley W. Wallender; Michael E. Campana; Keith A. Cherkauer; Pam L. Fuller; William L. Graf; Jan W. Hopmans; John S. Kominoski; Carissa Taylor; Stanley W. Trimble; Robert H. Webb; Ellen Wohl

Increasing human appropriation of freshwater resources presents a tangible limit to the sustainability of cities, agriculture, and ecosystems in the western United States. Marc Reisner tackles this theme in his 1986 classic Cadillac Desert: The American West and Its Disappearing Water. Reisner's analysis paints a portrait of region-wide hydrologic dysfunction in the western United States, suggesting that the storage capacity of reservoirs will be impaired by sediment infilling, croplands will be rendered infertile by salt, and water scarcity will pit growing desert cities against agribusiness in the face of dwindling water resources. Here we evaluate these claims using the best available data and scientific tools. Our analysis provides strong scientific support for many of Reisner's claims, except the notion that reservoir storage is imminently threatened by sediment. More broadly, we estimate that the equivalent of nearly 76% of streamflow in the Cadillac Desert region is currently appropriated by humans, and this figure could rise to nearly 86% under a doubling of the region's population. Thus, Reisner's incisive journalism led him to the same conclusions as those rendered by copious data, modern scientific tools, and the application of a more genuine scientific method. We close with a prospectus for reclaiming freshwater sustainability in the Cadillac Desert, including a suite of recommendations for reducing region-wide human appropriation of streamflow to a target level of 60%.


Water Resources Research | 2008

Model complexity control for hydrologic prediction

Gerrit Schoups; N. C. van de Giesen; Hubert H. G. Savenije

A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
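The overfitting behaviour described here, where the calibration fit keeps improving with polynomial order even as predictive skill degrades, is easy to reproduce. The sketch below uses AIC on synthetic data; the data-generating function, noise level, and order range are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)  # synthetic noisy "data"

def aic(y, yhat, k):
    """AIC under Gaussian errors: n*log(RSS/n) + 2k, with k fitted parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2.0 * k

scores = {}
for order in range(1, 10):                 # increasing model complexity
    coef = np.polyfit(x, y, order)         # calibration fit always improves...
    yhat = np.polyval(coef, x)
    scores[order] = aic(y, yhat, order + 1)

best = min(scores, key=scores.get)         # ...but AIC penalizes extra parameters
```

CV and SRM play the same role as the AIC penalty here, by estimating out-of-sample error directly or bounding it analytically.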


Water Resources Research | 2010

Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

Gerrit Schoups; Jasper A. Vrugt; Fabrizio Fenicia; N. C. van de Giesen

Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
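The truncation error of a first-order, explicit, fixed-step scheme is easy to demonstrate on the simplest conceptual building block, a single linear reservoir. The toy sketch below (parameter values invented) compares explicit Euler against the analytical solution of dS/dt = -kS.

```python
import numpy as np

def euler_reservoir(s0, k, dt, n_steps):
    """First-order, explicit, fixed-step Euler for a linear reservoir
    dS/dt = -k*S (outflow q = k*S, no inflow). Illustrative toy model."""
    s = s0
    for _ in range(n_steps):
        s += dt * (-k * s)       # one explicit Euler step
    return s

s0, k, t_end = 100.0, 0.5, 10.0
exact = s0 * np.exp(-k * t_end)                        # analytical solution
coarse = euler_reservoir(s0, k, dt=1.0, n_steps=10)    # cheap but inaccurate
fine = euler_reservoir(s0, k, dt=0.01, n_steps=1000)   # smaller steps converge
```

In a full model this integration error is a deterministic function of the parameters, so it can deform the likelihood surface and produce the artificial posterior bimodality the abstract describes.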


Hydrological Processes | 2014

Catchment properties, function, and conceptual model representation: Is there a correspondence?

Fabrizio Fenicia; Dmitri Kavetski; Hubert H. G. Savenije; Martyn P. Clark; Gerrit Schoups; Laurent Pfister; Jim E Freer

This study investigates the possible correspondence between catchment structure, as represented by perceptual hydrological models developed from fieldwork investigations, and mathematical model structures, selected on the basis of reproducing observed catchment hydrographs. Three Luxembourgish headwater catchments are considered, where previous fieldwork suggested distinct flow-generating mechanisms and hydrological dynamics. A set of lumped conceptual model structures is hypothesized and implemented using the SUPERFLEX framework. Following parameter calibration, the model performance is examined in terms of predictive accuracy, quantification of uncertainty, and the ability to reproduce the flow-duration curve signature. Our key research question is whether differences in the performance of the conceptual model structures can be interpreted based on the dominant catchment processes suggested from fieldwork investigations. For example, we propose that the permeable bedrock and the presence of multiple aquifers in the Huewelerbach catchment may explain the superior performance of model structures with storage elements connected in parallel. Conversely, model structures with serial connections perform better in the Weierbach and Wollefsbach catchments, which are characterized by impermeable bedrock and dominated by lateral flow. The presence of threshold dynamics in the Weierbach and Wollefsbach catchments may favour nonlinear models, while the smoother dynamics of the larger Huewelerbach catchment were suitably reproduced by linear models. It is also shown how hydrologically distinct processes can be effectively described by the same mathematical model components. Major research questions are reviewed, including the correspondence between hydrological processes at different levels of scale and how best to synthesize the experimentalists' and modellers' perspectives.
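The parallel-versus-serial distinction can be made concrete with two linear reservoirs. This toy sketch (rate constants and the 50/50 split are hypothetical, and it is not SUPERFLEX code) shows how the two configurations shape the response to a single rain pulse.

```python
import numpy as np

def two_reservoirs(p, k1, k2, parallel, split=0.5):
    """Route rainfall p through two linear reservoirs, connected either in
    parallel (fast + slow paths) or in series. Toy model for illustration."""
    s1 = 0.0
    s2 = 0.0
    q = np.zeros_like(p)
    for t, pt in enumerate(p):
        if parallel:
            s1 += split * pt              # rain is split between two paths
            s2 += (1.0 - split) * pt
            q1 = k1 * s1
            q2 = k2 * s2
            s1 -= q1
            s2 -= q2
            q[t] = q1 + q2                # both paths discharge directly
        else:
            s1 += pt
            q1 = k1 * s1                  # upper reservoir drains...
            s1 -= q1
            s2 += q1                      # ...into the lower reservoir
            q2 = k2 * s2
            s2 -= q2
            q[t] = q2                     # only the lower reservoir discharges
    return q

p = np.zeros(50)
p[0] = 10.0                               # a single pulse of rain
q_par = two_reservoirs(p, k1=0.5, k2=0.05, parallel=True)
q_ser = two_reservoirs(p, k1=0.5, k2=0.05, parallel=False)
```

The parallel configuration superimposes a fast and a slow recession (plausible where multiple aquifers coexist), while the serial one delays and smooths the whole response, which is one way to read the catchment-to-structure correspondence the study tests.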


Water Resources Research | 2012

Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

Joerg Rings; Jasper A. Vrugt; Gerrit Schoups; Johan Alexander Huisman; Harry Vereecken

Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
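The weighted-average construction of the BMA predictive pdf can be written down directly. The sketch below uses Gaussian member pdfs for brevity (the paper's point is precisely that these can be replaced by more flexible, particle-filter-derived mixtures); the weights, means, and spreads are invented.

```python
import numpy as np

def bma_pdf(x, weights, means, sigmas):
    """BMA predictive pdf: a mixture of member forecast pdfs weighted by
    posterior model probabilities (Gaussian components for simplicity)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalize to posterior probs
    pdf = np.zeros_like(x)
    for wi, m, s in zip(w, means, sigmas):
        pdf += wi * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return pdf

x = np.linspace(-10.0, 12.0, 4401)
pdf = bma_pdf(x, weights=[0.6, 0.4], means=[0.0, 2.0], sigmas=[1.0, 1.5])
dx = x[1] - x[0]
area = np.sum(pdf) * dx                           # mixture integrates to ~1
mean = np.sum(x * pdf) * dx                       # mixture mean = 0.6*0 + 0.4*2
```

Because the mixture is a proper density, its intervals remain statistically coherent no matter how flexible the individual member pdfs become.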


Journal of Hazardous Materials | 2014

Corrosion rate estimations of microscale zerovalent iron particles via direct hydrogen production measurements.

Milica Velimirovic; Luca Carniato; Queenie Simons; Gerrit Schoups; Piet Seuntjens; Leen Bastiaens

In this study, the aging behavior of microscale zerovalent iron (mZVI) particles was investigated by quantifying the hydrogen gas generated by anaerobic mZVI corrosion in batch degradation experiments. Granular iron and nanoscale zerovalent iron (nZVI) particles were included as controls. First, experiments in liquid medium (without aquifer material) revealed that mZVI particles have an approximately 10-30 times lower corrosion rate than nZVI particles. A good correlation was found between the surface-area-normalized corrosion rate (RSA) and the reaction rate constants (kSA) of PCE, TCE, cDCE and 1,1,1-TCA. Generally, particles with higher degradation rates also have faster corrosion rates, but exceptions do exist. In a second phase, the hydrogen evolution was also monitored during batch tests in the presence of aquifer material and real groundwater. A 4-9 times higher corrosion rate of mZVI particles was observed under these natural conditions in comparison with the aquifer-free artificial condition, which can be attributed to the low pH of the aquifer and its buffer capacity. A corrosion model was calibrated on the batch experiments to take into account the inhibitory effects of the corrosion products (dissolved iron, hydrogen and OH-) on the iron corrosion rate.
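The link from measured hydrogen to corrosion rate rests on the 1:1 stoichiometry of anaerobic iron corrosion (Fe + 2 H2O → Fe(OH)2 + H2). A minimal sketch of that conversion, with entirely hypothetical example numbers:

```python
# Anaerobic iron corrosion generates hydrogen in a 1:1 molar ratio with iron,
# so measured H2 production translates directly into an Fe corrosion rate.
M_FE = 55.845                                   # g/mol, molar mass of iron

def corrosion_rate_sa(h2_mol_per_day, iron_mass_g, ssa_m2_per_g):
    """Surface-area-normalized corrosion rate R_SA in mol Fe m^-2 d^-1.
    All example inputs below are hypothetical, not the paper's data."""
    fe_mol_per_day = h2_mol_per_day             # 1:1 stoichiometry with H2
    area_m2 = iron_mass_g * ssa_m2_per_g        # total reactive surface area
    return fe_mol_per_day / area_m2

r_sa = corrosion_rate_sa(h2_mol_per_day=2e-4, iron_mass_g=1.0, ssa_m2_per_g=0.5)
fe_mass_loss_g_per_day = 2e-4 * M_FE            # mol/d * g/mol
```

Normalizing by surface area is what allows the 10-30x comparison between mZVI and nZVI particles despite their very different specific surface areas.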


Journal of Contaminant Hydrology | 2012

Predicting longevity of iron permeable reactive barriers using multiple iron deactivation models

Luca Carniato; Gerrit Schoups; Piet Seuntjens; T Van Nooten; Queenie Simons; Leen Bastiaens

In this study we investigate the model uncertainties involved in predicting long-term permeable reactive barrier (PRB) remediation efficiency based on a lab-scale column experiment under accelerated flow conditions. A PRB consisting of 20% iron and 80% sand was simulated in a laboratory-scale column, and contaminated groundwater was pumped into the column for approximately 1 year at an average groundwater velocity of 3.7 x 10^-1 m d^-1. Dissolved contaminant (PCE, TCE, cis-DCE, trans-DCE and VC) and inorganic (Ca2+, Fe2+, TIC and pH) concentrations were measured in groundwater sampled at different times and at eight different distances along the column. These measurements were used to calibrate a multi-component reactive transport model, which subsequently provided predictions of long-term PRB efficiency under reduced flow conditions (i.e., a groundwater velocity of 1.4 x 10^-3 m d^-1), representative of a field site of interest in this study. Iron reactive surface reduction due to mineral precipitation and iron dissolution was simulated using four different models. All models reproduced the column experiment measurements reasonably well, whereas the extrapolated long-term efficiency under different flow rates differed significantly between the models. These results highlight significant model uncertainties associated with extrapolating long-term PRB performance based on lab-scale column experiments. These uncertainties should be accounted for at the PRB design phase, and may be reduced by independent experiments and field observations aimed at a better understanding of reactive surface deactivation mechanisms in iron PRBs.
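To illustrate why the choice of deactivation model matters for extrapolation, here is one plausible (hypothetical) deactivation law, first-order decay of the reactive iron surface; the rate constant and failure threshold are invented for illustration.

```python
import numpy as np

def longevity_years(kd_per_year, required_fraction):
    """Years until the reactive surface S(t) = S0 * exp(-kd * t) falls to
    the fraction of S0 still needed to meet treatment targets.
    Hypothetical first-order law; other deactivation models give different
    extrapolations from the same calibration data."""
    return -np.log(required_fraction) / kd_per_year

# Hypothetical numbers: 5%/yr deactivation, failure when 20% of surface remains
t_fail = longevity_years(kd_per_year=0.05, required_fraction=0.2)
```

Because several deactivation laws can fit a one-year column dataset equally well while diverging over decades, the predicted longevity is highly model-dependent, which is the central uncertainty the study quantifies.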

Collaboration


Gerrit Schoups's top co-authors and their affiliations.

Top Co-Authors


Jan W. Hopmans

University of California


Fabrizio Fenicia

Swiss Federal Institute of Aquatic Science and Technology


N. C. van de Giesen

Delft University of Technology


Thomas Harter

University of California


Hubert H. G. Savenije

Delft University of Technology


Luca Carniato

Delft University of Technology


Chuck Young

University of California
