Publication


Featured research published by Marco Ratto.


Computer Physics Communications | 2010

Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index

Andrea Saltelli; Paola Annoni; Ivano Azzini; Francesca Campolongo; Marco Ratto; Stefano Tarantola

Variance based methods have established themselves as versatile and effective among the various available techniques for sensitivity analysis of model output. Practitioners can in principle describe the sensitivity pattern of a model Y = f(X1, X2, ..., Xk) with k uncertain input factors via a full decomposition of the variance V of Y into terms depending on the factors and their interactions. More often, practitioners are satisfied with computing just k first order effects and k total effects, the latter describing synthetically the interactions among input factors. A key concern in sensitivity analysis is the computational cost of the analysis, defined as the number of evaluations of f(X1, X2, ..., Xk) needed to complete it, since f is often a numerical model with a long processing time. While the cost of estimating first order effects is relatively low and only weakly dependent on k, it remains high and strictly k-dependent for the total effect indices. In the present note we compare existing and new practices for this index and offer recommendations on which to use.
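To make the estimation scheme concrete, here is a minimal numpy sketch of the matrix-based ("pick and freeze") design discussed in this literature, using Jansen's estimators (one of the estimator families such notes compare) on the Ishigami test function, which stands in for an expensive model f. The function and sample size are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    # Toy stand-in for an expensive model Y = f(X1, X2, X3).
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

rng = np.random.default_rng(0)
N, k = 10000, 3
# Two independent N x k sample matrices on the input space [-pi, pi]^3.
A = rng.uniform(-np.pi, np.pi, (N, k))
B = rng.uniform(-np.pi, np.pi, (N, k))
yA, yB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([yA, yB]))

for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]          # hybrid matrix: column i taken from B
    yAB = ishigami(AB)
    Si = (V - 0.5 * np.mean((yB - yAB)**2)) / V   # Jansen first-order effect
    STi = 0.5 * np.mean((yA - yAB)**2) / V        # Jansen total effect
    print(f"X{i+1}: S = {Si:.2f}, ST = {STi:.2f}")
```

The complete analysis costs N(k+2) model runs: 2N for the base matrices A and B plus N for each of the k hybrid matrices, which is the strict k-dependence of total effect estimation the abstract refers to.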


Reliability Engineering & System Safety | 2006

Sensitivity analysis practices: Strategies for model-based inference

Andrea Saltelli; Marco Ratto; Stefano Tarantola; Francesca Campolongo

Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 ("System analysis at molecular scale", by H. Rabitz), we searched Science Online to identify and then review all recent articles having "sensitivity analysis" as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of the existing guidelines for SA issued on both sides of the Atlantic, our review found nothing but very primitive SA tools, based on "one-factor-at-a-time" (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome the shortcomings of OAT and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the importance ranking of factors univocal. We analyse the requirements of SA in the context of modelling, present best available practices on the basis of an elementary model, and point the reader to available recipes for a rigorous SA.
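As a concrete illustration of the OAT pitfall described above (a hypothetical example, not one taken from the paper), consider a purely interactive model Y = X1*X2: moving one factor at a time away from the nominal point (0, 0) reports zero effect for both factors, while the variance-based total effects correctly flag both.

```python
import numpy as np

def model(x1, x2):
    # Purely interactive model: no additive main effects at all.
    return x1 * x2

# OAT around the nominal point (0, 0): one factor moved at a time.
base = model(0.0, 0.0)
print("OAT effect of x1:", model(1.0, 0.0) - base)   # 0.0
print("OAT effect of x2:", model(0.0, 1.0) - base)   # 0.0

# Variance-based total effects with X1, X2 ~ U(-1, 1) (pick-and-freeze).
rng = np.random.default_rng(1)
N = 100000
A = rng.uniform(-1, 1, (N, 2))
B = rng.uniform(-1, 1, (N, 2))
yA = model(A[:, 0], A[:, 1])
V = np.var(yA)
for i in range(2):
    AB = A.copy()
    AB[:, i] = B[:, i]
    yAB = model(AB[:, 0], AB[:, 1])
    print(f"Total effect of x{i+1}: {0.5 * np.mean((yA - yAB)**2) / V:.2f}")
```

For this model the first-order indices are exactly zero and the total indices are exactly one: all of the output variance is driven by the interaction, which no OAT design around a single point can detect.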


Computer Physics Communications | 2001

Sensitivity Analysis in Model Calibration: GSA-GLUE Approach

Marco Ratto; Stefano Tarantola; Andrea Saltelli

A new approach is presented that is applicable in the framework of model calibration to observed data. It combines the Generalized Likelihood Uncertainty Estimation (GLUE) technique with Global Sensitivity Analysis (GSA) and is based on multiple model evaluations. GSA is a quantitative, model-independent approach based on estimating the fractional contribution of each input factor to the variance of the model output, also accounting for interaction terms. In GLUE, the model runs are classified according to a likelihood measure, conditioning each run on the observations. In calibration procedures, strong interaction between model parameters is typically observed, owing to model over-parameterization. The use of likelihood measures allows an estimate of the posterior joint pdf of the parameters. By performing a GSA of the likelihood measure, the input factors that mainly drive model runs with a good fit to the data are identified. Moreover, GSA highlights the basic features of the interaction structure. Any tool subsequently adopted to represent the interaction structure in more detail, from correlation coefficients to Principal Component Analysis, Bayesian networks or tree-structured density estimation, confirms the general features identified by GSA.
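A minimal sketch of the combined workflow may help: Monte Carlo sampling of the parameters, a GLUE-style likelihood measure (here an inverse error variance with a behavioural cutoff, one common choice among many), and a crude first-order GSA of that likelihood. The toy model, priors and threshold are all invented for illustration, and the binned correlation ratio below is a rough stand-in for the proper variance-based estimators used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(theta, t):
    # Hypothetical two-parameter model: exponential decay with amplitude.
    return theta[0] * np.exp(-theta[1] * t)

t = np.linspace(0, 5, 20)
obs = toy_model([2.0, 0.8], t) + rng.normal(0, 0.05, t.size)

# 1) Monte Carlo sampling of the prior parameter space.
N = 5000
theta = rng.uniform([0.5, 0.1], [4.0, 2.0], (N, 2))
sims = np.array([toy_model(th, t) for th in theta])

# 2) GLUE-style likelihood: inverse error variance, behavioural cutoff.
sse = np.sum((sims - obs)**2, axis=1)
L = 1.0 / sse
L[L < np.quantile(L, 0.9)] = 0.0      # keep the best 10% of runs
L /= L.sum()                          # normalise to posterior-like weights

# 3) Crude first-order GSA of the likelihood measure via binned
#    correlation ratios Var(E[L | theta_i]) / Var(L).
for i in range(2):
    edges = np.quantile(theta[:, i], np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(theta[:, i], edges) - 1, 0, 19)
    cond = np.array([L[idx == b].mean() for b in range(20)])
    print(f"First-order effect of theta_{i+1} on L: {np.var(cond)/np.var(L):.2f}")
```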


Environmental Modelling and Software | 2012

Position Paper: A general framework for Dynamic Emulation Modelling in environmental problems

Andrea Castelletti; Stefano Galelli; Marco Ratto; Rodolfo Soncini-Sessa; Peter C. Young

Emulation modelling is an effective way of overcoming the large computational burden associated with the process-based models traditionally adopted by the environmental modelling community. An emulator is a low-order, computationally efficient model identified from the original large model and then used to replace it in computationally intensive applications. As the number and forms of problems that benefit from the identification and subsequent use of an emulator are very large, emulation modelling has emerged in different sectors of science, engineering and social science, and a variety of different strategies and techniques have been proposed in recent years. The main aim of the paper is to provide an introduction to emulation modelling, together with a unified strategy for its application, so that modellers from different disciplines can better appreciate how it may be applied in their area of expertise. Particular emphasis is devoted to Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original process-based model, with consequent advantages in a wide variety of problem areas. The different techniques and approaches to DEMo are considered in two macro-categories: structure-based methods, where the mathematical structure of the original model is manipulated into a simpler, more computationally efficient form; and data-based approaches, where the emulator is identified and estimated from a data set generated by planned experiments conducted on the large simulation model. The main contribution of the paper is a unified, six-step procedure that can be applied to most kinds of dynamic emulation problem.
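For the data-based branch of this taxonomy, a minimal sketch (loosely mirroring the design/identification/validation loop, but static rather than dynamic for brevity) could look as follows, with a cheap analytic function standing in for the expensive process-based simulator and scikit-learn's Gaussian process regressor as the emulator. All names and settings here are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def slow_model(p):
    # Stand-in for an expensive process-based simulator with 2 parameters.
    return np.sin(3 * p[0]) * np.exp(-p[1]) + 0.5 * p[1]**2

# Design + simulation: sample the parameter space and run the "large" model.
P_train = rng.uniform(0, 1, (60, 2))
y_train = np.array([slow_model(p) for p in P_train])

# Identification + estimation: fit the computationally cheap emulator.
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                    normalize_y=True)
emulator.fit(P_train, y_train)

# Validation: compare emulator predictions against fresh simulator runs.
P_test = rng.uniform(0, 1, (200, 2))
y_test = np.array([slow_model(p) for p in P_test])
rmse = np.sqrt(np.mean((emulator.predict(P_test) - y_test)**2))
print(f"Emulator RMSE on held-out runs: {rmse:.4f}")
```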


Environmental Modelling and Software | 2012

Editorial: Emulation techniques for the reduction and sensitivity analysis of complex environmental models

Marco Ratto; Andrea Castelletti; Andrea Pagano

Emulation (also denoted metamodelling in the literature) is an important and expanding area of research and represents one of the major advances in the study of complex mathematical models, with applications ranging from model reduction to sensitivity analysis. Despite the stunning increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems, i.e. when they are combined with optimization routines. Therefore, a combination of techniques for complex model reduction with procedures for data assimilation and learning-based control could help to bridge the gap between science and the operational use of models for decision-making. Furthermore, sensitivity analysis is a well-known and established tool for evaluating the robustness of model-based results in management and planning, and is often performed in tandem with emulation; indeed, emulators provide an efficient means of performing sensitivity analysis for large and expensive models. This thematic issue aims to provide a guide and reference for modellers in choosing appropriate emulation modelling approaches and understanding their features. Tools and applications of sensitivity analysis in the context of environmental modelling, a typical complement of emulation in most applications, are also addressed. We hope that this thematic issue provides a useful benchmark in the academic literature for this important and expanding area of research, and will create an opportunity for dialogue between methodological and user-focused research.


Reliability Engineering & System Safety | 2009

Non-parametric estimation of conditional moments for sensitivity analysis

Marco Ratto; Andrea Pagano; Peter C. Young

In this paper, we consider the non-parametric estimation of conditional moments, which is useful for applications in global sensitivity analysis (GSA) and in the more general emulation framework. The estimation is based on the state-dependent parameter (SDP) estimation approach and allows for the estimation of conditional moments of order larger than unity. This makes it possible to identify a wider spectrum of parameter sensitivities than the variance-based main effects, such as shifts in the variance, skewness or kurtosis of the model output, thus adding valuable information for the analyst at a small computational cost.
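The SDP approach itself relies on recursive Kalman filtering and fixed-interval smoothing; as a much simpler stand-in, the sketch below uses a Nadaraya-Watson kernel smoother to estimate E[Y|X1] and, from the smoothed squared residuals, Var[Y|X1] for a toy heteroscedastic model in which X1 shifts the output spread. The model and bandwidth are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def nw_smooth(x, y, grid, h=0.05):
    # Nadaraya-Watson kernel regression: a simple stand-in for the
    # recursive SDP smoother used in the paper.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h)**2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

# Toy heteroscedastic model: X1 shifts both the mean and the spread of Y.
N = 20000
X = rng.uniform(0, 1, (N, 2))
Y = np.sin(2 * np.pi * X[:, 0]) + X[:, 0] * rng.normal(0, 1, N) + X[:, 1]

grid = np.linspace(0.05, 0.95, 19)
m1 = nw_smooth(X[:, 0], Y, grid)                  # E[Y | X1]
resid2 = (Y - np.interp(X[:, 0], grid, m1))**2    # squared residuals
v1 = nw_smooth(X[:, 0], resid2, grid)             # Var[Y | X1]

print("Main effect share Var(E[Y|X1])/Var(Y):", round(np.var(m1)/np.var(Y), 2))
print("Var[Y|X1] at X1 = 0.05, 0.5, 0.95:", np.round(v1[[0, 9, 18]], 2))
```

The variance-based main effect of X1 only reflects the shift in the conditional mean; the rising conditional variance is the kind of additional information that higher-order conditional moments provide.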


Environmental Modelling and Software | 2012

A comparison of eight metamodeling techniques for the simulation of N2O fluxes and N leaching from corn crops

Nathalie Villa-Vialaneix; Marco Follador; Marco Ratto; Adrian Leip

The environmental costs of intensive farming activities are often under-estimated or not traded by the market, even though they play an important role in addressing future society's needs. The estimation of nitrogen (N) dynamics is thus an important issue which demands detailed simulation-based methods, used in an integrated fashion to correctly represent the complex and non-linear interactions within cropping systems. To calculate N2O fluxes and N leaching from European arable lands, a modeling framework has been developed by linking the CAPRI agro-economic dataset with the DNDC-EUROPE bio-geo-chemical model. However, despite the great power of modern computers, its use at continental scale is often too computationally costly. By comparing several statistical methods, this paper aims to design a metamodel able to approximate the expensive code of the detailed modeling approach, devising the best compromise between estimation performance and simulation speed. We describe the use of two parametric (linear) models and six non-parametric approaches: two methods based on splines (ACOSSO and SDR), one method based on kriging (DACE), a neural network method (multilayer perceptron, MLP), support vector machines (SVM) and a bagging method (random forest, RF). The analysis shows that, when few data are available to train the metamodel, spline-based approaches lead to the best results, while as the size of the training dataset increases, SVM and RF provide faster and more accurate solutions.
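A reduced sketch of such a benchmark, for two of the non-parametric families named in the conclusions, can be assembled with scikit-learn; the synthetic response below merely stands in for the CAPRI/DNDC-EUROPE training data, which is not reproduced here, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-in for the simulator outputs: a smooth nonlinear
# response of four inputs plus noise (the real study uses CAPRI/DNDC data).
X = rng.uniform(0, 1, (500, 4))
y = np.exp(X[:, 0]) * X[:, 1] + np.sin(3 * X[:, 2]) + 0.1 * rng.normal(size=500)

candidates = [("SVM", SVR(C=10.0, epsilon=0.01)),
              ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, metamodel in candidates:
    r2 = cross_val_score(metamodel, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")
```

Repeating the comparison at several training-set sizes, as the paper does, is what reveals the crossover between the spline-based methods and SVM/RF reported in the abstract.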


Chemical Reviews | 2012

Update 1 of: Sensitivity analysis for chemical models.

Andrea Saltelli; Marco Ratto; Stefano Tarantola; Francesca Campolongo

Chemists routinely create models of reaction systems to understand reaction mechanisms, kinetic properties, process yields under various operating conditions, or the impact of chemicals on humans and the environment. As opposed to concise physical laws, these models are attempts to mimic the system by hypothesizing, extracting, and encoding system features (e.g., a potentially relevant reaction pathway), within a process that can hardly be formalized scientifically. A model will hopefully help to corroborate or falsify a given description of reality, e.g., by validating a reaction scheme for a photochemical process in the atmosphere, and possibly to influence reality, e.g., by allowing the identification of optimal operating conditions for an industrial process or suggesting mitigating strategies for an undesired environmental impact. These models are customarily built in the presence of uncertainties of various levels: in the pathway, in the order of the kinetics associated with the pathway, in the numerical values of the kinetic and thermodynamic constants for that pathway, and so on. Propagating all these uncertainties via the model onto the model output of interest, e.g., the yield of a process, is the job of uncertainty analysis. Determining the strength of the relation between a given uncertain input and the output is the job of sensitivity analysis. A straightforward implementation of the "sensitivity" concept is provided by model output derivatives. If the model output of interest is Y, its sensitivity to an input factor Xi is simply Y'_Xi = ∂Y/∂Xi. This measure tells how sensitive the output is to a perturbation of the input. For discrete input factors, local sensitivities might be impossible to evaluate, as wide perturbations of the input would be implied. If a measure independent of the units used for Y and Xi is needed, the so-called elasticity coefficient S^r_Xi = (X0_i/Y0)(∂Y/∂Xi) can be used, where X0_i is the nominal value of factor Xi and Y0 is the value taken by Y when all input factors are at their nominal values. The nominal (or reference, or design) value X0_i can be the mean (or median) value when an uncertainty distribution (either empirical or hypothesized) is available. In this latter case an alternative measure is S^σ_Xi = (σ_Xi/σ_Y)(∂Y/∂Xi), where the standard deviations σ_Xi and σ_Y are the input and output of the uncertainty analysis, respectively, in the sense that σ_Xi comes from the available knowledge on Xi, while σ_Y must be inferred using the model. Whereas S^r_Xi is a dimensionless version of the pure derivative ∂Y/∂Xi and hence still a purely local measure (i.e., relative to the point where the derivative is taken), S^σ_Xi depends upon the uncertainty range of factor Xi and is in this sense a more informative measure. Ceteris paribus, factors with larger standard deviations have more chance to contribute significantly to the uncertainty in the output. Local, derivative-based sensitivity measures can be efficiently computed by an array of techniques, ranging from automated differentiation (where the computer program that implements the model is modified so that the sensitivities are computed with a modicum of extra execution time) to direct methods (where the differential equations describing the model are solved directly in terms of species concentrations and their derivatives). There is a vast amount of literature on these sensitivity measures [10, 11]. The majority of sensitivity analyses met with in chemistry and physics are local and derivative-based.
Local sensitivities are useful for a variety of applications, such as the solution of inverse problems, e.g., relating macroscopic observables of a system, such as kinetic constants, to the quantum mechanics properties of the system, or the analysis of runaway and parametric sensitivity of various types of chemical reactors. Contexts where local sensitivity has been widely used are as follows: (1) to understand the reaction path, mechanism, or rate-determining steps in a detailed
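The three local measures defined above can be illustrated with a minimal numpy sketch using central finite differences on a toy model; the model, nominal values and standard deviations are invented, and σ_Y is propagated to first order by linearization.

```python
import numpy as np

def model(x):
    # Toy "reaction yield" model of three uncertain inputs.
    return x[0]**2 * np.exp(-x[1]) + 3.0 * x[2]

x0 = np.array([1.0, 0.5, 2.0])          # nominal (design) values X0_i
sigma_x = np.array([0.1, 0.2, 0.05])    # input standard deviations
y0 = model(x0)

# Central finite differences for the local derivatives dY/dXi.
h = 1e-6
grad = np.empty(3)
for i in range(3):
    e = np.zeros(3)
    e[i] = h
    grad[i] = (model(x0 + e) - model(x0 - e)) / (2 * h)

# sigma_Y propagated to first order (linearization), then the measures.
sigma_y = np.sqrt(np.sum((grad * sigma_x)**2))
for i in range(3):
    elasticity = (x0[i] / y0) * grad[i]           # S^r: unit-independent
    sigma_norm = (sigma_x[i] / sigma_y) * grad[i] # S^sigma: uncertainty-aware
    print(f"X{i+1}: dY/dX = {grad[i]:+.3f}, S^r = {elasticity:+.3f}, "
          f"S^sigma = {sigma_norm:+.3f}")
```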


Technometrics | 2011

Statistical emulation of large linear dynamic models

Peter C. Young; Marco Ratto

The article describes a new methodology for the emulation of high-order, dynamic simulation models. This exploits the technique of dominant mode analysis to identify a reduced-order, linear transfer function model that closely reproduces the linearized dynamic behavior of the large model. Based on a set of such reduced-order models, identified over a specified region of the large model’s parameter space, nonparametric regression, tensor product cubic spline smoothing, or Gaussian process emulation are used to construct a computationally efficient, low-order, dynamic emulation (or meta) model that can replace the large model in applications such as sensitivity analysis, forecasting, or control system design. Two modes of emulation are possible, one of which allows for novel ‘stand-alone’ operation that replicates the dynamic behavior of the large simulation model over any time horizon and any sequence of the forcing inputs. Two examples demonstrate the practical utility of the proposed technique and supplementary materials, available online and including Matlab code, provide a background to the methods of transfer function model identification and estimation used in the article.
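The article's methodology rests on dominant mode analysis and refined instrumental-variable estimation of transfer function models; as a rough, hypothetical stand-in, the sketch below fits a first-order discrete transfer function by ordinary least squares to a toy second-order "large" model dominated by one slow mode, then runs the reduced model in the stand-alone mode described above on a fresh forcing sequence.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in for a "large" simulation model: a second-order linear response
# with poles at 0.9 (dominant slow mode) and 0.3.
def large_model(u):
    y = np.zeros(u.size)
    for t in range(2, u.size):
        y[t] = 1.2 * y[t-1] - 0.27 * y[t-2] + 0.5 * u[t-1]
    return y

u = rng.normal(size=500)
y = large_model(u)

# Identify a reduced first-order transfer function y[t] = a*y[t-1] + b*u[t-1]
# by ordinary least squares (a crude stand-in for the refined estimation
# methods used in the article).
Phi = np.column_stack([y[1:-1], u[1:-1]])
a, b = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(f"Reduced model: y[t] = {a:.3f} y[t-1] + {b:.3f} u[t-1]")

# Stand-alone emulation: drive the reduced model with fresh forcing.
u2 = rng.normal(size=500)
y_full = large_model(u2)
y_red = np.zeros(500)
for t in range(1, 500):
    y_red[t] = a * y_red[t-1] + b * u2[t-1]
rmse = np.sqrt(np.mean((y_full - y_red)**2))
print("Stand-alone emulation RMSE:", round(rmse, 3))
```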


Reliability Engineering & System Safety | 2009

Calculating First-order Sensitivity Measures: A Benchmark of Some Recent Methodologies

Debora Gatelli; Sergei S. Kucherenko; Marco Ratto; Stefano Tarantola

This work compares three different global sensitivity analysis techniques, namely state-dependent parameter (SDP) modelling, random balance designs, and the improved formulas for the Sobol' sensitivity indices. These techniques are not yet commonly known in the literature. The strengths and weaknesses of each technique in terms of efficiency and computational cost are highlighted, enabling the user to choose the most suitable method for the computational model analysed. Two test functions proposed in the literature are considered, and the computational costs and convergence rates for each are compared and discussed.
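Of the three techniques, the random balance design (RBD) is the simplest to sketch: every factor traverses the same periodic curve under an independent random permutation of the runs, and each first-order index is read from the Fourier spectrum of the output after reordering. Below is a numpy sketch on the Ishigami function, a standard SA test case assumed here for illustration (the paper's own test functions may differ).

```python
import numpy as np

rng = np.random.default_rng(7)
N, k, M = 4097, 3, 6            # runs, factors, harmonics kept

def ishigami(X, a=7.0, b=0.1):
    # Standard SA test function with known first-order indices.
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))

# Random balance design: every factor follows the same periodic curve
# s -> arcsin(sin(s)), each under its own random permutation of the runs.
s = np.linspace(-np.pi, np.pi, N)
order = np.array([rng.permutation(N) for _ in range(k)])
U = 0.5 + np.arcsin(np.sin(s))[order] / np.pi   # uniform on (0, 1), k x N
X = (-np.pi + 2 * np.pi * U).T                  # scale to [-pi, pi]
y = ishigami(X)

exact = [0.31, 0.44, 0.00]
for i in range(k):
    y_re = y[np.argsort(order[i])]   # reorder so factor i varies periodically
    F = np.fft.rfft(y_re - y_re.mean())
    Si = 2 * np.sum(np.abs(F[1:M+1])**2) / N**2 / np.var(y)
    print(f"S{i+1} estimate {Si:.2f} (analytic {exact[i]:.2f})")
```

The appeal benchmarked in the paper is the cost profile: RBD estimates all k first-order indices from a single set of N model runs, independently of k.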

Collaboration


Dive into Marco Ratto's collaboration.

Top Co-Authors

Andrea Saltelli
Autonomous University of Barcelona

Robert Kollmann
Université libre de Bruxelles

Peter C. Young
Australian National University

Andrea Pagano
International Practical Shooting Confederation