Rossella Arcucci
University of Naples Federico II
Publications
Featured research published by Rossella Arcucci.
Journal of Scientific Computing | 2014
Luisa D'Amore; Rossella Arcucci; Luisa Carracciuolo; Almerico Murli
Data assimilation (DA) is a methodology for combining mathematical models simulating complex systems (the background knowledge) and measurements (the reality, or observational data) in order to improve the estimate of the system state (the forecast). DA is an inverse, ill-posed problem usually applied to huge amounts of data, so it is a large and computationally expensive problem. Here we focus on scalable methods that make DA applications feasible for a huge number of background data and observations. We present a highly parallel, scalable algorithm for solving variational DA. We provide a mathematical formalization of this approach and we also study the performance of the resulting algorithm.
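For orientation, the standard three-dimensional variational (3D-VAR) form of the DA problem described above can be written as the minimization of a cost functional (general background, not a formula quoted from the paper):

$$
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) \;+\; \tfrac{1}{2}\,\big(\mathcal{H}(\mathbf{x})-\mathbf{y}\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathcal{H}(\mathbf{x})-\mathbf{y}\big),
$$

where $\mathbf{x}_b$ is the background state, $\mathbf{y}$ the vector of observations, $\mathcal{H}$ the observation operator, and $\mathbf{B}$, $\mathbf{R}$ the background and observation error covariance matrices; the analysis state is the minimizer of $J$.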
international conference on high performance computing and simulation | 2015
Rossella Arcucci; Luisa D'Amore; Luisa Carracciuolo
We present an innovative approach for solving Four Dimensional Variational Data Assimilation (4D-VAR DA) problems. The approach starts from a decomposition of the physical domain; it uses a partitioning of the solution and a modified regularization functional describing the 4D-VAR DA problem on the decomposition. We provide a mathematical formulation of the model and we perform a feasibility analysis in terms of computational cost and of algorithmic scalability. We use the scale-up factor, which measures the performance gain in terms of time complexity reduction. We verify the reliability of the approach on a consistent test case (the Shallow Water Equations).
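As general background (the paper's decomposed functional adds domain-decomposition and matching terms not shown here), the classical 4D-VAR functional minimized over an assimilation window with $N+1$ observation times is:

$$
J(\mathbf{x}_0) \;=\; \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b) \;+\; \tfrac{1}{2}\sum_{k=0}^{N}\big(\mathcal{H}_k(\mathbf{x}_k)-\mathbf{y}_k\big)^{\mathrm{T}}\mathbf{R}_k^{-1}\big(\mathcal{H}_k(\mathbf{x}_k)-\mathbf{y}_k\big),
\qquad \mathbf{x}_k=\mathcal{M}_{0\to k}(\mathbf{x}_0),
$$

where $\mathcal{M}_{0\to k}$ is the forecast model propagating the initial state $\mathbf{x}_0$ to observation time $k$.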
NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2011: International Conference on Numerical Analysis and Applied Mathematics | 2011
Luisa D’Amore; Rossella Arcucci; Livia Marcellino; A. Murli
Data Assimilation (DA) refers to the methods for merging observed (generally sparse and noisy) information into a numerical model. Good assimilations make the modeled state more consistent with the observations. Effective data assimilation systems tend to make forecasts more accurate within the ability of the model. In this work we discuss some computational efforts towards the development of a parallel three-dimensional data assimilation scheme, based on the oceanographic 3D-VAR assimilation scheme named OCEANVAR.
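A minimal, self-contained sketch of the 3D-VAR minimization underlying schemes such as OCEANVAR is given below. It is illustrative only (OCEANVAR's operators, covariance modeling and parallel decomposition are far richer), and all names in it are hypothetical.

```python
# Minimal 3D-VAR sketch (illustrative only; not OCEANVAR code).
import numpy as np
from scipy.optimize import minimize

def make_cost(xb, B_inv, H, y, R_inv):
    """Return the 3D-VAR cost J(x) and its gradient for a linear observation operator H."""
    def cost(x):
        dx = x - xb
        innov = H @ x - y
        return 0.5 * dx @ B_inv @ dx + 0.5 * innov @ R_inv @ innov
    def grad(x):
        return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))
    return cost, grad

# Tiny synthetic example: 4 state variables, 2 observations.
rng = np.random.default_rng(0)
xb = rng.normal(size=4)                 # background state
B_inv = np.eye(4)                       # inverse background error covariance
H = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.]])        # observation operator
y = rng.normal(size=2)                  # observations
R_inv = np.eye(2)                       # inverse observation error covariance

cost, grad = make_cost(xb, B_inv, H, y, R_inv)
analysis = minimize(cost, xb, jac=grad, method="L-BFGS-B").x
print("analysis state:", analysis)
```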
International Journal of Parallel Programming | 2017
Rossella Arcucci; Luisa D’Amore; Luisa Carracciuolo; Giuseppe Scotti; Giuliano Laccetti
We introduce a decomposition of the Tikhonov Regularization (TR) functional which splits this operator into several TR functionals, suitably modified in order to enforce the matching of their solutions. As a consequence, instead of solving one problem we can solve several problems reproducing the initial one at smaller dimensions. This approach leads to a reduction of the time complexity of the resulting algorithm. Since the subproblems are solved in parallel, the decomposition also leads to a reduction of the overall execution time. The main outcome of the decomposition is that the parallel algorithm is oriented to exploit the highest performance of parallel architectures where concurrency is implemented at both the coarsest and finest levels of granularity. Performance analysis is discussed in terms of algorithm and software scalability. Validation is performed on a reference parallel architecture made of a distributed-memory multiprocessor and a Graphics Processing Unit. Results are presented for the Data Assimilation problem in oceanographic models.
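A minimal sketch of the idea, under the assumption that the matching is enforced through a penalty on the overlap regions (the paper's exact modified functionals may differ): the global TR functional

$$
J(\mathbf{x}) \;=\; \|\mathbf{A}\mathbf{x}-\mathbf{b}\|_2^2 \;+\; \lambda\,\|\mathbf{x}-\mathbf{x}_0\|_2^2
$$

is replaced, on each subdomain $\Omega_i$ of the decomposition, by a local functional of the form

$$
J_i(\mathbf{x}_i) \;=\; \|\mathbf{A}_i\mathbf{x}_i-\mathbf{b}_i\|_2^2 \;+\; \lambda\,\|\mathbf{x}_i-\mathbf{x}_{0,i}\|_2^2 \;+\; \mu\sum_{j\,:\,\Omega_j\cap\Omega_i\neq\emptyset}\big\|\,\mathbf{x}_i|_{\Omega_i\cap\Omega_j}-\mathbf{x}_j|_{\Omega_i\cap\Omega_j}\big\|_2^2,
$$

so that the smaller subproblems can be solved concurrently while the additional term forces their solutions to agree where the subdomains overlap.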
international conference on conceptual structures | 2017
Rossella Arcucci; Umberto Marotta; Aniello Murano; Loredana Sorrentino
Parity games are abstract infinite-duration two-player games, widely studied in computer science. Several solution algorithms have been proposed and implemented in the community tool of choice, PGSolver, which has declared the Zielonka Recursive (ZR) algorithm the best performing on randomly generated games. With the aim of scaling to and solving wider classes of parity games, several improvements and optimizations have been proposed over the existing algorithms. However, no one has yet explored the benefit of using the full computational power of which even common modern multicore processors are capable. This is even more surprising considering that most of the advanced algorithms in PGSolver are sequential. In this paper we introduce and implement, on a multicore architecture, a parallel version of the attractor algorithm, which is the main kernel of the ZR algorithm. This choice follows our finding that more than 99% of the execution time of the ZR algorithm is spent in this module. We provide tests on graphs of up to 20K nodes generated through PGSolver and we discuss performance analysis in terms of strong and weak scaling.
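The sequential attractor computation that the paper parallelizes can be sketched as a backward fixpoint over the game graph. The version below is a minimal illustration only: the names and data layout are hypothetical, and the parallel version distributes the frontier over cores.

```python
# Sequential sketch of the attractor computation on a parity game graph.
def attractor(nodes, edges, owner, player, target):
    """Compute Attr_player(target): the set of nodes from which `player`
    can force the play into `target`.

    nodes  : iterable of node ids
    edges  : dict node -> list of successor nodes
    owner  : dict node -> 0 or 1 (which player owns the node)
    player : 0 or 1
    target : set of nodes to be attracted to
    """
    preds = {v: [] for v in nodes}
    for u in nodes:
        for v in edges[u]:
            preds[v].append(u)

    attr = set(target)
    frontier = list(target)
    escapes = {v: len(edges[v]) for v in nodes}  # remaining escape edges of opponent nodes
    while frontier:
        v = frontier.pop()
        for u in preds[v]:
            if u in attr:
                continue
            if owner[u] == player:
                attr.add(u)
                frontier.append(u)
            else:
                escapes[u] -= 1
                if escapes[u] == 0:  # every successor of u is already attracted
                    attr.add(u)
                    frontier.append(u)
    return attr

# Example: node 'a' (owned by player 0) can move into the target node 'b'.
print(attractor({'a', 'b'}, {'a': ['b'], 'b': ['b']},
                {'a': 0, 'b': 1}, 0, {'b'}))  # -> {'a', 'b'}
```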
international conference on applied mathematics | 2017
Rossella Arcucci; Luisa D’Amore; Valeria Mele
We analyse and discuss the performance of a decomposition approach introduced for solving large-scale Variational Data Assimilation (DD-VAR DA) problems. Our performance analysis uses a set of matrices (decomposition and execution) [9], built to highlight the dependency relationships among component parts of a computational problem and/or among operators of the algorithm that solves the problem [10], which are the fundamental characteristics of an algorithm. We show how performance metrics depend on the complexity of the algorithm and on parameters characterizing the structure of the two matrices, such as their number of rows and columns. We use a new definition of speed-up, involving the scale-up factor, which measures the performance gain in terms of time complexity reduction, to describe the non-linear behavior of the performance gain.
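As a rough orientation only (the paper's precise definitions may differ): if $C(N)$ denotes the time complexity of the algorithm on a problem of size $N$, one plausible reading of the scale-up factor for a decomposition into $p$ subproblems is

$$
Sc_p(N) \;=\; \frac{C(N)}{C(N/p)},
$$

which, for super-linear complexities $C$, grows faster than $p$ and therefore produces the non-linear performance gain that the new speed-up definition is meant to capture.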
Journal of Computational Physics | 2017
Rossella Arcucci; Luisa D'Amore; Jenny Pistoia; Ralf Toumi; Almerico Murli
We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it results when it is employed for the analysis of temperature and salinity variations of data collected in closed and semi-closed seas. We present a computing approach to solve the main computational kernel at the heart of the VarDA problem which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide the sensitivity analysis of this approach and we also study its performance in terms of the accuracy gain on the computed solution. We provide validations on two realistic oceanographic data sets.
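In its simplest linear form (stated here as background; the paper applies the idea to the VarDA kernel), Tikhonov regularization replaces an ill-conditioned least-squares problem with

$$
\min_{\mathbf{x}}\;\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_2^2 \;+\; \lambda^2\,\|\mathbf{x}-\mathbf{x}_0\|_2^2,
$$

whose solution $\mathbf{x}=\mathbf{x}_0+(\mathbf{A}^{\mathrm{T}}\mathbf{A}+\lambda^2\mathbf{I})^{-1}\mathbf{A}^{\mathrm{T}}(\mathbf{b}-\mathbf{A}\mathbf{x}_0)$ trades a small bias for a much better conditioned system.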
international conference on parallel processing | 2017
Luisa D’Amore; Rossella Arcucci; Yi Li; Raffaele Montella; Andrew M. Moore; Luke Phillipson; Ralf Toumi
We consider the Incremental Strong constraint 4D VARiational (IS4DVAR) algorithm for data assimilation implemented in ROMS, with the aim of studying its performance in terms of strong scaling on computing architectures such as a cluster of CPUs. We consider realistic test cases with data collected in enclosed and semi-enclosed seas, namely the Caspian Sea and West Africa/Angola, as well as data collected in the California bay. The computing architecture we use is currently available at Imperial College London. The analysis allows us to highlight that the ROMS-IS4DVAR performance on emerging architectures depends on a deep relation among the problem size, the domain decomposition approach, and the characteristics of the computing architecture.
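Strong scaling here means that the global problem size is kept fixed while the number of processes $p$ grows; the usual metrics (standard definitions, not specific to this paper) are

$$
S(p) \;=\; \frac{T(1)}{T(p)}, \qquad E(p) \;=\; \frac{S(p)}{p} \;=\; \frac{T(1)}{p\,T(p)},
$$

where $T(p)$ is the wall-clock time measured on $p$ processes.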
international conference on parallel processing | 2017
Rossella Arcucci; Davide Basciano; Alessandro Cilardo; Luisa D'Amore; F. Mantovani
Driven by the emerging requirements of High Performance Computing (HPC) architectures, the main focus of this work is the interplay between the computational and energetic aspects of a Four Dimensional Variational (4DVAR) Data Assimilation algorithm based on Domain Decomposition (named DD-4DVAR). We report first results on the energy consumption of the DD-4DVAR algorithm on an embedded processor, together with a mathematical analysis of the energy behavior of the algorithm that treats the architecture's characteristics as variables of the model. The main objective is to capture the essential operations of the algorithm exhibiting a direct relationship with the measured energy. The experimental evaluation is carried out on a set of mini-clusters made available by the Barcelona Supercomputing Center.
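A simple model of the kind alluded to (a hedged sketch, not the paper's exact model) relates the measured energy to average power draw and execution time, and attributes it to the algorithm's essential operations:

$$
E \;=\; P_{\mathrm{avg}}\cdot T_{\mathrm{exec}} \;\approx\; \sum_{k} n_k\, e_k,
$$

where $n_k$ counts the operations of type $k$ (floating-point, memory access, communication) performed by the algorithm and $e_k$ is the per-operation energy cost on the target architecture.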
international conference on applied mathematics | 2017
Rossella Arcucci; Luisa D’Amore; Ralf Toumi
Data Assimilation (DA) is an uncertainty quantification technique used for improving numerical forecasts by incorporating observed data into prediction models. Since a crucial issue in DA models is the ill-conditioning of the covariance matrices involved, it is mandatory to introduce preconditioning methods in DA software. Here we present first studies concerning the introduction of two different preconditioning methods in a DA software we are developing (named S3DVAR), which implements a Scalable Three-Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values collected in the Caspian Sea, using the Regional Ocean Modeling System (ROMS) with observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We also present the algorithmic strategies we employ.
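A widely used form of preconditioning in variational DA (general background; not necessarily one of the two methods studied here) is the control variable transform: writing $\mathbf{B}\approx\mathbf{V}\mathbf{V}^{\mathrm{T}}$ and $\mathbf{x}-\mathbf{x}_b=\mathbf{V}\mathbf{v}$, the cost functional becomes

$$
J(\mathbf{v}) \;=\; \tfrac{1}{2}\,\mathbf{v}^{\mathrm{T}}\mathbf{v} \;+\; \tfrac{1}{2}\,\big(\mathcal{H}(\mathbf{x}_b+\mathbf{V}\mathbf{v})-\mathbf{y}\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathcal{H}(\mathbf{x}_b+\mathbf{V}\mathbf{v})-\mathbf{y}\big),
$$

so the background term contributes an identity block to the Hessian and the conditioning of the minimization improves.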