Paolo Zanetti
University of Naples Federico II
Publications
Featured research published by Paolo Zanetti.
Parallel Computing | 2010
Stefania Corsaro; P. L. De Angelis; Zelda Marino; Francesca Perla; Paolo Zanetti
In this paper we discuss the development of a valuation system for the asset-liability management of portfolios of life insurance policies on advanced architectures. According to the new rules of the Solvency II project, numerical simulations must provide reliable estimates of the relevant quantities involved in the contracts; therefore, valuation processes have to rely on accurate algorithms able to provide solutions in a suitable turnaround time. Our target is to develop effective valuation software. To this end, we first introduce a change of numeraire in the stochastic processes for the risk sources, thus providing estimates under the forward risk-neutral measure that result in a gain in accuracy. We then parallelize the Monte Carlo method to speed up the simulation process.
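As a rough illustration of the parallelization step described above, the following sketch distributes independent Monte Carlo batches over worker processes; the lognormal dynamics, the placeholder payoff, and the function run_batch are illustrative assumptions, not the actual valuation kernels of the paper.

```python
# Minimal sketch (not the paper's implementation): distributing Monte Carlo
# valuation of a contract payoff over worker processes.
import numpy as np
from multiprocessing import Pool

def run_batch(args):
    seed, n_paths, n_steps, dt, r, sigma = args
    rng = np.random.default_rng(seed)
    # Euler scheme for a lognormal risk source under the pricing measure
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s_T = np.exp(log_s[:, -1])
    payoff = np.maximum(s_T - 1.0, 0.0)          # placeholder contract payoff
    return np.exp(-r * n_steps * dt) * payoff.mean()

if __name__ == "__main__":
    batches = [(seed, 50_000, 252, 1 / 252, 0.02, 0.2) for seed in range(8)]
    with Pool() as pool:
        estimates = pool.map(run_batch, batches)  # independent batches in parallel
    print("MC estimate:", np.mean(estimates))
```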
Soft Computing | 2017
Giuseppe Casarano; Gilberto Castellani; Luca Passalacqua; Francesca Perla; Paolo Zanetti
The definition of solvency for insurance companies, within the European Union, is currently being revised as part of the Solvency II Directive. The new definition induces revolutionary changes in the logic of control and expands the responsibilities in business management. The rationale of the fundamental measures of the Directive cannot be understood without reference to probability distribution functions. Many insurers are struggling with the realisation of a so-called “internal model” to assess risks and determine the overall solvency needs, as requested by the Directive. The quantitative assessment of the solvency position of an insurer relies on Monte Carlo simulation, in particular on nested Monte Carlo simulation, which poses very hard computational and technological problems. In this paper, we address methodological and computational issues of an “internal model”, designing a tractable formulation of the very complex expectations resulting from the “market-consistent” valuation of fundamental measures, such as Technical Provisions, Solvency Capital Requirement and Probability Distribution Forecast, in the solvency assessment of life insurance companies. We illustrate the software and technological solutions adopted to integrate the Disar system, an asset–liability computational system for monitoring life insurance policies, into advanced computing environments, thus meeting the demand for high computing performance that makes the calculation process of the solvency measures covered by the Directive feasible.
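The nested simulation structure mentioned above can be sketched, under strong simplifying assumptions, as outer real-world scenarios each followed by inner market-consistent valuations; the toy dynamics and the 99.5% quantile rule below only mirror the Directive's SCR definition in spirit.

```python
# Hedged sketch of nested Monte Carlo: outer real-world scenarios to the
# one-year horizon, inner risk-neutral valuations in each scenario.
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 2_000, 500

def inner_valuation(state, n_inner):
    # risk-neutral expectation of a placeholder liability given the outer state
    shocks = rng.standard_normal(n_inner)
    return np.mean(np.maximum(state * np.exp(0.2 * shocks) - 1.0, 0.0))

outer_states = np.exp(0.15 * rng.standard_normal(n_outer))   # real-world scenarios
own_funds = np.array([1.5 - inner_valuation(s, n_inner) for s in outer_states])

scr = np.mean(own_funds) - np.quantile(own_funds, 0.005)     # 99.5% one-year VaR proxy
print("empirical SCR:", scr)
```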
Journal of Computational Science | 2017
Ugo Fiore; Paolo Zanetti; Francesco Palmieri; Francesca Perla
Traffic matrices, abstract representations of demand, are essential for network operators endeavoring to model, measure, maintain, and improve the efficiency of their complex and heterogeneous architectures. Traffic matrix estimation consists of inferring a traffic matrix from link-level measurements. Driven by the need to enable agile deployment of new services while, at the same time, slashing operating expenditure and energy consumption, the trend in telecommunications is to shift functionality from physical appliances to virtualized services. We analyze the effects of this landscape change on traffic matrices, their dynamics, and their estimation, indicating some new challenges and problems that will arise in all the associated modeling, analysis and evaluation activities.
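A minimal sketch of the classical estimation setting, assuming a toy three-link topology: link loads y relate to origin-destination demands x through a routing matrix A, and x is inferred from y; the non-negative least-squares estimator is one of several possible choices.

```python
# Illustrative sketch: traffic matrix estimation as an inverse problem y = A x.
import numpy as np
from scipy.optimize import nnls

# routing matrix: rows = links, columns = OD pairs (1 if the pair uses the link)
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
x_true = np.array([10.0, 5.0, 8.0, 2.0])      # unknown OD demands
y = A @ x_true                                 # observed link-level measurements

x_hat, residual = nnls(A, y)                   # under-determined: one of many solutions
print("estimated demands:", x_hat, "residual:", residual)
```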
Journal of e-Learning and Knowledge Society | 2009
Stefania Corsaro; P. L. De Angelis; M Guarracino; Zelda Marino; V. Monetti; Francesca Perla; Paolo Zanetti
The widespread use of computing tools and Internet technologies, which allow both distance learning and access to large amounts of data and information, makes the process of solving a technical or scientific problem much more realistic, exciting and stimulating than it was a few years ago, when appropriate calculation tools were lacking. However, the usability of information by students and teachers at various levels appears extremely limited, both because of the diversity and fragmentation of the available material and because of the large gap between the different components that should characterize science and, in particular, modern mathematics. KREMM (Knowledge Repository of Mathematical Models) is an e-learning system for the study of mathematics for economics and finance. The purpose of KREMM is to provide significant support for the educational and pedagogical use of mathematical and statistical techniques in economic and social disciplines: starting from the creation of material on these topics, it proposes a complete learning path.
Archive | 2018
Ugo Fiore; Zelda Marino; Luca Passalacqua; Francesca Perla; Salvatore Scognamiglio; Paolo Zanetti
Under the Solvency II Directive, insurance and reinsurance undertakings are required to perform continuous monitoring of risks and market-consistent valuation of assets and liabilities. The application of Solvency II is particularly demanding, both theoretically and from a computational point of view. At present, any technique able to improve accuracy or to reduce computing time is highly desirable. This work reports initial results on the design of a Deep Learning Network aimed at reducing computing time by avoiding the standard full nested Monte Carlo approach.
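A hedged sketch of the proxy idea: a network trained on pairs of outer-scenario risk factors and inner Monte Carlo valuations can replace the inner simulations with a forward pass; the synthetic data and the MLPRegressor stand-in below are assumptions, not the architecture studied in the paper.

```python
# Sketch of a learned valuation proxy replacing inner simulations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.standard_normal((5_000, 4))                 # outer-scenario risk factors
y = np.maximum(X @ np.array([0.5, -0.2, 0.1, 0.3]), 0.0) \
    + 0.01 * rng.standard_normal(5_000)             # stand-in for inner-MC valuations

proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
X_new = rng.standard_normal((10, 4))
print(proxy.predict(X_new))                         # valuation without nested simulation
```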
International Symposium on Cyberspace Safety and Security | 2018
Ugo Fiore; Adrian Florea; Arpad Gellert; Lucian N. Vintan; Paolo Zanetti
In recent years, timing channels that exploit resources shared at the microarchitectural level have attracted a lot of attention. The majority of such side-channel attacks target CPU caches. Cache-based side-channel attacks are based on monitoring the cache accesses performed by a victim process through measurements of access times by a spy process that shares the cache with the victim. Among the countermeasures proposed to frustrate cache-based side-channel attacks, cache partitioning seems the most effective. The recently introduced Cache Allocation Technology (CAT) enables fine control over the LLC and over how cores allocate into it. In this work, we introduce the problem of optimizing cache partitioning under dynamically configurable schemes such as Intel CAT, with the aim of thwarting access-based side-channel attacks.
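As a toy illustration of the partitioning constraint behind the optimization problem, the sketch below assigns disjoint, contiguous capacity bitmasks to a victim and to the remaining processes; the way count and the schemata-style note are assumptions for illustration.

```python
# Toy sketch: classes of service receive capacity bitmasks over the LLC ways;
# isolating a victim from a spy means giving them disjoint, contiguous masks.
def contiguous_mask(start_way: int, n_ways: int) -> int:
    # CAT requires contiguous masks; this helper builds one by construction
    return ((1 << n_ways) - 1) << start_way

N_WAYS = 12                              # assumed number of LLC ways
victim_mask = contiguous_mask(0, 4)      # ways 0-3 reserved for the victim
other_mask = contiguous_mask(4, 8)       # ways 4-11 for everything else

assert victim_mask & other_mask == 0     # disjoint partitions: no shared lines to probe
print(f"victim COS mask: {victim_mask:#05x}")
print(f"other  COS mask: {other_mask:#05x}")
# On Linux, such masks would typically be applied through the resctrl
# interface (deployment detail assumed here, not shown).
```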
Information Sciences | 2017
Ugo Fiore; Alfredo De Santis; Francesca Perla; Paolo Zanetti; Francesco Palmieri
In recent years, the number of frauds in credit card-based online payments has grown dramatically, pushing banks and e-commerce organizations to implement automatic fraud detection systems that perform data mining on huge transaction logs. Machine learning seems to be one of the most promising solutions for spotting illicit transactions, distinguishing fraudulent and non-fraudulent instances through supervised binary classification systems properly trained from pre-screened sample datasets. However, in such a specific application domain, the datasets available for training are strongly imbalanced, with the class of interest considerably less represented than the other. This significantly reduces the effectiveness of binary classifiers, undesirably biasing the results toward the prevailing class, while we are interested in the minority class. Oversampling the minority class has been adopted to alleviate this problem, but this method still has some drawbacks. Generative Adversarial Networks (GANs) are general, flexible, and powerful generative deep learning models that have achieved success in producing convincingly real-looking images. We trained a GAN to output mimicked minority-class examples, which were then merged with the training data into an augmented training set, so that the effectiveness of a classifier could be improved. Experiments show that a classifier trained on the augmented set outperforms the same classifier trained on the original data, especially as far as sensitivity is concerned, resulting in an effective fraud detection mechanism.
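A minimal sketch of GAN-based minority-class oversampling, under stated assumptions: a generator is trained against a discriminator on minority examples only and then sampled to augment the training set; the toy data, network sizes and training loop are illustrative, not the paper's configuration.

```python
# Sketch: train a small GAN on minority-class records, then sample it to
# augment the training set.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features, latent_dim = 8, 16
minority = torch.randn(200, n_features) + 2.0      # stand-in for fraud records

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features))
D = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(500):
    # discriminator step: real minority samples vs. generated ones
    z = torch.randn(minority.size(0), latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(minority), torch.ones(len(minority), 1)) + \
             bce(D(fake), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    z = torch.randn(minority.size(0), latent_dim)
    loss_g = bce(D(G(z)), torch.ones(len(minority), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(1_000, latent_dim)).detach()   # mimicked minority examples
augmented_minority = torch.cat([minority, synthetic])    # merged into the training set
```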
Applied Mathematical Sciences | 2017
Stefania Corsaro; P. L. De Angelis; Zelda Marino; Francesca Perla; Paolo Zanetti; Ugo Fiore
Prediction of market prices is an important and well-researched problem. While traditional techniques have yielded good results, room for improvement still exists, especially in the ability to explain sudden changes in behavior as a response to shocks. Nonlinear systems have been successfully used to describe phase transitions in deterministic chaotic systems, so the combination of the expressive power of nonlinear systems and the efficient computation of linear models is an attractive idea. On this basis, a hybrid model is proposed in this work that tunes its regression parameters with the results of nonlinear tools. Experiments, performed on several stocks in diverse sectors and markets, show interesting performance, confirming the presence of distinct phases in the stock evolution, characterized by distinctly separated dynamics. Mathematics Subject Classification: 34A34, 62J02
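A rough sketch of the hybrid idea, assuming a simple rolling-volatility proxy in place of the paper's nonlinear chaos-detection tools: the indicator segments the series into phases and a separate linear autoregression is fitted within each phase.

```python
# Sketch: phase detection by a nonlinear indicator proxy, then per-phase
# linear regression of returns on lagged returns.
import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.standard_normal(1_000)) + 100.0
returns = np.diff(prices)

window = 20
vol = np.array([returns[max(0, t - window):t + 1].std() for t in range(len(returns))])
phase = (vol > np.median(vol)).astype(int)          # 0 = calm phase, 1 = turbulent phase

coeffs = {}
for p in (0, 1):
    idx = np.where(phase[1:] == p)[0] + 1           # fit r_t ~ a + b * r_{t-1} per phase
    X = np.column_stack([np.ones(len(idx)), returns[idx - 1]])
    coeffs[p], *_ = np.linalg.lstsq(X, returns[idx], rcond=None)
print(coeffs)
```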
Archive | 2012
Stefania Corsaro; Pasquale Luigi De Angelis; Zelda Marino; Paolo Zanetti
The European Directive Solvency II has increased the demand for stochastic asset–liability management models for insurance undertakings. The Directive has established that insurance undertakings can develop their own “internal models” for the evaluation of values and risks in the contracts. In this chapter, we give an overview of some computational issues related to internal models. The analysis is carried out on “Italian style” profit-sharing life insurance policies (PS policies) with minimum guaranteed return. We describe some approaches for the development of accurate and efficient algorithms for their simulation. In particular, we discuss the development of parallel software procedures. The main computational kernels arising in models employed in this framework are stochastic differential equations (SDEs) and high-dimensional integrals. We show how one can develop accurate and efficient procedures for PS-policy simulation by applying different numerical methods for SDEs and techniques for accelerating Monte Carlo simulations in the evaluation of the integrals. Moreover, we show that the choice of an appropriate probability measure provides a significant gain in terms of accuracy.
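Two of the computational kernels mentioned above can be sketched as follows: an Euler-Maruyama discretization of an SDE for a risk source and antithetic variates as a simple Monte Carlo acceleration; the Vasicek-style parameters are assumptions, not a PS-policy model.

```python
# Sketch: Euler-Maruyama SDE discretization plus antithetic variates.
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 20_000, 250, 1 / 250
kappa, theta, sigma, r0 = 0.5, 0.03, 0.01, 0.02

def simulate(z):
    r = np.full(z.shape[0], r0)
    integral = np.zeros(z.shape[0])
    for k in range(n_steps):                       # Euler-Maruyama step
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * z[:, k]
        integral += r * dt
    return np.exp(-integral)                       # discount factor along the path

z = rng.standard_normal((n_paths, n_steps))
plain = simulate(z).mean()
antithetic = 0.5 * (simulate(z) + simulate(-z)).mean()   # variance-reduced estimate
print(plain, antithetic)
```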
International Journal of Parallel, Emergent and Distributed Systems | 2008
Mario Rosario Guarracino; Francesca Perla; Paolo Zanetti
In this work, we propose an efficient parallel implementation of the nonsymmetric block Lanczos algorithm for the computation of a few extreme eigenvalues, and corresponding eigenvectors, of real non-Hermitian matrices on distributed-memory multicomputers. The reorganisation of the implemented block Lanczos algorithm makes it possible to exploit coarse-grained parallelism and to harness the computational power of the target architectures. The computational kernels of the algorithm are matrix–matrix multiplications, with dense and sparse factors, QR factorisation and singular value decomposition. To reduce the total amount of communication involved in the matrix–matrix multiplication with a sparse factor, we substitute each matrix appearing in the algorithm with its transpose. We then develop an efficient parallelisation of the matrix–matrix multiplication when the second factor is sparse. Some other linear algebra operations are performed using the ScaLAPACK library. The parallel eigensolver has been tested on a cluster of PCs. All reported results show that the proposed algorithm is efficient on the target architectures for problems of adequate dimension.
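The transpose substitution described above can be illustrated serially: a dense-times-sparse product equals the transpose of the product of the transposed operands, which in the parallel algorithm changes which factor must be communicated; the matrix sizes are illustrative and the distributed-memory layout and ScaLAPACK calls are not reproduced.

```python
# Serial illustration of the transpose trick: A @ S == (S^T @ A^T)^T.
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 1_000))                # dense block of Lanczos vectors
S = sparse_random(1_000, 1_000, density=0.01, random_state=4, format="csr")

direct = A @ S.toarray()                           # reference dense product
via_transpose = (S.T @ A.T).T                      # same product with transposed operands
print(np.allclose(direct, via_transpose))          # True: identical results
```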