Silvia Figini
University of Pavia
Publications
Featured research published by Silvia Figini.
Journal of the Operational Research Society | 2011
Silvia Figini; Paolo Giudici
In this paper we introduce and discuss statistical models aimed at predicting the default probabilities of small and medium enterprises (SMEs). Such models are based on two separate sources of information: quantitative balance-sheet ratios and qualitative information derived from opinion mining on unstructured data. We propose a novel methodology for data fusion in longitudinal and survival duration models, using the quantitative and qualitative variables separately in the likelihood function and then combining their scores linearly through a weight, to obtain the corresponding probability of default for each SME. Using a real financial database, we compare the results achieved, in terms of model performance and predictive capability, by single models and by our proposal. Finally, we select the best model in terms of out-of-sample forecasts, considering key performance indicators.
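A minimal sketch of the score-fusion idea described in the abstract: scores from a quantitative model and a qualitative model are combined linearly through a weight, then mapped to a default probability. The function names, the logistic link, and the input values are illustrative assumptions, not the paper's actual specification.

```python
import math

def combined_pd(quant_score: float, qual_score: float, w: float) -> float:
    """Linearly pool two model scores with weight w, then map the pooled
    score to a probability of default via a logistic link."""
    assert 0.0 <= w <= 1.0, "the pooling weight must lie in [0, 1]"
    pooled = w * quant_score + (1.0 - w) * qual_score
    return 1.0 / (1.0 + math.exp(-pooled))  # logistic link into (0, 1)

# illustrative scores for one SME: balance-sheet model vs. opinion-mining model
pd_hat = combined_pd(quant_score=-1.2, qual_score=0.4, w=0.7)
```

In practice the weight would be chosen to optimize out-of-sample performance, as the abstract's model-selection step suggests.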
Quality and Reliability Engineering International | 2010
Silvia Figini; Ron S. Kenett; Silvia Salini
The focus of the paper is the use of optimal scaling techniques to reduce the dimensionality of ordinal variables describing the quality of services to a continuous score interpretable as a measure of operational risk. This new operational risk score is merged with a financial risk score to obtain an integrated measure of risk. The proposed integration methodology is a generalization of the merging model suggested in Figini and Giudici (J. Oper. Res. Soc. 2010; in press) for a hierarchical data structure. To demonstrate the methodology, we use real data from a telecommunication company providing services to enterprises in different business lines and geographical locations. For each enterprise, we have collected information about operational and financial performance. The approach demonstrated in this case study can be generalized to any service provider concerned with both the quality of service and the financial solvency of its customers.
Journal of Operational Risk | 2007
Silvia Figini; Paolo Giudici; Pierpaolo Uberti; Ani Sanyal
According to the latest proposals of the Basel Committee on Banking Supervision, banks are allowed to use the Advanced Measurement Approach (AMA) option for the computation of their capital charge covering operational risks. Among these methods, the Loss Distribution Approach (LDA) is the most sophisticated (see Frachot et al (2001) and Baud et al (2002)). It is widely recognized that calibration on internal data alone may not suffice for computing an accurate capital charge against operational risk; in other words, internal data should be supplemented with external data. The goal of this paper is to address the optimal way to mix internal and external data with regard to frequency and severity. Rigorous statistical treatments are required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates. We propose a rigorous, statistically optimized methodology to tackle this issue.
Computational Statistics & Data Analysis | 2014
Chiara Gigliarano; Silvia Figini; Pietro Muliere
The ROC curve is one of the most common statistical tools for assessing classifier performance. Selecting the best classifier when ROC curves intersect is quite challenging. A novel approach for model comparison when ROC curves show intersections is proposed. In particular, the relationship between ROC orderings and stochastic dominance is investigated in a theoretical framework, and a general class of indicators is proposed that is coherent with dominance criteria even when ROC curves cross. Furthermore, a simulation study and a real application to credit risk data illustrate the use of the new methodological approach.
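As background to the abstract's problem, a sketch of the standard AUC summary of a ROC curve, computed here via the rank (Mann-Whitney) formulation. A single AUC number can rank two classifiers identically even when their ROC curves cross, which is the situation the paper's dominance-based indicators address. The toy scores and labels are illustrative.

```python
def auc(scores, labels):
    """Empirical AUC: probability that a random positive case is scored
    above a random negative case (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels  = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]   # misranks one positive case
model_b = [0.7, 0.6, 0.5, 0.4, 0.3, 0.2]   # separates the classes perfectly
auc_a, auc_b = auc(model_a, labels), auc(model_b, labels)
```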
European Journal of Operational Research | 2010
Pierpaolo Uberti; Silvia Figini
Credit risk concentration is one of the leading topics in modern finance, as bank regulation has made increasing use of external and internal credit ratings. Concentration risk in credit portfolios arises from an uneven distribution of bank loans to individual borrowers (single-name concentration) or along a hierarchical dimension such as industry and service sectors or geographical regions (sectoral concentration). To measure single-name concentration risk, the literature proposes specific concentration indexes, such as the Herfindahl-Hirschman index and the Gini index, or more general approaches to calculate the economic capital needed to cover the risk arising from the potential default of large borrowers. In our opinion, however, the Gini index and the Herfindahl-Hirschman index can be improved by taking into account methodological and theoretical issues explained in this paper. We propose a new index to measure single-name credit concentration risk and prove its properties. Furthermore, considering the guidelines of Basel II, we describe how our index works on real financial data. Finally, we compare our index with common procedures proposed in the literature on the basis of simulated and real data.
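For context, a minimal sketch of the standard Herfindahl-Hirschman index (HHI) that the abstract proposes to improve upon: the sum of squared exposure shares across borrowers. The portfolio figures are illustrative.

```python
def hhi(exposures):
    """Herfindahl-Hirschman index: sum of squared exposure shares.
    Ranges from 1/n (perfectly even portfolio of n loans) to 1 (one loan)."""
    total = sum(exposures)
    return sum((e / total) ** 2 for e in exposures)

even  = hhi([100, 100, 100, 100])   # fully diversified: HHI = 1/4
lumpy = hhi([400, 50, 30, 20])      # one dominant borrower: HHI rises
```

A higher HHI signals greater single-name concentration, which is the quantity the paper's proposed index measures under additional theoretical requirements.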
Journal of Operational Risk | 2015
Silvia Figini; Lijun Gao; Paolo Giudici
Operational risk is hard to quantify, owing to the presence of heavy-tailed loss distributions. Extreme value distributions, used in this context, are very sensitive to the data, which is a problem when loss data are rare. Self risk assessment questionnaires, if properly modelled, may provide the missing piece of information necessary to adequately estimate operational risks. In this paper we propose to embody self risk assessment data into suitable prior distributions, and to follow a Bayesian approach to merge self assessment with loss data. We derive operational loss posterior distributions, from which appropriate measures of risk, such as the Value at Risk or the Expected Shortfall, can be derived. We test our proposed models on a real database, made up of internal loss data and self risk assessment questionnaires of an anonymous commercial bank. Our results show that the proposed Bayesian models perform better than classical extreme value models, leading to a smaller Value at Risk required to cover unexpected losses.
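A hedged illustration of the Bayesian-merging idea, not the paper's actual model: self-assessment opinion is encoded as a conjugate Gamma prior on the Poisson loss frequency and then updated with observed internal loss counts. The prior parameters and loss counts are invented for the example.

```python
# Prior from self risk assessment: Gamma(shape, rate), prior mean = 4/2 = 2 losses/year
prior_shape, prior_rate = 4.0, 2.0
loss_counts = [3, 1, 4, 2, 5]                # internal data: yearly loss counts

# Gamma-Poisson conjugate update: add total count to the shape,
# number of observation periods to the rate.
post_shape = prior_shape + sum(loss_counts)
post_rate  = prior_rate + len(loss_counts)
post_mean  = post_shape / post_rate          # posterior expected loss frequency
```

The resulting posterior on frequency (combined with a severity model) is the kind of object from which risk measures such as Value at Risk can then be read off.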
International Journal of Intelligent Systems in Accounting, Finance & Management | 2016
Silvia Figini; Roberto Savona; Marika Vezzoli
Focusing on credit risk modelling, this paper introduces a novel approach to ensemble modelling based on normative linear pooling. Models are first classified as dominant or competitive, and the pooling is run using the competitive models only. Numerical experiments comparing parametric models (logit, Bayesian model averaging) and nonparametric models (classification tree, random forest, bagging, boosting) show that the proposed ensemble performs better than alternative approaches, in particular when different modelling cultures, such as logit and classification trees, are mixed together.
Journal of Applied Statistics | 2010
Silvia Figini; Paolo Giudici; Pierpaolo Uberti
According to the latest proposals by the Basel Committee, banks are allowed to use statistical approaches for the computation of their capital charge covering financial risks such as credit risk, market risk and operational risk. It is widely recognized that internal loss data alone do not suffice to provide an accurate capital charge in financial risk management, especially for high-severity, low-frequency events. Financial institutions typically use external loss data to augment the available evidence and, therefore, provide more accurate risk estimates. Rigorous statistical treatments are required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates. The goal of this paper is to propose a correct statistical treatment to make the external and internal data comparable and, therefore, mergeable. Such a methodology augments internal losses with relevant, rather than redundant, external loss data.
Statistical Methods and Applications | 2009
Silvia Figini; Paolo Giudici
In this paper we analyse a real e-learning dataset derived from the e-learning platform of the University of Pavia. The dataset concerns an online learning environment with in-depth teaching materials. The main aims of this paper are to supply a measure of the relative importance of the exercises (tests) at the end of each training unit, to build predictive models of student performance and, finally, to personalize the e-learning platform. The methodology employed is based on nonparametric statistical methods for kernel density estimation, with generalized linear models and generalized additive models used for predictive purposes.
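A short sketch of Gaussian kernel density estimation, the nonparametric building block the abstract mentions. The sample values and bandwidth are illustrative; in practice the bandwidth would be chosen by a data-driven rule.

```python
import math

def kde(x, sample, h):
    """Gaussian kernel density estimate evaluated at point x
    with bandwidth h, averaged over the observed sample."""
    const = 1.0 / (len(sample) * h * math.sqrt(2.0 * math.pi))
    return const * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample)

scores = [55, 60, 62, 70, 71, 75, 88]   # e.g. end-of-unit test scores
density_at_65 = kde(65.0, scores, h=5.0)
```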
PLOS ONE | 2012
Luisa Cutillo; Annamaria Carissimo; Silvia Figini
We consider the problem of finding the set of rankings that best represents a given group of orderings on the same collection of elements (preference lists). This problem arises in social choice and voting theory, in which each voter gives a preference on a set of alternatives, and a system outputs a single preference order based on the observed voters’ preferences. In this paper, we observe that, if the given set of preference lists is not homogeneous, a unique true underlying ranking might not exist. Moreover, only the lists that share the highest amount of information should be aggregated, so multiple rankings might provide a more feasible solution to the problem. In this light, we propose Network Selection, an algorithm that, given a heterogeneous group of rankings, first discovers the communities of homogeneous rankings and then combines only the rank orderings belonging to the same community into a single final ordering. Our approach is inspired by graph theory; indeed, the set of lists can be loosely read as the nodes of a network, and only the lists populating the same community in the network are aggregated. To highlight the strength of our proposal, we show applications to simulated data and to two real datasets, a financial one and a biological one. Experimental results on simulated data show that Network Selection can significantly outperform existing related methods. In addition, the empirical evidence obtained on real financial data reveals that Network Selection is also able to select the most relevant variables in data mining predictive models, providing a clear superiority in terms of the predictive power of the models built. Furthermore, we show the potential of our proposal in the bioinformatics field, providing an application to a biological microarray dataset.
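A loose sketch of the first ingredient such an algorithm needs: a pairwise agreement measure between preference lists, here the normalized Kendall tau distance (fraction of discordantly ordered item pairs). Once pairwise distances are available, similar rankings can be grouped into communities before aggregation; the grouping step itself is not shown and the lists are illustrative.

```python
from itertools import combinations

def kendall_tau_distance(r1, r2):
    """Fraction of item pairs that the two rankings order differently.
    Both rankings must contain exactly the same items."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    pairs = list(combinations(r1, 2))
    discordant = sum(
        (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0 for a, b in pairs
    )
    return discordant / len(pairs)

d_same = kendall_tau_distance(["a", "b", "c"], ["a", "b", "c"])  # identical lists
d_flip = kendall_tau_distance(["a", "b", "c"], ["c", "b", "a"])  # fully reversed
```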