Network

External collaborations at the country level.

Hotspot

Research topics where Tony Van Gestel is active.

Publication


Featured research published by Tony Van Gestel.


Machine Learning | 2004

Benchmarking Least Squares Support Vector Machine Classifiers

Tony Van Gestel; Johan A. K. Suykens; Bart Baesens; Stijn Viaene; Jan Vanthienen; Guido Dedene; Bart De Moor; Joos Vandewalle

In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
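
The linear system mentioned in the abstract can be made concrete. Below is a minimal NumPy sketch of an LS-SVM classifier with an RBF kernel; the function names (rbf_kernel, lssvm_fit, lssvm_predict) and the hyperparameter values are illustrative, not taken from the paper.

import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Pairwise RBF kernel K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    # Solve the LS-SVM dual linear system
    #   [ 0   y^T              ] [b]       [0]
    #   [ y   Omega + I/gamma  ] [alpha] = [1]
    # with Omega_ij = y_i y_j K(x_i, x_j) and labels y_i in {-1, +1}.
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_test, sigma=1.0):
    # Decision function: sign( sum_i alpha_i y_i K(x, x_i) + b )
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)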


European Journal of Operational Research | 2006

Bayesian kernel based classification for financial distress detection

Tony Van Gestel; Bart Baesens; Johan A. K. Suykens; Dirk Van den Poel; Dirk-Emma Baestaens; Marleen Willekens

Corporate credit granting is a key commercial activity of financial institutions nowadays. A critical first step in the credit granting process usually involves a careful financial analysis of the creditworthiness of the potential client. Wrong decisions result either in foregoing valuable clients or, more severely, in substantial capital losses if the client subsequently defaults. It is thus of crucial importance to develop models that estimate the probability of corporate bankruptcy with a high degree of accuracy. Many studies focused on the use of financial ratios in linear statistical models, such as linear discriminant analysis and logistic regression. However, the obtained error rates are often high. In this paper, Least Squares Support Vector Machine (LS-SVM) classifiers, also known as kernel Fisher discriminant analysis, are applied within the Bayesian evidence framework in order to automatically infer and analyze the creditworthiness of potential corporate clients. The inferred posterior class probabilities of bankruptcy are then used to analyze the sensitivity of the classifier output with respect to the given inputs and to assist in the credit assignment decision making process. The suggested nonlinear kernel based classifiers yield better performances than linear discriminant analysis and logistic regression when applied to a real-life data set concerning commercial credit granting to mid-cap Belgian and Dutch firms.
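
As a rough illustration of how a classifier output can be turned into a posterior probability of distress (a simplified stand-in for the evidence-framework computation in the paper), one can assume Gaussian class-conditional densities for a one-dimensional latent score z and apply Bayes' rule with the class priors; the function and variable names below are hypothetical.

import numpy as np
from scipy.stats import norm

def posterior_distress(z, mu_pos, mu_neg, sigma_pos, sigma_neg, prior_pos):
    # Bayes' rule on a latent classifier output z, assuming Gaussian
    # class-conditional densities p(z | class) and a prior P(distress).
    lik_pos = norm.pdf(z, mu_pos, sigma_pos) * prior_pos
    lik_neg = norm.pdf(z, mu_neg, sigma_neg) * (1.0 - prior_pos)
    return lik_pos / (lik_pos + lik_neg)

# Example: latent score 0.3, class means +1 / -1, unit variances,
# and a 5% prior probability of distress.
p = posterior_distress(0.3, mu_pos=1.0, mu_neg=-1.0,
                       sigma_pos=1.0, sigma_neg=1.0, prior_pos=0.05)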


Decision Support Systems | 2006

A process model to develop an internal rating system: sovereign credit ratings

Tony Van Gestel; Bart Baesens; Peter Van Dijcke; Joao Garcia; Johan A. K. Suykens; Jan Vanthienen

The Basel II capital accord encourages financial institutions to develop rating systems for assessing the risk of default of their credit portfolios in order to better calculate the minimum regulatory capital needed to cover unexpected losses. In the internal ratings based approach, financial institutions are allowed to build their own models based on collected data. In this paper, a generic process model to develop an advanced internal rating system is presented in the context of country risk analysis of developed and developing countries. In the modelling step, a new, gradual approach is suggested to augment the well-known ordinal logistic regression model with a kernel based learning capability, thereby yielding models which are at the same time both accurate and readable. The estimated models are extensively evaluated and validated taking into account several criteria. Furthermore, it is shown how these models can be transformed into user-friendly and easy-to-understand scorecards.
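
For reference, the ordinal (cumulative) logistic regression model used as the starting point can be written as follows, where the thresholds \theta_k separate consecutive rating classes; roughly speaking, the kernel-based augmentation described in the paper replaces the purely linear score w^\top x with a more flexible function of the inputs.

P(\text{rating} \le k \mid x) \;=\; \frac{1}{1 + \exp\!\big(-(\theta_k - w^\top x)\big)}, \qquad \theta_1 < \theta_2 < \cdots < \theta_{K-1}.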


Journal of Credit Risk | 2005

Linear and Non-linear Credit Scoring by Combining Logistic Regression and Support Vector Machines

Tony Van Gestel; Bart Baesens; Peter Van Dijcke; Johan A. K. Suykens; Joao Garcia

The Basel II capital accord encourages banks to develop internal rating models that are financially intuitive, easily interpretable and optimally predictive for default. Standard linear logistic models are very easily readable but have limited model flexibility. Advanced neural network and support vector machine models (SVMs) are less straightforward to interpret but can capture more complex multivariate non-linear relations. A gradual approach that balances the interpretability and predictability requirements is applied here to rate banks. First, a linear model is estimated; it is then improved by identifying univariate non-linear ratio transformations that emphasize distressed conditions; and finally SVMs are added to capture remaining multivariate non-linear relations.
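
A minimal sketch of the gradual idea, using scikit-learn and hypothetical helper names: fit an interpretable logistic regression first, then let an SVM capture structure the linear score misses. The exact ratio transformations and model combination in the paper differ; this only illustrates the layering.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def fit_gradual_scorer(X, y):
    # Step 1: interpretable linear baseline.
    linear = LogisticRegression(max_iter=1000).fit(X, y)
    linear_score = linear.decision_function(X).reshape(-1, 1)
    # Step 2 (illustrative): an RBF-kernel SVM on the original ratios plus the
    # linear score, to pick up remaining multivariate non-linear relations.
    svm = SVC(kernel="rbf", probability=True).fit(np.hstack([X, linear_score]), y)
    return linear, svm

def predict_gradual(linear, svm, X):
    score = linear.decision_function(X).reshape(-1, 1)
    return svm.predict_proba(np.hstack([X, score]))[:, 1]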


International Conference on Artificial Neural Networks | 2001

Kernel Canonical Correlation Analysis and Least Squares Support Vector Machines

Tony Van Gestel; Johan A. K. Suykens; Jos De Brabanter; Bart De Moor; Joos Vandewalle

A key idea of nonlinear Support Vector Machines (SVMs) is to map the inputs in a nonlinear way to a high dimensional feature space, while Mercer's condition is applied in order to avoid an explicit expression for the nonlinear mapping. In SVMs for nonlinear classification, a large margin classifier is constructed in the feature space; for regression, a linear regressor is constructed in the feature space. Other kernel extensions of linear algorithms have been proposed, such as kernel Principal Component Analysis (PCA) and kernel Fisher Discriminant Analysis. In this paper, we discuss the extension of linear Canonical Correlation Analysis (CCA) to a kernel CCA with application of the Mercer condition. We also discuss links with single output Least Squares SVM (LS-SVM) Regression and Classification.
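
In one common regularized form (assuming centered kernel matrices K_x and K_y and a small ridge term \kappa, which may differ from the exact formulation in the paper), kernel CCA seeks coefficient vectors \alpha, \beta maximizing the canonical correlation

\rho \;=\; \max_{\alpha,\beta}\; \frac{\alpha^\top K_x K_y \beta}{\sqrt{\alpha^\top (K_x + \kappa I)^2 \alpha}\;\sqrt{\beta^\top (K_y + \kappa I)^2 \beta}},

which leads to a generalized eigenvalue problem in \alpha and \beta.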


Neurocomputing | 2010

From linear to non-linear kernel based classifiers for bankruptcy prediction

Tony Van Gestel; Bart Baesens; David Martens

Bankruptcy prediction has been a topic of research for decades, both within the financial and the academic world. The implementations of international financial and accounting standards, such as Basel II and IFRS, as well as the recent credit crisis, have accentuated this topic even further. This paper describes both regularized and non-linear kernel variants of traditional discriminant analysis techniques, such as logistic regression, Fisher discriminant analysis (FDA) and quadratic discriminant analysis (QDA). Next to a systematic description of these variants, we contribute to the literature by introducing kernel QDA and providing a comprehensive benchmarking study of these classification techniques and their regularized and kernel versions for bankruptcy prediction using 10 real-life data sets. Performance is compared in terms of binary classification accuracy, relevant for evaluating yes/no credit decisions, and in terms of classification accuracy, relevant for pricing differentiated credit granting. The results clearly indicate the significant improvement for kernel variants in both percentage correctly classified (PCC) test instances and area under the ROC curve (AUC), and indicate that bankruptcy problems are weakly non-linear. On average, the best performance is achieved by LS-SVM, closely followed by kernel quadratic discriminant analysis. Given the high impact of small improvements in performance, we show the relevance and importance of considering kernel techniques within this setting. Additional experiments with backwards input selection improve the results further. Finally, we experimentally investigate the relative ranking of the different categories of variables: liquidity, solvency, profitability and various, and as such provide new insights into the relative importance of these categories for predicting financial distress.
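
The two evaluation criteria used in the benchmarking study, percentage correctly classified (PCC) and area under the ROC curve (AUC), can be computed as in this small scikit-learn sketch; the fixed cut-off and the function name are illustrative only.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_classifier(y_true, y_score, threshold=0.0):
    # PCC: fraction of correctly classified test instances at a fixed cut-off.
    y_pred = np.where(y_score >= threshold, 1, -1)
    pcc = accuracy_score(y_true, y_pred)
    # AUC: ranking quality of the continuous score, independent of the cut-off.
    auc = roc_auc_score(y_true, y_score)
    return pcc, auc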


European Journal of Control | 2001

On frequency weighted balanced truncation: Hankel singular values and error bounds

Tony Van Gestel; Bart De Moor; Brian D. O. Anderson; Peter Van Overschee

The concept of frequency weighted balancing proposed by Enns is a generalisation of internally balanced model truncation which is simple to apply and additionally attractive because of the existence of an upper H∞ error bound that is a function of the neglected Hankel singular values. However, a generalisation of this error bound based on the frequency weighted Hankel singular values has not been reported. In this paper, it is shown that there does not exist a frequency weighted upper error bound that depends only on the neglected frequency weighted Hankel singular values. Based on this result, it is shown that truncation of the states corresponding to the lowest frequency weighted Hankel singular values does not always yield the lowest approximation error. It is explained that this is due to cross-terms that appear in the frequency weighted error bound and make the discussion on stability of the reduced order model more complex. These cross-terms are inherent in the frequency weighted balancing technique proposed by Enns.
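
For context, the classical unweighted balanced truncation bound that the paper contrasts with reads as follows, where G is the stable full-order model, G_r the order-r truncated model, and \sigma_{r+1}, \dots, \sigma_n the neglected Hankel singular values (each distinct value counted once); the paper's point is that no analogous bound exists in terms of the frequency weighted Hankel singular values alone.

\| G - G_r \|_{H_\infty} \;\le\; 2 \sum_{i=r+1}^{n} \sigma_i.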


Journal of the Operational Research Society | 2014

Forecasting Loss Given Default models: impact of account characteristics and the macroeconomic state

Ellen Tobback; David Martens; Tony Van Gestel; Bart Baesens

On the basis of two data sets containing Loss Given Default (LGD) observations of home equity and corporate loans, we consider non-linear and non-parametric techniques to model and forecast LGD. These techniques include non-linear Support Vector Regression (SVR), a regression tree, a transformed linear model and a two-stage model combining a linear regression with SVR. We compare these models with an ordinary least squares linear regression. In addition, we incorporate several variants of 11 macroeconomic indicators to estimate the influence of the economic state on loan losses. The out-of-time set-up is complemented with an out-of-sample set-up to mitigate the limited number of credit crisis observations available in credit risk data sets. The two-stage/transformed model outperforms the other techniques when forecasting out-of-time for the home equity/corporate data set, while the non-parametric regression tree is the best performer when forecasting out-of-sample. The incorporation of macroeconomic variables significantly improves the prediction performance. The downturn impact ranges up to 5% depending on the data set and the macroeconomic conditions defining the downturn. These conclusions can help financial institutions when estimating LGD under the internal ratings-based approach of the Basel Accords in order to estimate the downturn LGD needed to calculate the capital requirements. Banks are also required as part of stress test exercises to assess the impact of stressed macroeconomic scenarios on their Profit and Loss (P&L) and banking book, which favours the accurate identification of relevant macroeconomic variables driving LGD evolutions.
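
A minimal sketch of a two-stage LGD model of the kind described (ordinary least squares linear regression followed by SVR on the residuals), using scikit-learn; variable names and hyperparameters are illustrative and not taken from the paper.

from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

def fit_two_stage_lgd(X, lgd):
    # Stage 1: ordinary least squares linear regression on the LGD observations.
    stage1 = LinearRegression().fit(X, lgd)
    residuals = lgd - stage1.predict(X)
    # Stage 2: non-linear SVR fitted on the stage-1 residuals.
    stage2 = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X, residuals)
    return stage1, stage2

def predict_two_stage_lgd(stage1, stage2, X):
    return stage1.predict(X) + stage2.predict(X)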


The Journal of Risk Model Validation | 2014

A proposed framework for backtesting loss given default models

Gert Loterman; Michiel Debruyne; Karlien Vanden Branden; Tony Van Gestel; Christophe Mues

The Basel Accords require financial institutions to regularly validate their loss given default (LGD) models. This is crucial to ensure that banks do not misestimate the minimum capital required to protect them against the risks they face through their lending activities. The validation of an LGD model typically includes backtesting, the process of evaluating to what degree the internal model estimates still correspond with the realized observations. Reported backtesting examples have typically been limited to simply measuring the similarity between model predictions and realized observations. It is however not straightforward to determine acceptable performance based on these measurements alone. Although recent research led to advanced backtesting methods for PD models, the literature on similar backtesting methods for LGD models is much scarcer. This study addresses this literature gap by proposing a backtesting framework using statistical hypothesis tests to support the validation of LGD models. The proposed statistical hypothesis tests implicitly define reliable reference values to determine acceptable performance and take into account the number of LGD observations, as a small sample may affect the quality of the backtesting procedure. This workbench of tests is applied to an LGD model fitted to real-life data and evaluated through a statistical power analysis.
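
As a simple illustration of the kind of statistical hypothesis test such a framework could contain (not the specific tests proposed in the paper), one might test whether the mean difference between realized and predicted LGD is zero at portfolio level; the standard error of this test naturally reflects the number of observations.

import numpy as np
from scipy import stats

def calibration_backtest(lgd_realized, lgd_predicted, alpha=0.05):
    # H0: the mean prediction error is zero (the model is calibrated on average).
    # A small sample widens the confidence interval, making rejection harder.
    errors = np.asarray(lgd_realized) - np.asarray(lgd_predicted)
    t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)
    return {"t_stat": t_stat, "p_value": p_value, "reject_H0": p_value < alpha}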


IFAC Proceedings Volumes | 2003

Identifying positive real models in subspace identification by using regularization

Ivan Goethals; Tony Van Gestel; Johan A. K. Suykens; Paul Van Dooren; Bart De Moor

This paper deals with the lack of positive realness of identified models that may be encountered in many stochastic subspace identification procedures. Lack of positive realness is an often neglected, but important problem. Subspace identification algorithms fail to return a valid linear model if the so-called covariance model, which is obtained from an intermediate realization step in the subspace identification algorithm, is not positive real. The main contribution of this paper is to introduce a regularization approach to impose positive realness on the covariance model. It is shown that positive realness can be imposed by adding a regularization term to a least squares cost function appearing in the subspace identification procedure.
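
Schematically, the regularization idea amounts to replacing the least squares cost in the relevant identification step by a penalized version of the form below, where \lambda \ge 0 trades off data fit against a penalty c(\theta) chosen to push the covariance model towards positive realness; the specific penalty and the step it is applied to are detailed in the paper, so this is only the general shape.

J_\lambda(\theta) \;=\; \| y - \Phi\,\theta \|_2^2 \;+\; \lambda\, c(\theta).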

Collaboration


Dive into Tony Van Gestel's collaborations.

Top Co-Authors

Bart Baesens
Katholieke Universiteit Leuven

Bart De Moor
Katholieke Universiteit Leuven

Johan A. K. Suykens
Katholieke Universiteit Leuven

Jan Vanthienen
Katholieke Universiteit Leuven

Jos De Brabanter
Katholieke Universiteit Leuven