
Publications


Featured research published by Katrien Antonio.


The North American Actuarial Journal | 2006

Lognormal Mixed Models for Reported Claims Reserves

Katrien Antonio; Jan Beirlant; Tom Hoedemakers; Robert Verlaak

Traditional claims-reserving techniques are based on so-called run-off triangles containing aggregate claim figures. Such a triangle provides a summary of an underlying data set with individual claim figures. This contribution explores the interpretation of the available individual data in the framework of longitudinal data analysis. Making use of the theory of linear mixed models, a flexible model for loss reserving is built. Whereas traditional claims-reserving techniques do not lead directly to predictions for individual claims, the mixed model enables such predictions on a sound statistical basis with, for example, confidence regions. Both a likelihood-based and a Bayesian approach are considered. In the frequentist approach, expressions for the mean squared error of prediction of an individual claim reserve, origin-year reserves, and the total reserve are derived. Using MCMC techniques, the Bayesian approach allows simulation from the complete predictive distribution of the reserves and the calculation of various risk measures. The paper ends with an illustration of the suggested techniques on a data set from practice, consisting of Belgian automotive third-party liability claims. The results of the mixed-model analysis are compared with those obtained from traditional claims-reserving techniques for run-off triangles. For the data under consideration, the lognormal mixed model fits the observed individual data well. It leads to individual predictions comparable to those obtained by applying chain-ladder development factors to individual data. Concerning the predictive power on the aggregate level, the mixed model leads to reasonable predictions and performs comparably to, and often better than, the stochastic chain ladder for aggregate data.
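As a rough numerical illustration of the individual-claims idea (not the paper's actual mixed model, which adds random effects, MSEP expressions and a Bayesian treatment), the sketch below fits a plain fixed-effects regression to simulated log payments with origin-year and development-year effects, then predicts the unobserved lower triangle on the original scale with the usual lognormal bias correction. All data and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical individual-claim data: log payments driven by an origin-year
# level and a development-year pattern (all numbers invented).
n_origin, n_dev = 5, 5
alpha = rng.normal(8.0, 0.3, n_origin)          # origin-year levels
beta = np.array([0.0, -0.5, -1.0, -1.6, -2.3])  # development pattern
sigma = 0.2

rows = []
for i in range(n_origin):
    for j in range(n_dev):
        y = alpha[i] + beta[j] + rng.normal(0.0, sigma)
        observed = i + j < n_origin             # upper run-off triangle
        rows.append((i, j, y, observed))

def design(i, j):
    """Dummy coding with reference cell (i=0, j=0)."""
    x = np.zeros(1 + (n_origin - 1) + (n_dev - 1))
    x[0] = 1.0
    if i > 0:
        x[i] = 1.0
    if j > 0:
        x[n_origin - 1 + j] = 1.0
    return x

X = np.array([design(i, j) for i, j, y, obs in rows if obs])
z = np.array([y for i, j, y, obs in rows if obs])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
s2 = np.sum((z - X @ coef) ** 2) / (len(z) - len(coef))

# Predict the unobserved lower triangle on the original scale,
# using the lognormal bias correction exp(mu + s2 / 2).
reserve = sum(float(np.exp(design(i, j) @ coef + s2 / 2))
              for i, j, y, obs in rows if not obs)
print(round(reserve, 1))
```

The point of working on individual log payments, as in the paper, is that the same fitted model yields a prediction (and, in the mixed-model setting, a predictive distribution) per claim, not only per aggregate cell.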


Scandinavian Actuarial Journal | 2016

The Impact of Multiple Structural Changes on Mortality Predictions

Frank Van Berkum; Katrien Antonio; Michel Vellekoop

Most mortality models proposed in recent literature rely on the standard ARIMA framework (in particular, a random walk with drift) to project mortality rates. As a result, the projections are highly sensitive to the calibration period. We therefore analyse the impact of allowing for multiple structural changes on a large collection of mortality models. We find that this may lead to more robust projections for the period effect, but that there is only a limited effect on the ranking of the models based on backtesting criteria, since there is often not yet sufficient statistical evidence for structural changes. However, there are cases for which we do find improvements in estimates, and we therefore conclude that one should not exclude beforehand the possibility that structural changes may have occurred.
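The sensitivity to the calibration period is easy to reproduce: a random walk with drift fitted to a period-effect series that contains a (here simulated, entirely hypothetical) break in its drift gives very different forecasts depending on the calibration window:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical period-effect series kappa_t with a structural break in drift:
# mortality improvement accelerates after t = 40.
t = np.arange(60)
drift = np.where(t < 40, -0.5, -1.5)
kappa = np.cumsum(drift + rng.normal(0.0, 0.3, 60))

def project(series, horizon):
    """Random walk with drift: the drift estimate is the mean first difference."""
    d = np.diff(series).mean()
    return series[-1] + d * np.arange(1, horizon + 1)

# Same model, different calibration windows, very different 10-step forecasts.
full = project(kappa, 10)          # calibrated on the entire history
recent = project(kappa[-20:], 10)  # calibrated after the structural change
print(full[-1], recent[-1])
```

Calibrating only on the post-break window yields a markedly steeper projected decline, which is exactly the kind of divergence that motivates testing formally for structural changes rather than fixing the calibration period ad hoc.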


ASTIN Bulletin | 2010

A multilevel analysis of intercompany claim counts

Katrien Antonio; Edward W. Frees; Emiliano A. Valdez

It is common for professional associations and regulators to combine the claims experience of several insurers into a database known as an “intercompany” experience data set. In this paper, we analyze data on claim counts provided by the General Insurance Association of Singapore, an organization consisting of most of the general insurers in Singapore. Our data come from the financial records of automobile insurance policies followed over a period of nine years. Because the source contains the pooled experience of several insurers, we are able to study company effects on claim behavior, an area that has not been systematically addressed in either the insurance or the actuarial literature. We analyze this intercompany experience using multilevel models. The multilevel nature of the data arises because each vehicle is observed over a period of years and is insured by an insurance company under a “fleet” policy. Fleet policies are umbrella-type policies issued to customers whose insurance covers more than a single vehicle. We investigate vehicle, fleet and company effects using various count distribution models (Poisson, negative binomial, zero-inflated and hurdle Poisson). The performance of these various models is compared; we demonstrate how our model can be used to update a priori premiums to a posteriori premiums, a common practice in experience-rated premium calculations. Through this formal model structure, we provide insights into the effects that company-specific practice has on claims experience, even after controlling for vehicle and fleet effects.
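The a priori to a posteriori premium update mentioned above can be sketched in its simplest form, a Poisson-gamma (negative binomial) credibility update; the prior shape and the a priori frequencies below are illustrative assumptions, not values from the paper:

```python
# Poisson-gamma credibility sketch: claim counts are Poisson with a random
# frequency factor that is gamma(a, a) distributed a priori (mean 1).
# All numbers below are invented for illustration.
a = 1.5                            # gamma prior shape (heterogeneity)
prior_freq = [0.10, 0.12, 0.11]    # a priori expected claim counts per year
claims = [0, 2, 0]                 # observed claim counts for one vehicle

# Posterior mean of the frequency factor:
# (a + total observed claims) / (a + total a priori expectation).
factor = (a + sum(claims)) / (a + sum(prior_freq))

# A posteriori premium = next year's a priori frequency times the factor.
posterior_freq = factor * 0.11
print(round(factor, 3), round(posterior_freq, 4))
```

A vehicle with more claims than its a priori expectation gets a factor above one (a malus), and vice versa; the paper's multilevel models refine this by letting vehicle, fleet and company effects all enter the predictive distribution.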


Lifetime Data Analysis | 2016

Multivariate mixtures of Erlangs for density estimation under censoring

Roel Verbelen; Katrien Antonio; Gerda Claeskens

Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with randomly censored and fixed truncated data. The effectiveness of the proposed algorithm is demonstrated on simulated as well as real data sets.
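A heavily stripped-down version of such an EM fit gives the flavor: univariate data, a fixed shape vector, a common scale, and no censoring or truncation, all of which are simplifications relative to the paper's procedure. The closed-form M-step updates for the weights and the common scale are the standard ones for Erlang mixtures:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)

# Simulated data: 40/60 mixture of Erlang(2, 1) and Erlang(7, 1).
n = 2000
comp = rng.random(n) < 0.4
x = np.where(comp, rng.gamma(2, 1.0, n), rng.gamma(7, 1.0, n))

shapes = np.array([2, 7])          # shape vector held fixed here
w = np.array([0.5, 0.5])           # initial mixing weights
theta = x.mean() / shapes.mean()   # crude initial common scale

for _ in range(200):
    # E-step: responsibility of each Erlang component for each observation.
    dens = np.array([w[k] * x ** (r - 1) * np.exp(-x / theta)
                     / (theta ** r * factorial(r - 1))
                     for k, r in enumerate(shapes)])
    z = dens / dens.sum(axis=0)
    # M-step: closed-form updates for weights and the common scale theta.
    w = z.mean(axis=1)
    theta = x.sum() / (z * shapes[:, None]).sum()

print(w.round(2), round(theta, 2))
```

The recovered weights and scale should sit close to the simulation truth (0.4/0.6 and 1.0). The paper's algorithm additionally searches over the shape vectors and reweights the E-step for censored and truncated observations.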


The North American Actuarial Journal | 2015

Reserving by Conditioning on Markers of Individual Claims: A Case Study Using Historical Simulation

Els Godecharle; Katrien Antonio

This article explores the use of claim-specific characteristics, so-called claim markers, for loss reserving with individual claims. Starting from the approach of Rosenlund and using the technique of historical simulation, we develop a stochastic Reserve by Detailed Conditioning method that is applicable to a microlevel data set with detailed information on individual claims. We construct the predictive distribution of the outstanding loss reserve by simulating future payments of a claim, given its claim markers. We demonstrate the performance of the method on a portfolio of general liability insurance policies for private individuals from a European insurance company. In doing so, we explore how to incorporate different kinds of claim markers and evaluate the impact of the set of markers and their specification on the predictive distribution of the outstanding reserve.


Insurance Mathematics & Economics | 2017

Modelling censored losses using splicing: a global fit strategy with mixed Erlang and extreme value distributions

Tom Reynkens; Roel Verbelen; Jan Beirlant; Katrien Antonio

In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modeling the whole range of the losses using a standard distribution is usually very hard and often impossible due to the specific characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body which covers light and moderate losses, and a heavy-tailed distribution for the tail to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach for censored and/or truncated data. Relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement.
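The splicing construction itself is straightforward to write down: a body distribution renormalised to the interval below the splicing point, a Pareto density above it, and a weight giving the probability mass of each piece. The sketch below uses a single Erlang body instead of the paper's mixed Erlang body, and all parameter values are invented:

```python
import numpy as np
from math import exp, factorial

t, p = 10.0, 0.9       # splicing point and probability mass of the body
r, theta = 3, 2.0      # Erlang shape and scale for the body
alpha = 2.5            # Pareto tail index

def erlang_pdf(x):
    return x ** (r - 1) * exp(-x / theta) / (theta ** r * factorial(r - 1))

def erlang_cdf(x):
    lam = x / theta    # Erlang cdf via the Poisson tail-sum identity
    return 1.0 - sum(exp(-lam) * lam ** k / factorial(k) for k in range(r))

def spliced_pdf(x):
    if x <= t:
        return p * erlang_pdf(x) / erlang_cdf(t)             # renormalised body
    return (1 - p) * alpha * t ** alpha / x ** (alpha + 1)   # Pareto tail

# Sanity check: the spliced density should integrate to (roughly) one.
grid = np.linspace(0.0, 200.0, 200001)
dx = grid[1] - grid[0]
total = sum(spliced_pdf(v) for v in grid) * dx
print(round(total, 3))
```

Because each piece is renormalised, the body contributes exactly p and the tail 1 - p to the total mass, which is what makes the "light body, heavy tail" combination a proper density.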


IEEE International Conference on Data Science and Advanced Analytics | 2015

Profit maximizing logistic regression modeling for customer churn prediction

Eugen Stripling; Seppe vanden Broucke; Katrien Antonio; Bart Baesens; Monique Snoeck

The selection of classifiers which are profitable is becoming more and more important in real-life situations such as customer churn management campaigns in the telecommunication sector. In previous works, the expected maximum profit (EMP) metric has been proposed, which explicitly takes the cost of the offer and the customer lifetime value (CLV) of retained customers into account. It thus permits the selection of the most profitable classifier, which better aligns with business requirements of end-users and stakeholders. However, modelers are currently limited to applying this metric in the evaluation step. Hence, we expand on the previous body of work and introduce a classifier that incorporates the EMP metric in the construction of a classification model. Our technique, called ProfLogit, explicitly takes profit maximization concerns into account during the training step, rather than the evaluation step. The technique is based on a logistic regression model which is trained using a genetic algorithm (GA). By means of an empirical benchmark study applied to real-life data sets, we show that ProfLogit generates substantial profit improvements compared to the classic logistic model for many data sets. In addition, profit-maximized coefficient estimates differ considerably in magnitude from the maximum likelihood estimates.


Journal of the Royal Statistical Society, Series C (Applied Statistics) | 2018

Unraveling the predictive power of telematics data in car insurance pricing

Roel Verbelen; Katrien Antonio; Gerda Claeskens

A data set from a Belgian telematics product aimed at young drivers is used to identify how car insurance premiums can be designed based on the telematics data collected by a black box installed in the vehicle. In traditional pricing models for car insurance, the premium depends on self-reported rating variables (e.g. age, postal code) which capture characteristics of the policy(holder) and the insured vehicle and are often only indirectly related to the accident risk. Using telematics technology enables tailor-made car insurance pricing based on the driving behavior of the policyholder. We develop a statistical modeling approach using generalized additive models and compositional predictors to quantify and interpret the effect of telematics variables on the expected claim frequency. We find that such variables increase the predictive power and render the use of gender as a discriminating rating variable redundant.
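Compositional predictors, such as the shares of distance driven on different road types, live on a simplex and cannot enter a regression model directly. One standard device (used here purely as an illustration; the paper's exact construction within its generalized additive models may differ) is an isometric log-ratio transform that maps a D-part composition to D - 1 unconstrained coordinates:

```python
import numpy as np

# Telematics-style compositional predictor: hypothetical shares of distance
# driven on three road types per driver. Each row sums to one.
shares = np.array([
    [0.60, 0.30, 0.10],   # driver 1: urban, rural, motorway
    [0.20, 0.30, 0.50],   # driver 2
    [0.45, 0.45, 0.10],   # driver 3
])

def ilr(c):
    """Isometric log-ratio transform of a 3-part composition (-> 2 coordinates)."""
    z1 = np.sqrt(1 / 2) * np.log(c[:, 0] / c[:, 1])
    g1 = np.sqrt(c[:, 0] * c[:, 1])          # geometric mean of the first subpart
    z2 = np.sqrt(2 / 3) * np.log(g1 / c[:, 2])
    return np.column_stack([z1, z2])

z = ilr(shares)
print(z.shape)   # each row is now an unconstrained 2-d covariate vector
```

The transformed coordinates can then be fed into a GAM or GLM like any other continuous covariates; the neutral composition (equal shares) maps to the origin.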


Archive | 2014

Multivariate Mixtures of Erlangs for Density Estimation Under Censoring and Truncation

Roel Verbelen; Katrien Antonio; Gerda Claeskens

Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with censored and truncated data. The effectiveness of the proposed algorithm, which has been implemented in R, is demonstrated on simulated as well as real data sets. The addendum to this paper is available at the following URL: http://ssrn.com/abstract=2593016


Swarm and Evolutionary Computation | 2017

Profit maximizing logistic model for customer churn prediction using genetic algorithms

Eugen Stripling; Seppe vanden Broucke; Katrien Antonio; Bart Baesens; Monique Snoeck

To detect churners in a vast customer base, as is the case with telephone service providers, companies heavily rely on predictive churn models to remain competitive in a saturated market. In previous work, the expected maximum profit measure for customer churn (EMPC) has been proposed in order to determine the most profitable churn model. However, profit concerns are not directly integrated into the model construction. Therefore, we present a classifier, named ProfLogit, that maximizes the EMPC in the training step using a genetic algorithm, where ProfLogit's interior model structure resembles a lasso-regularized logistic model. Additionally, we introduce threshold-independent recall and precision measures based on the expected profit-maximizing fraction, which is derived from the EMPC framework. Our proposed technique aims to construct profitable churn models for retention campaigns to satisfy the business requirement of profit maximization. In a benchmark study with nine real-life data sets, ProfLogit exhibits the overall highest out-of-sample EMPC performance as well as the overall best profit-based precision and recall values. As a result of the lasso resemblance, ProfLogit also performs a profit-based feature selection in which features are selected that would otherwise be excluded with an accuracy-based measure, which is another noteworthy finding.
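The core idea of training a logistic model with a genetic algorithm against a profit criterion can be sketched in a few lines. The profit function below is a crude stand-in for the EMPC measure (it pays a fixed amount per correctly targeted churner in the top decile and charges a fixed cost per contacted non-churner), the data are simulated, and there is no lasso penalty, so this is only a toy version of ProfLogit:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy churn data: churn probability increases with a linear score of 2 features.
n = 400
X = rng.normal(size=(n, 2))
y = (rng.random(n) < 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))).astype(float)

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

def profit(beta):
    """Hypothetical profit: contact the top decile by predicted churn score;
    each retained churner earns 10, each contacted non-churner costs 1."""
    scores = sigmoid(X @ beta[1:] + beta[0])
    top = np.argsort(scores)[-n // 10:]
    return 10 * y[top].sum() - (1 - y[top]).sum()

# Minimal (mu + lambda) evolutionary loop over the coefficient vector
# (intercept plus two slopes), maximizing profit rather than likelihood.
pop = rng.normal(size=(30, 3))
for _ in range(60):
    fit = np.array([profit(b) for b in pop])
    parents = pop[np.argsort(fit)[-10:]]                          # elitist selection
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.3, (20, 3))
    pop = np.vstack([parents, children])

best = pop[np.argmax([profit(b) for b in pop])]
print(profit(best))
```

Because the fitness function only depends on the ranking of customers (and hence the targeted fraction), the optimizer is free to find coefficients that differ markedly from the maximum likelihood estimates, which is the behavior the paper reports for the real EMPC criterion.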

Collaboration


Dive into Katrien Antonio's collaborations.

Top Co-Authors

Roel Verbelen (Katholieke Universiteit Leuven)
Gerda Claeskens (Katholieke Universiteit Leuven)
Jan Beirlant (University of the Free State)
Michel Denuit (Université catholique de Louvain)
Tom Hoedemakers (Katholieke Universiteit Leuven)
Els Godecharle (Katholieke Universiteit Leuven)
Mathieu Pigeon (Université catholique de Louvain)