Publications


Featured research published by John Banasik.


Journal of the Operational Research Society | 2003

Sample selection bias in credit scoring models

John Banasik; Jonathan Crook; Lyn C. Thomas

One of the aims of credit scoring models is to predict the probability of repayment of any applicant, yet such models are usually parameterised using a sample of accepted applicants only. This may lead to biased estimates of the parameters. In this paper we examine two issues. First, we compare the classification accuracy of a model based only on accepted applicants with that of one based on a sample of all applicants. We find only a minimal difference, given the cutoff scores for the old model used by the data supplier. Using a simulated model, we examine the predictive performance of models estimated from bands of applicants ranked by predicted creditworthiness. We find that the lower the risk band of the training sample, the less accurate the predictions for all applicants. We also find that the lower the risk band of the training sample, the greater the overestimate of the true performance of the model when tested on a sample of applicants within the same risk band, as a financial institution would do. The overestimation may be very large. Second, we examine the predictive accuracy of a bivariate probit model with selection (BVP). This parameterises the accept–reject model, allowing (unknown) omitted variables to be correlated with those of the original good–bad model. The BVP model may improve accuracy if the loan officer has overridden a scoring rule. We find that the BVP model sometimes yields a small improvement.
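
As a concrete illustration of the first comparison, the sketch below simulates a through-the-door population, books only applicants above a historical cutoff, and contrasts a scorecard trained on accepted cases with one trained on all applicants. Everything here, including the data-generating process, the cutoff, and the variable names, is an illustrative assumption, not the paper's data or procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 3))                           # applicant characteristics
latent = X @ np.array([1.0, 0.5, -0.8])               # latent creditworthiness
good = rng.random(n) < 1.0 / (1.0 + np.exp(-latent))  # repayment outcome

# Historical accept/reject rule: only applicants above a cutoff were booked,
# so only their repayment outcomes would normally be observed.
accepted = latent + rng.normal(scale=0.5, size=n) > 0.0

holdout = rng.random(n) < 0.3                         # evaluate on ALL applicants
train = ~holdout

m_acc = LogisticRegression().fit(X[train & accepted], good[train & accepted])
m_all = LogisticRegression().fit(X[train], good[train])

for name, m in [("accepted-only", m_acc), ("all-applicant", m_all)]:
    auc = roc_auc_score(good[holdout], m.predict_proba(X[holdout])[:, 1])
    print(f"{name} model: AUC on all applicants = {auc:.3f}")
```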


European Journal of Operational Research | 2007

Reject inference, augmentation, and sample selection

John Banasik; Jonathan Crook

Many researchers see the need for reject inference in credit scoring models as arising from a sample selection problem, whereby a missing variable results in omitted variable bias. Alternatively, practitioners often see the problem as one of missing data, where the new model is biased because the behaviour of the omitted cases differs from that of the cases that make up its estimation sample. To attempt to correct for this, differential weights are applied to the new cases. The aim of this paper is to see whether using a Heckman-style sample selection model and sampling weights together improves predictive performance compared with either technique used alone. The paper uses a sample of applicants in which virtually every applicant was accepted, which allows us to compare the actual performance of each model with the performance of models based only on accepted cases.
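
A minimal sketch of the reweighting idea described above, on simulated data: fit an accept/reject model on the full through-the-door population, then weight each accepted case by the inverse of its estimated acceptance probability when fitting the good/bad model. All names and the data-generating process are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(size=(n, 4))                               # application variables
accept = X[:, 0] + rng.normal(size=n) > 0.0               # past accept decision
beta = np.array([0.8, -0.4, 0.3, 0.0])
good = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta)))  # repayment outcome

# Stage 1: accept/reject model estimated on the full population.
p_accept = LogisticRegression().fit(X, accept).predict_proba(X)[:, 1]

# Stage 2: good/bad model on accepted cases only, each weighted by
# 1 / P(accept) so that accepted cases stand in for similar rejects.
w = 1.0 / p_accept[accept]
gb_model = LogisticRegression().fit(X[accept], good[accept], sample_weight=w)
```

The paper's question is whether combining such weights with a Heckman-style selection model outperforms either device used alone; this sketch shows only the weighting half.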


Journal of the Operational Research Society | 2001

Scoring by usage

John Banasik; Jonathan Crook; Lyn C. Thomas

This paper aims to discover whether the predictive accuracy of a new applicant scoring model for a credit card can be improved by estimating separate scoring models for applicants who are predicted to have high or low usage of the card. Two models are estimated. First we estimate a model to explain the desired usage of a card, and second we estimate two further scoring models separately, one for those applicants whose usage is predicted to be high and one for those for whom it is predicted to be low. The desired usage model is a two-stage Heckman model that takes into account the fact that the observed usage of accepted applicants is constrained by their credit limit. Thus a model of the determinants of the credit limit, and one of usage, are both estimated using Heckman's ML estimator. We find a large number of variables to be correlated with desired usage. We also find that the two-stage scoring methodology gives only very marginal improvements over a single-stage scoring model, and that we are able to predict a greater percentage of bad payers among low users than among high users, and a greater percentage of good payers among high users than among low users.
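
The two-stage idea can be sketched as follows: predict usage first, then estimate a separate scorecard for each predicted-usage segment. The paper estimates desired usage with Heckman's ML estimator to handle censoring at the credit limit; this sketch substitutes a plain linear regression purely for illustration, and all data are simulated assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 8_000
X = rng.normal(size=(n, 3))                               # application variables
usage = X @ np.array([0.6, 0.2, -0.3]) + rng.normal(size=n)
good = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, -0.7, 0.4]))))

# Stage 1: usage model (the paper corrects this stage for the fact that
# observed usage is censored by the credit limit; this sketch does not).
pred_usage = LinearRegression().fit(X, usage).predict(X)
high = pred_usage > np.median(pred_usage)

# Stage 2: one scorecard per predicted-usage segment.
score_high = LogisticRegression().fit(X[high], good[high])
score_low = LogisticRegression().fit(X[~high], good[~high])
```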


Journal of the Operational Research Society | 2010

Reject inference in survival analysis by augmentation

John Banasik; Jonathan Crook

The literature suggests that the commonly used augmentation method of reject inference achieves no appreciable benefit in the context of logistic and probit regression models. Ranking is not improved and the ability to discern a correct cut-off is undermined. This paper considers the application of augmentation to profit scoring applicants by means of survival analysis, in particular the Cox proportional hazards model. This new context involves more elaborate models answering more specific questions, such as when default will occur and what its precise financial implication will be. Also considered is the extent to which the rejection rate is critical to the potential usefulness of reject inference, and how far augmentation meets that potential. The conclusion is essentially that augmentation yields only negative benefits, and that the scope for reject inference in this context pertains mainly to circumstances where a high proportion of applicants have been rejected.
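
A minimal sketch, assuming the lifelines package, of the survival-analysis setting above: a Cox proportional hazards model fitted to accepted cases with augmentation-style weights. The simulated data, the acceptance rule, and the weight construction are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5_000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})

# Time to default from a proportional-hazards data-generating process,
# censored at the end of a two-year observation window.
hazard = np.exp(0.5 * df["x1"] - 0.3 * df["x2"])
df["duration"] = rng.exponential(1.0 / hazard)
df["default"] = (df["duration"] < 2.0).astype(int)
df.loc[df["default"] == 0, "duration"] = 2.0

# Historical accept rule and augmentation weights: accepted cases are
# weighted by 1 / P(accept), here the known probability in the simulation.
accept = df["x1"] + rng.normal(size=n) > 0.0
acc = df[accept].copy()
acc["w"] = 1.0 / norm.cdf(df.loc[accept, "x1"])

cph = CoxPHFitter()
cph.fit(acc, duration_col="duration", event_col="default", weights_col="w")
cph.print_summary()
```

lifelines' CoxPHFitter accepts a weights_col directly, which is what makes augmentation-style weights straightforward to apply in this setting.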


Journal of the Operational Research Society | 2005

Credit scoring, augmentation and lean models

John Banasik; Jonathan Crook

If a credit scoring model is built using only applicants who have previously been accepted for credit, such non-random sample selection may produce biased estimates of the model parameters, and accordingly the model's predictions of repayment performance may not be optimal. Previous empirical research suggests that the omission of rejected applicants has a detrimental impact on model estimation and prediction. This paper explores the extent to which, given the previous cutoff score applied to decide on accepted applicants, the number of included variables influences the efficacy of a commonly used reject inference technique, reweighting. The analysis benefits from the availability of a rare sample in which virtually no applicant was denied credit. The general indication is that the efficacy of reject inference is little influenced by either model leanness or the interaction between model leanness and the rejection rate that determined the sample. However, there remains some hint that very lean models may benefit from reject inference where modelling is conducted on data characterised by a very high rate of applicant rejection.
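
To make the leanness question concrete, the sketch below builds reweighted and unweighted scorecards on progressively fewer variables and compares their ranking performance on the full population. The data-generating process, acceptance rule, and leanness grid are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 15_000
X = rng.normal(size=(n, 8))
beta = np.array([0.9, -0.6, 0.5, 0.3, -0.2, 0.1, 0.05, 0.0])
good = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta)))
accept = X @ beta + rng.normal(size=n) > -0.5         # high acceptance rate

# Augmentation weights from an accept/reject model on the full population.
p_accept = LogisticRegression().fit(X, accept).predict_proba(X)[:, 1]
w = 1.0 / p_accept[accept]

for k in (8, 4, 2):                                   # full model to very lean
    Xk = X[:, :k]
    plain = LogisticRegression().fit(Xk[accept], good[accept])
    rewt = LogisticRegression().fit(Xk[accept], good[accept], sample_weight=w)
    a1 = roc_auc_score(good, plain.predict_proba(Xk)[:, 1])
    a2 = roc_auc_score(good, rewt.predict_proba(Xk)[:, 1])
    print(f"{k} variables: plain AUC={a1:.3f}, reweighted AUC={a2:.3f}")
```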


Economic Systems Research | 1996

The Compatibility of Red Book National Accounts Estimates with the Barna 1935 Input–Output Estimates

John Banasik

In 1952, Tibor Barna published an input–output table of the 1935 UK economy. From 1954 to 1972, a number of ‘Red Books’ published under the direction of Richard Stone provided national accounting statistics for 1920–38. Casual inspection suggests some serious disagreement between the two data sets, but much of this arises out of mismatching of incompatibly defined items. This paper assesses the compatibility of final demand and net output estimates that appear on the periphery of the Barna table with corresponding Red Book statistics. Given the admitted margins of error, an acceptable pattern of discrepancy emerges from the comparisons. Service industry incomes are the sole major exception to this finding.


Journal of the Operational Research Society | 1999

Not if but when will borrowers default

John Banasik; Jonathan Crook; Lyn C. Thomas


Journal of Banking and Finance | 2004

Does reject inference really improve the performance of application scoring models?

Jonathan Crook; John Banasik


The International Review of Retail, Distribution and Consumer Research | 1996

Does scoring a subpopulation make a difference?

John Banasik; Jonathan Crook; Lyn C. Thomas


International Journal of Forecasting | 2012

Forecasting and explaining aggregate consumer credit delinquency behaviour

Jonathan Crook; John Banasik

Collaboration


John Banasik's top co-authors:

Lyn C. Thomas (University of Southampton)
Jake Ansell (University of Edinburgh)