Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Glenn Meyers is active.

Publication


Featured research published by Glenn Meyers.


Journal of the American Statistical Association | 2011

Summarizing Insurance Scores Using a Gini Index

Edward W. Frees; Glenn Meyers; A. David Cummings

Individuals, corporations and government entities regularly exchange financial risks y at prices Π. Comparing distributions of risks and prices can be difficult, particularly when the financial risk distribution is complex. For example, with insurance, it is not uncommon for a risk distribution to be a mixture of 0’s (corresponding to no claims) and a right-skewed distribution with thick tails (the claims distribution). However, analysts do not work in a vacuum, and in the case of insurance they use insurance scores relative to prices, called “relativities,” that point to areas of potential discrepancies between risk and price distributions. Ordering both risks and prices based on relativities, in this article we introduce what we call an “ordered” Lorenz curve for comparing distributions. This curve extends the classical Lorenz curve in two ways, through the ordering of risks and prices and by allowing prices to vary by observation. We summarize the ordered Lorenz curve in the same way as the classic Lorenz curve using a Gini index, defined as twice the area between the curve and the 45-degree line. For a given ordering, a large Gini index signals a large difference between price and risk distributions. We show that the ordered Lorenz curve has desirable properties. It can be expressed in terms of weighted distribution functions. In special cases, curves can be ranked through a partial ordering. We show how to estimate the Gini index and give pointwise consistency and asymptotic normality results. A simulation study and an example using homeowners insurance underscore the potential applications of these methods.
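
As a numerical illustration of the construction summarized above (a sketch, not code from the paper), the snippet below orders risks by relativity, accumulates premium and loss shares to form the ordered Lorenz curve, and takes twice the area between the 45-degree line and the curve as the Gini index. The simulated inputs and variable names are assumptions for demonstration only.

```python
import numpy as np

def ordered_gini(losses, premiums, relativities):
    """Ordered Lorenz curve and Gini index as described in the abstract:
    sort risks by relativity, accumulate premium and loss shares, and take
    twice the area between the 45-degree line and the curve."""
    order = np.argsort(relativities)
    prem_share = np.concatenate(([0.0], np.cumsum(premiums[order]) / premiums.sum()))
    loss_share = np.concatenate(([0.0], np.cumsum(losses[order]) / losses.sum()))
    # Trapezoidal area under the ordered Lorenz curve.
    area = np.sum((prem_share[1:] - prem_share[:-1])
                  * (loss_share[1:] + loss_share[:-1]) / 2.0)
    return 1.0 - 2.0 * area   # twice the area between the curve and the 45-degree line

# Simulated example: many zero claims plus a right-skewed claims distribution.
rng = np.random.default_rng(0)
n = 10_000
premiums = rng.gamma(shape=2.0, scale=500.0, size=n)
losses = np.where(rng.uniform(size=n) < 0.85, 0.0,
                  rng.lognormal(mean=8.0, sigma=1.0, size=n))
relativities = losses.mean() / premiums   # crude relativity: flat score / price
print(ordered_gini(losses, premiums, relativities))
```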


Archive | 2014

Predictive Modeling Applications in Actuarial Science

Edward W. Frees; Glenn Meyers; Richard A. Derrig

Predictive modeling involves the use of data to forecast future events. It relies on capturing relationships between explanatory variables and the predicted variables from past occurrences and exploiting these relationships to predict future outcomes. Forecasting future financial events is a core actuarial skill - actuaries routinely apply predictive-modeling techniques in insurance and other risk-management applications. This book is for actuaries and other financial analysts who are developing their expertise in statistics and wish to become familiar with concrete examples of predictive modeling. The book also addresses the needs of more seasoned practising analysts who would like an overview of advanced statistical topics that are particularly relevant in actuarial practice. Predictive Modeling Applications in Actuarial Science emphasizes lifelong learning by developing tools in an insurance context, providing the relevant actuarial applications, and introducing advanced statistical techniques that can be used by analysts to gain a competitive advantage in situations with complex data.


Archive | 2016

Clustering in General Insurance Pricing

Ji Yao; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Introduction

Clustering is the unsupervised classification of patterns into groups (Jain et al., 1999). It is widely studied and applied in many areas, including computer science, biology, social science and statistics. A significant number of clustering methods have been proposed in Berkhin (2006), Filippone et al. (2008), Francis (2014), Han et al. (2001), Jain et al. (1999), Luxburg (2007), and Xu and Wunsch (2005). In the context of actuarial science, Guo (2003), Pelessoni and Picech (1998), and Sanche and Lonergan (2006) studied some possible applications of clustering methods in insurance. As for territory ratemaking, Christopherson and Werland (1996) considered the use of geographical information systems. A thorough analysis of the application of clustering methods in insurance ratemaking is not known to the author.

The purpose of this chapter is twofold. The first part of the chapter will introduce the typical idea of clustering and state-of-the-art clustering methods, with their application to insurance data. To facilitate the discussion, an insurance dataset is introduced before the discussion of clustering methods. Due to the large number of methods, it is not intended to give a detailed review of every clustering method in the literature. Rather, the focus is on the key ideas of each method and, more importantly, their advantages and disadvantages when applied in insurance ratemaking. In the second part, a clustering method called the exposure-adjusted hybrid (EAH) clustering method is proposed. The purpose of this section is not to advocate any particular clustering method but to illustrate the general approach that could be taken in territory clustering. Because clustering is subjective, it is well recognized that most details should be modified to accommodate the features of the dataset and the purpose of the clustering.

The remainder of the chapter proceeds as follows. Section 6.2 introduces clustering and its application in insurance ratemaking. Section 6.3 introduces a typical insurance dataset that requires clustering analysis on geographic information. Section 6.4 reviews clustering methods and their applicability in insurance ratemaking. Section 6.5 proposes the EAH clustering method and illustrates it step by step using U.K. motor insurance data, with the results presented in Section 6.6. Section 6.7 discusses some other considerations, and conclusions are drawn in Section 6.8. Some useful references are listed in Section 6.9.
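
The EAH method itself is only outlined in this preview, so the sketch below is a generic, hedged illustration of territory clustering rather than the chapter's algorithm: an exposure-weighted k-means on hypothetical postcode coordinates using scikit-learn. The data, the number of clusters, and the weighting scheme are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical territory-level data: coordinates of 500 postcode centroids
# and their policy exposures. None of this is the chapter's U.K. motor data.
coords = rng.uniform(size=(500, 2))
exposure = rng.gamma(shape=2.0, scale=50.0, size=500)

# Exposure-weighted k-means: postcodes with more exposure pull the cluster
# centres toward themselves, a simple stand-in for exposure adjustment.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
territory = kmeans.fit_predict(coords, sample_weight=exposure)

# Each postcode now carries a territory label that could feed a rating plan.
print(np.bincount(territory))
```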


Archive | 2016

Applying Generalized Linear Models to Insurance Data: Frequency/Severity versus Pure Premium Modeling

Dan Tevet; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Chapter Preview. This chapter is a case study on modeling loss costs for motorcycle collision insurance. It covers two approaches to loss cost modeling: fitting separate models for claim frequency and claim severity, and fitting one model for pure premium. Exploratory data analysis, construction of the models, and evaluation and comparison of the results are all discussed.

Introduction

When modeling insurance claims data using generalized linear models (GLM), actuaries have two choices for the model form: 1. create two models – one for the frequency of claims and one for the average severity of claims; 2. create a single model for pure premium (i.e., average claim cost). Actuaries have traditionally used the frequency/severity approach, but in recent years pure premium modeling has gained popularity. The main reason for this development is the growing acceptance and use of the Tweedie distribution. In a typical insurance dataset, the value of pure premium is zero for the vast majority of records, since most policies incur no claims; but, where a claim does occur, pure premium tends to be large. Unfortunately, most “traditional” probability distributions – Gaussian, Poisson, gamma, lognormal, inverse Gaussian, and so on – cannot be used to describe pure premium data. The Gaussian assumes that negative values are possible; the Poisson is generally too thin-tailed to adequately capture the skewed nature of insurance data; and the gamma, lognormal, and inverse Gaussian distributions do not allow for values of zero. Fortunately, there is a distribution that does adequately describe pure premium for most insurance datasets – the Tweedie distribution. The Tweedie is essentially a compound Poisson-gamma process, whereby claims occur according to a Poisson distribution and claim severity, given that a claim occurs, follows a gamma distribution. This is very convenient, since actuaries have traditionally modeled claim frequency with a Poisson distribution and claim severity with a gamma distribution.

In this chapter, we explore three procedures for modeling insurance data: 1. modeling frequency and severity separately; 2. modeling pure premium using the Tweedie distribution (without modeling dispersion); in practice, this is the most common implementation of pure premium modeling; 3. modeling pure premium using a “double GLM approach”; that is, modeling both the mean and dispersion of pure premium. We compare the results of each approach and assess the approaches’ advantages and disadvantages.
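
A minimal sketch of the two model forms on simulated data (assuming statsmodels is available; this is not the chapter's motorcycle collision dataset or code): a Poisson frequency GLM plus a gamma severity GLM, and a single Tweedie GLM for pure premium.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
X = sm.add_constant(x)

# Simulate compound Poisson-gamma data: Poisson claim counts, gamma severities.
counts = rng.poisson(np.exp(-2.0 + 0.3 * x))
pure_premium = np.array([rng.gamma(2.0, 500.0, size=c).sum() for c in counts])

# Approach 1: separate frequency (Poisson) and severity (gamma) GLMs.
freq_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
pos = counts > 0
sev_fit = sm.GLM(pure_premium[pos] / counts[pos], X[pos],
                 family=sm.families.Gamma(link=sm.families.links.Log()),
                 var_weights=counts[pos]).fit()   # weight by claim count

# Approach 2: a single Tweedie GLM for pure premium (log link by default;
# a variance power between 1 and 2 corresponds to compound Poisson-gamma).
tweedie_fit = sm.GLM(pure_premium, X,
                     family=sm.families.Tweedie(var_power=1.5)).fit()

print(freq_fit.params, sev_fit.params, tweedie_fit.params)
```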


Archive | 2016

Frameworks for General Insurance Ratemaking: Beyond the Generalized Linear Model

Peng Shi; James Guszcza; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Chapter Preview. This chapter illustrates the applications of various predictive modeling strategies for determining pure premiums in property-casualty insurance. Consistent with standard predictive modeling practice, we focus on methodologies capable of harnessing risk-level information in the ratemaking process. The use of such micro-level data yields statistical models capable of making finer-grained distinctions between risks, thereby enabling more accurate predictions. This chapter will compare multiple analytical approaches for determining risk-level pure premium. A database of personal automobile risks will be used to illustrate the various approaches. A distinctive feature of our approach is the comparison of two broad classes of modeling frameworks: univariate and multivariate. The univariate approach, most commonly used in industry, specifies a separate model for each outcome variable. The multivariate approach specifies a single model for a vector of outcome variables. Comparing the performance of different models reveals that there is no unique solution, and each approach has its own strengths and weaknesses.

Introduction

Ratemaking, the estimation of policy- or risk-level pure premium for the purpose of pricing insurance contracts, is one of the classical problems of general insurance (aka “nonlife,” aka “property-casualty”) actuarial science. For most of the 20th century, ratemaking practice was largely restricted to the one-way analyses of various rating dimensions (such as risk class, prior claims, geographical territory, and so on) with pure premium. Insurers would occasionally employ Bailey-Simon minimum bias methods to account for dependencies amongst rating dimensions. However, the multivariate approach to ratemaking was typically more honored in the breach than the observance due to the information technology limitations of the time and the difficulty of running the iterative Bailey-Simon computational routines. Neither the one-way nor the Bailey-Simon approach was founded in mainstream statistical modeling concepts. This state of affairs began to change rapidly starting in the mid-1990s, thanks partly to the increasing availability of computing technology, and partly to the recognition that generalized linear models (GLM) offer practitioners a rigorous, mainstream statistical framework within which Bailey-Simon type calculations, and much else, can be performed (see, e.g., Mildenhall, 1999). GLM methodology offers the ability to move beyond fairly rudimentary analyses of low-dimensional aggregated data to the creation of sophisticated statistical models, containing numerous predictive variables, that estimate expected pure premium at the risk rather than class level.
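
The iterative Bailey-Simon routines are only mentioned in passing above; as a hedged illustration of what such a calculation looks like, here is one common two-way multiplicative "balance principle" variant on invented data (not the chapter's automobile database).

```python
import numpy as np

def bailey_minimum_bias(rel, weight, n_iter=200, tol=1e-10):
    """Two-way multiplicative minimum-bias iteration in the Bailey-Simon
    spirit: observed relativities rel[i, j] with exposure weights
    weight[i, j] are approximated by x[i] * y[j] via the balance principle."""
    x = np.ones(rel.shape[0])
    y = np.ones(rel.shape[1])
    for _ in range(n_iter):
        x_new = (weight * rel).sum(axis=1) / (weight @ y)
        y_new = (weight * rel).sum(axis=0) / (weight.T @ x_new)
        done = max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol
        x, y = x_new, y_new
        if done:
            break
    return x / x[0], y * x[0]   # normalize so the base class relativity is 1

# Hypothetical 3x2 classification (say, age band by territory); numbers invented.
rel = np.array([[1.0, 1.3], [1.5, 2.0], [2.2, 3.1]])
weight = np.array([[400., 100.], [300., 150.], [120., 80.]])
print(bailey_minimum_bias(rel, weight))
```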


Archive | 2016

Pure Premium Modeling Using Generalized Linear Models

Ernesto Schirmacher; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Chapter Preview. Pricing insurance products is a complex endeavor that requires blending many different perspectives. Historical data must be properly analyzed, socioeconomic trends must be identified, and competitor actions and the company's own underwriting and claims strategy must be taken into account. Actuaries are well trained to contribute in all these areas and to provide the insights and recommendations necessary for the successful development and implementation of a pricing strategy. In this chapter, we illustrate the creation of one of the fundamental building blocks of a pricing project, namely, pure premiums. We base these pure premiums on generalized linear models of frequency and severity. We illustrate the model building cycle by going through all the phases: data characteristics, exploratory data analysis, one-way and multiway analyses, the fusion of frequency and severity into pure premiums, and validation of the models. The techniques that we illustrate are widely applicable, and we encourage the reader to actively participate via the exercises that are sprinkled throughout the text; after all, data science is not a spectator sport!

Introduction

The pricing of insurance products is a complex undertaking and a key determinant of the long-term success of a company. Today's actuaries play a pivotal role in analyzing historical data and interpreting socioeconomic trends to determine actuarially fair price indications. These price indications form the backbone of the final prices that a company will charge its customers. Final pricing cannot be done by any one group. The final decision must blend many considerations, such as competitor actions, growth strategy, and consumer satisfaction. Therefore, actuaries, underwriters, marketers, distributors, claims adjusters, and company management must come together and collaborate on setting prices. This diverse audience must clearly understand price indications and the implications of various pricing decisions. Actuaries are well positioned to explain and provide the insight necessary for the successful development and implementation of a pricing strategy. Figure 1.1 shows one possible representation of an overall pricing project. Any one box in the diagram represents a significant portion of the overall project. In the following sections, we concentrate on the lower middle two boxes: “Build many models” and “Diagnose and refine models.” We concentrate on the first phase of the price indications that will form the key building block for later discussions, namely, the creation of pure premiums based on two generalized linear models.
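
A small hypothetical example of the "fusion" step described above: with log links, the frequency and severity linear predictors add, so pure premium relativities are the product of the fitted frequency and severity relativities. The coefficients below are invented for illustration.

```python
import numpy as np

# Hypothetical fitted log-link coefficients for one rating factor
# (say, three vehicle-age bands); the numbers are invented.
freq_coef = np.array([0.00, 0.15, 0.35])   # log frequency relativities
sev_coef = np.array([0.00, -0.05, 0.10])   # log severity relativities

# With log links, the pure premium linear predictor is the sum of the
# frequency and severity linear predictors, so relativities multiply.
pure_premium_rel = np.exp(freq_coef + sev_coef)
print(pure_premium_rel)   # base level 1.00, then about 1.11 and 1.57
```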


Archive | 2016

Application of Two Unsupervised Learning Techniques to Questionable Claims: PRIDIT and Random Forest

Louise Francis; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Chapter Preview. Predictive modeling can be divided into two major kinds of modeling, referred to as supervised and unsupervised learning, distinguished primarily by the presence or absence of dependent/target variable data in the data used for modeling. Supervised learning approaches probably account for the majority of modeling analyses. The topic of unsupervised learning was introduced in Chapter 12 of Volume I of this book. This chapter follows up with an introduction to two advanced unsupervised learning techniques: PRIDIT (Principal Components of RIDITs) and Random Forest (a tree-based data-mining method that is most commonly used in supervised learning applications). The methods will be applied to an automobile insurance database to model questionable claims. A couple of additional unsupervised learning methods used for visualization, including multidimensional scaling, will also be briefly introduced. Databases used for detecting questionable claims often do not contain a questionable claims indicator as a dependent variable. Unsupervised learning methods are often used to address this limitation. A simulated database containing features observed in actual questionable claims data was developed for this research. The methods in this chapter will be applied to this data. The database is available online at the book's website.

Introduction

An introduction to unsupervised learning techniques as applied to insurance problems is provided by Francis (2014) as part of Predictive Modeling Applications in Actuarial Science, Volume I, a text intended to introduce actuaries and insurance professionals to predictive modeling analytic techniques. As an introductory work, it focused on two classical approaches: principal components and clustering. Both are standard statistical methods that have been in use for many decades and are well known to statisticians. The classical approaches have been augmented by many other unsupervised learning methods, such as neural networks, association rules, and link analysis. While these are frequently cited methods for unsupervised learning, only the Kohonen neural network method will be briefly discussed in this chapter. The two methods featured here, PRIDIT and Random Forest clustering, are less well known and less widely used. Brockett et al. (2003) introduced the application of PRIDITs to the detection of questionable claims in insurance. Lieberthal (2008) has applied the PRIDIT method to hospital quality studies.
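
A compressed sketch of the PRIDIT idea as commonly described (RIDIT-score each ordinal indicator, then take the first principal component as the suspicion score); the simulated indicators are assumptions, and the fixed-point weight refinement of Brockett et al. (2003) is omitted.

```python
import numpy as np

def ridit_scores(column):
    """Map each category of an ordinal indicator to
    P(value below) - P(value above), giving scores in [-1, 1]."""
    cats, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    above = 1.0 - below - p
    return (below - above)[np.searchsorted(cats, column)]

# Hypothetical claim-level ordinal fraud indicators (0 = least suspicious);
# a real application would use the questionable-claims database instead.
rng = np.random.default_rng(2)
indicators = rng.integers(0, 3, size=(1000, 5))

F = np.column_stack([ridit_scores(indicators[:, j])
                     for j in range(indicators.shape[1])])

# PRIDIT-style score: first principal component of the RIDIT-scored matrix
# (sign is arbitrary; the original method refines the weights iteratively).
U, s, _ = np.linalg.svd(F - F.mean(axis=0), full_matrices=False)
pridit_score = U[:, 0] * s[0]
print(pridit_score[:5])
```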


Archive | 2016

Finite Mixture Model and Workers’ Compensation Large-Loss Regression Analysis

Luyang Fu; Edward W. Frees; Glenn Meyers; Richard A. Derrig

Chapter Preview. Actuaries have been studying loss distributions since the emergence of the profession. Numerous studies have found that the widely used distributions, such as lognormal, Pareto, and gamma, do not fit insurance data well. Mixture distributions have gained popularity in recent years because of their flexibility in representing insurance losses from various sizes of claims, especially on the right tail. To incorporate mixture distributions into the framework of popular generalized linear models (GLMs), the authors propose to use finite mixture models (FMMs) to analyze insurance loss data. The regression approach enhances the traditional whole-book distribution analysis by capturing the impact of individual explanatory variables. FMM improves the standard GLM by addressing distribution-related problems, such as heteroskedasticity, over- and underdispersion, unobserved heterogeneity, and fat tails. A case study with applications to claims triage and to high-deductible pricing using workers’ compensation data illustrates those benefits.

Introduction

Conventional Large Loss Distribution Analysis

Large loss distributions have been extensively studied because of their importance in actuarial applications such as increased limit factor and excess loss pricing (Miccolis, 1977), reinsurance retention and layer analysis (Clark, 1996), high-deductible pricing (Teng, 1994), and enterprise risk management (Wang, 2002). Klugman et al. (1998) discussed the frequency, severity, and aggregate loss distributions in detail in their book, which has been on the Casualty Actuarial Society syllabus for the exam Construction and Evaluation of Actuarial Models for many years. Keatinge (1999) demonstrated that popular single distributions, including those in Klugman et al. (1998), are not adequate to represent insurance losses well and suggested using mixed exponential distributions to improve the goodness of fit. Beirlant et al. (2001) proposed a flexible generalized Burr-gamma distribution to address the heavy tail of losses and validated the effectiveness of this parametric distribution by comparing its implied excess-of-loss reinsurance premium with those of other nonparametric and semi-parametric distributions. Matthys et al. (2004) presented an extreme quantile estimator to deal with extreme insurance losses. Fleming (2008) showed that the sample average of any small sample from a skewed population is most likely below the true mean and warned of the danger of making insurance pricing decisions without considering extreme events. Henry and Hsieh (2009) stressed the importance of understanding the heavy tail behavior of a loss distribution and developed a tail index estimator assuming that insurance losses possess Pareto-type tails.
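
The FMM regression itself is more than a snippet, but the underlying idea of a finite mixture for loss severities can be sketched with scikit-learn: a two-component Gaussian mixture fitted to log losses, which is a lognormal mixture on the original scale. The simulated losses and the choice of two components are assumptions, not the chapter's workers' compensation data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Simulated losses: many small claims plus a heavy right tail of large ones.
small = rng.lognormal(mean=7.0, sigma=0.8, size=9000)
large = rng.lognormal(mean=10.0, sigma=1.2, size=1000)
losses = np.concatenate([small, large])

# A two-component Gaussian mixture on log(losses) is a lognormal mixture on
# the original scale - the simplest finite mixture model for loss severities.
gm = GaussianMixture(n_components=2, random_state=0)
gm.fit(np.log(losses).reshape(-1, 1))

print(gm.weights_)         # mixing proportions (roughly 0.9 / 0.1 here)
print(np.exp(gm.means_))   # component medians on the original loss scale
```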


Archive | 2014

Predictive Modeling Applications in Actuarial Science: Frontmatter

Edward W. Frees; Richard A. Derrig; Glenn Meyers



Archive | 2014

Predictive Modeling Applications in Actuarial Science: Index

Edward W. Frees; Richard A. Derrig; Glenn Meyers


Collaboration


Glenn Meyers's collaborations.

Top Co-Authors

Edward W. Frees, University of Wisconsin-Madison

Peng Shi, University of Wisconsin-Madison

Jean-Philippe Boucher, Université du Québec à Montréal

Greg Taylor, University of New South Wales