Russell Gerrard
City University London
Publications
Featured research published by Russell Gerrard.
Mathematical Methods of Operations Research | 2007
Łukasz Delong; Russell Gerrard
We consider a collective insurance risk model with a compound Cox claim process, in which the evolution of the claim intensity is described by a stochastic differential equation driven by a Brownian motion. The insurer operates in a financial market consisting of a risk-free asset with a constant force of interest and a risky asset whose price is driven by Lévy noise. We investigate two optimization problems. The first is the classical mean-variance portfolio selection problem, for which we derive the efficient frontier. The second includes, in addition to the mean-variance terminal objective, a running cost penalizing deviations of the insurer's wealth from a specified profit-solvency target, which is itself a random process. To find optimal strategies we apply techniques from stochastic control theory.
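As a rough illustration of the mean-variance trade-off studied here, the sketch below simulates an insurer's terminal wealth for a few constant risky allocations. It deliberately simplifies the model: a compound Poisson claim process (at most one claim per small time step) stands in for the compound Cox process, geometric Brownian motion stands in for the Lévy-driven risky asset, and every parameter value is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_steps, n_paths = 1.0, 250, 20_000
    dt = T / n_steps
    r, mu, sigma = 0.02, 0.06, 0.20      # risk-free rate, risky drift and volatility (assumed)
    lam, claim_mean = 5.0, 1.0           # claim intensity and mean claim size (assumed)
    premium = 1.1 * lam * claim_mean     # premium rate with a 10% loading

    def terminal_wealth(pi):
        """Simulate terminal wealth when a constant amount pi is held in the risky asset."""
        w = np.full(n_paths, 10.0)       # initial surplus
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            jump = rng.random(n_paths) < lam * dt              # at most one claim per step
            claims = jump * rng.exponential(claim_mean, n_paths)
            w += (r * w + pi * (mu - r) + premium) * dt + pi * sigma * dW - claims
        return w

    for pi in [0.0, 2.0, 4.0, 8.0]:      # sweep the constant risky allocation
        w = terminal_wealth(pi)
        print(f"pi={pi:4.1f}  mean={w.mean():7.3f}  std={w.std():6.3f}")

Sweeping the allocation and plotting the mean of terminal wealth against its standard deviation traces an empirical analogue of the efficient frontier derived in the paper.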
North American Actuarial Journal | 2006
Russell Gerrard; Steven Haberman; Elena Vigna
The aim of the paper is to lay the theoretical foundations for the construction of a flexible tool that can be used by pensioners to find optimal investment and consumption choices in the distribution phase of a defined contribution pension plan. The investment/consumption plan is adopted until the time of compulsory annuitization, taking into account the possibility of earlier death. The effect of the bequest motive and the desire to buy a higher annuity than the one purchasable at retirement are included in the objective function. The mathematical tools provided by dynamic programming techniques are applied to find closed-form solutions; numerical examples are also presented. In the model, the tradeoff between the different desires of the individual regarding consumption and final annuity can be dealt with by choosing appropriate weights for these factors in the setting of the problem. Conclusions are twofold. First, we find that there is a natural time-varying target for the size of the fund, which acts as a sort of safety level for the needs of the pensioner. Second, the personal preferences of the pensioner can be translated into optimal choices, which in turn affect the distribution of the consumption path and of the final annuity.
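A minimal sketch of the paper's qualitative conclusion, under assumed dynamics: with a quadratic loss around a time-varying fund target, the optimal amount held in the risky asset is proportional to the gap between the target and the current fund. The target function and feedback gain below are illustrative placeholders, not the paper's closed-form formulas.

    import numpy as np

    rng = np.random.default_rng(1)
    r, mu, sigma = 0.02, 0.06, 0.20      # market parameters (assumed)
    T = 15.0                             # years until compulsory annuitization
    n = int(T * 250)
    dt = T / n
    consumption = 4.0                    # desired consumption rate (assumed)

    def target(t):
        # Hypothetical safety-level target: funds consumption to T plus a final-annuity goal.
        return consumption * (1 - np.exp(-r * (T - t))) / r + 50.0 * np.exp(-r * (T - t))

    x, t = 70.0, 0.0                     # initial fund below target
    for _ in range(n):
        gap = target(t) - x
        pi = (mu - r) / sigma**2 * gap   # feedback rule: risky holding proportional to the gap
        x += (r * x + pi * (mu - r) - consumption) * dt \
             + pi * sigma * rng.normal(0.0, np.sqrt(dt))
        t += dt
    print(f"final fund {x:.2f} vs target {target(T):.2f}")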
Quantitative Finance | 2012
Russell Gerrard; Bjarne Højgaard; Elena Vigna
In the context of decision making for retirees of a defined contribution pension scheme in the decumulation phase, we formulate and solve the problem of finding the optimal time of annuitization for a retiree who can choose her own investment and consumption strategy. We formulate the problem as a combined stochastic control and optimal stopping problem. As the optimization criterion we select a loss function that penalizes both the deviation of the running consumption rate from a desired consumption rate and the deviation of the final wealth at the time of annuitization from a desired target. We find closed-form solutions for the problem and show the existence of three possible types of solutions depending on the free parameters of the problem. In numerical applications we find the optimal wealth that triggers annuitization, compare it with the desired target, and investigate its dependence both on the parameters of the financial market and on parameters linked to the risk attitude of the retiree. Simulations of the behaviour of the risky asset suggest that, in typical situations, optimal annuitization should occur a few years after retirement.
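The following toy simulation illustrates the threshold structure of such a solution: wealth evolves under consumption and risky investment, and annuitization is triggered the first time wealth reaches a given level. The trigger level and market parameters are invented; the paper derives the trigger in closed form rather than assuming it.

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma, consumption = 0.05, 0.15, 4.0   # assumed market and consumption parameters
    x0, trigger = 100.0, 110.0                 # initial wealth and assumed annuitization trigger
    dt, max_years = 0.02, 30.0

    times = []
    for _ in range(2_000):
        x, t = x0, 0.0
        while t < max_years and x < trigger:   # stop (annuitize) at the trigger level
            x += (mu * x - consumption) * dt + sigma * x * rng.normal(0.0, np.sqrt(dt))
            t += dt
        times.append(t)                        # paths never reaching the trigger record max_years

    times = np.array(times)
    print(f"median annuitization time: {np.median(times):.1f} years; "
          f"annuitizing within 5 years: {(times < 5).mean():.0%}")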
Insurance: Mathematics and Economics | 1996
Russell Gerrard; Steven Haberman
Methods of funding pension schemes which amortize inter-valuation gains or losses over a fixed number of years have been considered by Dufresne (1988, 1989) and Haberman (1990, 1991, 1994). We consider the effects of such a method in the case where the rate of return on investments behaves as a first-order autoregressive process. This means that the actuarial loss process has the structure of a nonlinear time series, in which an autoregressive component is multiplied by the autoregressive rate of return. We obtain a recursive formula for the expected actuarial loss in a given year and for the expectation of the square of this quantity, and prove that, for suitable values of the parameters, these expectations converge in time to limits which, though not explicitly obtainable, can nevertheless be found by a simple numerical procedure in any given case.
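The "simple numerical procedure" mentioned in the abstract amounts to iterating the moment recursions until they reach their fixed point. A generic sketch, with placeholder recursion coefficients rather than the paper's actual formulas:

    def iterate_to_limit(step, x0, tol=1e-12, max_iter=100_000):
        """Iterate x_{n+1} = step(x_n) until successive values agree to within tol."""
        x = x0
        for _ in range(max_iter):
            x_new = step(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("recursion did not converge")

    # Placeholder recursion: E[L_{t+1}] = a * E[L_t] + b converges to b / (1 - a) when |a| < 1.
    a, b = 0.85, 1.2
    print(iterate_to_limit(lambda m: a * m + b, x0=0.0), b / (1 - a))  # both approx. 8.0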
Risk Analysis | 2011
Russell Gerrard; Andreas Tsanakas
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
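A small Monte Carlo experiment makes the headline claim concrete for the log-normal case: if a threshold is set at the 99th percentile of a distribution fitted to n observations, the realised failure frequency exceeds the nominal 1%, and (because the relevant statistic is pivotal for location-scale families) the answer does not depend on the true parameters. Sample size and nominal level are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_trials = 30, 100_000            # sample size and Monte Carlo repetitions (assumed)
    z99 = 2.3263478740408408             # standard normal 99% quantile

    failures = 0
    for _ in range(n_trials):
        logs = rng.normal(0.0, 1.0, n)   # log of a lognormal sample; true parameters arbitrary
        threshold = logs.mean() + z99 * logs.std(ddof=1)   # estimated 99% quantile (log scale)
        failures += rng.normal(0.0, 1.0) > threshold       # does the next observation exceed it?

    # The realised frequency comes out noticeably above the nominal 1%.
    print(f"nominal 1.0%, realised {failures / n_trials:.2%}")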
The Scientific World Journal | 2014
Russell Gerrard; Montserrat Guillén; Jens Perch Nielsen; Ana M. Pérez-Marín
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment must take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained which follows from constant absolute, rather than constant relative, risk aversion. This result is fundamental to proving that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on the downside risk-adjusted equivalence used in our illustration.
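The sketch below compares, under assumed market parameters, the two simple strategies contrasted here: holding a constant amount in the risky asset (the behaviour implied by constant absolute risk aversion) versus holding a constant proportion. Contribution rates, returns, and the downside measure (the 5th percentile) are illustrative stand-ins for the paper's downside risk-adjusted comparison.

    import numpy as np

    rng = np.random.default_rng(4)
    years, contrib, r = 30, 1.0, 0.02            # saving horizon, annual contribution, safe rate
    mu, sigma, n_paths = 0.06, 0.20, 50_000
    returns = rng.normal(mu, sigma, (n_paths, years))   # annual risky returns (assumed i.i.d.)

    def simulate(strategy):
        w = np.zeros(n_paths)
        for t in range(years):
            w += contrib
            risky = strategy(w)                  # amount placed in the risky asset this year
            w = risky * (1 + returns[:, t]) + (w - risky) * (1 + r)
        return w

    for name, strat in [("constant amount 5", lambda w: np.minimum(5.0, w)),
                        ("constant proportion 50%", lambda w: 0.5 * w)]:
        w = simulate(strat)
        print(f"{name:24s} mean={w.mean():6.1f}  5th percentile={np.percentile(w, 5):6.1f}")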
Journal of Multivariate Analysis | 2016
Alexandru Vali Asimit; Russell Gerrard
Multivariate extremes behave very differently under asymptotic dependence as compared to asymptotic independence. In the bivariate setting, we are able to characterise the extreme behaviour of the asymptotically dependent case by using the concept of the copula. As a result, we are able to identify the properties of the boundary cases, which are asymptotically independent but still exhibit some features of asymptotic dependence. These situations are the most problematic in extreme value statistics, and for this reason distinguishing between asymptotic dependence and asymptotic independence represents a difficult problem. We propose a simple test to resolve this issue, as an alternative to the procedure based on the classical coefficient of tail dependence. In addition, we are able to identify the worst/least asymptotic dependence (in the presence of asymptotic dependence) that maximises/minimises the probability of a given extreme region when the tail dependence parameter is fixed. We find that perfect extreme association is not the worst asymptotic dependence, which is consistent with the existing literature. We are able to find lower and upper bounds for some risk measures of functions of random variables. A particular example is the sum of random variables, for which the search for bounds has attracted considerable academic effort over the last decade. It is shown numerically that our approach provides a substantial improvement over existing methods, which reiterates the sensible conclusion that any additional piece of information on dependence helps to reduce the spread of these bounds.
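An empirical illustration of the quantity underlying such tests: the conditional exceedance probability P(U > u | V > u) at a high level u, which decays to zero under asymptotic independence (Gaussian copula) but stabilises at a positive value under asymptotic dependence (t copula). This toy estimate is not the test proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    n, rho, u = 1_000_000, 0.7, 0.99
    cov = [[1.0, rho], [rho, 1.0]]

    z = rng.multivariate_normal([0.0, 0.0], cov, n)      # Gaussian pair (asympt. independent)
    w = np.sqrt(rng.chisquare(3, n) / 3)
    t = z / w[:, None]                                   # Student t(3) pair (asympt. dependent)

    for name, x in [("Gaussian", z), ("Student t(3)", t)]:
        ranks = x.argsort(axis=0).argsort(axis=0) / n    # map to the copula (uniform) scale
        both = (ranks[:, 0] > u) & (ranks[:, 1] > u)
        print(f"{name:12s} P(U>u | V>u) = {both.mean() / (1 - u):.3f}")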
European Journal of Operational Research | 2018
Russell Gerrard; Munir Hiabu; Ioannis Kyriakou; Jens Perch Nielsen
The paper shows how to reform the platform of pension products so that pension savers, professional financial advisors, actuaries and investment experts can intuitively understand the underlying financial risk of the optimal investment profile. It is also pointed out that even an excellent optimal investment strategy can destroy a pension saver's future expected utility if the financial communication around it is wrong. It is shown that a simple system with an upper and a lower bound, originally inspired by Merton (2014), which can be executed easily using fintech, can replace complicated power utility optimization for the pension saver, so that everyone can understand exactly how much financial risk is being taken. The paper focuses on investing money as a lump sum, because being able to communicate the associated financial risk can serve as a first step towards communicating more complex pension saving structures.
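A naive sketch of the upper-and-lower-bound style of communication advocated here: the saver is told a floor and a cap on terminal wealth rather than a power-utility optimum. For simplicity the bounded profile is produced by clipping a simulated lump-sum equity investment, which ignores the cost of providing the bounds; the paper's construction is more refined, and all figures are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    x0, mu, sigma, T, n_paths = 100.0, 0.05, 0.18, 10.0, 100_000   # assumed lump sum and market
    z = rng.normal(0.0, 1.0, n_paths)
    terminal = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

    floor, cap = 90.0, 250.0                    # communicated as "between 90 and 250"
    bounded = np.clip(terminal, floor, cap)     # naive stand-in for a bounded product design

    for name, w in [("unbounded equity", terminal), ("floor/cap profile", bounded)]:
        print(f"{name:18s} mean={w.mean():6.1f}  5th pct={np.percentile(w, 5):6.1f}  "
              f"95th pct={np.percentile(w, 95):6.1f}")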
Journal of Biometrics & Biostatistics | 2017
Juraj Šteňo; Valeriy Boyko; Petro Zamiatin; Nadiya Dubrovina; Russell Gerrard; Peter Labas; Olexander Gurov; Olena Kozyreva; Dmytro Hladkykh; Yuliia Tkachenko; Denis Zamiatin; Viktorija Borodina
Background: There are different approaches to assessing the severity of trauma in a victim and to providing specialized health care. Some are based on the development of scales and logistic models, using expert systems or statistical methods, to assess the severity of injury and the probability of a particular outcome. This article presents the results of a study on the feasibility of developing and applying various statistical models to predict the outcome in the case of different types of trauma, based on data on the status of victims with severe trauma.

Patients and methods: We present selected information about 373 victims, admitted and treated at the Department of Traumatic Shock of the GI «V.T. Zaycev Kharkiv Research Institute of General and Emergency Surgery» of the NAMS of Ukraine; the records, which relate to patients with severe and combined trauma, were made between 1985 and 2015. The initial database contained 263 victims who had positive outcomes (survived), while 110 had fatal outcomes. Most of the patients presented with open trauma (285 cases), followed by 80 cases of closed injury and only 8 cases of combined injury.

Results: To estimate the probability of the outcome for various types of trauma we developed a predictive model based on a logistic relationship. Categorical variables, indicating the presence or absence of various types of trauma, were used in the model. Information about the eventual outcome for a given victim with the indicated type of trauma was used as the dependent variable. The logit model we obtained has high accuracy in predicting positive outcomes: based on the a posteriori analysis, 92% of cases in which victims survived were correctly recognized by the model. Given that abdominal trauma is the commonest of all trauma mechanisms, we also constructed a predictive model to estimate the probability of various outcomes in the case of abdominal trauma or injury to certain organs of the abdominal cavity. Linear discriminant functions were developed and used for the classification of possible outcomes depending on the condition of the victim and the resuscitation measures carried out. This model also has high predictive accuracy: on the basis of a posteriori analysis using the discriminant functions, correct conclusions were drawn in 90% of cases when there was a positive outcome, and in 75% of cases when the outcome was fatal.

Conclusion: We conclude that it is reasonable to use the statistical models developed, along with other qualitative and quantitative methods, for prognostic determination of outcomes for victims with severe injuries. As different models have different predictive accuracy and require different information, it is necessary to use a sufficiently large number of techniques to derive accurate predictions and to choose the right tactics for diagnosis and treatment.
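A minimal sketch of the first modelling step described above: a logistic regression of outcome (survived or died) on categorical trauma-type indicators, evaluated a posteriori on the survivors. The data generated below are synthetic stand-ins with invented survival probabilities, not the study's patient records.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 373
    # Trauma-type indicators mimicking the study's case mix: open, closed, combined.
    kind = rng.choice(3, n, p=[285 / 373, 80 / 373, 8 / 373])
    X = np.eye(3)[kind]                                  # one-hot categorical predictors
    p_survive = np.array([0.75, 0.65, 0.40])[kind]       # invented survival probabilities
    y = (rng.random(n) < p_survive).astype(int)          # 1 = survived, 0 = died

    model = LogisticRegression().fit(X, y)
    acc_survivors = model.score(X[y == 1], y[y == 1])    # a posteriori accuracy on survivors
    print(f"accuracy on survivors: {acc_survivors:.0%}")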
Journal of Adenocarcinoma | 2016
Valery Boyko; Nadiya Dubrovina; Petro Zamiatin; Alexander Sinelnikov; Russell Gerrard; Olena Kolesnikova; Volodymyr Shaprynskyy; Olexander Gurov; Vira Zlatkina; Evgen Shaprynskyy; Denis Zamiatin; Sergiy Bityak
In this article the prevalence of esophageal cancer and the spatial distribution of mortality rates from this disease are considered, using as examples the NUTS 2 regions of six countries of Central and Eastern Europe (Austria, Germany, the Czech Republic, Poland, Slovakia and Hungary). The rates of mortality from esophageal cancer are analyzed by statistical methods and by spatial econometrics. We study the features of the spatial distribution of these mortality rates, which allows us to identify more and less epidemiologically affected regions and to carry out more detailed studies of the link between mortality rates from esophageal cancer and various factors, such as the environmental situation, socio-demographic characteristics of the population, the culture and nature of nutrition, the general health status of the population, and the availability of resources and level of healthcare in the region. By means of a multifactor regression model we forecast the rates of mortality from esophageal cancer, taking into account characteristics of the countries, the dynamics of the number of patients with diseases of the esophagus, and the general time trend.
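A minimal sketch of the forecasting approach described: an ordinary least squares regression of mortality rates on country indicators, a disease-prevalence proxy, and a linear time trend, extrapolated one step ahead. All data below are synthetic placeholders, not the study's regional statistics.

    import numpy as np

    rng = np.random.default_rng(8)
    n_regions, n_years = 20, 10
    country = rng.choice(6, n_regions * n_years)          # six countries
    year = np.tile(np.arange(n_years), n_regions)
    prevalence = rng.gamma(2.0, 1.0, n_regions * n_years) # proxy for esophageal disease counts
    rate = (3.0 + 0.5 * country + 0.1 * year + 0.3 * prevalence
            + rng.normal(0.0, 0.5, country.size))        # synthetic mortality rates

    # Design matrix: intercept, country dummies (baseline dropped), time trend, prevalence.
    X = np.column_stack([np.ones_like(year), np.eye(6)[country][:, 1:], year, prevalence])
    beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

    # One-step-ahead forecast for the baseline country, given an assumed prevalence of 2.0.
    x_new = np.concatenate([[1.0], np.zeros(5), [float(n_years)], [2.0]])
    print(f"forecast mortality rate: {x_new @ beta:.2f}")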