Publication


Featured research published by Axel Gandy.


Scandinavian Journal of Statistics | 2013

Guaranteed Conditional Performance of Control Charts via Bootstrap Methods

Axel Gandy; Jan Terje Kvaløy

To use control charts in practice, the in-control state usually has to be estimated. This estimation has a detrimental effect on the performance of control charts, which is often measured by the false alarm probability or the average run length. We suggest an adjustment of the monitoring schemes to overcome these problems. It guarantees, with a certain probability, a conditional performance given the estimated in-control state. The suggested method is based on bootstrapping the data used to estimate the in-control state. The method applies to different types of control charts, and also works with charts based on regression models. If a non-parametric bootstrap is used, the method is robust to model errors. We show large sample properties of the adjustment. The usefulness of our approach is demonstrated through simulation studies.
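
The adjustment can be illustrated on the simplest possible chart. The sketch below applies the bootstrap idea to a one-sided Shewhart chart for a normal mean: each bootstrap replicate re-estimates the in-control state, and the adjusted limit is chosen so that the conditional false-alarm guarantee holds with probability roughly 1 - rho. The function `adjusted_limit` and all constants are illustrative assumptions, not the paper's exact procedure (which covers general chart types and average run lengths as well).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def adjusted_limit(phase1, alpha=0.01, rho=0.1, B=2000):
    """Bootstrap-adjusted upper limit for a one-sided Shewhart chart.

    Chooses the limit mu_hat + c * sd_hat so that, with probability about
    1 - rho over the estimation of the in-control state, the conditional
    false-alarm probability per observation stays at or below alpha.
    """
    mu_hat, sd_hat = phase1.mean(), phase1.std(ddof=1)
    # alpha-quantile of an observation under the bootstrap "truth" N(mu_hat, sd_hat)
    target = mu_hat + norm.ppf(1 - alpha) * sd_hat
    c = np.empty(B)
    for b in range(B):
        res = rng.choice(phase1, size=phase1.size, replace=True)
        m_b, s_b = res.mean(), res.std(ddof=1)
        # multiplier a chart estimated from this resample would need so that
        # its limit m_b + c * s_b keeps the false-alarm probability at alpha
        c[b] = (target - m_b) / s_b
    return mu_hat + np.quantile(c, 1 - rho) * sd_hat

phase1 = rng.normal(size=100)   # in-control (Phase I) data
# compare with the naive limit mu_hat + norm.ppf(0.99) * sd_hat,
# which holds the false-alarm rate only on average over the estimation
print(adjusted_limit(phase1))
```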


Journal of the American Statistical Association | 2009

Sequential Implementation of Monte Carlo Tests with Uniformly Bounded Resampling Risk

Axel Gandy

This paper introduces an open-ended sequential algorithm for computing the p-value of a test using Monte Carlo simulation. It guarantees that the resampling risk, the probability of a different decision than the one based on the theoretical p-value, is uniformly bounded by an arbitrarily small constant. Previously suggested sequential or nonsequential algorithms, using a bounded sample size, do not have this property. Although the algorithm is open-ended, the expected number of steps is finite, except when the p-value is on the threshold between rejecting and not rejecting. The algorithm is suitable as a standard for implementing tests that require (re)sampling. It can also be used in other situations: to check whether a test is conservative, to implement double-bootstrap tests iteratively, and to determine the sample size required for a certain power. An R-package implementing the sequential algorithm is available online.
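
A minimal sketch of the sequential idea, under a simplification: the uniform bound on the resampling risk is obtained here with a crude error-spending confidence sequence at geometric checkpoints rather than the paper's boundaries, which are tighter. Function names and checkpoints are invented for illustration; the real implementation is the simctest R-package mentioned above.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)

def seq_mc_pvalue(gen_exceeds, alpha=0.05, eps=1e-3, max_samples=10**6):
    """Open-ended sequential Monte Carlo test (simplified sketch).

    gen_exceeds() draws one resampled statistic and returns True if it is
    at least as extreme as the observed one.  Sampling continues until a
    confidence region for the true p-value lies entirely above or below
    alpha.  The per-checkpoint error levels eps / 2**k sum to eps, so by a
    union bound the probability of deciding differently from the decision
    based on the true p-value is bounded by roughly eps.
    """
    n = s = 0
    check, k = 10, 1
    while n < max_samples:
        s += gen_exceeds()
        n += 1
        if n == check:
            e = eps / 2**k   # error budget spent at this checkpoint
            lo = beta.ppf(e / 2, s, n - s + 1) if s > 0 else 0.0
            hi = beta.ppf(1 - e / 2, s + 1, n - s) if s < n else 1.0
            if lo > alpha:
                return s / n, "do not reject"
            if hi < alpha:
                return s / n, "reject"
            check, k = 2 * check, k + 1
    return s / n, "undecided"

# toy check with a known p-value of 0.02: should stop and report "reject"
print(seq_mc_pvalue(lambda: rng.random() < 0.02))
```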


Management Science | 2017

A Bayesian Methodology for Systemic Risk Assessment in Financial Networks

Axel Gandy; Luitgard A. M. Veraart

We develop a Bayesian methodology for systemic risk assessment in financial networks such as the interbank market. Nodes represent participants in the network and weighted directed edges represent liabilities. Often, for every participant, only the total liabilities and total assets within this network are observable. However, systemic risk assessment needs the individual liabilities. We propose a model for the individual liabilities, which, following a Bayesian approach, we then condition on the observed total liabilities and assets and, potentially, on certain observed individual liabilities. We construct a Gibbs sampler to generate samples from this conditional distribution. These samples can be used in stress testing, giving probabilities for the outcomes of interest. As one application we derive default probabilities of individual banks and discuss their sensitivity with respect to prior information included to model the network. An R-package implementing the methodology is provided.
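
The following sketch shows the kind of update that keeps the observed totals fixed: shifting mass along a 2x2 cycle changes four liabilities but no row or column sum. It is a hypothetical simplification that only explores nonnegative matrices consistent with the given margins, starting from an assumed feasible matrix; the paper's Gibbs sampler additionally models which edges exist and places a prior on their weights.

```python
import numpy as np

rng = np.random.default_rng(3)

def margin_preserving_sweep(L, n_updates=10000):
    """Gibbs-style updates on a liabilities matrix with fixed margins.

    L is a nonnegative matrix with zero diagonal; row sums are the observed
    total liabilities and column sums the observed total interbank assets.
    Each update shifts mass delta along the cycle
    (i,k) -> (i,l) -> (j,l) -> (j,k), which leaves every row and column sum
    unchanged, so the chain moves inside the set of liability matrices
    consistent with the observed totals.
    """
    n = L.shape[0]
    for _ in range(n_updates):
        i, j = rng.choice(n, size=2, replace=False)   # two distinct rows
        k, l = rng.choice(n, size=2, replace=False)   # two distinct columns
        if len({i, j, k, l}) < 4:                     # skip updates touching the diagonal
            continue
        # feasible range for delta keeps all four affected entries nonnegative
        lo = -min(L[i, k], L[j, l])
        hi = min(L[i, l], L[j, k])
        delta = rng.uniform(lo, hi)
        L[i, k] += delta; L[j, l] += delta
        L[i, l] -= delta; L[j, k] -= delta
    return L
```

Samples produced this way can then be pushed through a clearing mechanism to obtain the stress-testing probabilities the abstract describes.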


Biometrika | 2013

Non-restarting cumulative sum charts and control of the false discovery rate

Axel Gandy; F. Din-Houn Lau

Cumulative sum or cusum charts are typically used to detect a change in the distribution of a sequence of observations, e.g., shifts in the mean. Usually, after signalling, the chart is restarted by setting it to some value below the signalling threshold. We propose a non-restarting cusum chart which is able to detect periods during which the stream is out of control. Further, we advocate an upper boundary to prevent the cusum chart rising too high, which helps to detect a change back into control. We present an algorithm to control the false discovery rate when considering cusum charts based on multiple streams of data. We consider two definitions of a false discovery: signalling out-of-control when the observations have been in control since the start and signalling out-of-control when the observations have been in control since the last time the chart was at zero. We prove that the false discovery rate is controlled under both these definitions simultaneously. Simulations reveal the difference in false discovery rate control when using these and other desirable definitions of a false discovery.
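
A minimal sketch of the chart itself, for a single stream with a known drift term; the multiple-stream false discovery rate procedure is not shown, and all tuning constants are illustrative assumptions.

```python
import numpy as np

def bounded_cusum(x, drift=0.5, threshold=5.0, upper=8.0):
    """Non-restarting CUSUM with an upper boundary (illustrative sketch).

    S_t = min(upper, max(0, S_{t-1} + x_t - drift)).  The chart is never
    reset after signalling; the stretches with S_t above `threshold` flag
    periods where the stream appears out of control, and the clamp at
    `upper` lets the statistic drop back below the threshold quickly once
    the stream returns to control.
    """
    s, path = 0.0, []
    for v in x:
        s = min(upper, max(0.0, s + v - drift))
        path.append(s)
    return np.asarray(path)

rng = np.random.default_rng(4)
stream = np.r_[rng.normal(0, 1, 200), rng.normal(2, 1, 50), rng.normal(0, 1, 200)]
flagged = np.where(bounded_cusum(stream) > 5.0)[0]
print(flagged[[0, -1]] if flagged.size else "no alarm")   # roughly brackets the shifted stretch
```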


Archive | 2013

Risk Assessment and Evaluation of Predictions

Mei-Ling Ting Lee; Mitchell H. Gail; Ruth M. Pfeiffer; Glen Satten; Tianxi Cai; Axel Gandy

Methods of risk analysis and the outcomes of particular evaluations and predictions are covered in detail in this proceedings volume, whose contributions are based on invited presentations from Professor Mei-Ling Ting Lee's 2011 symposium on Risk Analysis and the Evaluation of Predictions, held at the University of Maryland in October 2011. Risk analysis is the science of evaluating health, environmental, and engineering risks resulting from past, current, or anticipated future activities. These evaluations are used to inform regulatory actions that limit risk, to present scientific evidence in legal settings, to evaluate products and potential liabilities within private organizations, to resolve World Trade disputes among nations, and to educate the public about particular risk issues. Risk analysis is an interdisciplinary science that relies on epidemiology and laboratory studies, the collection of exposure and other field data, computer modeling, and related social, economic, and communication considerations. In addition, the social dimensions of risk are addressed by social scientists.


Monthly Notices of the Royal Astronomical Society | 2011

A Bayesian approach to star–galaxy classification

Marc Henrion; D. Mortlock; David J. Hand; Axel Gandy

Star–galaxy classification is one of the most fundamental data-processing tasks in survey astronomy and a critical starting point for the scientific exploitation of survey data. Star–galaxy classification for bright sources can be done with almost complete reliability, but for the numerous sources close to a survey’s detection limit each image encodes only limited morphological information about the source. In this regime, from which many of the new scientific discoveries are likely to come, it is vital to utilize all the available information about a source, both from multiple measurements and from prior knowledge about the star and galaxy populations. This also makes it clear that it is more useful and realistic to provide classification probabilities than decisive classifications. All these desiderata can be met by adopting a Bayesian approach to star–galaxy classification, and we develop a very general formalism for doing so. An immediate implication of applying Bayes’s theorem to this problem is that it is formally impossible to combine morphological measurements in different bands without using colour information as well; however, we develop several approximations that disregard colour information as much as possible. The resultant scheme is applied to data from the UKIRT Infrared Deep Sky Survey (UKIDSS) and tested by comparing the results to deep Sloan Digital Sky Survey (SDSS) Stripe 82 measurements of the same sources. The Bayesian classification probabilities obtained from the UKIDSS data agree well with the deep SDSS classifications both overall (a mismatch rate of 0.022 compared to 0.044 for the UKIDSS pipeline classifier) and close to the UKIDSS detection limit (a mismatch rate of 0.068 compared to 0.075 for the UKIDSS pipeline classifier). The Bayesian formalism developed here can be applied to improve the reliability of any star–galaxy classification schemes based on the measured values of morphology statistics alone.
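
The core computation is Bayes' theorem applied per source. The toy sketch below combines a morphology statistic measured in several bands under an over-simplifying independence assumption; every distribution and number is invented, and, as the abstract notes, a full treatment cannot ignore colour information the way this toy model does.

```python
import numpy as np
from scipy.stats import norm

def p_star(c_bands, prior_star=0.3, mu_star=1.0, mu_gal=2.0, sigma=0.5):
    """Posterior probability that a source is a star (toy model).

    c_bands holds a morphology statistic (say, a concentration index)
    measured in each band, modelled as Gaussian with a population-dependent
    mean.  Bands are treated as independent given the class, which is
    exactly the kind of colour-blind approximation the paper analyses.
    """
    like_star = norm.pdf(c_bands, mu_star, sigma).prod()
    like_gal = norm.pdf(c_bands, mu_gal, sigma).prod()
    post_star = prior_star * like_star
    return post_star / (post_star + (1 - prior_star) * like_gal)

# compact morphology in all three bands: high star probability
print(p_star(np.array([1.1, 0.9, 1.3])))
```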


Annals of Statistics | 2013

An algorithm to compute the power of Monte Carlo tests with guaranteed precision

Axel Gandy; Patrick Rubin-Delanchy

This article presents an algorithm that generates a conservative confidence interval of a specified length and coverage probability for the power of a Monte Carlo test (such as a bootstrap or permutation test). It is the first method that achieves this aim for almost any Monte Carlo test. Previous research has focused on obtaining as accurate a result as possible for a fixed computational effort, without providing a guaranteed precision in the above sense. The algorithm we propose does not have a fixed effort and runs until a confidence interval with a user-specified length and coverage probability can be constructed. We show that the expected effort required by the algorithm is finite in most cases of practical interest, including situations where the distribution of the p-value is absolutely continuous or discrete with finite support. The algorithm is implemented in the R-package simctest, available on CRAN.
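
In outline, the outer loop is run-until-precise binomial sampling, sketched below with a Clopper-Pearson interval. This simplification assumes the rejection decision for each simulated dataset is exact and ignores the optional-stopping effect on coverage; the actual algorithm (in the simctest package) handles both, in particular the fact that each decision is itself a Monte Carlo test.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)

def power_ci(reject_once, length=0.05, coverage=0.99, max_n=10**6):
    """Run-until-precise confidence interval for the power of a test.

    reject_once() simulates one dataset under the alternative and returns
    True if the test rejects.  Sampling continues until a Clopper-Pearson
    interval with the requested nominal coverage is shorter than `length`.
    """
    e = 1 - coverage
    n = s = 0
    lo, hi = 0.0, 1.0
    while n < max_n and hi - lo > length:
        s += reject_once()
        n += 1
        lo = beta.ppf(e / 2, s, n - s + 1) if s > 0 else 0.0
        hi = beta.ppf(1 - e / 2, s + 1, n - s) if s < n else 1.0
    return lo, hi

# a test with true power 0.8; the interval should bracket it
print(power_ci(lambda: rng.random() < 0.8))
```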


Mathematical Finance | 2013

The Effect of Estimation in High‐Dimensional Portfolios

Axel Gandy; Luitgard A. M. Veraart

We study the effect of estimated model parameters in investment strategies on expected log‐utility of terminal wealth. The market consists of a riskless bond and a potentially vast number of risky stocks modeled as geometric Brownian motions. The well‐known optimal Merton strategy depends on unknown parameters and thus cannot be used in practice. We consider the expected utility of several estimated strategies when the number of risky assets gets large. We suggest strategies which are less affected by estimation errors and demonstrate their performance in a real data example. Strategies in which the investment proportions satisfy an L1‐constraint are less affected by estimation effects.
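
A hypothetical illustration of where the estimation error enters: the plug-in Merton proportions for log utility use a sample mean and covariance, and with many assets and few observations they imply enormous leverage; crudely capping the L1 norm already tames this. The rescaling used here is an assumption for the sketch, not the paper's constrained strategy.

```python
import numpy as np

rng = np.random.default_rng(6)

def merton_weights(returns, r=0.0, l1_bound=None):
    """Plug-in Merton proportions for log utility, with an optional L1 cap.

    For log utility the growth-optimal proportions are
    pi = Sigma^{-1} (mu - r); here mu and Sigma are replaced by their
    sample estimates, which is exactly where the estimation error enters.
    The cap simply rescales the plug-in weights so that ||pi||_1 stays
    below l1_bound.
    """
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    pi = np.linalg.solve(sigma, mu - r)
    if l1_bound is not None and np.abs(pi).sum() > l1_bound:
        pi *= l1_bound / np.abs(pi).sum()
    return pi

# 50 assets but only 120 observations: the plug-in strategy is wildly
# levered, the L1-capped one is not
rets = rng.normal(0.0004, 0.01, size=(120, 50))
print(np.abs(merton_weights(rets)).sum())
print(np.abs(merton_weights(rets, l1_bound=1.5)).sum())
```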


Lifetime Data Analysis | 2009

Model checks for Cox-type regression models based on optimally weighted martingale residuals

Axel Gandy; Uwe Jensen

We introduce directed goodness-of-fit tests for Cox-type regression models in survival analysis. “Directed” means that one may choose against which alternatives the tests are particularly powerful. The tests are based on sums of weighted martingale residuals and their asymptotic distributions. We derive optimal tests against certain competing models which include Cox-type regression models with different covariates and/or a different link function. We report results from several simulation studies and apply our test to a real dataset.
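
As a sketch of the ingredients, the function below computes martingale residuals from a Breslow baseline-hazard estimate and forms the weighted sum used as a test statistic. The fitted coefficients are assumed given (from any Cox regression routine), the weight vector here is arbitrary (the paper derives the optimal choice against a given alternative), and ties as well as the variance estimate needed to standardise the statistic are omitted.

```python
import numpy as np

def weighted_residual_stat(time, event, x, beta_hat, w):
    """Sum of weighted martingale residuals for a fitted Cox model.

    Martingale residual: M_i = event_i - Lambda0(time_i) * exp(x_i' beta_hat),
    with Lambda0 the Breslow estimate of the cumulative baseline hazard.
    The statistic is sum_i w_i * M_i.
    """
    risk = np.exp(x @ beta_hat)             # estimated relative risks
    order = np.argsort(time)                # sort subjects by observed time
    event, risk, wts = event[order], risk[order], w[order]
    # Breslow increments: 1 / (sum of risk scores still at risk) at event times
    at_risk = np.cumsum(risk[::-1])[::-1]
    d_lambda0 = np.where(event == 1, 1.0 / at_risk, 0.0)
    lambda0 = np.cumsum(d_lambda0)          # cumulative baseline hazard at time_i
    m = event - lambda0 * risk              # martingale residuals
    return (wts * m).sum()
```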


Reliability Engineering & System Safety | 2007

Decision support in early development phases—A case study from machine engineering

Axel Gandy; Patrick Jäger; Bernd Bertsche; Uwe Jensen

In the case study presented in this paper we consider early development phases of a mechanical product. We want to evaluate different concepts and decide which one(s) to pursue. A problem in early phases is that usually no test runs are available. In our case study, based on an engineering standard, there are ways to compute the lifetime distributions of the components of the different concepts. Some parameters needed for these computations are not known precisely. Unfortunately, the lifetime distributions of the components are highly sensitive to these parameters. Our approach is to equip these parameters with distributions. These distributions would be called prior distributions in Bayesian terminology, but no update is possible since no test runs are available. Our approach implies that the distribution of the system lifetime for each concept is random, i.e., we get random elements in the space of lifetime distributions. Using Monte Carlo simulations, we demonstrate several ways to compare the random lifetime distributions of the concepts. Some of these comparisons use stochastic orderings. We also introduce a new stochastic ordering which is particularly suitable for reliability purposes. Our case study, consisting of three scenarios, allows us to demonstrate the kinds of conclusions that can be reached.
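
A toy version of the simulation, assuming Weibull component lifetimes in series systems with lognormal distributions on the imprecisely known scale parameters. All numbers are invented, and only a single-time-point comparison is shown rather than the stochastic orderings discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def series_survival(t, scales, shape=2.0):
    """Survival probability of a series system of independent Weibull components."""
    return np.exp(-np.sum((t / scales) ** shape))

def compare_concepts(t_ref=1000.0, n_sim=10000):
    """Monte Carlo comparison of two design concepts (illustrative numbers).

    Each draw of the uncertain scale parameters yields one possible system
    lifetime distribution per concept; we estimate the probability that
    concept A survives to t_ref with higher probability than concept B.
    """
    wins = 0
    for _ in range(n_sim):
        scales_a = rng.lognormal(np.log(3000.0), 0.3, size=3)  # 3 components, uncertain
        scales_b = rng.lognormal(np.log(2500.0), 0.1, size=2)  # 2 components, better known
        wins += series_survival(t_ref, scales_a) > series_survival(t_ref, scales_b)
    return wins / n_sim

print(compare_concepts())
```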

Collaboration


Dive into Axel Gandy's collaborations.

Top Co-Authors

Georg Hahn, Imperial College London
Uwe Jensen, University of Hohenheim
D. Mortlock, Imperial College London
Luitgard A. M. Veraart, London School of Economics and Political Science
Marc Henrion, Imperial College London