Amanda Turner
Lancaster University
Publications
Featured research published by Amanda Turner.
Communications in Mathematical Physics | 2012
James Norris; Amanda Turner
We establish some scaling limits for a model of planar aggregation. The model is described by the composition of a sequence of independent and identically distributed random conformal maps, each corresponding to the addition of one particle. We study the limit of small particle size and rapid aggregation. The process of growing clusters converges, in the sense of Carathéodory, to an inflating disc. A more refined analysis reveals, within the cluster, a tree structure of branching fingers, whose radial component increases deterministically with time. The arguments of any finite sample of fingers, tracked inwards, perform coalescing Brownian motions. The arguments of any finite sample of gaps between the fingers, tracked outwards, also perform coalescing Brownian motions. These properties are closely related to the evolution of harmonic measure on the boundary of the cluster, which is shown to converge to the Brownian web.
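The coalescing behaviour of the finger arguments can be visualised with a short simulation. The sketch below is not the paper's conformal-map construction: it simply runs a handful of independent Brownian motions on the circle and merges any two that come within a small tolerance of each other, so the merge tolerance, step size and particle count are all illustrative assumptions.

```python
import numpy as np

def coalescing_circle_bm(n_particles=8, n_steps=20000, dt=1e-4,
                         merge_tol=1e-3, seed=0):
    """Independent Brownian motions on the circle [0, 2*pi); any two particles
    that come within merge_tol (circular distance) are merged, as a crude
    stand-in for exact coalescence."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2 * np.pi, n_particles)
    alive = np.ones(n_particles, dtype=bool)
    counts = []
    for _ in range(n_steps):
        # Gaussian increments for the surviving particles.
        angles[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        angles %= 2 * np.pi
        # Merge any pair of surviving particles closer than merge_tol.
        idx = np.flatnonzero(alive)
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                if alive[i] and alive[j]:
                    d = abs(angles[i] - angles[j])
                    if min(d, 2 * np.pi - d) < merge_tol:
                        alive[j] = False   # particle j coalesces into particle i
        counts.append(int(alive.sum()))
    return counts

if __name__ == "__main__":
    counts = coalescing_circle_bm()
    print(f"distinct 'fingers': {counts[0]} at the start, {counts[-1]} at the end")
```

The number of distinct particles decreases over time, mirroring the way the arguments of sampled fingers merge in the scaling limit.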
Annales de l'Institut Henri Poincaré, Probabilités et Statistiques | 2012
Fredrik Johansson Viklund; Alan Sola; Amanda Turner
We consider a variation of the standard Hastings-Levitov model HL(0), in which growth is anisotropic. Two natural scaling limits are established and we give precise descriptions of the effects of the anisotropy. We show that the limit shapes can be realised as Loewner hulls and that the evolution of harmonic measure on the cluster boundary can be described by the solution to a deterministic ordinary differential equation related to the Loewner equation. We also characterise the stochastic fluctuations around the deterministic limit flow.
Computers & Mathematics With Applications | 2009
Sebastian Mosbach; Amanda Turner
We examine numerical rounding errors of some deterministic solvers for systems of ordinary differential equations (ODEs) from a probabilistic viewpoint. We show that the accumulation of rounding errors results in a solution which is inherently random and we obtain the theoretical distribution of the trajectory as a function of time, the step size and the numerical precision of the computer. We consider, in particular, systems which amplify the effect of the rounding errors so that over long time periods the solutions exhibit divergent behaviour. By performing multiple repetitions with different values of the time step size, we observe numerically the random distributions predicted theoretically. We mainly focus on the explicit Euler and fourth order Runge-Kutta methods but also briefly consider more complex algorithms such as the implicit solvers VODE and RADAU5 in order to demonstrate that the observed effects are not specific to a particular method.
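The error amplification described above is easy to reproduce. The following sketch (an illustration in the spirit of the paper's experiments, not the solvers it studies in detail) applies the explicit Euler method to the linear saddle system x' = y, y' = x started on its stable manifold, so the exact solution decays like e^(-t); accumulated rounding errors excite the unstable mode, and by t = 40 the computed value is dominated by amplified rounding noise whose size and sign vary with the step size.

```python
import numpy as np

def euler_saddle(h, t_end=40.0):
    """Explicit Euler for the saddle system x' = y, y' = x, started on the
    stable manifold at (x, y) = (1, -1); the exact solution is x(t) = exp(-t)."""
    x, y = 1.0, -1.0
    n_steps = int(round(t_end / h))
    for _ in range(n_steps):
        # Each rounding error has a component along the unstable mode,
        # which the scheme amplifies by roughly exp(t).
        x, y = x + h * y, y + h * x
    return x

if __name__ == "__main__":
    # Repetitions with slightly different step sizes, in the spirit of the
    # paper's experiments: the late-time values are effectively random.
    for h in np.linspace(1.0e-3, 1.1e-3, 6):
        print(f"h = {h:.6e}   x(40) = {euler_saddle(h):+10.3e}   exact = {np.exp(-40.0):.1e}")
```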
Annals of Probability | 2007
Amanda Turner
We consider sequences (X^N_t)_{t≥0} of Markov processes in two dimensions whose fluid limit is a stable solution of an ordinary differential equation of the form ẋ_t = b(x_t), where b(x) = diag(−μ, λ)x + τ(x) for some λ, μ > 0 and τ(x) = O(|x|²). Here the processes are indexed so that the variance of the fluctuations of X^N_t is inversely proportional to N. The simplest example arises from the OK Corral gunfight model, which was formulated by Williams and McIlroy [Bull. London Math. Soc. 30 (1998) 166–170] and studied by Kingman [Bull. London Math. Soc. 31 (1999) 601–606]. These processes exhibit their most interesting behavior at times of order log N, so it is necessary to establish a fluid limit that is valid for large times. We find that this limit is inherently random and obtain its distribution. Using this, it is possible to derive scaling limits for the points where these processes hit straight lines through the origin, and the minimum distance from the origin that they can attain. The power of N that gives the appropriate scaling is surprising. For example, if T is the time that X^N_t first hits one of the lines y = x or y = −x, then N^{μ/(2(λ+μ))} |X^N_T| ⇒ |Z|^{μ/(λ+μ)} for some zero-mean Gaussian random variable Z.
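For concreteness, here is a minimal simulation of the OK Corral model mentioned above: two sides start with N gunmen each, and while both sides survive the next casualty falls on a given side with probability proportional to the size of the opposing side. The number of survivors of the winning side grows roughly like N^(3/4), consistent with the surprising scaling exponents discussed in the abstract; the run counts below are arbitrary.

```python
import numpy as np

def ok_corral_survivors(n, rng):
    """One run of the OK Corral gunfight: both sides start with n gunmen,
    and a casualty occurs on side 1 with probability y / (x + y) (side 2 is
    shooting) and on side 2 otherwise. Returns the survivors of the winner."""
    x = y = n
    while x > 0 and y > 0:
        if rng.random() < y / (x + y):
            x -= 1
        else:
            y -= 1
    return max(x, y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for n in (100, 1000, 10000):
        mean_s = np.mean([ok_corral_survivors(n, rng) for _ in range(100)])
        print(f"N = {n:6d}   mean survivors = {mean_s:8.1f}   N^(3/4) = {n ** 0.75:8.1f}")
```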
Annals of Probability | 2015
James Norris; Amanda Turner
We define a new state-space for the coalescing Brownian flow, also known as the Brownian web, on the circle. The elements of this space are families of order-preserving maps of the circle, depending continuously on two time parameters and having a certain weak flow property. The space is equipped with a complete separable metric. A larger state-space, allowing jumps in time, is also introduced, and equipped with a Skorokhod-type metric, also complete and separable. We prove that the coalescing Brownian flow is the weak limit in this larger space of a family of flows which evolve by jumps, each jump arising from a small localized disturbance of the circle. A local version of this result is also obtained, in which the weak limit law is that of the coalescing Brownian flow on the line. Our set-up is well adapted to time-reversal and our weak limit result provides a new proof of time-reversibility of the coalescing Brownian flow. We also identify a martingale associated with the coalescing Brownian flow on the circle and use this to make a direct calculation of the Laplace transform of the time to complete coalescence.
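As a toy illustration of a flow that evolves by localized jumps and coalesces (not the construction used in the paper), the sketch below repeatedly collapses a uniformly random arc of the circle onto its centre, which is a weakly order-preserving disturbance, and records how long a finite family of tracer points takes to coalesce completely; the arc width and tracer count are arbitrary choices.

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def jumps_to_coalesce(n_tracers=8, delta=0.3, max_jumps=200_000, seed=0):
    """Toy jump flow on the circle: each jump picks a uniformly random centre c
    and collapses the arc of half-width delta around c onto c (a weakly
    order-preserving 'pinch'). Tracer points caught in the same arc coalesce
    exactly. Returns the number of jumps until a single point remains."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, TWO_PI, n_tracers)
    for jump in range(1, max_jumps + 1):
        c = rng.uniform(0.0, TWO_PI)
        d = np.abs(points - c)
        d = np.minimum(d, TWO_PI - d)            # circular distance to the centre
        points = np.where(d < delta, c, points)  # collapse the arc onto c
        if np.unique(points).size == 1:
            return jump
    return max_jumps

if __name__ == "__main__":
    times = [jumps_to_coalesce(seed=s) for s in range(20)]
    print(f"jumps until complete coalescence over 20 runs: "
          f"min {min(times)}, median {int(np.median(times))}, max {max(times)}")
```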
Statistics in Medicine | 2015
John Whitehead; Faye Cleary; Amanda Turner
In this paper, a Bayesian approach is developed for simultaneously comparing multiple experimental treatments with a common control treatment in an exploratory clinical trial. The sample size is set to ensure that, at the end of the study, there will be at least one treatment for which the investigators have a strong belief that it is better than control, or else they have a strong belief that none of the experimental treatments are substantially better than control. This criterion bears a direct relationship with conventional frequentist power requirements, while allowing prior opinion to feature in the analysis with a consequent reduction in sample size. If it is concluded that at least one of the experimental treatments shows promise, then it is envisaged that one or more of these promising treatments will be developed further in a definitive phase III trial. The approach is developed in the context of normally distributed responses sharing a common standard deviation regardless of treatment. To begin with, the standard deviation will be assumed known when the sample size is calculated. The final analysis will not rely upon this assumption, although the intended properties of the design may not be achieved if the anticipated standard deviation turns out to be inappropriate. Methods that formally allow for uncertainty about the standard deviation, expressed in the form of a Bayesian prior, are then explored. Illustrations of the sample sizes computed from the new method are presented, and comparisons are made with frequentist methods devised for the same situation.
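The flavour of the sample-size calculation can be conveyed by a small Monte Carlo sketch. This is not the authors' algorithm: it assumes normal responses with known standard deviation, independent normal priors on each treatment-versus-control effect, and a simple per-arm conjugate analysis, and it estimates, for a candidate per-arm sample size, the probability that at least one experimental arm ends the trial with posterior probability of beating control above a threshold; every numerical value is illustrative.

```python
import numpy as np
from scipy.stats import norm

def prob_success(n_per_arm, k_arms=3, sigma=1.0, true_effects=(0.5, 0.0, 0.0),
                 prior_mean=0.0, prior_var=1.0, eta=0.95, n_sims=20_000, seed=3):
    """Monte Carlo estimate of the probability that at least one experimental
    arm finishes with posterior P(effect > 0) >= eta, under an assumed truth.
    Normal responses with known sigma, an N(prior_mean, prior_var) prior on
    each treatment-versus-control effect, and a per-arm conjugate analysis."""
    rng = np.random.default_rng(seed)
    true_effects = np.asarray(true_effects, dtype=float)
    se2 = sigma ** 2 / n_per_arm                    # variance of each arm's sample mean
    xbar_ctrl = rng.normal(0.0, np.sqrt(se2), size=n_sims)
    xbar_trt = rng.normal(true_effects, np.sqrt(se2), size=(n_sims, k_arms))
    diffs = xbar_trt - xbar_ctrl[:, None]           # observed treatment-control differences
    data_var = 2.0 * se2                            # sampling variance of each difference
    post_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    post_mean = post_var * (prior_mean / prior_var + diffs / data_var)
    p_better = norm.cdf(post_mean / np.sqrt(post_var))
    return np.mean(p_better.max(axis=1) >= eta)

if __name__ == "__main__":
    for n in (20, 40, 80, 160):
        print(f"n per arm = {n:4d}   P(at least one 'strong belief') = {prob_success(n):.3f}")
```

Scanning candidate sample sizes in this way gives the Bayesian analogue of a frequentist power calculation under the assumed scenario.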
Statistics & Probability Letters | 2015
Amanda Turner; John Whitehead
We establish a partial stochastic dominance result for the maximum of a multivariate Gaussian random vector with positive intraclass correlation coefficient and negative expectation. Specifically, we show that the distribution function intersects that of a standard Gaussian exactly once.
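The single-crossing property can be checked numerically for particular parameter values. The sketch below computes the distribution function of the maximum of k equicorrelated Gaussians with correlation rho > 0 and common negative mean mu by conditioning on the shared factor and integrating with Gauss-Hermite quadrature, then locates where it crosses the standard Gaussian distribution function; k, rho and mu are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def cdf_max_equicorrelated(t, k=5, rho=0.3, mu=-0.5, n_nodes=80):
    """P(max_i X_i <= t) for X_i = mu + sqrt(rho)*W + sqrt(1-rho)*Z_i with
    W, Z_1, ..., Z_k iid standard normal, i.e. an equicorrelated Gaussian
    vector with correlation rho and mean mu, via Gauss-Hermite quadrature
    over the shared factor W."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    w = np.sqrt(2.0) * nodes                      # change of variables so W ~ N(0, 1)
    inner = norm.cdf((t - mu - np.sqrt(rho) * w) / np.sqrt(1.0 - rho)) ** k
    return np.sum(weights * inner) / np.sqrt(np.pi)

if __name__ == "__main__":
    ts = np.linspace(-3.0, 4.0, 141)
    f_max = np.array([cdf_max_equicorrelated(t) for t in ts])
    gap = f_max - norm.cdf(ts)
    crossings = ts[np.flatnonzero(np.diff(np.sign(gap)) != 0)]
    print("the two distribution functions cross near t =", np.round(crossings, 2))
```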
Informs Journal on Computing | 2018
Jamie Fairbrother; Amanda Turner; Stein W. Wallace
In this paper we propose a problem-driven scenario generation approach to the single-period portfolio selection problem which uses tail risk measures such as conditional value-at-risk. Tail risk measures are useful for quantifying potential losses in worst cases. However, they are problematic for scenario-based problems: because the value of a tail risk measure depends only on a small subset of the support of the distribution of asset returns, traditional scenario-based methods, which spread scenarios evenly across the whole support of the distribution, yield very unstable solutions unless a very large number of scenarios is used. The proposed approach works by prioritizing the construction of scenarios in the areas of the probability distribution that correspond to the tail losses of feasible portfolios. The approach can be applied to difficult instances of the portfolio selection problem characterized by high dimension, non-elliptical distributions of asset returns, and the presence of integer variables. It is also observed that the methodology works better as the feasible set of portfolios becomes more constrained. Based on this observation, a heuristic algorithm based on the sample average approximation method is proposed. This algorithm works by adding artificial constraints to the problem which are gradually tightened, allowing one to telescope onto high-quality solutions.
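To make the scenario-based setting concrete, here is a minimal sketch of a single-period minimum-CVaR portfolio problem on a finite scenario set, using the standard Rockafellar-Uryasev linear-programming reformulation rather than the paper's problem-driven scenario construction; the synthetic return distribution, target return and scenario count are illustrative. Re-solving with independently drawn scenario sets shows the kind of solution instability that motivates the approach.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, beta=0.95, target_return=0.002):
    """Minimize CVaR_beta of the portfolio loss over long-only weights summing
    to one, subject to a minimum expected return, using the Rockafellar-Uryasev
    LP on a finite set of return scenarios (rows of `returns`)."""
    n_scen, n_assets = returns.shape
    # Decision vector: [weights (n_assets), alpha (VaR level), u_1 ... u_S].
    c = np.concatenate([np.zeros(n_assets), [1.0],
                        np.full(n_scen, 1.0 / ((1.0 - beta) * n_scen))])
    # u_s >= loss_s - alpha, with loss_s = -r_s . x :  -r_s.x - alpha - u_s <= 0
    A_ub = np.hstack([-returns, -np.ones((n_scen, 1)), -np.eye(n_scen)])
    b_ub = np.zeros(n_scen)
    # Expected return constraint: mean(returns) . x >= target_return.
    A_ub = np.vstack([A_ub, np.concatenate([-returns.mean(axis=0),
                                            [0.0], np.zeros(n_scen)])])
    b_ub = np.append(b_ub, -target_return)
    A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
    bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n_assets], res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    mean = [0.004, 0.003, 0.002, 0.001]
    cov = 1e-4 * np.array([[1.0, 0.3, 0.2, 0.1],
                           [0.3, 1.0, 0.3, 0.2],
                           [0.2, 0.3, 1.0, 0.3],
                           [0.1, 0.2, 0.3, 1.0]])
    # Independently drawn scenario sets of modest size can give noticeably
    # different optimal weights -- the instability discussed in the abstract.
    for trial in range(2):
        scenarios = rng.multivariate_normal(mean, cov, size=500)
        weights, cvar = min_cvar_portfolio(scenarios)
        print(f"trial {trial}: weights = {np.round(weights, 3)}, CVaR = {cvar:.4f}")
```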
arXiv: Probability | 2008
James Norris; Amanda Turner
Communications in Mathematical Physics | 2015
Fredrik Johansson Viklund; Alan Sola; Amanda Turner