Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chris L. Farmer is active.

Publication


Featured researches published by Chris L. Farmer.


SIAM Journal on Scientific Computing | 2007

Hierarchical Nonlinear Approximation for Experimental Design and Statistical Data Fitting

Daniel Busby; Chris L. Farmer; Armin Iske

This paper proposes a hierarchical nonlinear approximation scheme for scalar-valued multivariate functions, where the main objective is to obtain an accurate approximation using only very few function evaluations. To this end, our iterative method combines, at each refinement step, the selection of suitable evaluation points with kriging, a standard method for statistical data analysis. The particular improvements over previous nonhierarchical methods mainly concern the construction of new evaluation points at run time. In this construction process, referred to as experimental design, a flexible two-stage method is employed, where adaptive domain refinement is combined with sequential experimental design. The hierarchical method is applied to statistical data analysis, where the data is generated by a very complex and computationally expensive computer model, called a simulator. In this application, a fast and accurate statistical approximation, called an emulator, is required as a cheap surrogate of the expensive simulator. The construction of the emulator relies on computer experiments using a very small set of carefully selected input configurations for the simulator runs. The hierarchical method proposed in this paper is, for various analyzed models from reservoir forecasting, more efficient than existing standard methods. This is supported by numerical results, which show that our hierarchical method is, at comparable computational costs, up to ten times more accurate than traditional nonhierarchical methods, as utilized in commercial software relying on the response surface methodology (RSM).
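The kriging interpolant at the core of such a scheme can be sketched in a few lines. This is a minimal 1-D sketch, not the paper's actual design: the Gaussian covariance, length scale, and nugget value below are illustrative assumptions.

```python
import numpy as np

def kriging_fit(X, y, length_scale, nugget=1e-12):
    """Solve for simple-kriging weights with a Gaussian covariance.
    A tiny nugget keeps the covariance matrix numerically invertible."""
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * length_scale ** 2))
    return np.linalg.solve(K + nugget * np.eye(len(X)), y)

def kriging_predict(X, weights, x_new, length_scale):
    """Predict at x_new from the fitted weights (the 'emulator' call)."""
    k = np.exp(-((x_new - X) ** 2) / (2 * length_scale ** 2))
    return k @ weights

# Emulate an "expensive" function from very few evaluations
f = lambda x: np.sin(3 * x)
X = np.linspace(0.0, 2.0, 7)   # seven evaluation points stand in for a design
w = kriging_fit(X, f(X), length_scale=0.4)
```

The interpolant reproduces the training evaluations exactly (up to the nugget), which is the property an emulator built from very few simulator runs relies on.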


PLOS Computational Biology | 2014

Defining the Estimated Core Genome of Bacterial Populations Using a Bayesian Decision Model

Andries J. van Tonder; Shilan Mistry; James E. Bray; Dorothea M. C. Hill; Alison J. Cody; Chris L. Farmer; Keith P. Klugman; Anne von Gottberg; Stephen D. Bentley; Julian Parkhill; Keith A. Jolley; Martin C. J. Maiden; Angela B. Brueggemann

The bacterial core genome is of intense interest and the volume of whole genome sequence data in the public domain available to investigate it has increased dramatically. The aim of our study was to develop a model to estimate the bacterial core genome from next-generation whole genome sequencing data and use this model to identify novel genes associated with important biological functions. Five bacterial datasets were analysed, comprising 2096 genomes in total. We developed a Bayesian decision model to estimate the number of core genes, calculated pairwise evolutionary distances (p-distances) based on nucleotide sequence diversity, and plotted the median p-distance for each core gene relative to its genome location. We designed visually-informative genome diagrams to depict areas of interest in genomes. Case studies demonstrated how the model could identify areas for further study, e.g. 25% of the core genes with higher sequence diversity in the Campylobacter jejuni and Neisseria meningitidis genomes encoded hypothetical proteins. The core gene with the highest p-distance value in C. jejuni was annotated in the reference genome as a putative hydrolase, but further work revealed that it shared sequence homology with beta-lactamase/metallo-beta-lactamases (enzymes that provide resistance to a range of broad-spectrum antibiotics) and thioredoxin reductase genes (which reduce oxidative stress and are essential for DNA replication) in other C. jejuni genomes. Our Bayesian model of estimating the core genome is principled, easy to use and can be applied to large genome datasets. This study also highlighted the lack of knowledge currently available for many core genes in bacterial genomes of significant global public health importance.
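As an illustrative sketch (assumed for illustration, not the authors' pipeline), the pairwise p-distance behind the genome diagrams is simply the proportion of sites at which two aligned allele sequences differ, with the median taken over all allele pairs of a core gene:

```python
from itertools import combinations
from statistics import median

def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def median_p_distance(alleles):
    """Median pairwise p-distance across the alleles of one core gene."""
    return median(p_distance(a, b) for a, b in combinations(alleles, 2))
```

Plotting this median per core gene against genome location is what highlights unusually diverse genes such as the putative hydrolase discussed above.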


Archive | 2005

Geological Modelling and Reservoir Simulation

Chris L. Farmer

The main mathematical techniques used in building geological models for input to fluid flow simulation are reviewed. The subject matter concerns the entire geological and reservoir simulation modelling workflow relating to the subsurface. To provide a realistic illustration of a complete fluid flow model, a short outline of two-phase incompressible flow through porous media is given. The mathematics of model building is discussed in a context of seismic acquisition, processing and interpretation, well logging and geology. Grid generation, geometric modelling and spatial statistics are covered in considerable detail. A few new results in the area of geostatistics are proved. In particular the equivalence of radial basis functions, general forms of kriging and minimum curvature methods is shown. A Bayesian formulation of uncertainty assessment is outlined. The theory of inverse problems is discussed in a general way, from both deterministic and statistical points of view. There is a brief discussion of upscaling. A case for multiscale geological modelling is made and the outstanding research problems to be solved in building multiscale models from many types of data are discussed.


Water Resources Research | 2015

Numerical rivers: A synthetic streamflow generator for water resources vulnerability assessments

Edoardo Borgomeo; Chris L. Farmer; Jim W. Hall

The vulnerability of water supplies to shortage depends on the complex interplay between streamflow variability and the management and demands of the water system. Assessments of water supply vulnerability to potential changes in streamflow require methods capable of generating a wide range of possible streamflow sequences. This paper presents a method to generate synthetic monthly streamflow sequences that reproduce the statistics of the historical record and that can express climate-induced changes in user-specified streamflow characteristics. The streamflow sequences are numerically simulated through random sampling from a parametric or a nonparametric distribution fitted to the historical data while shuffling the values in the time series until a sequence matching a set of desired temporal properties is generated. The desired properties are specified in an objective function which is optimized using simulated annealing. The properties in the objective function can be manipulated to generate streamflow sequences that exhibit climate-induced changes in streamflow characteristics such as interannual variability or persistence. The method is applied to monthly streamflow data from the Thames River at Kingston (UK) to generate sequences that reproduce historical streamflow statistics at the monthly and annual time scales and to generate perturbed synthetic sequences expressing changes in short-term persistence and interannual variability.

Key Points:
- Nonparametric streamflow generation method based on simulated annealing
- User-specified properties of the time series can be modified
- Enables stress testing against potential climate-induced hydrological changes
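The shuffle-and-anneal step can be illustrated with a toy version. The function names, the linear cooling schedule, and a single lag-1 autocorrelation objective are assumptions for this sketch; the paper's objective function covers richer temporal properties.

```python
import math
import random

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a sequence."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(len(x) - 1))
    return cov / var

def anneal_shuffle(values, target_acf1, n_iter=5000, t0=0.1, seed=0):
    """Reorder `values` by simulated annealing until the lag-1 autocorrelation
    approaches `target_acf1`. Shuffling changes persistence while leaving the
    marginal distribution of the series untouched."""
    rng = random.Random(seed)
    x = list(values)
    cost = abs(lag1_autocorr(x) - target_acf1)
    for k in range(n_iter):
        i, j = rng.randrange(len(x)), rng.randrange(len(x))
        x[i], x[j] = x[j], x[i]                   # propose a swap
        new_cost = abs(lag1_autocorr(x) - target_acf1)
        temp = max(t0 * (1 - k / n_iter), 1e-12)  # linear cooling schedule
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                       # accept the swap
        else:
            x[i], x[j] = x[j], x[i]               # reject: undo the swap
    return x, cost
```

Because only the ordering changes, the synthetic series keeps the historical marginal statistics exactly while its persistence is steered toward the user-specified target.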


Journal of Global Optimization | 2013

A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions

Jaroslav M. Fowkes; Nicholas I. M. Gould; Chris L. Farmer

We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations.
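A one-dimensional toy version of such a scheme follows, under stated assumptions: the cubic underestimator is minimised here by crude gridding rather than analytically as in the paper, and `L` must bound the third derivative (the 1-D analogue of a Hessian Lipschitz constant).

```python
import heapq
import math

def cubic_lower_bound(f, df, d2f, L, lo, hi, grid=256):
    """Lower-bound min f on [lo, hi] via the Hessian-Lipschitz estimate
    f(x) >= f(m) + f'(m)(x-m) + 0.5 f''(m)(x-m)^2 - (L/6)|x-m|^3,
    with m the midpoint and L a bound on |f'''|. The (possibly nonconvex)
    cubic is minimised by brute-force gridding in this sketch."""
    m = 0.5 * (lo + hi)
    fm, gm, hm = f(m), df(m), d2f(m)
    lb = min(
        fm + gm * d + 0.5 * hm * d * d - (L / 6.0) * abs(d) ** 3
        for d in (lo + (hi - lo) * i / grid - m for i in range(grid + 1))
    )
    return lb, m, fm

def branch_and_bound(f, df, d2f, L, lo, hi, tol=1e-5):
    """Interval branch and bound driven by the cubic lower bound."""
    lb, m, fm = cubic_lower_bound(f, df, d2f, L, lo, hi)
    best_x, best_f = m, fm
    heap = [(lb, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_f - tol:
            break                     # gap closed: nothing can improve best_f
        mid = 0.5 * (a + b)
        for a2, b2 in ((a, mid), (mid, b)):
            lb2, m2, fm2 = cubic_lower_bound(f, df, d2f, L, a2, b2)
            if fm2 < best_f:
                best_x, best_f = m2, fm2
            if lb2 < best_f - tol:
                heapq.heappush(heap, (lb2, a2, b2))
    return best_x, best_f
```

For example, minimising sin on [0, 2π] with L = 1 (since the third derivative of sin is bounded by 1) locates the global minimiser near 3π/2.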


12th European Conference on the Mathematics of Oil Recovery | 2010

Optimal Well Placement

Chris L. Farmer; Jaroslav M. Fowkes; Nicholas I. M. Gould

One is often faced with the problem of finding the optimal location and trajectory for an oil well. Increasingly this includes the additional complication of optimising the design of a multilateral well. We present a new approach based on the theory of expensive function optimisation. The key idea is to replace the underlying expensive function (i.e. the simulator response) by a cheap approximation (i.e. an emulator). This enables one to apply existing optimisation techniques to the emulator. Our approach uses a radial basis function interpolant to the simulator response as the emulator. Note that the case of a Gaussian radial basis function is equivalent to the geostatistical method of kriging, and radial basis functions can be interpreted as a single-layer neural network. We use a stochastic model of the simulator response to adaptively refine the emulator and optimise it using a branch and bound global optimisation algorithm. To illustrate our approach we apply it numerically to finding the optimal location and trajectory of a multilateral well in a reservoir simulation model using the industry-standard ECLIPSE simulator. We compare our results to existing approaches and show that our technique is comparable, if not superior, in performance to these approaches.


Applied Mathematics Letters | 2006

The motion of a viscous filament in a porous medium or Hele-Shaw cell: a physical realisation of the Cauchy-Riemann equations

Chris L. Farmer; Sam Howison

We consider the motion of a thin filament of viscous fluid in a Hele-Shaw cell. The appropriate thin film analysis and use of Lagrangian variables leads to the Cauchy-Riemann system in a surprisingly direct way. We illustrate the inherent ill-posedness of these equations in various contexts.


Journal of Physics: Conference Series | 2011

Data assimilation using Bayesian filters and B-spline geological models

Lian Duan; Chris L. Farmer; Ibrahim Hoteit; Xiaodong Luo; Irene M. Moroz

This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a two-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
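For illustration only (plain Cox-de Boor evaluation; the paper's actual parameterization details are not reproduced here), a clamped B-spline curve maps a handful of continuous control points to a smooth facies boundary, which is what makes the discrete facies geometry amenable to ensemble filtering:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree k."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def bspline_curve(control_pts, t, degree=3):
    """Point at parameter t in [0, 1) on a clamped B-spline
    through a list of 2-D control points."""
    n = len(control_pts)
    # clamped uniform knot vector on [0, 1]
    knots = ([0.0] * (degree + 1)
             + [(i + 1) / (n - degree) for i in range(n - degree - 1)]
             + [1.0] * (degree + 1))
    x = sum(bspline_basis(i, degree, t, knots) * control_pts[i][0] for i in range(n))
    y = sum(bspline_basis(i, degree, t, knots) * control_pts[i][1] for i in range(n))
    return x, y
```

Because the curve depends smoothly on the control points, the filter can update those continuous coordinates instead of the non-differentiable facies indicator.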


Archive | 2007

Uncertainty Evaluation in Reservoir Forecasting by Bayes Linear Methodology

Daniel Busby; Chris L. Farmer; Armin Iske

We propose application of Bayes linear methodology to uncertainty evaluation in reservoir forecasting. On the basis of this statistical model, effective emulators are constructed. The resulting statistical method is illustrated by application to a commonly used test case scenario, called PUNQS [11]. A statistical data analysis of different output responses is performed. Responses obtained from our emulator are compared with both true responses and with responses obtained using the response surface methodology (RSM), the basic method used by leading commercial software packages.
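Bayes linear adjustment itself is a single linear-algebra step. A minimal sketch (with made-up numbers, not PUNQS values) of the adjusted expectation E_d[B] = E[B] + Cov[B, D] Var[D]^{-1} (d - E[D]):

```python
import numpy as np

def bayes_linear_adjust(E_B, E_D, cov_BD, var_D, d_obs):
    """Bayes linear adjusted expectation of quantities B given observed data d."""
    gain = cov_BD @ np.linalg.inv(var_D)   # Bayes linear 'gain' matrix
    return E_B + gain @ (d_obs - E_D)

# Toy example: one forecast quantity B, one observed response D
E_B = np.array([10.0])         # prior expectation of the forecast quantity
E_D = np.array([5.0])          # prior expectation of the observable
cov_BD = np.array([[3.0]])     # prior covariance between B and D
var_D = np.array([[2.0]])      # prior variance of D
adjusted = bayes_linear_adjust(E_B, E_D, cov_BD, var_D, np.array([6.0]))
```

The adjusted variance follows analogously as Var[B] - Cov[B, D] Var[D]^{-1} Cov[D, B]; only expectations and covariances need to be specified, which is what makes the methodology tractable for expensive reservoir models.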


Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science | 2017

Uncertainty quantification and optimal decisions

Chris L. Farmer

A mathematical model can be analysed to construct policies for action that are close to optimal for the model. If the model is accurate, such policies will be close to optimal when implemented in the real world. In this paper, the different aspects of an ideal workflow are reviewed: modelling, forecasting, evaluating forecasts, data assimilation and constructing control policies for decision-making. The example of the oil industry is used to motivate the discussion, and other examples, such as weather forecasting and precision agriculture, are used to argue that the same mathematical ideas apply in different contexts. Particular emphasis is placed on (i) uncertainty quantification in forecasting and (ii) how decisions are optimized and made robust to uncertainty in models and judgements. This necessitates full use of the relevant data and, by balancing costs and benefits over the long term, may suggest policies quite different from those relevant to the short term.

Collaboration


Dive into Chris L. Farmer's collaborations.

Top Co-Authors


Nicholas I. M. Gould

Rutherford Appleton Laboratory
