Albert C. Reynolds
University of Tulsa
Publication
Featured research published by Albert C. Reynolds.
Mathematical Geosciences | 1997
Dean S. Oliver; Luciane B. Cunha; Albert C. Reynolds
Generating one realization of a random permeability field that is consistent with observed pressure data and a known variogram model is not a difficult problem. If, however, one wants to investigate the uncertainty of reservoir behavior, one must generate a large number of realizations and ensure that the distribution of realizations properly reflects the uncertainty in reservoir properties. The most widely used method for conditioning permeability fields to production data has been the method of simulated annealing, in which practitioners attempt to minimize the difference between the “true” and simulated production data, and between the “true” and simulated variograms. Unfortunately, the meaning of the resulting realization is not clear and the method can be extremely slow. In this paper, we present an alternative approach to generating realizations that are conditional to pressure data, focusing on the distribution of realizations and on the efficiency of the method. Under certain conditions that can be verified easily, the Markov chain Monte Carlo method is known to produce states whose frequencies of appearance correspond to a given probability distribution, so we use this method to generate the realizations. To make the method more efficient, we perturb the states in such a way that the variogram is satisfied automatically and the pressure data are approximately matched at every step. These perturbations make use of sensitivity coefficients calculated from the reservoir simulator.
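The sampling loop this abstract describes can be illustrated with a minimal Metropolis-type MCMC sketch. The quadratic log-posterior below is a hypothetical stand-in for the paper's simulator-based data mismatch, and all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(m):
    # Hypothetical stand-in for the prior + data-mismatch terms;
    # in the paper this would involve a reservoir-simulator run.
    return -0.5 * np.sum((m - 1.0) ** 2)

def metropolis(n_steps=5000, step=0.5, dim=4):
    """Plain Metropolis sampler: accepted states appear with
    frequencies proportional to exp(log_posterior)."""
    m = np.zeros(dim)
    lp = log_posterior(m)
    chain = []
    for _ in range(n_steps):
        prop = m + step * rng.standard_normal(dim)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject test
            m, lp = prop, lp_prop
        chain.append(m.copy())
    return np.array(chain)

chain = metropolis()
print(chain[2000:].mean(axis=0))  # post-burn-in mean, near 1.0 per component
```

The paper's contribution is in the proposal step: instead of the blind Gaussian perturbation above, states are perturbed so the variogram holds automatically and the pressure data stay approximately matched.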
Computers & Geosciences | 2013
Alexandre A. Emerick; Albert C. Reynolds
In the last decade, ensemble-based methods have been widely investigated and applied for data assimilation of flow problems associated with atmospheric physics and petroleum reservoir history matching. This paper focuses entirely on the reservoir history-matching problem. Among the ensemble-based methods, the ensemble Kalman filter (EnKF) is the most popular for history-matching applications. However, the recurrent simulation restarts required in the EnKF sequential data assimilation process may prevent the use of EnKF when the objective is to incorporate the history matching in an integrated geo-modeling workflow. In this situation, the ensemble smoother (ES) is a viable alternative. However, because ES computes a single global update, it may not result in acceptable data matches; therefore, the development of efficient iterative forms of ES is highly desirable. In this paper, we propose to assimilate the same data multiple times with an inflated measurement error covariance matrix in order to improve the results obtained by ES. This method is motivated by the equivalence between single and multiple data assimilation for the linear-Gaussian case. We test the proposed method for three synthetic reservoir history-matching problems. Our results show that the proposed method provides better data matches than those obtained with standard ES and EnKF, with a computational cost comparable with the computational cost of EnKF.
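The multiple-data-assimilation idea (now commonly called ES-MDA) can be sketched on a linear-Gaussian toy problem. The forward model, dimensions, and covariances below are hypothetical stand-ins for the reservoir simulator and field data:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    # Hypothetical linear forward model g(m) = G m standing in
    # for the reservoir simulator.
    G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
    return G @ m

n_ens, n_a = 200, 4              # ensemble size, number of assimilations
c_d = 0.01 * np.eye(3)           # measurement-error covariance
m_true = np.array([1.0, -0.5])
d_obs = forward(m_true)

M = rng.standard_normal((2, n_ens))    # prior ensemble, N(0, I)
for _ in range(n_a):
    alpha = n_a                        # inflation factor; the 1/alpha's sum to 1
    D = np.stack([forward(M[:, j]) for j in range(n_ens)], axis=1)
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    c_md = dM @ dD.T / (n_ens - 1)     # cross-covariance estimate
    c_dd = dD @ dD.T / (n_ens - 1)     # predicted-data covariance estimate
    E = np.sqrt(alpha) * rng.multivariate_normal(np.zeros(3), c_d, n_ens).T
    K = c_md @ np.linalg.inv(c_dd + alpha * c_d)
    M = M + K @ (d_obs[:, None] + E - D)   # same data, inflated noise

print(M.mean(axis=1))  # posterior mean, near m_true
```

In this linear-Gaussian setting the four inflated updates are equivalent (up to sampling error) to a single standard ES update, which is the equivalence motivating the method.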
SPE Reservoir Evaluation & Engineering | 2010
E. Peters; R.J. Arts; G.K. Brouwer; C.R. Geel; S. Cullick; R.J. Lorentzen; Yan Chen; K.N.B. Dunlop; F.C. Vossepoel; R. Xu; Pallav Sarma; A.H. Alhutali; Albert C. Reynolds
In preparation for the SPE Applied Technology Workshop (ATW) held in Brugge in June 2008, a unique benchmark project was organized to test the combined use of waterflooding-optimization and history-matching methods in a closed-loop workflow. The benchmark was organized in the form of an interactive competition during the months preceding the ATW. The goal set for the exercise was to create a set of history-matched reservoir models and then to find an optimal waterflooding strategy for an oil field containing 20 producers and 10 injectors that can each be controlled by three inflow-control valves (ICVs). A synthetic data set was made available to the participants by TNO, consisting of well-log data, the structure of the reservoir, 10 years of production data, inverted time-lapse seismic data, and other information necessary for the exercise. The parameters to be estimated during the history match were permeability, porosity, and net-to-gross (NTG) thickness ratio. The optimized production strategy was tested on a synthetic truth model developed by TNO, which was also used to generate the production data and inverted time-lapse seismic. Because of time and practical constraints, a full closed-loop exercise was not possible; however, the participants could obtain the response to their production strategy after 10 years, update their models, and resubmit a revised production strategy for the final 10 years of production. In total, nine groups participated in the exercise. The spread of the net present value (NPV) obtained by the different participants is on the order of 10%. The highest result that was obtained is only 3% below the optimized case determined for the known truth field. Although not an objective of this exercise, it was shown that the increase in NPV as a result of having three control intervals per well instead of one was considerable (approximately 20%).
The results also showed that the NPV achieved with the flooding strategy that was updated after additional production data became available was consistently higher than before the data became available.
Computational Geosciences | 2012
Alexandre A. Emerick; Albert C. Reynolds
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum reservoir models. However, it is known that EnKF may fail to give acceptable data matches especially for highly nonlinear problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF. In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of information during the truncation of small singular values.
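The truncation of small singular values mentioned at the end can be illustrated with a generic truncated-SVD pseudo-inverse. The energy-fraction criterion below is a common choice, not necessarily the paper's exact rescaling procedure:

```python
import numpy as np

def tsvd_pinv(A, energy=0.99):
    """Pseudo-inverse via truncated SVD: keep the leading singular
    values accounting for `energy` of the total singular-value sum,
    discarding the small values that would amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

A = np.array([[2.0, 0.0],
              [0.0, 1e-12]])     # nearly rank-deficient matrix
Ainv = tsvd_pinv(A)              # the 1e-12 singular value is truncated
print(Ainv)
```

Without truncation, the 1/1e-12 term would dominate the inverse; with it, the ill-determined direction is simply zeroed out, which is the loss-of-information issue the paper's rescaling addresses.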
ECMOR VIII - 8th European Conference on the Mathematics of Oil Recovery | 2002
Fengjun Zhang; Albert C. Reynolds
Within the framework of Bayesian statistics, realizations or estimates of rock property fields can be generated by automatic history matching of production data using a prior model to provide regularization. In this context, automatic history matching requires the minimization of an objective function which includes both model and data mismatch terms. For large-scale problems, the computational efficiency and robustness of the optimization algorithms used for minimization are of paramount importance. From a comparison of algorithms for a variety of history matching problems, a scaled limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm was identified as the most promising for large-scale optimization problems.
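The objective function described above, a model-mismatch (prior) term plus a data-mismatch term, can be minimized with an off-the-shelf limited-memory BFGS routine. The linear toy problem below is purely illustrative; the matrix G and the noise levels are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear "history matching" toy problem.
G = np.array([[1.0, 2.0], [0.5, 1.0], [1.5, 0.2]])
m_prior = np.zeros(2)
d_obs = G @ np.array([0.8, -0.3])   # noise-free synthetic data
sig_d, sig_m = 0.1, 1.0             # data and prior standard deviations

def objective(m):
    # Model-mismatch (regularization) term + data-mismatch term.
    r = G @ m - d_obs
    return 0.5 * np.sum((m - m_prior) ** 2) / sig_m**2 \
         + 0.5 * np.sum(r ** 2) / sig_d**2

def gradient(m):
    return (m - m_prior) / sig_m**2 + G.T @ (G @ m - d_obs) / sig_d**2

res = minimize(objective, m_prior, jac=gradient, method="L-BFGS-B")
print(res.x)  # MAP estimate, pulled toward [0.8, -0.3]
```

L-BFGS stores only a short history of gradient differences rather than a full Hessian approximation, which is why it scales to the large parameter counts typical of gridblock-property estimation.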
Computational Geosciences | 2013
Sy T. Do; Albert C. Reynolds
Performing a line search method in the direction given by the simplex gradient is a well-known method in the mathematical optimization community. For reservoir engineering optimization problems, both a modification of the simultaneous perturbation stochastic approximation (SPSA) and ensemble-based optimization (EnOpt) have recently been applied for estimating optimal well controls in the production optimization step of closed-loop reservoir management. The modified SPSA algorithm has also been applied to assisted history-matching problems. A recent comparison of the performance of EnOpt and an SPSA-type algorithm (G-SPSA) for a set of production optimization test problems showed that the two algorithms resulted in similar estimates of the optimal net-present-value and required roughly the same amount of computational time to achieve these estimates. Here, we show that, theoretically, this result is not surprising. In fact, we show that the simplex, preconditioned simplex, and EnOpt algorithms can all be derived directly from a modified SPSA-type algorithm, where the preconditioned simplex algorithm is presented for the first time in this paper. We also show that the expectation of all these preconditioned stochastic gradients is a first-order approximation of the preconditioning covariance matrix times the true gradient, or of a covariance matrix squared times the true gradient.
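A minimal SPSA sketch shows why a single simultaneous perturbation yields a usable stochastic gradient from only two function evaluations. The objective `J` and the gain constants below are hypothetical stand-ins (e.g., for a negative NPV), not the paper's G-SPSA settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def spsa_gradient(J, m, c=0.1):
    """One SPSA gradient estimate: a single +/- Bernoulli perturbation
    of ALL components at once gives an estimate whose expectation is a
    first-order approximation of the true gradient."""
    delta = rng.choice([-1.0, 1.0], size=m.size)
    return (J(m + c * delta) - J(m - c * delta)) / (2.0 * c) / delta

def J(m):
    # Hypothetical smooth objective standing in for -NPV.
    return np.sum((m - 2.0) ** 2)

m = np.zeros(3)
for k in range(200):                 # simple stochastic descent loop
    a = 0.1 / (1 + k) ** 0.602      # decaying gain sequence
    m = m - a * spsa_gradient(J, m)
print(m)  # approaches the minimizer [2, 2, 2]
```

Replacing `delta` with samples drawn from a covariance matrix gives the preconditioned variants the paper analyzes; the cost per iteration stays at two objective evaluations regardless of the number of controls.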
Petroleum Geoscience | 2001
Dean S. Oliver; Albert C. Reynolds; Zhuoxin Bi; Yafes Abacioglu
The problem of mapping reservoir properties, such as porosity and permeability, and of assessing the uncertainty in the mapping has been largely approached probabilistically, i.e. uncertainty is estimated based on the properties of an ensemble of random realizations of the reservoir properties all of which satisfy constraints provided by data and prior geological knowledge. When the constraints include observations of production characteristics, the problem of generating a representative ensemble of realizations can be quite difficult partly because the connection between a measurement of water-cut or GOR at a well and the permeability at some other location is by no means obvious. In this paper, the progress towards incorporation of production data and remaining challenges are reviewed.
Journal of Canadian Petroleum Technology | 2004
Albert C. Reynolds; R. Li; Dean S. Oliver
This paper focuses on the simultaneous estimation of the absolute permeability field and relative permeability curves from three-phase flow production data. Irreducible water saturation, critical gas saturation, and residual oil saturations are assumed to be known. The two-phase relative permeability curves for an oil-gas system and the two-phase relative permeability curves for an oil-water system are represented by power law models. The three-phase oil relative permeability curve is calculated from the two sets of two-phase curves using Stone's Model II. The adjoint method is applied to three-phase flow problems to calculate the sensitivity of production data to the absolute permeability field and the parameters defining the relative permeability functions. Using the calculated sensitivity coefficients, the absolute permeability field and the relative permeability curves are estimated by automatic history matching of production data.
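The parameterization described above can be sketched as power-law (Corey-type) two-phase curves combined through Stone's Model II. All endpoint values and exponents below are illustrative assumptions, not the paper's estimated parameters:

```python
import numpy as np

# Hypothetical endpoints and power-law exponents; in the paper these
# are the parameters estimated by history matching.
swi, sorw, sgc, sorg = 0.2, 0.2, 0.05, 0.1
krw_max, krg_max, krocw = 0.4, 0.8, 0.9
nw, now_, ng, nog = 2.0, 2.0, 2.0, 2.0

def krw(sw):   # water curve, oil-water system
    s = np.clip((sw - swi) / (1 - swi - sorw), 0, 1)
    return krw_max * s**nw

def krow(sw):  # oil curve, oil-water system
    s = np.clip((1 - sw - sorw) / (1 - swi - sorw), 0, 1)
    return krocw * s**now_

def krg(sg):   # gas curve, oil-gas system
    s = np.clip((sg - sgc) / (1 - swi - sgc - sorg), 0, 1)
    return krg_max * s**ng

def krog(sg):  # oil curve, oil-gas system
    s = np.clip((1 - swi - sg - sorg) / (1 - swi - sorg), 0, 1)
    return krocw * s**nog

def kro_stone2(sw, sg):
    """Three-phase oil relative permeability from Stone's Model II,
    built from the two sets of two-phase curves (clipped at zero)."""
    val = krocw * ((krow(sw) / krocw + krw(sw)) * (krog(sg) / krocw + krg(sg))
                   - (krw(sw) + krg(sg)))
    return max(val, 0.0)

print(kro_stone2(0.2, 0.0))  # at Swi with no gas this reduces to krocw = 0.9
```

Note that Stone's Model II can produce negative values at high water and gas saturations, which is why the clip at zero is standard practice.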
SPE Annual Technical Conference and Exhibition | 2007
Gaoming Li; Albert C. Reynolds
The ensemble Kalman filter (EnKF) is a subject of intensive investigation for use as a reservoir management tool. For strongly nonlinear problems, however, EnKF can fail to achieve an acceptable data match at certain times in the assimilation process. Here, we provide iterative EnKF procedures to remedy this deficiency and explore the validity of these iterative methods compared to standard EnKF by considering two examples, one of which pertains to a simple problem where the posterior probability density function has two modes. In both examples, we are able to obtain better data matches using iterative methods than with standard EnKF. In Appendix A, we enumerate the assumptions that must hold in order to show that EnKF provides a correct sampling of the probability distribution for the random variables. This derivation calls into question the common derivation in which one adds the data to the original combined state vector of model parameters and dynamical variables. In fact, it appears that there is no assurance that this trick for turning a nonlinear problem into a linear problem results in a correct sampling of the pdf one wishes to sample. However, we show that augmenting the state vector with the data results in a correct procedure for sampling the pdf if, at every data assimilation step, the predicted data vector is a linear function of the combined (unaugmented) state vector and the average predicted data vector is equal to the predicted data evaluated at the average of the predicted combined state vector. Without these assumptions, we know of no way to show EnKF samples correctly. For completeness, in Appendix C, we show that each ensemble member of model parameters obtained at each step of EnKF is a linear combination of the initial ensemble, which emphasizes the importance of obtaining a sufficiently large initial ensemble.
Introduction
The ensemble Kalman filter (EnKF) was introduced by Evensen (1994) in the ocean-dynamics literature as a Monte Carlo approximation of the extended Kalman filter and has been extensively discussed in the weather-prediction literature. EnKF was recently introduced into the petroleum engineering literature (Naevdal et al., 2002, 2003) and adapted to the problem of estimating reservoir variables or parameters (permeability and porosity fields). Since its introduction into the petroleum engineering literature, EnKF has been investigated in a reservoir characterization setting by a variety of researchers, including Gu and Oliver (2004); Skjervheim et al. (2005); Gao et al. (2005); Liu and Oliver (2005); Wen and Chen (2005); Zafari and Reynolds (2005a); Skjervheim et al. (2006); and Thulin et al. (2007). The method has also recently been applied successfully to a true field case (Evensen et al., 2007). As shown in Gao et al. (2006), EnKF and the more computationally intense randomized maximum likelihood (RML) method gave a similar model estimate and a similar characterization of uncertainty in reservoir performance predictions for the well-known PUNQ-S3 problem. For the most part, EnKF has performed well for reservoir characterization examples. However, it is relatively easy to generate toy problems with multimodal conditional pdfs for which EnKF samples very poorly and hence provides a poor assessment of uncertainty (Zafari, 2005; Zafari and Reynolds, 2005b; Reynolds et al., 2006). Reynolds et al. (2006) also showed a small but representative reservoir problem where EnKF has difficulty correctly assimilating water-cut data, and because of this, they designed an iterative process that combines features of randomized maximum likelihood (Oliver et al., 1996; Zhang and Reynolds, 2002; Zhang et al., 2005). Reynolds et al. (2006) also showed that the EnKF update (analysis) equation is the same equation as one obtains by using RML with one Gauss-Newton iteration with a full step, using the EnKF forecast (prediction) as the initial guess. Because of this result, it is not surprising that it may be necessary to use an iterative procedure to obtain an acceptable match of data for highly nonlinear problems. Here, we present a detailed derivation of our current version of the Reynolds et al. (2006) algorithm and refer to this algorithm as IEnKF(1). In IEnKF(1), we simply match data sequentially in time as in the normal EnKF procedure. However, we iterate using a gradient-based algorithm to obtain a better match of data than can be obtained by EnKF. IEnKF(2) simply refers to converting from EnKF to randomized maximum likelihood when EnKF fails to provide a good match of data and will not be discussed here. To iterate with either of these two methods, we compute at each iteration the gradient of an objective function using the adjoint procedure (Li et al., 2003; Zhang and Reynolds, 2002), which requires one forward run from time zero to the current data assimilation time and one solution of the adjoint system backward from the current time to time zero. To avoid the forward and adjoint solution from time zero, we formulate an iterative scheme, IEnKF(3), which, at each iteration, requires only a forward run from the previous data assimilation time to the current data assimilation time and an adjoint solution from the current data assimilation time back to the previous data assimilation time. While this last iterative scheme is far more efficient than the first two, it requires iteratively updating the primary variables of the reservoir simulator at the previous data assimilation time. It is conceivable that this could lead to highly non-physical values which could introduce errors into our estimates.
Our limited experiments show that this highly efficient method gives reasonable results, but far more testing is needed. As with EnKF and RML, one of the assumptions necessary to prove that the three iterative EnKF methods sample correctly is that there is a linear relation between the data and the vector of model parameters. Historically, in the atmospheric literature, predictions were made forward in time based on the ensemble of states obtained at the most recent data assimilation step. In reservoir simulation applications, it has not been clearly established whether predictions should be made forward in time using the reservoir simulator from the last updated (analyzed) ensemble of reservoir parameters and simulation primary variables or, instead, should be made from time zero using the ensemble of reservoir parameters obtained at the last data assimilation step. The first procedure has the advantage of computational efficiency, but the second has the advantage that we avoid nonphysical values of pressure and saturation at all time steps and maintain material balances. The hope is that the two methods will give equivalent or at least very similar results; in some cases this is true, but in others the results can be radically different (Zhao et al., 2007). For a linear problem with no model error, a Gaussian prior model and fixed known initial conditions, Li and Reynolds (2007) have shown that running from time zero with the final ensemble of model parameters gives the same predictions of states (primary variables) as are obtained by running forward from the last data assimilation step. A refined version of this result is given in Thulin et al. (2007). Similarly, Zafari and Reynolds (2005a) have shown that for a linear problem with a Gaussian prior, no model error and fixed initial conditions, EnKF becomes equivalent to randomized maximum likelihood as the ensemble size goes to infinity. Thus, in this situation, EnKF samples the correct pdf at least asymptotically.
For the same situation, the same methodology can be used to show that the iterative methods given here sample the pdf (Eq. 3) correctly as the ensemble size goes to infinity. While this is comforting, there is no guarantee the methods sample the pdf correctly for nonlinear, non-Gaussian problems.
Conditional PDF
The Nm-dimensional column vector of model parameters is denoted by m. Model parameters can include reservoir gridblock permeabilities and porosities, fault transmissibilities, fluid contacts, initial fluid distributions or parameters describing relative permeabilities, but in the examples presented here only porosities and log-permeabilities are included as model parameters. We let ti, i = 1, 2, · · · , denote the simulation time steps with t0 = 0 and let the random Np-dimensional column vector pi denote the vector of dynamical variables, i.e., the primary variables of the reservoir simulation equations at time ti, where p0 denotes the random vector which represents the initial conditions. For a black-oil system, pi includes pressures, saturations and dissolved gas-oil ratios. Note that the random vector pi depends on time but the reservoir model m, i.e., the Nm-dimensional random column vector of model parameters, does not. However, the joint conditional pdf for these random vectors evolves in time as more data are assimilated. In the specific examples considered here, p0 is fixed and known; an example where this is not the case is given in Thulin et al. (2007). We also neglect model error in our examples. Boundary conditions are also assumed known. Thus, the reservoir simulator defines a deterministic relationship between the model parameters and the primary variables, so the primary variables of the simulator are random variables only through their relation to the random vector m. We nevertheless use EnKF to sequentially update both m and the primary variables, as this conceptually avoids the necessity of rerunning the reservoir simulator from time zero with each updated ensemble member.
The vector dn represents the Nn-dimensional random column vector of predicted data at time tin, n = 1, 2, · · · , where these times denote the times at which we wish to assimilate data, i.e., condition m and the pin's to the observations, dobs. It is conceptually possible to sample the pdf f(pin, pin−1, · · · , pi1, p0, m | dobs,
SPE Journal | 1997
Nanqun He; Albert C. Reynolds; Dean S. Oliver
Bayesian estimation techniques are applied to generate three-dimensional permeability and porosity fields conditioned to prior means, variograms, hard data and/or multiwell pressure data. A posteriori variances are generated to obtain a measure of the uncertainty in the rock property fields, or equivalently, a measure of the variability in realizations of the rock property fields. These a posteriori variances can also be used to quantify the value of data, i.e., to determine the reduction in uncertainty achieved by adding a particular type of data. A key ingredient of our methodology is the development and implementation of an efficient procedure to estimate sensitivity coefficients for three-dimensional single-phase flow problems.
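For the linear-Gaussian setting this abstract describes, the a posteriori covariance, and hence the variance reduction used to quantify the value of data, has a closed form. The sensitivity matrix G and the covariances below are hypothetical stand-ins for the quantities computed in the paper:

```python
import numpy as np

# Hypothetical sensitivity matrix (rows: pressure data, columns:
# gridblock properties), standing in for the adjoint-computed
# sensitivity coefficients.
G = np.array([[1.0, 0.3, 0.0],
              [0.0, 0.8, 0.4]])
c_m = np.eye(3)               # prior covariance (e.g., from a variogram)
c_d = 0.05 * np.eye(2)        # pressure measurement-error covariance

# Linear-Gaussian a posteriori covariance; its diagonal gives the
# remaining uncertainty after conditioning to the data.
c_post = np.linalg.inv(G.T @ np.linalg.inv(c_d) @ G + np.linalg.inv(c_m))

print(np.diag(c_m) - np.diag(c_post))  # variance reduction per parameter
```

Comparing the diagonal of `c_post` against the prior variances, for different candidate data sets, is one concrete way to measure the "value of data" the abstract refers to.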