Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dean S. Oliver is active.

Publication


Featured research published by Dean S. Oliver.


SPE Journal | 2007

An Iterative Ensemble Kalman Filter for Multiphase Fluid Flow Data Assimilation

Yaqing Gu; Dean S. Oliver

The dynamical equations for multiphase flow in porous media are highly nonlinear, and the number of variables required to characterize the medium is usually large, often two or more variables per simulator gridblock. Neither the extended Kalman filter nor the ensemble Kalman filter is suitable for assimilating data or for characterizing uncertainty for this type of problem. Although the ensemble Kalman filter handles the nonlinear dynamics correctly during the forecast step, it sometimes fails badly in the analysis (or updating) of saturations. This paper focuses on the use of an iterative ensemble Kalman filter for data assimilation in nonlinear problems, especially of the type related to multiphase flow in porous media. Two issues are key: (1) iteration to enforce constraints and (2) ensuring that the resulting ensemble is representative of the conditional pdf (i.e., that the uncertainty quantification is correct). The new algorithm is compared to the ensemble Kalman filter on several highly nonlinear example problems and is shown to be superior in the prediction of uncertainty.
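The analysis step that the abstract says can fail for saturations is, in the plain (non-iterative) EnKF, a regression-style update built entirely from ensemble statistics. Below is a minimal NumPy sketch of that standard analysis step with perturbed observations; the function and variable names are ours, and the paper's iterative variant adds constraint-enforcing iterations on top of an update like this one.

```python
import numpy as np

def enkf_analysis(X, d_obs, R, h):
    """One standard EnKF analysis (update) step on an ensemble.

    X     : (n_state, n_ens) ensemble of state vectors
    d_obs : (n_obs,) observed data
    R     : (n_obs, n_obs) observation-error covariance
    h     : observation operator mapping a state vector to predicted data
    """
    n_ens = X.shape[1]
    # Predicted data for each ensemble member
    Y = np.column_stack([h(X[:, j]) for j in range(n_ens)])
    # Ensemble anomalies (deviations from the ensemble mean)
    A = X - X.mean(axis=1, keepdims=True)
    B = Y - Y.mean(axis=1, keepdims=True)
    # Sample cross-covariance and predicted-data covariance
    C_xy = A @ B.T / (n_ens - 1)
    C_yy = B @ B.T / (n_ens - 1)
    # Kalman gain and perturbed-observation update
    K = C_xy @ np.linalg.inv(C_yy + R)
    rng = np.random.default_rng(0)
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    return X + K @ (D - Y)
```

For linear-Gaussian problems this update is statistically correct; the abstract's point is that for bimodal variables such as saturations behind a flood front, a single linear update of this form can produce non-physical values, which motivates iterating.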


Journal of Energy Resources Technology-Transactions of the ASME | 2006

The Ensemble Kalman Filter for Continuous Updating of Reservoir Simulation Models

Yaqing Gu; Dean S. Oliver

This paper reports the use of the ensemble Kalman filter (EnKF) for automatic history matching. The EnKF is a Monte Carlo method in which an ensemble of reservoir state variables is generated and kept up to date as data are assimilated sequentially. The uncertainty of the reservoir state variables is estimated from the ensemble at any time step. Two synthetic problems are selected to investigate two primary concerns with the application of the EnKF. The first concern is whether it is possible to use a Kalman filter to make corrections to state variables in a problem for which the covariance matrix almost certainly provides a poor representation of the distribution of variables. It is tested with a one-dimensional, two-phase waterflood problem. The water saturation takes large values behind the flood front and small values ahead of the front. The saturation distribution is bimodal and is not well modeled by the mean and variance. The second concern is that the representation of the covariance via a relatively small ensemble of state vectors may be inadequate. It is tested with a two-dimensional, two-phase problem. The number of ensemble members is kept the same as for the one-dimensional problem; hence, the number of ensemble members used to create the covariance matrix is far less than the number of state variables. We conclude that the EnKF can provide satisfactory history-matching results while requiring less computational work than traditional history-matching methods. DOI: 10.1115/1.2134735


Computational Geosciences | 2013

Levenberg–Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification

Yan Chen; Dean S. Oliver

The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother based on a Gauss–Newton formulation was able to match data relatively well, but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked the ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history.
The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
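The role of the LM damping parameter can be illustrated on a scalar toy problem: the parameter λ inflates the data-error term in the update, shortening early steps while the problem is still far from the data, and is relaxed as iterations proceed. The sketch below is a simplified, model-mismatch-free illustration in the spirit of LM-EnRML (approx); all names and the damping schedule are ours, not the paper's.

```python
import numpy as np

def lm_enrml_approx(M, d_obs, sd, g, n_iter=10, lam0=1.0):
    """Toy Levenberg-Marquardt-style iterative ensemble smoother
    for a single uncertain parameter (illustrative sketch only).

    M     : (n_ens,) ensemble of model parameters
    d_obs : scalar observation
    sd    : observation-error standard deviation
    g     : forward model, vectorized over arrays
    """
    rng = np.random.default_rng(0)
    lam = lam0
    # Perturbed observations, one per ensemble member
    D = d_obs + sd * rng.standard_normal(M.size)
    for _ in range(n_iter):
        G = g(M)
        dM = M - M.mean()
        dG = G - G.mean()
        c_md = (dM * dG).mean()   # sample parameter-data cross-covariance
        c_dd = (dG * dG).mean()   # sample predicted-data covariance
        # LM damping inflates the data-error term, shortening early steps
        M = M + c_md / (c_dd + (1.0 + lam) * sd**2) * (D - G)
        lam = max(lam / 2.0, 1e-6)  # relax damping as iterations proceed
    return M
```

On a smooth monotone forward model the iterations drive the predicted data toward the (perturbed) observations, which is the behavior the standard non-iterative ES cannot recover when its single linearized step is poor.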


SPE Reservoir Simulation Symposium | 2005

Critical Evaluation of the Ensemble Kalman Filter on History Matching of Geologic Facies

N. Liu; Dean S. Oliver

The objective of this paper is to compare the performance of the ensemble Kalman filter (EnKF) to the performance of a gradient-based minimization method for the problem of estimation of facies boundaries in history matching. The EnKF is a Monte Carlo method for data assimilation that uses an ensemble of reservoir models to represent and update the covariance of variables. In several published studies, it outperforms traditional history-matching algorithms in adaptability and efficiency. Because of the approximate nature of the EnKF, the realizations from one ensemble tend to underestimate the uncertainty, especially for problems that are highly nonlinear. In this paper, the distributions of reservoir model realizations from 20 independent ensembles are compared with the distributions from 20 randomized maximum likelihood (RML) realizations for a 2D waterflood model with one injector and four producers. RML is a gradient-based sampling method that generates one reservoir realization in each minimization of the objective function. It is an approximate sampling method, but its sampling properties are similar to those of the Markov chain Monte Carlo (McMC) method on highly nonlinear problems, and it is relatively more efficient than McMC. Despite the nonlinear relationship between the data, such as production rates and facies observations, and the model variables, the EnKF was effective at history matching the production data. We find that the computational effort to generate 20 independent realizations was similar for the two methods, although the complexity of the code is substantially less for the EnKF.


SPE Annual Technical Conference and Exhibition | 2003

Automatic History Matching of Geologic Facies

Ning Liu; Dean S. Oliver

The truncated plurigaussian method for modeling geologic facies is appealing not only for the wide variety of textures and shapes that can be generated, but also because of the internal consistency of the stochastic model. This method has not, however, been widely applied in simulating distributions of reservoir facies or in automatic history matching. One reason seems to be that it is fairly difficult to estimate the parameters of the stochastic model that could be used to generate geologic facies maps with the desired properties. The second is that because “facies type” is a discrete variable, it is not straightforward to apply efficient gradient-based minimization methods to generate reservoir facies models that honor production data. Non-gradient methods, however, are too slow for large field-scale problems. In this paper, the non-differentiable history-matching problem was replaced with a differentiable problem so that an automatic history-matching technique could be applied to the problem of conditional simulation of facies boundaries generated from the truncated plurigaussian method. The resulting realizations are consistent both with the geostatistical model of the observed facies and with the historic production. Application of the method requires efficient computation of the gradient of the objective function with respect to the model variables. We present an example five-spot water injection problem with more than 73,000 model variables conditioned to pressure data at wells. The gradient was computed using the adjoint simulator method, and the minimization routine used a quasi-Newton method. The data mismatch decreased more than 90% in the first two iterations.


Journal of Canadian Petroleum Technology | 2003

Sensitivity Coefficients for Three-Phase Flow History Matching

R. Li; Albert C. Reynolds; Dean S. Oliver

Changing the value of the porosity or the horizontal or vertical permeability in any grid cell in a reservoir simulator by a small amount often results in a small change in the value of a property predicted by the simulator. The ratio of change in prediction to change in reservoir property is called the sensitivity coefficient. In this paper, we describe the use of the adjoint system of equations to compute the sensitivity of wellbore pressure, water-oil ratio, and gas-oil ratio to changes in gridblock permeability and porosity. Unlike some other methods of computing sensitivity coefficients, this method is applicable for problems with large numbers of model parameters and for problems in which cross-flow and compressibility are significant. Although the adjoint system has previously been used to compute the gradient of an objective function for three-phase flow data with respect to model parameters, the sensitivity coefficients are significantly more powerful as they enable the use of Newton-like methods with quadratic convergence, i.e., estimation of the covariance of model estimates based on the inverse of the Hessian, and they provide insight into the information content of data. We use several three-phase flow examples with solution-gas drive, gas injection, and gravity segregation to illustrate these ideas.
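The definition in the opening sentences, the ratio of the change in a predicted quantity to a small change in one reservoir property, can be made concrete with a brute-force finite-difference sketch (the `predict` forward model here is a hypothetical stand-in for a reservoir simulator). The adjoint method described in the paper obtains all such coefficients far more cheaply than perturbing one gridblock parameter at a time.

```python
import numpy as np

def fd_sensitivity(predict, m, i, rel=1e-6):
    """Finite-difference estimate of one sensitivity coefficient:
    the ratio of the change in a predicted quantity to a small
    change in the i-th model parameter (e.g. one gridblock's
    permeability or porosity).

    predict : forward model mapping a parameter vector to a scalar
              prediction (hypothetical stand-in for a simulator)
    m       : (n_params,) parameter vector
    i       : index of the parameter to perturb
    """
    dm = rel * max(abs(m[i]), 1.0)   # small relative perturbation
    m_pert = m.copy()
    m_pert[i] += dm
    # Ratio of change in prediction to change in the property
    return (predict(m_pert) - predict(m)) / dm
```

Computing a full sensitivity matrix this way costs one forward run per parameter, which is why the adjoint formulation matters for problems with large numbers of model parameters.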


Mathematical Geosciences | 2015

Conditioning Truncated Pluri-Gaussian Models to Facies Observations in Ensemble-Kalman-Based Data Assimilation

Alina Astrakova; Dean S. Oliver

The truncated pluri-Gaussian model is a powerful tool for representing realistic spatial distributions of facies in reservoir characterization. It is suitable for generating stochastic three-dimensional facies realizations with complex vertical and lateral relationships such as are observed in algal mound behavior. Truncated pluri-Gaussian realizations account for anisotropies and relative proportions of the facies. Despite their advantages, truncated pluri-Gaussian models have not been extensively used in data assimilation techniques such as ensemble-Kalman-based algorithms. One of the major limitations encountered in the existing implementations is the difficulty of preserving facies observations at well locations through the data assimilation procedure, which arises even for weakly correlated data. In this work, the problem of maintaining consistency of realizations with facies observations is solved by merging the data assimilation algorithm (Levenberg–Marquardt ensemble randomized maximum likelihood) with an interior-point method suitable for handling inequality constraints. The iterative ensemble smoother is effective at assimilating highly nonlinear production data, while the interior-point method takes into account the inequality constraints on the Gaussian model variables during the data assimilation. The formulation uses an objective function that includes a model-mismatch term, a data-mismatch term, and a boundary penalization function. An approximate version of the method, which neglects the model-mismatch term, is also available. The algorithm was tested on a three-dimensional synthetic reservoir mimicking the algal mound shapes of an outcrop in the Paradox Basin of Utah, with a large number of strongly correlated facies data and significant changes in petrophysical properties between facies. The method achieved a substantial decrease in data mismatch while preserving the realism of the mound structure and the variability in the ensemble.
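The truncation idea can be illustrated with a toy rule: two continuous Gaussian fields are mapped to discrete facies by thresholding, and data assimilation updates the continuous Gaussian variables while inequality constraints on them keep the facies fixed at well locations. The rule below is a generic two-threshold illustration, not the paper's specific truncation map.

```python
import numpy as np

def truncate_plurigaussian(z1, z2, t1=0.0, t2=0.0):
    """Map two (possibly correlated) Gaussian fields to discrete
    facies with a simple 2-D truncation rule (a toy rule):
      facies 0: z1 <  t1
      facies 1: z1 >= t1 and z2 <  t2
      facies 2: z1 >= t1 and z2 >= t2
    """
    facies = np.zeros(z1.shape, dtype=int)
    facies[(z1 >= t1) & (z2 < t2)] = 1
    facies[(z1 >= t1) & (z2 >= t2)] = 2
    return facies
```

Observing facies 1 at a well, for example, translates into the inequality constraints z1 >= t1 and z2 < t2 at that location, which is exactly the kind of constraint the interior-point method enforces during assimilation.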


Computational Geosciences | 2017

Localization and regularization for iterative ensemble smoothers

Yan Chen; Dean S. Oliver

Ensemble-based data assimilation methods have recently become popular for solving reservoir history matching problems, but because of the practical limitation on ensemble size, using localization is necessary to reduce the effect of sampling error and to increase the degrees of freedom for incorporating large amounts of data. Local analysis in the ensemble Kalman filter has been used extensively for very large models in numerical weather prediction. It scales well with the model size and the number of data and is easily parallelized. In the petroleum literature, however, iterative ensemble smoothers with localization of the Kalman gain matrix have become the state-of-the-art approach for ensemble-based history matching. By forming the Kalman gain matrix row-by-row, the analysis step can also be parallelized. Localization regularizes updates to model parameters and state variables using information on the distance between these variables and the observations. The truncation of small singular values in truncated singular value decomposition (TSVD) at the analysis step provides another type of regularization by projecting updates to dominant directions spanned by the simulated data ensemble. Typically, the combined use of localization and TSVD is necessary for problems with large amounts of data. In this paper, we compare the performance of Kalman gain localization to two forms of local analysis for parameter estimation problems with nonlocal data. The effect of TSVD with different localization methods and with the use of iteration is also analyzed. With several examples, we show that good results can be obtained for all localization methods if the localization range is chosen appropriately, but the optimal localization range differs for the various methods. In general, for local analysis with observation taper, the optimal range is somewhat shorter than the optimal range for other localization methods.
Although all methods gave equivalent results when used in an iterative ensemble smoother, the local analysis methods generally converged more quickly than Kalman gain localization when the amount of data is large compared to ensemble size.
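A common concrete choice of distance-based taper for Kalman gain localization is the Gaspari-Cohn fifth-order compactly supported correlation function, which equals one at zero distance and decays smoothly to zero at twice its length-scale parameter. It is shown here as an illustration of the technique; the paper itself compares several localization forms, and the choice of taper and range is problem-dependent.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order compactly supported correlation
    function, a standard distance-based localization taper.

    dist : array of distances between parameters and observations
    c    : length-scale parameter (support is [0, 2c])
    """
    r = np.abs(dist) / c
    rho = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    rho[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
               - (5.0 / 3.0) * r[m1]**2 + 1.0)
    rho[m2] = (r[m2]**5 / 12.0 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
               + (5.0 / 3.0) * r[m2]**2 - 5.0 * r[m2] + 4.0
               - (2.0 / 3.0) / r[m2])
    return rho
```

Kalman gain localization then applies the taper elementwise, K_localized = rho_matrix * K, where rho_matrix holds the taper value for each parameter-observation pair; entries beyond distance 2c are zeroed, suppressing spurious long-range sample correlations.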


Computational Geosciences | 2015

Ensemble-based multi-scale history-matching using second-generation wavelet transform

Théophile Gentilhomme; Dean S. Oliver; Trond Mannseth; Guillaume Caumon; Rémi Moyen; Philippe Doyen

Ensemble-based optimization methods are often efficiently applied to history-matching problems. Although satisfactory matches can be obtained, the updated realizations, affected by spurious correlations, generally fail to preserve prior information when using a small ensemble, even when localization is applied. In this work, we propose a multi-scale approach based on grid-adaptive second-generation wavelets. These wavelets can be applied on irregular reservoir grids of any dimension containing dead or flat cells. The proposed method starts by modifying a few low-frequency parameters (coarse scales) and then progressively allows more important updates on a limited number of sensitive parameters of higher resolution (fine scales). The Levenberg-Marquardt ensemble randomized maximum likelihood (LM-enRML) is used as the optimization method with a new space-frequency distance-based localization of the Kalman gain, specifically designed for the multi-scale scheme. The algorithm is evaluated on two test cases. The first test is a 2D synthetic case in which several inversions are run using independent ensembles. The second test is the Brugge benchmark case with 10 years of history. The efficiency and quality of results of the multi-scale approach are compared with the grid-block-based LM-enRML with distance-based localization. We observe that the final realizations better preserve the spatial contrasts of the prior models and are less noisy than the realizations updated using a standard grid-block method, while matching the production data equally well.
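The lifting scheme underlying second-generation wavelets can be illustrated with its simplest instance, a one-level Haar transform written in predict/update form. This is illustrative only: the paper's grid-adaptive transform extends the same predict/update machinery to irregular grids with dead or flat cells.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar transform in lifting form -- the simplest
    instance of the lifting scheme behind second-generation wavelets.
    Splits x (even length) into coarse-scale and detail coefficients.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even             # predict step: odd samples from even
    coarse = even + 0.5 * detail    # update step: preserves the local mean
    return coarse, detail

def haar_lifting_inverse(coarse, detail):
    """Exact inverse: undo the update step, then the predict step."""
    even = coarse - 0.5 * detail
    odd = detail + even
    x = np.empty(2 * coarse.size)
    x[0::2], x[1::2] = even, odd
    return x
```

In a multi-scale history-matching loop, early iterations would update only the coarse coefficients and later iterations would progressively open up the detail coefficients, which is the coarse-to-fine strategy the abstract describes.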


International Petroleum Technology Conference | 2008

Reservoir Simulation Model Updates via Automatic History Matching with Integration of Seismic Impedance Change and Production Data

Yannong Dong; Dean S. Oliver

Automatic history matching can be used to incorporate 4D seismic data into reservoir characterization by adjusting values of permeability and porosity to minimize the difference between the observed impedance change and the predicted impedance change, while remaining as close as possible to the initial geological model. To perform the history matching efficiently, an adjoint method is used to compute the gradient of the data mismatch and a quasi-Newton method is used to compute the search direction. Compared to other approaches that use the time-lapse seismic data to infer saturation and pressure directly, this method uses a finite-difference, black-oil reservoir simulator to ensure that the material balance and flow equations are honored. Two ancillary issues were important in obtaining a match. First, it was necessary to convert the map of change in reflection coefficients to a map of change in impedance. Second, it was necessary to characterize the noise in observed seismic impedance change data to prevent overmatching of the data. All the procedures are illustrated with an application to a reservoir in the Gulf of Mexico. The objective of automatic history matching is to obtain a reservoir simulation model that honors observed production history and is geologically plausible. Although dynamic data from the wells, such as pressure, gas-oil ratio (GOR), and water-oil ratio (WOR), provide useful information for reservoir characterization in traditional history matching, the resolution of an estimate obtained from this type of data is typically poor. The only way to improve the resolution in such cases is to integrate additional …

Collaboration


Dive into Dean S. Oliver's collaborations.

Top co-authors:

Ning Liu (University of Oklahoma)

Yaqing Gu (University of Oklahoma)