
Publication


Featured research published by Céline Scheidt.


Computational Geosciences | 2013

History matching and uncertainty quantification of facies models with multiple geological interpretations

Hyucksoo Park; Céline Scheidt; Darryl Fenwick; Alexandre Boucher; Jef Caers

Uncertainty quantification is currently one of the leading challenges in the geosciences, in particular in reservoir modeling. A wealth of subsurface data as well as expert knowledge is available to quantify uncertainty and state predictions on reservoir performance or reserves. The geosciences component within this larger modeling framework is partially an interpretive science. Geologists and geophysicists interpret data to postulate on the nature of the depositional environment, for example on the type of fracture system, the nature of faulting, and the type of rock physics model. Often, several alternative scenarios or interpretations are offered, sometimes with an associated degree of belief quantified with probabilities. In the context of facies modeling, this could result in various interpretations of facies architecture, associations, geometries, and the way they are distributed in space. A quantitative approach to specifying this uncertainty is to provide a set of alternative 3D training images from which several geostatistical models can be generated. In this paper, we consider quantifying uncertainty on facies models in the early development stage of a reservoir, when there is still considerable uncertainty on the nature of the spatial distribution of the facies. At this stage, production data are available to further constrain uncertainty. We develop a workflow that consists of two steps: (1) determining which training images are no longer consistent with production data and should be rejected, and (2) history matching with a given fixed training image. We illustrate our ideas and methodology on a test case derived from a real field case of predicting flow in a newly planned well in a turbidite reservoir off the West African coast.


Mathematical Geosciences | 2015

Prediction-Focused Subsurface Modeling: Investigating the Need for Accuracy in Flow-Based Inverse Modeling

Céline Scheidt; Philippe Renard; Jef Caers

The objective of most formulations of inverse modeling in the Earth Sciences is to estimate the model parameters given the observation data. In this paper, an additional element to these formulations is considered, namely, the prediction for which the models are built. An example of such modeling is the prediction of solute transport using geological models of the subsurface constrained to existing geophysical, flow dynamic and solute observations. The paper then illustrates and addresses a fundamental question relating data, model and prediction: does model inversion reduce uncertainty in prediction variables given the observed data? To investigate this question, a diagnostic tool is proposed to assess whether matching the observed data significantly reduces uncertainty in the prediction variables. In addition, for some cases, a quick estimate of uncertainty can be obtained without applying inverse modeling. It relies on a dimensionality reduction method using non-linear principal component analysis (NLPCA) calibrated from evaluating the prediction and data response variables on a few prior Earth models. The proposed diagnostic tool is applied on a simple example of tracer flow and, for the cases investigated, the NLPCA provided an accurate diagnostic and a satisfactory pre-estimation of uncertainty in the prediction variables.
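The diagnostic idea in this abstract, checking whether the data variables actually inform the prediction variables before attempting any inversion, can be sketched in a few lines. The following is a hypothetical illustration only: it substitutes plain linear PCA for the paper's NLPCA, and the function name, threshold interpretation, and inputs are invented for the example.

```python
import numpy as np

def prediction_focus_diagnostic(data_resp, pred_resp, n_comp=2):
    """Project data and prediction responses of a few prior models into a
    low-dimensional space (linear PCA here, standing in for the paper's
    NLPCA) and measure how strongly the two score spaces correlate.
    A value near zero suggests matching the observed data would barely
    reduce uncertainty in the prediction variables."""
    def pca_scores(X, k):
        Xc = X - X.mean(axis=0)
        # SVD-based principal component scores
        U, s, _ = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :k] * s[:k]

    d = pca_scores(data_resp, n_comp)
    h = pca_scores(pred_resp, n_comp)
    # largest absolute correlation between any data/prediction component pair
    corr = np.corrcoef(d.T, h.T)[:n_comp, n_comp:]
    return float(np.max(np.abs(corr)))
```

In this sketch, a high value (responses sharing a common latent factor) indicates inversion is worthwhile; a value near zero indicates the quick pre-estimate of uncertainty may suffice.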


Mathematical Geosciences | 2014

Quantifying Asymmetric Parameter Interactions in Sensitivity Analysis: Application to Reservoir Modeling

Darryl Fenwick; Céline Scheidt; Jef Caers

In this paper, a new generalized sensitivity analysis is developed with a focus on parameter interaction. The proposed method is developed to apply to complex reservoir systems. Most critical in many engineering applications is to find which model parameters and parameter combinations have a significant impact on the decision variables. There are many types of parameters used in reservoir modeling, e.g., geophysical, geological and engineering. Some parameters are continuous, others discrete, and others have no numerical value and are scenario-based. The proposed generalized sensitivity analysis approach classifies the response/decision variables into a limited set of discrete classes. The analysis is based on the following principle: if the parameter frequency distribution is the same in each class, then the model response is insensitive to the parameter, while differences in the frequency distributions indicate that the model response is sensitive to the parameter. Based on this simple idea, a new general measure of sensitivity is developed. This sensitivity measure quantifies the sensitivity to parameter interactions, and incorporates the possibility that these interactions can be asymmetric for complex reservoir modeling. The approach is illustrated using a case study of a West Africa offshore oil reservoir.
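The principle stated in the abstract, that a parameter is sensitive when its frequency distribution differs across response classes, can be illustrated with a minimal sketch. The quantile binning and the L1 distance below are illustrative assumptions, not the authors' exact measure, and the sketch ignores the paper's treatment of parameter interactions.

```python
import numpy as np

def generalized_sensitivity(param, response, n_classes=3, n_bins=5):
    """Classify the response into quantile classes, then compare the
    parameter's frequency distribution within each class to its prior
    distribution.  Identical distributions mean the response is
    insensitive to the parameter; larger average L1 distances mean
    greater sensitivity."""
    # split samples into classes by response quantiles
    edges = np.quantile(response, np.linspace(0, 1, n_classes + 1))
    cls = np.clip(np.searchsorted(edges, response, side="right") - 1,
                  0, n_classes - 1)
    # prior frequency distribution of the parameter
    bins = np.quantile(param, np.linspace(0, 1, n_bins + 1))
    prior, _ = np.histogram(param, bins=bins)
    prior = prior / prior.sum()
    # average L1 distance between class-conditional and prior frequencies
    dists = []
    for c in range(n_classes):
        freq, _ = np.histogram(param[cls == c], bins=bins)
        freq = freq / max(freq.sum(), 1)
        dists.append(np.abs(freq - prior).sum())
    return float(np.mean(dists))
```

Because only class membership and frequencies are used, the same machinery applies to discrete or scenario-based parameters by replacing the histogram bins with category counts.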


Computational Geosciences | 2015

Updating joint uncertainty in trend and depositional scenario for reservoir exploration and early appraisal

Céline Scheidt; Pejman Tahmasebi; Marco Pontiggia; Andrea Da Pra; Jef Caers

Computationally efficient updating of reservoir models with new production data has received considerable attention recently. In this paper, however, we focus on the challenges of updating reservoir models prior to production, in particular when new exploration wells are drilled. At this stage, uncertainty in the depositional model is highly impactful in terms of risk and decision making. Mathematically, such uncertainty is often decomposed into uncertainty in lithological trends in facies proportions, which is typically informed by seismic data, and sub-seismic variability, often modeled geostatistically by means of training images. While uncertainty in the training image has received considerable attention, uncertainty in the trend/facies proportion receives little to no consideration. In many practical applications, with either poor geophysical data or little well information, the trend is often as uncertain as the training image, yet is often fixed, leading to unrealistic uncertainty models. The problem is addressed through a hierarchical model of probability. Total model uncertainty is divided into, first, uncertainty in the training image and, second, uncertainty in the trend given the uncertain training image. Our methodology relies on an efficient Bayesian updating of these model parameters (trend and training image) by modeling forward-simulated well facies profiles in a low-dimensional metric space. We apply this methodology to a real field case study involving wells drilled sequentially in the subsurface, where, as more data become available, uncertainty in both training image and trend requires updating to improve characterization of the facies.


Computers & Geosciences | 2016

DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

Jihoon Park; Guang Yang; Addy Satija; Céline Scheidt; Jef Caers

Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.


Computational Geosciences | 2017

Direct forecasting of reservoir performance using production data without history matching

Addy Satija; Céline Scheidt; Jef Caers

The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale behind this is that the reservoir model, however complex, remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. This workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.
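The core of this direct-forecasting idea can be sketched numerically. The toy below is an assumption-laden simplification: plain PCA plus least squares stands in for the paper's FDA/mixed-PCA/CCA chain, posterior sampling is omitted (only a point forecast is returned), and all names are hypothetical.

```python
import numpy as np

def direct_forecast(hist, fore, hist_obs, k=3):
    """Minimal sketch of direct forecasting without history matching:
    reduce simulated historical and forecast time series of prior model
    runs with PCA, relate the two score spaces by least squares, and map
    the actually observed history straight to a forecast.
    hist:     (n_models, n_t_hist) simulated historical series
    fore:     (n_models, n_t_fore) simulated forecast series
    hist_obs: (n_t_hist,) observed history"""
    def pca(X, k):
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k], (X - mu) @ Vt[:k].T

    mu_h, Vh, Sh = pca(hist, k)
    mu_f, Vf, Sf = pca(fore, k)
    # linear regression between the two low-dimensional score spaces
    B, *_ = np.linalg.lstsq(Sh, Sf, rcond=None)
    # project the observed history and reconstruct the forecast series
    s_obs = (hist_obs - mu_h) @ Vh.T
    return mu_f + (s_obs @ B) @ Vf
```

No inversion occurs: the prior model runs serve only to calibrate the statistical relationship between past and future responses, which is the paper's central re-formulation.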


Journal of Geophysical Research | 2016

Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

Céline Scheidt; Anjali M. Fernandes; Chris Paola; Jef Caers

We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that, with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the ‘eigen-patterns’ of the natural system. The eigen-patterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.


Computers & Geosciences | 2017

Stochastic simulation by image quilting of process-based geological models

Júlio Hoffimann; Céline Scheidt; Adrian A. S. Barfod; Jef Caers

Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse—a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.


Petroleum Geostatistics 2015 | 2015

Can geostatistical models represent nature's variability? An analysis using flume experiments

Céline Scheidt; A. Fernandes; Chris Paola; Jef Caers

One of the difficulties in multiple-point geostatistics (MPS) is the definition of the training image (TI). In the context of uncertainty modeling, the construction of a set of TIs is desirable, but the number of TIs and the characteristics that should be varied in the TI are not well understood. In this research, we explore the question of the definition of the TIs using tank experiments. A set of snapshots of delta deposits seen in the tank is used to explore the variability of the system over time and to see if MPS can reproduce the variability of the full set of images using only a few well-selected images taken as TIs. Preliminary methodologies are explored to select representative images, where the variation of the deposits over time is studied. Our results show that MPS was able to reproduce the variability in the full set of images, hence the variability of the studied system. Analyzing the characteristics of the selected images is a first step in the attempt to define TIs. This study presents only preliminary investigations; more general answers remain to be explored.


Archive | 2011

Integration of Engineering and Geological Uncertainty for Reservoir Performance Prediction Using a Distance-Based Approach

Jef Caers; Céline Scheidt

Uncertainty is an integral part of risk and decision making. Uncertainty in the reservoir is caused by a lack of knowledge on key geologic and reservoir engineering factors that are required for building geologic and flow models. Traditional approaches to modeling uncertainty, such as those relying on the experimental design technique and Monte Carlo simulation, are either limited in effectiveness (unable to handle general cases) or efficiency (too demanding on the central processing unit). We review a new technique for modeling reservoir performance uncertainty that is more general and efficient. The technique relies on the definition of a distance between any two reservoir models. This distance should correlate with the difference in reservoir response and provides the key missing link between geologic uncertainty and flow uncertainty. We present a workflow combining several statistical tools such as multidimensional scaling and clustering to model reservoir response uncertainty when both geologic and engineering parameters are uncertain. At the same time, this workflow allows assessment of the most influential parameters regarding flow, as well as accuracy of the uncertainty assessment. Two real field cases illustrate this approach.
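The two statistical tools named in this abstract, multidimensional scaling and clustering, can be sketched together. The following is a bare-bones NumPy illustration of the general distance-based idea, not the authors' implementation: classical MDS embeds models given only their pairwise distance matrix, and a small k-means groups the embedded models so that only cluster representatives need expensive flow simulation.

```python
import numpy as np

def mds_and_cluster(D, n_dims=2, n_clusters=3, seed=0):
    """Embed models with classical MDS from a pairwise distance matrix D,
    then cluster the embedded coordinates with plain k-means."""
    n = D.shape[0]
    # classical MDS: double-center the squared distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_dims]
    X = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
    # plain Lloyd's k-means on the embedded coordinates
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(n, n_clusters, replace=False)]
    for _ in range(50):
        lbl = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[lbl == c].mean(axis=0) if np.any(lbl == c)
                            else centers[c] for c in range(n_clusters)])
    return X, lbl
```

The key design point carried over from the abstract is that only the distance matrix D enters the workflow, so any distance that correlates with the difference in reservoir response can be plugged in.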

Collaboration


Dive into Céline Scheidt's collaborations.

Top Co-Authors

Chris Paola

University of Minnesota
