Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sebastien Strebelle is active.

Publication


Featured research published by Sebastien Strebelle.


AAPG Bulletin | 2004

Multiple-point simulation integrating wells, three-dimensional seismic data, and geology

Yuhong Liu; Andrew Harding; William L. Abriel; Sebastien Strebelle

There are two significant challenges in building a reservoir model integrating all available information. One challenge is that wells and seismic data measure the reservoir at different scales of resolution. The other challenge lies in how to account for conceptual geological knowledge with resolution at multiple scales. In this paper, we present a case study of integrating well data, seismic data, and conceptual geologic models. The well and seismic data are of good quality, but conventional well-seismic data calibration indicates that the seismic data are unable to fully differentiate sand from shale. The reason for this poor well-seismic calibration is that well logs and seismic data measure the reservoir at different scales: well logs are able to differentiate sand from shale, whereas seismic data are better at detecting larger scale depositional geometries. A new workflow is presented to deal with this problem. First, principal component analysis clustering is used to identify characteristic patterns of certain depositional facies, from which sandy and shaly channels are interpreted. Next, multiple-point geostatistical simulation is performed to build a depositional-facies model, which integrates both hard and soft data and also incorporates realistic depositional-facies geometries provided by our geological knowledge of this reservoir. Finally, different lithofacies (sand and shale) indicators and corresponding petrophysical properties are simulated honoring the limited well data. The results show that not only are the geological features better reproduced, but the uncertainty about the reservoir is also significantly reduced because of a better integration of the corresponding three-dimensional seismic data.
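The PCA-clustering step of this workflow can be illustrated with a short sketch. The snippet below is only a schematic reading of the approach described above, not the authors' code: it assumes the seismic amplitudes arrive as a 3D numpy array, clusters short vertical amplitude windows in principal-component space, and returns a label cube that an interpreter would then map to sandy versus shaly patterns. All function and parameter names are illustrative.

```python
# Minimal sketch of PCA clustering of seismic patterns, assuming
# `seismic` is a (nx, ny, nz) numpy array of amplitudes. Names and
# parameters are illustrative, not from the paper's software.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_seismic_patterns(seismic, half_window=2, n_components=4, n_clusters=3):
    """Cluster short vertical seismic windows into pattern classes."""
    nx, ny, nz = seismic.shape
    rows, index = [], []
    for i in range(nx):
        for j in range(ny):
            for k in range(half_window, nz - half_window):
                # Each sample is a short vertical window of amplitudes.
                rows.append(seismic[i, j, k - half_window:k + half_window + 1])
                index.append((i, j, k))
    X = np.asarray(rows)
    # PCA removes redundancy between neighbouring amplitude samples,
    # then k-means groups the windows into characteristic patterns.
    scores = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    cube = np.full((nx, ny, nz), -1, dtype=int)
    for (i, j, k), lab in zip(index, labels):
        cube[i, j, k] = lab
    return cube  # cluster labels, interpreted later as sandy/shaly patterns
```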


SPE Journal | 2003

Modeling of a Deepwater Turbidite Reservoir Conditional to Seismic Data Using Principal Component Analysis and Multiple-Point Geostatistics

Sebastien Strebelle; Karen Payrazyan; Jef Caers

Geological interpretation and seismic data analysis provide two complementary sources of information to model reservoir architecture. Seismic data affords the opportunity to identify geologic patterns and features at a resolution on the order of 10's of feet, while well logs and conceptual geologic models provide information at a resolution on the order of one foot. Both the large-scale distribution of geologic features and their internal fine-scale architecture influence reservoir performance. Development and application of modeling techniques that incorporate both large-scale information derived from seismic and fine-scale information derived from well logs, cores, and analog studies represent a significant opportunity to improve reservoir performance predictions. In this paper we present a practical new geostatistical approach for solving this difficult data integration problem and apply it to an actual, prominent reservoir. Traditional geostatistics relies upon a variogram to describe geologic continuity. However, a variogram, which is a two-point measure of spatial variability, cannot describe realistic, curvilinear or geometrically complex patterns. Multiple-point geostatistics uses a training image instead of a variogram to account for geological information. The training image provides a conceptual description of the subsurface geological heterogeneity, possibly containing complex multiple-point patterns. Multiple-point statistics simulation then consists of anchoring these patterns to well data and seismic-derived information. This work introduces a novel alternative to traditional Bayesian modeling for incorporating seismic data. The focus of this paper lies in demonstrating the practicality, flexibility, and CPU advantage of this new approach by applying it to an actual deep-water turbidite reservoir. Based on well log interpretation and a global geological understanding of the reservoir architecture, a training image depicting sinuous sand bodies is generated using a non-conditional object-based simulation algorithm. Disconnected sand bodies are interpreted from seismic amplitude data using a principal component cluster analysis technique. In addition, a map of local sand probabilities is generated from a principal component proximity transform of the same seismic data. Multiple-point geostatistics then simulates multiple realizations of channel bodies constrained to the local sand probabilities, partially interpreted sand bodies, and well-log data. The CPU time is comparable to traditional geostatistical methods.


SPE Annual Technical Conference and Exhibition | 2002

Modeling of a Deepwater Turbidite Reservoir Conditional to Seismic Data Using Multiple-Point Geostatistics

Sebastien Strebelle; Karen Payrazyan; Jef Caers

Geological interpretation and seismic data analysis provide two complementary sources of information to model reservoir architecture. Seismic data affords the opportunity to identify geologic patterns and features at a resolution on the order of 10's of feet, while well logs and conceptual geologic models provide information at a resolution on the order of one foot. Both the large-scale distribution of geologic features and their internal fine-scale architecture influence reservoir performance. Development and application of modeling techniques that incorporate both large-scale information derived from seismic and fine-scale information derived from well logs, cores, and analog studies represent a significant opportunity to improve reservoir performance predictions. In this paper we present a practical new geostatistical approach for solving this difficult data integration problem and apply it to an actual, prominent reservoir. Traditional geostatistics relies upon a variogram to describe geologic continuity. However, a variogram, which is a two-point measure of spatial variability, cannot describe realistic, curvilinear or geometrically complex patterns. Multiple-point geostatistics uses a training image instead of a variogram to account for geological information. The training image provides a conceptual description of the subsurface geological heterogeneity, possibly containing complex multiple-point patterns. Multiple-point statistics simulation then consists of anchoring these patterns to well data and seismic-derived information. This work introduces a novel alternative to traditional Bayesian modeling for incorporating seismic data. The focus of this paper lies in demonstrating the practicality, flexibility, and CPU advantage of this new approach by applying it to an actual deep-water turbidite reservoir. Based on well log interpretation and a global geological understanding of the reservoir architecture, a training image depicting sinuous sand bodies is generated using a non-conditional object-based simulation algorithm. Disconnected sand bodies are interpreted from seismic amplitude data using a principal component cluster analysis technique. In addition, a map of local sand probabilities is generated from a principal component proximity transform of the same seismic data. Multiple-point geostatistics then simulates multiple realizations of channel bodies constrained to the local sand probabilities, partially interpreted sand bodies, and well-log data. The CPU time is comparable to traditional geostatistical methods.

Introduction

Geostatistics aims at building multiple alternative reservoir models, thereby assessing uncertainty about the reservoir. One major challenge of geostatistical modeling is to integrate information from different sources obtained at different resolutions:
• well data, which are sparse but of high resolution, on the order of one foot;
• seismic data, which are exhaustive but of much lower resolution, on the order of 10's of feet in the vertical direction;
• conceptual geological models, which could quantify reservoir heterogeneity from the layer scale to the basin scale.
Variogram-based algorithms allow integrating well and seismic data using a pixel-based approach: first, the well data are assigned to the closest simulation grid nodes; then, all unsampled nodes are simulated conditional to well and seismic data using some form of co-kriging.
Variogram-based geostatistics is inadequate for integrating geological concepts, since the variogram is too limited to capture complex geological heterogeneity. A variogram is a two-point statistic that poorly reflects a geologist's prior conceptual vision of the reservoir architecture. Dense well environments and/or good-quality seismic data might overcome this limitation. Deepwater turbidite reservoirs represent a growing number of hydrocarbon targets for major oil companies; hence the number of wells is usually limited, and the seismic data are of varying degrees of quality and resolution. High drilling and production costs associated with such reservoirs increase the need for reliable architecture modeling. Integration of geological information beyond two-point variogram reproduction becomes critical in order to quantify heterogeneity more accurately and assess the uncertainty of oil recovery more realistically. To accurately integrate geological models, object-based algorithms have been developed, since they allow modeling realistic geological geometries according to a prior geological description. A number of important drawbacks have been observed in applying such models to large 3D cases:
• The proposed object-oriented algorithms have difficulty honoring all the available well data.
• The integration of seismic data is limited: often only 2D areal proportion maps are allowed.
• They are CPU demanding.
• For each new object type, a different algorithm needs to be developed.
In this paper, we propose an alternative approach that combines the easy conditioning of pixel-based algorithms with the ability of object-based techniques to reproduce "shapes", without relying on excessive CPU demand. Multiple-point (mp) geostatistics uses a training image instead of a variogram to account for geological information. The training image describes the geometrical facies patterns believed to represent the subsurface. Training images need not carry any local information about the actual reservoir; they only reflect a prior geological/structural concept. Object-based algorithms, freed of the constraint of data conditioning, can be used to generate such images. mp-Geostatistics consists of extracting patterns from the training image and anchoring them to local data, i.e. well logs and seismic data. Several training images corresponding to alternative geological interpretations can be used to account for the uncertainty about the reservoir architecture.

Data sets and prior geological models

To illustrate the mp-geostatistical methodology we use data from an actual prominent ChevronTexaco turbidite reservoir. The reservoir contains complex patterns of sand intercalated in a mudstone matrix. The goal of the study is to build a sand/no-sand model integrating the following information:
• well logs from four wells; sand indicator data are obtained by applying a cutoff on the v-shale log information, and the sample sand proportion is 30.6%;
• a 3D cube of seismic amplitude data at a resolution of 2 ms;
• a prior conceptual description of the type of geological bodies expected in the subsurface.
The sand/no-sand variable is modeled by an indicator random function, defined as I(u) = 1 if sand is present at location u, and I(u) = 0 otherwise. Reservoir top and bottom surfaces were picked from the seismic travel times and used to build a NW-oriented stratigraphic simulation grid of 226*112*50 = 1,265,600 nodes. The horizontal resolution of the grid is 50*50 meters. After time-depth conversion, the vertical resolution is 1.1 meters on average. Fig. 1 shows a horizontal section of the simulation grid populated with the seismic amplitude data after time-depth conversion and re-sampling at the resolution level of the grid using a simple linear interpolation technique; the figure is NW-oriented. No seismic data are available in the south corner of the grid. Fig. 1 also displays the locations of the four wells.

Prior geological models. Based on geological models of similar fields, and using statistics from well logs and interpreted seismic data, two prior geological models are proposed:
• The first model consists of large-scale, continuous, NW-oriented channel-type sand bodies that extend over the entire study area. Individual channels are 200 to 300 m wide, their thickness:width ratio varies from 1:50 to 1:150, and their sinuosity departure is between 0 and 400 meters. Based on this information, a single realization is generated using the object-based program fluvsim. Note that this realization is not constrained to any well or seismic data; it is purely conceptual. A horizontal section is shown in Fig. 2. This model will be used as a 3D analog reservoir, or training image, in the mp-geostatistics simulation program.
• An alternative geological model, consisting of sand bodies with limited spatial continuity, is shown in Fig. 3. The sand bodies are roughly 1750 meters long, 250 meters wide, and 2 meters thick. Based on these parameters, an unconditional object-based realization is simulated on a grid of 110*70*100 = 770,000 nodes with a resolution of 50*50*1.1 meters, using Roxar RMS.

Sand probability conditional to seismic. 3D post-stack seismic amplitude data are available. In general, seismic amplitude can be used as an indicator of a change of rock impedance due to a change in rock facies (shale/sand). Instead of performing a traditional seismic inversion of amplitude to impedance, we use a method proposed by Payrazyan, named Principal Component Proximity Transform (PCPT). This method avoids the inversion step and allows calculating facies probabilities directly from amplitudes. Payrazyan proposed to relate the facies at reservoir location u to a user-defined window WS(u) of seismic data: WS(u) = {S(u+h1), S(u+h2), ..., S(u+hN)}, where the vectors hi define the geometry of that window. The algorithm proceeds in two steps. First, because the size N of the seismic window WS can be large on 3D grids, and the components S(u+hi) often carry a high degree of redundancy, a Principal Component Analysis (PCA) is performed. Scanning the seismic cube using the window WS produces multiple realizati…
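A compact sketch of the two PCPT steps just described may help: project each seismic window WS(u) onto a few principal components, then convert proximity to well-calibrated windows in that reduced space into a sand probability. The inverse-distance weighting used below is an assumed stand-in for the published proximity transform, and all names are illustrative.

```python
# Hedged sketch of a PCPT-like transform. `windows` holds the seismic
# window WS(u) for every grid location; `well_windows` holds the windows
# at well locations, where `well_is_sand` gives the observed indicator
# I(u). The inverse-distance weighting in PC space is an assumption.
import numpy as np
from sklearn.decomposition import PCA

def pcpt_sand_probability(windows, well_windows, well_is_sand, n_components=3):
    """windows: (M, N) array; well_windows: (W, N); well_is_sand: (W,) in {0, 1}."""
    pca = PCA(n_components=n_components).fit(windows)
    z = pca.transform(windows)            # PC scores for every location u
    zw = pca.transform(well_windows)      # PC scores at the calibration wells
    prob = np.empty(len(z))
    for m, score in enumerate(z):
        d = np.linalg.norm(zw - score, axis=1)
        w = 1.0 / (d + 1e-9)              # closer calibration windows weigh more
        prob[m] = np.dot(w, well_is_sand) / w.sum()
    return prob                           # estimate of P(I(u) = 1 | WS(u))
```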


Geophysics | 2003

Stochastic integration of seismic data and geologic scenarios: A West Africa submarine channel saga

Jef Caers; Sebastien Strebelle; Karen Payrazyan

Providing quantitative statements about the risk associated with reservoir exploration and production has become increasingly important in modern-day reservoir management. At the exploration or early production stage, with limited well-log and production data in hand, the first-order uncertainty is often dominated by the quality of the seismic data and the accuracy of conceptual geologic models showing facies geometry and connectivity. Seismic may provide quality information on reservoir geometry and large-scale trends of facies such as net-to-gross variations. As a complement to seismic, detailed geologic interpretations of facies association and geometry may provide heterogeneity information at all scales of the reservoir, including the fine scale lacking in the seismic. However, geologic interpretations may result in competing geologic scenarios. The variability associated with multiple geologic interpretations…


Mathematical Geosciences | 2014

Solving Speed and Memory Issues in Multiple-Point Statistics Simulation Program SNESIM

Sebastien Strebelle; Claude Edward Cavelius

In the last 10 years, Multiple-Point Statistics (MPS) modeling has emerged in geostatistics as a valuable alternative to traditional variogram-based and object-based modeling. In contrast to variogram-based simulation, which is limited to two-point correlation reproduction, MPS simulation extracts and reproduces multiple-point statistics moments from training images; this allows modeling geologically realistic features, such as channels that control reservoir connectivity and flow behavior. In addition, MPS simulation works on individual pixels or small groups of pixels (patterns), and thus does not suffer from the same data conditioning limitations as object-based simulation. The Single Normal Equation Simulation program SNESIM was the first implementation of MPS simulation to propose, through the introduction of search trees, an efficient solution to the extraction and storage of multiple-point statistics moments from training images. SNESIM is able to simulate three-dimensional models; however, memory and speed issues can occur when applying it to multimillion-cell grids. Several other implementations of MPS simulation were proposed after SNESIM, but most of them manage to reduce memory demand or simulation time only at the expense of data conditioning exactitude and/or training pattern reproduction quality. In this paper, the original SNESIM program is revisited, and solutions are presented to eliminate both memory demand and simulation time limitations. First, we demonstrate that the time needed to simulate a grid node is a direct function of the number of uninformed locations in the conditioning data search neighborhood. Thus, two improvements are proposed to maximize the ratio of informed to uninformed locations in search neighborhoods: a new multiple-grid approach introducing additional intermediary subgrids, and a new search neighborhood design process that preferentially includes previously simulated node locations. Finally, because SNESIM memory demand and simulation time increase with the size of the data template used to extract multiple-point statistics moments from the training image and build the search tree, a simple method is described to minimize data template sizes while preserving training pattern reproduction quality.
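The search-tree idea at the heart of SNESIM can be sketched in a few lines. The toy version below is not the SNESIM source, which stores partial data events hierarchically; it simply scans a 2D training image with a fixed data template and tabulates, for every conditioning data event, how often each facies occurs at the central node, which is the statistic a simulation pass would draw from.

```python
# Toy illustration of SNESIM's core statistic: facies counts indexed by
# conditioning data event. A plain dict stands in for the search tree.
import numpy as np
from collections import defaultdict

def build_search_tree(training_image, template):
    """training_image: 2D int array of facies codes.
    template: list of (di, dj) offsets around the node being simulated."""
    n_facies = int(training_image.max()) + 1
    tree = defaultdict(lambda: np.zeros(n_facies, dtype=np.int64))
    ni, nj = training_image.shape
    for i in range(ni):
        for j in range(nj):
            event = []
            for di, dj in template:
                ii, jj = i + di, j + dj
                # -1 marks template nodes falling outside the image.
                event.append(training_image[ii, jj]
                             if 0 <= ii < ni and 0 <= jj < nj else -1)
            tree[tuple(event)][training_image[i, j]] += 1
    return tree

def conditional_probability(tree, event, n_facies):
    counts = tree.get(tuple(event))
    if counts is None or counts.sum() == 0:
        return np.full(n_facies, 1.0 / n_facies)  # fall back to a uniform prior
    return counts / counts.sum()
```

In the real program, when an event is not found the furthest conditioning data are dropped and the search repeated; the multiple-grid refinement and the neighborhood design described in the abstract exist precisely to keep these events as informed as possible.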


SPE Western Regional and Pacific Section AAPG Joint Meeting | 2008

The Effect of Geologic Parameters and Uncertainties on Subsurface Flow: Deepwater Depositional Systems

William J. Milliken; Marge Levy; Sebastien Strebelle; Ye Zhang

The application of reservoir simulation as a tool for reservoir development and management is widespread in the oil and gas industry. Moreover, it is recognized that the results of any reservoir simulation model are strongly influenced by the underlying geologic model. However, the direct relationship between geologic parameters and subsurface flow is obscure. In this paper we explore this relationship in a deepwater depositional system using data from two reservoir analogs: the shallow seismic dataset from the Mahakam Fan and outcrop data from the Brushy Canyon Formation of West Texas. Shallow seismic data from the Mahakam Fan area shows a high-resolution deepwater channel-levee system consisting of 10 migrating channels. Using an experimental design framework and a series of three increasingly complex models, we investigated the effect of nine different geologic factors on several different measures of the flow behavior. Our results show that, as expected, different geologic factors influence different measures of the flow. Most significant is the clear effect that the proportion and organization of the different internal facies making up the channels have on the recovery factor and net oil production. The Brushy Canyon outcrops used in this work represent sand-rich proximal deposits of a distributary lobe complex. Here we built models on a very small length scale to investigate the effects of sheet-like reservoir architecture as well as internal facies distribution of the sheets on subsurface flow. Again, an experimental design framework was employed, this time to examine the influence of 11 input variables. The proportion and organization of the internal lobe facies has a significant influence on the subsurface flow here but in these distributary lobe complexes other variables, including the stacking of the lobes, were also found to be important. The models in this study address flow behavior in deepwater, sparse well environments. Using models from the simple to the complex, we found that several parameters incorporated in the complex models, and not in simple models, had a significant impact on the predicted flow.
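As a rough illustration of the experimental-design framework mentioned above, the sketch below runs a two-level full factorial over a few coded geologic factors and ranks them by their main effect on a flow response. The factor names and the analytic response function are placeholders; in the study each design point corresponds to building a geologic model and running the flow simulator.

```python
# Illustrative two-level factorial design for ranking geologic factors
# by their effect on a flow response. All names and the response
# function are placeholders, not the study's actual variables.
import itertools
import numpy as np

factors = ["channel_width", "net_to_gross", "facies_organization"]
levels = [-1, +1]                      # coded low/high settings

design = np.array(list(itertools.product(levels, repeat=len(factors))))

def flow_response(x):                  # stand-in for a reservoir simulation run
    return 50 + 8 * x[0] + 15 * x[1] + 3 * x[2] + np.random.normal(0, 1)

y = np.array([flow_response(x) for x in design])

# Main effect of a factor = mean(response at +1) - mean(response at -1).
for f, name in enumerate(factors):
    effect = y[design[:, f] == 1].mean() - y[design[:, f] == -1].mean()
    print(f"{name:22s} effect on recovery: {effect:+.1f}")
```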


Geological Society, London, Special Publications | 2008

Using multiple-point statistics to build geologically realistic reservoir models: the MPS/FDM workflow

Sebastien Strebelle; Marjorie Levy

Building geologically realistic reservoir models that honour well data and seismic-derived information remains a major challenge. Conventional variogram-based modelling techniques typically fail to capture complex geological structures, while object-based techniques are limited by the amount of conditioning data. This paper presents new reservoir facies modelling tools that improve both model quality and efficiency relative to traditional geostatistical techniques. Geostatistical simulation using Multiple-Point Statistics (MPS) is an innovative depositional facies modelling technique that uses conceptual geological models as training images to integrate geological information into reservoir models. Replacing the two-point statistic variogram with multiple-point statistics extracted from a training image makes it possible to model non-linear facies geobody shapes such as sinuous channels, and to capture complex spatial relationships between multiple facies. In addition, because the MPS algorithm is pixel-based, it can handle a large amount of conditioning data, including many wells, seismic data, facies proportion maps and curves, variable azimuth maps, and interpreted geobodies, thus reducing the uncertainty in facies spatial distribution. Facies Distribution Modelling (FDM) is a new technique to generate facies probability cubes from user-digitized depositional maps and cross-sections, well data, and vertical facies proportion curves. Facies probability cubes generated by FDM are used as soft constraints in MPS geostatistical modelling. They are critical, especially with sparse well data, to ensure that the spatial distribution of the simulated facies is consistent with the depositional facies interpretation of the field. A workflow combining MPS and FDM has been successfully used in Chevron to model important oilfield assets in both shallow- and deep-water depositional environments. Sedimentary environments can be characterized by a succession of deposition of elements, or rock bodies, through time. These elements are traditionally grouped into classes, commonly named 'depositional facies', based on their lithology, petrophysical properties, and biological structures. For example, the typical depositional facies encountered in fluvial environments are high-permeability sand channels, with levées and crevasse splays, having a more variable range of permeability and net-to-gross ratio, within a background of low-permeability shaley facies.
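The role of an FDM facies probability cube can be illustrated schematically. The sketch below combines a digitized areal sand-proportion map with a vertical proportion curve into a 3D probability cube and rescales it to a global target proportion. The multiplicative combination and the rescaling are assumptions for illustration, not the actual FDM algorithm.

```python
# Hedged sketch of building a facies probability cube as a soft
# constraint for MPS. The combination rule below is an assumption.
import numpy as np

def facies_probability_cube(areal_map, vertical_curve, global_proportion):
    """areal_map: (nx, ny) sand proportions in [0, 1].
    vertical_curve: (nz,) sand proportion per layer in [0, 1]."""
    # Combine areal trend and vertical trend multiplicatively ...
    cube = areal_map[:, :, None] * vertical_curve[None, None, :]
    # ... then rescale so the cube honours the global target proportion.
    cube *= global_proportion / cube.mean()
    return np.clip(cube, 0.0, 1.0)
```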


Geophysics | 2006

Probabilistic integration of geologic scenarios, seismic, and production data—a West Africa turbidite reservoir case study

Jef Caers; Todd Hoffman; Sebastien Strebelle; Xian-Huan Wen

From reservoir exploration to field abandonment, reservoir modeling plays a central role in understanding and predicting the reservoir's key geologic, geophysical, and engineering components. Reservoir models, whether a simple layer cake or a complex, fully 3D description of structural components and rock and fluid properties, are ideal gateways for aggregating data and expertise from different sources and disciplines. The complexity of such models should be driven by the practical reservoir management question raised, be it the estimation of OOIP (the reserve question) or the development and planning of wells, surface facilities, and recovery strategies (the flow question). At the same time, the limitations of reservoir models should be well understood: reservoir models can only mimic the reservoir's true complexity; they can never fully (nor need they) represent the actual subsurface heterogeneity.


SPE Annual Technical Conference and Exhibition | 2004

Assessment of Global Uncertainty for Early Appraisal of Hydrocarbon Fields

G. Caumon; Sebastien Strebelle; Jef Caers; Andre G. Journel

Abstract

We propose a workflow to assess the uncertainty about a global reservoir parameter, such as net-to-gross, during early exploration. As opposed to traditional statistical approaches that assume data independence and cannot easily account for either seismic data or geological interpretation, this model, based on multiple-point statistics, integrates the main components of uncertainty, namely:
• the choice of a geological scenario, probably the most important factor at the appraisal stage;
• the location of wells, which could have been drilled elsewhere, giving a different picture of the reservoir;
• the calibration of the seismic to the well data.
This global uncertainty model is demonstrated on a large 3D fluvial reservoir.

Introduction

Advances in deep-water drilling technologies have cleared the path for new domains of hydrocarbon exploration. The appraisal of such deep offshore reservoirs is a high-risk exercise: in addition to political and economic unknowns, the sparsity of early exploration data compounds the geological complexity of turbiditic formations, making any global reserve estimate highly uncertain. However, early in the appraisal stage, corporate decisions must be made about developing the field by drilling one or several new wells, or abandoning the field and moving on to a safer prospect. Decision science provides tools to address this type of issue [1], but calls for a sound assessment of uncertainty about the reservoir potential. In deep offshore reservoirs, the uncertainty on the oil in place is controlled in great part by the reservoir geometry and the pore volume, the latter …
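A schematic Monte Carlo rendering of this workflow: for each geological scenario, draw unconditional realizations, sample a few synthetic well locations, and record the net-to-gross those wells would suggest; the spread of the resulting estimates captures the combined scenario and well-location uncertainty. All inputs and names below are assumed placeholders, not the paper's multiple-point procedure.

```python
# Hedged sketch of a global net-to-gross uncertainty assessment.
# `realizations` maps scenario name -> list of 3D binary facies arrays
# (1 = sand); everything here is an illustrative stand-in.
import numpy as np

def ntg_uncertainty(realizations, n_wells=4, n_trials=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    estimates = []
    for scenario, cubes in realizations.items():
        for _ in range(n_trials):
            cube = cubes[rng.integers(len(cubes))]
            nx, ny, _ = cube.shape
            ii = rng.integers(nx, size=n_wells)   # random well locations
            jj = rng.integers(ny, size=n_wells)
            # Each synthetic "well" sees one vertical column of facies.
            estimates.append(np.mean([cube[i, j, :].mean()
                                      for i, j in zip(ii, jj)]))
    return np.asarray(estimates)   # spread = global NTG uncertainty
```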


Archive | 2012

Event-Based Geostatistical Modeling: Description and Applications

Michael J. Pyrcz; Timothy R. McHargue; Julian David Clark; Morgan Sullivan; Sebastien Strebelle

Event-based methods provide unique opportunities to improve the integration of geologic concepts into reservoir models. This may be accomplished over a continuum of rule complexity, from very simple geometric models to complicated dynamics. Even the application of simple rules, involving only a few conceptual interactions based on an understanding of stratigraphic relationships and parametric geometries for event-scale depositional and erosion features, has been shown to efficiently produce complicated and realistic reservoir heterogeneities. In more complicated applications, initial and boundary conditions from analysis of paleobathymetry and external controls on sediment supply, as well as the event rules themselves, may be informed by process models. These models have interesting features that depart from typical geostatistical models: they demonstrate emergent behaviors and preserve all information at all scales during their construction. They may be utilized to produce very realistic reservoir models, and their unique properties allow for novel applications. These modeling applications include: impact of model scale; seismic resolvability; value of information; flow relevance of advanced architecture; iterative, rule-based conditioning to sparse well and seismic data; numerical analogs for architectural concepts; statistical analysis and classification of architectures; unstructured grid construction; and utilization as training and visualization tools.
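A toy example of how a single simple rule produces the emergent stacking behavior the chapter describes: deposit Gaussian-shaped lobes one event at a time, each centered on the current topographic low, which is a minimal compensational-stacking rule. Everything here is illustrative; the event-based methods in the chapter use far richer rules and geometries.

```python
# Toy event-based rule: each depositional event fills the current
# topographic low with a Gaussian lobe. Parameters are illustrative.
import numpy as np

def stack_lobes(n_events=20, nx=100, ny=100, amplitude=1.0, width=12.0):
    surface = np.zeros((nx, ny))
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    for _ in range(n_events):
        # Rule: the next lobe is centred on the lowest point of the surface.
        i0, j0 = np.unravel_index(surface.argmin(), surface.shape)
        lobe = amplitude * np.exp(-((x - i0) ** 2 + (y - j0) ** 2)
                                  / (2 * width ** 2))
        surface += lobe
    return surface   # stacked-lobe thickness; heterogeneity emerges from the rule
```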

Collaboration


Dive into Sebastien Strebelle's collaboration.
