
Publications


Featured research published by Marko Maucec.


SEG Technical Program Expanded Abstracts | 2010

Estimating Fault Displacements In Seismic Images

Luming Liang; Dave Hale; Marko Maucec

Geologic faults complicate the mapping of depositional layers. Most existing seismic image processing highlights fault locations but fails to estimate fault displacements. We model faults as a displacement vector field. Unlike traditional attributes (e.g., semblance or coherence), our estimated vector field provides information about fault displacements, as well as fault locations. This vector field can be used to automatically determine relative displacements of faulted layers and thereby simplify the mapping of such layers.
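As a toy illustration of the underlying idea (not the authors' method, which estimates a full displacement vector field from the image), the throw across a fault can be estimated as the lag that best aligns two traces sampled on opposite sides of it:

```python
import numpy as np

def estimate_shift(left, right, max_lag=20):
    """Estimate the vertical displacement (in samples) between two
    traces on opposite sides of a fault as the lag that maximizes
    their normalized cross-correlation."""
    def corr(lag):
        # Align left[lag:] with the leading part of right (and vice versa for lag < 0).
        if lag >= 0:
            a, b = left[lag:], right[:-lag or None]
        else:
            a, b = left[:lag], right[-lag:]
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(range(-max_lag, max_lag + 1), key=corr)
```

A full implementation would instead estimate a smoothly varying displacement at every image sample, which is what distinguishes the vector-field approach from simple trace correlation.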


SPE Annual Technical Conference and Exhibition | 2011

Geology-guided Quantification of Production-Forecast Uncertainty in Dynamic Model Inversion

Marko Maucec; Stan Cullick; Genbao Shi

The presence of a large number of geologic uncertainties and limited well data typically increases the challenges associated with hydrocarbon recovery forecasting. Although recent advances in geologic modeling enable the automation of the model generation process by means of next-generation geostatistical tools, the computation of the reservoir dynamic response with full-physics reservoir simulation remains a computationally expensive task, which in practice requires considering only a few of the many probable realizations, with little guidance as to which few. This paper presents a workflow that demonstrates the potential of capturing the inherent model uncertainty more accurately and assists in production-forecast business decisions. This workflow uses a history-matching approach that directly interfaces the Earth modeling software with a forward simulator. It relies on the rapid characterization of the main features of the geologic uncertainty space, represented by an ensemble of sufficiently diverse history-matched model realizations at the high-resolution geological scale. This workflow generates a more accurate result by obeying known geostatistics (variograms) and well constraints. We implement a multi-step, Bayesian Markov chain Monte Carlo inversion in which the proxy model is guided by streamline-based sensitivities. This process eliminates the need to run a forward simulation for each model realization, which significantly reduces the computation time. Efficient model parameterization and updating in the wave-number domain, based on the discrete cosine transform (DCT), are used for fast characterization of the main features of the geologic uncertainty space, including structural framework, stratigraphic layering, facies distribution, and petrophysical properties.
The application of the history-matching workflow is demonstrated with a case study that combines a geological model with approximately 900K cells, four different depositional environments, and 30 wells with a 10-year waterflood history. Finally, a method is described to dynamically rank the reconciled model realizations to identify the highest potential for capturing bypassed oil and to optimize business decisions for implementing improved oil recovery (IOR). The main features include the following:

- Calculation of pattern-dissimilarity distances, which distinguish two individual model realizations in terms of recovery response
- Deployment of very fast streamline simulations to evaluate distances
- Application of pattern-recognition techniques to assign several realizations, representative for production forecasting, to full-physics simulation
- Derivation of the probability distribution of dynamic model responses (e.g., recovery factors) from the intelligently selected simulation runs
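The DCT parameterization invoked in the abstract amounts to describing a property field by a small number of low-frequency transform coefficients. A generic sketch of that compression step (not the paper's implementation; `n_keep` is an illustrative truncation parameter) looks like this:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_parameterize(field, n_keep):
    """Compress a 2-D property field (e.g., log-permeability) by
    keeping only its n_keep x n_keep lowest-frequency DCT
    coefficients, then reconstruct it by inverse DCT."""
    coeffs = dctn(field, norm='ortho')
    mask = np.zeros_like(coeffs)
    mask[:n_keep, :n_keep] = 1.0          # retain low wave numbers only
    return idctn(coeffs * mask, norm='ortho')
```

Because large-scale geological features live in the low wave numbers, a handful of coefficients per dimension often reproduces the field closely, which is what makes updating in the transform domain so much cheaper than updating every cell.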


SPE Reservoir Characterization and Simulation Conference and Exhibition | 2013

Engineering Workflow for Probabilistic Assisted History Matching and Production Forecasting: Application to a Middle East Carbonate Reservoir

Marko Maucec; A. P. Singh; Gustavo Carvajal; S. Mirzadeh; Steven Patton Knabe; R. Chambers; G. Shi; Ahmad Al-Jasmi; I. H. Hossam El Din; H. Nasr

Traditional reconciliation of geomodels with production data is one of the most laborious tasks in reservoir engineering. The uncertainty associated with the great majority of model variables only adds to the overall complexity. This paper introduces an engineering workflow for probabilistic assisted history matching that captures inherent model uncertainty and allows for better quantification of production forecasts. The workflow is applied to history matching of the pilot area in a major, structurally complex Middle East (ME) carbonate reservoir. The simulation model combines 49 wells in five waterflood patterns to match 50 years of oil production and 12 years of water injection and to predict eight years of production. Initially, the reservoir model was calibrated to match oil production by modifying permeability and/or porosity at well locations and by fine-tuning rock-type properties and water saturation. The second level history match implemented two-stage Markov chain Monte Carlo (McMC) stochastic optimization to minimize the misfit in water cut on a well-by-well basis. Relative to evolutionary algorithms or the ensemble Kalman filter (EnKF), McMC methods provide a statistically rigorous alternative for sampling the posterior distribution; when deployed in direct simulation, however, they impose a high computational cost. The approach presented here accelerates the process by parameterizing the permeability using the discrete cosine transform (DCT), constraining the proxy model with streamline-based sensitivities, and utilizing parallel and cluster computing. While probabilistic assisted history matching (AHM) successfully reduced the misfit for most producing wells, the computational convergence was sensitive to the level of preserved geological detail.
The optimal number of representative history-matched models was identified to capture the uncertainty in reservoir spatial connectivity using rigorous optimization and dynamic model ranking based on forecasted oil recovery factors (ORFs). The reduced set of models minimized the computational load for forecast-based analysis, while retaining the knowledge of the uncertainty in the recovery factor. The comprehensive probabilistic AHM workflow was implemented at the operator’s North Kuwait Integrated Digital Oilfield (KwIDF) collaboration center. It delivers an optimized reservoir model for waterflood management and automatically updates the model quarterly with geological, production, and completion information. This allows engineers to improve the reservoir characterization and identify the areas that require more data capture.

Introduction

As part of a comprehensive strategy to transform the Kuwait Oil Company (KOC) through the application of digital oilfield (DOF) concepts, KOC initiated an assessment of the major Sabriyah-Mauddud (SaMa) reservoir for conversion to an integrated digital oilfield (iDOF) master platform, with the goal of increasing effectiveness through automating work processes and shortening observation-to-action cycle time. The group of nine first-generation production engineering workflows focuses on production and operational activities and was launched at KwIDF in 2012. The workflows are introduced in Al-Abbasi et al. (2013) and described in greater detail in Al-Jasmi et al. (2013) and references therein. With the vision to drive future KOC operations to the next level of excellence and to realize a large return on the investment in iDOF, the operator’s senior management endorsed the development of a family of advanced integrated asset management (IAM) workflows, referred to as “smartflows,” to optimize and integrate the subsurface models with well models and network surface systems in various time horizons.
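The McMC machinery behind the second-level match can be illustrated with a minimal random-walk Metropolis sampler. The paper's two-stage, proxy-guided scheme is far more elaborate; here `misfit` is a stand-in for the water-cut mismatch and the parameter vector is generic:

```python
import numpy as np

def metropolis_history_match(misfit, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampling of the posterior exp(-misfit(x)).
    Each proposal perturbs the model parameters and is accepted with
    probability min(1, exp(old_misfit - new_misfit))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    f = misfit(x)
    samples = []
    for _ in range(n_steps):
        cand = x + step * rng.standard_normal(x.shape)
        fc = misfit(cand)
        if np.log(rng.random()) < f - fc:  # Metropolis acceptance test
            x, f = cand, fc
        samples.append(x.copy())
    return np.array(samples)
```

In direct simulation every `misfit` call would be a full reservoir run, which is exactly the cost the DCT parameterization and streamline-guided proxy are designed to avoid.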


SPE Annual Technical Conference and Exhibition | 2013

Causal Analysis and Data Mining of Well Stimulation Data Using Classification and Regression Tree with Enhancements

Srimoyee Bhattacharya; Marko Maucec; Jeffrey Marc Yarus; Dwight Fulton; Jon Orth; Ajay Pratap Singh

Certain variables in a well-treatment program, such as Job Pause Time (JPT) and fracture screen-out, can affect its efficiency. JPT is the time during which pumping is paused between subsequent treatments, and screen-out occurs when the fluid flow is restricted inside the fracture. We investigate whether it is possible to identify characteristic patterns in existing data that affect the extreme values of JPT, as well as the most critical variables causing fracture screen-out. We apply Classification and Regression Tree (CART) analysis, validate the approach with well-stimulation case studies, and enhance predictive capability by implementing the normal score transform and data clustering.
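The elementary operation a regression tree repeats recursively is a search for the split that best separates the response. A minimal pure-NumPy sketch of that single-split step (an illustration, not the tooling used in the paper) is:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on predictor x that minimizes the weighted
    within-group variance of response y -- the core step CART applies
    recursively to grow a regression tree."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_threshold, best_cost = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # cannot split between equal values
        left, right = ys[:i], ys[i:]
        cost = len(left) * left.var() + len(right) * right.var()
        if cost < best_cost:
            best_threshold, best_cost = (xs[i - 1] + xs[i]) / 2.0, cost
    return best_threshold
```

Ranking predictors by how much their best splits reduce variance is what yields the variable-importance ordering the analysis relies on.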


SPE Kuwait Oil and Gas Show and Conference | 2013

Next Generation of Workflows for Multilevel Assisted History Matching and Production Forecasting: Concept, Implementation and Visualization

Marko Maucec; Ajay Pratap Singh; Gustavo Carvajal; S. Mirzadeh; Steven Patton Knabe; Aneesh Mahajan; Joydeep Dhar; Ahmad Al-Jasmi; Ibrahim Hossam El Din

Traditional reconciliation of geomodels with production data is one of the most laborious tasks in reservoir engineering. The uncertainty associated with the great majority of model variables only adds to the overall complexity. This paper describes the conceptualization, implementation, and visualization characteristics of the multilevel assisted history matching (AHM) technique that captures inherent model uncertainty and allows for better quantification of production forecasts. The workflow is applied to history matching of the pilot area in a major, structurally complex Middle East (ME) carbonate reservoir. The simulation model combines 49 wells in five waterflood patterns to match 50 years of oil production and 12 years of water injection and to predict eight years of production. Initially, the reservoir model was calibrated to match oil production by modifying permeability and/or porosity at well locations and by fine-tuning rock-type properties and water saturation. The second level history match implemented two-stage Markov chain Monte Carlo (McMC) stochastic optimization to minimize the misfit in water cut on a well-by-well basis. The inversion process is dramatically accelerated by efficiently parameterizing the permeability, constraining the proxy model with streamline-based sensitivities, and using parallel and cluster computing. The optimal number of representative history-matched models was identified to capture the uncertainty in reservoir spatial connectivity using rigorous optimization and dynamic model ranking based on forecasted oil recovery factors (ORFs). The reduced set of models minimized the computational load for forecast-based analysis, while retaining the knowledge of the uncertainty in the recovery factor. The comprehensive probabilistic AHM workflow was implemented at the operator’s North Kuwait Integrated Digital Oilfield (KwIDF) collaboration center.
It delivers an optimized reservoir model for waterflood management and automatically updates the model quarterly with geological, production, and completion information. This allows engineers to improve the reservoir characterization and identify the areas that require more data capture.

Introduction

With the vision to transform the Kuwait Oil Company (KOC) through the application of integrated digital oilfield (iDOF) concepts and drive the future KOC operations to the next level of excellence, the operator’s senior management endorsed the development of a family of advanced integrated asset management (IAM) workflows, referred to as “smart flows,” to optimize and integrate the subsurface models of the major Sabriyah-Mauddud (SaMa) reservoir with well models and network surface systems in various time horizons. The objective is to increase the effectiveness through automating work processes and shortening observation-to-action cycle time. The group of nine first-generation production engineering workflows focuses on production and operational activities and was launched at KwIDF in 2012. The workflows are introduced in Al-Abbasi et al. (2013) and described in greater detail in Al-Jasmi et al. (2013) and references therein. The second generation of smart flows combines subsurface waterflooding optimization (SWFO) (Khan et al. 2013), integrated production optimization (IPO), and simulation model update and ranking (SMUR). The preceding publication, Maucec et al. (2013), briefly discusses the outstanding challenges of the model reconciliation and history matching and reviews the recent approaches the oil industry is taking to quantify the uncertainty and increase the accuracy of reservoir models. Moreover, in Maucec et al. (2013), the engineering concepts of the SMUR smart flow are described in detail, combining the processes of building the high-resolution geocellular model and the associated reservoir simulation model, leading into a history-matching case study of the SaMa field. The design of the SMUR smart flow is leveraged with the technology for


SEG Technical Program Expanded Abstracts | 2010

Modeling Distribution of Geological Properties Using Local Continuity Directions

Marko Maucec; Derek Parks; Maurice C. Gehin; Genbao Shi; Jeffrey Marc Yarus; Richard Chambers

We present an innovative technology for 3D volumetric modeling of geological properties using a Maximum Continuity Field. The method provides the user a unique opportunity to (a) directly control the local continuity directions, (b) interactively operate with “geologically intuitive” datasets, and (c) retain the maximum fidelity of the geological model by postponing the creation of the grid/mesh until the final stage of static model building. We validate the method by modeling a permeability distribution in the fluvial system of a complex synthetic field case.


SPE Middle East Intelligent Energy Conference and Exhibition | 2013

Multivariate Analysis of Job Pause Time Data Using Classification and Regression Tree and Kernel Clustering

Marko Maucec; Ajay Pratap Singh; Srimoyee Bhattacharya; Jeffrey Marc Yarus; Dwight Fulton; Jon Orth

The well-treatment program is an important part of the field development plan, and certain variables, such as job pause time (JPT), can affect its efficiency. JPT is the time during which pumping is paused between subsequent treatments of a job. The objective of this work is to investigate whether, from existing data, it is possible to find patterns in significant variables that affect the extreme values of JPT in a particular region. The answers are sought by applying a classification and regression tree (CART) to both categorical and continuous variables in the database. The practical application of CART is presented through case studies: first classical CART analysis, then CART analysis with enhancement tools such as the normal score transform (NST), and finally clustering to divide the large dataset into smaller groups. Significant variables are found that affect the response variables, and predictor variables are ranked in order of their importance. Such information can be used to control predictor variables that cause high JPT. The results are outlined in an intuitive way, covering categorical, continuous, and missing values. Because CART is a data-driven, deterministic model, we cannot calculate the confidence interval of the predicted response. Confidence in the results is based purely on the historical values, and the accuracy of the result produced by a tree model depends on the quality of the recorded data, measured in terms of volume, reliability, and consistency. The prediction capability of CART is enhanced by the use of NST and clustering techniques. The approach presented in this paper analyzes a dataset with limited information and high uncertainty and should lead to a method for generating proxy models to find future success indices (e.g., for drilling efficiency or production from a fracture).
This could standardize stimulation and generate decision ‘best practices’ to save costs in field development and the optimization process.
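The normal score transform used as an enhancement can be sketched in a few lines. This rank-based version with midpoint plotting positions is one common convention, not necessarily the authors' exact implementation:

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(values):
    """Map data to standard-normal scores via their empirical ranks,
    using midpoint plotting positions (rank + 0.5) / n."""
    ranks = np.argsort(np.argsort(values))   # rank of each value, 0..n-1
    p = (ranks + 0.5) / len(values)          # cumulative probabilities in (0, 1)
    return norm.ppf(p)
```

Mapping heavily skewed JPT values onto a Gaussian scale in this way keeps a few extreme jobs from dominating the variance-based split criterion of the tree.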


Archive | 2018

Introduction to Digital Oil and Gas Field Systems

Gustavo Carvajal; Marko Maucec; Stan Cullick

World energy demand will grow from about 550 QBTU in 2012 to 850 QBTU in 2040, according to 2016 projections from the International Energy Agency (IEA). Although renewable energy sources will grow by a large percentage, petroleum-based liquids (oil) and natural gas will continue to be the largest contributors to energy utilization by the world’s population, representing about 55% of the total in 2040. As current oil and gas production naturally declines, the continued growth of petroleum fuels will be made possible only by forward leaps in technology in finding, drilling, and producing those resources more efficiently and economically. One of the great stories in oil and gas production is the industry’s implementation of new digital technologies that increase production at lower unit cost. This “revolution” of the “digital oil field” is the subject of this book.


Archive | 2018

Data Filtering and Conditioning

Gustavo Carvajal; Marko Maucec; Stan Cullick

This chapter is a condensed tutorial on how to validate and condition data appropriately for digital oil field (DOF) systems. It presents the major features of such a system, which include (1) data processing; (2) basic error detection, conditioning, and alerting; (3) well and equipment status detection; (4) advanced validation; and (5) workflow-based conditioning. The process flow of the chapter is summarized in Fig. 3.1, which shows the main steps of a DOF data validation and conditioning system. Readers can also consult the myriad specialty material on signal processing (e.g., Vetterli et al., 2014), which is not covered here.
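As one concrete instance of the basic error detection and conditioning step, a rolling-median despike filter (a generic illustration, not code from the book) flags samples that deviate from the local median by more than a few robust standard deviations:

```python
import numpy as np

def despike(signal, window=5, k=3.0):
    """Replace spikes in a sensor stream: any sample deviating from
    the rolling median by more than k robust standard deviations
    (1.4826 * MAD) is replaced by that median."""
    s = np.asarray(signal, dtype=float)
    half = window // 2
    out = s.copy()
    for i in range(len(s)):
        w = s[max(0, i - half): i + half + 1]      # local window (shorter at edges)
        med = np.median(w)
        mad = 1.4826 * np.median(np.abs(w - med))  # robust spread estimate
        if mad > 0 and abs(s[i] - med) > k * mad:
            out[i] = med
    return out
```

Median-based statistics are preferred over mean and standard deviation here because a single large spike inflates the mean-based threshold enough to hide itself.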


Archive | 2018

Smart Wells and Techniques for Reservoir Monitoring

Gustavo Carvajal; Marko Maucec; Stan Cullick

This chapter introduces concepts associated with smart well technology and its application to maximize the oil recovery factor and improve financial indicators for the oil company. In 1997, the first successful completion incorporating permanently installed downhole pressure and temperature measurements, integrated with remotely controlled, high-fidelity flow control valves, was installed in a well in the Norwegian sector of the North Sea. Konopczynski and Ajayi (2008) state that this event marked the genesis of the intelligent well era. The use of intelligent well technology has “crossed the technology adoption chasm” in many regions over the past decade, as oil and gas producers have increasingly incorporated it in field developments to capture the enhanced reservoir management benefits it delivers. Technical challenges nevertheless remain without a direct answer, particularly control strategies for operating the valves during water or gas breakthrough. In this chapter, readers are guided through the technical aspects of optimizing oil production with smart wells.
