Razi Gaskari
West Virginia University
Publications
Featured research published by Razi Gaskari.
Abu Dhabi International Petroleum Exhibition and Conference | 2006
Shahab D. Mohaghegh; Hafez H. Hafez; Razi Gaskari; Masoud Haajizadeh; Maher Mahmoud Kenawy
Simulation models are routinely used as a powerful tool for reservoir management. The underlying static models are the result of integrated efforts that usually include the latest geophysical, geological, and petrophysical measurements and interpretations. As such, these models carry an inherent degree of uncertainty. Typical uncertainty analysis techniques require many realizations and runs of the reservoir simulation model. In this day and age, as reservoir models grow larger and more complicated, making hundreds or sometimes thousands of simulation runs can put considerable strain on the resources of an asset team and is often simply impractical. The analysis of these uncertainties and their effects on well performance using a new and efficient technique is the subject of this paper. The analysis has been performed on a giant oil field in the Middle East using a surrogate reservoir model. The surrogate reservoir model, which runs and provides results in real time, is developed to mimic the capabilities of a full-field simulation model that includes one million grid blocks and takes 10 hours to run on a cluster of twelve 3.2 GHz CPUs. To effectively demonstrate the robustness of Surrogate Reservoir Models and their capabilities as tools for uncertainty analysis, one must demonstrate that SRMs can provide reasonably accurate results for multiple realizations of the reservoir being studied. To demonstrate such robustness, as well as their predictive capabilities and limitations, this paper examines the performance of the surrogate reservoir models on different geologic realizations of the static model.
North Africa Technical Conference and Exhibition | 2012
Shahab D. Mohaghegh; Jim S. Liu; Razi Gaskari; Mohammad Maysami; Olugbenga Olukoko
The application of the Surrogate Reservoir Model (SRM) to an onshore green field in Saudi Arabia is the subject of this paper. SRM is a recently introduced technology that is used to tap into the unrealized potential of reservoir simulation models. The high computational cost and long processing time of reservoir simulation models limit our ability to perform comprehensive sensitivity analyses, to quantify the uncertainties and risks associated with geologic and operational parameters, or to evaluate a large set of scenarios for the development of green fields. The SRM accurately replicates the results of a numerical simulation model at very low computational cost and with short turnaround time, and allows for extended study of reservoir behavior and potential. SRM represents the application of artificial intelligence and data mining to reservoir simulation and modeling. In this paper, the development and results of the SRM for an onshore green field in Saudi Arabia are presented. A reservoir simulation model was developed for this green field using Saudi Aramco’s in-house POWERS™ simulator. The geological model that serves as the foundation of the simulation model was developed using an analogy that incorporates limited measured data augmented with information from similar fields producing from the same formations. The reservoir simulation model consists of 1.4 million active grid blocks and includes 40 vertical production wells and 22 vertical water injection wells.
The steps involved in developing the SRM are: identifying the number of runs required for the development of the SRM; making the runs; extracting static and dynamic data from the simulation runs to develop the necessary spatio-temporal dataset; identifying the key performance indicators (KPIs) that rank the influence of different reservoir characteristics on the oil and gas production in the field; training and matching the results of the simulation model; and finally validating the performance of the SRM using a blind simulation run. The SRM for this reservoir was then used to perform sensitivity analyses as well as quantification of the uncertainties associated with the geological model. These analyses, which would otherwise require thousands of simulation runs, were performed using the SRM in minutes.
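The workflow above (design a handful of runs, extract a dataset, train, then blind-validate) can be sketched end to end. The toy "simulator" function, its input ranges, and the linear surrogate below are illustrative assumptions only, not the POWERS™ model or the actual SRM architecture:

```python
import random

# Toy stand-in for one run of the numerical simulator (hypothetical response):
# maps (porosity, permeability) to a cumulative-production figure.
def run_simulator(phi, k):
    return 1000.0 * phi + 2.0 * k + 50.0

random.seed(7)

# Step 1: design a small set of simulation runs spanning the input ranges.
design = [(random.uniform(0.1, 0.3), random.uniform(50.0, 500.0))
          for _ in range(15)]

# Step 2: extract inputs and outputs into the training dataset.
X = [(1.0, phi, k) for phi, k in design]   # leading 1.0 is the intercept term
y = [run_simulator(phi, k) for phi, k in design]

# Step 3: train the surrogate -- here a linear model fit via the normal
# equations A c = b with A = X^T X, solved by Cramer's rule.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
coef = []
for i in range(3):
    Ai = [row[:] for row in A]
    for r in range(3):
        Ai[r][i] = b[r]
    coef.append(det3(Ai) / det3(A))

def surrogate(phi, k):
    return coef[0] + coef[1] * phi + coef[2] * k

# Step 4: validate against a blind simulation run not used in training.
truth = run_simulator(0.22, 275.0)
pred = surrogate(0.22, 275.0)
assert abs(pred - truth) < 1e-3  # surrogate reproduces the blind run
```

Because the toy simulator is exactly linear, the least-squares surrogate matches the blind run essentially exactly; a real SRM trades that exactness for the ability to approximate a nonlinear simulator.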
SPE Eastern Regional Meeting | 2005
Shahab D. Mohaghegh; Razi Gaskari
Most of the mature fields in the United States have been producing for many years. Production in these fields started at a time when reservoir characterization was not a priority; therefore, they lack data that can help in reservoir characterization. On the other hand, revitalizing these fields at a time when hydrocarbon prices are high requires a certain degree of reservoir characterization in order to identify locations with potential for economical production. The most common type of data that may be found in many of the mature fields is production data. This is due to the fact that production data is usually recorded as a regulatory obligation or simply because it is needed to perform economic analysis. Using production data as a source for making decisions has been on the petroleum engineer’s agenda for many years, and several methods have been developed for accomplishing this task. There are three major shortcomings related to the efforts that focus on production data analysis. The first has to do with the fact that, due to the nature of production data, its analysis is quite subjective. Even when certain techniques show promise in deducing valuable information from production data, the issue of subjectivity remains intact. The second shortcoming is that existing production data analysis techniques usually address individual wells and therefore do not treat the entire field or reservoir as a coherent system. The third shortcoming is the lack of a user-friendly software product that can perform production data analysis with minimum subjectivity and reasonable repeatability while addressing the entire field (reservoir) instead of autonomous, disjointed wells. It is well known that techniques such as decline curve analysis and type curve matching address individual wells (or sometimes groups of wells without geographic resolution) and are highly subjective.
In this paper a new methodology is introduced that attempts to address the first and second shortcomings, i.e., to unify a comprehensive production data analysis with reduced subjectivity while addressing the entire reservoir with reasonable geographic resolution. The geographic mapping of depletion or remaining reserves can assist engineers in making informed decisions on where to drill or which well to remediate. The third shortcoming will be addressed in a separate paper, in which a software product is introduced that performs the analysis with minimum user interaction. The techniques introduced here are statistical in nature and rely on intelligent systems to analyze production data. This methodology integrates conventional production data analysis techniques, such as decline curve analysis, type curve matching, and single-well radial simulation models, with new techniques based on intelligent systems (one or more of neural networks, genetic algorithms, and fuzzy logic) in order to map fluid flow in the reservoir as a function of time. A set of two-dimensional maps is generated to identify relative reservoir quality, along with three-dimensional maps that track the sweet spots in the field over time, in order to identify the most appropriate locations that may still have reserves to be produced. This methodology can play an important role in identifying new opportunities in mature fields. In this paper the methodology is introduced and its application to a field in the mid-continent is demonstrated.
INTRODUCTION
Techniques of production data analysis (PDA) have improved significantly over the past several years. These techniques are used to provide information on reservoir permeability, fracture length, fracture conductivity, well drainage area, original gas in place (OGIP), estimated ultimate recovery (EUR), and skin. Although many methods are available, no single method always yields the most reliable answer.
Furthermore, tools that make these techniques available to engineers are not readily available. The goal of this study is to develop a comprehensive tool for production data analysis and make it available for use by industry.
[SPE 98010: New Method for Production Data Analysis to Identify New Opportunities in Mature Fields: Methodology and Application. Mohaghegh, S. D., Gaskari, R., and Jalali, J., West Virginia University]
Production data analysis techniques started systematically with a method presented by Arps in the 1950s. Arps decline analysis is still being used because of its simplicity and because, as an empirical method, it does not require any reservoir or well parameters. Fetkovich proposed a set of equations described by an exponent, b, having a range between 0 and 1. Arps’ equation is based on empirical relationships of rate vs. time for oil wells and is shown below:
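The Arps relation announced above is cut off in this extract. For completeness, the standard hyperbolic form of this well-established empirical result is

```latex
q(t) = \frac{q_i}{\left(1 + b\,D_i\,t\right)^{1/b}}, \qquad 0 < b < 1,
```

where \(q_i\) is the initial production rate and \(D_i\) is the initial decline rate. In the limit \(b \to 0\) this reduces to exponential decline, \(q(t) = q_i e^{-D_i t}\), and at \(b = 1\) to harmonic decline, \(q(t) = q_i/(1 + D_i t)\).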
SPE Annual Technical Conference and Exhibition | 2009
Yorgi Gomez; Yasaman Khazaeni; Shahab D. Mohaghegh; Razi Gaskari
Conventional reservoir simulation and modeling is a bottom-up approach. It starts with building a geological model of the reservoir that is populated with the best available petrophysical and geophysical information at the time of development. Engineering fluid flow principles are added and solved numerically so as to arrive at a dynamic reservoir model. The dynamic reservoir model is calibrated using the production history of multiple wells and the history matched model is used to strategize field development in order to improve recovery.
SPE Eastern Regional Meeting | 2001
Shahab D. Mohaghegh; Razi Gaskari; Andrei Popa; S. Ameri; S. Wolhart; R. Siegfried; David G. Hill
Hydraulic fracturing is an economical way of increasing gas well productivity. It is routinely performed on many gas wells in fields that contain hundreds of wells. Companies have developed databases that include information such as the methods and materials used during the fracturing of their wells. These databases usually include general information such as the date of the job, the name of the service company performing the job, fluid type and amount, proppant type and amount, and pump rate. Sometimes more detailed information may be available, such as breakers, amount of nitrogen, and ISIP, to name a few. These data are usually of little use when complex 3-D hydraulic fracture simulators are used to analyze them, but valuable information can be deduced from such data using virtual intelligence tools. The process covered in this paper takes the available data, couples it with general information from each well (such as latitude, longitude, and elevation) and any information available from log analysis and production data, and uses a data mining and knowledge discovery process to identify a set of best practices for the particular field. The technique is capable of patching the data in places where certain information is missing. Complex virtual intelligence routines are used to ensure that the information content of the database is not compromised during the data patching process. The conclusion of the analysis is a set of best practices, as implemented in a particular field on a well-by-well or group-of-wells basis. Since the entire process is mostly data driven, we let the data “speak for itself” and “tell us” what has worked and what has not worked in that particular field, and how the process can be enhanced on a single-well basis. In this paper the results of applying this process to the Medina formation in New York State are presented.
This data set was furnished by Belden & Blake during a GRI/NYSERDA-sponsored project. This process provides an important step toward achieving a comprehensive set of tools and processes for data mining, knowledge discovery, and data-knowledge fusion from datasets in the oil and gas industry.
SPE Western Regional Meeting | 2012
Shohreh Amini; Shahab D. Mohaghegh; Razi Gaskari; Grant S. Bromhal
While CO2 Capture and Sequestration (CCS) is considered part of the solution to the ever-increasing level of CO2 in the atmosphere, one must be sure that significant new hazards are not created by the CO2 injection process. The risks involved in the different stages of a CO2 sequestration project are related to geological and operational uncertainties. This paper presents the application of a grid-based Surrogate Reservoir Model (SRM) to a real CO2 sequestration project in which CO2 was injected into a depleted gas reservoir. An SRM is a customized model that accurately mimics reservoir simulation behavior using artificial intelligence and data mining techniques. The initial steps in developing the SRM included constructing a reservoir simulation model with commercial software, history matching the model with available field data, and then running the model under different operational scenarios and/or different geological realizations. The process continued by extracting static and dynamic data from a handful of simulation runs to construct a spatio-temporal database that is representative of the process being modeled. Finally, the SRM was trained, calibrated, and validated. The most widely used Quantitative Risk Analysis (QRA) techniques, such as Monte Carlo simulation, require thousands of simulation runs to effectively perform the uncertainty analysis and, subsequently, the risk assessment of a project. Performing a comprehensive risk analysis that requires several thousand simulation runs becomes impractical when the time required for a single simulation run (especially in a geologically complex reservoir) exceeds even a few minutes. Making use of surrogate reservoir models makes this process practical, since SRM runs can be performed in minutes. Using this Surrogate Reservoir Model enables us to predict the pressure and CO2 distribution throughout the reservoir with reasonable accuracy in seconds.
Consequently, the application of the SRM to analyzing the uncertainty associated with the reservoir characteristics and operational constraints of the CO2 sequestration project is presented.
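To illustrate why a fast surrogate makes Monte Carlo-style QRA practical, here is a minimal sketch. The surrogate response, the input distributions, and the pressure scale below are invented for illustration and are not taken from the study:

```python
import math
import random
import statistics

# Hypothetical fast surrogate for the sequestration model: maps uncertain
# reservoir properties (permeability in mD, porosity) to a peak injection-
# zone pressure in psi. The response is illustrative, not real physics.
def srm_pressure(perm, phi):
    return 3000.0 + 150.0 / (perm * phi)

random.seed(1)

# Monte Carlo QRA: sample the uncertain inputs thousands of times; each
# surrogate evaluation costs microseconds instead of a full simulation run.
samples = []
for _ in range(10_000):
    perm = random.lognormvariate(math.log(100.0), 0.5)  # uncertain permeability
    phi = random.uniform(0.15, 0.25)                    # uncertain porosity
    samples.append(srm_pressure(perm, phi))

# Percentiles of the predicted pressure summarize the project risk.
samples.sort()
p10, p50, p90 = (samples[int(f * len(samples))] for f in (0.10, 0.50, 0.90))
print(f"P10={p10:.1f} psi  P50={p50:.1f} psi  P90={p90:.1f} psi")
assert p10 <= p50 <= p90
```

The same loop against a simulator that takes minutes per run would take weeks, which is the practical argument the abstract makes for the SRM.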
SPE Western Regional Meeting | 2012
Shahab D. Mohaghegh; Shohreh Amini; Vida Gholami; Razi Gaskari; Grant S. Bromhal
Developing proxy models has a long history in our industry. Proxy models provide fast, approximate solutions that substitute for large numerical simulation models. They serve specific useful purposes such as assisted history matching and production/injection optimization. The most common proxy models are either reduced models or response surfaces. While the former achieves its run-time speed by grossly approximating the problem, the latter achieves it by grossly approximating the solution space. Nevertheless, they are routinely developed and used to generate fast solutions to changes in the input space. Regardless of the type of model simplification used, these conventional proxy models can provide, at best, responses at the well locations, i.e., pressure or rate profiles at the well. In this paper we present the application of a new approach to building proxy models. This method has one major difference from traditional proxy models: it is capable of replicating the results of the numerical simulation model away from the wellbores. The method is called the Grid-Based Surrogate Reservoir Model (SRM), since it has the unique capability of replicating the pressure and saturation distribution throughout the reservoir at the grid-block level, and at each time step, with reasonable accuracy. The Grid-Based SRM performs this task at high speed compared with conventional numerical simulators such as those currently in use (commercial and in-house) in our industry. To demonstrate the capabilities of the Grid-Based SRM, its application to three reservoir simulation models is presented: first, a giant oil field in the Middle East with a large number of producers; second, a CO2 sequestration project in Australia; and finally, a numerical simulation study of a potential carbon storage site in the United States. The numerical reservoir simulation models were developed using two of the most commonly used commercial simulators.
Two of the models presented in this manuscript consist of hundreds of thousands of grid blocks, and one includes close to a million cells. The Grid-based SRM that learns and replicates the fluid flow through these reservoirs can open new doors in reservoir modeling by providing the means for extended study of reservoir behavior with minimal computational cost.
SPE Western Regional Meeting | 2012
Shahab D. Mohaghegh; Jim S. Liu; Razi Gaskari; Mohammad Maysami; Olugbenga Olukoko
The well-based Surrogate Reservoir Model (SRM) may be classified as a new technology for building proxy models that represent large, complex numerical reservoir simulation models. The well-based SRM has several advantages over traditional proxy models such as response surfaces or reduced models: (1) developing an SRM does not require approximating the existing simulation model, (2) the number of simulation runs required for the development of an SRM is at least an order of magnitude smaller than for traditional proxy models, and (3) above and beyond representing the pressure and production profiles at each well individually, the SRM can replicate, with high accuracy, the pressure and saturation changes at each grid block. The well-based SRM is based on the pattern recognition capabilities of artificial intelligence and data mining (AI&DM), also referred to as predictive analytics. During the development process the SRM is trained to learn the principles of fluid flow through porous media as applied to the complexities of the reservoir being modeled. The numerical reservoir simulation model is used for two purposes: (1) to teach the SRM the physics of fluid flow through porous media as applied to the specific reservoir being modeled, and (2) to teach the SRM the complexities of the heterogeneous reservoir represented by the geological model and their impact on fluid production and pressure changes in the reservoir. The application of the well-based SRM to two offshore fields in Saudi Arabia is demonstrated. The simulation models of these fields include millions of grid blocks and tens of production and injection wells. There are four producing layers in these assets that contribute to production. In this paper we provide the details involved in the development of the SRM and show the results of matching the production from all the wells. We also present the validation of the SRM through matching the results of blind simulation runs.
The steps in the development of the SRM include the design of the required simulation runs (usually fewer than 20 simulation runs are sufficient), identification of the key performance indicators that control pressure and production in the model, identification of the input parameters for the SRM, training and calibration of the SRM, and finally validation of the SRM using blind simulation runs.
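The KPI-identification step above can be illustrated with a simple correlation-based ranking across designed runs. Real KPI workflows use more sophisticated influence measures, and the input names and response below are hypothetical:

```python
import random

random.seed(3)

# Hypothetical results of 20 designed simulation runs: candidate inputs and
# the resulting cumulative production (toy response; "depth" is irrelevant
# here by construction, so it should rank low).
runs = []
for _ in range(20):
    perm = random.uniform(10.0, 1000.0)
    phi = random.uniform(0.05, 0.30)
    depth = random.uniform(5000.0, 9000.0)
    production = 3.0 * perm + 2000.0 * phi + random.gauss(0.0, 50.0)
    runs.append({"perm": perm, "phi": phi, "depth": depth, "prod": production})

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank each candidate input by the strength of its correlation with output.
prod = [r["prod"] for r in runs]
kpi = {name: abs(pearson([r[name] for r in runs], prod))
       for name in ("perm", "phi", "depth")}
ranking = sorted(kpi, key=kpi.get, reverse=True)
assert ranking[0] == "perm"  # permeability dominates in this toy setup
```

A ranking like this tells the modeler which inputs deserve fine sampling in the design of simulation runs and which can be held fixed.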
SPE Intelligent Energy Conference & Exhibition | 2014
Shohreh Amini; Shahab D. Mohaghegh; Razi Gaskari; Grant S. Bromhal
Reservoir simulation models are used extensively to model the complex physics associated with fluid flow in porous media. Such models are usually large, with high computational cost. The size and computational footprint of these models make it impractical to perform comprehensive studies that involve thousands of simulation runs. Uncertainty analysis associated with the geological model and field development planning are good examples of such studies. In order to address this problem, efforts have been made to develop proxy models that can substitute for a complex reservoir simulation model and reproduce its outputs in short periods of time (seconds). In this study, a Grid-based Surrogate Reservoir Model (SRM) is developed using artificial intelligence techniques. The Grid-based SRM is a replica of the complex reservoir simulation model that is trained, calibrated, and validated to accurately reproduce grid-block-level results. This technology is applied to a CO2 sequestration project in Australia. This paper presents the development of the reservoir simulation model and the Grid-based SRM. The SRM is able to generate pressure and gas saturation at the grid-block level. The results demonstrate that this technique is capable of reproducing the reservoir simulation output very accurately within seconds.
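As a rough sketch of what "reproducing grid-block-level results" means, the toy example below trains a nearest-neighbour lookup on pressures sampled from an assumed radial pressure field and then reproduces that field cell by cell. The pressure function, ranges, and lookup scheme are all illustrative; the actual Grid-based SRM uses trained neural networks, not a lookup table:

```python
import math
import random

# Hypothetical "simulator" field: pressure (psi) at a grid block a distance
# r (ft) from the injector after time t (days), for a radial injection pattern.
def sim_pressure(r, t):
    return 2000.0 + 500.0 * math.exp(-r / (50.0 * math.sqrt(t)))

random.seed(11)

# Training data: pressures extracted from cells/times of a few simulator runs.
train = [(random.uniform(10.0, 500.0), random.uniform(1.0, 100.0))
         for _ in range(500)]
table = {(r, t): sim_pressure(r, t) for r, t in train}

# Grid-based surrogate: nearest neighbour in a scaled (r, t) feature space.
def srm(r, t):
    key = min(table, key=lambda p: ((p[0] - r) / 500.0) ** 2
                                   + ((p[1] - t) / 100.0) ** 2)
    return table[key]

# Reproduce the pressure map over all grid blocks at one time step and
# compare against the "simulator" cell by cell.
errors = [abs(srm(r, 30.0) - sim_pressure(r, 30.0))
          for r in range(10, 500, 10)]
assert max(errors) < 200.0  # rough agreement at the grid-block level
```

The point of the sketch is the access pattern: the surrogate answers per-cell queries in microseconds, so a full pressure/saturation map at any time step costs almost nothing compared to a simulation run.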
SPE Annual Technical Conference and Exhibition | 2014
Shahab D. Mohaghegh; Y. Al-Mehairi; Razi Gaskari; Mohammad Maysami; Yasaman Khazaeni
A novel approach to reservoir management, applied to a mature giant oilfield in the Middle East, is presented. This is a prolific brown field producing from multiple horizons, with production data going back to the mid-1970s. Peripheral water injection in this field started in the mid-1980s. The field includes more than 400 producers and injectors. The production wells are deviated (slanted) or horizontal and have been completed in multiple formations. An empirical, full-field reservoir management technology based on a data-driven reservoir model was used for this study. The model was conditioned to all available types of field data (measurements), such as production and injection history, well configurations, well-head pressure, completion details, well logs, core analysis, time-lapse saturation logs, and well tests. The well tests were used to estimate the static reservoir pressure as a function of space and time. Time-lapse saturation (pulsed-neutron) logs were available for a large number of wells, indicating the state of water saturation at multiple locations in the reservoir at different times. The data-driven, full-field model was trained and history matched using machine learning technology based on data from all wells between 1975 and 2001. The history-matched model was deployed in predictive mode to forecast production from 2002 to 2010, and the results were compared with historical production (blind history match). Finally, future production from the field (2011 to 2014) was forecasted. The main challenge in this study was to simultaneously history match static reservoir pressure, water saturation, and production rates (constraining well-head pressure) for all the wells in the field. History matches on a well-by-well basis and for the entire asset are presented. The quality of the matches clearly demonstrates the value that can be added to any given asset by using pattern recognition technologies to build empirical reservoir management tools.
This model was used to identify infill locations and the water injection schedule for this field.
RESERVOIR MANAGEMENT
Reservoir management has been defined as the use of financial, technological, and human resources to minimize capital investment and operating expenses and to maximize the economic recovery of oil and gas from a reservoir. The purpose of reservoir management is to control operations in order to obtain the maximum possible economic recovery from a reservoir on the basis of facts, information, and knowledge (Thakur 1996). Historically, tools that have been successfully and effectively used in reservoir management integrate geology, petrophysics, geophysics, and petroleum engineering throughout the life cycle of a hydrocarbon asset. Through the use of technologies such as remote sensors and simulation modeling, reservoir management can improve production rates and increase the total amount of oil and gas recovered from a field (Chevron 2012). Reservoir simulation and modeling has proven to be one of the most effective instruments for integrating data and expertise from a wide range of disciplines, such as geology, petrophysics, geophysics, and reservoir and production engineering, in order to model fluid flow in the reservoir. The reservoir simulation model is history matched using pressure and production measurements from the asset in order to tune the geological understanding and provide predictive capabilities. Since no two hydrocarbon reservoirs are the same and each asset has its own unique geological characteristics and drive mechanisms, the art and science of reservoir simulation and modeling must be adapted to each unique situation in order to realistically model the past and predict the future of a hydrocarbon-producing reservoir.
Although reservoir simulation and modeling will remain one of the major contributors to reservoir management practices for the foreseeable future, its realistic application to reservoir management continues to face challenges. These challenges are related to the exploration of a very large solution space, which is a natural and required step during a reservoir management study. During the reservoir management process it is necessary to generate, evaluate, and rank multiple potential development scenarios as early as possible in the workflow. Furthermore, important practices such as the quantification and analysis of uncertainties associated with the geological model, as well as economic analysis and planning, require a large number of scenarios to be generated and evaluated in order to assist the decision-making process. Performing reservoir management studies without such capabilities reduces informed decision making to guesswork, albeit educated guesswork.
DATA-DRIVEN RESERVOIR MODELING & MANAGEMENT
Also referred to as “Fact-Based Reservoir Modeling,” data-driven reservoir modeling is a novel approach to building models of fluid flow in hydrocarbon-producing porous media that are based entirely on field measurements. Instead of starting from first-principles physics that results in partial differential equations such as the diffusivity equation, data-driven reservoir modeling starts from field measurements such as well configurations, well completions, well logs, core analysis, well tests, and production/injection history. Contrary to the assessment of some critics, physics is not ignored in data-driven reservoir modeling. Rather, the role of physics changes from being the architect of the governing equations to being the guiding light and blueprint that provides the framework for the model development process.
Data-driven reservoir modeling does not adhere to the dogma that all modeling of natural phenomena must start with physics to have credibility, nor does it find credible the notion, held by naïve statisticians, that no physics (or petroleum engineers) are needed for modeling and that data can be the answer to all problems in the upstream E&P industry. Data-driven reservoir modeling starts with the premise that data, especially in our discipline, carries information. Data collected during drilling, reservoir, and production operations in an oilfield includes footprints, in space and time, of fluid flow in the porous media. If a large enough volume of such data is assembled in a proper fashion, and appropriate tools are used by well-trained petroleum engineers to interpret it, then there is a realistic chance of building comprehensive and cohesive full-field models that not only do not violate the known physics but can shed light on complex and highly nonlinear behavior that might be missed by a purely physics-based approach. The reason is obvious. Physics-based approaches are bound to be limited by our current understanding of the natural phenomenon, which continues to improve as a function of time. Our understanding of the complexity of fluid flow in a large and diverse combination of porous media is far more advanced today than it was 40 years ago, and it is bound to advance even further in the next 40 years. But the facts that are intrinsic to (and the patterns that exist in) a collection of data are permanent and do not change with time. The question is: do we have the tools, the techniques, and the know-how to extract them? The first and most comprehensive data-driven reservoir modeling technology developed by reservoir engineers (not mathematicians or statisticians) is Top-Down Modeling (TDM). TDM was introduced a few years ago and has enjoyed continuous R&D to enhance its capabilities ever since.
Several IOCs, NOCs, and independents have successfully adopted this technology and are benefiting from its results. Data-driven reservoir management refers to a process whereby the key models used to make critical decisions are data-driven models. The main advantages of data-driven reservoir models are (a) they are fact-based and include minimal pre-modeling interpretation and human bias, (b) the time required for their development (training and validation of the predictive models) is a fraction of the time required for a comprehensive numerical simulation model, and (c) they have a small computational footprint that allows a large number of scenarios to be investigated in a relatively short period of time.
TOP-DOWN MODELING (TDM) TECHNOLOGY
Traditional numerical reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of numerical simulation studies calls for the integration of static and dynamic measurements into a reservoir model that is formulated based on our current understanding of fluid flow in porous media, and the numerical solution of that formulation in the context of an interpreted geological model. Numerical simulation is a bottom-up approach that starts with building a geological (geo-cellular or static) model of the reservoir. Using modeling and geo-statistical manipulation of the data, the geo-cellular model is populated with the best available geological and petrophysical information. Engineering fluid flow principles are added and solved numerically to arrive at a dynamic reservoir model.
[Footnote 1: Although our skills and tools to extract the facts may change and advance with time.]
[Footnote 2: This technology (TDM) was invented and introduced to the E&P industry by Intelligent Solutions, Inc. in 2006.]
[SPE 170660: Data-Driven Reservoir Management of a Giant Mature Oilfield in the Middle East, Mohaghegh et al.]
The dynamic reservoir model is calibrated using the production history of multiple wells by modifying several of the parameters involved in the geological model, in a process called history matching, and the final history-matched model is used in predictive mode to strategize field development in order to improve recovery. Characteristics of numerical reservoir simulation and modeling include: It takes a significant investment (time and mo