Publications


Featured research published by Andrei Popa.


SPE Eastern Regional Meeting | 2000

Reservoir Characterization Through Synthetic Logs

Shahab D. Mohaghegh; Carrie Goddard; Andrei Popa; Sam Ameri; Moffazal Bhuiyan

Magnetic resonance logs provide the capability of in-situ measurement of reservoir characteristics such as effective porosity, fluid saturation, and rock permeability. This study presents a new and novel methodology to generate synthetic magnetic resonance logs using readily available conventional wireline logs such as spontaneous potential, gamma ray, density, and induction logs. The study also examines and provides alternatives for situations in which not all of the required conventional logs are available for a particular well. Synthetic magnetic resonance logs for wells with an incomplete suite of conventional logs are generated and compared with actual magnetic resonance logs for the same well. In order to demonstrate the feasibility of the concept being introduced here, the methodology is applied to a highly heterogeneous reservoir in East Texas. The process was verified by applying it to a well away from the wells used during the development process. This technique is capable of providing a better image of the reservoir properties (effective porosity, fluid saturation, and permeability) and more realistic reserve estimation at a much lower cost.
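The mapping described above can be pictured as a supervised regression from conventional log readings to MR-derived properties at each depth sample. Below is a minimal sketch of that idea, assuming the training wells' logs are available as a flat table; the file name, column names, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's actual network design.

```python
# Hypothetical sketch: map conventional log readings (SP, GR, density,
# induction) at each depth to MR-derived properties (effective porosity,
# fluid saturation, permeability). Column names and model choice are
# illustrative; the paper's actual network design is not reproduced here.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

logs = pd.read_csv("wells_with_mr_logs.csv")           # assumed file layout
X = logs[["sp", "gamma_ray", "density", "induction"]]   # conventional logs
y = logs[["eff_porosity", "fluid_saturation", "permeability"]]  # MR targets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Once trained, the same model can generate "synthetic" MR curves for
# wells that were never logged with the magnetic resonance tool.
```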


SPE Eastern Regional Meeting | 2001

Identifying Best Practices in Hydraulic Fracturing Using Virtual Intelligence Techniques

Shahab D. Mohaghegh; Razi Gaskari; Andrei Popa; S. Ameri; S. Wolhart; R. Siegfried; David G. Hill

Hydraulic fracturing is an economic way of increasing gas well productivity. Hydraulic fracturing is routinely performed on many gas wells in fields that contain hundreds of wells. Companies have developed databases that include information such as methods and materials used during the fracturing of their wells. These databases usually include general information such as the date of the job, the name of the service company performing the job, fluid type and amount, proppant type and amount, and pump rate. Sometimes more detailed information is available, such as breakers, amount of nitrogen, and ISIP, to name a few. These data are usually of little use when complex 3-D hydraulic fracture simulators are used to analyze them, but valuable information can be deduced from such data using virtual intelligence tools. The process covered in this paper takes the available data and couples it with general information from each well (such as latitude, longitude, and elevation), any information available from log analysis, and production data, and uses a data mining and knowledge discovery process to identify a set of best practices for the particular field. The technique is capable of patching the data in places where certain information is missing; complex virtual intelligence routines are used to ensure that the information content of the database is not compromised during the data patching process. The conclusion of the analysis is a set of best practices that have been implemented in a particular field on a single-well or group-of-wells basis. Since the entire process is largely data driven, we let the data “speak for itself” and “tell us” what has worked and what has not worked in that particular field and how the process can be enhanced on a single-well basis. In this paper the results of applying this process to the Medina formation in New York State are presented. This data set was furnished by Belden & Blake during a GRI/NYSERDA-sponsored project. This process provides an important step toward achieving a comprehensive set of tools and processes for data mining, knowledge discovery, and data-knowledge fusion from data sets in the oil and gas industry.
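The abstract mentions "patching" records with missing fields before the mining step. The paper's virtual-intelligence routines are not specified here, so the sketch below substitutes a simple k-nearest-neighbors imputer to illustrate the idea; the file name and column names are invented.

```python
# Hypothetical sketch of "data patching": fill missing frac-record fields
# using values from similar wells. KNNImputer stands in for the paper's
# (unspecified) virtual-intelligence routines; column names are invented.
import pandas as pd
from sklearn.impute import KNNImputer

records = pd.read_csv("frac_database.csv")
features = ["latitude", "longitude", "elevation", "fluid_volume",
            "proppant_amount", "pump_rate", "isip"]

imputer = KNNImputer(n_neighbors=5, weights="distance")
patched = pd.DataFrame(imputer.fit_transform(records[features]),
                       columns=features, index=records.index)

# Downstream analysis (best-practices mining) then runs on `patched`
# without discarding wells that are missing one or two fields.
```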


SPE Eastern Regional Conference and Exhibition | 1999

Reducing the Cost of Field-Scale Log Analysis Using Virtual Intelligence Techniques

Shahab D. Mohaghegh; Andrei Popa; George Koperna; David G. Hill

One of the costliest parts of field-scale reservoir studies is log analysis. A recent GRI project required a detailed study of a field with hundreds of wells. As part of this study, all the well logs were to be analyzed by an engineer in order to identify net pay, porosity, and saturation. It soon became apparent that a considerable amount of time would have to be devoted to well log analysis in order to obtain consistent, high-quality reservoir characteristics throughout the field. This was mainly because logs for several wells were missing and many wells did not have the suite of logs necessary for analysis. This paper presents a novel approach to reduce the cost of well log analysis while maintaining the quality of the analysis. The cost reduction is achieved by analyzing only a subset of the wells in the field. Using the detailed analysis of this subset of well logs by an expert engineer, an intelligent software tool is built to learn and reproduce the engineer's analysis on the remaining wells. This approach increases the efficiency of the engineering team: it can decrease the time needed to analyze a large number of well logs while considerably reducing the project cost to the operator, and it provides a means to obtain log analysis for wells that do not have all the logs needed for the analysis by generating virtual wireline logs for those wells. Virtual intelligence techniques are used in the construction of the intelligent software tool presented in this study.
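The workflow amounts to training a model on the wells an expert engineer has already analyzed and applying it to the rest of the field. A minimal sketch of that train-on-a-subset pattern follows; the column names, the boolean `analyzed_by_engineer` flag, and the choice of a random forest are assumptions for illustration, not the study's actual tool.

```python
# Hypothetical sketch of the workflow in the paper: an engineer analyzes a
# subset of wells by hand; a model learns that analysis and is applied to
# the remaining wells. Column names and the regressor are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

logs = pd.read_csv("field_log_data.csv")        # one row per depth sample
inputs = ["gamma_ray", "density", "sp", "resistivity"]
targets = ["net_pay_flag", "porosity", "water_saturation"]

analyzed = logs[logs["analyzed_by_engineer"]]    # expert-labelled subset
remaining = logs[~logs["analyzed_by_engineer"]]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(analyzed[inputs], analyzed[targets])

# Predicted reservoir properties for the wells the engineer never touched.
predictions = pd.DataFrame(model.predict(remaining[inputs]),
                           columns=targets, index=remaining.index)
```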


SPE Western Regional/AAPG Pacific Section Joint Meeting | 2003

Identification of Contaminated Data in Hydraulic Fracturing Databases: Application to the Codell Formation in the DJ Basin

Andrei Popa; Shahab D. Mohaghegh; Razi Gaskari; S. Ameri

With the advance of computer technologies, digitized data is becoming increasingly available. Many companies are now in possession of oil- and gas-field databases that contain large amounts of information related to hydraulic fracturing, reservoir characterization, production, drilling, etc. However, not all the records are completely accurate or reflect reality. Errors in stored data can be subjective or objective and can result from improper or incomplete data collection, errors in data entry, lack of proper interpretation, and other causes. These errors can later lead to poor, erroneous, or even impossible interpretation of the data. This leads to the question: how much of the data is reliable, and how can the contaminated data be identified?


SPE Annual Technical Conference and Exhibition | 2004

Determining In-Situ Stress Profiles From Logs

Shahab D. Mohaghegh; Andrei Popa; Razi Gaskari; S. Wolhart; R. Siegfried; S. Ameri

This paper presents a new and novel technique for determining the in-situ stress profile of hydrocarbon reservoirs from geophysical well logs using a combination of fuzzy logic and neural networks. It is well established that in-situ stress cannot be generated from well logs alone: two sets of formations may have very similar geologic signatures but possess different in-situ stress profiles because of varying degrees of tectonic activity in each region. By using two new parameters as surrogates for tectonic activity, fuzzy logic to interpret the logs and rank parameter influence, and neural networks as a mapping tool, it has become possible to accurately generate in-situ stress profiles from logs. This paper demonstrates the improved performance of this new approach over conventional approaches used in the industry.
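A rough sketch of the mapping step is shown below: a small neural network trained on log responses plus two surrogate "tectonic activity" parameters against measured stress values. The fuzzy-logic ranking of inputs described in the paper is not reproduced, and all file and column names are assumptions.

```python
# Hypothetical sketch of the mapping step only: a neural network that maps
# log responses plus two surrogate "tectonic activity" parameters to an
# in-situ stress profile. The fuzzy-logic ranking of inputs described in
# the paper is not reproduced; column names are assumptions.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

data = pd.read_csv("stress_training_wells.csv")
inputs = ["depth", "gamma_ray", "bulk_density", "sonic_dt",
          "tectonic_surrogate_1", "tectonic_surrogate_2"]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(24, 12),
                                   max_iter=3000, random_state=0))
model.fit(data[inputs], data["measured_stress"])   # stress measured in situ

# Applied to a new well's logs, the model returns a stress value per depth
# sample, i.e. a continuous in-situ stress profile.
```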


Computers & Geosciences | 2000

Design optimum frac jobs using virtual intelligence techniques

Shahab D. Mohaghegh; Andrei Popa; Sam Ameri

Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data include wellbore configuration and reservoir characteristics such as porosity, permeability, stress, and thickness profiles of the pay layers as well as the overburden layers. Other essential information required for the design process includes fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration, and frac job schedule. Some of the parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volume, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters, and finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool developed from this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will meet the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These networks are used as the fitness function for a genetic algorithm routine that searches for the best combination of design parameters for the frac job. The genetic algorithm searches through the entire solution space and identifies the optimal combination of parameters to be used in the design process. Considering the complexity of this task, the methodology converges relatively fast, providing the engineer with several near-optimum scenarios for the frac job design. These scenarios, which can be produced in just a minute or two, can be valuable starting points for the engineer's design work and save hours of runs on the simulator.
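The coupling of a neural-network surrogate with a genetic algorithm can be sketched compactly. In the toy example below, `surrogate_frac_length` is a placeholder for the trained network ensemble that mimics the frac simulator, and the parameter bounds, coefficients, and target frac length are invented for illustration.

```python
# Hypothetical sketch of the coupling described above: a genetic algorithm
# searches frac-design parameters, and a surrogate of the frac simulator
# scores each candidate. `surrogate_frac_length` stands in for the trained
# neural-network ensemble; bounds and coefficients are invented.
import random

BOUNDS = {                      # (min, max) for each continuous parameter
    "fluid_volume":   (10_000.0, 200_000.0),   # gal
    "proppant_mass":  (50_000.0, 500_000.0),   # lb
    "injection_rate": (10.0, 60.0),            # bpm
}
TARGET_FRAC_LENGTH = 600.0      # ft, the engineer's optimization criterion

def surrogate_frac_length(design):
    """Placeholder for the trained network ensemble mimicking the simulator;
    returns a predicted propped frac half-length."""
    return (0.6 * design["proppant_mass"] ** 0.5 +
            0.3 * design["fluid_volume"] ** 0.5 +
            2.0 * design["injection_rate"])

def fitness(design):
    # Closer to the specified criterion is better (negated distance).
    return -abs(surrogate_frac_length(design) - TARGET_FRAC_LENGTH)

def random_design():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in BOUNDS}

def mutate(d, rate=0.2):
    return {k: (random.uniform(*BOUNDS[k]) if random.random() < rate else v)
            for k, v in d.items()}

population = [random_design() for _ in range(50)]
for _ in range(40):                              # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, surrogate_frac_length(best))
```

The same loop generalizes to more design parameters and to discrete choices (fluid or proppant type) by adding categorical genes.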


SPE Annual Technical Conference and Exhibition | 2005

Analysis of Best Hydraulic Fracturing Practices in the Golden Trend Fields of Oklahoma

Shahab D. Mohaghegh; Razi Gaskari; Andrei Popa; Iraj Salehi; S. Ameri

In the past decades, several hundred stimulation procedures have been performed in the Golden Trend fields of Oklahoma. The outcomes of these stimulation jobs have not been the same for all wells. The effectiveness of the stimulation is a function of several factors, including reservoir quality and completion and stimulation techniques. Completion and stimulation techniques can be further itemized as completion type (such as open hole versus cased hole), type and amount of fluids and proppant, and the rate at which they are pumped into the formation.


International Symposium on Neural Networks | 2004

Determining in-situ stress profiles of hydrocarbon reservoirs from geophysical well logs using intelligent systems

Shahab D. Mohaghegh; Andrei Popa; Razi Gaskari; Steve Wolhart; Bob Siegfried; Sam Ameri

This work presents a new and novel technique for determining the in-situ stress profile of hydrocarbon reservoirs from geophysical well logs using a combination of fuzzy logic and neural networks. It is well established that in-situ stress cannot be generated from well logs alone: two sets of formations may have very similar geologic signatures but possess different in-situ stress profiles because of varying degrees of tectonic activity in each region. By using two new parameters as surrogates for tectonic activity, fuzzy logic to interpret the logs and rank parameter influence, and a neural network as a mapping tool, it has become possible to accurately generate in-situ stress profiles. This paper demonstrates the superiority of this new approach over conventional approaches used in the oil and gas industry.


SPE Annual Technical Conference and Exhibition | 2002

Identification of Successful Practices in Hydraulic Fracturing Using Intelligent Data Mining Tools; Application to the Codell Formation in the DJ Basin

Shahab D. Mohaghegh; Andrei Popa; Razi Gaskari; S. Ameri; S. Wolhart

In a detailed data mining study, about 150 wells completed in the Codell formation, DJ Basin, have been analyzed to identify successful practices in hydraulic fracturing. The Codell formation is a low-permeability sandstone within the Wattenberg field in the DJ Basin of Colorado. Since 1997 over 1500 Codell wells have been restimulated. As part of a Gas Research Institute restimulation project, 150 wells were studied to optimize candidate selection and identify successful practices. Hydraulic fracturing is an economic way of increasing gas well productivity and is routinely performed on many gas wells in fields that contain hundreds of wells. During the process of hydraulically fracturing gas wells over many years, companies usually record the relevant data on methods and materials in a database. These databases usually include general information such as the date of the job, the service company performing the job, fluid type and amount, proppant type and amount, and pump rate. Sometimes more detailed information is available, such as breakers, additives, amount of nitrogen, and ISIP, to name a few. These data are usually of little use in complex 3-D hydraulic fracture simulators, which require additional and more detailed information. On the other hand, the collected data contain valuable information that can be processed using virtual intelligence tools. The process covered in this paper takes the above-mentioned data and couples it with general information from each well (such as latitude, longitude, and elevation), any information available from log analysis, and production data. The conclusion of the analysis is a set of successful practices that have been implemented in a particular field and recommendations on how to proceed with further hydraulic fracture jobs. In this paper the results of applying this process to about 150 Codell wells during the GRI-sponsored project are presented. This process provides an important step toward constructing a comprehensive set of methods and processes for data mining, knowledge discovery, and data-knowledge fusion from data sets in the oil and gas industry.

Introduction

Patina Oil and Gas has been very active in the DJ Basin in recent years. They have been one of the most active operators in the United States in identifying and restimulating tight gas sand wells. Patina has over 3,400 producing wells in the basin and has restimulated over 230 Niobrara/Codell completions so far. Furthermore, it is estimated that the results they are achieving in terms of incremental recoveries are up to 60% better than other operators. Studies and analyses such as the one presented in this paper have the potential to help operators like Patina Oil & Gas increase their chance of success even further. They also have the potential to help other operators increase their chances of success in the DJ Basin or other locations throughout North America. This study is probably one of the most comprehensive analyses of its kind ever to be performed on a set of wells in the United States. In this technical paper the authors' intention is to introduce this new and novel methodology in its entirety and present as much of the results as the page limitations of this paper allow. Please note that due to the comprehensive nature of this methodology, many of the topics cannot be discussed in much detail; it is our intention to cover these topics in much more detail in a series of upcoming technical papers.

Methodology

The process of “Successful Practices Identification” using state-of-the-art data mining, knowledge discovery, and data-knowledge fusion techniques includes the following five steps. In order to comprehensively cover the theoretical background of each step involved in this process, a separate paper may be needed for each. Some of the ideas have been introduced in the past and are referred to in the references. Details on other topics will be the subject of future papers. In this article the goal is to provide a view of the methodology as a whole; therefore, the authors will simply introduce and give a brief explanation of each topic to clarify its role in the process.

Step One: Data Quality Control

The process starts with a thorough quality control of the data set. During this process the outliers and their nature (are they really valid data elements, or are they the result of human error either in measurement or in recording the values?) as well as missing data are identified, and using advanced intelligent techniques, the data set is repaired. It is important to note that the repair of the data set at this stage of the analysis is aimed at rescuing the remaining data elements in a particular record (a data record here refers to a row of the data matrix) that includes many features (features are the columns of the data matrix). The goal is not to “magically” find the missing piece of data or substitute the outlier with the correct value; it is merely to put the best possible value in place of the missing data or the outlier so that the analysis can continue without losing the information content that exists in the rest of the features of that data record. Moreover, a new and novel methodology has been developed in order to identify and eliminate erroneous data records from the data set. These techniques, the verification of their accuracy, and how they are implemented are the subject of a future paper.

Step Two: Fuzzy Combinatorial Analysis

The second step of the process is a complete “Fuzzy Combinatorial Analysis” (FCA) that examines each feature in the data set in order to identify its influence on the process outcome. The process outcome is a feature (usually a production indicator such as cumulative production, 5-year cum., 10-year cum., best 12 months of production, etc.) in the data set that is designated to identify the success of the practices in a field. For example, if 5-year cumulative gas production is selected as the process outcome, then a high 5-year cum. would indicate good practices for that particular well. During the Fuzzy Combinatorial Analysis, each feature's influence on the process outcome is examined both individually and in combination with other features. This is because the influence of a particular feature (say a fracturing fluid) on the process outcome may be altered once it is combined with the effects of other features (say specific additives) that are present in the process. Therefore it is important to perform the analysis in a combinatorial fashion (hence the name combinatorial analysis) in order to reveal the true influence of the features present in the data set on the process outcome.

A note of caution is in order here. Many commercial, off-the-shelf neural network software applications claim to identify the influence of features on the output once a neural network model is built for a data set, and many practitioners in our industry have been using these values as the true influence of parameters on the output. These products simply use the summation of the weights connected to a particular input neuron in order to achieve this. The authors believe that this is a gross simplification of a complex problem; it does not provide an accurate account of the influence of each feature and therefore should not be used as such. This method is simply an artifact of the modeling process, and changing the architecture or learning algorithms of the modeling process may alter the apparent influence of the features.

Step Three: Intelligent Production Data Analysis

The third step in the “Successful Practices Identification” methodology is a process called “Intelligent Production Data Analysis” (IPDA). The word “intelligent” refers to the use of intelligent systems techniques in the production data analysis process. During the IPDA process, production data is used to identify a series of “Production Indicators” that represent the state of production from a particular field in time and space. The time and space representation is aimed at capturing the depletion and pressure decline in the field as new wells are drilled and put into production at different rates. The dynamic nature of this analysis (simultaneous analysis of the data in four dimensions: x, y, z, and t) allows the user to identify the sweet spots as well as bad (unproductive) spots in a field. Such analysis would prove quite valuable for field development strategies that include infill drilling programs and candidate selection for stimulation, restimulation, and workovers.

Step Four: Neural Model Building

The next step in the process calls for building a predictive neural model based on the available data. This step has been covered in detail in several prior papers and will not be repeated here.

Step Five: Successful Practices Analysis

Once a representative neural model is successfully trained, calibrated, and verified, the process of “Successful Practices Identification” is concluded with a three-stage analysis. The three-stage analysis combines the neural model with Monte Carlo simulation, genetic algorithm search and optimization routines, and fuzzy set theory to identify the successful practices on a single-well basis, on a group-of-wells basis, and on a field-wide basis. During the single-well analysis, each well is thoroughly analyzed in order to identify the sensitivity of that particular well to different operational conditions. This analysis can identify the distance of the actual practices on that well from the successful practices that could have been performed on that well; the larger the distance, the higher the potential.
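As one illustration of the final step, the single-well stage can be approximated by Monte Carlo sampling of the controllable parameters through the trained model and comparing the actual job against the best sampled outcome. In the sketch below, `surrogate_production` is a placeholder for the calibrated neural model and every number is invented.

```python
# Hypothetical sketch of the single-well stage of Step Five: Monte Carlo
# sampling of controllable frac parameters through a trained surrogate model
# to see how far the job actually pumped on a well sits from what the model
# says was achievable. `surrogate_production` is a placeholder for the
# calibrated neural model; all numbers are invented.
import random

def surrogate_production(fluid_volume, proppant_mass, pump_rate):
    """Stand-in for the trained neural model (predicted 5-year cum)."""
    return (0.004 * proppant_mass ** 0.5 + 0.002 * fluid_volume ** 0.5
            + 1.5 * pump_rate)

actual_job = {"fluid_volume": 45_000, "proppant_mass": 120_000, "pump_rate": 25}
actual_outcome = surrogate_production(**actual_job)

samples = []
for _ in range(10_000):                          # Monte Carlo trials
    outcome = surrogate_production(
        fluid_volume=random.uniform(10_000, 200_000),
        proppant_mass=random.uniform(50_000, 500_000),
        pump_rate=random.uniform(10, 60))
    samples.append(outcome)

best_achievable = max(samples)
print(f"actual: {actual_outcome:.0f}, best sampled: {best_achievable:.0f}, "
      f"gap: {best_achievable - actual_outcome:.0f}")
```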


SPE Eastern Regional Meeting | 2000

Hyperbolic Decline Parameter Identification Using Optimization Procedures

Sinisha Jikich; Andrei Popa

This paper describes two techniques for hyperbolic decline parameter identification. The first technique uses a genetic algorithm in the optimization procedure; genetic algorithms are potentially useful in solving optimization problems when the objective function contains irregularities. The second technique uses linear regression to fit a decline curve to the data. The method weights the production rates equally during curve fitting, resulting in a stable solution; consequently, the results are reproducible for a wide range of applications. Both methods were tested against field and literature data, demonstrating rapid, stable convergence and reproducible curves.
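For reference, the hyperbolic (Arps) decline being fitted is q(t) = q_i / (1 + b D_i t)^(1/b). The sketch below fits those three parameters to synthetic rate-time data with ordinary nonlinear least squares as a stand-in; the paper's genetic-algorithm and equally weighted regression procedures are not reproduced, and the data are invented.

```python
# Hypothetical sketch: fitting the Arps hyperbolic decline
#   q(t) = q_i / (1 + b * D_i * t)**(1/b)
# to rate-time data. Ordinary nonlinear least squares is used here as a
# stand-in for the paper's own procedures; the data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, qi, b, di):
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(1, 61)                                   # months on production
q = hyperbolic(t, qi=950.0, b=0.8, di=0.07)            # synthetic "field" rates
q *= 1.0 + 0.05 * np.random.default_rng(0).standard_normal(t.size)  # noise

(qi, b, di), _ = curve_fit(hyperbolic, t, q,
                           p0=[q[0], 0.5, 0.05],
                           bounds=([0.0, 0.01, 0.0], [np.inf, 2.0, 1.0]))
print(f"q_i={qi:.1f}, b={b:.2f}, D_i={di:.3f}")
```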

Collaboration


Dive into Andrei Popa's collaboration.

Top Co-Authors

Iraj Ershaghi

University of Southern California


Razi Gaskari

West Virginia University


S. Ameri

West Virginia University


Sam Ameri

West Virginia University
