Publications
Featured research published by Alberto Cominelli.
SPE Reservoir Simulation Symposium | 2011
Honggang Wang; David Echeverría-Ciaurri; Louis J. Durlofsky; Alberto Cominelli
SPE Journal | 2012
Honggang Wang; David Echeverría-Ciaurri; Louis J. Durlofsky; Alberto Cominelli
This paper (SPE 141950) was accepted for presentation at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, USA, 21–23 February 2011, and revised for publication. Original manuscript received for review 15 December 2010. Revised manuscript received for review 12 May 2011. Paper peer approved 15 July 2011. Summary Subsurface geology is highly uncertain, and it is necessary to account for this uncertainty when optimizing the location of new wells. This can be accomplished by evaluating reservoir performance for a particular well configuration over multiple realizations of the reservoir and then optimizing based, for example, on expected net present value (NPV) or expected cumulative oil production. A direct procedure for such an optimization would entail the simulation of all realizations at each iteration of the optimization algorithm. This could be prohibitively expensive when it is necessary to use a large number of realizations to capture geological uncertainty. In this work, we apply a procedure that is new within the context of reservoir management—retrospective optimization (RO)—to address this problem. RO solves a sequence of optimization subproblems that contain increasing numbers of realizations. We introduce the use of k-means clustering for selecting these realizations. Three example cases are presented that demonstrate the performance of the RO procedure. These examples use particle swarm optimization (PSO) and simplex linear interpolation (SLI)-based line search as the core optimizers (the RO framework can be used with any underlying optimization algorithm, either stochastic or deterministic). In the first example, we achieve essentially the same optimum using RO as we do using a direct optimization approach, but RO requires an order of magnitude fewer simulations. The results demonstrate the advantages of cluster-based sampling over random sampling for the examples considered. Taken in total, our findings indicate that RO using cluster sampling represents a promising approach for optimizing well locations under geological uncertainty.
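The realization-selection step lends itself to a short illustration. Below is a minimal Python sketch, assuming each realization is summarized by a small feature vector (e.g. pore volume, average permeability); the function names, feature choice, and RO sample-size schedule are illustrative, not taken from the paper.

```python
# Cluster-based selection of representative realizations for an RO subproblem.
# Assumes `features` summarizes each geological realization; illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def select_realizations(features, n_select, seed=0):
    """Pick one representative realization per k-means cluster:
    the member closest to its cluster centroid."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(features)
    selected = []
    for k in range(n_select):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(dists)])
    return sorted(selected)

# RO solves subproblems over increasing sample sizes, e.g. 1, 2, 4, 8 realizations.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 3))   # 100 realizations, 3 summary features
for n in (1, 2, 4, 8):
    subset = select_realizations(features, n)
    # optimize expected NPV over `subset`, warm-starting from the previous solution
    print(n, subset)
```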
EUROPEC/EAGE Conference and Exhibition | 2007
Alberto Del Bianco; Alberto Cominelli; Laura Dovera; Geir Nævdal; Brice Vallès
During history matching, reservoir models are calibrated against production data to improve the reliability of forecasts. Often the calibration ends up with a handful of matched models, sometimes achieved without preserving the prior geological interpretation. This makes the outcome of many history-matching projects unsuitable for a probabilistic approach to production forecasting, motivating the quest for methodologies that cast history matching in a stochastic framework. The Ensemble Kalman Filter (EnKF) has gained popularity as a Monte Carlo based methodology for history matching and real-time updating of reservoir models. With EnKF, an ensemble of models is updated whenever production data become available. The initial ensemble is generated according to the prior model, while the sequential updates lead to a sampling of the posterior probability function. This work is one of the first to successfully use EnKF to history match a real field reservoir model. It is, to our knowledge, the first paper showing how EnKF can be used to evaluate the uncertainty in the production forecast for a given development plan on a real field model. The field at hand was an onshore saturated-oil reservoir. The porosity distribution was one of the main uncertainties in the model, while permeability was treated as a function of porosity. According to the geological knowledge, the prior uncertainty was modeled using Sequential Gaussian Simulation, and ensembles of porosity realizations were generated. Initial sensitivities indicated that conditioning porosity to the available well data gives superior results in the history-matching phase. Next, to achieve a compromise between accuracy and computational efficiency, the impact of ensemble size on history matching, porosity distribution, and uncertainty assessment was investigated. In the different ensembles, a reduction of porosity uncertainty due to the production data was observed. Moreover, EnKF narrowed the production-forecast confidence intervals with respect to estimates based on the prior distribution.
Introduction
Reservoir management of modern oil and gas fields requires periodic updates of the simulation models to integrate into the geological parameterization the production data collected over time. These processes face several challenges. First, a coherent view of the geomodel requires updating the simulation decks in ways consistent with the geological assumptions. Second, management increasingly requires a probabilistic assessment of the different development scenarios. This means that cumulative distribution functions for key production indicators, e.g. cumulative oil production at stock-tank conditions (STC), over the entire lifetime of the field, reflecting the underlying uncertainty in the knowledge of the reservoir, are expected outcomes of a reservoir-modeling project. Moreover, production data are nowadays collected at increasing frequencies, especially for wells equipped with permanent downhole sensors. Decision making based on the most current information requires frequent and rapid updates of the reservoir models. The Ensemble Kalman Filter (EnKF) is a Monte Carlo based method developed by Evensen to calibrate oceanographic models by sequential data assimilation. Since the pioneering application to near-well modeling problems by Nævdal et al., EnKF has become a popular approach in the reservoir-simulation community for history matching and uncertainty assessment. This popularity is motivated by key inherent features of the method.
EnKF is a sequential data-assimilation methodology, so production data can be integrated into the simulation model as they become available. This makes EnKF well suited for real-time applications, where continuously collected data have to be used to improve the reliability of predictive models. EnKF keeps a Gaussian ensemble of models aligned with the most current production data by linear updates of the model parameters. In that way the statistical properties of the Gaussian ensemble, that is to say the mean, variance, and two-point correlations, are preserved. Because EnKF needs neither history-matching gradients nor sensitivity coefficients, any reservoir simulator with restart capabilities can be used in an EnKF workflow without modifying the simulator source code. This is an obvious advantage with respect to methods like Randomized Maximum Likelihood (RML), which requires a simulator with adjoint gradient capabilities. These reasons motivate the interest in EnKF in the upstream industry. Nonetheless, only a few real applications had been published before this work. Skjervheim et al. compared results of using EnKF to assimilate 4D seismic and production data, and obtained results that slightly improved on the base case used for comparison. Haugen et al. (see Ref. 13) report that EnKF was used to successfully history match the simulation model of a North Sea field, with substantial improvement compared to the reference case. In this paper we applied EnKF to history match the Zagor simulation model, also quantifying the reduction of uncertainty due to the assimilation of the production data. Different ensembles were used to investigate the connection between the effectiveness of EnKF and the size of the statistical samples. Next, we used one of the ensembles updated with EnKF to assess the uncertainty in the production forecasts. To our knowledge, this is the first paper where EnKF was used on a real reservoir from history matching through to uncertainty analysis of production forecasts. The paper proceeds as follows. The next section discusses the EnKF methodology, including its mathematical background and some remarks on its current limitations. Then the Zagor reservoir model is described, including the geological parameterization used in this work and the different ensembles utilized in the application. The results of the application are presented in two subsequent sections, the first dedicated to history matching and the second to the assessment of uncertainty in the production forecasts. Finally, conclusions based on our results are drawn and some perspectives for future work are given.
The Ensemble Kalman Filter
The EnKF is a statistical methodology suitable for solving inverse problems, especially in cases where observed data become available sequentially in time. Assuming that the evolution of a physical system can be approximated by a numerical model, typically the discretisation of a partial differential equation, a state vector can be used to represent the model parameters and observations. Using multiple realizations of the state vector, one can explicitly express the model uncertainty. The EnKF describes the evolution of the system by updating the ensemble of state vectors whenever an observation becomes available. In reservoir simulation, EnKF can be applied to integrate production data by sequentially updating an ensemble of reservoir models during the simulation.
Each reservoir model in the ensemble is kept up to date as production data are assimilated sequentially. In this context, every reservoir state vector comprises three types of parameters: static parameters, dynamic parameters, and production data. The static parameters are those that in traditional history matching do not vary with time during a simulation, such as permeability (K) and porosity (φ). The dynamic parameters include the fundamental variables of the flow simulation; for black-oil models these are the cell pressure (p), water saturation (Sw), gas saturation (Sg), and solution gas-oil ratio (RS). In addition to the variables for each cell, one adds observations of the production data at each well. Production data usually include simulated data corresponding to observations such as well production rates, bottomhole pressure values, water cut (WCT), and gas-oil ratio (GOR) values. Thus, using the notation of X.-H. Wen and W. H. Chen, the ensemble of state variables is modelled by multiple realizations of the state vector.
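As a concrete illustration of the update described above, here is a minimal Python sketch of a perturbed-observation EnKF analysis step, where each ensemble column stacks static parameters, dynamic variables, and simulated production data; the matrix names and dimensions are illustrative, not the paper's implementation.

```python
# Perturbed-observation EnKF analysis step: every ensemble member receives the
# same linear update toward the (perturbed) observations. Illustrative sketch.
import numpy as np

def enkf_update(Y, H, d_obs, R, rng):
    """Y: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state) extracts the
    simulated data from the state; d_obs: (n_obs,) observations;
    R: (n_obs, n_obs) observation-error covariance."""
    n_state, n_ens = Y.shape
    Yp = Y - Y.mean(axis=1, keepdims=True)           # ensemble anomalies
    C = Yp @ Yp.T / (n_ens - 1)                      # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)     # Kalman gain
    D = d_obs[:, None] + rng.multivariate_normal(    # perturbed observations
        np.zeros(len(d_obs)), R, n_ens).T
    return Y + K @ (D - H @ Y)                       # linear update of members

rng = np.random.default_rng(1)
Y = rng.normal(size=(50, 100))                       # 50 state vars, 100 members
H = np.zeros((2, 50)); H[0, -2] = H[1, -1] = 1.0     # last two entries are "data"
Ya = enkf_update(Y, H, np.array([0.3, -0.1]), 0.01 * np.eye(2), rng)
```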
ECMOR X - 10th European Conference on the Mathematics of Oil Recovery | 2006
P. de Montleau; Alberto Cominelli; K. Neylon; D. Rowan; I. Pallister; O. Tesaker; I. Nygard
The introduction of controllable downhole devices has greatly improved the ability of the reservoir engineer to implement complex well-control strategies to optimize hydrocarbon recovery. The determination of these optimal control strategies, subject to limitations imposed by production and injection constraints, is an area of much active research and generally involves coupling some form of control logic to a reservoir simulator. Some of these strategies are reactive: interventions are made when conditions are met at particular wells or valves, with no account taken of the effect over the future lifetime of the reservoir. Moreover, by the time the intervention is applied it may be too late to prevent unwanted breakthrough. Alternative proactive strategies may be applied over the lifetime of the field, controlling fluid flow early enough to delay breakthrough. This paper presents a proactive, gradient-based method to optimize production throughout the field life. The method requires the formulation of a constrained optimization problem in which bottomhole pressures or target flow rates of wells, or flow rates of groups, are the controllable parameters. To control a large number of wells or groups at a reasonably high frequency, efficient calculation of accurate well sensitivities (gradients) is required. Hence, the adjoint method has been implemented in a commercial reservoir simulator to compute these gradients. Once these have been calculated, the simulator can be run in optimization mode to find a locally optimal objective function (e.g., cumulative production). This optimization procedure usually involves progressively activating constraints, with each new constraint representing a significant improvement in the objective. Proper management of the degrees of freedom of the parameters is essential when calculating the constrained-optimization search direction. Adjoint methods have already been used for production optimization within reservoir simulation; however, to our knowledge an accurate analysis of the optimal management of active and inactive constraints for different types of recovery processes in field-like cases has not been discussed.
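To make the optimization loop concrete, here is a minimal Python sketch of projected gradient ascent on well controls, with simple bounds standing in for production/injection constraints; `simulate_npv` and `adjoint_gradient` are hypothetical placeholders for a reservoir simulator with adjoint capabilities, not a real API.

```python
# Projected gradient ascent on a control vector u (e.g. BHP targets per well per
# control step). A control sitting on its bound plays the role of an active
# constraint. `simulate_npv`/`adjoint_gradient` are hypothetical placeholders.
import numpy as np

def optimize_controls(u0, lb, ub, simulate_npv, adjoint_gradient,
                      step=1.0, n_iter=20):
    u = u0.copy()
    best = simulate_npv(u)
    for _ in range(n_iter):
        g = adjoint_gradient(u)                 # one adjoint run per iteration
        u_try = np.clip(u + step * g, lb, ub)   # project onto the constraints
        f_try = simulate_npv(u_try)
        if f_try <= best:                       # simple backtracking on failure
            step *= 0.5
            continue
        u, best = u_try, f_try
    return u, best

# Toy check with a concave quadratic in place of the simulator: the
# unconstrained optimum (0.7) lies above the bound, so u ends on ub = 0.5,
# i.e. the bound becomes an active constraint.
f = lambda u: -np.sum((u - 0.7) ** 2)
g = lambda u: -2.0 * (u - 0.7)
u_opt, f_opt = optimize_controls(np.zeros(4), np.zeros(4), 0.5 * np.ones(4), f, g)
print(u_opt, f_opt)
```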
EUROPEC/EAGE Conference and Exhibition | 2010
Emanuele Vignati; William Bonotto; Alberto Cominelli; Elena Stano; Claire Le Maitre
It is common practice to use a limited number of components to describe hydrocarbon fluids in reservoir simulation, even though process engineering needs a richer set of components to model the topside separation process. This requires delumping of the well streams from reservoir models to meet the detailed description needed by the process engineers. We have tackled this problem on the reservoir model of the Goliat field, located in the south-western part of the Barents Sea. Economically producible hydrocarbons have been proven in the Realgrunnen and Kobbe formations. The production strategy consists of peripheral water injection for pressure maintenance and gas injection into the gas cap for disposal purposes. The development plan includes the distillation of intermediate components from the separated gas phase. This cannot easily be modelled within a black-oil reservoir simulator, because the process efficiency depends on gas volumes and gas molar composition, which change along with the depletion of the reservoir. A possible solution is to build a coupled reservoir-process model to provide corrected production profiles, and the efficiency of this solution relies on an accurate and robust method for delumping the black-oil streams. We modified a well-known delumping method to account for Goliat's peculiarities. A detailed fluid composition is calculated at each well completion. Phase molar flow is characterized by interpolating the compositions measured in a differential-liberation test. Since the gas is expected to be partially produced from the gas cap, the original method has been modified to distinguish between dissolved-gas and dry-gas compositions, following the latter in the simulation as a tracer carried by the gas phase. The application of this delumping algorithm offers several advantages. First, the method is cost-effective because it can be implemented efficiently by post-processing black-oil well streams, avoiding a much more time-consuming compositional simulation. Second, it increases the accuracy of forecast profiles, since it takes into account the variation of the produced fluid composition with time. Finally, it provides sales-gas composition forecasts according to the surface process facilities.
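A minimal Python sketch of the kind of post-processing the abstract describes, assuming gas compositions tabulated against pressure from a differential-liberation test; the table values, pseudo-component split, and function names are illustrative, not Goliat data.

```python
# Split a black-oil gas stream into free (gas-cap) and dissolved gas and assign
# compositions: the free part keeps the dry-gas composition tracked as a tracer,
# the dissolved part is interpolated at pressure. Illustrative numbers only.
import numpy as np

# Pressure nodes of the DL test and, per node, the molar composition of the
# liberated gas for a 3-pseudo-component description (C1, C2-C4, C5+).
p_nodes = np.array([100.0, 200.0, 300.0])                  # bar
gas_comp = np.array([[0.92, 0.07, 0.01],
                     [0.88, 0.10, 0.02],
                     [0.85, 0.11, 0.04]])

def delump_gas(q_gas_total, q_gas_free, p, dry_gas_comp):
    """Return component molar rates for the well's total gas rate."""
    z_dissolved = np.array([np.interp(p, p_nodes, gas_comp[:, i])
                            for i in range(gas_comp.shape[1])])
    z_dissolved /= z_dissolved.sum()                       # renormalize
    q_dissolved = q_gas_total - q_gas_free
    return q_gas_free * np.asarray(dry_gas_comp) + q_dissolved * z_dissolved

# Component molar rates for 1000 mol/d of gas, 400 of which is gas-cap gas:
print(delump_gas(1000.0, 400.0, 250.0, [0.95, 0.04, 0.01]))
```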
SPE Reservoir Evaluation & Engineering | 2009
Emanuele Vignati; Alberto Cominelli; Roberto Rossi; Paolo Roscini
This paper (SPE 113769) was accepted for presentation at the EUROPEC/EAGE Conference and Exhibition, Rome, 9–12 June 2008, and revised for publication. Original manuscript received for review 22 February 2008. Revised manuscript received for review 17 October 2008. Paper peer approved 7 March 2009. Summary In reservoir simulations, it is common practice to use a limited number of components to describe the reservoir fluids, ranging from 2 (black-oil models) to 5–12 [compositional equation-of-state (EOS) based models]. In contrast, surface models usually require a greater number of components, typically from 15 to 30. The hydrocarbon components used in surface models are usually lumped into pseudocomponents in reservoir simulations. This disparity in the description of the mixture may become an issue whenever integrated asset models are developed with the aim of linking surface-process and reservoir engineering. In this paper we implemented a consistent integrated simulation workflow from reservoir to process. Process models with a detailed compositional formulation were linked to compositionally simpler reservoir models, using Leibovici's delumping scheme (Leibovici et al. 1996, 2000) to convert fluid compositions between EOSs with different numbers of components. The proposed workflow was applied to simulate two different recovery processes: gas injection below the dewpoint in a sour gas/condensate reservoir, and miscible gas injection in a near-critical volatile-oil reservoir. Results of this implementation, in which detailed components are traced in the reservoir model, are compared with integrated simulations in which all stages are modeled using the detailed composition.
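The following Python sketch is not Leibovici's scheme itself (which reconstructs detailed-component K-values from the lumped flash); it only illustrates the bookkeeping of redistributing lumped pseudo-component molar flows over detailed components with fixed internal splits. All splits and names are illustrative.

```python
# Constant-split delumping: each lumped pseudo-component's molar rate is
# redistributed over its detailed components with fixed internal ratios.
lumped_to_detailed = {
    "C1N2": {"C1": 0.97, "N2": 0.03},
    "C2C4": {"C2": 0.45, "C3": 0.35, "iC4": 0.08, "nC4": 0.12},
    "C5p":  {"C5": 0.30, "C6": 0.25, "C7+": 0.45},
}

def delump_stream(lumped_rates):
    """lumped_rates: {pseudo-component: molar rate}. Returns detailed rates."""
    detailed = {}
    for pseudo, rate in lumped_rates.items():
        for comp, frac in lumped_to_detailed[pseudo].items():
            detailed[comp] = detailed.get(comp, 0.0) + rate * frac
    return detailed

print(delump_stream({"C1N2": 600.0, "C2C4": 250.0, "C5p": 150.0}))
```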
Computational Geosciences | 2016
Alberto Cominelli
The 14th European Conference on the Mathematics of Oil Recovery (ECMOR XIV) was held in Catania, a beautiful city lying between Mount Etna and the Mediterranean Sea on the magnificent island of Sicily (Italy) [1], from 9 to 11 September 2014. ECMOR has been organized every even year since 1988 to provide a place where engineers and scientists from academia and the oil and gas industry can meet to focus on advances in the mathematical aspects of reservoir simulation and modelling. After ECMOR VI, held in Scotland in September 1998, the European Association of Geoscientists and Engineers (EAGE) included this event in its regular program. In Catania, a total of 88 oral presentations and 47 posters covered a wide range of topics and received attention from 185 attendees, a new record for ECMOR. The full list of papers presented at the conference can be found on the EAGE website (http://www.eage.org/event/index.php?eventid=1093&evp=12651), and all papers are electronically accessible via the EarthDoc online database (http://earthdoc.eage.org/). This issue of Computational Geosciences presents 24 papers developed from selected works presented at the Sicilian conference and peer reviewed according to the journal's policy. These papers address complex mathematical challenges arising from the need to meet the world's energy demand in the coming years, and they provide evidence of the technical success of ECMOR XIV. I wish to congratulate the authors for their contributions.
IFAC Proceedings Volumes | 2007
Geir Nævdal; Alberto Del Bianco; Alberto Cominelli; Laura Dovera; Rolf Johan Lorentzen; Brice Vallès
Abstract The ensemble Kalman filter is presented, and it is shown how it is applied to update models of fluid flow in oil reservoirs. A set of updated models is produced that fits the available measurements better and will be useful for further decision making.
SPE Reservoir Simulation Symposium | 2005
Alberto Cominelli; Fabrizio Ferdinandi; P.C. De Montleau; Roberto Rossi
Reservoir management is based on the prediction of reservoir performance by means of numerical simulation models. Reliable predictions require that the numerical model mimic the known production history of the field. The numerical model is therefore iteratively modified to match the production data with the simulation. This process is termed history matching (HM). Mathematically, history matching can be seen as an optimisation problem where the target is to minimize an objective function quantifying the misfit between observed and simulated production data. One of the main problems in history matching is the choice of an effective parameterization: a set of reservoir properties that can be plausibly altered to obtain a history-matched model. This issue is known as the parameter identification problem, and its solution usually represents a significant step toward the achievement of a history-matched model. In this paper we propose a practical implementation of a multiscale approach to identify effective parameterizations in real-life HM problems. The approach is based on the availability of gradient simulators, capable of providing the user with derivatives of the objective function with respect to the parameters at hand. Those derivatives can then be used in a multiscale setting to define a sequence of richer and richer parameterisations, improving the match of the production data at each step of the sequence. The methodology was validated on a synthetic case and has been applied to history match the simulation model of a North Sea oil reservoir. The proposed methodology can be considered a practical solution for parameter identification problems in many real cases, at least until sound methodologies, primarily adaptive multiscale parameter estimation, become available in commercial software programs.
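A minimal Python sketch of gradient-driven refinement in the spirit described above: start from a few coarse multiplier regions and repeatedly split the region whose objective-function derivative is largest in magnitude. `misfit_gradient` is a hypothetical stand-in for a gradient simulator; the region layout is illustrative.

```python
# Multiscale enrichment of a parameterization: each step splits the most
# sensitive multiplier region in two, giving a richer parameter set.
import numpy as np

def refine_parameterization(regions, misfit_gradient, n_steps=3):
    """regions: list of cell-index arrays, one per multiplier."""
    for _ in range(n_steps):
        g = misfit_gradient(regions)             # one derivative per region
        worst = int(np.argmax(np.abs(g)))        # most sensitive region
        cells = regions.pop(worst)
        regions += [cells[: len(cells) // 2], cells[len(cells) // 2:]]
        # ... re-run the history match with the enriched parameter set here
    return regions

# Toy run: 100 cells, 2 initial regions, random "gradients" per region.
rng = np.random.default_rng(2)
regions = [np.arange(50), np.arange(50, 100)]
regions = refine_parameterization(regions, lambda r: rng.normal(size=len(r)))
print([len(r) for r in regions])
```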
SPE Annual Technical Conference and Exhibition | 2003
O. Gosselin; S.I. Aanonsen; I. Aavatsmark; Alberto Cominelli; R. Gonard; M. Kolasinski; F. Ferdinandi; L. Kovacic; K. Neylon