On velocity and migration structural uncertainties: A new approach using non-linear slope tomography
Jérémie Messud, Patrice Guillaume, Gilles Lambaré
CGG, Massy
September 3, 2020
Abstract

Evaluating structural uncertainties associated with seismic imaging and target horizons can be of critical importance for decision-making related to oil and gas exploration and production. An important breakthrough for industrial applications has been made with the development of industrial approaches to velocity model building. We propose here an extension of these approaches, using non-linear slope tomography (rather than standard tomographic migration velocity analysis as in previous publications). In addition to the advantages in terms of accuracy and efficiency of the velocity model building (compared to standard tomography), it can be used to assess the quality of standard uncertainty-related assumptions (linearity and Gaussian hypothesis within the Bayesian theory) and to estimate volumetric migration positioning uncertainties (a generalization of horizon uncertainties). We derive and discuss the theoretical concepts underlying this approach and compare our derivations with those of previous publications. A main advantage is that we work directly in the full model space rather than in a preconditioned model space, (1) avoiding biased uncertainty analysis and (2) splitting the analysis into the resolved and unresolved tomography spaces. Another advantage is that, within the Bayesian formalism, we sample an equi-probable contour of the tomography posterior probability density function (pdf) rather than the full pdf, stabilizing the estimation of error bars. These advantages provide robustness to the approach. These concepts are illustrated on two different 3D field datasets. The first one illustrates structural uncertainties on a merge of different seismic surveys in the North Sea. The second one shows the impact of structural uncertainties on gross-rock volume computation.
Introduction
Decision-making and risk mitigation are critical for oil and gas exploration and production (E&P). Relying only on maximum-likelihood (or deterministic, or single-valued) subsurface models can lead to drastic misinterpretations of the risk. Assessing uncertainties related to such maximum-likelihood models is therefore necessary [Simpson et al., 2000], but is a challenging task. Indeed, such single-valued models are built by long and complex processes where various types of information are combined sequentially. Seismic migration is a central step within those processes, providing images of the general structure of the subsurface through a reflectivity model. The positioning uncertainties associated with the structures imaged in the reflectivity, or structural uncertainties, have been studied for decades [Hajnal and Sereda, 1981, Al Chalabi, 1994, Thore et al., 2002], in line with advances made in migration tools and workflows.

The key step affecting migration structural uncertainties is velocity model building (VMB). According to [Fournier et al., 2013], VMB is related to one of the biggest ambiguities impacting E&P. While we have seen over the last decade the development of full-wave VMB approaches [Virieux and Operto, 2009], ray-based reflection tomographic approaches remain an essential workhorse method [Woodward et al., 2008, Guillaume et al., 2013b, Lambaré et al., 2014]. This is due to their inherent characteristics, i.e. efficient numerical implementations, compressed kinematic data (picks) and ability to digest prior information. These advantages are particularly appealing from the perspective of a structural uncertainties analysis in an industrial context. An important contribution in terms of theory, implementation and application has been delivered by the work of [Osypov et al., 2008a, Osypov et al., 2008b, Osypov et al., 2010, Osypov et al., 2011, Osypov et al., 2013].
While there had been earlier investigations in the context of reflection tomography [Duffet and Sinoquet, 2006], to our knowledge [Osypov et al., 2008b] were the first to implement a tool used in the industry to estimate structural uncertainties associated with ray-based tomography. Their approach is based on the tomography toolbox described by [Woodward et al., 2008]. The uncertainty analysis is performed around the maximum-likelihood tomography model within a Bayesian approach, assuming a linearized modeling and a Gaussian probability density function (pdf). A partial eigen-decomposition of the tomography data operator is performed in a "model preconditioned basis" [Osypov et al., 2008b]. This allows the generation of perturbed tomography models related to a confidence level. Then map (or zero-offset kinematic) migrations of a target horizon within those models give a set of perturbed horizons. These are analyzed statistically to derive estimations of horizon error bars related to a confidence level. [Osypov et al., 2010, Osypov et al., 2011, Osypov et al., 2013] give details on practical aspects (such as calibrating the regularizations) and present applications relating to the assessment of oil reserves or well placement. This work demonstrated the affordability and effectiveness of the corresponding horizon uncertainty analysis for industrial applications.

To continue these efforts, our paper details some recent work on structural uncertainty analysis [Messud et al., 2017a, Messud et al., 2017b, Messud et al., 2018], with major differences compared to previous work. Firstly, while the work of [Osypov et al., 2008a, Osypov et al., 2010, Osypov et al., 2011, Osypov et al., 2013] was based on the classical (linear) tomographic approach described by [Woodward et al., 2008], our work is based on the non-linear slope tomography of [Guillaume et al., 2008, Lambaré et al., 2014].
An important advantage of non-linear tomography is the ability to compute all non-linear updates of the tomography model with only one picking step [Adler et al., 2008, Lambaré et al., 2014], whereas [Woodward et al., 2008] requires a new picking step (thus a new migration) for each iteration (or linear update). Also, non-linear slope tomography has the advantage of belonging to the family of slope tomography, where the model is recovered from picks of locally coherent reflected events in the pre-stack unmigrated domain [Lambaré, 2008]. In brief, the approach can extract in a non-linear way the kinematic information contained in a dense set of local picks. Numerous versions have been proposed covering a large set of configurations, i.e. multi-layer tomography [Guillaume et al., 2013a], dip-constrained tomography [Guillaume et al., 2013a], high-definition tomography [Guillaume et al., 2011] and joint direct and reflected wave tomography [Allemand et al., 2017].

In the context of structural uncertainty analysis, the non-linear approach provides an efficient way to QC the assumptions made within the Bayesian formalism (linearity and Gaussian pdf hypothesis) [Messud et al., 2017a, Reinier et al., 2017]. It also makes it possible to derive volumetric migration positioning uncertainties, a volumetric generalization of the horizon positioning uncertainties that provides uncertainties also between horizons.

Secondly, our approach works directly in the full model space rather than in a preconditioned model space. This makes it possible to (1) avoid biased uncertainty analysis and (2) split the uncertainty analysis into resolved and unresolved tomography spaces.
It also allows us, within the Bayesian formalism, to sample randomly a pdf's equi-probable contour (related to a clear confidence level) rather than the full pdf [Osypov et al., 2013, Duffet and Sinoquet, 2002, Duffet and Sinoquet, 2006], providing more robust error bar estimates [Messud et al., 2017a, Messud et al., 2017b, Messud et al., 2018]. A general definition of error bars for a given confidence level is given (accounting for the non-diagonal part of the covariance matrices, avoiding underestimation). All this provides more generality and robustness to our approach than to previous efforts.

Our approach can easily apply to full-waveform inversion (FWI)-derived models. Indeed, these models usually go through a last tomography pass (FWI "post-processing") in order to obtain flatter common image gathers. The corresponding tomography uncertainty analysis can then naturally be performed to produce an estimate for FWI model kinematic-related uncertainties. This workflow is practical as long as rays can adequately describe the kinematics of FWI-derived models.

Our paper carefully details the associated theoretical developments, providing a unifying framework to compare the approaches, in particular the differences with [Osypov et al., 2008a, Osypov et al., 2010, Osypov et al., 2011, Osypov et al., 2013].

In the last part, we first present an application of the method to a North Sea 3D dataset: we show structural uncertainties in the image domain along selected horizons (positioned through zero-offset map migration) as well as volumetric migration positioning uncertainties associated with the tomography residual move-out (RMO) data (considering also non-zero offsets). We also illustrate the concepts of resolved and unresolved space uncertainties introduced in the theoretical sections. We then present an illustration, on a second 3D field dataset, of the impact of tomographic uncertainties on gross-rock volume (GRV) computation.

Tomography inverse problem
Non-linear slope tomography is a practical and efficient tool for velocity model building [Guillaume et al., 2013b]. Its input (or observed) data d_obs consists of a set of picks or "kinematic invariants", i.e. quantities that belong to the multi-dimensional unmigrated data domain. Invariants are described by source and receiver positions, two-way traveltimes and time-slopes of locally coherent reflected events in the unmigrated domain. The invariants can be kinematically migrated in any (sufficiently smooth) model to deduce corresponding events or picks in the corresponding migrated domain. (The invariants can be measured either directly in the unmigrated time domain or indirectly in a prestack depth-migrated domain associated with a given model. In the latter case, measured quantities are kinematically de-migrated in the corresponding model to deduce the invariants.)

The tomography model m consists of a set of parameters describing smooth velocity and anisotropy layers on a cardinal cubic B-spline basis [Operto et al., 2003], separated by horizons. We denote by N_M the number of nodes that describe the model (in practice 500,000 to 50 million). The model is updated through a non-linear optimization scheme that aims to minimize, among others, the residual move-out, well misties, etc. The concept of invariants allows the implementation of an iterative non-linear inversion scheme where each linearized update consists of (1) a non-linear modeling (by finite-offset kinematic migration of the invariants) and a computation of tomography operator derivatives, and (2) a resolution of an inverse problem to deduce the update. After each linearized update, the data are modeled again non-linearly (by finite-offset kinematic migration) before a new inversion starts again. A simultaneous inversion of the model for all layers avoids the downward propagation of errors of a conventional top-down approach, and a multi-scale technique avoids getting trapped in a local minimum [Guillaume et al., 2013b]. Non-linear slope tomography is a flexible tool that drastically reduces the overall number of passes involving picking stages. It also makes it easier to fine-tune the regularizations and progressively reduce their level to let the data speak more.

The inversion scheme outputs the maximum-likelihood tomography model m^*, i.e. the model that best fits the data under some prior constraints [Tarantola, 1986, Tarantola, 2005], and that satisfies

m^* = arg min_m (1/2) || C_D^{-1/2} (d_obs - d(m)) ||^2 + (1/2) || C_M^{-1/2} (m - m_prior) ||^2 .   (1)

Here d(m) denotes the data modeled from a model m. C_D is the covariance matrix in the data space; it accounts for data (tomography picks) and modeling uncertainties. C_M is the "prior" covariance matrix in the model space, associated with a prior model m_prior; it helps regularization and accounts for uncertainties on the prior model. Non-zero off-diagonal elements of a covariance matrix C_i mean that the corresponding nodes are correlated. The diagonal elements of a covariance matrix C_i are called variances, and the square roots of the variances are called standard deviations. In the following, bold lowercase quantities represent vectors and bold uppercase quantities represent matrices.

Equation 1 is solved by a non-linear local optimization method, updating the model iteratively by

m_{k+1} = m_k + δm_k ,   (2)

where k ∈ [0, n] denotes the iteration number and n the last iteration number. At the last tomography iteration, the obtained m_n is considered to be the maximum-likelihood solution m^*.
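The iterative scheme of equations 1 and 2 can be sketched numerically. The following Python fragment is a toy illustration, not the production implementation: it stacks whitened data rows and prior rows into the least-squares system of equations 3 to 7 below, with all matrix names (`G`, `C_D_isqrt`, `C_R_isqrt`) standing in for the real operators.

```python
import numpy as np

def linearized_update(G, C_D_isqrt, C_R_isqrt, d_obs, d_mod, m, m_prior):
    """One linearized tomography update (sketch of equations 3 to 7).

    G         : modeling Jacobian at the current model (N_D x N_M)
    C_D_isqrt : C_D^{-1/2}, whitening of the data residuals
    C_R_isqrt : C_R^{-1/2}, stacked prior/constraint rows (one block here)
    """
    # Stack whitened data rows and prior rows into A_k (equation 4)
    A = np.vstack([C_D_isqrt @ G, C_R_isqrt])
    # Error vector b_k (equation 7); sign conventions may differ in detail
    b = np.concatenate([C_D_isqrt @ (d_obs - d_mod),
                        C_R_isqrt @ (m_prior - m)])
    # Least-squares solution of min || A dm - b ||^2 (equation 3)
    dm, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dm
```

In the real algorithm, A_k is of course never formed as a dense matrix; iterative solvers only require products with A_k and its transpose.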
In practice, the updates δm_k in equation 2 are computed through a linearization of equation 1 at each iteration, solving

min_{δm_k} || A_k δm_k - b_k ||^2 ,   (3)

where A_k is the Jacobian matrix and b_k the "error vector", containing information on the data and the prior (these terms are defined in the next paragraphs). A_k is defined by

A_k = [ C_D^{-1/2} G_k ; C_R^{-1/2} ] ,   (4)

containing the tomography modeling Jacobian matrix at iteration k,

G_k = ∂d(m)/∂m |_{m = m_k} ,   (5)

and C_R^{-1/2}, which gathers the various contributions to the prior covariance in model space, or constraints:

C_R^{-1/2} = [ Damp ; Other_1 ; ... ; Other_{L-1} ] ,
C_M^{-1} = C_R^{-1/2 †} C_R^{-1/2} = Damp† Damp + Σ_{i=1}^{L-1} Other_i† Other_i ,   (6)

where † denotes transpose. Each of the L prior contributions (Damp and Other_i) is represented by a square matrix of size N_M × N_M. Damp represents a damping, or diagonal, matrix, scaled for each model contribution or subsurface parameter (velocity, various anisotropies, etc.). Other_i represents other possible constraints, for instance a Laplacian one and/or a structural one in our non-linear slope tomography. C_R^{-1/2} is a matrix of size (L × N_M) × N_M. Denoting by N_D the number of data, i.e. picks, A_k is a matrix of size (N_D + L × N_M) × N_M.

The error vector has size (N_D + L × N_M) and is defined by

b_k = [ C_D^{-1/2} (d_obs - d(m_k)) ; C_R^{-1/2} (m_k - m_prior) ] .   (7)

In practice, m_prior is often taken to be the model m_k of the previous iteration, which helps convergence. This implies that, at the final tomography iteration, the prior tends to be equal to the maximum-likelihood model.

We emphasize that, in our implementation, all prior contributions to C_R^{-1/2} are N_M × N_M square matrices. We do not use non-square matrices P of size p × N_M with p ≪ N_M to parameterize constraints through P†P (rank(P†P) ≤ p ≪ N_M). In other words, we do not use low-rank prior constraints that would reduce the dimensionality of the problem, such as steering filters [Clapp et al., 1998]. Our basis is a cardinal cubic B-spline one, which is less constraining than steering filters. This gives more flexibility to "let the data talk" and makes it possible to avoid biased uncertainty analysis.

The physical dimensions of the components of A_k are the inverse of the physical dimensions of the model. Thus, in the multi-parameter case (i.e. velocity with anisotropy), the coefficients of A_k do not have the same physical dimensions. A preconditioning can be used:

A'_k = A_k D ,   (8)

where D is a square matrix that gives the same physical dimension and similar scaling to all coefficients of the matrix A_k D, and that can be used to re-weight A_k to obtain better inversion results. D is symmetric, D† = D, and should be invertible so that it does not increase the null space of the problem. A common preconditioning is to choose D diagonal and put the inverse of the L2 norm of each column of A_k on the diagonal.
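A minimal sketch of this column-norm preconditioning, with a random matrix standing in for A_k and deterministic per-column scales mimicking parameters of different physical units:

```python
import numpy as np

def column_norm_precondition(A):
    """Build the diagonal preconditioner D of equation 8: the inverse L2 norm
    of each column of A on the diagonal, so that every column of A' = A D has
    unit norm (a common, dimensionless scaling for all parameters)."""
    D = np.diag(1.0 / np.linalg.norm(A, axis=0))
    return A @ D, D

rng = np.random.default_rng(1)
# Columns with very different scales (e.g. velocity vs anisotropy parameters)
A = rng.normal(size=(50, 8)) * np.logspace(-2, 2, 8)
A_prec, D = column_norm_precondition(A)
# A solution dm' obtained in the preconditioned domain maps back as dm = D dm'
```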
We then solve, instead of equation 3,

min_{δm'_k} || A'_k δm'_k - b_k ||^2 ,   (9)

and recover δm_k by

δm_k = D δm'_k .   (10)

The solution of equation 9 is usually computed at each iteration using an approximation of δm'_k = A'_k^{-g} b_k, where A'_k^{-g} = lim_{α→0+} (A'_k† A'_k + α I_{N_M})^{-1} A'_k† is the generalized (or Moore-Penrose pseudo-) inverse, and A'_k† A'_k is called the Hessian matrix in the preconditioned domain. In our implementation, the LSQR algorithm is used to approximate the effect of A'_k^{-g} [Paige and Saunders, 1982, Choi, 2006].

LSQR and other iterative algorithms share a close similarity with performing a partial singular value decomposition (SVD) of A'_k or a partial eigenvalue decomposition (EVD) of the Hessian A'_k† A'_k. LSQR needs some adaptations if we want to recover precisely the eigenvectors and eigenvalues [Zhang and Thurber, 2007]. Suppose that we perform p ≤ N_M iterations of SVD or EVD, each of them computing a new singular value λ_i^{1/2} and its corresponding left and right singular vectors u_i and v_i. We construct the matrices U_p = [u_1, ..., u_p] and V_p = [v_1, ..., v_p], and the p × p diagonal matrix Λ_p^{1/2} that contains the singular values on its diagonal.
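The partial SVD and the rank-p operators built from it (equations 11 to 15) can be illustrated with `scipy.sparse.linalg.svds`; a random tall matrix stands in here for the real preconditioned operator A'_n:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(2)
N_M, p = 40, 10
A = rng.normal(size=(80, N_M))          # stand-in for A'_n

# p leading singular triplets: A ~ U_p diag(s_p) V_p^T
U_p, s_p, Vt_p = svds(A, k=p)
V_p = Vt_p.T

# Rank-p approximations of the generalized inverse and of the inverse
# Hessian (equations 11 and 12, with the Tikhonov level eps = 0, T_p = I_p)
A_ginv = V_p @ np.diag(1.0 / s_p) @ U_p.T
H_inv = V_p @ np.diag(1.0 / s_p**2) @ V_p.T

# Resolved-space and effective-null-space projectors (equations 14 and 15)
R_p = V_p @ V_p.T
Pi_null = np.eye(N_M) - R_p
```

Note that `svds` computes only the k requested singular triplets, which is what makes this affordable for large N_M.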
The partial SVD of A'_k gives the best possible rank-p approximation of A'_k:

A'_k ≈ U_p Λ_p^{1/2} V_p†  ⇒  A'_k^{-g} ≈ V_p T_p Λ_p^{-1/2} U_p† ,   (11)

and the partial EVD of the Hessian is given by

A'_k† A'_k ≈ V_p Λ_p V_p†  ⇒  (A'_k† A'_k)^{-g} ≈ A'_k^{-g} A'_k^{†-g} ≈ V_p T_p Λ_p^{-1} V_p† ,   (12)

where

T_p = [ Λ_p + ε I_p ]^{-1} Λ_p   (13)

is the Tikhonov regularization operator that stabilizes the inversion result in the event of very small components of Λ_p [Zhang and McMechan, 1995, Zhang and Thurber, 2007]. Note that this regularization is equivalent to including a "damping" level ε in a basis where all model components have the same units. In our case, we take ε = 0 in T_p, as we already introduced a damping in our prior contributions, equation 6, that satisfies approximately D^{-†} Damp† Damp D^{-1} ≈ ε I_{N_M} in the preconditioned domain.

The N_M - p eigenvectors not resolved by the iterative algorithm define the tomography "effective null space", a cause of multiple equivalent effective solutions (depending among others on the initial model m_0) and thus of uncertainty. The effective null space projector is [Muñoz and Rath, 2006]

Π_p^null = I_{N_M} - V_p V_p† ,   (14)

and the tomography "resolved" (or partial-EVD-spanned) space projector is [Muñoz and Rath, 2006]

R_p = V_p V_p† .   (15)

Bayesian formalism for tomography model uncertainties and migration structural uncertainties
At the last tomography iteration (k = n), the obtained m_n is considered to be the maximum-likelihood solution. The result is uncertain because the tomography input data, modeling, and constraints contain uncertainties. As a basis for uncertainty considerations, we will use the Bayesian formalism. This gives a clear definition of uncertainties in terms of physics plus a confidence level, or probability P that the true model belongs to a region of the model space [Cowan, 1998]. This is important for reservoir risk analysis.

We consider, within Bayesian theory, the "Gaussian posterior pdf in the model space" corresponding to equation 1. It is defined up to a proportionality constant by [Tarantola, 2005]

ρ̃_M(m) ∝ exp[ -(1/2) (d(m) - d_obs)† C_D^{-1} (d(m) - d_obs) - (1/2) (m - m_prior)† C_M^{-1} (m - m_prior) ] .   (16)

(Footnote: U_p has size (N_D + L × N_M) × p and V_p has size N_M × p. Note that U_p and V_p are not unitary: they satisfy V_p† V_p = U_p† U_p = I_p, but V_p V_p† ≠ I_{N_M} for p < N_M, and U_p U_p† ≠ I_{N_D + L × N_M}. Also, R_p Π_p^null = Π_p^null R_p = 0 and R_p + Π_p^null = I_{N_M}.)

We take m_prior = m_n in equation 16 (i.e. m_prior is chosen to be the previous iteration result, in agreement with most methods used to solve inverse problems, cf. section "Tomography inverse problem"). Consider the first-order (linear) approximation

d(m) ≈ G_n (m - m_n) + d(m_n) ,   (17)

that holds in some region around m_n (the weaker the non-linearity in d(m), the larger the region). Then, equation 16 can be rewritten as

ρ̃_M(m) ∝ exp[ -(1/2) (m - m_n)† C̃_M^{-1} (m - m_n) ] ,   (18)

where C̃_M denotes the "posterior" covariance matrix in the model space, defined through

C̃_M^{-1} = G_n† C_D^{-1} G_n + C_M^{-1} = H_n .   (19)

Its inverse, H_n, is the posterior Hessian matrix. ρ̃_M(m) gives information on the confidence region associated with a confidence level P (equal to the integral of ρ̃_M(m) over the confidence region). The maximum-likelihood model m_n does not represent the true model, but the most probable one according to the set of data and priors. Many other probabilistically pertinent models (or "admissible" model perturbations) exist. ρ̃_M(m) allows a characterization of these models in terms of confidence levels, thus representing a key to extracting uncertainty information with a clear meaning.

Using the notations of section "Tomography inverse problem", the posterior inverse covariance matrix, equation 19, can also be computed through

C̃_M^{-1} = A_n† A_n = D^{-1} A'_n† A'_n D^{-1} .   (20)

The EVD of C̃_M, also defined through the SVD of A'_n, contains uncertainty information. Indeed, the principal axes of the Gaussian posterior pdf, equation 18, are given by the eigenvectors of C̃_M and the deformations of the pdf by the eigenvalues of C̃_M. The "more extended" or poorly resolved directions correspond to larger eigenvalues of C̃_M (i.e. smaller eigenvalues of C̃_M^{-1}).

We are interested in computing migration structural uncertainties on target reflectors, i.e. migration uncertainties related to the kinematic part of the Kirchhoff operator. Tomography model uncertainties represent a main contributor to migration kinematic uncertainties. We can therefore first generate admissible tomography model perturbations, i.e. perturbed models from equations 18 and 20. Then, migrations using each perturbed model can be performed and analyzed.
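A toy numerical sketch of this pipeline (equations 19 and 18, followed by perturbation generation): build the posterior Hessian, invert it, and draw admissible model perturbations whose sample covariance reproduces C̃_M. All sizes and covariances here are illustrative assumptions, not field values.

```python
import numpy as np

rng = np.random.default_rng(3)
N_D, N_M = 30, 6
G_n = rng.normal(size=(N_D, N_M))        # stand-in modeling Jacobian
C_D_inv = np.eye(N_D)                    # assumed unit data covariance
C_M_inv = 0.1 * np.eye(N_M)              # assumed weak diagonal prior

# Posterior inverse covariance / Hessian (equation 19)
H_n = G_n.T @ C_D_inv @ G_n + C_M_inv
C_post = np.linalg.inv(H_n)

# Admissible perturbations of the maximum-likelihood model: dm ~ N(0, C_post)
B = np.linalg.cholesky(C_post)           # any factor with B B^T = C_post
dm = B @ rng.standard_normal(size=(N_M, 100_000))
# Each column m_n + dm[:, i] is a probabilistically pertinent model that can
# be fed to (kinematic) migration for structural uncertainty analysis.
```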
A study of the results would allow us to estimate migration kinematic uncertainty-related quantities.

Full Kirchhoff migrations may be considered, but it is difficult to separate structural uncertainties from other uncertainties in migrated images, such as those related to amplitudes. [Li et al., 2014] propose to use the Euclidean and Procrustes distances to somewhat perform such a separation. Here, we are interested mostly in the structural uncertainties related to target reflectors, a crucial component of migration uncertainties. We consider a kinematic approximation of the Kirchhoff operator, called kinematic migration [Duffet and Sinoquet, 2006, Guillaume et al., 2008]. Here h represents the result of the kinematic migration of an event (depth and slope of a reflector, etc.). We have

h = k(m) ,   (21)

where k(m) is the kinematic migration operator for a given datum (reflection event), non-linear with respect to the tomography velocity m. We compute the maximum-likelihood position of a target reflector h_n related to the maximum-likelihood tomography model m_n:

h_n = k(m_n) .   (22)

Let us consider a linearization of k(m) around m_n:

(h - h_n) ≈ K† (m - m_n) ,  K† = ∂k(m)/∂m |_{m = m_n} ,   (23)

where K represents the linearized approximation (or Jacobian matrix) of the kinematic migration operator. The migration structural posterior covariance matrix related to h_n is then defined through

C̃_H^{-1} = K† C̃_M^{-1} K ,  (h - h_n)† C̃_H^{-1} (h - h_n) ≈ (m - m_n)† C̃_M^{-1} (m - m_n) .   (24)

Using notation similar to equation 16, C̃_H defines the structural migration posterior pdf ρ̃_H(h) in a similar way to equation 18, and contains information on migration structural uncertainties (related to the tomography model uncertainties). Map migrations (i.e. zero-offset kinematic migrations) of target reflectors are most often used in practice [Duffet and Sinoquet, 2006, Osypov et al., 2008b, Osypov et al., 2013, Messud et al., 2017b, Messud et al., 2017a]. Once a set of perturbed models {m} that follow equation 18 is generated (and possibly some spurious models removed), map migrations of target reflectors may be performed. This will give perturbed horizons {h} of target reflectors that will be related to the migration structural posterior pdf according to equation 24. Some statistical analysis allows us to deduce structural uncertainty quantities related to a target reflector and a given confidence level P [Duffet and Sinoquet, 2006, Osypov et al., 2008b, Osypov et al., 2013]. Note that, specific to our non-linear slope tomography, full kinematic migrations (using all offsets) of all the tomography data (invariants) can be considered. This makes it possible to deduce positioning uncertainty quantities in the whole migrated volume, not only at target reflector positions, which is an advantage over zero-offset computations.

Of course, the obtained uncertainties should be interpreted in the light of the tomographic data that have been used to compute them (for instance, if faults are not inverted by the tomography, uncertainties on faults cannot be handled by this method). Also, the method is valid if the linearization, equation 23, holds. But its range of validity should approximately be the same as that of the linearization which allowed us to define the tomography C̃_M, equation 17, which is coherent. This will be further discussed below.

The main question now is: how to generate perturbed models from equation 18, together with a confidence level P? We first review previous work in a unifying framework, then highlight open questions, and finally present our contribution.

Sampling a normal distribution to generate a set of perturbed tomography models
Suppose we can find a matrix E that satisfies

C̃_M^{-1} ≈ E† E  ⇒  C̃_M ≈ B B†  where  B = E^{-g} .   (25)

Then, the posterior pdf, equation 18, can be rewritten

ρ̃_M(m) ∝ exp[ -(1/2) δr† δr ]  where  δr = E Δm .   (26)

Δm represents perturbations of the maximum-likelihood model:

Δm = m - m_n .   (27)

Equation 26 implies that the covariance matrix associated with the vector δr is the identity I, i.e. that the δr coordinates are not correlated and all have a variance of 1. One method of generating tomography model perturbations Δm that follow the Gaussian distribution 18 is to draw δr from a normal distribution N(0, I) and to compute

Δm = B δr ,   (28)

where B is a matrix that contains the posterior covariance information and scaling. Once a set of perturbed models {Δm + m_n} is generated (and possibly some spurious models discarded), migration structural or kinematic uncertainty quantities can be deduced using the method presented above. We now discuss possible choices for B and the corresponding dimensionality. The next three sections describe existing methods and their limitations, and underline questions still to be clarified. Then, the rest of the article presents our method.

Cholesky decomposition and partial SVD-based methods
Suppose we have computed C̃_M^{-1} = A_n† A_n. [Duffet and Sinoquet, 2006] propose to perform a Cholesky decomposition of C̃_M^{-1}. This implies finding a lower-triangular square matrix E of size N_M × N_M that satisfies (exactly) equation 25, and computing (E being invertible as C̃_M^{-1} is)

B = E^{-1} ,   (29)

to generate model perturbations using equation 28. B is a matrix of size N_M × N_M and δr is a vector of size N_M drawn from N(0, I_{N_M}). This scheme is costly and may lose accuracy if A_n is ill-conditioned [Zhang and Thurber, 2007].

Another method is to use a partial SVD of A'_n, equation 11. With equations 20 and 25, we deduce

E = Λ_p^{1/2} V_p† D^{-1}  ⇒  B = E^{-g} = D V_p T_p^{1/2} Λ_p^{-1/2} ,   (30)

where Tikhonov regularization has been added for the generalized inverse computation. B is a matrix of size N_M × p and δr a vector of size p, denoted by δ_p to make the size explicit and drawn from N(0, I_p). This scheme reduces the space of model perturbations, usually to the degrees of freedom that can be resolved by the tomography, which is numerically advantageous but implies an SVD-based low-rank approximation. (We did not find applications of this scheme in the literature.)

Prior-based low-rank decomposition method
This section details the formalism of [Osypov et al., 2013, Osypov et al., 2011, Osypov et al., 2008a] and the specific form obtained for B. Let us consider prior contributions to the prior covariance matrix in equation 1 that are not represented by square matrices of size N_M × N_M but by a non-square matrix P of size N_M × p with p < N_M:

C_M → P P† .   (31)

As p ≪ N_M is chosen in practice, low-rank prior constraints are considered, unlike in section "Tomography inverse problem" and equation 6. The C_M obtained in equation 31 is not strictly invertible, i.e. not strictly speaking a covariance matrix (even if its generalized inverse can be defined). P can for instance be a steering filter that contains information on the structures, allowing us to reduce the dimensionality of the problem (p ≪ N_M) [Clapp et al., 1998]. P is called a preconditioner in [Osypov et al., 2013, Osypov et al., 2011, Osypov et al., 2008a], but it has a very different role from our preconditioner D introduced in section "Tomography inverse problem". We thus here call it a "model preconditioner".

Gathering many constraints within this formalism is done in the model preconditioned basis (note that C_R^{1/2} is considered, contrary to equation 6 which considers C_R^{-1/2}):

C_R^{1/2} = [ P Q_1 , ... , P Q_L ] ,
C_M = C_R^{1/2} C_R^{1/2 †} = P ( Σ_{i=1}^{L} Q_i† Q_i ) P† ,   (32)

where Q_i is a p × p matrix that can for instance contain the coupling between the various model contributions or subsurface parameters in the anisotropic case [Osypov et al., 2013, Osypov et al., 2011, Osypov et al., 2008a]. To lighten the notations, we do not consider such additional constraints in the following.

We have, using the generalized inverse,

C̃_M ≈ [ G_n† C_D^{-1} G_n + (P P†)^{-g} ]^{-g} = P [ (G_n P)† C_D^{-1} (G_n P) + I_p ]^{-1} P† ,   (33)

where (G_n P)† C_D^{-1} (G_n P) + I_p is a p × p matrix that is invertible because it is positive definite (I_p kills the null eigenvalues).
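The derivation below relies on the binomial inverse theorem recalled in Appendix A, which for an orthonormal V_q gives [V_q Λ_q V_q† + I_p]^{-1} = I_p - V_q V_q† + V_q (Λ_q + I_q)^{-1} V_q†. A quick numerical check of this identity, and of its square-root factor as used in equation 37 (toy sizes, with P omitted, i.e. P = I_p):

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 12, 4
V_q, _ = np.linalg.qr(rng.normal(size=(p, q)))   # orthonormal columns
lam = rng.uniform(1.0, 5.0, size=q)              # positive eigenvalues

M = V_q @ np.diag(lam) @ V_q.T + np.eye(p)
# Binomial inverse theorem: closed form for M^{-1}
M_inv = np.eye(p) - V_q @ V_q.T + V_q @ np.diag(1.0 / (lam + 1.0)) @ V_q.T
# Square-root factor (equation 37 with P = I_p): B B^T = M^{-1}
B = np.eye(p) - V_q @ V_q.T + V_q @ np.diag(1.0 / np.sqrt(lam + 1.0)) @ V_q.T
```

The cross terms vanish because (I_p - V_q V_q†) V_q = 0, which is what makes both closed forms exact.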
Now, let us perform the following partial EVD with q ≤ p iterations (no additional preconditioning is needed, as the matrices are dimensionless and the problem is well conditioned in the model preconditioned domain):

(G_n P)† C_D^{-1} (G_n P) ≈ V_q Λ_q V_q† .   (34)

We deduce

C̃_M ≈ P [ V_q Λ_q V_q† + I_p ]^{-1} P† .   (35)

The binomial inverse theorem recalled in Appendix A leads to

C̃_M ≈ P [ I_p - V_q V_q† + V_q (Λ_q + I_q)^{-1} V_q† ] P† .   (36)

Using equation 25, we finally obtain [Osypov et al., 2013, Osypov et al., 2011, Osypov et al., 2008a]

B = P [ I_p - V_q V_q† + V_q (Λ_q + I_q)^{-1/2} V_q† ] ,   (37)

allowing us to generate perturbations in the model space by equation 28. As δr here is a vector of size p ≪ N_M, we denote it by δ_p to make the size explicit; δ_p must be drawn from N(0, I_p). B is a matrix of size N_M × p that can be split into

B = B_q^res + B_{q,p}^un-res ,
B_q^res = P V_q (Λ_q + I_q)^{-1/2} V_q† ,
B_{q,p}^un-res = P [ I_p - V_q V_q† ] ,
Δm = B δ_p  where  δ_p ∼ N(0, I_p) ,   (38)

where two terms appear:

• B_q^res deals with the uncertainty information contained in the posterior covariance matrix and resolved by the partial EVD. Indeed, V_q (Λ_q + I_q)^{-1/2} V_q† represents the partial EVD of C̃_M^{1/2} in the model preconditioned domain.

• B_{q,p}^un-res contains a projector on an effective null space of dimension p - q, as q EVD iterations have been used to approximate a p × p matrix. So, only the effective null space of the EVD in the model preconditioned domain (of size p) can be explored by this method. This is a very limited part of the full null space of the tomography operator (because q ≤ p ≪ N_M), producing only smooth perturbations. The advantage is that it allows us to explore some part of the effective null space of the EVD while keeping geological structures in the model perturbations (the perturbations are projected on P).

Questions
The methods described in the previous two sections have limitations and raise questions. In particular, the following points still need to be clarified:

• A QC of the validity of the Gaussian and linearized hypotheses. This is fundamental to check whether the uncertainties obtained by our computations are pertinent. We propose a new sampling method that, together with non-linear slope tomography, will address this need, together with computational efficiency.

• Separation and exploration of the full effective null space of the tomography operator are not addressed by the previous methods, but this would certainly give interesting complementary uncertainty information (mainly relating to illumination issues, which are a major source of uncertainty). A simple way to explore a part of the tomography null space is to perform various tomographies with different priors and initial models, but this would be very costly numerically. In the following, we will propose a method within the linearized approximation, using the effective null space projector.

• A definition of error bars for the 68.3% confidence level, which accounts for the non-diagonal part of the covariance matrices (not only for the variances), will be given. This gives a more precise error bar evaluation (accounting only for the variances underestimates the errors).

• Are the computed uncertainties quantitative or qualitative? We will extensively discuss this point.
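Throughout, the 68.3% level is the probability mass within one standard deviation of a one-dimensional Gaussian. A quick stdlib sanity check of this number, and of the corresponding pdf level a = e^{-1/2} used in Appendix B (a toy verification, not part of the tomography workflow):

```python
import math

# P(|x| <= sigma) for a 1-D Gaussian: erf(1/sqrt(2)) ~ 0.6827, the "68.3%" level
P_1sigma = math.erf(1.0 / math.sqrt(2.0))

# pdf level at +/- 1 sigma relative to the maximum: exp(-1/2) ~ 0.61
# (the 'a' value appearing in Appendix B)
a = math.exp(-0.5)
print(round(P_1sigma, 4), round(a, 4))   # 0.6827 0.6065
```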
Our first new contribution: Sampling a Gaussian equi-probable contour to generate a set of perturbed tomography models
Let us return to the tomography posterior Gaussian pdf, equation 18, and propose a different sampling method that will reduce the sampled space and thus optimize the exploration. We consider an equi-probable contour

\Delta m^\dagger \tilde{C}_M^{-1} \Delta m = Q_{N_M}(P),  (39)

where Q_{N_M}(P) is the quantile of order P of the Chi-square distribution [Cowan, 1998] (this distribution is, among others, used in confidence interval estimations related to Gaussian random variables).

Resolving (or sampling) equation 39 for a given P value gives the set of maximum perturbations (or the boundary of the confidence region) associated with a confidence level P; Figure 1 gives an illustration. The probability that the true model m lies within the N_M-dimensional hyper-ellipsoid of center m_n defined by equation 39 is equal to P. Restricting the sampled space to an equi-probable contour does not hamper the assessment of the uncertainties compared to the sampling of the full pdf, because information on the full Gaussian is contained in one contour and the corresponding P value. Indeed, all hyper-ellipsoids defined by equation 39 are related by a simple proportionality constant.

Figure 1: Equi-probable contour of a posterior Gaussian pdf (N_M = 2) (red) and Δm samples (triangles).

In the following, we consider a confidence level P = 68.3%, the value that corresponds to the classical one standard-deviation interval when N_M = 1 or when model parameters are non-correlated and their single pdfs are considered independently, see Appendix B. For more generality, we define uncertainties through the tomography confidence region related to a probability P = 68.3%:

\delta r^\dagger \delta r = Q_{N_M}(68.3\%), \quad \tilde{C}_M \approx B B^\dagger \;\Rightarrow\; \Delta m = B \delta r,  (40)

where δr is a vector of size N_M drawn from a uniform distribution and rescaled to have norm \| \delta r \| = \sqrt{Q_{N_M}(68.3\%)}. B will be an N_M × N_M matrix, as we do not use low-rank prior constraints that reduce the dimensionality of the problem.
This gives more flexibility and avoids curbing the uncertainty analysis a priori.

Generating model perturbations from an equi-probable contour sampling has the advantage of reducing the sampled space to its most representative components, optimizing the computation compared to the previously described methods, where the full pdf had to be sampled. This allows us to obtain stable error bars with 200-500 random models, without introducing any preconditioning that would reduce the dimensionality of the model space, as described further.

Let us now specify a problem concerning error bars. Why not simply define error bars by \pm\sqrt{\mathrm{diag}(\tilde{C}_M)}, where \mathrm{diag}(\tilde{C}_M) denotes the vector containing the diagonal elements of \tilde{C}_M? This is sometimes used as a first tomography model uncertainty indicator [Osypov et al., 2013], but it has pathologies. When \tilde{C}_M is diagonal, a subset of the solutions of equation 39 with P = 68.3% is

\Delta m = \pm \sqrt{\frac{Q_{N_M}(68.3\%)}{N_M}} \sqrt{\mathrm{diag}(\tilde{C}_M)},  (41)

defining a confidence interval. So, firstly, using equation 41 would be better than using \pm\sqrt{\mathrm{diag}(\tilde{C}_M)} from a confidence level point of view (indeed, \pm\sqrt{\mathrm{diag}(\tilde{C}_M)} is not associated with a constant confidence level P = 68.3% as N_M varies). However, equation 41 is still too restrictive, since:

• Even in the diagonal case, the full solution of equation 39 with P = 68.3% defines a larger (hyper-ellipsoidal) confidence region, encompassing the confidence interval defined by equation 41. So, equation 41 underestimates uncertainties.

• In the general case, \mathrm{diag}(\tilde{C}_M) is not sufficient to generate a set of admissible tomography models. Indeed, non-diagonal elements of \tilde{C}_M can have a strong contribution, as they describe correlations between model space nodes, which are crucial in tomography (because of the smoothness, structural conformity, etc. constraints on the model) [Duffet and Sinoquet, 2006]. Appendix B gives more formal details.

So, equation 40 must be resolved to represent the full solution of equation 39. It remains to define the B matrix and how to compute exhaustive error bars from the equi-probable contour sampling.

Our second new contribution: Resolved and total space uncertainties
We consider an EVD of \tilde{C}_M^{-1} in the preconditioned domain (related to an SVD of A'_n in the spirit of section “Tomography inverse problem...”):

D^\dagger \tilde{C}_M^{-1} D = V_p \Lambda_p V_p^\dagger + V_{N_M-p} \Lambda_{N_M-p} V_{N_M-p}^\dagger
\quad \approx V_p \Lambda_p V_p^\dagger + \epsilon I_{N_M} \quad (\Lambda_{N_M-p} \rightarrow \epsilon I_{N_M-p}, \; \epsilon \text{ being the noise level}),  (42)

The second line shares similarity with a damping and is related to the noise-contaminated effective null space of the tomography. Equation 42 is a partial EVD of \tilde{C}_M^{-1} in the preconditioned domain (all model components or parameters have the same units in this domain), stopped after p iterations when the eigenvalues have reached a fixed prior level ε [Zhang and McMechan, 1995], and approximates the effect of the N_M − p non-computed eigenvectors by \epsilon I_{N_M}. In our applications, the damping level ε is fixed a priori and is selected sufficiently small so as not to affect the relevant information in the tomography operator, i.e. to be representative of the tomography noise level. The partial EVD is stopped when the eigenvalues have reached the damping level (another method would be to define the optimum p by eigenvalue decay and then deduce the corresponding ε; because of our implementation in the preconditioned domain we do not need this). Using the binomial inverse theorem, Appendix A, we obtain

\tilde{C}_M \approx D \big[ V_p \Lambda_p V_p^\dagger + \epsilon I_{N_M} \big]^{-1} D^\dagger = D \big[ V_p (\Lambda_p + \epsilon I_p)^{-1} V_p^\dagger + \tfrac{1}{\epsilon} \Pi_p^{null} \big] D^\dagger,  (43)

where \Pi_p^{null} is the effective null space (dimension N_M − p) projector, see section “Tomography inverse problem...”.
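The damped inversion of equation 43 is exactly the binomial inverse theorem of Appendix A at work. A small numpy check of the identity, with a synthetic rank-deficient H standing in for the undamped D^† \tilde{C}_M^{-1} D (toy sizes and noise level, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
N_M, N_D, eps = 40, 15, 1e-3       # N_D < N_M: rank-deficient toy operator
G = rng.standard_normal((N_D, N_M))
H = G.T @ G                         # toy undamped D^T C~^{-1} D (rank N_D)

lam, V = np.linalg.eigh(H)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
p = int((lam > eps).sum())          # eigenvalues above the noise level eps
V_p, lam_p = V[:, :p], lam[:p]
Pi_null = np.eye(N_M) - V_p @ V_p.T    # effective null-space projector

# equation 43: [V_p Lam_p V_p^T + eps I]^{-1} = V_p (Lam_p + eps I_p)^{-1} V_p^T + Pi/eps
lhs = np.linalg.inv(V_p @ np.diag(lam_p) @ V_p.T + eps * np.eye(N_M))
rhs = V_p @ np.diag(1.0 / (lam_p + eps)) @ V_p.T + Pi_null / eps
print(np.allclose(lhs, rhs))
```

Note how the null-space block of the inverse is amplified by 1/ε, which is why the unresolved-space perturbations introduced next dominate the total perturbation magnitude.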
Using equation 40, we can compute

B = D \big[ V_p (\Lambda_p + \epsilon I_p)^{-1/2} V_p^\dagger + \tfrac{1}{\sqrt{\epsilon}} \Pi_p^{null} \big],  (44)

which can be split into

B = B_p^{resolved} V_p^\dagger + B_p^{un\text{-}resolved\%N_M}  (45)
B_p^{resolved} = D V_p (\Lambda_p + \epsilon I_p)^{-1/2}
B_p^{un\text{-}resolved\%N_M} = \tfrac{1}{\sqrt{\epsilon}} D \Pi_p^{null}
\Delta m = B \delta r, \quad \text{where } \delta r \text{ is drawn from a uniform distribution and rescaled to have norm } \sqrt{Q_{N_M}(68.3\%)}.

Our method allows us to separate the following two uncertainty contributions:

• B_p^{resolved} deals with the uncertainty information contained in the posterior covariance matrix. It drives the contribution to Δm of the eigenvectors with eigenvalues above the prior damping level ε, which span the so-called tomography “resolved” space (of dimension p). As those eigenvectors are greatly constrained by the tomography input data, so are the related perturbations

\Delta m^{resolved} = B_p^{resolved} \delta p, \quad \text{where } \delta p = V_p^\dagger \delta r \text{ is a vector of size } p.  (46)

The perturbations tend to be structurally consistent and smooth (because tomography resolves the large wavelengths of the velocity model), as illustrated in Figure 2. They can be explored independently.

• B_p^{un\text{-}resolved\%N_M} contains the projector on the tomography full effective null space (of dimension N_M − p). It drives the contribution to the uncertainties of the eigenvectors with eigenvalues below ε, which span the tomography “unresolved” space. This space represents the full effective null space of the tomography, constrained only by the regularizations and not by the input data. The related perturbations

\Delta m^{un\text{-}resolved} = B_p^{un\text{-}resolved\%N_M} \delta r,  (47)

are obviously much less structurally consistent, but are nevertheless interesting. They give exhaustive information focused on what tomography cannot resolve, mainly related to illumination issues, which is an important source of uncertainty. One originality of the method presented here is to give access to this contribution as well.
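A toy end-to-end sketch of this decomposition, taking D = I_{N_M} and the large-N_M approximation Q_{N_M}(68.3%) ≈ N_M for simplicity (all operators and sizes are synthetic stand-ins), including the iso-cost property of the contour samples that is exploited later as a QC:

```python
import numpy as np

rng = np.random.default_rng(2)
N_M, N_D, eps = 50, 20, 1e-3
G = rng.standard_normal((N_D, N_M))
H = G.T @ G                               # toy undamped preconditioned Hessian (D = I)

lam, V = np.linalg.eigh(H)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
p = int((lam > eps).sum())
V_p, lam_p = V[:, :p], lam[:p]
Pi_null = np.eye(N_M) - V_p @ V_p.T

B_res = V_p @ np.diag(1.0 / np.sqrt(lam_p + eps))   # B_p^resolved (equation 45)
B = B_res @ V_p.T + Pi_null / np.sqrt(eps)          # full B (equation 44)

Q = float(N_M)                            # Q_{N_M}(68.3%) ~ N_M for large N_M
d = rng.standard_normal((300, N_M))       # 300 contour samples delta_r (as rows)
d *= np.sqrt(Q) / np.linalg.norm(d, axis=1, keepdims=True)

dm_res = d @ V_p @ B_res.T                # resolved contributions (equation 46)
dm = d @ B.T                              # total perturbations
dm_unres = dm - dm_res                    # unresolved contributions (equation 48)

# contour samples are iso-cost: 0.5 dm^T C~^{-1} dm = Q/2 for every sample
C_inv = V_p @ np.diag(lam_p + eps) @ V_p.T + eps * Pi_null
costs = 0.5 * np.einsum('ij,jk,ik->i', dm, C_inv, dm)
print(np.allclose(costs, Q / 2))
```

With ε small, the 1/√ε factor makes the unresolved contribution dominate the total perturbation, consistent with the behavior described for Figure 2.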
Note that an explicit orthonormalization of the eigenvectors (like a Gram-Schmidt procedure) can be numerically important [Zhang and Thurber, 2007], especially if we deal with the null space, for an accurate computation of \Pi_p^{null}.

“Total” perturbations are given by the sum of both contributions,

\Delta m = \Delta m^{resolved} + \Delta m^{un\text{-}resolved},  (48)

with an example in Figure 2. The resolved space perturbation looks more organized, smoother and more correlated to the structures and the tomography final model than the total perturbation. The total perturbation looks more random and of higher frequency, mainly because the tomography unresolved space is large and thus a dominant contributor to the total perturbation (i.e. the magnitude of \Delta m^{un\text{-}resolved} is larger than the magnitude of \Delta m^{resolved}, approximately four times in this example). As will be illustrated further, this gives complementary information on the migration structural uncertainties, i.e. on the independent study of the \Delta m^{resolved} contribution.

Figure 2: Left: Maximum-likelihood velocity (Vp) model m_n. Perturbations Δm displayed on sections and one horizon; Middle: Total space; Right: Resolved space. From [Messud et al., 2018].

Using the previous notations and equation 40, we have

\delta r = R_p \delta r + \Pi_p^{null} \delta r \;\Rightarrow\; \delta r^\dagger \delta r = \delta r^\dagger R_p \delta r + \delta r^\dagger \Pi_p^{null} \delta r = \delta p^\dagger \delta p + \delta r^\dagger \Pi_p^{null} \delta r = Q_{N_M}(68.3\%),  (49)

where R_p = V_p V_p^\dagger denotes the resolved-space projector and the vector δp of dimension p is related to the projection of δr on the resolved subspace, equation 46. In the large N_M case (like in tomography), where Q_{N_M}(68.3\%) \approx N_M, one may think that equation 49 implies \delta p^\dagger \delta p = p and \delta r^\dagger \Pi_p^{null} \delta r = N_M - p, which would slightly simplify the problem. But this is not the case, as the amplitudes of the perturbations in the resolved and unresolved subspaces are not fully independent from the total uncertainty point of view. Even if this may be true on average, an advantage of considering random perturbations δr of dimension N_M and projecting them on V_p^\dagger to compute the p-dimensional perturbations \delta p = V_p^\dagger \delta r is that the latter will account for the structures (geology, etc.) in the eigenvectors.

Despite the formal similarities between our decomposition, equation 45, and the decomposition of section “Prior based low-rank decomposition...”, equation 38, the content is different.

• In equation 38, B is an N_M × p matrix and δp a vector of size p, whereas in our case B is an N_M × N_M matrix and δr a vector of size N_M.

• In equation 38, the tomography constraints are contained in the model preconditioner P (a steering filter) through equation 31, whereas in our case the tomography constraints are contained in the eigenvectors and the illumination information in the preconditioner D. This gives more flexibility to “let the data talk” and reduces the bias in the uncertainty analysis.

• In equation 38, there is no contribution similar to B_p^{un\text{-}resolved\%N_M}, which describes the tomography full effective null space (of dimension N_M − p). The B_q^{un\text{-}res\%p} of equation 38 describes the effective null space (of dimension p − q) of the EVD in the model preconditioned domain, which is a very limited part of the full effective null space of the tomography operator (because q ≤ p << N_M). If we consider that the p value in equation 45 is approximately the same as in equation 38, we can deduce a similarity between the two decompositions at the resolved space level:

B_p^{resolved} \delta p \approx (B_q^{res} + B_q^{un\text{-}res\%p}) \delta p,  (50)

where our δp is constrained to be equal to V_p^\dagger \delta r. The introduction of this constraint on δp and of the B_p^{un\text{-}resolved\%N_M} \delta r term thus distinguishes our method.
It allows us to split the uncertainty analysis into resolved and unresolved tomography spaces, which contain complementary information as illustrated further.

Our third new contribution: Exhaustive error bars

Uncertainty attributes can be computed statistically using the obtained set of perturbations, for both the resolved space, using {Δm^resolved}, and the total space, using {Δm}:

• Tomography model 68.3% error bars on m_n (velocity and anisotropy models): computed by considering the maximum possible variations of the model perturbations (i here represents the model grid coordinates):

\sigma(m_n)_i = \max \{ |\Delta m_i| \}.  (51)

The true model belongs to m_n \pm \sigma(m_n) with a probability P ≥ 68.3% [Messud et al., 2017a]. [Reinier et al., 2017] give an illustration of such error bars for the velocity and the total space.

• Maximum horizon perturbations within the 68.3% confidence region: kinematic or map migrations can be performed with each model perturbation to obtain a set of equi-probable horizon perturbations {Δh} around the maximum-likelihood horizon position h_n. Note that those perturbations must not be interpreted as a migration pdf sampling, but as maximum possible perturbations within the 68.3% confidence region.

• Horizon position 68.3% error bars: a depth error bar can be defined as the maximum possible depth variation of the horizon perturbations (i here represents the horizon coordinates):

\sigma(h_n)_i = \max \{ |\Delta h_i| \}.  (52)

Horizons move vertically and laterally for each map migration; \sigma(h_n)_i considers the depth “envelope” of all migrated horizons and thus accounts for lateral displacements of migrated points. This gives 68.3% confidence level error bars: the true horizon depth position belongs to h_n \pm \sigma(h_n) with a probability P ≥ 68.3% [Messud et al., 2017a]. Lateral (x- and y-direction) horizon error bars can be computed using the same principle, from differences of position between rays traced in m_n and rays traced in the perturbed model.

Note that our error bars (\sigma(m_n) and \sigma(h_n)) should not be computed from standard deviations of the perturbations but from a maximum, as we sample a pdf equi-probable contour. They account for the non-diagonal elements of the tomography and migration posterior covariances, equations 19 and 24. Thus, they contain exhaustive uncertainty information, contrary to the diagonal elements of \tilde{C}_M and \tilde{C}_H (i.e. the variances). They can therefore be considered as “generalized standard deviations” and can be computed for the resolved and total spaces.

We emphasize again that the pdf equi-probable contour sampling has the advantage of reducing the sampled space to its most representative components, optimizing the error bar computation. We obtain stable error bars with 200-500 random models, without introducing any preconditioning that would reduce the dimensionality of the model space.

Figure 3: The maximum-likelihood prestack depth migration velocity model overlain with a subset of 20 (among several hundred) random migrated horizons from the perturbed models. From [Messud et al., 2017b].

Our fourth new contribution: Migration volumetric positioning error bars and QC of the Gaussian hypothesis (specific to non-linear slope tomography)
The use of a non-linear approach based on kinematic invariants, i.e. non-linear slope tomography [Guillaume et al., 2013b], also provides unique advantages:

• Firstly, as discussed in section “Bayesian formalism...”, full kinematic migrations (using all offsets) of the tomography data (invariant picks) can be performed on each model perturbation. Using the same “maximum” principle as in section “Exhaustive error bars...” allows us to deduce migration kinematic uncertainties in the whole migrated volume, not only at target reflector positions. This output is specific to our non-linear slope tomography and is called migration volumetric 68.3% error bars in the following.

• Non-linear slope tomography also provides an efficient way to assess the quality of the randomly generated model perturbations. Indeed, the cost functions related to the perturbations can be estimated automatically and non-linearly, allowing the consideration of some non-linear aspects of the problem. Combined with our posterior pdf equi-probable contour sampling, this allows us to QC the validity of the Gaussian hypothesis made in section “Bayesian formalism...”. This represents a strong advantage of our method. Figure 4 shows, for the first 46 perturbations, the non-linearly computed tomography cost functions (equal to equation 39 up to an additive constant in the linear approximation). It is clear that almost iso-cost, i.e. equi-probable, δm were generated. We observe only limited variations around the average cost function value of the perturbed models, meaning that the linear hypothesis assumed in our analysis is appropriate. Interestingly, we see that no spurious perturbations related to too large variations of the cost function were generated. A step discarding spurious perturbations is not necessary with our method (whereas it may be necessary with the method of section “Sampling a normal distribution...”).

Figure 4: Left: Tomography cost function values for the final model (i.e. maximum-likelihood model) and the first 46 generated perturbed models among 500. Right: Cost function values after the velocity model building phases (first tomography pass and final tomography pass), represented with a larger non-linear scale. From [Reinier et al., 2017].

Global proportionality constraint issue and error bar scaling
Many aspects that only slightly affect the maximum-likelihood search (i.e. the minimum of the cost function) may affect the uncertainty computation (i.e. the curvatures of the cost function at the minimum) much more. Often overlooked, those aspects tend to affect uncertainties up to a global proportionality constant:

• Bayesian uncertainty reasoning holds strictly if the diagonals of C_D (quality of the tomography picks) and C_M (various model space constraints), which enter into the \tilde{C}_M computation, represent variances related to a confidence level of 68.3%. In practice, C_D and C_M tend to be defined up to a global proportionality constant, i.e. they are balanced together so that they do not affect the maximum-likelihood solution. But the scaling of the global proportionality constant is not easy and is itself uncertain.

• Data decimation will produce less “illumination” of each model node and therefore will tend to increase the uncertainties. Contrariwise, a larger model discretization step will produce more “illumination” of each model node and thus will tend to decrease the uncertainties. Those effects could theoretically be compensated by fine adaptation of the prior covariances, but this is not easy and basically requires knowledge of a large part of the inversion solution. Reasonable changes in data decimation and model discretization will tend to affect the uncertainties globally and linearly on average, i.e. up to a global proportionality constant.

The combination of those effects will tend to affect uncertainties approximately only up to a global proportionality constant (within equi-probable contour sampling as well as full pdf sampling). The global proportionality constant can be rescaled using posterior information external to the tomography, such as wells, so that all well markers lie within the horizon error bars; see the illustration in Figure 5.
The resolved space error bars should be used for such a matching, as they contain the physical tomography information; after rescaling, they become quantitative. The total error bars will remain qualitative (even after rescaling by the global constant found from the resolved space error bars), as the null space exploration has been thresholded at the noise level ε. However, their hierarchy gives interesting complementary uncertainty information, as illustrated in the next section.

Figure 5: Zoom-in around one well with Base Flett marker (cyan), migrated image section, Base Flett horizon (pink dotted line) and horizon error bars (pink lines). From [Reinier et al., 2017].
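The "maximum over the perturbation set" error bars of equations 51-52, followed by the well-based global rescaling just described, can be sketched as follows. All quantities here (well node indices, mistie values, perturbation statistics) are hypothetical toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
n_models, n_nodes = 300, 500          # a few hundred contour samples, toy grid

# stand-in set {dm} of contour-sampled perturbations of a horizon depth (m)
dms = 10.0 * rng.standard_normal((n_models, n_nodes))

# eq. (51)/(52): per-node error bar = maximum absolute perturbation in the set
sigma = np.abs(dms).max(axis=0)

# toy well calibration: rescale the (resolved space) error bars by one global
# constant so that every well marker lies within the error bars
well_nodes = np.array([40, 250, 480])        # hypothetical well locations
misties = np.array([15.0, -30.0, 22.0])      # hypothetical marker misties (m)
scale = max(1.0, np.max(np.abs(misties) / sigma[well_nodes]))
sigma_calibrated = scale * sigma

print(np.all(np.abs(misties) <= sigma_calibrated[well_nodes]))  # True
```

A single global factor is used, consistent with the observation above that the overlooked effects tend to act on the uncertainties only up to a global proportionality constant.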
Error bars illustrated on 3D field data
Let us illustrate our method on a first North Sea dataset merging four different overlapping narrow-azimuth towed-streamer surveys acquired over the years with different layout configurations. Figure 6a shows a compounded fold map of the surveys, labelled A to D. The arrows indicate the different acquisition directions, and the overlapping parts with higher fold appear clearly. Figure 6b displays the computed total space depth error bars for the top Chalk horizon. One can observe a clear correlation between the illumination map in Figure 6a and the total space depth error bars: the latter are smaller in overlap areas, where the enlarged angle diversity (dip and azimuth) of raypaths improves the resolution of the tomographic operator. On the other hand, lower-fold areas such as the rig zone inside survey C show relatively higher total space error bars, correlated with the poorer angle diversity of raypaths and the reduced illumination; we can also observe larger error bars on poorly illuminated survey edges. So, total space error bars highlight the combined effects of the acquisition fold and of the effective angle diversity, which is in particular sensitive to structural complexities.

Figure 7 compares total and resolved space depth error bars. Again, total space error bars are dominated by the acquisition illumination variations (for instance, the narrow canyon in the middle is incompletely illuminated). The resolved space error bars give complementary information that tends to highlight how illumination diversity drives the discrimination power of the tomographic operator. The zoom in Figure 7 clearly shows larger resolved space depth error bars in steeply dipping parts of the top Chalk horizon located below velocity features in the overburden. One can observe the correlation between the velocity features in the overburden and the spatial distribution of the resolved space depth error bars at top Chalk level.
As also demonstrated in figure 4 of [Messud et al., 2017b], resolved space error bars correlate very well with steeply dipping flanks or faults, and are stronger when the shooting and dipping/fault plane directions are parallel, thus confirming that shooting along the dip direction is better for resolution than shooting along strike.

The hierarchy (or variation) of our error bars is thus related to the whole tomographic information: the changes in illumination, the distribution of angle and azimuth diversity, the spatial distribution of tomographic inversion data and the velocity complexity. The total space error bars are qualitative, cf. section “Global proportionality constraint issue...”, and more centered on the changes in illumination and the distribution of angle and azimuth diversity. Their hierarchy gives complementary uncertainty information compared to resolved space error bars, which are more centered on the spatial distribution of tomographic inversion data and the velocity complexity.

In the examples above, horizon error bars were computed by (zero-offset) map migration in all model perturbations. However, as discussed in section “Migration volumetric positioning error bars...”, the available tomographic picks (invariants) used by the non-linear slope tomography allow computing migration volumetric error bars by full kinematic migration (using all offsets) of the picks in all model realizations. Figure 8 shows an example of migration volumetric error bars for the resolved space, extracted on vertical sections and along a horizon. The error bars exhibit layered and velocity-correlated variations with longer spatial wavelengths, as the full-offset range (not only zero-offset) is considered in the kinematic migrations.
The advantage of the volumetric error bars is that they make it possible to track and understand the buildup of positioning uncertainties in the overburden and in-between horizons.

Figure 6: Top Chalk horizon attributes: (a) Illumination map with the shooting direction of each survey indicated by the direction of the associated arrow and (b) total space horizon depth error bars. From [Messud et al., 2017a].

Figure 7: Horizon depth error bars. Left: Total space. Middle: Resolved space. Right: Vp extracted above the horizon. From [Messud et al., 2018].

Figure 8: Migration volumetric depth error bars (resolved space), displayed on vertical sections and on a horizon.

(a) Left: Spatial variations of horizon depth error bars provide some assessment of the potential areas with higher uncertainty. Right: Display of the desired number of realizations for some key horizons superimposed on velocity model attributes (estimated velocity parameters, error bars).

(b) Examples of reservoir contours for different spill point closure definitions and for different GRV scenarios (minimum case scenario on the left, average case in the center and maximum case on the right for the four-way closure type). For each scenario, the blue isoline represents the four-way closure, the red represents the three-way and the pink the two-way closure.

(c) Left: Probability map of finding closure above the spill point for multiple realizations. Right: One realization of a calibrated top reservoir surface.
Figure 9: Illustrating steps in the GRV computation workflow and intermediate products or QCs. From [Coléou et al., 2019].

Let us now illustrate the integration of structural uncertainties into a downstream gross-rock volume (GRV) calculation workflow on a second seismic dataset. In a conventional stochastic approach, structural uncertainties are known at well locations and inferred elsewhere using variogram models. The presented tomography-based method allows more control between wells and provides a realization-based way of assessing the positioning of reservoir boundaries. Figure 9 illustrates key milestones in the workflow, which breaks down as follows:

• Estimating the tomography maximum-likelihood velocity model and computing target horizon error bars tuned to observed mis-ties at some well locations (Figure 9a),

• For the Top and Base reservoir horizons, describing the channel system of interest, calibrating horizon realizations to well markers (Figure 9b). By doing so, uncertainties between wells are reflected by the spatial variations of horizon depth error bars derived from the tomographic operator.

• Estimating the spill point depending on the closure assumption for various horizon realizations. Figure 9b shows three fault-driven types of closure: a “four-way” closure is made of dip or channel limits, while “three-way” and “two-way” closures add one or two sealed bounding faults.

• Calculating GRV from Base and Top reservoir down to the spill point closure level, with all these elements being affected by the error bars in the migrated domain and along well paths.

• Assessing, for each identified prospect, the closure probability map made from all plausible horizon realizations and defining the chance of finding closure above the measured spill point.
Figure 9c shows the probability map for one prospect, demonstrating the presence of robust structural closures in the same channel system.

This example emphasizes the importance of structural uncertainties for providing error bars between well locations and allowing the generation of corresponding horizon realizations.
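The closure probability assessment in the last workflow step can be sketched as follows. Everything here is a hypothetical toy (a dome-shaped horizon, an assumed error bar level and spill depth); the real workflow uses tomography-derived, well-calibrated realizations:

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, nx, ny = 400, 30, 30            # horizon realizations on a toy map grid

# toy top-reservoir realizations: dome-shaped mean depth + depth error bars
x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
mean_depth = 2000.0 + 300.0 * (x**2 + y**2)       # dome apex near the center (m)
sigma = 20.0                                       # toy depth error bar (m)
realizations = mean_depth + sigma * rng.standard_normal((n_real, ny, nx))

spill_depth = 2150.0                               # toy measured spill point depth
# per-node probability that the horizon lies above (shallower than) the spill point
closure_prob = (realizations < spill_depth).mean(axis=0)
# chance of finding closure: fraction of realizations with any trap above spill
p_closure = np.mean([(r < spill_depth).any() for r in realizations])
```

The per-node map plays the role of the closure probability map of Figure 9c, and the scalar fraction plays the role of the chance of finding closure above the measured spill point.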
Conclusion
The work presented by [Osypov et al., 2008a, Osypov et al., 2010, Osypov et al., 2011, Osypov et al., 2013] represents an important breakthrough for the industrial computation of migration structural uncertainties. This work provided a theoretical and numerical framework and has been applied to various E&P topics. Here, we have proposed an extension of this work.

Firstly, we developed the application in the context of non-linear slope tomography. In addition to the advantages in terms of accuracy and efficiency of the VMB (compared to standard tomography), it provides the possibility to assess the quality of the linear and Gaussian assumptions. It also allows us to compute volumetric migration positioning error bars (using kinematic migration of each locally coherent event, and not only map migration of horizons).

Secondly, we estimated error bars from the statistical analysis of perturbed models obtained from the sampling of an equi-probable contour of the posterior pdf (related to a clear confidence level), not of the full pdf as in [Osypov et al., 2008a]. Thirdly, while Osypov et al. worked in a preconditioned model space (with a smaller dimensionality and where the prior model covariance is the identity matrix), we propose to work in the full model space, avoiding biased uncertainty analysis. We introduced the splitting of the full model space into resolved and unresolved spaces, which allows characterization of both the contributions of the data and regularizations, and of the null space. Fourthly, a general definition of error bars for a given confidence level was given.

Finally, all these concepts were illustrated on two field 3D datasets, where emphasis was placed on the importance of horizon error bars, among others for GRV computation and uncertainty evaluation between well locations.

Our approach can easily be applied to FWI-derived models. Indeed, state-of-the-art workflows involve interleaved FWI and tomography passes, ending with a tomography pass. The corresponding tomography uncertainty analysis can naturally be performed to produce an estimate of FWI model kinematic-related uncertainties.
Acknowledgments
The authors are particularly indebted to Mathieu Reinier, Hervé Prigent, Thierry Coléou, Jean-Luc Formento, Samuel Gray and Sylvain Masclet for collaboration and enlightening discussions. The authors are grateful to CGG for granting permission to publish this work. The authors thank Premier Oil, Edison, Bayerngas, WesternGeco, INEOS, OMV and the CGG Multi-Client & New Ventures Division for their permission to show the data. Figures 2 and 4-9 are courtesy of EAGE and Figure 3 is courtesy of The Leading Edge.
A. The binomial inverse theorem and the Woodbury matrix identity
Consider an invertible (thus square) matrix A of size N × N. Consider a matrix V of size N × K_1, a matrix Λ of size K_1 × K_2, and a matrix W of size K_2 × N. If the square matrix I_{K_1} + \Lambda W A^{-1} V of size K_1 × K_1 is invertible, we have the following identity, called the binomial inverse theorem:

(A + V \Lambda W)^{-1} = A^{-1} - A^{-1} V (I_{K_1} + \Lambda W A^{-1} V)^{-1} \Lambda W A^{-1}.  (53)

If the matrix Λ is invertible (thus square, K_1 = K_2 = K), the previous expression can be reduced to the Woodbury matrix identity:

(A + V \Lambda W)^{-1} = A^{-1} - A^{-1} V (\Lambda^{-1} + W A^{-1} V)^{-1} W A^{-1}.  (54)

Consider A = I_N, Λ square but not necessarily invertible, and W = V^\dagger satisfying V^\dagger V = I_K. If the matrix I_K + Λ is invertible, the binomial inverse theorem gives

(I_N + V \Lambda V^\dagger)^{-1} = I_N - V (I_K + \Lambda)^{-1} \Lambda V^\dagger.  (55)

Using the binomial theorem again under the same conditions, we obtain (I_K + \Lambda)^{-1} \Lambda = I_K - (I_K + \Lambda)^{-1}, and finally the following identity:

(I_N + V \Lambda V^\dagger)^{-1} = I_N - V V^\dagger + V (I_K + \Lambda)^{-1} V^\dagger.  (56)

B. Standard-deviations and error bars
The multi-dimensional Gaussian posterior pdf, equation 18, can be rewritten (we do not consider the normalization factor here, to simplify the notations without loss of generality, so that the maximum of the pdf is always 1):

\tilde{\rho}_M(\Delta m) = \exp\big[ -\tfrac{1}{2} \Delta m^\dagger \tilde{C}_M^{-1} \Delta m \big] = \prod_{i=1}^{N_M} \tilde{\rho}_{M_i}(\Delta m),  (57)

where the posterior pdf for one model space node i is defined by

\tilde{\rho}_{M_i}(\Delta m) = \exp\big[ -\tfrac{1}{2} \tilde{C}^{-1}_{M,ii} \Delta m_i^2 \big] \exp\big[ -\tfrac{1}{2} A_i(\Delta m) \Delta m_i \big], \quad A_i(\Delta m) = \sum_{\substack{j=1 \\ j \neq i}}^{N_M} \tilde{C}^{-1}_{M,ij} \Delta m_j.  (58)

A_i(\Delta m) \Delta m_i may be negative and can be neglected only when the correlations \tilde{C}^{-1}_{M,ij} (j ≠ i) are sufficiently small.

Consider the models related to a given value a ∈ [0, 1] of the un-normalized posterior pdf for one model space node, i.e. ∀i = 1..N_M: \tilde{\rho}_{M_i}(\Delta m) = a. In other terms, a denotes a percentage of the maximum of each \tilde{\rho}_{M_i}. Using equation 57, this leads to

\tilde{\rho}_M(\Delta m) = a^{N_M} \;\Leftrightarrow\; \Delta m^\dagger \tilde{C}_M^{-1} \Delta m = 2 \ln(1/a) \times N_M.  (59)

This equation defines the set of model perturbations Δm related to a chosen a value for the posterior pdfs of each model space node \tilde{\rho}_{M_i}(\Delta m). Equation 59 is equivalent to

\sum_{i=1}^{N_M} \tilde{C}^{-1}_{M,ii} \Delta m_i^2 + \sum_{i=1}^{N_M} A_i(\Delta m) \Delta m_i = 2 \ln(1/a) \times N_M.  (60)

In the general case, one has to resolve the full equation 60 to obtain the solutions \Delta m_i. But if \big| \sum_{i=1}^{N_M} A_i(\Delta m) \Delta m_i \big| << \big| \sum_{i=1}^{N_M} \tilde{C}^{-1}_{M,ii} \Delta m_i^2 \big|, the solutions become

\Delta m_i = \pm \tilde{C}^{1/2}_{M,ii} \sqrt{2 \ln(1/a)},  (61)

i.e. error bars are then related to the posterior standard deviations \tilde{C}^{1/2}_{M,ii}. Error bars become equal to \pm \tilde{C}^{1/2}_{M,ii} if we select a = e^{-1/2} \approx 0.61, which is related to a confidence probability P(a) = 68.3% (the probability of finding Δm inside the corresponding hyper-ellipsoid).

References

[Adler et al., 2008] Adler, F., Baina, R., Soudani, M. A., Cardon, P., and Richard, J.-B. (2008). Nonlinear 3D tomographic least-squares inversion of residual moveout in Kirchhoff prestack-depth-migration common image gathers. Geophysics, 73(5):VE2–VE13.

[Al Chalabi, 1994] Al Chalabi, M. (1994). Seismic velocities, a critique. First Break, 12(12):589–596.

[Allemand et al., 2017] Allemand, T., Sedova, A., and Hermant, O. (2017). Flattening common image gathers after full-waveform inversion: the challenge of anisotropy estimation. , pages 1410–1415.

[Choi, 2006] Choi, S.-C. (2006).
Iterative methods for singular linear equations and least-squares problems . PhD thesis, Stanford University.24Clapp et al., 1998] Clapp, R. G., Biondi, B. L., Fomel, S., and Claerbout, J. F. (1998).Regularizing velocity estimation using geologic dip information.
SEG, Expanded Ab-stracts .[Col´eou et al., 2019] Col´eou, T., Formento, J.-L., Prigent, H., Messud, J., Laurencin, D.,Reinier, M., Guillaume, P., Egreteau, A., and Damian, L. (2019). Use of tomographyvelocity uncertainty in grv calculation. .[Cowan, 1998] Cowan, G. (1998).
Statistical Data Analysis . Oxford Science Publications.[Duffet and Sinoquet, 2002] Duffet, C. and Sinoquet, D. (2002). Quantifying geologicaluncertainties in velocity model building. ,pages 926–929.[Duffet and Sinoquet, 2006] Duffet, C. and Sinoquet, D. (2006). Quantifying uncertaintieson the solution model of seismic tomography.
Inverse Problems , 22(5):525–538.[Fournier et al., 2013] Fournier, A., Ivanova, N., Yang, Y., Osypov, K., Yarman, C.,Nichols, D., Bachrach, R., You, Y., Woodward, M., and Centanni, S. (2013). Quan-tifying e&p value of geophysical information using seismic uncertainty analysis. .[Guillaume et al., 2008] Guillaume, P., Lambar´e, G., Leblanc, O., Mitouard, P., Le Moigne,J., Montel, J., Prescott, A., Siliqi, R., Vidal, N., Zhang, X., and Zimine, S. (2008).Kinematic invariants: an efficient and flexible approach for velocity model building. , pages 3687–3692.[Guillaume et al., 2011] Guillaume, P., Lambar´e, G., Sioni, S., Carotti, D., D´epr´e, P., Cu-lianez, G., Montel, J.-P., Mitouard, P., Depagne, S., Frehers, S., and Vosberg, H. (2011).Geologically consistent velocities obtained by high definition tomography. , pages 4061–4065.[Guillaume et al., 2013a] Guillaume, P., Reinier, M., Lambar´e, G., Cavali´e, A., Adamsen,M., and Bruun, B. (2013a). Dip constrained non-linear slope tomography - an applicationto shallow channel characterization. , (Tu 02 09).[Guillaume et al., 2013b] Guillaume, P., Zhang, X., Prescott, A., Lambar´e, G., Reinier, M.,Montel, J.-P., and Cavali´e, A. (2013b). Multi-layer non-linear slope tomography. , (Th 04 01).[Hajnal and Sereda, 1981] Hajnal, Z. and Sereda, I. T. (1981). Maximum uncertainty ofinterval velocity estimates.
Geophysics , 46(11):15431547.[Lambar´e, 2008] Lambar´e, G. (2008). Stereotomography.
Geophysics , 73(5):VE25VE34.[Lambar´e et al., 2014] Lambar´e, G., Guillaume, P., and Montel, J.-P. (2014). Recent ad-vances in ray-based tomography. , (We G103 01).[Li et al., 2014] Li, L., Caers, J., and Sava, P. (2014). Uncertainty maps for seismic imagesthrough geostatistical model randomization.
SEG, Expanded Abstracts , pages 1496–1500.[Messud et al., 2017a] Messud, J., Guillaume, P., and Lambar´e, G. (2017a). Estimatingstructural uncertainties in seismic images using equi-probable tomographic model. , (Th B4 09).25Messud et al., 2018] Messud, J., Guillaume, P., Reinier, M., and Hidalgo, C. (2018). Mi-gration confidence analysis: Resolved space uncertainties. , (Th A12 09).[Messud et al., 2017b] Messud, J., Reinier, M., Prigent, H., Guillaume, P., Col´eou, T., andMasclet, S. (February 2017b). Extracting seismic uncertainties from tomographic velocityinversion and their use in reservoir risk analysis.
The Leading Edge, SEG , 36(2):127–132.[Mun˜oz and Rath, 2006] Mun˜oz, G. and Rath, V. (2006). Beyond smooth inversion: theuse of nullspace projection for the exploration of non-uniqueness in mt.
Geophys. J. Int. ,164(2):301–311.[Operto et al., 2003] Operto, S., Lambar´e, G., Podvin, P., and Thierry, P. (2003). 3dray+born migration/inversion-part2: Application to the seg/eage overthrust experiment.
Geophysics , 68(4):1357–1370.[Osypov et al., 2008a] Osypov, K., Nichols, D., Woodward, M., and Yarman, C. E. (2008a).Tomographic velocity model building using iterative eigendecomposition. , (P164).[Osypov et al., 2008b] Osypov, K., Nichols, D., Woodward, M., Zdraveva, O., and Yarman,C. E. (2008b). Uncertainty and resolution analysis for anisotropic tomography usingiterative eigendecomposition.
SEG, Expanded Abstracts , pages 3244–3249.[Osypov et al., 2010] Osypov, K., Nichols, D., Yang, Y., Qiao, F., O’Briain, M., andZdraveva, O. (2010). Quantifying structural uncertainty in anisotropic depth-imaging.gulf of mexico case study. , (CO 31).[Osypov et al., 2011] Osypov, K., O’Briain, M., Whitfield, P., Nichols, D., Douillard, A.,Sexton, P., and Jousselin, P. (2011). Quantifying structural uncertainty in anisotropicmodel building and depth imaging: Hild case study. , (FO 10).[Osypov et al., 2013] Osypov, K., Yang, Y., Fourier, A., Ivanova, N., Bachrach, R.,Yarman, C. E., You, Y., Nichols, D., and Woodward, M. (2013). Model-uncertaintyquantification in seismic tomography: method and applications.
Geophysical Prospect-ing , 61(6):1114–1134.[Paige and Saunders, 1982] Paige, C. C. and Saunders, M. A. (1982). Lsqr: An algorithmfor sparse linear equations and sparse least squares.
ACM Transactions on MathematicalSoftware , 8(2):43–71.[Reinier et al., 2017] Reinier, M., Messud, J., Guillaume, P., and Rebert, T. (2017). Tomo-graphic model uncertainties and their effect on imaged structures. .[Simpson et al., 2000] Simpson, G., Lamb, F., Finch, J., and Dinnie, N. (2000). The appli-cation of probabilistic and qualitative methods to asset management and decision making.
SPE , (59455-MS).[Tarantola, 1986] Tarantola, A. (1986). A strategy for nonlinear elastic inversion of seismicreflection data.
Geophysics , 51(10):18931903.[Tarantola, 2005] Tarantola, A. (2005).
Inverse Problem Theory and Methods for ModelParameters Estimation . Society for Industrial and Applied Mathematics (Philadelphia).26Thore et al., 2002] Thore, P., Shtuka, A., Lecour, M., Ait-Ettajer, T., and Cognot, R.(2002). Structural uncertainties: Determination, management, and applications.
Geo-physics , 67(3):840852.[Virieux and Operto, 2009] Virieux, J. and Operto, S. (2009). An overview of full-waveforminversion in exploration geophysics.
Geophysics , 74(6):WCC1WCC26.[Woodward et al., 2008] Woodward, M., Nichols, D., Zdraveva, O., Whitfield, P., andJohns, T. (2008). A decade of tomography.
Geophysics , 73(5):VE5VE11.[Zhang and Thurber, 2007] Zhang, H. and Thurber, C. H. (2007). Estimating the modelresolution matrix for large seismic tomography problems based on lanczos bidiagonaliza-tion with partial reorthogonalization.
Geophys. J. Int. , 170(1):337–345.[Zhang and McMechan, 1995] Zhang, J. and McMechan, G. A. (1995). Estimation of res-olution and covariance for large matrix inversions.
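As a numerical sanity check of the appendices, the sketch below verifies identity 56 (Appendix A) and the error-bar relation of equation 61 (Appendix B) on a small toy example. It is only an illustration: the sizes $N$ and $K$, the diagonal $\Lambda$, the toy posterior covariance matrix and all variable names are chosen for the example and are not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Identity 56: A = I_N, W = V^dagger with V^dagger V = I_K, and Lambda
# square but not necessarily invertible (illustrative sizes N = 6, K = 3).
N, K = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
V = Q[:, :K]                       # orthonormal columns, so V.T @ V = I_K
Lam = np.diag([0.0, 1.5, -0.4])    # singular Lambda; I_K + Lam is invertible

lhs = np.linalg.inv(np.eye(N) + V @ Lam @ V.T)
rhs = np.eye(N) - V @ V.T + V @ np.linalg.inv(np.eye(K) + Lam) @ V.T
print(np.allclose(lhs, rhs))       # True: identity 56 holds

# Equation 61: error bars from a toy posterior covariance whose
# off-diagonal correlations are weak, so (C_M^-1)_ii ~ 1 / C_M_ii.
C_M = np.array([[1.0, 0.05, 0.0],
                [0.05, 2.0, 0.05],
                [0.0, 0.05, 0.5]])
sigma = np.sqrt(np.diag(C_M))              # posterior standard deviations
a = np.exp(-0.5)                           # ~0.607, the "one sigma" choice
scale = np.sqrt(2.0 * np.log(1.0 / a))     # equals 1 for a = exp(-1/2)
error_bars = scale * sigma                 # equation 61
print(np.isclose(scale, 1.0))              # True: error bars equal one sigma
```

Choosing $a = 0.5$ instead gives $\sqrt{2\ln 2} \approx 1.18$, i.e. error bars of about 1.18 posterior standard deviations.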