Eric Verschuur
Delft University of Technology
Publication
Featured research published by Eric Verschuur.
Geophysics | 2010
Bill Dragoset; Eric Verschuur; Ian Moore; Richard Bisley
Surface-related multiple elimination (SRME) is an algorithm that predicts all surface multiples by a convolutional process applied to seismic field data. Only minimal preprocessing is required. Once predicted, the multiples are removed from the data by adaptive subtraction. Unlike other methods of multiple attenuation, SRME does not rely on assumptions or knowledge about the subsurface, nor does it use event properties to discriminate between multiples and primaries. In exchange for this “freedom from the subsurface,” SRME requires knowledge of the acquisition wavelet and a dense spatial distribution of sources and receivers. Although a 2D version of SRME sometimes suffices, most field data sets require 3D SRME for accurate multiple prediction. All implementations of 3D SRME face a serious challenge: The sparse spatial distribution of sources and receivers available in typical seismic field data sets does not conform to the algorithmic requirements. There are several approaches to implementing 3D SRME that address the data sparseness problem. Among those approaches are pre-SRME data interpolation, on-the-fly data interpolation, zero-azimuth SRME, and true-azimuth SRME. Field data examples confirm that (1) multiples predicted using true-azimuth 3D SRME are more accurate than those using zero-azimuth 3D SRME and (2) on-the-fly interpolation produces excellent results.
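To make the two SRME steps concrete, here is a minimal NumPy sketch, not the authors' implementation: prediction as a per-frequency multiplication of the data matrix with itself (a temporal convolution over the surface coordinate), followed by a single-trace least-squares matching filter for the adaptive subtraction. The function names and the fixed-spread, wavelet-deconvolved input are illustrative assumptions.

```python
import numpy as np

def srme_predict(data):
    """Predict surface multiples by 'convolving' the data with itself.
    Sketch only: assumes a fixed-spread geometry with co-located, densely
    sampled sources and receivers (n_src == n_rec) and a deconvolved
    acquisition wavelet, as SRME requires.
    data: ndarray (n_src, n_rec, n_t)."""
    n_t = data.shape[-1]
    D = np.fft.rfft(data, axis=-1)            # time -> frequency
    M = np.empty_like(D)
    for k in range(D.shape[-1]):
        # temporal convolution becomes a matrix product over the surface
        M[..., k] = D[..., k] @ D[..., k]
    return np.fft.irfft(M, n=n_t, axis=-1)

def adaptive_subtract(trace, pred, flen=11):
    """Adaptive subtraction: shape the predicted multiple to the data with
    a short least-squares matching filter, then subtract it."""
    shifts = range(-(flen // 2), flen // 2 + 1)
    # circular shifts via np.roll are acceptable for a sketch
    C = np.column_stack([np.roll(pred, s) for s in shifts])
    f, *_ = np.linalg.lstsq(C, trace, rcond=None)
    return trace - C @ f
```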
SEG Technical Program Expanded Abstracts | 2008
A.J. Guus Berkhout; Gerrit Blacquière; Eric Verschuur
Seismic acquisition surveys are designed such that the time interval between shots is sufficiently large that the tail of the previous source response does not interfere with the next one (zero overlap in time). To economize on survey time and processing effort, the current compromise is to keep the number of shots to some acceptable minimum. The result is that the source domain is poorly sampled. In this paper it is proposed to abandon the condition of non-overlapping shot records. Instead, a plea is made to move to densely sampled and wide-azimuth source distributions with relatively small time intervals between shots (‘blended acquisition’). The underlying rationale is that interpolating missing shot records, i.e., generating data that have not been recorded (aliasing problem), is much harder than separating the data of overlapping shot records (interference problem). In this paper we summarize the principle of blended acquisition and show how to process blended data. Two processing routes can be followed: reconstructing the unblended data (‘deblending’) followed by conventional processing, or directly processing the blended measurements. Both approaches will be described and illustrated with numerical examples. A theoretical framework is presented that enables the design of blended 3D seismic surveys.
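The blending idea itself is simple to state in code. The sketch below, with hypothetical names and sample-based firing times, builds one continuous blended record by summing time-shifted shot records; "pseudo-deblending", the adjoint, cuts the record back at each firing time and leaves exactly the crosstalk that a deblending algorithm must remove.

```python
import numpy as np

def blend(shots, fire_samps, n_total):
    """Sum time-shifted shot records into one continuous blended record.
    shots: (n_shots, n_rec, n_t); fire_samps: firing time of each shot in
    samples (caller ensures fire_samps[i] + n_t <= n_total)."""
    n_shots, n_rec, n_t = shots.shape
    record = np.zeros((n_rec, n_total))
    for i, t0 in enumerate(fire_samps):
        record[:, t0:t0 + n_t] += shots[i]    # overlapping responses add up
    return record

def pseudo_deblend(record, fire_samps, n_t):
    """Adjoint of blending: re-cut the record at each firing time.
    The result still contains interference from neighbouring shots,
    which iterative deblending would suppress."""
    return np.stack([record[:, t0:t0 + n_t] for t0 in fire_samps])
```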
SEG Technical Program Expanded Abstracts | 2002
Panos G. Kelamis; Eric Verschuur; Robert L. Clark; Roy Burnstad
Surface-related and internal multiple elimination schemes, firmly rooted in the acoustic wave equation, have been successfully applied to marine datasets. On land, however, the applicability of this type of technology is rather limited. Using the CFP-based, layer-related internal multiple removal algorithm, we propose two data-driven, practical strategies aimed at the estimation and subsequent attenuation of internal multiples in land data. Specific issues and assumptions related to this type of technology, with emphasis on land data applications, are also considered. The effectiveness of the proposed methodologies is demonstrated with a number of field datasets from the Arabian Peninsula.
SEG Technical Program Expanded Abstracts | 2004
Felix J. Herrmann; Eric Verschuur
Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step appears crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. Therefore, we propose a new domain for separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal localized events with a directional and spatial-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
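A minimal sketch of the thresholding idea follows, using PyWavelets as a stand-in for the curvelet transform (the paper uses curvelets; the masking logic is the same): keep an input coefficient only where the corresponding predicted-multiple coefficient is weak, then invert the transform. The names and the threshold rule are illustrative.

```python
import numpy as np
import pywt  # PyWavelets: stand-in here for the curvelet transform

def threshold_separate(data, pred_mult, wavelet="db4", level=3, frac=0.1):
    """Estimate primaries by masking transform coefficients of the input
    wherever the predicted multiples are strong; input shapes are assumed
    transform-friendly (e.g. powers of two)."""
    cd = pywt.wavedec2(data, wavelet, level=level)
    cm = pywt.wavedec2(pred_mult, wavelet, level=level)

    def keep(d, m):
        # keep data coefficients where the predicted multiple is weak
        return d * (np.abs(m) < frac * (np.abs(m).max() + 1e-12))

    out = [keep(cd[0], cm[0])]
    for d_lvl, m_lvl in zip(cd[1:], cm[1:]):
        out.append(tuple(keep(d, m) for d, m in zip(d_lvl, m_lvl)))
    return pywt.waverec2(out, wavelet)
```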
SEG Technical Program Expanded Abstracts | 1999
Panos G. Kelamis; Eric Verschuur; A. J. Berkhout; Kevin Erickson
Processing techniques aiming at full prestack datuming of seismic data require a detailed knowledge of the velocity-depth model. They are usually based on wave theoretical principles and are successfully employed in marine datasets where both the water depth and velocity are readily available. In land datasets, however, particularly those with a complicated near surface, detailed knowledge of the velocity-depth model is always questionable, and thus the application of conventional datuming methods produces unreliable results. In this paper we employ the concept of Common Focus Point (CFP) technology and develop a novel, full prestack datuming methodology for seismic reflection data. The main advantage of our approach is that it requires no knowledge of the velocity-depth model. Instead, the velocity-depth model is expressed in terms of propagation operators which can be easily updated.
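In matrix form, prestack datuming acts on one frequency slice of the data from both the source and the receiver side. The sketch below is a schematic illustration under that formulation; in the CFP approach the operators F_src and F_rec would be focusing operators estimated and updated from the data themselves, not computed from a velocity-depth model. All names are illustrative.

```python
import numpy as np

def redatum(data_f, F_src, F_rec):
    """Redatum one monochromatic data slice from the surface to a new datum.
    data_f: (n_rec, n_src) data at the surface for one frequency.
    F_rec:  (n_rec, n_datum) operator from datum points to receivers.
    F_src:  (n_src, n_datum) operator from datum points to sources.
    Inverse extrapolation is approximated by the conjugate transpose."""
    return F_rec.conj().T @ data_f @ F_src.conj()
```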
IEEE Signal Processing Magazine | 2012
Hannes Kutscha; Eric Verschuur
In measurements for seismic exploration, the sampling of sources and receivers is usually not adequate for subsequent processing and imaging algorithms. Therefore, reconstruction of the seismic data to obtain aliasing-free, dense, and regularly sampled data is an important preprocessing step. In most reconstruction algorithms, information about the subsurface cannot be utilized, even if it is available. Focal transformation is a way to effectively incorporate prior knowledge of the subsurface in seismic data reconstruction. The basis functions of this transformation are the focal operators. They can be understood as one-way propagation operators from certain effective depth levels to the measurement surface in a prior (approximate) velocity model. A sparseness constraint in the focal domain is used to penalize aliasing noise. By using several depth levels simultaneously, the data can be described with fewer parameters in the transform domain. This results in a better signal-to-noise separation and, therefore, improved reconstruction. The principles are described, and some illustrations on synthetic seismic data demonstrate the virtues of the approach.
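One common way to impose such a sparseness constraint is iterative soft thresholding; the sketch below assumes the focal transform pair is available as a synthesis operator and its adjoint, passed in as functions (in the paper these come from a prior velocity model). All names are illustrative.

```python
import numpy as np

def focal_reconstruct(apply_F, apply_Ft, data, lam=0.01, n_iter=50, step=1.0):
    """Sparsity-promoting inversion in the focal domain (ISTA sketch).
    apply_F : focal-domain coefficients -> data at the surface
    apply_Ft: its adjoint, data -> focal domain
    In practice apply_F would include the acquisition sampling mask, so
    that synthesizing without the mask yields densely reconstructed data;
    lam and step would need tuning."""
    x = apply_Ft(data)
    for _ in range(n_iter):
        r = data - apply_F(x)                 # residual on recorded traces
        x = x + step * apply_Ft(r)            # gradient step
        mag = np.maximum(np.abs(x), 1e-30)
        x = x * np.maximum(1.0 - lam / mag, 0.0)  # complex-safe soft threshold
    return apply_F(x)
```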
Geophysical Prospecting | 2016
Tomohide Ishiyama; Gerrit Blacquière; Eric Verschuur; Wim A. Mulder
Surface waves in seismic data are often dominant in a land or shallow-water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near-surface characterization. However, their complex properties make the surface-wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three-dimensional surface-wave estimation and separation using an iterative closed-loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward-modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface-wave model is parameterized by the frequency-dependent slowness and source properties for each surface-wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.
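The forward-model side of the closed loop can be as compact as the following f-x domain sketch of a single mode, where the frequency-dependent slowness and source term are exactly the parameters the loop updates. Geometrical spreading and mode interference are omitted, and all names are illustrative.

```python
import numpy as np

def model_mode_fx(freqs, offsets, slowness, amp):
    """One surface-wave mode in the f-x domain.
    freqs: (n_f,) Hz; offsets: (n_x,) m;
    slowness, amp: (n_f,) frequency-dependent mode parameters."""
    phase = -2j * np.pi * freqs[:, None] * slowness[:, None] \
            * np.abs(offsets)[None, :]
    return amp[:, None] * np.exp(phase)

def loop_residual(obs_fx, freqs, offsets, slowness, amp):
    """The quantity the closed loop drives down: observed minus modelled."""
    return np.linalg.norm(obs_fx - model_mode_fx(freqs, offsets, slowness, amp))
```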
SEG Technical Program Expanded Abstracts | 1998
K. M. Schalkwijk; Kees Wapenaar; Eric Verschuur
The decomposition procedure for separating multicomponent ocean-bottom data into up- and downgoing P- and S-waves is based on a combination of the pressure, horizontal, and vertical velocity components. This makes the decomposition method harder to apply to field data: differences between the components that are not due to the earth properties (e.g., instrument response, coupling with the sea bottom) have to be compensated for. In addition, the medium parameters just below the sea bottom are needed as input for the decomposition. Without a priori knowledge of at least some of these unknowns, it is a difficult task to arrive at a correct decomposition result. By performing the decomposition in two steps – first decomposition into up- and downgoing wavefields, then decomposition of the up- and downgoing wavefields, respectively, into P- and S-waves – the number of unknowns in each step is reduced. This offers possibilities for performing an elastic decomposition on field data without any a priori knowledge of medium parameters or coupling. An adaptive decomposition scheme, applicable to field data, is presented here.

Introduction
In order to get elastic information about the sub-sea-bottom, ocean-bottom data can be decomposed into up- and downgoing P- and converted S-waves ([1],[4]). The converted S-waves can then be processed separately (e.g., migration, inversion) from the P-waves. There are certain conditions which the decomposed data have to satisfy, namely: for the decomposition just above the bottom, no sub-bottom primaries should be present in the downgoing waves; for the decomposition just below the bottom, no direct source wave and no water-bottom multiples should be present in the upgoing P- and S-waves.
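As a rough illustration of the first of the two steps, here is an acoustic f-k sketch of the up/down split of pressure just above the sea bottom. The paper's scheme is elastic and, crucially, estimates the unknowns adaptively rather than assuming the density and velocity as done here; all names and defaults are illustrative.

```python
import numpy as np

def updown_decompose(P, Vz, rho=1000.0, c=1500.0, dt=0.004, dx=25.0):
    """Split pressure recorded just above the sea bottom into up- and
    downgoing constituents in the f-k domain (acoustic sketch only).
    P, Vz: (n_t, n_x) pressure and vertical particle velocity.
    The sign convention depends on the Fourier convention used."""
    Pf, Vf = np.fft.fft2(P), np.fft.fft2(Vz)
    f = np.fft.fftfreq(P.shape[0], dt)[:, None]
    kx = np.fft.fftfreq(P.shape[1], dx)[None, :]
    fsafe = np.where(f == 0, np.inf, f)
    qz = np.sqrt(1.0 / c**2 - (kx / fsafe) ** 2 + 0j)  # vertical slowness
    scale = rho / np.where(qz == 0, np.inf, qz)
    up = 0.5 * (Pf - scale * Vf)
    down = 0.5 * (Pf + scale * Vf)
    return np.fft.ifft2(up).real, np.fft.ifft2(down).real
```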
SEG Technical Program Expanded Abstracts | 2010
Guus Berkhout; Eric Verschuur
Seismic reflection wavefields can be described by three fundamental steps: downward propagation, reflection, and upward propagation (WRW). Hence, the complexity of the wavefields is caused by the combination of propagation and reflection properties. Many algorithms for preprocessing and imaging rely on some parameterization of the wavefields, using mathematically convenient basis functions such as plane waves, parabolas, wavelets, and curvelets. However, none of these are very adequate for describing complex wavefields without a large number of basis-function parameters. In this paper it is proposed to use a more physically based parameterization in terms of gridpoint responses (GPRs). In our parameter estimation process, the wavefield of each gridpoint source is propagated towards the source and receiver coordinates of the acquisition surface in order to match the measured data. If introduced properly, the propagation operators need only be approximate (the background velocity model) without losing the ability to fully describe the data. An important aspect of this double (‘bi-focal’) transform is the fact that no imaging condition is applied. Furthermore, this parameterization of seismic data can easily be extended to include surface-related multiples as well.
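For one frequency slice, the parameter estimation described above amounts to a linear fit of gridpoint strengths; the sketch below spells that out with illustrative names, using approximate one-way propagators as the paper allows. It is brute-force and small-scale, for illustration only.

```python
import numpy as np

def fit_gridpoint_responses(data_f, W_rec, W_src):
    """Explain one monochromatic data slice as a sum of gridpoint responses.
    data_f: (n_rec, n_src); W_rec: (n_rec, n_grid) and W_src: (n_grid, n_src)
    are approximate one-way propagators (background velocity model) from
    each gridpoint to the surface. Solves, in least squares,
        data_f ~ sum_k r[k] * outer(W_rec[:, k], W_src[k, :])."""
    n_grid = W_rec.shape[1]
    A = np.stack([np.outer(W_rec[:, k], W_src[k, :]).ravel()
                  for k in range(n_grid)], axis=1)
    r, *_ = np.linalg.lstsq(A, data_f.ravel(), rcond=None)
    return r  # one strength per gridpoint: the GPR parameterization
```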
SEG Technical Program Expanded Abstracts | 2006
Riaz Alá; Eric Verschuur
In this paper a strategy for surface and internal multiple removal is demonstrated on land field data. The method is applied to pre-stack data in the CMP-gather domain under the assumption of local lateral invariance of the earth. Improved SNR and regular offset sampling are obtained by forming CMP supergathers from each group of CMP gathers, which allows trace mixing, regularization, and signal enhancement in the NMO-corrected domain. After conditioning the pre-stack gathers and attenuating surface and internal multiples, the velocity picking procedure can be performed with more accuracy, which is crucial for structural imaging.
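The two pre-conditioning ingredients mentioned above, NMO correction and supergather mixing, can be sketched as follows; the constant velocity and nearest-sample interpolation are simplifications, and the names are illustrative.

```python
import numpy as np

def nmo_correct(gather, offsets, v, dt):
    """Flatten reflections with hyperbolic NMO (nearest-sample sketch).
    gather: (n_t, n_x); offsets: (n_x,) m; v: a single velocity standing
    in for a picked velocity function."""
    n_t, _ = gather.shape
    t0 = np.arange(n_t) * dt
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t = np.sqrt(t0**2 + (x / v) ** 2)            # hyperbolic moveout
        idx = np.clip(np.rint(t / dt).astype(int), 0, n_t - 1)
        out[:, j] = gather[idx, j]
    return out

def super_gather(nmo_cmps):
    """Mix a group of neighbouring NMO-corrected CMP gathers trace by
    trace, raising SNR and regularizing the offset sampling."""
    return np.mean(np.stack(nmo_cmps), axis=0)
```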