Tamas Nemeth
Chevron Corporation
Publications
Featured research published by Tamas Nemeth.
Geophysics | 1999
Tamas Nemeth; Chengjun Wu; Gerard T. Schuster
A least-squares migration algorithm is presented that reduces the migration artifacts (i.e., recording footprint noise) arising from incomplete data. Instead of migrating data with the adjoint of the forward modeling operator, the normal equations are inverted by using a preconditioned linear conjugate gradient scheme that employs regularization. The modeling operator is constructed from an asymptotic acoustic integral equation, and its adjoint is the Kirchhoff migration operator. We tested the performance of the least-squares migration on synthetic and field data in the cases of limited recording aperture, coarse sampling, and acquisition gaps in the data. Numerical results show that the least-squares migrated sections are typically more focused than the corresponding Kirchhoff migrated sections, and their reflectivity frequency distributions are closer to that of the true model. Regularization helps attenuate migration artifacts and provides a sharper, better frequency distribution of estimated reflectivity. The least-squares migrated sections can be used to predict the missing data traces and interpolate and extrapolate them according to the governing modeling equations. Several field data examples are presented. A ground-penetrating radar data example demonstrates the suppression of the recording footprint noise due to a limited aperture, a large gap, and an undersampled receiver line. In addition, better fault resolution was achieved after applying least-squares migration to a poststack marine data set. Finally, a reverse vertical seismic profiling example shows that the recording footprint noise due to a coarse receiver interval can be suppressed by least-squares migration.
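For a concrete feel for the approach, the snippet below is a minimal sketch of damped least-squares inversion with missing traces, using a toy 1-D convolution operator in place of the Kirchhoff modeling operator and SciPy's lsqr for the regularized solve. The operator, mask, and damping value are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: regularized least-squares "migration" vs. the plain adjoint
# image, with a toy blurring operator standing in for the modeling operator L.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
n = 200
m_true = np.zeros(n)
m_true[[50, 90, 140]] = [1.0, -0.7, 0.5]          # toy 1-D reflectivity

wavelet = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
def fwd(m): return np.convolve(m, wavelet, mode="same")       # toy modeling
def adj(d): return np.convolve(d, wavelet[::-1], mode="same") # its adjoint

mask = rng.random(n) > 0.3                         # "incomplete data": missing traces
d_obs = mask * (fwd(m_true) + 0.02 * rng.standard_normal(n))

# Observed-data operator includes the acquisition mask.
L_obs = LinearOperator((n, n), matvec=lambda m: mask * fwd(m),
                       rmatvec=lambda d: adj(mask * d))

m_adj = adj(d_obs)                                 # adjoint (migration-style) image
m_ls = lsqr(L_obs, d_obs, damp=0.1, iter_lim=100)[0]  # damped least-squares image
d_pred = fwd(m_ls)                                 # modeled data fills the gaps

print("adjoint data misfit      :", np.linalg.norm(mask * fwd(m_adj) - d_obs))
print("least-squares data misfit:", np.linalg.norm(mask * fwd(m_ls) - d_obs))
```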
Geophysics | 1997
Tamas Nemeth; Egon Nørmark; Fuhao Qin
Variable-size (dynamic) smoothing operator constraints are applied in crosswell traveltime tomography to reconstruct both the smooth- and fine-scale details of the tomogram. In mixed and underdetermined problems, a large number of iterations may be necessary to introduce the slowly varying slowness features into the tomogram. To speed up convergence, the dynamic smoothing operator applies adaptive regularization to the traveltime prediction error function with the help of the model covariance matrix. As a result, the regularization term carries a larger weight in the initial iterations, whereas the prediction error term dominates the final iterations, where the regularization weight is small. In addition, it is shown that adaptive regularization acts by reweighting the adjoint modeling operator (preconditioning) and by providing additional damping. Comparisons of two dynamic smoothing operators, the low-pass filter smoothing and the multigrid technique, with the fixed-size (static) smoothing operators show that the dynamic smoothing operator yields more accurate velocity distributions with greater stability for larger velocity contrasts. Consequently, it is a preferred choice for regularization.
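The toy below sketches the idea of an iteration-dependent regularization weight: a synthetic linearized traveltime system is solved by steepest descent while the weight on a second-difference smoothing term decays, so smoothing dominates the early iterations and the data fit dominates the late ones. The matrices, decay schedule, and step size are illustrative assumptions, not the paper's dynamic smoothing operators.

```python
# Minimal sketch of adaptive (iteration-dependent) regularization in a
# linearized traveltime inversion with a toy tomography matrix G.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_rays = 60, 40
G = rng.random((n_rays, n_cells))                    # toy ray-path lengths
s_true = 0.5 + 0.1 * np.sin(np.linspace(0, 3 * np.pi, n_cells))
t_obs = G @ s_true + 0.01 * rng.standard_normal(n_rays)

D = np.diff(np.eye(n_cells), n=2, axis=0)            # second-difference roughness operator

s = np.full(n_cells, 0.5)                            # starting slowness model
lam, decay, step = 10.0, 0.7, 2e-4
for k in range(200):
    grad = G.T @ (G @ s - t_obs) + lam * (D.T @ (D @ s))
    s -= step * grad                                 # steepest-descent update
    lam *= decay                                     # regularization weight fades out

print("final data misfit:", np.linalg.norm(G @ s - t_obs))
```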
Geophysics | 2000
Tamas Nemeth; Hongchuan Sun; Gerard T. Schuster
A key issue in wavefield separation is to find a domain where the signal and coherent noise are well separated from one another. A new wavefield separation algorithm, called migration filtering, separates data arrivals according to their path of propagation and their actual moveout characteristics. This is accomplished by using forward modeling operators to compute the signal and the coherent noise arrivals. A linearized least‐squares inversion scheme yields model estimates for both components; the predicted signal component is constructed by forward modeling the signal model estimate. Synthetic and field data examples demonstrate that migration filtering improves separation of P-wave reflections and surface waves, P-wave reflections and tube waves, P-wave diffractions, and S-wave diffractions. The main benefits of the migration filtering method compared to conventional filtering methods are better wavefield separation capability, the capability of mixing any two conventional transforms for wavefield sepa...
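As a rough illustration of the underlying idea, the sketch below models a trace as the sum of a "signal" operator and a "coherent noise" operator acting on separate model vectors, estimates both jointly by least squares, and re-models only the signal estimate. The two convolutional operators are stand-ins chosen for illustration, not the migration operators used in the paper.

```python
# Minimal sketch of wavefield separation by joint least-squares inversion:
# d = Ls @ ms + Ln @ mn, solve for [ms; mn], then re-model the signal alone.
import numpy as np

def conv_matrix(wavelet, n):
    """Dense 'same'-length convolution matrix for a given wavelet."""
    cols = [np.convolve(np.eye(n)[:, j], wavelet, mode="same") for j in range(n)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(2)
n = 150
w_sig = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)     # short "signal" wavelet
w_noi = np.sin(np.linspace(0, 4 * np.pi, 41))             # ringy "coherent noise" wavelet

Ls, Ln = conv_matrix(w_sig, n), conv_matrix(w_noi, n)
ms_true = np.zeros(n); ms_true[[40, 100]] = [1.0, -0.8]
mn_true = np.zeros(n); mn_true[70] = 0.6
d = Ls @ ms_true + Ln @ mn_true + 0.01 * rng.standard_normal(n)

A = np.hstack([Ls, Ln])                                   # joint modeling operator
m_hat = np.linalg.lstsq(A, d, rcond=None)[0]              # [ms; mn] estimate
d_signal = Ls @ m_hat[:n]                                 # predicted signal component

# How clean the split is depends on how distinct the two operators' responses are.
print("data misfit after joint fit:", np.linalg.norm(A @ m_hat - d))
print("signal reconstruction error:", np.linalg.norm(d_signal - Ls @ ms_true))
```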
Geophysics | 2007
Kenneth P. Bube; Tamas Nemeth
Linear systems of equations arise in traveltime tomography, deconvolution, and many other geophysical applications. Nonlinear problems are often solved by successive linearization, leading to a sequence of linear systems. Overdetermined linear systems are solved by minimizing some measure of the size of the misfit or residual. The most commonly used measure is the l2 norm (squared), leading to least squares problems. The advantage of least squares problems for linear systems is that they can be solved by methods (for example, QR factorization) that retain the linear behavior of the problem. The disadvantage of least squares solutions is that the solution is sensitive to outliers. More robust norms, approximating the l1 norm, can be used to reduce the sensitivity to outliers. Unfortunately, these more robust norms lead to nonlinear minimization problems, even for linear systems, and many efficient algorithms for nonlinear minimization problems require line searches. One iterative method for solving linear ...
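One widely used way to approximate an l1-type misfit while reusing least-squares machinery is iteratively reweighted least squares (IRLS); the sketch below shows that generic scheme on a toy system with outliers. It is not presented as the paper's algorithm, and the weighting and threshold are illustrative choices.

```python
# Minimal IRLS sketch: approximate an l1 misfit for a linear system by
# solving a sequence of weighted least-squares problems.
import numpy as np

rng = np.random.default_rng(3)
m_rows, n_cols = 100, 20
A = rng.standard_normal((m_rows, n_cols))
x_true = rng.standard_normal(n_cols)
b = A @ x_true
b[::10] += 5.0 * rng.standard_normal(m_rows // 10)    # sparse large outliers

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]           # plain l2 solution
x, eps = x_l2.copy(), 1e-3
for _ in range(20):
    r = A @ x - b
    w = 1.0 / np.sqrt(np.abs(r) + eps)                # approximate l1 reweighting
    x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]

print("l2-only error:", np.linalg.norm(x_l2 - x_true))
print("IRLS error   :", np.linalg.norm(x - x_true))
```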
SEG Technical Program Expanded Abstracts | 2008
Rob Dimond; Oliver Pell; Tamas Nemeth; Wei Liu; Joe Stefani; Ray Ergas
Hardware accelerators as co-processors are emerging as a powerful solution to computationally intensive problems. A standard desktop PC or cluster node can be augmented with additional hardware dedicated to providing substantially increased performance for particular applications. Previous efforts have shown that FPGA-based hardware accelerators can offer order-of-magnitude greater performance than conventional CPUs, providing the target algorithm performs a large number of operations per data point. FPGAs are off-the-shelf chips with a configurable ‘sea’ of logic and memory that can be used to implement digital circuits. FPGAs can be attached to the compute system either through the main system bus or as PCI Express cards (or similar) and are typically configured as highly parallel stream processors. FPGA acceleration has been successfully demonstrated in a variety of application domains including computational finance (Zhang et al., 2005), fluid dynamics (Sano et al., 2007), cryptography (Cheung et al., 2005) and seismic processing (Bean and Gray, 1997; He et al., 2005a; He et al., 2005b; Pell and Clapp, 2007).
SEG Technical Program Expanded Abstracts | 2009
Wei Liu; Tamas Nemeth; Alexander Loddoch; Joseph P. Stefani; Ray Ergas; Ling Zhuo; Bill Volz; Oliver Pell; James Huggett
Co-processors offer attractive acceleration opportunities to waveform-based imaging and inversion applications in challenging exploration and production environments. Unlike in seismic forward modeling, the large amount of data involved in seismic imaging and inversion can pose a significant challenge to scalable acceleration. We provide and compare several computational schemes to perform anisotropic reverse-time migration on two co-processor platforms: FPGAs and GPUs. Our experiments so far indicate that both platforms can potentially achieve high speedups using acceleration-friendly schemes which minimize interruptions to computation from data movement and storage.
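The sketch below illustrates, in plain Python threads rather than FPGA or GPU code, the general double-buffering idea behind such acceleration-friendly schemes: prefetch the next data block while the current one is being processed so the compute path is not stalled by data movement. The block source, compute kernel, and sizes are illustrative assumptions.

```python
# Minimal double-buffering sketch: a loader thread stands in for
# disk/host-to-device transfer while the main loop keeps computing.
import threading, queue
import numpy as np

def load_blocks(n_blocks, out_q):
    """Producer: stands in for data transfer of successive blocks."""
    for _ in range(n_blocks):
        out_q.put(np.random.rand(256, 256).astype(np.float32))
    out_q.put(None)                           # end-of-stream marker

def process(block):
    """Stand-in compute kernel (e.g., one step of a wavefield update)."""
    return np.fft.fft2(block).real.sum()

blocks = queue.Queue(maxsize=2)               # two slots: transfer overlaps compute
loader = threading.Thread(target=load_blocks, args=(8, blocks))
loader.start()

total = 0.0
while (block := blocks.get()) is not None:
    total += process(block)                   # compute while the loader refills the queue
loader.join()
print("processed sum:", total)
```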
Geophysics | 2006
Yue Wang; Tamas Nemeth; Robert T. Langan
We present a procedure that solves the eikonal equation for general anisotropic media. It allows one to incorporate arbitrary shapes and types of anisotropic formations. We use an expanding-wavefront scheme and explicit tracking of group-velocity propagation directions to choose the causal update stencils for computing traveltime. The method is first-order accurate and unconditionally stable. The relative traveltime errors are controlled mainly by initial wavefront formation and by the choice of the update stencils. We illustrate the method’s ability to generate smoothly changing qP traveltimes in models of arbitrary anisotropy.
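For orientation, the sketch below implements a first-order Godunov upwind eikonal update with fast sweeping on a 2-D grid. It is an isotropic simplification for illustration only; the paper's expanding-wavefront scheme and group-velocity-based causal stencils for general anisotropy are not reproduced here.

```python
# Minimal isotropic eikonal sketch: first-order Godunov updates with
# Gauss-Seidel fast sweeping over four grid orderings.
import numpy as np

def eikonal_2d(slowness, h, src, n_sweeps=4):
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue                      # no finite upwind neighbor yet
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:               # causal one-sided update
                        t_new = min(a, b) + f
                    else:                             # two-sided (quadratic) update
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous example: unit slowness, 10 m grid spacing, source at the center.
T = eikonal_2d(np.ones((51, 51)), h=10.0, src=(25, 25))
print(T[0, 0])   # exact continuum traveltime to the corner is 25*sqrt(2)*10 ~ 353.6
```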
Geophysics | 2002
Joongmoo Byun; James W. Rector; Tamas Nemeth
Vertical seismic profiling/common depth point (VSP‐CDP) mapping is often preferred to crosswell migration when imaging crosswell seismic reflection data. The principal advantage of VSP‐CDP mapping is that it can be configured as a one‐to‐one operation between data in the acquisition domain and data in the image domain and therefore does not smear coherent noise such as tube waves, guided waves, and converted waves as crosswell migration could. However, unlike crosswell migration, VSP‐CDP mapping cannot collapse diffractions; therefore, the lateral resolution of reflection events suffers. We present a migration algorithm that is applied to the crosswell data after they have been mapped. By performing crosswell migration in two distinct steps, mapping followed by diffraction stacking, noise events can be identified and filtered in the mapped domain without smearing effects commonly associated with conventional crosswell migration operators. Tests on noise‐free synthetic crosswell data indicate that the two‐ste...
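To illustrate the diffraction-stacking step in isolation, the sketch below performs a constant-velocity diffraction stack over synthetic zero-offset data containing a single point diffractor. The surface-style geometry, constant velocity, and spike data are illustrative assumptions rather than the crosswell configuration of the paper.

```python
# Minimal constant-velocity diffraction-stack sketch: sum each image point's
# amplitudes along its diffraction hyperbola in the data.
import numpy as np

v, dt, nt = 2000.0, 0.001, 1000                  # velocity (m/s), sample rate (s), samples
xs = np.arange(0.0, 1000.0, 20.0)                # coincident source/receiver positions (m)
xd, zd = 500.0, 300.0                            # point diffractor location (m)

# Synthetic zero-offset data: a spike on each trace at the two-way diffraction time.
data = np.zeros((xs.size, nt))
for k, x in enumerate(xs):
    t = 2.0 * np.hypot(x - xd, zd) / v
    data[k, int(round(t / dt))] = 1.0

# Diffraction stack over a grid of image points.
xi = np.arange(0.0, 1000.0, 20.0)
zi = np.arange(50.0, 600.0, 10.0)
image = np.zeros((zi.size, xi.size))
for iz, z in enumerate(zi):
    for ix, x0 in enumerate(xi):
        t = 2.0 * np.hypot(xs - x0, z) / v       # two-way times to every trace
        it = np.round(t / dt).astype(int)
        ok = it < nt
        image[iz, ix] = data[np.arange(xs.size)[ok], it[ok]].sum()

pz, px = np.unravel_index(image.argmax(), image.shape)
print("image peak near (x, z) =", xi[px], zi[pz])   # should be close to (500, 300)
```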
SEG Technical Program Expanded Abstracts | 2004
Kenneth P. Bube; Jonathan Kane; Tamas Nemeth; Don Medwede; Oleg Mikhailov
Errors in the velocity field used to migrate seismic data are a leading cause of errors in the positioning of structural events in the processing of seismic data: uncertainty in the velocity field leads to structural uncertainty. In this paper, we investigate the broader question of how errors in stacking velocity, time to an event in a stacked section, and the slope of an event in a time section lead to errors in the positioning of structural events for an isotropic medium. We perform a sensitivity analysis, obtaining simple formulas for the errors in structure that are first-order in the errors in stacking velocity, zero-offset time, and slope. These formulas are geometrically explicit: if we make a small change in stacking velocity (or time or slope), we then know the direction and magnitude of the resulting change to each point on the selected event. Being the result of sensitivity analysis, these formulas are linear. Thus, if we had a probability distribution for the errors in velocity (i.e., we knew the uncertainty in velocity), we could use these formulas to obtain a probability distribution for the errors in position for points on the selected event (i.e., the uncertainty in structure). Our analysis focuses on the neighborhood of a single point on an event and assumes a homogeneous velocity field. Although the analysis is based on a very simple model, numerical experiments show that the relationships are approximately valid for moderate heterogeneities in the velocity field. In a companion paper (Bube et al., 2004), we use these results to investigate errors in structural location due to uncertainty in weak anisotropy.
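The sketch below shows the generic first-order error-propagation pattern the abstract describes: a Jacobian of position with respect to stacking velocity, zero-offset time, and slope maps an input covariance to a positioning covariance as J C J^T. The Jacobian entries and input uncertainties are illustrative numbers, not the paper's explicit formulas.

```python
# Minimal first-order (linear) uncertainty propagation sketch.
import numpy as np

# Sensitivity (Jacobian) of (x, z) position w.r.t. (v_stack, t0, slope),
# e.g. obtained by differentiating the mapping equations or numerically.
J = np.array([[0.05,  300.0, 150.0],     # dx/dv, dx/dt0, dx/dp (illustrative)
              [0.30, 1100.0, -80.0]])    # dz/dv, dz/dt0, dz/dp (illustrative)

# Assumed input uncertainties: 50 m/s in velocity, 4 ms in time, 0.01 in slope.
C_in = np.diag([50.0**2, 0.004**2, 0.01**2])

C_pos = J @ C_in @ J.T                   # first-order positioning covariance
sigma_x, sigma_z = np.sqrt(np.diag(C_pos))
print(f"lateral uncertainty ~ {sigma_x:.1f} m, depth uncertainty ~ {sigma_z:.1f} m")
```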
Geophysics | 2010
Sergey Fomel; Tamas Nemeth; Mauricio D. Sacchi
Seismic data processing and imaging is concerned with the reconstruction of the subsurface image of the earth from measured seismic data. The quality of the reconstruction is highly dependent on the quantity and quality of the acquired seismic data and the skillful representation and processing of the data for imaging. Recent advances in wide-azimuth data acquisition and processing have amply demonstrated that better input data can dramatically change the exploration potential of subsalt plays in ways that improved imaging and velocity estimation methods on traditional data would not have been able to provide.