Paul L. Donoho
Chevron Corporation
Publications
Featured research published by Paul L. Donoho.
Data Compression Conference | 1996
John D. Villasenor; R. A. Ergas; Paul L. Donoho
Seismic data have a number of unique characteristics that differentiate them from the still image and video data that are the focus of most lossy coding research efforts. Seismic data occupy three or four dimensions, and have a high degree of anisotropy with substantial amounts of noise. Two-dimensional coding approaches based on wavelets or the DCT achieve only modest compression ratios on such data because of these statistical properties, and because 2D approaches fail to fully leverage the redundancy in the higher dimensions of the data. We describe here a wavelet-based algorithm that operates directly in the highest dimension available, and which has been used to successfully compress geophysical data with no observable loss of geophysical information at compression ratios substantially greater than 100:1. This algorithm was successfully field tested on a vessel in the North Sea in July 1995, demonstrating the feasibility of performing on-board real-time compression and satellite downloading from marine seismic data acquisition platforms.
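The published algorithm itself is not given in the abstract, so the following is only a minimal sketch of the general idea it describes: take a wavelet transform over all available dimensions of a seismic cube and quantize the coefficients coarsely. The wavelet family, decomposition level, quantization step, and the synthetic random cube are assumptions for illustration (real seismic data are far more compressible than random noise), and PyWavelets stands in for whatever transform code was actually used.

```python
# A minimal sketch of lossy 3D wavelet compression of a volume.
# Assumptions: db4 wavelet, 3 decomposition levels, uniform quantization;
# this is NOT the published algorithm, only an illustration of the idea.
import numpy as np
import pywt

def compress_cube(cube, wavelet="db4", level=3, step=0.05):
    """Full 3D wavelet transform, then uniform quantization of all coefficients."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level)   # transform in every dimension
    arr, slices = pywt.coeffs_to_array(coeffs)           # flatten to one coefficient array
    scale = step * np.abs(arr).max()                     # assumed quantization step
    q = np.round(arr / scale).astype(np.int32)
    return q, slices, scale

def decompress_cube(q, slices, scale, wavelet="db4"):
    coeffs = pywt.array_to_coeffs(q.astype(np.float64) * scale, slices,
                                  output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)

# Toy example: random noise standing in for a 3D seismic volume.
cube = np.random.randn(64, 64, 64)
q, slices, scale = compress_cube(cube)
recon = decompress_cube(q, slices, scale)
kept = cube.size / max(np.count_nonzero(q), 1)           # crude proxy for compressibility
err = np.linalg.norm(recon - cube) / np.linalg.norm(cube)
print(f"nonzero-coefficient ratio ~{kept:.1f}:1, relative error {err:.3f}")
```

The high ratios reported in the abstract come from the structure of real seismic data, where most quantized coefficients become zero and entropy-code compactly; the nonzero-coefficient count printed here is only a rough stand-in for that.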
SEG Technical Program Expanded Abstracts | 1995
Jonathan P. Stigant; Raymond A. Ergas; Paul L. Donoho; Anthony S. Minchella; Pierre Y. Galibert
Using the compression technique described by Donoho et al., we performed a field test of this technology by transmitting highly compressed seismic data from a multi-streamer ship to a land-based processing center in nearly real time. Using the M/V CGG Mistral, which acquired a 3D dataset over the Ninian field in the North Sea during the summer of 1995, and Ku-band (14 GHz) satellite channels, we established the feasibility of sending both the traces and navigation data directly into the seismic processing system. Although only a subset of the whole survey was sent to shore for the test, the test was designed to demonstrate, both technically and economically, the feasibility of sending an entire 3D dataset on a day-to-day basis. The received data were decompressed and processed in parallel with the original data recorded on shipboard tape; the effects of the compression-induced noise at the stack and migration stages will be shown.
SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1998
Paul L. Donoho; Raymond A. Ergas; Robert S. Polzer
As new methods of interpreting 3D seismic data, particularly prestack and derived attribute data, increase in popularity, the management of ever-larger data volumes becomes critical. Compared with acquisition and processing, however, the interpretation use of seismic data requires faster, non-sequential, random access to large data volumes. In addition, quantitative interpretations lead to an increasing need for full 32-bit resolution of amplitudes, rather than the 8- or 16-bit representations that have been used in most interpretation systems up to the present. Seismic data compression can be a significant tool in managing these on-line datasets, but the implementations previously used (e.g., Donoho et al., 1995) are not well suited to providing rapid random access in arbitrary directions (CDP, crossline, timeslice). If compression ratios of twenty or greater can be routinely provided, with full random access to all parts of the dataset, then both larger cubes and more cubes can be handled within the finite memory and disk systems available on interpretation systems. Such on-line data accessibility will lead to higher productivity by interpreters and greater value for existing seismic surveys. We address here several problems that arise when using the lossy wavelet-transform data compression algorithms currently available. We demonstrate that wavelet compression introduces less noise than currently accepted truncation compression. We also show how compressing the small blocks of data needed for random access leads to artifacts in the data, and we provide a procedure for eliminating these artifacts.
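As a rough illustration of the random-access problem discussed above, the sketch below stores a volume as independently compressed bricks, so that an arbitrary timeslice can be read back by decompressing only the bricks that intersect it. The brick size, wavelet, and quantization step are assumptions, and the paper's procedure for removing brick-boundary artifacts is not reproduced here.

```python
# Hedged illustration of brick-based storage for random access into a
# compressed volume. Independent bricks allow partial decompression, but
# compressing small blocks independently is exactly what introduces the
# boundary artifacts discussed in the abstract.
import numpy as np
import pywt

BRICK = 32  # assumed brick edge length

def compress_brick(brick, wavelet="db2", step=0.1):
    coeffs = pywt.wavedecn(brick, wavelet, level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    scale = step * np.abs(arr).max()
    return np.round(arr / scale).astype(np.int16), slices, scale

def decompress_brick(q, slices, scale, wavelet="db2"):
    coeffs = pywt.array_to_coeffs(q * scale, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)

def build_store(cube):
    """Compress each BRICK^3 sub-volume independently, indexed by its corner."""
    store = {}
    nx, ny, nz = cube.shape
    for i in range(0, nx, BRICK):
        for j in range(0, ny, BRICK):
            for k in range(0, nz, BRICK):
                store[(i, j, k)] = compress_brick(cube[i:i+BRICK, j:j+BRICK, k:k+BRICK])
    return store

def read_timeslice(store, shape, t):
    """Reconstruct one timeslice by decompressing only the bricks containing it."""
    nx, ny, _ = shape
    out = np.empty((nx, ny))
    k0 = (t // BRICK) * BRICK
    for (i, j, k), (q, slices, scale) in store.items():
        if k == k0:
            out[i:i+BRICK, j:j+BRICK] = decompress_brick(q, slices, scale)[:, :, t - k0]
    return out

cube = np.random.randn(64, 64, 64)
store = build_store(cube)
ts = read_timeslice(store, cube.shape, t=40)
print(ts.shape)  # (64, 64) -- only the bricks covering t=40 were touched
```

A real interpretation system would also entropy-code each brick and keep an on-disk index of brick offsets; both are omitted from this sketch.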
SEG Technical Program Expanded Abstracts | 1996
Raymond A. Ergas; Robert S. Polzer; Paul L. Donoho; John D. Villasenor
With the increasing use of lossy seismic data compression techniques by the industry (Bosman and Reiter, 1993; Stigant et al., 1995; Ergas et al., 1996), a dilemma facing users of this technology is to avoid loss of real geophysical information while still obtaining the advantages of smaller data volumes. At this time, each survey requires both testing and an explicit decision to accept or reject a compressed version of the original data. In this paper, we will discuss the available testing methods and how this decision may be made as routine as others we make in acquisition, processing, and interpretation of seismic data.

The choice of a level of compression performance could be systematically formulated in terms of the economic cost of recovering from “overcompression” and its probability of occurrence versus the value derived from moving or storing less data. The probability is a difficult number to estimate, and will depend upon the purpose to which the data are put. A straightforward structural interpretation is likely to be more forgiving of compression errors than a quantitative inversion for lithology or fluid content. As a practical matter, we need to reduce the probability of compression failure to a very small number based on testing and experience, and to have a backup plan to use either a lower compression ratio or the original, uncompressed data. Our experience to date has been that loss of critical information is not difficult to detect at high compression ratios, and that performance is reliable if we choose a compression ratio significantly lower than the point at which failure is evident.

There are several possible approaches to testing compression performance on seismic data. These span the range from visual inspection of the original and compressed data on conventional plots to detailed statistical experiments on relative interpreter performance on sections with and without compression. We have found displays of the data along with the difference between the compressed and original data to be a robust, if labor-intensive, approach. Care must be taken with amplitudes in both processing and plotting, as simple but data-dependent processes such as AGC can make analysis difficult. Summary statistics, such as the compression signal-to-noise ratio (SNR), the ratio of the energy in the original to the energy in the difference, can be useful as a monitoring device, but do not give a clear indication of when unacceptable artifacts begin to appear. SNR can be measured as a function of time, space, frequency, wavenumber, etc., to provide more detailed criteria for deciding compression acceptability. Other statistics such as entropy (Chen, 1995) can be used, as well as attributes derived from the seismic data. Selection of statistics and threshold values will vary in different geologic and seismic acquisition environments.

A judgement of acceptability for a given compressed dataset can be made on the basis of the data, or it can be made through measurements of interpreter performance, with the quality of the interpretation as the metric of compression performance. In the field of medical radiology, the same situation has arisen, with expert readers of images working on both compressed and original datasets. There is a well-developed statistical methodology (Cosman et al., 1994) that can be used to determine whether significant performance differences can be detected. Such a test could be attempted in our field, using either real data or synthetic seismograms. In the latter case, the ground truth is known, which is normally not the case in seismology.

As with many other geophysical methods, the experience and comfort level of the users of seismic data will determine when and whether compression becomes commonplace. Initial results indicate that wavelet transform compression, with proper attention to compression parameters, is robust and predictable. If this continues over the next few years, practical methods to ensure that geophysical information is preserved through the compression step should evolve and stabilize, and compression will become a routine part of the geophysicist’s toolbox.
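The compression signal-to-noise ratio described above has a direct numerical form: the energy of the original data divided by the energy of the difference between the original and reconstructed data, usually quoted in decibels. A small sketch follows, with an assumed window length for the time-varying variant:

```python
# Compression SNR as defined in the abstract: energy of the original over
# energy of (original - reconstructed). The 250-sample window is an assumption.
import numpy as np

def compression_snr_db(original, reconstructed):
    noise = original - reconstructed
    return 10.0 * np.log10(np.sum(original**2) / np.sum(noise**2))

def snr_vs_time(original, reconstructed, win=250):
    """SNR in consecutive windows along the last (time) axis."""
    snrs = []
    for t0 in range(0, original.shape[-1] - win + 1, win):
        snrs.append(compression_snr_db(original[..., t0:t0+win],
                                       reconstructed[..., t0:t0+win]))
    return np.array(snrs)

# Toy usage: a synthetic section with 1% added "compression noise" (~40 dB SNR).
section = np.random.randn(200, 1000)
recon = section + 0.01 * np.random.randn(*section.shape)
print(f"overall SNR {compression_snr_db(section, recon):.1f} dB")
print("windowed SNR (dB):", np.round(snr_vs_time(section, recon), 1))
```

As the abstract notes, a single global SNR is only a monitoring statistic; windowed or frequency-resolved versions like the one above give finer-grained acceptance criteria but still do not flag artifacts by themselves.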
SPIE's 1996 International Symposium on Optical Science, Engineering, and Instrumentation | 1996
Robert S. Polzer; Paul L. Donoho; Raymond A. Ergas; Jonathan P. Stigant
Chevron recently developed and deployed a seismic data compression algorithm based on multidimensional wavelet transforms. Development was motivated by the large volumes of data acquired in modern 3D marine surveys. We demonstrate an algorithm that can compress seismic data at ratios between 50:1 and 100:1 without losing geophysically significant information. The algorithm was successfully field tested on a vessel in the North Sea in July 1995, demonstrating the feasibility of on-board real-time compression and satellite transmission of the data to a land-based processing center. Compressed and decompressed data from the field test were processed into a final image. Differences between this image and the image based on the original data are geophysically insignificant, demonstrating that all geophysical information in the original was retained.
Journal of the Acoustical Society of America | 1994
Paul L. Donoho; Mitchell F. Peterson; Hughie Ryder; William H. Keeling
Archive | 1995
Raymond A. Ergas; Paul L. Donoho; John D. Villasenor
Archive | 1987
Bibhas R. De; Paul L. Donoho; David E. Revus; Russell E. Boyer
SEG Technical Program Expanded Abstracts | 1995
Paul L. Donoho; Raymond A. Ergas; John D. Villasenor
SEG Technical Program Expanded Abstracts | 1999
Paul L. Donoho; Raymond A. Ergas; Robert S. Polzer