Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ramesh Neelamani is active.

Publication


Featured research published by Ramesh Neelamani.


Geophysics | 2009

Fast full-wavefield seismic inversion using encoded sources

Jerome R. Krebs; John E. Anderson; David L. Hinkley; Ramesh Neelamani; Sunwoong Lee; Anatoly Baumstein; Martin-Daniel Lacasse

Full-wavefield seismic inversion (FWI) estimates a subsurface elastic model by iteratively minimizing the difference between observed and simulated data. This process is extremely computationally intensive, with a cost comparable to at least hundreds of prestack reverse-time depth migrations. When FWI is applied using explicit time-domain or frequency-domain iterative-solver-based methods, the seismic simulations are performed for each seismic-source configuration individually. Therefore, the cost of FWI is proportional to the number of sources. We have found that the cost of FWI for fixed-spread data can be significantly reduced by applying it to data formed by encoding and summing data from individual sources. The encoding step forms a single gather from many input source gathers. This gather represents data that would have been acquired from a spatially distributed set of sources operating simultaneously with different source signatures. The computational cost of FWI using encoded simultaneous-source gathers is reduced by a factor roughly equal to the number of sources. Further, this efficiency is gained without significantly reducing the accuracy of the final inverted model. The efficiency gain depends on subsurface complexity and seismic-acquisition parameters. There is potential for even larger improvements of processing speed.
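The encoding-and-summing step described above can be sketched in a few lines. This is a toy illustration only (random ±1 source codes and synthetic gather arrays are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed-spread data: n_src gathers, each (n_rec receivers x n_t samples).
n_src, n_rec, n_t = 16, 8, 128
gathers = rng.standard_normal((n_src, n_rec, n_t))

def encode_and_sum(gathers, rng):
    """Form one simultaneous-source gather: scale each source gather by a
    random sign code and sum. One simulation of this encoded gather then
    stands in for n_src individual simulations."""
    codes = rng.choice([-1.0, 1.0], size=len(gathers))
    encoded = np.tensordot(codes, gathers, axes=1)  # sum_i codes[i] * gathers[i]
    return codes, encoded

codes, encoded = encode_and_sum(gathers, rng)
print(encoded.shape)  # (8, 128): one gather instead of 16
```

Simulating the single encoded gather rather than all 16 individual gathers is where the roughly source-count-fold cost reduction comes from.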


Geophysics | 2008

Coherent and random noise attenuation using the curvelet transform

Ramesh Neelamani; Anatoly Baumstein; Dominique G. Gillard; Mohamed T. Hadidi; William L. Soroka

This paper discusses an effective approach to attenuating random and coherent linear noise in a 3D data set from a carbonate environment. Figure 1 illustrates a seismic inline section from a noisy 3D seismic cube. Clearly, the section in Figure 1 is corrupted by undesirable random noise and by coherent noise that is linear and vertically dipping in nature.
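Curvelet transforms are not part of the standard scientific Python stack, so the following toy sketch substitutes the FFT as the sparsifying transform to show the generic idea of transform-domain thresholding for random-noise attenuation. The signal, noise level, and `keep` fraction are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trace: two "reflection" sinusoids plus random noise.
t = np.arange(256) / 256.0
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.8 * rng.standard_normal(t.size)

def threshold_denoise(x, keep=0.05):
    """Toy transform-domain denoising: keep only the largest-magnitude
    coefficients (here in the FFT domain, as a stand-in for curvelets)
    and zero the rest, attenuating incoherent noise."""
    X = np.fft.rfft(x)
    cutoff = np.quantile(np.abs(X), 1 - keep)
    X[np.abs(X) < cutoff] = 0.0
    return np.fft.irfft(X, n=x.size)

denoised = threshold_denoise(noisy)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

Because the coherent signal concentrates in few transform coefficients while random noise spreads over all of them, thresholding reduces the error against the clean trace.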


IEEE Transactions on Image Processing | 2006

JPEG compression history estimation for color images

Ramesh Neelamani; R.L. de Queiroz; Zhigang Fan; S. Dash; Richard G. Baraniuk

We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) while simultaneously achieving a small file size.
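A drastically simplified stand-in for the dictionary-based idea: coefficients quantized with step q lie on the lattice of integer multiples of q, so scanning candidate steps and keeping the largest one consistent with the data recovers q. The 1-D setup and candidate range are assumptions for illustration, not the paper's MAP estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D stand-in for one DCT frequency: coefficients that were
# quantized with an unknown step q lie on the lattice {q * k : k integer}.
q_true = 7
coeffs = q_true * rng.integers(-20, 21, size=500)

def estimate_step(coeffs, candidates=range(1, 17)):
    """Dictionary-style search: return the largest candidate step for which
    every observed coefficient sits (nearly) on the candidate lattice."""
    best = 1
    for q in candidates:
        if np.all(np.abs(coeffs - q * np.round(coeffs / q)) < 1e-9):
            best = q  # ascending scan, so the last consistent step wins
    return best

q_est = estimate_step(coeffs)
```

Smaller divisors of the true step (here, 1) also fit the lattice, which is why the search keeps the largest consistent candidate.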


Geophysics | 2010

Efficient seismic forward modeling using simultaneous random sources and sparsity

Ramesh Neelamani; Christine E. Krohn; Jerry Krebs; Justin K. Romberg; Max Deffenbaugh; John E. Anderson

The high cost of simulating densely sampled seismic forward-modeling data arises from activating sources one at a time in sequence. To increase efficiency, one could leverage recent innovations in seismic field-data acquisition and activate several (e.g., 2-6) sources simultaneously during modeling. However, such approaches would suffer from degraded data quality because of the interference between the model's responses to the simultaneous sources. Two new efficient simultaneous-source modeling approaches are proposed that rely on the novel tandem use of randomness and sparsity to construct almost noise-free model responses to individual sources. In each approach, the first step is to measure the model's cumulative response with all sources activated simultaneously using randomly scaled band-limited impulses or continuous band-limited random-noise waveforms. In the second step, the model response to each individual source is estimated from the cumulative receiver measurement by exploiting knowledge of the random source waveforms and the sparsity of the model response to individual sources in a known transform domain (e.g., the curvelet domain). The efficiency achievable by the approaches is primarily governed by the sparsity of the model response. By invoking results from the field of compressive sensing, theoretical bounds are provided that assert that the approaches would need less modeling time for sparser (i.e., simpler or more structured) model responses. A simulated modeling example is illustrated that shows that data collected with as many as 8192 sources activated simultaneously can be separated into the 8192 individual source gathers with data quality comparable to that obtained when the sources were activated sequentially. The proposed approaches could also dramatically improve seismic field-data acquisition efficiency if the source signatures actually probing the earth can be measured accurately.
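The second, separation step is in the spirit of sparse recovery from compressive measurements. As a hedged toy model (random Gaussian mixing in place of the random source encoding, and ISTA as a generic solver rather than whatever the paper uses), a sparse vector can be recovered from far fewer measurements than unknowns:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: a k-sparse response x observed through random mixing A
# (a stand-in for random simultaneous-source encoding), y = A @ x.
m, n, k = 60, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true

def ista(A, y, lam=0.05, n_iter=1500):
    """Iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_est = ista(A, y)
rel_err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
```

The sparser the response, the fewer measurements (i.e., less modeling time) are needed, which is the compressive-sensing bound the abstract alludes to.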


IEEE Transactions on Signal Processing | 2006

Robust Distributed Estimation Using the Embedded Subgraphs Algorithm

Véronique Delouille; Ramesh Neelamani; Richard G. Baraniuk

We propose a new iterative, distributed approach for linear minimum mean-square-error (LMMSE) estimation in graphical models with cycles. The embedded subgraphs algorithm (ESA) decomposes a loopy graphical model into a number of linked embedded subgraphs and applies the classical parallel block Jacobi iteration comprising local LMMSE estimation in each subgraph (involving inversion of a small matrix) followed by an information exchange between neighboring nodes and subgraphs. Our primary application is sensor networks, where the model encodes the correlation structure of the sensor measurements, which are assumed to be Gaussian. The resulting LMMSE estimation problem involves a large matrix inverse, which must be solved in-network with distributed computation and minimal intersensor communication. By invoking the theory of asynchronous iterations, we prove that ESA is robust to temporary communication faults such as failing links and sleeping nodes, and enjoys guaranteed convergence under relatively mild conditions. Simulation studies demonstrate that ESA compares favorably with other recently proposed algorithms for distributed estimation. Simulations also indicate that energy consumption for iterative estimation increases substantially as more links fail or nodes sleep. Thus, somewhat surprisingly, sensor network energy conservation strategies such as low-powered transmission and aggressive sleep schedules could actually prove counterproductive. Our results can be replicated using MATLAB code from www.dsp.rice.edu/software
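The core numerical step, a local solve in each subgraph followed by an exchange with neighbors, amounts to a block Jacobi iteration for the information-form system J x = b. Below is a minimal serial sketch on a toy chain model with two hand-picked "subgraphs"; it is not the distributed, fault-tolerant ESA itself:

```python
import numpy as np

# Toy information-form model: solve J x = b for a sparse, diagonally
# dominant precision matrix J, with the nodes split into two "subgraphs".
n = 6
J = np.eye(n) * 4.0
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = -1.0   # chain of correlated nodes
b = np.arange(1.0, n + 1)
blocks = [np.arange(0, 3), np.arange(3, 6)]   # two embedded subgraphs

def block_jacobi(J, b, blocks, n_iter=60):
    """Each subgraph inverts only its own small block of J, using the
    neighbors' current estimates for the cross terms -- a sketch of the
    parallel block Jacobi iteration at the heart of ESA."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x_new = x.copy()
        for idx in blocks:
            Jbb = J[np.ix_(idx, idx)]
            # right-hand side after moving neighbor contributions over
            r = b[idx] - J[idx] @ x + Jbb @ x[idx]
            x_new[idx] = np.linalg.solve(Jbb, r)
        x = x_new
    return x

x_est = block_jacobi(J, b, blocks)
```

Only small per-subgraph matrices are ever inverted, which is what keeps the computation local and the intersensor communication minimal.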


information processing in sensor networks | 2004

Robust distributed estimation in sensor networks using the embedded polygons algorithm

Véronique Delouille; Ramesh Neelamani; Richard G. Baraniuk

We propose a new iterative distributed algorithm for linear minimum mean-squared-error (LMMSE) estimation in sensor networks whose measurements follow a Gaussian hidden Markov graphical model with cycles. The embedded polygons algorithm decomposes a loopy graphical model into a number of linked embedded polygons and then applies a parallel block Gauss-Seidel iteration comprising local LMMSE estimation on each polygon (involving inversion of a small matrix) followed by an information exchange between neighboring nodes and polygons. The algorithm is robust to temporary communication faults such as link failures and sleeping nodes and enjoys guaranteed convergence under mild conditions. A simulation study indicates that energy consumption for iterative estimation increases substantially as more links fail or nodes sleep. Thus, somewhat surprisingly, energy conservation strategies such as low-powered transmission and aggressive sleep schedules could actually be counterproductive.


Inverse Problems | 2010

Sparse channel separation using random probes

Justin K. Romberg; Ramesh Neelamani

This paper considers the problem of estimating the channel response (or Green's function) between multiple source–receiver pairs. Typically, the channel responses are estimated one at a time: a single source sends out a known probe signal, the receiver measures the probe signal convolved with the channel response, and the response is recovered using deconvolution. In this paper, we show that if the channel responses are sparse and the probe signals are random, then we can significantly reduce the total amount of time required to probe the channels by activating all of the sources simultaneously. With all sources activated simultaneously, the receiver measures a superposition of all the channel responses convolved with their respective probe signals. Separating this cumulative response into individual channel responses can be posed as a linear inverse problem. We show that channel-response separation is possible (and stable) even when the probing signals are relatively short, in spite of the corresponding linear system of equations becoming severely underdetermined. We derive a theoretical lower bound on the length of the source signals that guarantees that this separation is possible with high probability. The bound is derived by putting the problem in the context of finding a sparse solution to an underdetermined system of equations and then using mathematical tools from the theory of compressive sensing. Finally, we discuss some practical applications of these results, which include forward modeling for seismic imaging, channel equalization in multiple-input multiple-output communication, and increasing the field of view in an imaging system by using coded apertures.
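Posing the separation as a linear inverse problem can be shown directly. In this sketch the probes are long enough that the stacked convolution system is overdetermined, so ordinary least squares already separates the channels exactly in the noise-free case; sparsity, per the paper, is what lets the probes be much shorter than this. The channel taps and lengths are made up:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two sparse channel responses probed simultaneously with random signals.
L, T = 10, 100                       # channel length, probe length
h1 = np.zeros(L); h1[[1, 6]] = [1.0, -0.5]
h2 = np.zeros(L); h2[[3, 8]] = [0.8, 0.4]
p1 = rng.standard_normal(T)
p2 = rng.standard_normal(T)

# The receiver sees the superposition of both probes convolved with channels.
y = np.convolve(p1, h1) + np.convolve(p2, h2)

def conv_matrix(p, L):
    """Columns are shifted copies of the probe, so that
    np.convolve(p, h) == conv_matrix(p, L) @ h."""
    return np.column_stack([np.convolve(p, np.eye(L)[i]) for i in range(L)])

# Separation posed as one linear inverse problem over both channels at once.
C = np.hstack([conv_matrix(p1, L), conv_matrix(p2, L)])
h_est, *_ = np.linalg.lstsq(C, y, rcond=None)
h1_est, h2_est = h_est[:L], h_est[L:]
```

Shortening the probes shrinks the row count of C until the system becomes underdetermined; that is the regime where the paper's sparse-recovery machinery takes over from plain least squares.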


Geophysics | 2010

Adaptive subtraction using complex-valued curvelet transforms

Ramesh Neelamani; Anatoly Baumstein; Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises whose approximation requires several templates. By itself, the method can only correct small misalignment errors (±5 ms in 50-Hz data) in the template; it re...


Seg Technical Program Expanded Abstracts | 2009

Fast Full Wave Seismic Inversion Using Source Encoding

Jerome R. Krebs; John E. Anderson; David L. Hinkley; Anatoly Baumstein; Sunwoong Lee; Ramesh Neelamani; Martin-Daniel Lacasse

Full Wavefield Seismic Inversion (FWI) estimates a subsurface elastic model by iteratively minimizing the difference between observed and simulated data. This process is extremely compute-intensive, with a cost on the order of at least hundreds of prestack reverse-time migrations. For time-domain and Krylov-based frequency-domain FWI, the cost of FWI is proportional to the number of seismic sources inverted. We have found that the cost of FWI can be significantly reduced by applying it to data processed by encoding and summing individual source gathers, and by changing the encoding functions between iterations. The encoding step forms a single gather from many input source gathers. This gather represents data that would have been acquired from a spatially distributed set of sources operating simultaneously with different source signatures. We demonstrate, using synthetic data, significant cost reduction by applying FWI to encoded simultaneous-source data.


SIAM Journal on Discrete Mathematics | 2007

On Nearly Orthogonal Lattice Bases and Random Lattices

Ramesh Neelamani; Sanjeeb Dash; Richard G. Baraniuk

We study lattice bases where the angle between any basis vector and the linear subspace spanned by the other basis vectors is at least
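The quantity being studied, the angle between one basis vector and the linear subspace spanned by the others, can be computed by orthogonal projection. The angle threshold in the truncated sentence above is left unstated; this sketch only shows the computation itself:

```python
import numpy as np

def angle_to_span(B, i):
    """Angle (degrees) between basis vector B[:, i] and the subspace
    spanned by the remaining columns of B, via orthogonal projection."""
    v = B[:, i]
    others = np.delete(B, i, axis=1)
    Q, _ = np.linalg.qr(others)            # orthonormal basis for the span
    v_perp = v - Q @ (Q.T @ v)             # component outside the span
    return np.degrees(np.arcsin(np.linalg.norm(v_perp) / np.linalg.norm(v)))

# An orthogonal basis attains the maximum possible angle of 90 degrees;
# a sheared basis has a strictly smaller angle.
theta_ortho = angle_to_span(np.eye(3), 0)
theta_shear = angle_to_span(np.array([[1.0, 1.0], [0.0, 1.0]]), 0)
```

A basis is "nearly orthogonal" in this sense when every such angle stays above a fixed threshold.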

Collaboration


Dive into Ramesh Neelamani's collaborations.
