Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where S. Sandeep Pradhan is active.

Publication


Featured research published by S. Sandeep Pradhan.


IEEE Signal Processing Magazine | 2002

Distributed compression in a dense microsensor network

S. Sandeep Pradhan; Julius Kusuma; Kannan Ramchandran

The distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need high spatial density to enable reliable operation in the face of component node failures, as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk to one another. Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework. We present a new domain of collaborative information communication and processing through the framework of distributed source coding, which enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.
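The syndrome idea behind DISCUS can be illustrated with a toy binary example (a sketch for intuition, not the paper's actual construction): assuming two 7-bit sensor readings that differ in at most one position, a (7,4) Hamming parity-check matrix lets the encoder send a 3-bit syndrome instead of 7 bits, and the decoder recovers the reading exactly using its own correlated observation.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so every single-bit difference has a distinct syndrome.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T  # 3x7

def syndrome(x):
    return tuple(H @ x % 2)

# Coset leaders: the minimum-weight word for each of the 8 syndromes.
leaders = {syndrome(np.zeros(7, int)): np.zeros(7, int)}
for i in range(7):
    e = np.zeros(7, int); e[i] = 1
    leaders[syndrome(e)] = e

def encode(x):
    """Encoder sends only the 3-bit syndrome of its 7-bit reading."""
    return syndrome(x)

def decode(s, y):
    """Decoder picks the word in the coset with syndrome s closest to y."""
    z = leaders[tuple((np.array(s) + H @ y) % 2)]  # min-weight z with Hz = s + Hy
    return (y + z) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)
e = np.zeros(7, int); e[rng.integers(7)] = 1   # y differs from x in one bit
y = (x + e) % 2
assert (decode(encode(x), y) == x).all()       # x recovered from 3 bits plus y
```

The compression comes entirely from the code structure: the two sensors never communicate, yet the correlated reading at the decoder disambiguates the coset.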


IEEE Transactions on Information Theory | 2003

Duality between source coding and channel coding and its extension to the side information case

S. Sandeep Pradhan; Jim Chou; Kannan Ramchandran

We explore the information-theoretic duality between source coding with side information at the decoder and channel coding with side information at the encoder. We begin with a mathematical characterization of the functional duality between classical source and channel coding, formulating the precise conditions under which the optimal encoder for one problem is functionally identical to the optimal decoder for the other problem. We then extend this functional duality to the case of coding with side information. By invoking this duality, we are able to generalize the result of Wyner and Ziv (1976) on no rate loss for source coding with side information from Gaussian sources to more general distributions. We consider several examples corresponding to both discrete- and continuous-valued cases to illustrate our formulation. For the Gaussian cases of coding with side information, we invoke geometric arguments to provide further insights into their duality. Our geometric treatment inspires the construction and dual use of practical coset codes for a large class of emerging applications for coding with side information, such as distributed sensor networks, watermarking, and information-hiding communication systems.
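The no-rate-loss result referenced above says that, for jointly Gaussian sources, side information known only at the decoder costs nothing in rate relative to side information known at both ends. A minimal numeric sketch of the benchmark, the conditional rate-distortion function (names and parameter values here are illustrative):

```python
import math

def conditional_rd(sigma_x2, rho, D):
    """Conditional rate-distortion function R_{X|Y}(D) for X jointly
    Gaussian with side information Y (correlation rho).  For Gaussian
    sources the Wyner-Ziv rate coincides with this bound: no rate loss."""
    sigma_cond = sigma_x2 * (1 - rho ** 2)   # Var(X | Y)
    if D >= sigma_cond:
        return 0.0                           # side information alone suffices
    return 0.5 * math.log2(sigma_cond / D)   # bits per sample

# Unit-variance source, correlation 0.9 with the side information.
print(conditional_rd(1.0, 0.9, 0.05))
```

The stronger the correlation, the smaller Var(X | Y) and the cheaper it is to hit a target distortion, with or without the encoder seeing Y.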


Data Compression Conference | 2000

Distributed source coding: symmetric rates and applications to sensor networks

S. Sandeep Pradhan; Kannan Ramchandran

We address the problem of distributed source coding using a practical and constructive approach, referred to as distributed source coding using syndromes (DISCUS), with applications to sensor networks. We propose low complexity encoding and decoding methods based on linear codes, to achieve all points in the achievable rate region of the Slepian-Wolf (1973) problem. The extension of these concepts to the construction of Euclidean-space codes is also studied and analyzed for the case of trellis and lattice codes. The performance of these symmetric methods for encoding with a fidelity criterion is shown to be the same as that of asymmetric encoding. Simulations are presented to corroborate these results.
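The symmetric rate points targeted here sit on the boundary of the Slepian-Wolf region. For intuition, a minimal sketch of that region for a binary source pair (an illustration of the rate constraints, not the paper's code construction):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def in_slepian_wolf_region(r1, r2, p):
    """Check a rate pair against the Slepian-Wolf region for a doubly
    symmetric binary source: X uniform, Y = X xor N with N ~ Bernoulli(p),
    so H(X|Y) = H(Y|X) = h(p) and H(X, Y) = 1 + h(p)."""
    return r1 >= h(p) and r2 >= h(p) and r1 + r2 >= 1 + h(p)

p = 0.1
sym = (1 + h(p)) / 2                 # equal-rate point on the sum-rate boundary
assert in_slepian_wolf_region(sym, sym, p)
assert not in_slepian_wolf_region(h(p), h(p), p)  # both at corner rate: sum too small
```

The symmetric point splits the joint entropy evenly between the two encoders, which is exactly the regime where asymmetric (corner-point) constructions do not apply directly.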


IEEE Transactions on Information Theory | 2004

n-channel symmetric multiple descriptions - part I: (n, k) source-channel erasure codes

S. Sandeep Pradhan; Kannan Ramchandran

In this two-part paper, we present a new achievable rate region for the general n-channel symmetric multiple descriptions problem. In part I, inspired by the concept of maximum-distance separable (MDS) erasure channel codes, we consider a special case of this rate region, where the source is encoded into n descriptions each with rate R. These descriptions are transmitted over n bandwidth-constrained and errorless channels. During transmission, a subset of these channels can break down, thus erasing the corresponding descriptions. The decoder is interested in recovering the source with the reception of at least k descriptions. Only one realization of this breakdown process occurs during the entire transmission. For Gaussian sources, we have the following interesting result: when any k descriptions arrive, the achievable distortion exactly matches the optimal distortion-rate performance corresponding to a source rate of kR bits; with the reception of any m > k descriptions, the source reconstruction quality is strictly better, the improvement being nearly linear in the number of descriptions received.
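The Gaussian claim above can be made concrete with the standard distortion-rate function. A small numeric sketch for a hypothetical (n, k) = (4, 2) system (the parameter values are illustrative, not from the paper):

```python
def gaussian_drf(rate, sigma2=1.0):
    """Distortion-rate function of a Gaussian source under mean-squared
    error: D(R) = sigma^2 * 2^(-2R), with R in bits per sample."""
    return sigma2 * 2 ** (-2 * rate)

# Hypothetical (n, k) = (4, 2) system, R = 1 bit/sample per description.
n, k, R = 4, 2, 1.0
d_k = gaussian_drf(k * R)   # receiving any k descriptions matches D(kR) = 2^(-4)
# Receiving m > k descriptions gives strictly smaller distortion than d_k,
# with nearly linear improvement in m (though not necessarily D(mR)).
print(d_k)  # 0.0625
```

This is the MDS analogy: any k of the n descriptions are as good as a single point-to-point code of rate kR, just as any k symbols of an (n, k) MDS erasure code determine the message.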


IEEE Transactions on Information Theory | 2009

Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function

Dinesh Krithivasan; S. Sandeep Pradhan

Consider a pair of correlated Gaussian sources (X1, X2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X1 and X2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X1 and X2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques, in contrast to the more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by "correlated" lattice-structured binning.
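The idea of reconstructing the function directly with structured codes has a well-known binary counterpart, Körner and Marton's modulo-two sum scheme: both encoders use the same linear code's parity-check matrix, and the decoder adds the syndromes to decode X1 xor X2 without ever recovering X1 or X2 individually. A minimal sketch with a (7,4) Hamming code, assuming the two words differ in at most one bit:

```python
import numpy as np

# Shared parity-check matrix (3x7 Hamming): both encoders use the SAME code,
# which is what makes the syndromes add up to the syndrome of X1 xor X2.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T

def syn(v):
    return tuple(H @ v % 2)

# Coset leaders: minimum-weight word for each syndrome.
leaders = {syn(np.zeros(7, int)): np.zeros(7, int)}
for i in range(7):
    e = np.zeros(7, int); e[i] = 1
    leaders[syn(e)] = e

rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, 7)
z = np.zeros(7, int); z[rng.integers(7)] = 1   # correlation model: wt(x1 xor x2) <= 1
x2 = (x1 + z) % 2

s1, s2 = syn(x1), syn(x2)                      # each encoder sends 3 bits
z_hat = leaders[tuple((np.array(s1) + np.array(s2)) % 2)]
assert (z_hat == z).all()                      # x1 xor x2 decoded from 6 bits total
```

The lattice scheme in the paper plays the analogous role over the reals: a common lattice structure lets the decoder work directly with the desired linear combination.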


IEEE Transactions on Information Theory | 2005

Generalized coset codes for distributed binning

S. Sandeep Pradhan; Kannan Ramchandran

In many multiterminal communication problems, constructions of good source codes involve finding distributed partitions (into bins) of a collection of quantizers associated with a group of source encoders. Further, computationally efficient procedures to index these bins are also required. In this work, we consider a constructive approach for distributed binning in an algebraic framework. Several application scenarios fall under the scope of this paper including the CEO problem, distributed source coding, and n-channel symmetric multiple description source coding with n>2. Specifically, in this exposition we consider the case of two codebooks while focusing on the Gaussian CEO problem with mean squared error reconstruction and with two symmetric observations. This problem deals with distributed encoding of correlated noisy observations of a source into descriptions such that the joint decoder having access to them can reconstruct the source with a fidelity criterion. We employ generalized coset codes constructed in a group-theoretic setting for this approach, and analyze the performance in terms of distance properties and decoding algorithms.


IEEE Transactions on Information Theory | 2007

Source Coding With Feed-Forward: Rate-Distortion Theorems and Error Exponents for a General Source

Ramji Venkataramanan; S. Sandeep Pradhan

In this work, we consider a source coding model with feed-forward. We analyze a system with a noiseless feed-forward link where the decoder has knowledge of all previous source samples while reconstructing the present sample. The rate-distortion function for an arbitrary source with feed-forward is derived in terms of directed information, a variant of mutual information. We further investigate the nature of the rate-distortion function with feed-forward for two common types of sources: discrete memoryless sources and Gaussian sources. We then characterize the error exponent for a general source with feed-forward. The results are then extended to feed-forward with an arbitrary delay larger than the block length.


Data Compression Conference | 2001

Enhancing analog image transmission systems using digital side information: a new wavelet-based image coding paradigm

S. Sandeep Pradhan; Kannan Ramchandran

We address digital transmission for enhancing, in a backward-compatible way, the quality of analog image transmission systems. We propose a practical algorithm that treats the problem as one of wavelet image compression with side information (available in the form of a noisy analog version of the image) present at the decoder. We propose a rate allocation technique to efficiently allocate the rate among the wavelet coefficients of the image. In typical instances of the problem, we obtain gains of up to 2.5 dB over conventional methods that ignore the side information. Surprisingly, this is achieved by modifying only a small fraction of the wavelet coefficients (typically around 10-20%) of the conventional source coder. Extensions of our proposed image transmission framework to video transmission find application in the upgrade of current analog television broadcast systems to digital TV.


Data Compression Conference | 2003

Turbo and trellis-based constructions for source coding with side information

Jim Chou; S. Sandeep Pradhan; Kannan Ramchandran

The problem of constructing rate-distortion-efficient codes for source coding with side information (SCSI), a problem of heightened recent interest, is studied. While the Wyner-Ziv theorem from information theory prescribes rate-distortion performance bounds for the SCSI problem, the gap between theory and practice has remained large. To reduce this gap, two different frameworks are proposed, one based on a trellis construction and the other on a turbo construction. Simulation results on the Gaussian SCSI problem reveal the promise of the proposed approaches: at 1 bit/sample, 0.5 bits/sample, 0.25 bits/sample, and 0.125 bits/sample, these constructions attain performance within 1.3 dB, 1.1 dB, 0.85 dB, and 0.5 dB, respectively, of the theoretical Wyner-Ziv rate-distortion bound.


IEEE Transactions on Information Theory | 2011

Distributed Source Coding Using Abelian Group Codes: A New Achievable Rate-Distortion Region

Dinesh Krithivasan; S. Sandeep Pradhan

A distributed source coding problem with a joint distortion criterion that depends on both the sources and the reconstruction is considered in this work. While the prevalent trend in information theory has been to prove achievability results using Shannon's random coding arguments, structured random codes offer rate gains over unstructured random codes for many problems. Motivated by this, a new achievable rate-distortion region (an inner bound to the performance limit) is presented for this problem for discrete memoryless sources, based on "good" structured random nested codes built over abelian groups. For certain sources and distortion functions, the new rate region is shown, via numerical plots, to be strictly larger than the Berger-Tung rate region, which had been the best known achievable rate region for this problem until now. Achievable rates for single-user source coding using abelian group codes are also obtained as a corollary of the main coding theorem. It is shown that nested linear codes achieve the Shannon rate-distortion function in the arbitrary discrete memoryless case.

Collaboration


Dive into S. Sandeep Pradhan's collaborations.

Top Co-Authors


Jim Chou

University of California


Ali Nazari

University of Michigan
