Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kumar Viswanatha is active.

Publications


Featured research published by Kumar Viswanatha.


IEEE Transactions on Information Theory | 2014

On Zero-Delay Source-Channel Coding

Emrah Akyol; Kumar Viswanatha; Kenneth Rose; Tor A. Ramstad

This paper studies the zero-delay source-channel coding problem, and specifically the problem of obtaining the vector transformations that optimally map between the m-dimensional source space and the k-dimensional channel space, under a given transmission power constraint and for the mean square error distortion. The functional properties of the cost are studied and the necessary conditions for the optimality of the encoder and decoder mappings are derived. An optimization algorithm that imposes these conditions iteratively, in conjunction with the noisy channel relaxation method to mitigate poor local minima, is proposed. The numerical results show strict improvement over prior methods. The numerical approach is extended to the scenario of source-channel coding with decoder side information. The resulting encoding mappings are shown to be continuous relatives of, and in fact to subsume as a special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems. A well-known result in information theory pertains to the linearity of optimal encoding and decoding mappings in the scalar Gaussian source and channel setting, at all channel signal-to-noise ratios (CSNRs). In this paper, the linearity of optimal coding, beyond the Gaussian source and channel, is considered, and the necessary and sufficient condition for linearity of optimal mappings, given a noise (or source) distribution and a specified total power constraint, is derived. It is shown that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the optimal mappings are linear at more than one CSNR value. Moreover, the asymptotic linearity of optimal mappings is shown for low CSNR if the channel is Gaussian, regardless of the source, and, at the other extreme, for high CSNR if the source is Gaussian, regardless of the channel. The extension to the vector setting is also considered, where, besides the conditions inherited from the scalar case, additional constraints must be satisfied to ensure linearity of the optimal mappings.
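A quick numerical check of the scalar Gaussian linearity result is easy to set up. The following is a minimal sketch of our own (not code from the paper), assuming the scalar case m = k = 1: a linear encoder scaled to meet the power budget, followed by the linear MMSE decoder, attains the distortion sigma_x^2 / (1 + CSNR), the optimum for this setting.

```python
# Minimal sketch (ours, not from the paper): for a scalar Gaussian source over
# a Gaussian channel, linear zero-delay mappings achieve the optimal
# distortion sigma_x^2 / (1 + CSNR).
import numpy as np

rng = np.random.default_rng(0)
sigma_x2, sigma_n2, P = 1.0, 0.25, 1.0      # source/noise variances, power budget
csnr = P / sigma_n2

n = 1_000_000
x = rng.normal(0.0, np.sqrt(sigma_x2), n)

alpha = np.sqrt(P / sigma_x2)               # linear encoder g(x) = alpha * x
y = alpha * x + rng.normal(0.0, np.sqrt(sigma_n2), n)

beta = alpha * sigma_x2 / (alpha**2 * sigma_x2 + sigma_n2)  # MMSE decoder h(y) = beta * y
mse = np.mean((x - beta * y) ** 2)

print(f"empirical MSE: {mse:.5f}")
print(f"optimal D    : {sigma_x2 / (1 + csnr):.5f}")
```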


IEEE Transactions on Information Theory | 2012

On Conditions for Linearity of Optimal Estimation

Emrah Akyol; Kumar Viswanatha; Kenneth Rose

When is optimal estimation linear? It is well known that when a Gaussian source is contaminated with Gaussian noise, a linear estimator minimizes the mean square estimation error. This paper analyzes, more generally, the conditions for linearity of optimal estimators. Given a noise (or source) distribution and a specified signal-to-noise ratio (SNR), we derive conditions for the existence and uniqueness of a source (or noise) distribution for which the Lp-optimal estimator is linear. We then show that if the noise and source variances are equal, then the matching source must be distributed identically to the noise. Moreover, we prove that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the mean square error (MSE) optimal estimator is linear at more than one SNR value. Furthermore, we show the asymptotic linearity of MSE-optimal estimators for low SNR if the channel is Gaussian, regardless of the source, and, vice versa, for high SNR if the source is Gaussian, regardless of the channel. The extension to the vector case is also considered, where, besides the conditions inherited from the scalar case, additional constraints must be satisfied to ensure linearity of the optimal estimator.
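As an illustration of the theme (a sketch of our own, not the paper's code), the following compares the MMSE-optimal estimator E[X|Y], computed by numerical integration, against the best linear estimator for Y = X + N with Gaussian N. For a Gaussian source the two coincide; for a uniform source of the same variance, the optimal estimator is nonlinear and strictly better.

```python
# Sketch (ours, illustrative): optimal estimator E[X|Y] via numerical
# integration vs. the best linear estimator for Y = X + N, N Gaussian.
import numpy as np

rng = np.random.default_rng(1)
sigma_n, n = 0.5, 5000
grid = np.linspace(-6, 6, 1201)

def linear_vs_optimal(sample_source, prior_pdf):
    x = sample_source(n)
    y = x + rng.normal(0.0, sigma_n, n)
    C = np.cov(x, y)
    mse_lin = np.mean((x - (C[0, 1] / C[1, 1]) * y) ** 2)  # best linear x_hat = c*y
    lik = np.exp(-0.5 * ((y[:, None] - grid[None, :]) / sigma_n) ** 2)
    post = lik * prior_pdf(grid)[None, :]                   # unnormalized posterior
    x_mmse = (post @ grid) / post.sum(axis=1)               # conditional mean E[X|Y]
    return mse_lin, np.mean((x - x_mmse) ** 2)

# Gaussian source, unit variance: no gap between linear and optimal.
g = linear_vs_optimal(lambda m: rng.normal(0, 1, m),
                      lambda t: np.exp(-0.5 * t**2))
# Uniform source with unit variance (support [-sqrt(3), sqrt(3)]): strict gap.
a = np.sqrt(3.0)
u = linear_vs_optimal(lambda m: rng.uniform(-a, a, m),
                      lambda t: (np.abs(t) <= a).astype(float))
print(f"Gaussian: linear {g[0]:.4f} vs optimal {g[1]:.4f}  (no gap)")
print(f"Uniform : linear {u[0]:.4f} vs optimal {u[1]:.4f}  (small strict gap)")
```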


IEEE Transactions on Information Theory | 2014

The Lossy Common Information of Correlated Sources

Kumar Viswanatha; Emrah Akyol; Kenneth Rose

The two most prevalent notions of common information (CI) are due to Wyner and Gács-Körner, and both can be stated as two different characteristic points in the lossless Gray-Wyner region. Although the information-theoretic characterizations of these two CI quantities can be easily evaluated for random variables with infinite entropy (e.g., continuous random variables), their operational significance applies only to the lossless framework. The primary objective of this paper is to generalize these two CI notions to the lossy Gray-Wyner network, thereby extending the theoretical foundation to general sources and distortion measures. We begin by deriving a single-letter characterization for the lossy generalization of Wyner's CI, defined as the minimum rate on the shared branch of the Gray-Wyner network that maintains the minimum sum transmit rate when the two decoders reconstruct the sources subject to individual distortion constraints. To demonstrate its use, we compute the CI of bivariate Gaussian random variables for the entire regime of distortions. We then similarly generalize Gács and Körner's definition to the lossy framework. The latter half of this paper focuses on the tradeoff between the total transmit rate and the receive rate in the Gray-Wyner network. We show that this tradeoff yields a contour of points on the surface of the Gray-Wyner region, which passes through both the Wyner and Gács-Körner operating points and thereby provides a unified framework for understanding the different notions of CI. We further show that this tradeoff generalizes the two notions of CI to the excess sum transmit rate and receive rate regimes, respectively.
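For orientation, the classical (lossless) quantity being generalized, and its well-known bivariate Gaussian evaluation, can be stated compactly. These are standard facts recalled here, not the paper's lossy characterization itself:

```latex
% Wyner's (lossless) common information: the least rate of a shared
% message W that renders X and Y conditionally independent.
C(X;Y) \;=\; \min_{W \,:\, X - W - Y} I(X,Y;W)
% Known evaluation for a bivariate Gaussian pair with correlation
% coefficient \rho \ge 0:
C(X;Y) \;=\; \frac{1}{2}\,\log\frac{1+\rho}{1-\rho}
```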


IEEE Information Theory Workshop | 2010

On conditions for linearity of optimal estimation

Emrah Akyol; Kumar Viswanatha; Kenneth Rose

When is optimal estimation linear? It is well known that, in the case of a Gaussian source contaminated with Gaussian noise, a linear estimator minimizes the mean square estimation error. This paper analyzes more generally the conditions for linearity of optimal estimators. Given a noise (or source) distribution and a specified signal-to-noise ratio (SNR), we derive conditions for the existence and uniqueness of a source (or noise) distribution that renders the Lp-norm optimal estimator linear. We then show that, if the noise and source variances are equal, then the matching source is distributed identically to the noise. Moreover, we prove that the Gaussian source-channel pair is unique in that it is the only source-channel pair for which the MSE-optimal estimator is linear at more than one SNR value.


IEEE Information Theory Workshop | 2010

On optimum communication cost for joint compression and dispersive information routing

Kumar Viswanatha; Emrah Akyol; Kenneth Rose

In this paper, we consider the problem of minimum-cost joint compression and routing for networks with multiple sinks and correlated sources. We introduce a routing paradigm, called dispersive information routing, wherein the intermediate nodes are allowed to forward a subset of the received bits on subsequent paths. This paradigm opens up a rich class of research problems that focus on the interplay between encoding and routing in a network. What makes it particularly interesting is the challenge of encoding the sources such that exactly the required information is routed to each sink for it to reconstruct the sources it is interested in. We demonstrate using simple examples that our approach offers better asymptotic performance than conventional routing techniques. We also introduce a variant of the well-known random binning technique, called 'power binning', to encode and decode sources that are dispersively transmitted, which asymptotically achieves the minimum communication cost within this routing paradigm.
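To make the routing paradigm concrete, here is a toy cost comparison of our own devising (hypothetical network, edge weights, and rates; not an example from the paper): a relay either forwards a received packet in full, as in conventional routing, or forwards only the bit-subset each sink actually needs, as in dispersive information routing.

```python
# Toy illustration (hypothetical numbers): cost = sum over edges of
# (bits carried) x (edge weight). One source, one relay, two sinks.
# The source's packet splits into two sub-packets:
#   bits_common -> needed by both sinks; bits_extra -> needed only by sink 2.
bits_common, bits_extra = 4.0, 3.0                            # assumed rates
w_src_to_relay, w_relay_to_s1, w_relay_to_s2 = 1.0, 2.0, 2.0  # edge weights

# Conventional routing: the full packet travels every path.
full = bits_common + bits_extra
cost_conv = full * (w_src_to_relay + w_relay_to_s1 + w_relay_to_s2)

# Dispersive routing: the relay drops bits_extra on the branch to sink 1.
cost_dir = (full * w_src_to_relay
            + bits_common * w_relay_to_s1
            + full * w_relay_to_s2)

print(f"conventional routing cost: {cost_conv:.1f}")   # 35.0
print(f"dispersive routing cost  : {cost_dir:.1f}")    # 29.0
```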


IEEE Information Theory Workshop | 2011

A strictly improved achievable region for multiple descriptions using combinatorial message sharing

Kumar Viswanatha; Emrah Akyol; Kenneth Rose

We recently proposed a new coding scheme for the L-channel multiple descriptions (MD) problem for general sources and distortion measures involving 'Combinatorial Message Sharing' (CMS) [7], leading to a new achievable rate-distortion region. Our objective in this paper is to establish that this coding scheme strictly subsumes the most popular region for this problem, due to Venkataramani, Kramer and Goyal (VKG) [3]. In particular, we show that for a binary symmetric source under the Hamming distortion measure, the CMS scheme provides a strictly larger region for all L > 2. The principle of the CMS coding scheme is to include a common message in every subset of the descriptions, unlike the VKG scheme, which sends a single common message in all the descriptions. In essence, we show that allowing for a common codeword in every subset of descriptions provides greater freedom in coordinating the messages, which can be exploited constructively to achieve points outside the VKG region.


IEEE International Symposium on Information Theory | 2011

Combinatorial Message Sharing for a refined multiple descriptions achievable region

Kumar Viswanatha; Emrah Akyol; Kenneth Rose

This paper presents a new achievable rate-distortion region for the L-channel multiple descriptions problem. Currently, the most popular region for this problem is due to Venkataramani, Kramer and Goyal [3]. Their encoding scheme is an extension of the Zhang-Berger scheme to the L-channel case and includes a combinatorial number of refinement codebooks, one for each subset of the descriptions. All the descriptions also share a single common codeword, which introduces redundancy but assists in better coordination of the descriptions. This paper proposes a novel encoding technique involving 'Combinatorial Message Sharing', where every subset of the descriptions may share a distinct common message. This introduces a combinatorial number of shared codebooks along with the refinement codebooks of [3]. These shared codebooks provide a more flexible framework to trade off redundancy across the messages for resilience to description loss. We derive an achievable rate-distortion region for the proposed technique and show that it subsumes the achievable region of [3].
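The combinatorial structure is easy to enumerate. The sketch below is our own, based on the description above, and assumes the shared messages are indexed by subsets of two or more descriptions (singleton subsets reduce to private messages); VKG's single common codeword corresponds to the full-set entry alone.

```python
# Structural sketch (ours): enumerate the shared codebooks CMS introduces
# for L descriptions, assuming one shared message per subset of size >= 2.
from itertools import combinations

def shared_codebooks(L):
    descs = range(1, L + 1)
    return [set(s) for k in range(2, L + 1) for s in combinations(descs, k)]

for L in (2, 3, 4):
    books = shared_codebooks(L)   # 2^L - L - 1 shared codebooks in total
    print(f"L = {L}: {len(books)} shared codebooks (VKG uses 1), "
          f"e.g. {books[:4]}")
```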


IEEE International Conference on Acoustics, Speech, and Signal Processing | 2010

Towards large scale distributed coding

Sharadh Ramaswamy; Kumar Viswanatha; Ankur Saxena; Kenneth Rose

This paper considers the problem of distributed source coding for a large sensor network. A typical shortcoming of current approaches to true distributed coding is the exponential growth of the decoder codebook size with the number of sources in the network. This growth in complexity renders many traditional approaches impractical for even moderately sized sensor networks. Inspired by our recent results on fusion coding for selective retrieval, we propose a new distributed coding approach that scales to a large number of sources. Central to our approach is a "bit-subset selector" module whose role is to judiciously extract an appropriate subset of the received bits for decoding each individual source. This, together with the joint design of all system components, enables direct optimization of the decoder complexity-distortion tradeoff, and thereby the desired scalability. Experiments on both real and synthetic datasets show considerable gains over heuristic schemes.
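A minimal sketch of the bit-subset selector idea follows (ours, under simplifying assumptions: a random binary encoding, a lookup-table decoder whose entries are cell-conditional means, and greedy forward selection in place of the paper's jointly designed system). The point it illustrates is that per-source decoding from k selected bits needs a table of size 2^k rather than 2^(total bits).

```python
# Hypothetical sketch (ours) of a bit-subset selector for scalable decoding.
import numpy as np

rng = np.random.default_rng(2)
n_sources, n_bits, n_train = 8, 16, 5000

# Correlated toy sources and a random binary "encoding" of the joint data.
z = rng.normal(size=(n_train, 1))
x = z + 0.3 * rng.normal(size=(n_train, n_sources))       # sources to recover
bits = (rng.normal(size=(n_bits, n_sources)) @ x.T > 0).T.astype(int)

def table_mse(subset, target):
    """Distortion when `target` is decoded from the bit pattern on `subset`."""
    cells = bits[:, subset] @ (1 << np.arange(len(subset)))  # cell index
    rec = np.zeros(n_train)
    for c in np.unique(cells):
        m = cells == c
        rec[m] = target[m].mean()        # codebook entry = cell-conditional mean
    return np.mean((target - rec) ** 2)

def greedy_select(target, budget=4):
    """Greedily pick the bit subset that most reduces distortion."""
    subset = []
    for _ in range(budget):
        best = min((b for b in range(n_bits) if b not in subset),
                   key=lambda b: table_mse(subset + [b], target))
        subset.append(best)
    return subset

for s in range(2):                       # show the first two sources
    sel = greedy_select(x[:, s])
    print(f"source {s}: bits {sel}, MSE {table_mse(sel, x[:, s]):.4f} "
          f"(table size {2**len(sel)} vs {2**n_bits} for all bits)")
```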


IEEE Transactions on Information Theory | 2016

Combinatorial Message Sharing and a New Achievable Region for Multiple Descriptions

Kumar Viswanatha; Emrah Akyol; Kenneth Rose

This paper presents a new achievable rate-distortion region for the general L-channel multiple descriptions (MD) problem. A well-known general region for this problem is due to Venkataramani, Kramer, and Goyal (VKG). Their encoding scheme is an extension of the El Gamal-Cover (EC) and Zhang-Berger (ZB) coding schemes to the L-channel case and includes a combinatorial number of refinement codebooks, one for each subset of the descriptions. As in ZB, the scheme also allows for a single common codeword to be shared by all descriptions. This paper proposes a novel encoding technique involving combinatorial message sharing (CMS), where every subset of the descriptions may share a distinct common message. This introduces a combinatorial number of shared codebooks along with the refinement codebooks of the VKG scheme. These shared codebooks provide a more flexible framework to trade off redundancy across the messages for resilience to description loss. We derive an achievable rate-distortion region for the proposed technique and show that it subsumes the VKG region for general sources and distortion measures. We further show that CMS provides a strict improvement of the achievable region for any source and distortion measure for which some two-description subset is such that ZB achieves points outside the EC region. We then show a more surprising result: CMS outperforms VKG for a general class of sources and distortion measures, including scenarios where the ZB and EC regions coincide for all two-description subsets. In particular, we show that CMS strictly improves on VKG for the L-channel quadratic Gaussian MD problem for all L ≥ 3, despite the fact that the EC region is complete for the corresponding two-description problem. Consequently, the correlated quantization scheme (an extreme special case of VKG), which has been proven to be optimal for several cross sections of the L-channel quadratic Gaussian MD problem, is strictly suboptimal in general. Using the encoding principles derived, we show that the CMS scheme achieves the complete rate-distortion region for several asymmetric cross sections of the L-channel quadratic Gaussian MD problem.


IEEE International Symposium on Information Theory | 2012

Combinatorial message sharing and random binning for multiple description coding

Emrah Akyol; Kumar Viswanatha; Kenneth Rose

This paper proposes a new multiple description (MD) coding method and an associated achievable rate-distortion region for L ≥ 2 channels. The proposed scheme randomly bins codebooks drawn from a codebook structure similar to that of the recently proposed combinatorial message sharing (CMS) scheme, which was designed for conditional codebook encoding. The proposed scheme effectively performs multilayer random binning for each subset of the descriptions, which makes it possible to exploit symmetry within any subset of the description rates wherever it exists. The new scheme specializes to conventional multilayer random binning as an extreme special case.

Collaboration


Dive into Kumar Viswanatha's collaborations.

Top Co-Authors

Kenneth Rose, University of California
Emrah Akyol, University of California
Tor A. Ramstad, Norwegian University of Science and Technology