Jayanth Nayak
University of California, Riverside
Publications
Featured research published by Jayanth Nayak.
Allerton Conference on Communication, Control, and Computing | 2008
Deniz Gunduz; Ertem Tuncel; Jayanth Nayak
A two-way relay channel in which two users communicate with each other through a relay terminal is considered. In particular, a "separated" two-way relay channel, in which the users do not receive each other's signals, is studied. Various achievable schemes are proposed and the corresponding achievable rate regions are characterized. Specifically, a combination of partial decode-and-forward and compress-and-forward schemes is proposed. In addition, compress-and-forward relaying with two-layered quantization, in which one of the users receives a better description of the signal received at the relay, is studied. Extensions of these achievable schemes to the Gaussian separated two-way relay channel are presented. It is shown that the compress-and-forward scheme achieves rates within half a bit of the capacity region in the Gaussian setting. Numerical results comparing the proposed achievable schemes in the Gaussian case are also presented.
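Claims that a scheme operates "within half a bit" of capacity in Gaussian settings typically rest on bounding the gap between two capacity-style logarithmic terms. As a generic numeric illustration of that style of argument (an assumed toy check, not the paper's actual proof), note that 0.5·log2(1+x) exceeds 0.5·log2(x) by at most half a bit whenever x ≥ 1:

```python
import math

def c(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x), in bits."""
    return 0.5 * math.log2(1 + x)

# Generic half-bit fact used in such gap arguments:
# 0.5*log2(1+x) - 0.5*log2(x) = 0.5*log2(1 + 1/x) <= 0.5 for x >= 1.
gaps = [c(snr) - 0.5 * math.log2(snr) for snr in (1.0, 2.0, 10.0, 1e6)]
```

The gap is exactly half a bit at x = 1 and shrinks toward zero as the SNR grows.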
IEEE Transactions on Information Theory | 2010
Jayanth Nayak; Ertem Tuncel; Deniz Gunduz
This paper addresses lossy transmission of a common source over a broadcast channel when there is correlated side information at the receivers, with emphasis on the quadratic Gaussian and binary Hamming cases. A digital scheme that combines ideas from the lossless version of the problem, i.e., Slepian-Wolf coding over broadcast channels, and dirty paper coding, is presented and analyzed. This scheme uses layered coding where the common layer information is intended for both receivers and the refinement information is destined only for one receiver. For the quadratic Gaussian case, a quantity characterizing the combined quality of each receiver is identified in terms of channel and side information parameters. It is shown that it is more advantageous to send the refinement information to the receiver with "better" combined quality. In the case where all receivers have the same overall quality, the presented scheme becomes optimal. Unlike its lossless counterpart, however, the problem eludes a complete characterization.
International Symposium on Information Theory | 2008
Deniz Gunduz; Jayanth Nayak; Ertem Tuncel
This paper deals with the design of coding schemes for transmitting a source over a broadcast channel when there is source side information at the receivers. Based on Slepian-Wolf coding over broadcast channels, three hybrid digital/analog schemes are proposed and their power-distortion tradeoff is investigated for Gaussian sources and Gaussian broadcast channels. All three transmit the same digital and analog information but with varying coding order. Although they are not provably optimal in general, they can significantly outperform uncoded transmission and separate source and channel coding.
IEEE Transactions on Information Theory | 2005
Jayanth Nayak; Kenneth Rose
This correspondence investigates the behavior of the compound channel under a zero-error constraint. We derive expressions for the capacity when (a) neither the encoder nor the decoder has side information about the channel, and (b) only the encoder has such side information. These expressions are given in terms of capacities of appropriately defined sets of graphs. We clarify that an earlier treatment of the zero-error capacity of a compound channel corresponds to the case where the decoder has side information about the channel. We also characterize the minimum asymptotic rate for the source coding dual of the compound channel problem, namely, source coding with compound side information. Finally, we contrast the zero-error and asymptotically vanishing-error capacities of the compound channel.
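In the zero-error setting, one-shot rates are governed by independent sets of the channel's confusability graph. A minimal sketch for the classic pentagon channel (five inputs, each confusable with its two neighbours) — an assumed textbook example, not one taken from the correspondence:

```python
from itertools import combinations

n = 5  # pentagon confusability graph C5: input i confusable with i +/- 1 mod 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_independent(s):
    """True if no two symbols in s can be confused at the receiver."""
    return all(frozenset(p) not in edges for p in combinations(s, 2))

# Independence number alpha(C5) = 2: the best one-shot zero-error code
# uses 2 inputs, for a rate of log2(2) = 1 bit per channel use.
alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r) if is_independent(s))
```

Larger zero-error rates are possible only by coding over blocks, which is why set-of-graphs capacities, rather than single-letter quantities, appear in the compound-channel expressions.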
IEEE Transactions on Information Theory | 2006
Jayanth Nayak; Ertem Tuncel; Kenneth Rose
This correspondence presents a novel application of the theta function defined by Lovász. The problem of coding for transmission of a source through a channel without error when the receiver has side information about the source is analyzed. Using properties of the Lovász theta function, it is shown that separate source and channel coding is asymptotically suboptimal in general. By contrast, in the case of vanishingly small probability of error, separate source and channel coding is known to be asymptotically optimal. For the zero-error case, it is further shown that the joint coding gain can in fact be unbounded. Since separate coding simplifies code design and use, conditions on sources and channels for the optimality of separate coding are also derived.
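For intuition on the theta function itself: Lovász's closed form for odd cycles gives θ(C_n) = n·cos(π/n)/(1 + cos(π/n)), and for the pentagon this equals √5, which is how Lovász resolved the zero-error capacity of C5. A quick numeric check of that standard fact (classical background, not a result of this correspondence):

```python
import math

def theta_odd_cycle(n):
    """Lovász theta of the odd cycle C_n: n*cos(pi/n) / (1 + cos(pi/n))."""
    c = math.cos(math.pi / n)
    return n * c / (1 + c)

theta5 = theta_odd_cycle(5)
# theta(C5) = sqrt(5), so the zero-error capacity of the pentagon channel
# is log2(sqrt(5)) ~= 1.161 bits per channel use.
```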
International Symposium on Information Theory | 2005
Jayanth Nayak; Sharadh Ramaswamy; Kenneth Rose
Motivated by the sensor network setting, we consider lossless storage of correlated discrete memoryless sources. The underlying tradeoff is between exploiting inter-source correlation for low-rate storage and efficient (low-rate) selective retrieval from the fusion storage. We define the problem of shared descriptions (SD) source coding and relate it to the storage and retrieval problem. We present an achievable rate region for the SD problem and use it to characterize the storage vs. retrieval tradeoff.
IEEE Transactions on Signal Processing | 2010
Ankur Saxena; Jayanth Nayak; Kenneth Rose
This paper considers the design of efficient quantizers for a robust distributed source coding system. The information is encoded at independent terminals and transmitted across separate channels, any of which may fail. The scenario subsumes a wide range of source and source-channel coding/quantization problems, including multiple descriptions and distributed source coding. Greedy descent methods depend heavily on initialization, and the presence of abundant (high density of) "poor" local optima on the cost surface strongly motivates the use of a global design algorithm. We propose a deterministic annealing approach for the design of all components of a generic robust distributed source coding system. Our approach avoids many poor local optima, is independent of initialization, and does not make any simplifying assumption on the underlying source distribution. Simulation results demonstrate a wide spread in the performance of greedy Lloyd-based algorithms, and considerable gains are achieved by using the proposed deterministic annealing approach.
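The deterministic annealing idea can be sketched in miniature: assign points to codewords softly with Gibbs weights at a temperature t, re-estimate the codebook, and cool. The toy 1-D scalar quantizer below is an illustrative sketch under assumed parameters, not the paper's distributed-system design algorithm:

```python
import math
import random

random.seed(0)
# Toy 1-D source: mixture of two well-separated clusters.
data = ([random.gauss(-2, 0.3) for _ in range(200)] +
        [random.gauss(+2, 0.3) for _ in range(200)])

def da_scalar_quantizer(data, k=2, t0=5.0, tmin=1e-3, cool=0.8, iters=30):
    """Deterministic-annealing-style codebook design (toy sketch).

    At temperature t, each point is softly assigned to codewords with
    Gibbs weights exp(-d/t); t is lowered geometrically so the soft
    solution gradually hardens, which helps avoid poor local optima."""
    cb = [0.0] * k          # initialization-independent: all codewords coincide
    t = t0
    while t > tmin:
        # Tiny perturbation so coincident codewords can split as t drops.
        cb = [c + 1e-4 * j for j, c in enumerate(cb)]
        for _ in range(iters):
            sums = [0.0] * k
            wts = [1e-12] * k
            for x in data:
                d = [(x - c) ** 2 for c in cb]
                m = min(d)
                w = [math.exp(-(dj - m) / t) for dj in d]  # stabilized Gibbs weights
                z = sum(w)
                for j in range(k):
                    sums[j] += w[j] / z * x
                    wts[j] += w[j] / z
            cb = [sums[j] / wts[j] for j in range(k)]
        t *= cool
    return sorted(cb)

codebook = da_scalar_quantizer(data)
# The two codewords track the cluster centres near -2 and +2.
```

A greedy Lloyd design started from a poor initialization can leave both codewords in one cluster; the annealed version reaches the two-cluster solution regardless of the starting codebook.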
Knowledge Discovery and Data Mining | 2002
Srinivasan Jagannathan; Jayanth Nayak; Kevin C. Almeroth; Markus Hofmann
There is huge demand for multimedia goods and services on the Internet. Currently available bandwidth can support the sale of downloadable content such as CDs and e-books, as well as services like video-on-demand, and such services will only become more prevalent. Since costs are typically fixed, maximizing revenue maximizes profits. A primary determinant of revenue in such e-content markets is how much value customers associate with the content. Though marketing surveys are useful, they cannot adapt to the dynamic nature of the Internet market. In this work, we examine how to learn customer valuations in close to real time. Our contributions in this paper are threefold: (1) we develop a probabilistic model to describe customer behavior, (2) we develop a framework for pricing e-content based on basic economic principles, and (3) we propose a price-discovery algorithm that learns customer behavior parameters and suggests prices to an e-content provider. We validate our algorithm using simulations. Our simulations indicate that our algorithm generates revenue close to the maximum expectation. Further, they also indicate that the algorithm is robust to transient customer behavior.
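The learn-then-price loop can be illustrated with a toy model (assumed exponential valuations and a hypothetical probe price — not the paper's customer model or algorithm):

```python
import math
import random

random.seed(1)
lam_true = 0.5  # hypothetical customer valuation rate: V ~ Exponential(lam)

def purchase_rate(price, n=5000):
    """Observed fraction of n customers whose valuation meets the price."""
    return sum(random.expovariate(lam_true) >= price for _ in range(n)) / n

# Learn: P(buy at price p) = exp(-lam * p), so lam can be read off
# the purchase frequency at a single probe price.
probe = 1.0
lam_hat = -math.log(purchase_rate(probe)) / probe

# Price: expected revenue p * exp(-lam * p) is maximised at p* = 1 / lam.
p_star = 1.0 / lam_hat
```

With lam_true = 0.5 the revenue-maximising price is 2, and the estimate lands close to it after a few thousand simulated customers; an online algorithm would refine the estimate as purchases stream in.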
IEEE Transactions on Information Theory | 2009
Jayanth Nayak; Ertem Tuncel
The rate-distortion (RD) problem for two-layer coding of a pair (X,Y) of correlated sources is considered. The first layer information enables reconstruction of X within a certain distortion DX, while reception of both layers additionally enables reconstruction of Y within distortion DY. Although this problem is a special case of the successive refinement problem, the computation of the RD region for this scenario is nontrivial. Using a general class of outer bounds (analogous to Shannon lower bound in the classical RD theory) to the successive refinement rate-distortion region, the successive coding RD region for the case where (X,Y) is a jointly Gaussian pair and the distortion measure is squared-error is explicitly characterized.
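The single-source Gaussian facts underlying such layered characterizations are easy to state: R(D) = ½·log2(σ²/D), and Gaussian sources are successively refinable, so refining from distortion D1 to D2 costs exactly ½·log2(D1/D2) extra bits. A numeric sanity check of that classical background (not the two-source region derived in the paper):

```python
import math

def r_gauss(var, d):
    """Gaussian rate-distortion function R(D) = 0.5*log2(var/D), 0 < D <= var."""
    return 0.5 * math.log2(var / d)

var_x = 1.0
d1, d2 = 0.25, 0.0625
base = r_gauss(var_x, d1)           # first-layer rate: 1 bit/sample
refine = r_gauss(var_x, d2) - base  # refinement rate: 1 more bit/sample
# No rate loss: the increment equals 0.5*log2(d1/d2) exactly.
```

The nontrivial part of the two-layer (X, Y) problem is precisely that this no-rate-loss property need not carry over when the second layer describes a correlated source rather than a refinement of the first.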
International Conference on Distributed Smart Cameras | 2008
Jayanth Nayak; Luis Gonzalez-Argueta; Bi Song; Amit K. Roy-Chowdhury; Ertem Tuncel
While wide-area video surveillance is an important application, it is often impractical, from both a technical and a social perspective, to deploy video cameras that completely cover the entire region of interest. Obtaining good surveillance results in a sparse camera network requires that the cameras be complemented by additional sensors with different modalities, that those sensors be intelligently assigned in a dynamic environment, and that the scene be understood using these multimodal inputs. In this paper, we propose a probabilistic scheme for opportunistically deploying cameras to the most interesting parts of a scene dynamically, given data from a set of video and audio sensors. The audio data is continuously processed to identify interesting events, e.g., entry/exit of people, merging or splitting of groups, and so on. This is used to indicate the time instants at which to turn on the cameras. Thereafter, analysis of the video determines how long the cameras stay on and whether their pan/tilt/zoom parameters change. Events are tracked continuously by combining the audio and video data. Correspondences between the audio and video sensor observations are obtained through a learned homography between the image plane and the ground plane. The method leads to efficient usage of camera resources by focusing on the most important parts of the scene, saves power, bandwidth, and cost, and reduces privacy concerns. We show detailed experimental results on real data collected in multimodal networks.
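The image-to-ground correspondence step relies on a planar homography: a 3×3 matrix H maps image-plane pixels to ground-plane coordinates in homogeneous form. A minimal sketch with a made-up H (a real one would be estimated from point correspondences, e.g. by the DLT method):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) to ground-plane coordinates:
    [u, v, w]^T = H [x, y, 1]^T, then dehomogenize by dividing by w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Hypothetical calibration matrix, for illustration only.
H = [[0.02, 0.000, -3.0],
     [0.00, 0.050, -8.0],
     [0.00, 0.001,  1.0]]
ground_xy = apply_homography(H, 320, 240)  # pixel -> ground-plane metres
```

Once H is learned, an audio event localized on the ground plane can be associated with the pixel region a camera should pan toward, which is what lets audio cue the video sensors.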