Publication


Featured research published by Ashish Jagmohan.


IEEE Transactions on Multimedia | 2004

Wyner-Ziv coding of video: an error-resilient compression framework

Anshul Sehgal; Ashish Jagmohan; Narendra Ahuja

This paper addresses the problem of video coding in a joint source-channel setting. In particular, we propose a video encoding algorithm that prevents the indefinite propagation of errors in predictively encoded video, a problem that has received considerable attention over the last decade. This is accomplished by periodically transmitting a small amount of additional information, termed coset information, to the decoder, as opposed to the popular approach of periodic insertion of intra-coded frames. Perhaps surprisingly, the coset information is capable of correcting errors without the encoder having precise knowledge of the lost packets that resulted in the errors. In the context of real-time transmission, the proposed approach entails a minimal loss in performance over conventional encoding in the absence of channel losses, while simultaneously allowing error recovery in the event of channel losses. We demonstrate the efficacy of the proposed approach through experimental evaluation. In particular, the performance of the proposed framework is 3-4 dB superior to the conventional approach of periodic insertion of intra-coded frames, and 1.5-2 dB away from an ideal system, with infinite decoding delay, operating at Shannon capacity.
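A toy scalar analogue conveys the coset idea above: the encoder transmits only the coset index of a value, and any decoder whose predictor (for example, a reconstruction that has drifted due to packet loss) is close enough recovers the value exactly, even though the encoder never sees that predictor. The modulus and numbers below are illustrative, not parameters from the paper.

```python
# Scalar coset ("syndrome") coding sketch. The encoder sends x mod M only;
# a decoder whose predictor is within M/2 of x recovers x exactly.
M = 16  # coset spacing (illustrative); correct decoding needs |x - predictor| < M/2

def coset_index(x: int) -> int:
    """Encoder: transmit only the coset index of x, not x itself."""
    return x % M

def coset_decode(index: int, predictor: int) -> int:
    """Decoder: choose the value congruent to `index` mod M closest to the predictor."""
    k = round((predictor - index) / M)
    return index + M * k
```

In the paper this role is played by coset information for entire frames, with channel codes in place of the scalar modulus; the key property is the same: the decoder's own (possibly drifted) reconstruction serves as the predictor.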


International Symposium on Computer Architecture | 2012

PreSET: improving performance of phase change memories by exploiting asymmetry in write times

Moinuddin K. Qureshi; Michele M. Franceschini; Ashish Jagmohan; Luis A. Lastras

Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation. This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to proactively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.
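The latency asymmetry that PreSET exploits can be sketched in a few lines. The cycle counts below are hypothetical placeholders, not the paper's measurements; the point is only the structure of the argument: a demand write to a fully PreSET line pays fast RESET-like latency instead of slow SET latency.

```python
# Back-of-the-envelope model of the PreSET benefit. All latencies are
# hypothetical, chosen only to reflect the SET >> RESET asymmetry.
SET_LATENCY = 1000   # cycles, slow direction (hypothetical)
RESET_LATENCY = 125  # cycles, fast direction, comparable to a read (hypothetical)

def write_latency(line_was_preset: bool) -> int:
    # If every cell of the line was SET in advance, the demand write only
    # needs fast RESET pulses for its 0-bits; otherwise SET pulses dominate.
    return RESET_LATENCY if line_was_preset else SET_LATENCY

def avg_demand_write_latency(preset_coverage: float) -> float:
    """Expected demand-write latency if a fraction `preset_coverage` of dirty
    lines completed their PreSET before eviction."""
    return preset_coverage * RESET_LATENCY + (1 - preset_coverage) * SET_LATENCY
```

Initiating the PreSET when the line first becomes dirty in the cache is what pushes `preset_coverage` toward 1: the window between first dirtying and eventual writeback is typically long.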


IEEE Transactions on Information Theory | 2009

On the Redundancy of Slepian–Wolf Coding

Dake He; Luis A. Lastras-Montano; En-hui Yang; Ashish Jagmohan; Jun Chen

In this paper, the redundancy of both variable-rate and fixed-rate Slepian-Wolf coding is considered. Given any jointly memoryless source-side information pair {(X_i, Y_i)}_{i=1}^∞ with finite alphabet, the redundancy R^n(ε_n) of variable-rate Slepian-Wolf coding of X_1^n with decoder-only side information Y_1^n depends on both the block length n and the decoding block error probability ε_n, and is defined as the difference between the minimum average compression rate of order-n variable-rate Slepian-Wolf codes having decoding block error probability at most ε_n and the conditional entropy rate H(X|Y) of the source given the side information. The redundancy of fixed-rate Slepian-Wolf coding of X_1^n with decoder-only side information Y_1^n is defined similarly and denoted by R_F^n(ε_n). It is proved that, under mild assumptions about ε_n, R^n(ε_n) = d_v √(−log ε_n / n) + o(√(−log ε_n / n)) and R_F^n(ε_n) = d_f √(−log ε_n / n) + o(√(−log ε_n / n)), where d_v and d_f are two constants completely determined by the joint distribution of the source-side information pair. Since d_v is generally smaller than d_f, our results show that variable-rate Slepian-Wolf coding is indeed more efficient than fixed-rate Slepian-Wolf coding.
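To get a numerical feel for the √(−log ε_n / n) scaling in the result above, the sketch below plugs in made-up constants. In the actual theorem d_v and d_f are determined by the joint source distribution; here they are arbitrary values satisfying the abstract's d_v < d_f relation.

```python
import math

# Illustration of R^n(eps) ~ d * sqrt(-log(eps) / n): the per-symbol
# redundancy above H(X|Y) vanishes as block length n grows, and the
# variable-rate constant d_v being smaller than d_f is exactly the claimed
# advantage of variable-rate coding. Constants below are hypothetical.
d_v, d_f = 0.8, 1.2  # hypothetical; in the theorem these come from the joint distribution

def redundancy(d: float, n: int, eps: float) -> float:
    """Leading redundancy term for block length n and error probability eps."""
    return d * math.sqrt(-math.log(eps) / n)

for n in (100, 10_000, 1_000_000):
    print(n, redundancy(d_v, n, 1e-6), redundancy(d_f, n, 1e-6))
```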


IEEE Conference on Mass Storage Systems and Technologies | 2010

Write amplification reduction in NAND Flash through multi-write coding

Ashish Jagmohan; Michele M. Franceschini; Luis A. Lastras

The block erase requirement in NAND Flash devices leads to the need for garbage collection. Garbage collection results in write amplification, that is, an increase in the number of physical page programming operations. Write amplification adversely impacts the limited lifetime of a NAND Flash device, and can add significant system overhead unless a large spare factor is maintained. This paper proposes a NAND Flash system which uses multi-write coding to reduce write amplification. Multi-write coding allows a NAND Flash page to be written more than once without requiring an intervening block erase. We present a novel two-write coding technique based on enumerative coding, which achieves linear coding rates with low computational complexity. The proposed technique also seeks to minimize memory wear by reducing the number of programmed cells per page write. We describe a system which uses lossless data compression in conjunction with multi-write coding, and show through simulations that the proposed system has significantly reduced write amplification and memory wear.
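The paper's two-write construction is based on enumerative coding; as a minimal, self-contained illustration of the underlying idea (writing a page twice when cells can only transition 0 to 1 between erases), here is the classic Rivest-Shamir write-once-memory code, which stores a 2-bit value twice in 3 one-way cells. This is a different, much older code than the paper's, included only to make "write twice without an erase" concrete.

```python
# Rivest-Shamir WOM code: 2 bits stored twice in 3 cells, cells flip 0 -> 1 only.
FIRST = {0b00: (0, 0, 0), 0b01: (1, 0, 0), 0b10: (0, 1, 0), 0b11: (0, 0, 1)}
SECOND = {v: tuple(1 - b for b in w) for v, w in FIRST.items()}  # bitwise complements

def decode(cells):
    # Weight <= 1 means a first-generation codeword; weight >= 2 means second.
    gen = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, w in gen.items() if w == tuple(cells))

def write(cells, value):
    """Program `value`, only ever flipping cells 0 -> 1 (no erase)."""
    if decode(cells) == value:
        return tuple(cells)  # already stored; program nothing
    target = FIRST[value] if sum(cells) == 0 else SECOND[value]
    assert all(t >= c for t, c in zip(target, cells)), "would need an erase"
    return target
```

The paper's enumerative-coding technique achieves the analogous guarantee at page scale with linear rates and low complexity, while also minimizing the number of programmed cells per write.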


IEEE Transactions on Information Theory | 2012

On Compression of Data Encrypted With Block Ciphers

Demijan Klinc; Carmit Hazay; Ashish Jagmohan; Hugo Krawczyk; Tal Rabin

This paper investigates compression of data encrypted with block ciphers, such as the Advanced Encryption Standard. It is shown that such data can be feasibly compressed without knowledge of the secret key. Block ciphers operating in various chaining modes are considered and it is shown how compression can be achieved without compromising security of the encryption scheme. Further, it is shown that there exists a fundamental limitation to the practical compressibility of block ciphers when no chaining is used between blocks. Some performance results for practical code constructions used to compress binary sources are presented.
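A quick standard-library experiment shows why this is nontrivial: to a conventional compressor, ciphertext looks random even when the plaintext is highly redundant. The toy SHA-256-based stream cipher below is for illustration only; it is not secure and is not one of the block-cipher chaining modes analyzed in the paper.

```python
import hashlib
import zlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustrative, not secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

plaintext = b"all work and no play " * 100          # highly redundant
ciphertext = bytes(p ^ k for p, k in
                   zip(plaintext, keystream(b"secret", len(plaintext))))

print(len(zlib.compress(plaintext)))   # small: zlib exploits the redundancy
print(len(zlib.compress(ciphertext)))  # roughly len(plaintext): looks random
```

The paper's point is that this apparent incompressibility is not fundamental when chaining modes are used: a decoder that holds the secret key can act as joint decompressor-decrypter, so the (keyless) compressor can still remove redundancy, in the spirit of source coding with decoder side information.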


International Conference on Image Processing | 2003

A state-free causal video encoding paradigm

Anshul Sehgal; Ashish Jagmohan; Narendra Ahuja

A commonly encountered problem in the communication of predictively encoded video is that of predictive mismatch or drift. The problem of predictive mismatch manifests itself in numerous communication scenarios, including on-demand streaming, real-time streaming and multicast streaming. This paper proposes a state-free video encoding architecture that alleviates this problem. The main benefit of state-free encoding is that there is no need for the encoder and the decoder to maintain the same state, or equivalently, predict using the same predictor. This facilitates robust communication of causally encoded media. The proposed approach is based on the Wyner-Ziv theorem in information theory. Consequently, it leverages the superior performance of coset codes for the Wyner-Ziv problem for predictive coding. A video codec, with state-free functionality, based on the H.26L encoding standard is proposed. The performance of the proposed codec is within 1-2.5 dB of the H.26L encoder.


IEEE Transactions on Circuits and Systems for Video Technology | 2003

MPEG-4 one-pass VBR rate control for digital storage

Ashish Jagmohan; Krishna Ratakonda

One-pass, variable bit-rate (VBR) rate control is ideally suited to the requirements of real-time video encoding for the purpose of digital storage. Previous MPEG one-pass VBR rate control algorithms have been based on appropriate selection of quantization scale parameters for controlling the bit rate and quality of the output bitstream. The major disadvantage of relying solely on quantization scales for rate control is the introduction of significant perceptual distortion when high quantization scales are used. We propose an MPEG-4, one-pass, VBR rate control scheme that relies on the selective use of the MPEG-4 reduced resolution mode to supplement modulation of the quantization scale and provide an effective rate control strategy. Experimental results show that the proposed algorithm can encode high-complexity, standard definition (720 × 480) video sequences at rates as low as 750 kbps without incurring significant perceptual artifacts.
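The core per-frame decision can be sketched as follows. The threshold and the factor-of-4 rate relation below are hypothetical simplifications for illustration, not values from the paper; the structure, capping the quantizer and falling back to reduced-resolution coding for hard frames, is the idea the abstract describes.

```python
# Sketch of the quantizer-vs-reduced-resolution trade-off. Instead of pushing
# the quantization scale past the range where blocking artifacts appear, hard
# frames switch to MPEG-4 reduced resolution mode. Thresholds are hypothetical.
QP_MAX_PERCEPTUAL = 20  # hypothetical: quantizer scales above this look bad

def choose_mode(required_qp: float):
    """Return (quantizer_scale, reduced_resolution_flag) for one frame,
    where required_qp is the scale the rate model says the bit budget needs."""
    if required_qp <= QP_MAX_PERCEPTUAL:
        return required_qp, False
    # Reduced-resolution coding roughly quarters the coded samples, so a much
    # lower quantizer reaches the same bit budget (hypothetical 4x relation).
    return max(2.0, required_qp / 4), True
```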


International Conference on Image Processing | 2002

Predictive encoding using coset codes

Ashish Jagmohan; Anshul Sehgal; Narendra Ahuja

Predictive encoding with respect to multiple possible predictors is a common scenario encountered in many digital set-top box applications, such as redundant storage of video/audio data, real-time robust communication with peripherals and Internet video/audio telephony. A key problem associated with this scenario is that of predictive mismatch or drift. In the present paper, we pose the problem of predictive encoding with multiple possible predictors as a variant of the well-known Wyner-Ziv side-information problem. We propose an approach based on the use of coset codes for predictive encoding, for mitigating the effect of drift without overly sacrificing compression efficiency. The proposed approach can be used to improve coding performance in a wide range of practical applications such as multiple description coding, scalable coding and redundant storage of video/audio streams. We illustrate the efficacy of the proposed approach through a simple example based on the application of low-delay Internet telephony. Our results indicate that the proposed approach significantly outperforms conventional predictive encoding for communication over lossy channels.


Asilomar Conference on Signals, Systems and Computers | 2003

Compression of lightfield rendered images using coset codes

Ashish Jagmohan; Anshul Sehgal; Narendra Ahuja

Image-based rendering (IBR) and lightfield rendering (LFR) techniques aim to represent a 3D real-world environment by densely sampling it through a set of fixed viewpoint cameras. Remote digital walkthroughs of the 3D environment are facilitated by synthesizing novel viewpoints from the captured view-set. The large amount of data generated by the dense capture process makes the use of compression imperative for practical IBR/LFR systems. In the present paper, we consider the design of compression techniques for streaming of IBR data to remote viewers. The key constraints that a compression algorithm for IBR streaming is required to satisfy are those of random access for interactivity, and precompression. We propose a compression algorithm based on the use of coset codes for this purpose. The proposed algorithm employs H.264 source compression in conjunction with LDPC coset codes to precompress the IBR data. Appropriate coset information is transmitted to the remote viewers to allow interactive view generation. Results indicate that the proposed compression algorithm provides good compression efficiency, while allowing client interactivity and server precompression.


International Symposium on Information Theory | 2010

Algorithms for memories with stuck cells

Luis A. Lastras-Montano; Ashish Jagmohan; Michele M. Franceschini

We present a class of algorithms for encoding data in memories with stuck cells. These algorithms rely on earlier code constructions termed cyclic Partitioned Linear Block Codes. For the corresponding q-ary BCH-like codes for u stuck cells in a codeword of length n, our encoding algorithm has complexity O((u log_q n)^2) operations in F_q, which we will show compares favorably to a generic approach based on Gaussian elimination. The computational complexity improvements are realized by taking advantage of the algebraic structure of cyclic codes for stuck cells. The algorithms are also applicable to cyclic codes for both stuck cells and errors.
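The "mask the codeword so it agrees with the stuck cells" idea behind such codes can be shown in a minimal form. The toy code below, not the paper's construction, tolerates any single stuck cell using one redundant bit: messages are stored as even-parity words of odd length, and the all-ones mask is added whenever the unmasked word disagrees with the stuck cell. The encoder knows the stuck position (as in the stuck-cell model); the decoder needs no such knowledge.

```python
# Minimal single-stuck-cell masking code (1 redundant bit, odd word length n).
# Exactly one of {word, complement(word)} matches any single stuck cell, and
# complementing an odd-length word flips its parity, which is how the decoder
# detects whether the mask was applied.

def encode(bits, stuck_pos, stuck_val):
    """bits: even-length list of message bits; returns n = len(bits)+1 cells."""
    assert len(bits) % 2 == 0, "message length must be even so n is odd"
    word = bits + [sum(bits) % 2]        # append parity bit -> even-parity word
    if word[stuck_pos] != stuck_val:     # encoder knows the stuck cell
        word = [1 - b for b in word]     # apply the all-ones mask
    return word

def decode(cells):
    """Recover the message; the decoder never learns which cell was stuck."""
    if sum(cells) % 2 == 1:              # odd parity => the mask was applied
        cells = [1 - b for b in cells]
    return cells[:-1]                    # drop the parity bit
```

The paper's cyclic Partitioned Linear Block Codes generalize this to u stuck cells (and additionally to errors), with the cyclic structure yielding the fast O((u log_q n)^2) encoding in place of generic Gaussian elimination.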
