Aman Bhatia
SK Hynix
Publications
Featured research published by Aman Bhatia.
IEEE Journal on Selected Areas in Communications | 2014
Aman Bhatia; Minghai Qin; Aravind R. Iyengar; Brian M. Kurkoski; Paul H. Siegel
We consider t-write codes for write-once memories with n cells that can store multiple levels. Assuming an underlying lattice-based construction and using the continuous approximation, we derive upper bounds on the worst-case sum-rate optimal and fixed-rate optimal n-cell t-write write-regions for the asymptotic case of continuous levels. These are achieved using hyperbolic shaping regions that have a gain of 1 bit/cell over cubic shaping regions. Motivated by these hyperbolic write-regions, we discuss construction and encoding of codebooks for cells with discrete support. We present a polynomial-time algorithm to assign messages to the codebooks and show that it achieves the optimal sum-rate for any given codebook when n = 2. Using this approach, we construct codes that achieve high sum-rate. We describe an alternative formulation of the message assignment problem for n≥ 3, a problem which remains open.
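The t-write idea can be illustrated with the classic binary Rivest–Shamir [3,2]² WOM code, rather than the paper's multilevel lattice construction: two bits are written twice into three write-once cells (bits may only change 0 → 1), for a sum-rate of 4/3 bits/cell. A minimal sketch:

```python
# Classic Rivest-Shamir 2-write WOM code (illustrative only; the paper
# studies multilevel cells with lattice-based shaping, not this code).
# First-generation codewords have weight <= 1; a second write either keeps
# the state (same message) or writes the complement of the new message's
# first-generation codeword, so cells only ever change 0 -> 1.

FIRST_GEN = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
DECODE_GEN1 = {v: k for k, v in FIRST_GEN.items()}

def write1(m):
    return FIRST_GEN[m]

def decode(state):
    if sum(state) <= 1:                        # first-generation codeword
        return DECODE_GEN1[state]
    flipped = tuple(1 - b for b in state)      # second-generation codeword
    return DECODE_GEN1[flipped]

def write2(state, m):
    if decode(state) == m:                     # message unchanged: keep state
        return state
    target = tuple(1 - b for b in FIRST_GEN[m])
    assert all(t >= s for s, t in zip(state, target)), "cells only go 0 -> 1"
    return target

# exhaustively verify all 16 write sequences
for m1 in range(4):
    for m2 in range(4):
        s1 = write1(m1)
        s2 = write2(s1, m2)
        assert decode(s1) == m1 and decode(s2) == m2
```

The exhaustive check confirms that every two-message sequence decodes correctly; the hyperbolic write-regions in the paper generalize this trade-off between the rates of successive writes to n multilevel cells.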
IEEE Information Theory Workshop | 2012
Aman Bhatia; Aravind R. Iyengar; Paul H. Siegel
We consider t-write codes for write-once memories with cells that can store multiple levels. Using worst-case sum-rate optimal 2-cell t-write code constructions for the asymptotic case of continuous levels, we derive 2-cell t-write code constructions that give good sum-rates for cells that support q discrete levels. A general encoding scheme for q-level 2-cell t-write codes is provided.
IEEE Information Theory Workshop | 2015
Aman Bhatia; Veeresh Taranalli; Paul H. Siegel; Shafa Dahandeh; Anantha Raman Krishnan; Patrick J. Lee; Dahua Qin; Moni Sharma; Teik Ee Yeo
Polar codes provably achieve the capacity of binary memoryless symmetric (BMS) channels with low complexity encoding and decoding algorithms, and their finite-length performance on these channels, when combined with suitable decoding algorithms (such as list decoding) and code modifications (such as a concatenated CRC code), has been shown in simulation to be competitive with that of LDPC codes. However, magnetic recording channels are generally modeled as binary-input intersymbol interference (ISI) channels, and the design of polar coding schemes for these channels remains an important open problem. Current magnetic hard disk drives use LDPC codes incorporated into a turbo-equalization (TE) architecture that combines a soft-output channel detector with a soft-input, soft-output sum-product algorithm (SPA) decoder. An interleaved coding scheme with a multistage decoding (MSD) architecture with LDPC codes as component codes has been proposed as an alternative to TE for ISI channels. In this work, we investigate the use of polar codes as component codes in the TE and MSD architectures. It is shown that the achievable rate of the MSD scheme converges to the symmetric information rate of the ISI channel when the number of interleaves is large. Simulation results comparing the performance of LDPC codes and polar codes in TE and MSD architectures are presented.
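The polar encoding core shared by both architectures is the transform x = u·F⊗n over GF(2), with kernel F = [[1,0],[1,1]], computable with the standard O(N log N) butterfly. A minimal sketch (the channel detector and TE/MSD machinery from the paper are not reproduced here):

```python
def polar_transform(u):
    """Apply the n-fold Kronecker power of F = [[1,0],[1,1]] over GF(2)
    to a bit vector u of length 2^n, using in-place butterflies."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):       # each butterfly group
            for j in range(i, i + step):
                x[j] ^= x[j + step]           # (a, b) -> (a + b, b) over GF(2)
        step *= 2
    return x
```

Over GF(2) the kernel F is its own inverse, so applying the transform twice recovers the input, which makes the sketch easy to sanity-check.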
IEEE Transactions on Magnetics | 2014
Aman Bhatia; Shaohua Yang; Paul H. Siegel
We study the problem of designing a rate-1 block-precoder to minimize bit/symbol error rate when storing a given source on a magnetic recording channel. A block-precoder of length b bits is defined by a permutation π on the 2^b blocks. We show that the problem of finding a permutation for the block-precoder that minimizes bit/symbol error rate is equivalent to solving the quadratic assignment problem, a known combinatorial optimization problem that is NP-complete. We exploit the symmetry group of the b-dimensional hypercube to reduce the search space, allowing a branch-and-bound technique to find the optimal 5-bit precoders. We also implement a local search algorithm that can find good precoders for larger blocklengths. We design precoders for MTR-constrained user bits and unconstrained parity bits with a reverse-concatenation architecture, and we evaluate the resulting SNR gains in a turbo equalization scheme.
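The precoder-design-as-QAP connection can be sketched with a toy model (the cost matrices below are hypothetical stand-ins, not the paper's channel model): a length-b block-precoder is a permutation π on the 2^b binary blocks, and we search for the π minimizing the QAP objective J(π) = Σ_{i,j} F[i][j]·D[π(i)][π(j)], where F[i][j] is the probability that source block j follows block i and D[x][y] is a toy channel penalty for writing block y after block x.

```python
from itertools import permutations, product

b = 2
blocks = list(product([0, 1], repeat=b))       # the 2^b = 4 binary blocks
N = len(blocks)

p = 0.9                                        # hypothetical Markov source:
def pair_prob(x, y):                           # P(next bit == current bit) = p
    seq = x + y
    pr = 0.5                                   # uniform first bit
    for a, c in zip(seq, seq[1:]):
        pr *= p if a == c else 1 - p
    return pr

def penalty(x, y):
    """Toy channel cost: bit transitions when block y is written after x
    (the x|y boundary transition plus transitions inside y)."""
    seq = (x[-1],) + y
    return sum(a != c for a, c in zip(seq, seq[1:]))

F = [[pair_prob(x, y) for y in blocks] for x in blocks]   # "flow" matrix
D = [[penalty(x, y) for y in blocks] for x in blocks]     # "distance" matrix

def cost(pi):
    return sum(F[i][j] * D[pi[i]][pi[j]]
               for i in range(N) for j in range(N))

# brute force is feasible only for tiny b; the paper uses hypercube
# symmetries plus branch-and-bound (b = 5) and local search (larger b)
best = min(permutations(range(N)), key=cost)
```

Since the (2^b)! search space explodes quickly, exhaustive enumeration like this only demonstrates the objective's structure; the symmetry reduction and branch-and-bound in the paper are what make b = 5 tractable.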
IEEE Communications Letters | 2017
Minghai Qin; Jing Guo; Aman Bhatia; Albert Guillen i Fabregas; Paul H. Siegel
Polar code constructions based on mutual information or Bhattacharyya parameters of bit-channels are intended for hard-output successive cancellation (SC) decoders, and thus might not be well designed for use with other decoders, such as soft-output belief propagation (BP) decoders or successive cancellation list (SCL) decoders. In this letter, we use the evolution of messages, i.e., log-likelihood ratios, of unfrozen bits during iterative BP decoding of polar codes to identify weak bit-channels, and then modify the conventional polar code construction by swapping these bit-channels with strong frozen bit-channels. The modified codes show improved performance not only under BP decoding, but also under SCL decoding. The code modification is shown to reduce the number of low-weight codewords, with and without CRC concatenation.
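The conventional hard-output SC design that the letter's modification starts from can be sketched for a BEC(ε), where the Bhattacharyya parameter of every bit-channel evolves exactly under the polar transform: Z(W⁻) = 2Z − Z² and Z(W⁺) = Z². A minimal sketch:

```python
def bec_bhattacharyya(n, eps):
    """Exact Bhattacharyya parameters of the 2^n bit-channels of a BEC(eps)."""
    z = [eps]
    for _ in range(n):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)   # W^-: degraded check-node channel
            nxt.append(zi * zi)            # W^+: upgraded variable-node channel
        z = nxt
    return z

def info_set(n, k, eps):
    """Conventional construction: place the k information bits on the
    bit-channels with the smallest Bhattacharyya parameter; freeze the rest."""
    z = bec_bhattacharyya(n, eps)
    order = sorted(range(len(z)), key=lambda i: z[i])   # most reliable first
    return sorted(order[:k])
</imports>```

This ranking targets SC decoding; the letter's point is that under BP or SCL decoding some of these "unfrozen" bit-channels turn out to be weak, and swapping them with strong frozen ones improves performance.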
IEEE Global Communications Conference | 2011
Aman Bhatia; Aravind R. Iyengar; Paul H. Siegel
We investigate the reasons behind the superior performance of belief propagation decoding of non-binary LDPC codes over their binary images when the transmission occurs over the binary erasure channel. We show that although decoding over the binary image has lower complexity, it has worse performance owing to its larger number of stopping sets relative to the original non-binary code. We propose a method to find redundant parity-checks of the binary image that eliminate these additional stopping sets, so that we achieve performance comparable to that of the original non-binary LDPC code with lower decoding complexity.
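The mechanism can be sketched with a tiny hypothetical parity-check matrix (not from the paper): iterative erasure decoding on the BEC stalls exactly on a stopping set, i.e., a set of erased bits that every check touches zero or at least two times, and a redundant parity-check that touches the set exactly once breaks the deadlock.

```python
def peel(H, y):
    """Peeling decoder over the BEC: repeatedly resolve an erasure (None)
    using any check with exactly one erased participant."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [j for j in range(len(y)) if row[j] and y[j] is None]
            if len(unknown) == 1:
                j = unknown[0]
                y[j] = sum(y[k] for k in range(len(y))
                           if row[k] and k != j) % 2
                progress = True
    return y

H = [[1, 1, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]
received = [None, None, None, 1]     # codeword 1101 with bits 0,1,2 erased

stalled = peel(H, received)          # {0,1,2} is a stopping set: no progress

# the redundant check r0 + r1 = [1,0,0,1] touches the set only once,
# so adding it lets peeling recover all three erased bits
redundant = [a ^ b for a, b in zip(H[0], H[1])]
recovered = peel(H + [redundant], received)
```

Here the redundant row is a GF(2) sum of existing rows, so it does not change the code, only the decoder's Tanner graph; that is the same mechanism the proposed method applies to the binary image of a non-binary LDPC code.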
Archive | 2017
Yi-min Lin; Aman Bhatia; Naveen Kumar; Johnson Yen
Archive | 2017
Yi-min Lin; Aman Bhatia; Naveen Kumar; Johnson Yen
Archive | 2016
Aman Bhatia; Naveen Kumar; Yi-min Lin; Lingqi Zeng
Archive | 2016
Yi-min Lin; Aman Bhatia; Naveen Kumar; Chung-li Wang; Lingqi Zeng