Publication


Featured research published by Amir H. Banihashemi.


IEEE Communications Letters | 2004

On construction of rate-compatible low-density parity-check codes

Mohammadreza Yazdani; Amir H. Banihashemi

In this letter, we present a framework for constructing rate-compatible low-density parity-check (LDPC) codes. The codes are linear-time encodable and are constructed from a mother code using puncturing and extending. Application of the proposed construction to a type-II hybrid automatic repeat request (ARQ) scheme with information block length k=1024 and code rates 8/19 to 8/10, using an optimized irregular mother code of rate 8/13, results in a throughput which is only about 0.7 dB away from the Shannon limit. This outperforms existing similar schemes based on turbo codes and LDPC codes by up to 0.5 dB.
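Rate compatibility here is essentially arithmetic on block lengths: puncturing removes parity bits from the mother code to raise the rate, extending adds parity bits to lower it. A small sketch (the helper name is ours, not from the letter) reproduces the bit counts implied by the quoted rates:

```python
def bits_to_adjust(k, mother_rate, target_rate):
    """Parity bits to add (positive: extend) or remove (negative:
    puncture) to move a mother code carrying k information bits from
    mother_rate to target_rate, keeping k fixed."""
    n_mother = round(k / mother_rate)    # mother-code block length
    n_target = round(k / target_rate)    # block length at the target rate
    return n_target - n_mother

# For k = 1024 and the rate-8/13 mother code in the letter:
# reaching rate 8/10 means puncturing 384 bits,
# reaching rate 8/19 means extending by 768 bits.
```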


IEEE Transactions on Communications | 2005

On implementation of min-sum algorithm and its modifications for decoding low-density parity-check (LDPC) codes

Jianguang Zhao; Farhad Zarkeshvari; Amir H. Banihashemi

The effects of clipping and quantization on the performance of the min-sum algorithm for the decoding of low-density parity-check (LDPC) codes at short and intermediate block lengths are studied. It is shown that in many cases, only four quantization bits suffice to obtain close to ideal performance over a wide range of signal-to-noise ratios. Moreover, we propose modifications to the min-sum algorithm that improve the performance by a few tenths of a decibel with just a small increase in decoding complexity. A quantized version of these modified algorithms is also studied. It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.
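As a rough illustration of the operations being clipped and quantized (a sketch under our own conventions, not the paper's fixed-point design): the min-sum check-node update and a uniform sign-magnitude quantizer with a clipping threshold:

```python
def quantize(x, clip=4.0, bits=4):
    """Uniform sign-magnitude quantizer: clip to [-clip, clip], then
    round to one of 2**(bits-1) - 1 magnitude levels per sign."""
    levels = 2 ** (bits - 1) - 1          # 7 magnitude levels for 4 bits
    step = clip / levels
    q = round(x / step)
    return max(-levels, min(levels, q)) * step

def check_update(v2c):
    """Min-sum check-node update: each outgoing message is the product
    of the signs times the minimum magnitude of the *other* incoming
    variable-to-check messages."""
    out = []
    for k in range(len(v2c)):
        others = v2c[:k] + v2c[k + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

The min-sum update needs only comparisons and sign logic (no transcendental functions), which is why it tolerates coarse quantization so well.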


International Conference on Communications | 2001

A heuristic search for good low-density parity-check codes at short block lengths

Yongyi Mao; Amir H. Banihashemi

For a given block length and given degree sequences of the underlying Tanner graph (TG), the ensemble of short low-density parity-check (LDPC) codes can have considerable variation in performance. We present an efficient heuristic method to find good LDPC codes based on what we define as the girth distribution of the TG. This method can be used effectively to design short codes for applications where delay and complexity are of major concern.
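The heuristic ranks candidate codes by the girth distribution of the Tanner graph. The paper's search procedure is not reproduced here, but the underlying quantity, the girth, can be computed by BFS from every vertex (a standard method; the minimum candidate cycle length over all roots is the exact girth):

```python
from collections import deque

def girth(adj):
    """Shortest cycle length in a simple undirected graph, or inf if
    the graph is acyclic.  adj maps each vertex to its neighbor list."""
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:                  # tree edge
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:               # non-tree edge closes a walk
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Tanner graphs are bipartite, so their girth is always even (at least 4).
```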


IEEE Communications Letters | 2004

Iterative layered space-time receivers for single-carrier transmission over severe time-dispersive channels

Rui Dinis; Reza Kalbasi; David D. Falconer; Amir H. Banihashemi

This letter presents an iterative layered space-time (LST) receiver structure for single-carrier (SC) transmission in severe time-dispersive channels. The proposed receiver combines LST principles with iterative block decision feedback equalization (IB-DFE) techniques. Our performance results show that the proposed receivers perform very well in severe time-dispersive channels, coming close to the matched filter bound (MFB) after just a few iterations.


IEEE Transactions on Communications | 2004

Graph-based message-passing schedules for decoding LDPC codes

Hua Xiao; Amir H. Banihashemi

We study a wide range of graph-based message-passing schedules for iterative decoding of low-density parity-check (LDPC) codes. Using the Tanner graph (TG) of the code and for different nodes and edges of the graph, we relate the first iteration in which the corresponding messages deviate from their optimal value (corresponding to a cycle-free graph) to the girths and the lengths of the shortest closed walks in the graph. Using this result, we propose schedules designed based on the distribution of girths and closed walks in the TG of the code, and categorize them as node based versus edge based, unidirectional versus bidirectional, and deterministic versus probabilistic. These schedules, in some cases, outperform the previously known schedules, and in other cases, provide less complex alternatives with more or less the same performance. The performance/complexity tradeoff and the best choice of schedule appear to depend not only on the girth and closed-walk distributions of the TG, but also on the iterative decoding algorithm and channel characteristics. We examine the application of schedules to belief propagation (sum-product) over additive white Gaussian noise (AWGN) and Rayleigh fading channels, min-sum (max-sum) over an AWGN channel, and Gallager's algorithm A over a binary symmetric channel.
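A minimal way to see the schedule distinction (a toy sketch of ours, not the paper's girth-based schedules): in a flooding schedule every check node is updated from the previous iteration's messages, while a check-serial schedule lets later checks use messages already refreshed within the current iteration:

```python
import numpy as np

def min_sum_decode(H, llr, iters=10, schedule="flooding"):
    """Min-sum decoding of the code with parity-check matrix H from
    channel LLRs, under a flooding or check-serial schedule.  Toy
    sketch; a 1 in the output marks a bit decided as 1 (negative LLR)."""
    H = np.asarray(H)
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    M = np.zeros((m, n))                       # check-to-variable messages
    for _ in range(iters):
        # flooding reads a frozen snapshot; serial reads live messages
        snap = M.copy() if schedule == "flooding" else M
        for i in range(m):
            cols = np.flatnonzero(H[i])
            # extrinsic variable-to-check messages into check i
            v2c = llr[cols] + snap[:, cols].sum(axis=0) - snap[i, cols]
            for k, j in enumerate(cols):
                others = np.delete(v2c, k)
                sign = 1.0 if (others < 0).sum() % 2 == 0 else -1.0
                M[i, j] = sign * np.abs(others).min()
    return (llr + M.sum(axis=0) < 0).astype(int)
```

On graphs with cycles, serial schedules typically propagate reliable information in fewer iterations, which is one source of the complexity savings the paper reports.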


IEEE Transactions on Information Theory | 1998

On the complexity of decoding lattices using the Korkin-Zolotarev reduced basis

Amir H. Banihashemi; Amir K. Khandani

Upper and lower bounds are derived for the decoding complexity of a general lattice L. The bounds are in terms of the dimension n and the coding gain γ of L, and are obtained based on a decoding algorithm which is an improved version of Kannan's (1983) method. The latter is currently the fastest known method for the decoding of a general lattice. For the decoding of a point x, the proposed algorithm recursively searches inside an n-dimensional rectangular parallelepiped (cube), centered at x, with its edges along the Gram-Schmidt vectors of a proper basis of L. We call algorithms of this type recursive cube search (RCS) algorithms. It is shown that Kannan's algorithm also belongs to this category. The complexity of RCS algorithms is measured in terms of the number of lattice points that need to be examined before a decision is made. To tighten the upper bound on the complexity, we select a lattice basis which is reduced in the sense of Korkin-Zolotarev (1873). It is shown that for any selected basis, the decoding complexity (using RCS algorithms) of any sequence of lattices with possible application in communications (γ ≥ 1) grows at least exponentially with n and γ. It is observed that the densest lattices, and almost all of the lattices used in communications, e.g., Barnes-Wall lattices and the Leech lattice, have equal successive minima (ESM). For the decoding complexity of ESM lattices, a tighter upper bound and a stronger lower bound are derived.
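The cube-search idea, examining lattice points inside a box centered at the received point, can be illustrated with a brute-force toy (our sketch; real RCS algorithms prune the search recursively per dimension rather than enumerating a fixed box):

```python
import numpy as np
from itertools import product

def decode_lattice(B, x, r=1):
    """Toy cube search: examine all integer coefficient vectors within
    +-r of the rounded coordinates of x in basis B (rows of B are the
    basis vectors); return the closest lattice point.  (2r+1)**n points
    are examined, illustrating the exponential growth in dimension n."""
    B = np.asarray(B, dtype=float)
    x = np.asarray(x, dtype=float)
    c0 = np.round(np.linalg.solve(B.T, x)).astype(int)   # center coefficients
    best, best_d = None, np.inf
    for d in product(range(-r, r + 1), repeat=len(c0)):
        p = (c0 + np.array(d)) @ B                       # candidate point
        dist = np.linalg.norm(p - x)
        if dist < best_d:
            best, best_d = p, dist
    return best
```

The complexity measure in the paper is exactly the number of candidate points examined; a better-reduced basis (e.g., Korkin-Zolotarev) shrinks the box that must be searched.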


IEEE Transactions on Information Theory | 2011

Lowering the Error Floor of LDPC Codes Using Cyclic Liftings

Reza Asvadi; Amir H. Banihashemi; Mahmoud Ahmadian-Attari

Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which are part of the trapping sets. We derive a necessary and sufficient condition for the cyclic permutations assigned to the edges of a cycle ξ of length l(ξ) in the base graph such that the inverse image of ξ in the lifted graph consists of only cycles of length strictly larger than l(ξ). The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, and in some cases, the code rate. The constructed codes are quasi-cyclic and thus attractive from a practical point of view. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently by increasing the lifting degree, and the results show significant improvements in the error floor compared to the base code, a random code of the same degree distribution and block length, and a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
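The construction yields quasi-cyclic codes: each edge of the base graph is replaced by an N × N cyclic permutation block selected by the assigned shift. A minimal sketch of the lifting operation itself (the paper's necessary and sufficient condition on the shifts, which removes the short inverse images of a cycle, is not reproduced here):

```python
import numpy as np

def cyclic_lift(base, shifts, N):
    """Replace each 1 in the base biadjacency matrix by an N x N cyclic
    permutation matrix (identity circularly shifted by shifts[i][j]) and
    each 0 by an N x N zero block.  Degree distributions are preserved."""
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=int)
    I = np.eye(N, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*N:(i+1)*N, j*N:(j+1)*N] = np.roll(I, shifts[i][j], axis=1)
    return H
```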


IEEE Journal of Solid-state Circuits | 2006

A 0.18-μm CMOS Analog Min-Sum Iterative Decoder for a (32,8) LDPC Code

Saied Hemati; Amir H. Banihashemi; Calvin Plett

Current-mode circuits are presented for implementing analog min-sum (MS) iterative decoders. These decoders are used to efficiently decode the best known error-correcting codes, such as low-density parity-check (LDPC) codes and turbo codes. The proposed circuits are devised based on current mirrors, and thus analog MS decoders can be implemented in any fabrication technology in which accurate current mirrors can be designed. The functionality of the proposed circuits is verified by implementing an analog MS decoder for a (32,8) LDPC code in a 0.18-μm CMOS technology. This decoder is the first reported analog MS decoder. For low signal-to-noise ratios, where the circuit imperfections are dominated by the noise of the channel, the measured error-correcting performance of this chip in steady-state condition surpasses that of the conventional floating-point discrete-time synchronous MS decoder. When the data throughput is 6 Mb/s, the loss in coding gain compared to the conventional MS decoder at a BER of 10⁻³ is about 0.3 dB, and the power consumption is about 5 mW. This is the first time that an analog decoder has been successfully tested for an LDPC code, albeit a short one.


IEEE Transactions on Information Theory | 2012

Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes

Mehdi Karimi; Amir H. Banihashemi

This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. This algorithm can be used to estimate the error floor of LDPC codes or as part of a procedure for designing LDPC codes with low error floors. The algorithm is initiated with a set of short cycles as the input. The cycles are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-Pinsker (ZP) trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than the existing search algorithms for dominant trapping sets.
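An (a, b) trapping set is a set of a variable nodes whose induced subgraph leaves b check nodes with odd degree. The search algorithm itself is not reproduced here, but the labelling it works with is easy to state (a small sketch, with names of our own choosing):

```python
def ts_profile(H, S):
    """(a, b) label of a variable-node subset S of the code with
    parity-check matrix H: a = |S|, b = number of check nodes joined to
    S an odd number of times (the checks left unsatisfied when exactly
    the bits in S are in error)."""
    b = sum(1 for row in H if sum(row[v] for v in S) % 2 == 1)
    return (len(S), b)
```

Dominant trapping sets are those with small a and small b; the paper grows them recursively from the short cycles of the Tanner graph.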


Global Communications Conference | 2002

CMOS Analog Min-Sum Iterative Decoder for a (32,8) Low-Density Parity-Check (LDPC) Code

Farhad Zarkeshvari; Amir H. Banihashemi

This paper is concerned with the implementation issues of the so-called min-sum algorithm (also referred to as max-sum or max-product) for the decoding of low-density parity-check (LDPC) codes. The effects of the clipping threshold and the number of quantization bits on the performance of the min-sum algorithm at short and intermediate block lengths are studied. It is shown that min-sum is robust against quantization effects, and in many cases, only four quantization bits suffice to obtain close to ideal performance. We also propose modifications to the min-sum algorithm that improve the performance by a few tenths of a dB with just a small increase in decoding complexity.

Collaboration


Dive into Amir H. Banihashemi's collaborations.

Top Co-Authors

Rui Dinis

Universidade Nova de Lisboa
