Eliminating Trapping Sets in Low-Density Parity Check Codes by using Tanner Graph Covers
Miloš Ivković, Shashi Kiran Chilappagari, and Bane Vasić, Fellow, IEEE
Abstract — We discuss error floor asymptotics and present a method for improving the performance of low-density parity check (LDPC) codes in the high SNR (error floor) region. The method is based on Tanner graph covers that do not have trapping sets from the original code. The advantages of the method are that it is universal, as it can be applied to any LDPC code/channel/decoding algorithm, and that it improves performance at the expense of increasing the code length, without losing the code regularity, without changing the decoding algorithm, and, under certain conditions, without lowering the code rate. The proposed method can also be modified to construct convolutional LDPC codes. The method is illustrated by modifying Tanner, MacKay and Margulis codes to improve performance on the binary symmetric channel (BSC) under the Gallager B decoding algorithm. Decoding results on the AWGN channel are also presented to illustrate that optimizing codes for one channel/decoding algorithm can lead to performance improvements on other channels.
Index Terms — convolutional LDPC codes, error floor, Gallager B, LDPC codes, min-sum decoding algorithm, Tanner code, trapping sets.
I. INTRODUCTION
The error-floor problem is arguably the most important problem in the theory of low-density parity check (LDPC) codes and iterative decoding algorithms. Roughly, the error floor is an abrupt change in the frame error rate (FER) performance of an iterative decoder in the high signal-to-noise ratio (SNR) region (see [9] for more details and [1], [2], [3] for the general theory of LDPC codes).

M. Ivković is with the Department of Mathematics, University of Arizona, Tucson, AZ 85721, USA, e-mail: [email protected]. S. K. Chilappagari and B. Vasić are with the Department of Electrical and Computer Engineering, University of Arizona. An earlier version of this work was presented at the 2007 IEEE International Symposium on Information Theory (ISIT'07). This work was supported by grants from INSIC-EHDR and NSF-CCR (Grant no. 0634969).

The error floor problem for iterative decoding on the binary erasure channel (BEC) is now well understood; see [7], [8] and the references therein. In the case of the additive white Gaussian noise (AWGN) channel, MacKay and Postol [4] pointed out a weakness in the construction of the Margulis code [22] which led to high error floors. Richardson [9] presented a method to estimate error floors of LDPC codes and presented results on the AWGN channel. He pointed out that the decoder performance is governed by a small number of likely error events related to certain topological structures in the Tanner graph of the code, called trapping sets (or stopping sets on the BEC [7]); the necessary definitions will be given in the next section. The approach from [9] was further refined by Stepanov et al. in [10]. Zhang et al. [11] presented similar results based on a hardware decoder implementation. Vontobel and Koetter [12] established a theoretical framework for finite length analysis of message passing iterative decoding based on graph covers. This approach was used by Smarandache et al. in [13] to analyze the performance of LDPC codes from projective geometries and for LDPC convolutional codes [14]. An early account of the most likely error events on the binary symmetric channel (BSC) for codes whose Tanner graphs have cycles is given by Forney et al. in [16]. Some results on LDPC codes over the BSC appear in [13] as well.

A significant part of the research on error floor analysis has also focused on methods for lowering the error floor. The two distinct approaches taken to tackle this problem are (1) modifying the decoding algorithm and (2) constructing codes avoiding certain topological structures. Numerous modifications of the sum-product decoding algorithm were proposed; see, for example, [18] and [19], among others.

Among the methods from the second group, there have been novel constructions of codes with high Tanner graph girth [21], [6], as it was observed that codes with low girth tend to have high error floors. While it is true that known trapping sets contain short cycles [10], [17], the example of projective geometry codes, which have short cycles but perform well under (hard decision) iterative decoding, suggests that maximizing the girth is not the optimal procedure. As the understanding of the error floor phenomenon and its connection with trapping sets grows, avoiding trapping sets directly (rather than short cycles) seems to be a more efficient way (in terms of code rate and decoding complexity) to suppress error floors.

Code modification for improving the performance on the binary erasure channel (BEC) was studied by Wang in [20].
To the best of our knowledge, it is the first paper on code modification with maximizing the size of stopping (or trapping) sets as the objective. Edge swapping within the code was suggested as a way to break the stopping sets. The method that we propose is similar. Roughly speaking, it consists of taking two (or more) copies of the same code and swapping edges between the code copies in such a way that the most dominant trapping sets are broken. It is also similar to the code constructions that appear in Smarandache et al. [14], Thorpe [24], Divsalar and Jones [25] and Kelley, Sridhara and Rosenthal [26].

The advantages of the method are: (a) it is universal, as it can be applied to any code/channel model/decoding algorithm, and (b) it improves performance at the expense of increasing the code length only, without losing the code regularity, without changing the decoding algorithm, and, under certain conditions, without lowering the code rate. If the length of the code is fixed to n, the method can be applied by taking t copies of a (good) code C of length n/t and eliminating the most dominant trapping sets of C. The method can be slightly modified to construct convolutional LDPC codes as well. The details are given in Section III.

We apply our method to construct codes based on the Margulis [22], Tanner [21] and MacKay [23] codes and present results on the BSC when decoded using the Gallager B algorithm [1]. It is worth noting that the error floor on the AWGN channel depends not only on the structure of the code but also on implementation nuances of the decoding algorithm, such as the numerical precision of messages [9]. Since the Gallager B algorithm operates by passing binary messages along the edges of a graph, any concern about the numerical precision of messages does not arise.

The rest of the paper is organized as follows. In Section II we introduce the notion of trapping sets and their relation to the performance of the code. We explain the proposed method in Section III.
We present numerical results in Section IV and conclude in Section V.

II. BASIC CONCEPTS
The Tanner graph of an LDPC code, G, is a bipartite graph with two sets of nodes: variable (bit) nodes and check (constraint) nodes. The nodes connected to a certain node are referred to as its neighbors. The degree of a node is the number of its neighbors. The girth g is the length of the shortest cycle in G. In this paper, • represents a variable node, an empty square represents an even degree check node, and a filled square represents an odd degree check node.

The notion of trapping sets was first introduced in [4], but here we follow the formalism from [19]. Definition 1:
For a given m × n matrix U = (u_{i,j}), 1 ≤ i ≤ m, 1 ≤ j ≤ n, the projection of a set of h columns indexed by j_1, j_2, …, j_h is an m × h matrix consisting of the elements u_{i,j}, 1 ≤ i ≤ m, j = j_1, j_2, …, j_h. Definition 2:
Let H be a parity check matrix of an LDPC code. An (a, b) trapping set T is a set of a columns of H with a projection that contains b > 0 odd weight rows.

The definition of the trapping set above is purely topological, that is, a trapping set can be seen as a subgraph of the Tanner graph. In other words, an (a, b) trapping set T is a subgraph with a variable nodes and b odd degree checks. The most probable noise realizations that lead to decoding failure are related to trapping sets ([9], [10]). A measure of noise realization probability is referred to as pseudo-weight. Following the terminology in [10], an instanton can be defined as the most likely noise realization that leads to decoding failure. The instantons on the BSC consist of the received bit configurations with the minimal number of erroneous bits that lead to decoding failure. Following [17], the notion specific to the BSC, analogous to pseudo-weight, can be defined as:
Definition 3:
The minimal number of variable nodes that have to be initially in error for the decoder to end up in the trapping set T will be referred to as the critical number k for that trapping set. Remark:
To “end up” in a trapping set T means that, after a finite number of iterations, the decoder will be in error on at least one variable node from T at every iteration. Note that the variable nodes that are initially in error do not have to be within the trapping set. We illustrate the above concepts with an example.

Fig. 1. Trapping sets: (a) (5, 3) trapping set; (b) (4, 4) trapping set.

Example 1:
The (5, 3) trapping set in Fig. 1(a) appears (among other codes) in the Tanner (155, 64) code [17] (see also the examples of irreducible closed walks in Chapter 6.1 of [5]). This trapping set has critical number k = 3 under the Gallager B decoding algorithm (for the definition of the algorithm see [2]), meaning that if three variable nodes, on the diagonal from bottom left to top right, are initially in error, the decoder will fail to correct the errors. Fig. 1(b) illustrates a (4, 4) trapping set. This trapping set, although smaller, has critical number k = 4 (all the variable nodes have to be in error initially for the decoder to fail). So, if a code has both (5, 3) and (4, 4) trapping sets, the FER performance is dominated by the (5, 3) trapping set. At the end of this example, we note that the (5, 3) trapping set above is an example of an oscillatory trapping set, i.e., if the three variable nodes on the diagonal are initially in error, after the first iteration those three nodes will be decoded correctly, but the remaining two will be in error. In the decoding attempt after the second iteration those two will be correct, but the initial three will be in error again, and so on.
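The (a, b) classification in Definition 2 can be computed directly from the parity check matrix. The sketch below (a toy matrix assumed for illustration, not one of the codes discussed in this paper) counts the odd degree check nodes in the subgraph induced by a set of variable nodes:

```python
def trapping_set_type(H, var_subset):
    """Return (a, b): a = |var_subset|, b = number of check nodes (rows of H)
    joined to the subset by an odd number of edges."""
    a = len(var_subset)
    b = sum(1 for row in H if sum(row[j] for j in var_subset) % 2 == 1)
    return (a, b)

# Toy 4x6 parity check matrix over GF(2).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
]

print(trapping_set_type(H, [0, 1]))     # two variable nodes, two odd checks
print(trapping_set_type(H, [0, 1, 2]))  # support of a codeword, so b = 0
```

A subset with b = 0 is the support of a codeword; subsets with small b > 0 are the trapping set candidates.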
Note that on the BEC the critical number is just the size of the stopping set; see [20].

We now clarify what “the most dominant trapping sets” means and how these affect code performance. Let α be the transition probability of the BSC and c_k be the number of configurations of received bits for which k channel errors lead to a codeword (frame) error. The frame error rate (FER) is given by

FER(α) = Σ_{k=i}^{n} c_k α^k (1 − α)^{n−k},

where i is the minimal number of channel errors that can lead to a decoding error (the size of the instantons) and n is the length of the code. On a semilog scale the FER is given by the expression

log(FER(α)) = log( Σ_{k=i}^{n} c_k α^k (1 − α)^{n−k} )   (1)
            = log(c_i) + i log(α) + log((1 − α)^{n−i})   (2)
              + log( 1 + (c_{i+1}/c_i) α (1 − α)^{−1} + … + (c_n/c_i) α^{n−i} (1 − α)^{i−n} ).   (3)

In the limit α → 0 we note that

lim_{α→0} log((1 − α)^{n−i}) = 0

and

lim_{α→0} log( 1 + (c_{i+1}/c_i) α (1 − α)^{−1} + … + (c_n/c_i) α^{n−i} (1 − α)^{i−n} ) = 0.

So, the behavior of the FER curve for small α is dominated by

log(FER(α)) ≈ log(c_i) + i log(α).
The log(FER) vs log(α) graph is close to a straight line with slope equal to i, the minimal critical number, i.e., the cardinality of the instantons. Therefore, if two codes C_1 and C_2 have instanton sizes i_1 and i_2 such that i_1 < i_2, then the code C_2 will perform better than C_1 for small enough α, independent of the number of instantons, just because log(α) → −∞ as α → 0. Note also that the critical number of the most dominant trapping sets cannot be greater than half the minimum distance. If it were, the performance of the decoder would be dominated by the minimum weight codewords.

III. THE METHOD FOR ELIMINATING TRAPPING SETS
In this section we present a method to construct an LDPC code C^(2) of length 2n from a given code C of length n, and discuss a modification of the method that gives a convolutional LDPC code based on C. Let H and H^(2) represent the parity check matrices of C and C^(2), respectively. H^(2) is initialized to

H^(2) = [ H 0 ; 0 H ].

Stated simply, H^(2) is formed by taking two copies of C, say C_1 and C_2. It can be seen that if H has dimensions m × n, then H^(2) has dimensions 2m × 2n. Every edge e in the Tanner graph G of C is associated with a nonzero entry H_{t,k}. The operation of changing the values of H^(2)_{t,k} and H^(2)_{m+t,n+k} to “0”, and of H^(2)_{m+t,k} and H^(2)_{t,n+k} to “1”, is termed swapping the edge e. Fig. 2 illustrates edge swapping in two copies of a (5, 3) trapping set.

Fig. 2. Trapping set elimination.

We assume that the most dominant trapping sets for C are known. The method can be described in the following steps. Algorithm:
1) Take two copies C_1 and C_2 of the same code. Since the codes are identical, they have the same trapping sets. Initialize SwappedEdges = ∅; FrozenEdges = ∅.
2) Order the trapping sets by their critical numbers.
3) Choose a trapping set T in the Tanner graph of C_1 with minimal critical number. Let E_T denote the set of all edges in T. If E_T ∩ SwappedEdges ≠ ∅ (T is already broken), go to 5; else go to 4.
4) Swap an arbitrarily chosen edge e ∈ E_T \ FrozenEdges (if it exists). Set SwappedEdges = SwappedEdges ∪ {e}.
5) “Freeze” the edges E_T from T so that they cannot be swapped in the following steps. Set FrozenEdges = FrozenEdges ∪ E_T.
6) Repeat steps 3 to 5 for as long as it is possible to remove the trapping sets of the desired size.

Step 5 is needed because swapping additional edges from the (former) trapping sets might reintroduce trapping sets with the same critical number. Fig. 3 illustrates such a swapping, which corresponds to just interchanging the check nodes.

Fig. 3. Reintroducing a trapping set by swapping two edges.
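The initialization and the edge swapping operation described above can be sketched as follows. The parity check matrix and the swapped edge below are toy assumptions; in practice the edges would be chosen from the dominant trapping sets:

```python
def two_cover(H, edges_to_swap):
    """Build H2 = [[H, 0], [0, H]] and, for each edge (t, k), set
    H2[t][k] and H2[m+t][n+k] to 0 and H2[m+t][k] and H2[t][n+k] to 1."""
    m, n = len(H), len(H[0])
    H2 = [[0] * (2 * n) for _ in range(2 * m)]
    for t in range(m):
        for k in range(n):
            H2[t][k] = H2[m + t][n + k] = H[t][k]
    for t, k in edges_to_swap:
        assert H[t][k] == 1, "only an existing edge can be swapped"
        H2[t][k] = H2[m + t][n + k] = 0
        H2[m + t][k] = H2[t][n + k] = 1
    return H2

# Toy 3x4 parity check matrix; swap the single edge (0, 1).
H = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 1]]
H2 = two_cover(H, [(0, 1)])
for row in H2:
    print(row)
```

Swapping moves edge endpoints between the two copies without changing any node degree, which is why the construction preserves the code regularity.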
The Tanner graph of the newly made code is a special double cover of the original code's Tanner graph; interested readers are referred to [12].
Remark:
There are several approaches which may improve the efficiency of the algorithm. First, instead of swapping edges at random in step 4, edges could be swapped based on the number of trapping sets they participate in, or by using some other schedule that would (potentially) lead to the highest number of trapping sets eliminated. The structure of the code can also be exploited. For example, the Margulis (2640, 1320) code [22] has minimal trapping sets with the property that each trapping set has one edge that does not participate in any other minimal trapping set. So, instead of swapping edges at random, the edges appearing in only one trapping set can be swapped, and such a procedure is guaranteed to eliminate all the minimal trapping sets. Also, there is the possibility of not freezing all the edges from the (former) trapping sets, but only those that would, if swapped, reintroduce trapping sets with the same critical number.

Note, however, that any edge swapping schedule can be seen as a particular realization of random edge swapping. For all the codes that we considered, all trapping sets with minimal critical number were eliminated by the algorithm with random edge swapping.

The following theorem shows how this method affects the code rate. Theorem 1:
If the code C, with parity check matrix H, rate r, and length n, is used in the algorithm above, the resulting code C^(2) will have rate r^(2) (and length 2n) such that r^(2) ≤ r. Proof:
Each edge swapping operation in the algorithm can be seen as a matrix modification. At the end of the algorithm, the code C^(2) is determined by

H^(2) = [ H′ B ; B H′ ],

where H′ and B are matrices such that H′ + B = H, and H′_{t,k} (or B_{t,k}) can be equal to “1” only if H_{t,k} = 1. If the second block row is added to the first in H^(2), and then the first block column is added to the second, we end up with

[ H′ B ; B H′ ] → [ H H ; B H′ ] → [ H 0 ; B H ].   (4)

The last matrix in (4) has rank greater than or equal to twice the rank of H. Therefore, the code C^(2) has rate r^(2) ≤ r, where r is the rate of C. □

Note that r^(2) = r if B = CH + HD for some matrices C and D, so that CH corresponds to linear combinations of rows of H and HD corresponds to linear combinations of columns of H. We also have the following corollary.
Corollary 1:
If the matrix H has full rank, then r^(2) = r. Proof:
This follows from the fact that if H has full rank, then the last matrix in (4) has full rank as well. □

At the end of this section, we briefly discuss the minimum distance of the modified code.
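The rank argument behind Theorem 1 and Corollary 1 can be sanity-checked numerically. The small full-rank matrix below, with one edge swapped between the two copies, is an assumption for illustration only:

```python
def gf2_rank(mat):
    """Rank of a 0/1 matrix over GF(2); rows are packed into integers."""
    rows = [int("".join(map(str, r)), 2) for r in mat]
    rank = 0
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        msb = 1 << (r.bit_length() - 1)  # eliminate this pivot bit elsewhere
        rows = [x ^ r if x & msb else x for x in rows]
    return rank

# Toy H with full row rank over GF(2).
H = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
m, n = len(H), len(H[0])

# Two copies of H on the block diagonal, with the edge (0, 1) swapped.
H2 = [[0] * (2 * n) for _ in range(2 * m)]
for t in range(m):
    for k in range(n):
        H2[t][k] = H2[m + t][n + k] = H[t][k]
H2[0][1] = H2[m + 0][n + 1] = 0
H2[m + 0][1] = H2[0][n + 1] = 1

r = 1 - gf2_rank(H) / n          # rate of C
r2 = 1 - gf2_rank(H2) / (2 * n)  # rate of C^(2)
print(r, r2)  # equal rates, as Corollary 1 predicts for full-rank H
```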
Theorem 2:
If the code C has minimum distance d_min, the modified code C^(2) will have minimum distance d^(2)_min such that 2 d_min ≥ d^(2)_min ≥ d_min. Proof:
We first prove that d^(2)_min ≥ d_min. Suppose that the minimum weight codeword of C^(2) is c^(2), where c^(2) is a column vector consisting of two vectors c_1 and c_2 of length n. Then H^(2) c^(2) = 0 is equivalent to

[ H′ B ; B H′ ] [ c_1 ; c_2 ] = [ H′c_1 + Bc_2 ; Bc_1 + H′c_2 ] = 0.   (5)

Note that c = c_1 + c_2 is a column vector of length n with Hamming weight w_h(c) ≤ w_h(c^(2)), where w_h(c^(2)) is the Hamming weight of c^(2). Now,

Hc = (H′ + B)(c_1 + c_2) = H′c_1 + Bc_2 + H′c_2 + Bc_1 = 0,   (6)

because the last expression in Eq. (6) is equal to the sum of the entries of the last column vector in Eq. (5). So, c is a codeword of C. If c ≠ 0, from w_h(c) ≤ w_h(c^(2)) it follows that d^(2)_min ≥ d_min. If c = 0, then c_1 = c_2, and from Eq. (5) it follows that Hc_1 = 0, so c_1 is a codeword of C and again d^(2)_min ≥ d_min.

The proof that 2 d_min ≥ d^(2)_min is similar. If we assume that c is a minimum weight codeword of C, we have

[ H′ B ; B H′ ] [ c ; c ] = 0,   (7)

so 2 d_min ≥ d^(2)_min. We finish this proof by mentioning that it is not difficult to construct examples where d_min = d^(2)_min or d^(2)_min = 2 d_min, so the statement of the theorem is “sharp”. □

We described the algorithm in its basic form. H^(2) can be initialized by interleaving the copies C_1 and C_2 in an arbitrary order, but we chose concatenation to keep the notation simple. The method, as well as all the proofs, holds for any interleaving. It is also possible to consider more than two copies of the code to further eliminate trapping sets with higher critical numbers.

The splitting of the parity check matrix H into H′ and B can be seen as a way to construct convolutional LDPC codes, that is, as a way to unwrap the original LDPC code C. For details on unwrapping see [15] and the references therein. The (semi-infinite) parity check matrix can be constructed as

H_conv = [ H′
           B  H′
              B  H′
                 ⋱  ⋱ ].   (8)

Note that by construction the resulting convolutional code has pseudo-codewords with higher pseudo-weights than the original LDPC code.
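The unwrapping in Eq. (8) can be sketched as follows. The split H = H′ + B below is a toy assumption in which a single swapped edge has been moved into B; only a finite window of the semi-infinite H_conv is built:

```python
def unwrap_window(Hp, B, sections):
    """Finite window of H_conv: H' on the block diagonal, B one block below."""
    m, n = len(Hp), len(Hp[0])
    Hc = [[0] * (sections * n) for _ in range(sections * m)]
    for s in range(sections):
        for i in range(m):
            for j in range(n):
                Hc[s * m + i][s * n + j] = Hp[i][j]           # diagonal block H'
                if s + 1 < sections:
                    Hc[(s + 1) * m + i][s * n + j] = B[i][j]  # subdiagonal block B
    return Hc

# Toy split: H' is H with one edge removed, and B holds that edge (H' + B = H).
Hp = [[1, 0, 0, 1],
      [0, 1, 1, 0],
      [0, 0, 1, 1]]
B  = [[0, 1, 0, 0],
      [0, 0, 0, 0],
      [0, 0, 0, 0]]

Hc = unwrap_window(Hp, B, 3)
print(len(Hc), len(Hc[0]))  # 9 12
```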
In this light, Theorem 2 can be seen as a generalization of Lemma 2.4 from [14]. We refer readers interested in convolutional LDPC codes to that paper.

IV. NUMERICAL RESULTS
In this section we illustrate the proposed method by modifying the Margulis [22], Tanner [21] and MacKay [23] codes to eliminate trapping sets under the Gallager B decoding algorithm. We use the trapping sets reported in [17].
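Before the examples, the slope analysis from Section II can be illustrated numerically. The configuration counts c_k below are hypothetical, chosen only to show that the slope of log(FER) versus log(α) approaches the instanton size i for small α:

```python
import math

# Hypothetical configuration counts c_k (not measured data).
i, n = 3, 20
c = {k: 10.0 ** (k - i) for k in range(i, n + 1)}

def fer(alpha):
    """FER(alpha) = sum over k of c_k * alpha^k * (1 - alpha)^(n - k)."""
    return sum(c[k] * alpha ** k * (1 - alpha) ** (n - k) for k in range(i, n + 1))

# Slope of log(FER) vs log(alpha), estimated between two small alphas.
a1, a2 = 1e-4, 1e-5
slope = (math.log(fer(a1)) - math.log(fer(a2))) / (math.log(a1) - math.log(a2))
print(slope)  # close to i = 3, as the asymptotic analysis predicts
```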
Example 2: (Margulis (2640, 1320) code) The parity check matrix of this code has full rank, so the modified code is a (5280, 2640) code and has the same rate as the original code, i.e., r^(2) = r = 0.5. The original code has (4, 4) trapping sets, with critical number k = 4, as the most dominant ones. The modified (5280, 2640) code has no (4, 4) trapping sets and its performance is governed by (5, 5) trapping sets (ten cycles) that have critical number k = 5, Fig. 4.

Fig. 4. Margulis code performance.
Example 3: (Tanner (155, 64) code)
This code has (5, 3) trapping sets (Fig. 1(a)), with critical number k = 3, as the most dominant ones. There are 155 such trapping sets [17], [21]. In this case we used a version of the method in which it is possible to swap edges from the (former) trapping sets if no trapping set of the same or smaller critical number is introduced. The result was a (310, 126) code for which the minimal trapping sets are of type (4, 4) (eight cycles), with critical number k = 4 (see Fig. 1(b)). This was confirmed by numerical simulations, Fig. 5. The FER curve changes slope for higher α, where the FER contribution from expression (3) is not negligible. Note that there was a small rate penalty to this procedure: the original Tanner code has rate 0.4129, whereas the modified code has rate 0.4065.

Fig. 5. Tanner code performance for a longer range of α.

Example 4: (MacKay's (1008, 504) codes)
This is an example of how the method can be used to produce better codes of a fixed length. We have taken a MacKay code of length 504 and constructed a code of length 1008 (= 2 × 504). The new code performs better than MacKay codes of length 1008. Both the original length-504 and length-1008 codes have two types of trapping sets with critical number k = 3: (5, 3) and (3, 3) (six cycles). We ran the algorithm so that all (3, 3) trapping sets were eliminated from the newly constructed code, but none of the (5, 3) trapping sets. The results are shown in Fig. 6. It can be seen that, although the FER performance is improved, the slope of the FER curve is approximately the same.

Fig. 6. MacKay's codes performance.

Example 5: (AWGN channel) This example illustrates two points. The first is that optimizing a code for one decoding algorithm can lead to performance improvements for other decoding algorithms. The second is that the use of an appropriate axis scaling can greatly help in error floor analysis and code performance prediction.

We present FER results over the AWGN channel under the min-sum algorithm after 500 iterations for three codes: the original Tanner (155, 64) code, our modified Tanner (310, 126) code from Example 3, and a random (310, 127) code with column weight 3 and row weight 5. In the low SNR region, where all kinds of error events are likely, the length (and rate) of a code govern the performance; in this region codes of length 310 have similar performance. For high SNRs, however, code optimization in terms of trapping sets becomes important, and the random code's performance becomes much worse than that of the modified Tanner (310, 126) code. Notice the pronounced error floor for the random code.

What is even more illustrative is Fig. 7(b), where we plot log(
FER) versus SNR (not in dB) on the x-axis. This is because for high SNRs on the AWGN channel, similarly to Eq. (3),

FER ∝ exp(−ω_in · SNR/2),

where ω_in is the pseudo-weight of the most likely error event. So, on a graph with SNR (not in dB) on the x-axis, the log(FER) curve will approach (from above) a straight line with slope equal to −ω_in/2 as SNR → ∞. See [5] and [12] for further details. Using these observations and the numerical results obtained by simulations, we can estimate that our modified code has a slope approximately equal to 20, better than that of the original Tanner (155, 64) code. Furthermore, considering that the slope for the random code is smaller still, we can claim that, for SNR values higher than those on the plots, the Tanner code will perform better than the random code.

(Regarding Example 4: it is possible that a more sophisticated algorithm would also eliminate the (5, 3) trapping sets. However, our goal with that example was to show the performance when some, but not all, of the trapping sets with minimal critical number are eliminated.)

Fig. 7. FER performance under min-sum decoding: (a) log(FER) versus SNR in dB; (b) log(FER) versus SNR (not in dB).

V. CONCLUSION
The proposed method allows the construction of codes with good FER performance but low row/column weight (as opposed to projective geometry codes) and therefore relatively low decoding complexity. Although numerical results for the Gallager B decoder are presented, we reiterate that the method can be used for code optimization based on the trapping sets of an arbitrary decoder.

The algorithm can also be used to determine the pseudo-weight spectrum of a code as follows. Once the most likely trapping sets (those with the smallest pseudo-weight) are determined and eliminated by the method, the numerically obtained decoding performance of the modified code, i.e., the slope of the FER curve with appropriate axes, gives an estimate of the pseudo-weight of the next most likely trapping sets, just as it was done in Example 5 with the Tanner code and the modified Tanner code.

ACKNOWLEDGMENT
The authors would like to acknowledge valuable discussions with Robert Indik, Misha Stepanov and Clifton Williamson. The estimate for the Tanner code is in accordance with the pseudo-weight of the single most likely error event reported in [10].

REFERENCES

[1] R. G. Gallager, Low Density Parity Check Codes. Cambridge, MA: MIT Press, 1963.
[2] T. J. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.
[3] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, 2001.
[4] D. MacKay and M. Postol, "Weaknesses of Margulis and Ramanujan-Margulis low-density parity check codes," Electronic Notes in Theoretical Computer Science, vol. 74, 2003.
[5] N. Wiberg, Codes and Decoding on General Graphs, Ph.D. thesis, Linköping University, 1996.
[6] J. K. Moura, J. Lu, and H. Zhang, "Structured LDPC codes with large girth," IEEE Signal Proc. Mag., vol. 21, no. 1, pp. 42-55, Jan. 2004.
[7] C. Di et al., "Finite length analysis of low-density parity-check codes on the binary erasure channel," IEEE Trans. Inform. Theory, vol. 48, no. 6, pp. 1570-1579, Jun. 2002.
[8] C. Wang, S. R. Kulkarni, and H. V. Poor, "Upper bounding the performance of arbitrary finite LDPC codes on binary erasure channels," in Proc. IEEE Intern. Symp. on Inform. Theory, Seattle, WA, USA, July 9-14, 2006.
[9] T. J. Richardson, "Error floors of LDPC codes," in Proc. 41st Annual Allerton Conf. on Communications, Control and Computing, 2003.
[10] M. Stepanov and M. Chertkov, "Instanton analysis of low-density parity-check codes in the error-floor regime," in Proc. IEEE Intern. Symp. on Inform. Theory (ISIT), Seattle, WA, USA, July 9-14, 2006.
[11] Z. Zhang, L. Dolecek, B. Nikolić, V. Anantharam, and M. Wainwright, "Investigation of error floors of structured low-density parity-check codes by hardware emulation," in Proc. IEEE Globecom, 2006.
Proc. IEEE Intern. Symp. on Inform. Theory, Nice, France, June 24-29, 2007, pp. 1221-1225.
[16] G. D. Forney, Jr., R. Koetter, F. R. Kschischang, and A. Reznik, "On the effective weights of pseudocodewords for codes defined on graphs with cycles," in Codes, Systems, and Graphical Models (B. Marcus and J. Rosenthal, eds.), vol. 123 of IMA Vol. Math. Appl., pp. 101-112, Springer Verlag, New York, 2001.
[17] S. K. Chilappagari, S. Sankaranarayanan, and B. Vasic, "Error floors of LDPC codes on the binary symmetric channel," in Proc. IEEE Intern. Conf. on Communications (ICC 2006), Istanbul, Turkey, June 2006.
[18] N. Varnica and M. Fossorier, "Improvements in belief-propagation decoding based on averaging information from decoder and correction of clusters of nodes," IEEE Communications Letters, vol. 10, no. 12, pp. 846-848, Dec. 2006.
[19] S. Laendner, T. Hehn, O. Milenković, and J. Huber, "When does one redundant parity-check equation matter?" in Proc. IEEE Globecom, San Francisco, CA, USA, 2006.
[20] C. Wang, "Code annealing and the suppressing effect of the cyclically lifted LDPC code ensemble," presented in Chengdu, China, October 22-26, 2006.
[21] R. M. Tanner, D. Sridhara, and T. Fuja, "A class of group-structured LDPC codes," in Proc. ISCTA.
Proc. of the 38th Allerton Conf. on Communication, Control, and Computing.