A Reconstruction-Computation-Quantization (RCQ) Approach to Node Operations in LDPC Decoding

Linfang Wang, Richard D. Wesel
University of California, Los Angeles, Department of Electrical and Computer Engineering
{lfwang,wesel}@ucla.edu

Maximilian Stark, Gerhard Bauch
Hamburg University of Technology, Institute of Communications
{maximilian.stark,bauch}@tuhh.de

Abstract—In this paper, we propose a finite-precision decoding method that features the three steps of Reconstruction, Computation, and Quantization (RCQ). Unlike Mutual-Information-Maximization Quantized Belief Propagation (MIM-QBP), RCQ can approximate either belief propagation or Min-Sum decoding. One problem faced by the MIM-QBP decoder is that it does not work well when the fraction of degree-2 variable nodes is large. However, a large fraction of degree-2 variable nodes is sometimes necessary for a fast encoding structure, as seen in the IEEE 802.11 standard and the DVB-S2 standard. In contrast, the proposed RCQ decoder may be applied to any off-the-shelf LDPC code, including those with a large fraction of degree-2 variable nodes. Our simulations show that a 4-bit Min-Sum RCQ decoder delivers frame error rate (FER) performance within about 0.1 dB of full-precision belief propagation (BP) for the IEEE 802.11 standard LDPC code in the low SNR region. The RCQ decoder actually outperforms full-precision BP in the high SNR region because it overcomes elementary trapping sets that create an error floor under BP decoding. This paper also introduces Hierarchical Dynamic Quantization (HDQ) to design the non-uniform quantizers required by RCQ decoders. HDQ is a low-complexity design technique that is slightly sub-optimal. Simulation results comparing HDQ and an optimal quantizer on the symmetric binary-input memoryless additive white Gaussian noise channel show a loss in mutual information between these two quantizers that is negligible for practical applications.

Index Terms—Low-precision LDPC decoder, information-maximization quantizer.
I. INTRODUCTION
Low-Density Parity-Check (LDPC) codes have been widely used in wireless communication and NAND flash systems because of their excellent error-correction capability. Typically, the message passing algorithms used to decode LDPC codes assume an accurate number representation. To make LDPC codes practical, quantization is inevitable; however, uniformly quantizing messages with too low a precision greatly degrades the decoder's performance.

This research is supported by National Science Foundation (NSF) grant CCF-1911166, Physical Optics Corporation (POC), and SA Photonics. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect views of the NSF, POC, or SA.

Recently, non-uniform quantization LDPC decoders have attracted researchers' interest because of their excellent performance under low-precision, coarse quantization. One way to realize non-uniform quantization LDPC decoders is to design lookup tables (LUTs) for the variable nodes and/or check nodes. In [1], a Finite Alphabet Iterative Decoder (FAID) is proposed to overcome the error floor of LDPC codes on the binary symmetric channel (BSC). Aiming instead to minimize the performance degradation in the waterfall region, [2] proposed a Mutual-Information-Maximization LUT (MIM-LUT) decoder, which decomposes the actual node operation into a series of cascaded two-input, single-output LUTs at the variable and check nodes. In [3], Lewandowsky et al. proposed the Information-Optimum decoder, also called the Information Bottleneck (IB) decoder. Stark et al. extended the ideas from [2] and [3] and developed message alignment (MA) in [4], [5] so that IB decoders also work on irregular LDPC codes with arbitrary degree distributions. In [6], [7], the Min-LUT decoders were proposed, which replace the LUTs in the check node by a discrete, cluster-based Min-Sum operation. The Min-LUT decoder cannot perform well if the fraction of degree-2 variable nodes is large, so suitable LDPC codes for Min-LUT decoders require careful optimization [7].

The other way to realize non-uniform quantization is to design quantization parameters that maximize the mutual information between the source and the quantized messages. In [8], Lee and Thorpe proposed a non-uniform BP decoder implemented using only simple mappings and fixed-point additions. Unfortunately, the authors did not provide a systematic way to find those mapping parameters. Recently, He et al. in [9] provided a systematic way to find the mappings by combining density evolution with dynamic programming quantization [10], yielding MIM-QBP. They also extended MIM-QBP to irregular LDPC codes. However, similar to Min-LUT, MIM-QBP does not work well when the fraction of degree-2 variable nodes in the LDPC code is large [9].

Even though both MIM-QBP and Min-LUT can achieve excellent decoding performance by optimizing the edge distribution to lower the fraction of degree-2 variable nodes, it is sometimes necessary to consider LDPC codes with a large fraction of degree-2 variable nodes. For example, in the rate-1/2 LDPC code of the IEEE 802.11 standard, half of the variable nodes have degree 2 for the purpose of fast encoding [11].

In this work, we generalize the structure in [8] and propose a finite-precision decoding method that features the three steps of Reconstruction, Computation, and Quantization (RCQ). Unlike MIM-QBP and Min-LUT, RCQ can be applied to any off-the-shelf LDPC code, including those with a large fraction of degree-2 variable nodes, such as the IEEE 802.11 code. The main contributions of this paper include:
• We propose a generalized RCQ decoder structure. Unlike the work in [8], [9], the RCQ decoder can approximate either a BP decoder (bp-RCQ) or a Min-Sum decoder (ms-RCQ).
• We design an efficient sub-optimal quantization scheme, called Hierarchical Dynamic Quantization (HDQ), for symmetric binary-input discrete memoryless channels (BIDMCs). HDQ is used for channel quantization and RCQ decoder construction.
• We use HDQ to implement Mutual Information Maximization Discrete Density Evolution (MIM-DDE), and show that the RCQ decoder is a result of MIM-DDE.
• We design a 4-bit bp-RCQ decoder for the IEEE 802.11 standard rate-1/2 LDPC code, primarily of theoretical interest. Simulation shows that the 4-bit bp-RCQ decoder delivers frame error rate (FER) performance less than 0.1 dB away from full-precision BP.
• We design a 4-bit ms-RCQ decoder for the IEEE 802.11 standard rate-1/2 LDPC code, targeting practical implementation. Simulations show that the 4-bit ms-RCQ decoder delivers frame error rate (FER) performance within about 0.1 dB of full-precision belief propagation (BP) in the low SNR region. The RCQ decoder actually outperforms full-precision BP in the high SNR region because it overcomes elementary trapping sets that create an error floor under BP decoding.

The remainder of this paper is organized as follows. Sec. II gives the description of and notation for the RCQ decoder. A hierarchical dynamic quantization algorithm is proposed in Sec. III. Mutual information maximization discrete density evolution is introduced in Sec. IV, which also describes how to design RCQ decoders for a given LDPC ensemble. Simulation results and discussion are given in Sec. V. Finally, Sec. VI concludes our work.

II. RECONSTRUCTION-COMPUTATION-QUANTIZATION DECODING STRUCTURE
Message passing algorithms update messages between variable nodes and check nodes in an iterative manner, either until a valid codeword is found or until a predefined maximum number of iterations, $I_T$, is reached. The updating procedure contains two steps: 1) computation of the output, and 2) exchange of the output between neighboring nodes. We call the messages involved in the computation internal messages, and the messages passed over the edges of the Tanner graph external messages. In [8], the authors proposed an LDPC decoder structure in which the internal messages have a higher precision than the external messages. In this work, we generalize their structure and propose a decoding framework that features the three steps of Reconstruction, Computation, and Quantization. As illustrated in Fig. 1, the RCQ decoder consists of the following three parts:

Fig. 1. RCQ decoding structure illustration: each incoming external message $u_1, \dots, u_n$ is reconstructed by $R(\cdot)$ into an internal message $r_1, \dots, r_n$; $F(\cdot)$ combines these into $r_{out}$, which $Q(\cdot)$ quantizes into the outgoing external message $u_{out}$.
1) Reconstruction: The reconstruction function $R(\cdot): \mathbb{F}_m \rightarrow \mathbb{F}_n$ ($m < n$) maps an $m$-bit external message to an $n$-bit internal message.

2) Computation: $F(\cdot): \mathbb{F}_n \rightarrow \mathbb{F}_n$ calculates the outgoing message. We denote the variable node function and the check node function by $F^v$ and $F^c$, respectively. $F^v$ sums up all incoming messages. $F^c$ has different implementations; we denote the check node operation of the BP decoder (i.e., the hyperbolic-tangent operation) and of the Min-Sum decoder (we use standard Min-Sum in this work) by $F^c_{bp}$ and $F^c_{ms}$, respectively.

3) Quantization: A quantizer $Q: \mathbb{F}_n \rightarrow \mathbb{F}_m$ quantizes an $n$-bit internal message to an $m$-bit external message. An $m$-bit quantizer $Q$ is determined by $2^m - 1$ thresholds $th = \{th_1, \dots, th_{2^m-1}\}$:

$$Q(i)=\begin{cases} 0, & i \le th_1 \\ 2^m-1, & i > th_{2^m-1} \\ j, & th_j < i \le th_{j+1}. \end{cases} \quad (1)$$

We denote the channel quantizer by $Q^{ch}$, and the check node and variable node quantizers at the $i$-th iteration by $Q^c_i$ and $Q^v_i$, respectively. The precision of an RCQ decoder can be fully described by a three-tuple $(m, n_c, n_v)$, which gives the external message precision, the check node internal message precision, and the variable node internal message precision. We use $\infty$ to denote floating-point representation.

III. HIERARCHICAL DYNAMIC QUANTIZATION

Like most non-uniform quantization LDPC decoders, designing an RCQ decoder involves quantization that maximizes mutual information. Kurkoski in [10] proposed a dynamic programming method that finds the optimal quantizer for a BIDMC with complexity polynomial in $M$, where $M$ is the cardinality of the channel output. Dynamic programming quantization is provably optimal; however, it becomes impractical when $M$ is large. To mitigate the computational complexity, several low-complexity, near-optimal algorithms have been proposed. In [12], Tal developed an annealing quantization algorithm with complexity $O(M \log M)$ for quantizing a symmetric BIDMC. In [3], Lewandowsky improved the sequential Information Bottleneck (sIB) algorithm to quantize a symmetric BIDMC; the computation of the IB algorithm is $O(tM)$, where $t$ is the number of trials.
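As a concrete illustration of the quantization and reconstruction steps of Sec. II, here is a minimal Python sketch (the function names `quantize` and `reconstruct` and the numerical values are our own illustration, not from the paper) of the $m$-bit threshold quantizer of (1), paired with a reconstruction lookup that maps each of the $2^m$ external labels back to a higher-precision internal value:

```python
import bisect

def quantize(value, thresholds):
    """Map an internal message to an external label in {0, ..., 2^m - 1}.

    `thresholds` is the sorted list {th_1, ..., th_{2^m - 1}}: label j is
    assigned when th_j < value <= th_{j+1}, with labels 0 and 2^m - 1 for
    the two unbounded tails, exactly as in Eq. (1).
    """
    # bisect_left counts the thresholds strictly below `value`,
    # which is precisely the label of Eq. (1).
    return bisect.bisect_left(thresholds, value)

def reconstruct(label, recon_table):
    """Map an external label back to a finer-precision internal value R(label)."""
    return recon_table[label]

# Example: a 2-bit quantizer (3 thresholds) with hypothetical LLR thresholds
thresholds = [-2.0, 0.0, 2.0]
recon_table = [-3.5, -0.9, 0.9, 3.5]   # hypothetical R(.) values, one per label

label = quantize(1.3, thresholds)       # 1.3 falls in (0.0, 2.0] -> label 2
print(label, reconstruct(label, recon_table))
```

In an RCQ decoder the reconstruction table would hold $n$-bit internal values designed by density evolution; here it is just a plain list to show the dataflow.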
As a machine learning algorithm, the IB algorithm requires multiple trials to guarantee a satisfying result. In this work, we propose an efficient $m$-bit quantization algorithm for symmetric BIDMCs with complexity $O(mM)$.

Consider code bits $x \in \{0, 1\}$ of a binary LDPC codeword modulated by Binary Phase Shift Keying (BPSK), i.e., $s(x) = -2x + 1$, and transmitted over an additive white Gaussian noise (AWGN) channel. Assuming $x$ is uniformly distributed and the noise variance is $\sigma^2$, the joint probability density function of $x$ and the received signal $y$ is

$$p(x, y \mid \sigma) = \frac{1}{2\sqrt{2\pi}\,\sigma} e^{-\frac{(y - s(x))^2}{2\sigma^2}}. \quad (2)$$

Since HDQ is designed for BIDMCs, we first uniformly quantize $p(x, y \mid \sigma)$ into $M$ levels and denote the resulting joint probability mass function (p.m.f.) by $P(X, Y)$, with $X \in \{0, 1\}$ and $Y \in \{0, \dots, M-1\}$. We write $P(X_i, Y_j)$ for $P(X = i, Y = j)$ for simplicity.

An $m$-bit quantizer $Q^{ch}$ aims to maximize the mutual information between $X$ and the quantized value $T$ [10]:

$$\arg\max_{Q \in \mathcal{Q}} I(X; T). \quad (3)$$

Lemmas 1 and 2 in [2] reduce finding an optimal $m$-bit quantizer to finding $2^m - 1$ boundaries $\{a_1, \dots, a_{2^m-1}\}$. Even so, jointly optimizing $2^m - 1$ boundaries still has a large search space. Hence, instead of optimizing the thresholds jointly, the HDQ algorithm determines these boundaries bit level by bit level. Fig. 2 illustrates how HDQ quantizes a symmetric BI-AWGN channel output into 2-bit messages:
• initialize: $a_0$ and $a_4$;
• bit level 0: determine $a_2$, with $a_0 < a_2 < a_4$;
• bit level 1: fix $a_2$ and determine $a_1$ and $a_3$, with $a_0 < a_1 < a_2 < a_3 < a_4$.

Fig. 2. HDQ method illustration: quantizing a symmetric BI-AWGN channel observation into 2-bit messages.

To find each boundary between a left boundary $a_l$ and a right boundary $a_r$, HDQ uses a Sequential Threshold Search (STS): starting from $a_l + 1$, STS sequentially calculates the costs of merging symbol $a_i$ into the left or the right cluster, until the left merging cost exceeds the right merging cost. Fig. 3 shows an intermediate step of STS. The merging cost is defined as the mutual information loss incurred by merging two probabilities together (cf. [3], Eq. (10)). Full descriptions of the STS and HDQ algorithms are given in Algorithms 1 and 2, respectively.

Fig. 3. An intermediate step of the STS algorithm: if $cost(P_l, P_m) \le cost(P_r, P_m)$, symbol $a_i$ is merged left and the search advances to $a_{i+1}$; if $cost(P_l, P_m) > cost(P_r, P_m)$, the search stops and returns $a_i$.

Algorithm 1: Sequential Threshold Search (STS)
  input: $P(X, Y)$, $a_l$, $a_r$
  output: $a_{out}$
  $P_l \leftarrow [P(X_0, Y_{a_l}),\ P(X_1, Y_{a_l})]$
  $P_m \leftarrow [P(X_0, Y_{a_l+1}),\ P(X_1, Y_{a_l+1})]$
  $P_r \leftarrow [\sum_{i=a_l+1}^{a_r-1} P(X_0, Y_i),\ \sum_{i=a_l+1}^{a_r-1} P(X_1, Y_i)]$
  for $i \leftarrow 1$ to $a_r - a_l - 1$ do
    $c_l \leftarrow cost(P_l, P_m)$; $c_r \leftarrow cost(P_r, P_m)$
    if $c_l < c_r$ then
      $P_l \leftarrow P_l + P_m$; $P_r \leftarrow P_r - P_m$
      $P_m \leftarrow [P(X_0, Y_{a_l+i+1}),\ P(X_1, Y_{a_l+i+1})]$
    else
      return $a_l + i + 1$
  end
  return $a_r - 1$

Algorithm 2: Hierarchical Dynamic Quantization (HDQ)
  input: $P(X, Y)$, $X \in \{0,1\}$, $Y \in \{0, \dots, N-1\}$; $m$
  output: $P(X, T)$, $Q$, $R$
  $a_0 \leftarrow 0$; $a_{2^m} \leftarrow N - 1$
  for $i \leftarrow 0$ to $m - 1$ do
    for $j \leftarrow 0$ to $2^i - 1$ do
      $a_{2^{m-i}j + 2^{m-i-1}} \leftarrow STS(a_{2^{m-i}j},\ a_{2^{m-i}(j+1)})$
    end
  end
  $P(X_i, T_j) \leftarrow \sum_{k=a_j}^{a_{j+1}-1} P(X_i, Y_k)$
  $th_i \leftarrow \log \frac{P(X_0, Y_{a_i})}{P(X_1, Y_{a_i})}$; $R(i) \leftarrow \log \frac{P(X_0, T_i)}{P(X_1, T_i)}$

Fig. 4 shows the 4-bit quantization regions for the channel output of the BI-AWGN channel under different $\sigma$. We examined four different quantization algorithms. Simulation shows that the improved sIB algorithm and the HDQ algorithm produce quantization results very close to those of the optimal dynamic programming algorithm, while the annealing quantization algorithm deviates from the optimal solution to different extents under different $\sigma$.

Fig. 4. Quantization regions for the channel output of the BI-AWGN channel under different $\sigma$.
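To make the merging cost used by STS concrete, the following Python sketch (our own illustration, not the paper's code) computes the mutual information lost by merging two clusters of a joint p.m.f., in the spirit of [3], Eq. (10); `cluster_info` and `merge_cost` are hypothetical names:

```python
from math import log2

def cluster_info(c, px):
    """Contribution of one cluster c = [P(X=0,T=c), P(X=1,T=c)] to I(X;T),
    given the marginal px = [P(X=0), P(X=1)]."""
    pt = sum(c)  # P(T = this cluster)
    return sum(cxy * log2(cxy / (px_x * pt))
               for cxy, px_x in zip(c, px) if cxy > 0)

def merge_cost(p, q, px):
    """Mutual information lost by merging clusters p and q (always >= 0):
    the clusters' separate contributions minus the merged contribution."""
    merged = [p[0] + q[0], p[1] + q[1]]
    return cluster_info(p, px) + cluster_info(q, px) - cluster_info(merged, px)
```

STS then compares `merge_cost(P_l, P_m, px)` against `merge_cost(P_r, P_m, px)` at each step. Note that two clusters with identical likelihood ratios merge at zero cost, which is exactly why merging nearby-LLR entries loses almost no information.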
We use $I_{dp}(X; T)$ to denote the mutual information between $X$ and the quantized value $T$ obtained by the optimal dynamic programming quantizer, and $I_{sub}(X; T)$ to denote the mutual information obtained by a sub-optimal quantizer. We can then quantitatively evaluate each sub-optimal algorithm by

$$\Delta I_{sub} = I_{dp}(X; T) - I_{sub}(X; T). \quad (4)$$

Fig. 5 gives $\Delta I_{sub}$ for each sub-optimal quantizer. Simulation shows that all three sub-optimal quantizers yield mutual information very similar to that of the optimal quantizer. Still, compared with annealing quantization, the sIB algorithm and HDQ produce quantization results closer to the optimal quantizer, as $\Delta I_{sub}$ is negligibly small for both sIB and HDQ.

Fig. 5. Difference in mutual information loss between each sub-optimal quantizer and the optimal quantizer.

In the next section, we use HDQ to conduct mutual-information-maximization discrete density evolution and construct the RCQ decoder.

IV. MUTUAL INFORMATION MAXIMIZATION DISCRETE DENSITY EVOLUTION

The RCQ decoder is a result of quantized density evolution: by quantizing the joint p.m.f. between the code bits and the messages from the variable nodes or check nodes, $R^c_i$, $R^v_i$, $Q^c_i$, and $Q^v_i$ can be constructed. To distinguish our discrete density evolution from the one using uniform quantization [14], we call it Mutual-Information-Maximization Discrete Density Evolution (MIM-DDE).

A. MIM-DDE at check node

Denote the joint p.m.f. between the incoming message $T$ and the code bit $X$ from the $i$-th variable node by $P_{v,i}(X, T)$, $X \in \{0, 1\}$, $T \in \{0, \dots, 2^m - 1\}$. Based on the independence assumption in density evolution [15], we have

$$P_{v,i}(X, T) = P_v(X, T), \quad i = 0, \dots, d_c - 2, \quad (5)$$

where $d_c$ is the check node degree. At a check node, the code bit corresponding to the output is the XOR of the code bits corresponding to all inputs.
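This XOR relation turns the check-node combination of two joint p.m.f.s into a convolution under modulo-2 addition of the bit labels, over the product of the two message alphabets. A minimal sketch, with our own (hypothetical) function name and list-of-lists representation `p[x][t] = P(X=x, T=t)`:

```python
def checknode_combine(pa, pb):
    """Combine two joint p.m.f.s at a check node.

    The output code bit is the XOR of the two input code bits, so
    P_out(X=k, (t_a, t_b)) sums P_a(X=m, t_a) * P_b(X=n, t_b) over m ^ n = k.
    The output alphabet is the Cartesian product of the input alphabets,
    flattened to index ta * nb + tb.
    """
    na, nb = len(pa[0]), len(pb[0])
    out = [[0.0] * (na * nb) for _ in range(2)]
    for m in range(2):
        for n in range(2):
            k = m ^ n                      # XOR of the code bits
            for ta in range(na):
                for tb in range(nb):
                    out[k][ta * nb + tb] += pa[m][ta] * pb[n][tb]
    return out
```

Applying this repeatedly over the $d_c - 1$ incoming messages yields the check-node p.m.f. update, which is why the message alphabet grows multiplicatively at each step.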
By denoting

$$[P_{v,a} \circledast P_{v,b}](X_k, T) \triangleq \sum_{m,n:\, m \oplus n = k} P_{v,a}(X_m, T_a)\, P_{v,b}(X_n, T_b), \quad (6)$$

where $m, n, k \in \{0, 1\}$, the joint p.m.f. between the code bit corresponding to the output and the input messages, $P^c_{out}(X, \mathbf{T})$, can be represented as

$$P^c_{out}(X, \mathbf{T}) = P_{v,0}(X, T) \circledast \dots \circledast P_{v,d_c-2}(X, T) \quad (7)$$
$$= P_v(X, T) \circledast \dots \circledast P_v(X, T) \quad (8)$$
$$\triangleq P_v(X, T)^{\circledast(d_c-1)}, \quad (9)$$

where $\mathbf{T}$ is a vector containing all $d_c - 1$ incoming messages. Eq. (9) gives the p.m.f. update when $F^c_{bp}$ is implemented at the check node.

To keep the cardinality of the external messages unchanged, $P^c_{out}(X, \mathbf{T})$ needs to be quantized to $2^m$ levels. As pointed out in [3], $|\mathbf{T}| = 2^{m(d_c-1)}$ becomes very large when $m$ and $d_c$ are large; for example, if $d_c = 8$ and $m = 4$, then $|\mathbf{T}| = 2^{28} \approx 2.7 \times 10^8$. Hence, directly quantizing $P^c_{out}(X, \mathbf{T})$ is impractical. To mitigate this cardinality explosion, we propose an intermediate coarse quantization algorithm, called One-Step Annealing (OSA), that sacrifices almost no mutual information. Note that Eq. (9) can be calculated recursively, with each step taking two inputs:

$$P_v(X, T)^{\circledast i} = P_v(X, T)^{\circledast(i-1)} \circledast P_v(X, T). \quad (10)$$

We observe that, in each step, the output of Eq. (10) has some entries with very close log-likelihood ratio (LLR) values. When entries whose LLR difference is small enough are merged, the mutual information loss is negligible. Hence, OSA simply merges entries whose LLR difference is less than a threshold $l_s$, and the output of OSA becomes the input of the next p.m.f. calculation step, i.e.,

$$P_v(X, T)^{\circledast i} = OSA\big(P_v(X, T)^{\circledast(i-1)}, l_s\big) \circledast P_v(X, T). \quad (11)$$

We take a small value of $l_s$ in our simulations. Fig. 6 shows an illustration of OSA, and a full description of the OSA algorithm is given in Algorithm 3. For the example with $m = 4$ and $d_c = 8$, OSA greatly decreases the output cardinality, and in our simulations the mutual information losses under the tested values of $l_s$ are all negligible.

Fig. 6. OSA illustration: points are ordered w.r.t. their LLR values; each color represents a cluster, and the LLR difference within each cluster is less than $l_s$.

For a regular LDPC code with check node degree $d_c$, HDQ is implemented to quantize $\mathbf{T}$ into an $m$-bit message. We denote the joint p.m.f. between the code bit $X$ and the quantized value $T$ by $P_c(X, T)$. As a result of HDQ, $Q^c$ and $R^v$ for this iteration are constructed.

Unlike regular LDPC codes, irregular LDPC codes have different node types; we denote the check node edge distribution by $\rho(x) = \sum_{i=2}^{d_{c,max}} \rho_i x^{i-1}$. To update $P_c(X, T)$ and construct $Q^c_k$ and $R^v_k$ for an irregular LDPC code, we need to quantize

$$P^c_{out}(X, \mathbf{T}) = \sum_{i=2}^{d_{c,max}} \rho_i\, P_c(X, T)^{\circledast(i-1)}. \quad (12)$$

Due to space limitations, we refer the reader to [6] for the Min-Sum operation. Note that the Min-Sum operation does not change the cardinality of the output, which implies, for ms-RCQ:
1) $m = n_c$.
2) $R^c$ is not required. We can map the $2^m$ messages to $\{-2^{m-1}, \dots, -1, 1, \dots, 2^{m-1}\}$ and then implement $F^c_{ms}$. We can also implement a single LUT to realize the Min-Sum operation.

B. MIM-DDE at variable node

A variable node sums the LLR messages from the channel observation and the neighboring check nodes.
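In the joint-p.m.f. domain, this summation of LLRs corresponds to an entrywise product of the joint p.m.f.s over a common code bit, normalized by the prior $P(X)$. A sketch under the usual uniform-prior assumption $P(X=0) = P(X=1) = 1/2$, using the same hypothetical list-of-lists representation `p[x][t] = P(X=x, T=t)` as above:

```python
def varnode_combine(pa, pb, px=(0.5, 0.5)):
    """Combine two joint p.m.f.s at a variable node: entrywise product over
    a common code bit X, divided by the prior P(X), over the product
    message alphabet (flattened to index ta * nb + tb)."""
    na, nb = len(pa[0]), len(pb[0])
    out = [[0.0] * (na * nb) for _ in range(2)]
    for x in range(2):
        for ta in range(na):
            for tb in range(nb):
                out[x][ta * nb + tb] = pa[x][ta] * pb[x][tb] / px[x]
    return out
```

One can check that the LLR of each output entry is the sum of the two input LLRs, matching the additive variable-node rule, and that the output remains a valid p.m.f. when both inputs have marginal $P(X)$ equal to the prior.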
By denoting

$$[P_{c,a} \boxdot P_{c,b}](X, T) \triangleq \frac{1}{P(X)} P_{c,a}(X, T_a)\, P_{c,b}(X, T_b), \quad (13)$$

the joint p.m.f. between the code bit $X$ and the incoming message combination $\mathbf{T}$, $P^v_{out}(X, \mathbf{T})$, for a variable node of degree $d_v$, can be expressed as

$$P^v_{out}(X, \mathbf{T}) = P_{ch}(X, T) \boxdot P_c(X, T)^{\boxdot(d_v-1)}. \quad (14)$$

Algorithm 3: One-Step Annealing (OSA)
  input: $P(X, Y)$, $X \in \{0,1\}$, $Y \in \{0, \dots, N-1\}$; $l_s$
  output: $P(X, T)$
  $j \leftarrow 0$
  $P(X_0, T_j) \leftarrow P(X_0, Y_0)$; $P(X_1, T_j) \leftarrow P(X_1, Y_0)$
  $l_a \leftarrow \log \frac{P(X_0, Y_0)}{P(X_1, Y_0)}$
  for $i \leftarrow 1$ to $N - 1$ do
    if $\log \frac{P(X_0, Y_i)}{P(X_1, Y_i)} - l_a \le l_s$ then
      $P(X_0, T_j) \leftarrow P(X_0, T_j) + P(X_0, Y_i)$; $P(X_1, T_j) \leftarrow P(X_1, T_j) + P(X_1, Y_i)$
    else
      $j \leftarrow j + 1$
      $P(X_0, T_j) \leftarrow P(X_0, Y_i)$; $P(X_1, T_j) \leftarrow P(X_1, Y_i)$
      $l_a \leftarrow \log \frac{P(X_0, Y_i)}{P(X_1, Y_i)}$
  end

Similarly, for an irregular LDPC code with variable node edge distribution $\lambda(x) = \sum_{i=2}^{d_{v,max}} \lambda_i x^{i-1}$, $P^v_{out}(X, \mathbf{T})$ is given by

$$P^v_{out}(X, \mathbf{T}) = P_{ch}(X, T) \boxdot \sum_{i=2}^{d_{v,max}} \lambda_i\, P_c(X, T)^{\boxdot(i-1)}. \quad (15)$$

$P^v_{out}(X, \mathbf{T})$ is then quantized to $2^m$ levels by HDQ. As a result of HDQ, the joint p.m.f. between the code bit $X$ and the quantized message $T$, $P_v(X, T)$, is updated. $Q^v$ for this iteration and $R^c$ for the next iteration can be built correspondingly. Note that the variable node also faces the cardinality explosion problem, so OSA is needed in each recursive step.

Thus, by implementing MIM-DDE, we can iteratively update $P_c(X, T)$ and $P_v(X, T)$ and build $Q^c_i$, $Q^v_i$, $R^c_i$, and $R^v_i$ for $i \in \{0, \dots, I_T - 1\}$.

In MIM-DDE, we only limit the precision of the external messages, i.e., $m$, and keep the internal messages, $n_c$ (only for bp-RCQ) and $n_v$, at full precision. To make the internal message precision finite, a uniform $n_c$ (or $n_v$) quantizer is required when implementing $F^c$ (or $F^v$).

V. SIMULATION AND DISCUSSION

In this section, we build RCQ decoders for the rate-1/2 IEEE 802.11 standard LDPC code. The code is irregular, with an edge distribution spanning four variable node degrees and two check node degrees; it has a fast encoding structure, so half of its variable nodes have degree 2. The $E_b/N_o$ used to design the RCQ decoders is 0.90 dB for both bp-RCQ and ms-RCQ, and $I_T$ is set to 50.

Fig. 7 shows the FER simulation results of bp-RCQ(4, $\infty$, $\infty$) and ms-RCQ(4, 4, $\infty$). For comparison, we give the performance of BP($\infty$) and Min-Sum($\infty$). The BP decoder performs best at low SNR, but an error floor appears at high SNR. The error floor is due to trapping sets, which result from the large number of degree-2 variable nodes. The waterfall of Min-Sum starts at a higher SNR, but Min-Sum is transparent to trapping sets that BP cannot overcome; this phenomenon was also observed in [16]. Interestingly, the same behavior is reflected in the RCQ decoders. When $E_b/N_o$ is low, compared with BP($\infty$), bp-RCQ(4, $\infty$, $\infty$) shows a degradation of less than 0.1 dB and ms-RCQ(4, 4, $\infty$) a degradation of around 0.1 dB. As $E_b/N_o$ increases, bp-RCQ(4, $\infty$, $\infty$) behaves similarly to BP($\infty$) and exhibits an error floor, whereas ms-RCQ(4, 4, $\infty$) outperforms BP. We collected noisy received words that BP could not decode in the error floor region and fed them to ms-RCQ; simulation shows that ms-RCQ decodes most of them.

Fig. 7. RCQ decoders with full-precision internal messages.

For practical use, we are more interested in ms-RCQ. Fig. 8 gives the FER performance of ms-RCQ decoders with different $n_v$, where the internal messages are uniformly quantized with the $n_v$ bits split between an integer part and a fraction part. In the low SNR region, ms-RCQ(4, 4, 12), ms-RCQ(4, 4, 10), and ms-RCQ(4, 4, 8) show increasing degradation relative to BP($\infty$) as $n_v$ decreases. In the high SNR region, all three ms-RCQ decoders outperform BP($\infty$).

Fig. 8. The effect of internal message quantization for 4-bit ms-RCQ.
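The uniform internal quantization just described, splitting $n_v$ bits between an integer and a fraction part, can be sketched as fixed-point rounding with saturation. The bit split and values below are illustrative only, not the exact configuration used in the simulations:

```python
def to_fixed_point(value, int_bits, frac_bits):
    """Uniformly quantize a real LLR onto a signed fixed-point grid.

    The sign bit is counted inside int_bits, so the total width is
    int_bits + frac_bits; the step size is 2^-frac_bits, and values
    outside the representable range saturate at the extremes.
    """
    step = 2.0 ** -frac_bits
    max_val = 2.0 ** (int_bits - 1) - step   # largest representable value
    min_val = -2.0 ** (int_bits - 1)         # most negative representable value
    q = round(value / step) * step           # round to the nearest grid point
    return max(min_val, min(max_val, q))
```

For example, with `int_bits=4` and `frac_bits=8` (a hypothetical 12-bit split), an LLR of 3.14159 rounds to 3.140625, while an LLR of 100 saturates to the largest representable value. Choosing the split trades dynamic range against resolution, which is what Fig. 8 probes by varying $n_v$.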
VI. CONCLUSION

In this work, HDQ is proposed to quantize a symmetric binary-input discrete channel into $m$-bit levels. We then use HDQ and MIM-DDE to construct the RCQ decoder. Unlike Mutual-Information-Maximization Quantized Belief Propagation (MIM-QBP), RCQ can approximate either belief propagation or Min-Sum decoding. We use an IEEE 802.11 standard LDPC code to show that the RCQ decoder works well even when the fraction of degree-2 variable nodes is large. Simulations show that a 4-bit ms-RCQ decoder delivers frame error rate (FER) performance within about 0.1 dB of full-precision belief propagation (BP) in the low SNR region. The RCQ decoder actually outperforms full-precision BP in the high SNR region because it overcomes elementary trapping sets that create an error floor under BP decoding.

REFERENCES
[1] S. K. Planjery, D. Declercq, L. Danjean, and B. Vasic, "Finite alphabet iterative decoders, part I: Decoding beyond belief propagation on BSC," Jul. 2012.
[2] F. J. C. Romero and B. M. Kurkoski, "LDPC decoding mappings that maximize mutual information," IEEE J. Sel. Areas Commun., vol. 34, no. 9, pp. 2391–2401, Sep. 2016.
[3] J. Lewandowsky and G. Bauch, "Information-optimum LDPC decoders based on the information bottleneck method," IEEE Access, vol. 6, pp. 4054–4071, 2018.
[4] M. Stark, J. Lewandowsky, and G. Bauch, "Information-optimum LDPC decoders with message alignment for irregular codes," Dec. 2018, pp. 1–6.
[5] M. Stark, L. Wang, R. D. Wesel, and G. Bauch, "Information bottleneck decoding of rate-compatible 5G-LDPC codes," Jun. 2019.
[6] M. Meidlinger, A. Balatsoukas-Stimming, A. Burg, and G. Matz, "Quantized message passing for LDPC codes," Nov. 2015, pp. 1606–1610.
[7] M. Meidlinger and G. Matz, "On irregular LDPC codes with quantized message passing decoding," Jul. 2017, pp. 1–5.
[8] J. K.-S. Lee and J. Thorpe, "Memory-efficient decoding of LDPC codes," in Proc. IEEE Int. Symp. Information Theory (ISIT), Sep. 2005, pp. 459–463.
[9] X. He, K. Cai, and Z. Mei, "Mutual information-maximizing quantized belief propagation decoding of LDPC codes," Apr. 2019.
[10] B. M. Kurkoski and H. Yagi, "Quantization of binary-input discrete memoryless channels," IEEE Trans. Inf. Theory, vol. 60, no. 8, pp. 4544–4552, Aug. 2014.
[11] M. Mankar, G. Asutkar, and P. Dakhole, "Reduced complexity quasi-cyclic LDPC encoder for IEEE 802.11n," International Journal of VLSI Design & Communication Systems (VLSICS), vol. 7, no. 5/6, 2016.
[12] I. Tal and A. Vardy, "How to construct polar codes," May 2011.
[13] N. Wong, E. Liang, H. Wang, S. V. S. Ranganathan, and R. D. Wesel, "Decoding flash memory with progressive reads and independent vs. joint encoding of bits in a cell," Dec. 2019, pp. 1–6.
[14] S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, no. 2, pp. 58–60, Feb. 2001.
[15] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
[16] W. Ryan and S. Lin, Channel Codes: Classical and Modern. Cambridge University Press, 2009.