DNC-Aided SCL-Flip Decoding of Polar Codes
Yaoyu Tao, Member, IEEE, and Zhengya Zhang, Senior Member, IEEE
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
Emails: {taoyaoyu, zhengya}@umich.edu

Abstract—Successive-cancellation list (SCL) decoding of polar codes is promising for practical adoption. However, its performance is not satisfactory at moderate code lengths. A variety of flip algorithms have been developed to address this problem. The key to a successful flip is accurately identifying error bit positions. However, state-of-the-art flip strategies, including heuristic and deep-learning-aided (DL-aided) approaches, are not effective at handling long-distance dependencies in sequential SCL decoding. In this work, we propose a new flip decoding aided by a differentiable neural computer (DNC). New action and state encodings are developed for better training and inference efficiency. The proposed method consists of two phases: i) a flip DNC (F-DNC) ranks the most likely flip positions for multi-bit flipping; ii) if multi-bit flipping fails, a flip-validate DNC (FV-DNC) re-selects error positions and assists successive single-bit flipping. Training methods are designed accordingly for the two DNCs. Simulation results show that the proposed DNC-aided SCL-Flip (DNC-SCLF) decoding effectively improves the error-correction performance and reduces the number of decoding attempts compared to prior works.
Index Terms—Polar codes, deep learning, successive cancellation list decoder, flip algorithms, differentiable neural computer
I. INTRODUCTION
Capacity-achieving polar codes [1] have been adopted in modern communication systems such as the 5th generation (5G) wireless standard. A polar code can be decoded sequentially on a trellis using the successive cancellation list (SCL) [2] decoder. Upon receiving log-likelihood ratios (LLRs), SCL calculates path metrics (PMs) in a bit-after-bit order. A list of the L most likely paths is kept during decoding, and the decoded bits are determined by the path with the highest PM. However, the decoding performance is not satisfactory at moderate code lengths N. Once wrong bit decisions occur on the SC trellis, they have no chance to be corrected due to the sequential decoding order.

To solve this problem, flip algorithms are applied when standard decoding fails the cyclic redundancy check (CRC): error positions are searched and flipped in new decoding attempts. Clearly, the key to successful flip decoding is accurately identifying error positions. Various heuristic methods have been proposed for this purpose. [3] flips the bits with small received LLR amplitudes. [4], [5] propose methods to reduce the search scope for lower complexity. [6] introduces a critical set of bits with a high probability of needing to be flipped. [7] develops a look-up table to store error patterns. [8] designs a new metric based on SCL to rank the error positions. Techniques like progressive flipping [6], partitioned flipping [9], and dynamic flipping [10], [11] have been proposed for flipping multiple bits at a time. All these methods aim to effectively locate error positions; however, the optimal flipping strategy remains an open problem.

Recent works on flip algorithms involve deep learning. [7], [12]–[14] use long short-term memory (LSTM) networks to help locate error positions for short polar codes of length 64 or 128. LSTM networks can deal with event sequences, but dependencies between distant events get diffused.
This presents a limitation in the accuracy of identifying error positions for longer code lengths.

The recently developed differentiable neural computer (DNC) [15] uses an external memory to help an LSTM store long-distance dependencies. It has shown advantages over traditional LSTMs when tackling highly complex sequence problems. In this paper, we adopt the DNC to solve the bit-flipping problem in SCL decoding, where complex long-distance dependencies between bits are embedded in the sequence. The main contributions are summarized as follows:
1) A two-phase decoding assisted by two DNCs, a flip DNC (F-DNC) and a flip-validate DNC (FV-DNC). The F-DNC ranks possible error positions and generates a flip set for multi-bit flipping. If decoding still fails, the FV-DNC is used to re-select error positions and assist successive single-bit flipping.
2) A new action encoding with a soft multi-hot scheme and a new state encoding considering both PMs and received LLRs, for better DNC training and inference efficiency. Training methods are designed accordingly for the two DNCs, with training databases generated from supervised flip decoding attempts.
3) Simulation results show that the proposed DNC-aided SCL-Flip (DNC-SCLF) decoder outperforms the state-of-the-art techniques by up to 0.19dB in error-correction performance, or reduces the number of decoding attempts by up to 44%.

II. BACKGROUND
A. SCL and SCL-Flip Decoding of Polar Code
An (N, K) polar code has a code length N and code rate K/N, where N = 2^n, n ∈ Z^+. Let u_0^{N-1} = (u_0, u_1, ..., u_{N-1}) denote the vector of input bits to the encoder. The K most reliable bits in u_0^{N-1}, called free bits A, are used to carry information, while the remaining N − K bits, called frozen bits A^c, are set to pre-determined values. SC is the basic decoding scheme of polar codes proposed in [1]. Let r_0^{N-1} denote the received LLRs. SC follows a bit-after-bit sequential order, and the decoding of a bit depends on previously decoded bits. These dependencies become complex and long-distance for long code lengths. SC keeps only the most likely path, i.e., the path with the highest PM. SCL decoding [2] improves the error-correction performance by keeping a list of L candidate paths, i.e., the paths with the L highest PMs. Concatenating the polar code with a cyclic redundancy check (CRC) [16], [17] helps pick the final path. CRC-aided polar SCL decoding is described by Algorithm 1; SC can be seen as the special case with list size L = 1.

Algorithm 1: CRC-SCL Decoding of (N, K) Polar Code
  List size L, L = {0, ..., L − 1}
  for i = 0, 1, ..., N − 1 do
    if i ∉ A then û_i(ℓ) ← u_i for all ℓ ∈ L
    else, for each û_i ∈ {0, 1} and each ℓ ∈ L:
      1) SC trellis: L_i(ℓ) = log [ Pr(u_i = 0 | r_0^{N-1}, û_0^{i-1}(ℓ)) / Pr(u_i = 1 | r_0^{N-1}, û_0^{i-1}(ℓ)) ]
      2) Path metric update: P_i(ℓ) = P_{i-1}(ℓ) + log(1 + e^{−(1 − 2û_i(ℓ)) L_i(ℓ)})
      3) Sort: continue along the L paths with the top P_i(ℓ)
  end
  ℓ* ← index of the most likely path that passes CRC
  return û_A(ℓ*)

An alternative approach to improving the error-correction performance of SC is to use flip algorithms. Upon a failed CRC of the initial SC decoding, T additional iterations are used to identify and flip error positions in subsequent SC attempts. The flip position set F for each attempt can be determined either by an explicit mathematical metric or by neural networks such as LSTMs. Heuristic methods like [3]–[6], [9] use the received r_0^{N-1} or their absolute values as the metric in SC-Flip (SCF) decoding. [10], [11] propose dynamic SC-Flip (DSCF) with a new metric considering not only the received r_0^{N-1} but also the sequential aspect of the SC decoder. DSCF allows flipping multiple bits at a time and improves the performance of SCF. [8] extends bit-flipping from SC to SCL and proposes SCL-Flip decoding (SCLF); similarly, SCF can be seen as the special case of SCLF with L = 1. Recently developed DL-based SCF/SCLF [7], [12]–[14] exploit a trained LSTM to locate error positions instead of explicit mathematical metrics. They have shown similar or slightly better performance than heuristic methods for short polar codes. Besides the limitation of LSTMs in dealing with longer code lengths, the action and state encoding as well as a good training strategy are crucial to achieving good performance.
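To make the path-metric update in step 2) of Algorithm 1 concrete, here is a minimal Python sketch (an illustration, not the authors' implementation), assuming the usual (1 − 2û_i) mapping of bit values to signs:

```python
import numpy as np

def pm_update(pm_prev, llr, u_hat):
    """Path-metric increment for one bit on one SCL path.

    pm_prev: path metric after bit i-1
    llr:     decision LLR L_i for bit i on this path
    u_hat:   hypothesized bit value (0 or 1)
    """
    # The penalty is near 0 when the hard decision agrees with the
    # LLR sign, and near |llr| when the path goes against the evidence.
    return pm_prev + np.log1p(np.exp(-(1 - 2 * u_hat) * llr))
```

For example, with a confident LLR of +5, extending the path with û_i = 0 adds almost no penalty, while û_i = 1 adds a penalty of roughly 5.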
Fig. 1. Differentiable neural computer architecture.
B. Differentiable Neural Computer (DNC)
The basic motivation behind the DNC is that LSTMs are not very efficient at complicated process executions that contain multiple computational steps and long-distance dependencies. The key behind the DNC is the use of an external memory. Since its invention, the DNC has found applications such as question answering [18], [19] and simple algorithmic tasks [20]. A DNC can be considered an LSTM controller augmented with an external memory. The DNC periodically receives an input vector x_t and produces an output vector y_t at time t; y_t is usually made into a probability distribution using a softmax.

A top-level architecture of the DNC is shown in Fig. 1. At time t, the DNC 1) reads an input x_t, 2) writes new information into the external memory using the interface vector v_t^c through the memory controller, 3) reads the updated memory M_t, and 4) produces an output y_t. Assume the external memory is a matrix of M_h slots, where each slot is a length-M_w vector. To interface with this external memory, the DNC computes read and write keys to locate slots; a memory slot is found using the similarity between the key and the slot content. This mechanism is known as content-based addressing. In addition, the DNC uses dynamic memory allocation and temporal memory linkage mechanisms for computing write and read weights. We omit the mathematical description of the DNC here; readers can refer to [15] for more details.

III. DNC FOR SCLF DECODING
Bit-flipping on the SC trellis can be modeled as a game in which the DNC is the player that decides which bits to flip toward successful decoding. Upon a CRC failure, the DNC player takes an action based on the current state: either reverting falsely flipped positions from the previous attempt, or adding more flip positions in the next attempt. The proposed DNC-aided methodology includes: 1) action and state encoding; and 2) a DNC-aided two-phase decoding flow.
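For illustration only, the content-based addressing at the heart of the DNC's memory access can be sketched in a few lines of numpy. The memory shape and the sharpening strength beta here are illustrative; the full DNC [15] adds dynamic allocation and temporal linkage on top of this mechanism:

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Content-based addressing: cosine similarity between a lookup key
    and each memory slot, sharpened by beta and softmax-normalized.

    memory: (M_h, M_w) matrix of slots; key: length-M_w lookup key.
    """
    eps = 1e-8
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    w = np.exp(beta * sim)
    return w / w.sum()            # weighting over the M_h slots

def dnc_read(memory, key, beta=10.0):
    """Read vector = weighted sum of slots under the addressing weights."""
    w = content_addressing(memory, key, beta)
    return w @ memory
```

With a key that matches one slot's content, nearly all of the read weight concentrates on that slot, which is how the controller retrieves long-distance state.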
A. Action and State Encoding
One of the keys to an efficient DNC is designing good input and output vectors for training and inference. We discuss the existing DL-based approaches [7], [12]–[14] and present a new encoding scheme.

1) Action Encoding: the one-hot scheme used in state-of-the-art LSTM-based flip algorithms is efficient at identifying the first error bit, but lacks the capability to flip multiple bits at a time, which results in more decoding attempts. To improve bit-flipping efficiency, we use a soft multi-hot (i.e., ω-hot) flip vector v_f to encode both the first error bit and subsequent error bits, aiming to correctly flip multiple bits in one attempt. v_f is a length-N vector with ω non-zero entries; an action is therefore encoded by v_f. Each possible flip position in v_f carries a soft value indicating the flip likelihood of that bit. For training purposes, we introduce a scaled logarithmic series distribution with parameter p to assign flip likelihoods to the ω error positions. The intention is to create a distribution with descending probabilities from the first error position to subsequent error positions, and to provide enough likelihood difference between them. Reference v_f generation for F-DNC training is discussed in detail in Section IV. Let I_F(k) denote the rank of bit position k in F; the non-zero entries of v_f are derived as (1):

v_f(k) = K · (−1 / ln(1 − p)) · p^{I_F(k)} / I_F(k)  for k ∈ F,   (1)

where the scaling factor K normalizes the ω entries of v_f to sum to one.
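Under the encoding above, a reference flip vector can be sketched as follows (a hedged illustration: the rank-to-weight mapping follows the scaled logarithmic series of (1), and the value of p is only an example):

```python
import numpy as np

def flip_vector(flip_positions, N, p=0.5):
    """Soft multi-hot flip vector v_f per Eq. (1) (sketch; p illustrative).

    flip_positions: error-bit indices in descending flip likelihood.
    The rank-k position gets weight ~ p^k / k (logarithmic series,
    k = 1, 2, ...), then the non-zero entries are scaled to sum to 1.
    """
    v = np.zeros(N)
    for rank, pos in enumerate(flip_positions, start=1):
        v[pos] = -1.0 / np.log(1.0 - p) * p**rank / rank
    return v / v.sum()        # scaling factor K normalizes the omega entries
```

Because p < 1, the weights p^k / k decay with rank, giving the first error position the largest soft value, as intended.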
2) State Encoding: a straightforward way to encode states is to directly use the received LLR sequence r_0^{N-1} or the survival path metrics P_0^{N-1}. [7], [12] use the amplitudes of the received LLRs as the LSTM input. [14] uses the amplitudes of the received LLRs combined with the syndromes generated by CRC for state encoding. However, these methods discard the path metric information of the sequential decoding, resulting in a loss in representing the error path selection probability. [13] proposes a state encoding based on the PM ratio of discarded and survival paths; however, this representation introduces extra computations into standard decoding for PM summations at each bit position, and it does not include the received LLR information. In this work, we introduce a new state encoding scheme using the PM gradients of the L survival paths concatenated with the received LLRs, taking both PMs and received LLRs into consideration. For ℓ ∈ L = {0, ..., L − 1}, the PM gradients ∇P_0^{N-1}(ℓ) are described in (2):

∇P_i(ℓ) = log(1 + e^{−(1 − 2û_i(ℓ)) L_i(ℓ)}),  i = 0, ..., N − 1.   (2)

Note that ∇P_i(ℓ) is already calculated in step 2) of Algorithm 1, hence it can be taken directly from the existing SCL decoder without extra computations. The state encoding S is then the vector in (3) and is used as the DNC input in this work:

S = {∇P_0^{N-1}(ℓ), r_0^{N-1}}.   (3)
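A minimal sketch of assembling the state encoding S from quantities an SCL decoder already produces (array shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def state_encoding(pm, llr):
    """State S = {grad P_0^{N-1}(l), r_0^{N-1}} per Eq. (3) (a sketch).

    pm:  (L, N) cumulative path metrics of the L survival paths
    llr: length-N received channel LLRs
    """
    # PM gradients are the per-bit increments of Eq. (2); since the SCL
    # decoder accumulates P_i = P_{i-1} + increment, a difference along
    # the bit axis recovers them without extra computation.
    grad = np.diff(pm, axis=1, prepend=0.0)   # grad P_i = P_i - P_{i-1}
    return np.concatenate([grad.ravel(), llr])
```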
B. DNC-Aided Two-Phase Decoding Flow

We design a new two-phase flip decoding flow for the CRC-SCL decoder, aiming to reduce the number of SCL attempts while still achieving good error-correction performance.
Fig. 2. DNC-aided two-phase flip decoding (ω = 3 case)

The two phases in this flow are: i) multi-bit flipping and ii) successive single-bit flipping. In the first phase, the received symbols are first decoded with a standard decoder. If the decoding fails CRC, a flip DNC (F-DNC) exploits the state encoding S to score the actions, i.e., estimate the probability of each bit being an error bit, and outputs a flip vector v_f. Fig. 2 shows an example with ω = 3, where F = {22, 9, 29} is the flip position set in descending likelihood order. To avoid wrong flips of subsequent positions with insignificant flip likelihoods, an α-thresholding is applied to keep only the positions with v_f > α for multi-bit flipping. A subsequent decoding attempt is then carried out with these bit positions flipped.

If CRC still fails after multi-bit flipping, we enter Phase-II, which successively flips a single bit position at a time. The reasons for a failed Phase-I decoding are either: 1) the first error bit position is wrong; or 2) the first error bit position is right but subsequent flip positions are wrong. Our solution is to flip each possible flip position one at a time and use a flip-validate DNC (FV-DNC) to confirm whether it is a correct flip before moving to the next possible flip position. The first attempt in Phase-II flips the highest-ranked error position in F, i.e., bit 22 in the example shown in Fig. 2. If FV-DNC invalidates the single-bit flip (bit 22 in this case), we discard bit 22 and re-select the flip position to the next bit in F.
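The successive single-bit flipping just described can be sketched as follows; scl_decode, crc_pass, and fv_dnc are hypothetical stubs standing in for the SCL decoder, the CRC check, and the trained FV-DNC, not the authors' API:

```python
def phase2_successive_flip(F, scl_decode, crc_pass, fv_dnc):
    """Sketch of Phase-II successive single-bit flipping.

    F: flip positions from F-DNC in descending likelihood.
    scl_decode(flips) -> (u_hat, state); crc_pass(u_hat) -> bool;
    fv_dnc(state) -> "continue" or "re-select".
    """
    Q = [F[0]]                        # start with the top-ranked position
    for i in range(len(F) - 1):
        u_hat, state = scl_decode(Q)
        if crc_pass(u_hat):
            return u_hat
        if fv_dnc(state) == "continue":
            Q.append(F[i + 1])        # confirmed flip: extend the queue
        else:
            Q[-1] = F[i + 1]          # trapping state: replace last flip
    u_hat, _ = scl_decode(Q)
    return u_hat if crc_pass(u_hat) else None
```

The loop performs at most ω decoding attempts, matching the bound on Phase-II stated above.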
Fig. 3. Flip attempts in Phase-II for different FV-DNC output combinations (ω = 3 case)

Alternatively, if FV-DNC confirms the flip of bit 22, we continue by adding bit 9 into the flip queue Q_f and flip Q_f = {22, 9} in the next attempt. The process runs successively until CRC passes or the end of F is reached. Fig. 3 shows all possible flip combinations given the different FV-DNC output combinations in the ω = 3 case. The number of decoding attempts in Phase-II is bounded by ω. The two-phase DNC-SCLF can be described as Algorithm 2.
Algorithm 2: Two-Phase DNC-Aided SCL-Flip Decoding
  û_0^{N-1}, S ← SCL(r_0^{N-1})
  if CRC(û_0^{N-1}) = success then return û_0^{N-1}
  Phase-I: Multi-bit Flipping
    F, ω, v_f ← F-DNC(S)
    û_0^{N-1} ← SCL(r_0^{N-1}, F_{v_f ≥ α})
    if CRC(û_0^{N-1}) = success then return û_0^{N-1}
  Phase-II: Successive Single-bit Flipping
    Q_f = {F[0]}
    for i = 0, 1, ..., ω − 1 do
      û_0^{N-1}, S ← SCL(r_0^{N-1}, Q_f)
      if CRC(û_0^{N-1}) = success or i = ω − 1 then return û_0^{N-1}
      R ← FV-DNC(S)
      if R = continue then Q_f = {Q_f, F[i + 1]}
      else Q_f[end] = F[i + 1]
    end

IV. TRAINING METHODOLOGY
In this section, we discuss training for the two DNCs used in the proposed DNC-SCLF. Training is conducted off-line and does not increase the run-time decoding complexity. We adopt the cross-entropy loss function, which is widely used in classification tasks [21]. The training method involves 1) training F-DNC to identify error positions; and 2) training FV-DNC to validate single-bit flips.

TABLE I
F-DNC/FV-DNC HYPER-PARAMETER SET
  Parameter — Description
  LSTM controller — 1 layer of size 256
  Access heads — 1 write head, 4 read heads
  External memory — M_h = 512, M_w = 128
  Training set size — … for F-DNC, …× for FV-DNC
  Validation set size — …
  Mini-batch size — 100
  Dropout probability — 0.05
  Optimizer — Adam
  Environment — TensorFlow 1.14.0 on Nvidia GTX 1080Ti

In the first training stage, we run extensive SCL decoder simulations and collect error frames upon CRC failure. The F-DNC training database consists of pairs of S from (3) as the DNC input and the corresponding v_f from (1) as the reference output. S can be straightforwardly derived from the received LLRs and PMs of the collected error frames. However, v_f is determined by the parameters ω and p, whose values affect the training and inference efficiency. We first label the error positions w.r.t. the transmitted sequence of each sample as candidate flip positions. Intuitively, a small ω and p strengthen the likelihood of identifying the first error position, but attenuate the likelihoods of subsequent error positions. Hence there is a trade-off between the accuracy of identifying the first error position and the accuracy of identifying subsequent error positions. In this work, we carry out reference v_f generation with ω = {2, 5, 10} and two values of p. The experimental results with these parameter choices are discussed in Section V.

The error frames that cannot be decoded correctly in Phase-I enter Phase-II, where single bit positions are flipped and tested successively as in Fig. 3. This prevents wrong flips that would lead the DNC player into a trapping state from which it can never recover. FV-DNC is a classifier choosing either a "re-select" or a "continue" action given the knowledge of received LLRs and PMs from the most recent attempt.
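As a sketch of how one F-DNC training pair could be assembled from a collected error frame (function and parameter names are illustrative, not the authors' code; omega and p are example hyper-parameters):

```python
import numpy as np

def fdnc_training_pair(u_true, u_hat, state, N, omega=5, p=0.5):
    """Build one supervised F-DNC sample (a sketch).

    u_true/u_hat: transmitted and decoded bit vectors of a CRC-failing frame
    state:        the recorded state encoding S for this frame
    Returns (input S, reference output v_f) for cross-entropy training.
    """
    # Label error positions w.r.t. the transmitted sequence; keep the
    # first omega of them as candidate flip positions.
    errors = np.flatnonzero(u_true != u_hat)[:omega]
    v_f = np.zeros(N)
    for rank, pos in enumerate(errors, start=1):
        v_f[pos] = -1.0 / np.log(1.0 - p) * p**rank / rank  # Eq. (1) weights
    if v_f.sum() > 0:
        v_f /= v_f.sum()                                    # normalize
    return state, v_f
```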
The key for FV-DNC training is to create a well-categorized database that can detect trapping states effectively. To generate the FV-DNC training database, we carry out supervised flip decoding attempts based on the reference v_f in the F-DNC database. For each collected error frame: 1) the first 5 error positions in the reference v_f are flipped successively, bit after bit, and their corresponding state encodings S are recorded; these samples result in a "continue" action. 2) After flipping each of the first 5 error positions, we flip 5 random positions not in F and record their state encodings S; these samples indicate a trapping state and result in a "re-select" action. Hence, for each collected frame, we have 5 samples for the "continue" action and 25 samples for the "re-select" action.

Fig. 4. Rate of identifying error positions for ω = {2, 5, 10} and two values of p for SC decoding of (256, 128) polar code

V. EXPERIMENTS AND ANALYSIS
To fully show the competitiveness of the DNC in dealing with long-distance dependencies on the polar SC trellis, we evaluate the performance for polar codes of length N = 256 and 1024 with SC and SCL (L = 4) decoding. The code rate is set to 1/2 with a 16b CRC. Error frames are collected at 2dB SNR for both training and testing. In this paper, we do not focus on training parameter optimization; Table I simply lists a set of configurations and hyper-parameters for F-DNC and FV-DNC that worked throughout our experiments.

First, we study the effects of the parameters ω and p introduced for F-DNC. Fig. 4 presents the accuracy of identifying the first 5 error positions for code length N = 256 and SC decoding. For a given ω, a smaller p enhances the probability of identifying the first error position, but attenuates the probabilities of identifying subsequent error positions. We achieve up to a 0.573 success rate of identifying the first error position with ω = 2, outperforming the 0.51 and 0.425 success rates achieved for an even shorter code length of 128b by LSTM-based SCF [12] and heuristic-based DSCF [11], respectively. On the other hand, comparing ω = 2 and ω = 5 at the same p, a bigger ω helps to identify more error positions, but the success rate of identifying each position degrades.

We pick the larger value of p in our two-phase DNC-SCLF experiments to strengthen the success rates of identifying subsequent error positions, slightly sacrificing the rate of identifying the first error position. This is because, with the help of FV-DNC, even if F-DNC does not identify the first error position correctly in multi-bit flipping, the two-phase decoding can re-select it in successive single-bit flipping. We use a fixed threshold α throughout our experiments. Let β be the rate of successful decoding with multi-bit flipping in Phase-I; the average number of decoding attempts T_avg for DNC-SCLF can then be calculated by (4) below:
Fig. 5. Number of extra decoding attempts of DNC-SCF and state-of-the-art flipping algorithms for (1024, 512) polar code

T_avg = β + ω̄ (1 − β),   (4)

where ω̄ is the average number of attempts in Phase-II and ω̄ ≤ ω. Fig. 5 presents T_avg for the proposed DNC-SCF and the state-of-the-art techniques.

We first compare DNC-SCF with the state-of-the-art heuristic method [11] and LSTM-based methods [12], [14] for a (1024, 512) polar code and 16b CRC. For a fair comparison, we compare the FER of DNC-SCF and DSCF [11] with optimized metric parameters and T = 10 at the same target FER. DNC-SCF ω = 2 achieves a 0.5dB coding gain w.r.t. the SC decoder. Increasing ω to 5 provides another 0.3dB coding gain over DNC-SCF ω = 2. DNC-SCF ω = 5 also outperforms DSCF T = 10 by 0.06dB, while reducing the number of extra attempts by 44% at 2dB SNR. Further increasing ω to 10 provides a 0.19dB coding gain over DSCF T = 10 while reducing the number of decoding attempts by 18.9% at 2dB SNR.

The LSTM-based approach in [12] did not report FER performance, but showed up to a 10% improvement in the accuracy of identifying the first error position over DSCF with T = 1 at 1dB SNR for a (64, 32) polar code. The estimated FER of [12] with 1024b and T = 10 would be close to DNC-SCF ω = 5. Another LSTM-based SCF [14] provides FER for a (64, 32) polar code with T = 6 and claims a 0.2dB improvement over DSCF T = 6. The estimated FER of [14] with 1024b and T = 10 would be close to DNC-SCF ω = 10 in Fig. 6. Note that the assumption that the FER improvement holds for a longer polar code of 1024b is optimistic, because an LSTM's capability of identifying error positions usually weakens drastically as the code length grows.

We further compare DNC-SCLF (L = 4) on a (256, 128) polar code and 16b CRC with the state-of-the-art heuristic method [8] and LSTM-based approaches [7], [13]. Fig. 7 demonstrates the FER of DNC-SCLF (L = 4) with ω = {2, 5, 10}.
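Equation (4) is simple enough to sanity-check numerically; a small helper (illustrative only):

```python
def avg_attempts(beta, omega_bar):
    """Average number of flip decoding attempts per Eq. (4): one multi-bit
    attempt succeeding with rate beta, otherwise on average omega_bar
    further single-bit attempts in Phase-II."""
    return beta + omega_bar * (1.0 - beta)
```

For example, β = 0.5 and ω̄ = 4 give T_avg = 0.5 + 4 × 0.5 = 2.5 attempts, so a higher Phase-I success rate β directly cuts the average attempt count.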
Fig. 6. FER performance comparison of DNC-SCF and state-of-the-art flipping algorithms for (1024, 512) polar code and 16b CRC

Fig. 7. FER performance comparison of DNC-SCLF (L = 4) and state-of-the-art flipping algorithms for (256, 128) polar code and 16b CRC

At the target FER, DNC-SCLF ω = 2 achieves a 0.26dB coding gain w.r.t. standard SCL. Increasing ω to 5 and 10 results in 0.49dB and 0.55dB coding gains over standard SCL, respectively. Compared with [8] and [7], DNC-SCLF ω = 5 achieves 0.13dB and 0.07dB better performance than SCLF and LSTM-SCLF with T = 10, respectively. The proposed DNC-SCLF demonstrates better FER performance with a reduced number of decoding attempts.

VI. CONCLUSIONS
In this paper, we present a new DNC-aided SCL-Flip decoding. We propose a two-phase decoding assisted by two DNCs, F-DNC and FV-DNC, to identify error positions and to validate or re-select error positions in successive single-bit flipping, respectively. Multi-bit flipping reduces the number of flip decoding attempts, while successive single-bit flipping lowers the probability of entering a trapping state. Training methods are proposed accordingly to efficiently train F-DNC and FV-DNC. This strategy provides a new way to exploit the DNC, an advanced variant of deep learning, to assist decoding algorithms. Simulation results show that the proposed DNC-SCLF identifies error bits more accurately, achieving better error-correction performance and reducing the number of flip decoding attempts compared to the state-of-the-art flip algorithms.

REFERENCES
[1] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
[2] I. Tal and A. Vardy, "List decoding of polar codes," in Proc. IEEE Int. Symp. Information Theory (ISIT), Jul. 2011, pp. 1–5.
[3] O. Afisiadis, A. Balatsoukas-Stimming, and A. Burg, "A low-complexity improved successive cancellation decoder for polar codes," in Proc. Asilomar Conf. Signals, Systems and Computers, Nov. 2014, pp. 2116–2120.
[4] C. Condo, F. Ercan, and W. J. Gross, "Improved successive cancellation flip decoding of polar codes based on error distribution," in Proc. IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW), Apr. 2018, pp. 19–24.
[5] F. Ercan, C. Condo, and W. J. Gross, "Improved bit-flipping algorithm for successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 67, no. 1, pp. 61–72, Jan. 2019.
[6] Z. Zhang, K. Qin, L. Zhang, H. Zhang, and G. T. Chen, "Progressive bit-flipping decoding of polar codes over layered critical sets," in Proc. IEEE Global Communications Conf. (GLOBECOM), Dec. 2017, pp. 1–6.
[7] X. Liu, S. Wu, Y. Wang, N. Zhang, J. Jiao, and Q. Zhang, "Exploiting error-correction-CRC for polar SCL decoding: A deep learning based approach," IEEE Transactions on Cognitive Communications and Networking, 2019.
[8] F. Cheng, A. Liu, Y. Zhang, and J. Ren, "Bit-flip algorithm for successive cancellation list decoder of polar codes," IEEE Access, vol. 7, pp. 58346–58352, 2019.
[9] F. Ercan, C. Condo, S. A. Hashemi, and W. J. Gross, "Partitioned successive-cancellation flip decoding of polar codes," in Proc. IEEE Int. Conf. Communications (ICC), May 2018, pp. 1–6.
[10] L. Chandesris, V. Savin, and D. Declercq, "An improved SCFlip decoder for polar codes," in Proc. IEEE Global Communications Conf. (GLOBECOM), Dec. 2016, pp. 1–6.
[11] L. Chandesris, V. Savin, and D. Declercq, "Dynamic-SCFlip decoding of polar codes," IEEE Transactions on Communications, vol. 66, no. 6, pp. 2333–2345, Jun. 2018.
[12] X. Wang, H. Zhang, R. Li, L. Huang, S. Dai, Y. Yourui, and J. Wang, "Learning to flip successive cancellation decoding of polar codes with LSTM networks," arXiv:1902.08394, Feb. 2019.
[13] C.-H. Chen, C.-F. Teng, and A.-Y. Wu, "Low-complexity LSTM-assisted bit-flipping algorithm for successive cancellation list polar decoder," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), May 2020.
[14] B. He, S. Wu, Y. Deng, H. Yin, J. Jiao, and Q. Zhang, "A machine learning based multi-flips successive cancellation decoding scheme of polar codes," 2020, pp. 1–5.
[15] A. Graves, G. Wayne, M. Reynolds et al., "Hybrid computing using a neural network with dynamic external memory," Nature, vol. 538, pp. 471–476, Oct. 2016.
[16] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, Oct. 2012.
[17] B. Li, H. Shen, and D. Tse, "An adaptive successive cancellation list decoder for polar codes with cyclic redundancy check," IEEE Communications Letters, vol. 16, no. 12, pp. 2044–2047, Dec. 2012.
[18] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus, "End-to-end memory networks," in Advances in Neural Information Processing Systems 28, 2015, pp. 2440–2448.
[19] A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher, "Ask me anything: Dynamic memory networks for natural language processing," in Proc. 33rd Int. Conf. Machine Learning, vol. 48, Jun. 2016, pp. 1378–1387.
[20] A. Graves, G. Wayne, and I. Danihelka, "Neural Turing machines," arXiv:1410.5401, 2014.
[21] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 2015.