Error Estimation, Error Correction and Verification in Quantum Key Distribution
Øystein Marøy, Magne Gudmundsen, Lars Lydersen, Johannes Skaar
October 25, 2012

Department of Electronics and Telecommunications, Norwegian University of Science and Technology, Trondheim, Norway. Electronic address: [email protected]
Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
Abstract
We consider error correction in quantum key distribution. To avoid that Alice and Bob unwittingly end up with different keys, precautions must be taken. Before running the error correction protocol, Alice and Bob normally sacrifice some bits to estimate the error rate. We show that a large number of bits must be sacrificed to reduce the probability that they end up with different keys to an acceptable level. Instead, if Alice and Bob can make a good guess about the error rate before the error correction, they can verify that their keys are similar after the error correction protocol. This verification can be done by utilizing properties of the Low Density Parity Check codes used in the error correction. We compare the methods and show that with verification it is often possible to sacrifice fewer bits without compromising security. The improvement is heavily dependent on the error rate and the block length, but for a key produced by the IdQuantique system Clavis, the increase in the key rate is approximately 5 percent. We also show that for systems with large fluctuations in the error rate a combination of the two methods is optimal.

Introduction
Quantum Key Distribution (QKD) [1] is a method to distribute a secret key between two parties, Alice and
Bob, through a quantum channel. An eavesdropper Eve is allowed full control over the channel. After the communication through the quantum channel, Alice and Bob reconcile their keys using an error correction protocol. Using a privacy amplification protocol [2, 3], any information Eve might have about the key is removed. The unconditional security of the entire protocol can be proven using the laws of quantum mechanics [4, 5, 6].

For practical QKD, the secret key rate is an important factor. The main limitations on the key rate are the transmission efficiency of the quantum channel and the performance of the detectors at the receiving end of the channel, especially detector dead time. Developing better equipment is therefore important for making QKD a viable alternative for secure communication. However, it is also possible to increase the key rate by more efficient error correction and privacy amplification protocols.

Due to imperfect equipment and Eve's possible actions during the distribution phase, errors between Alice's and Bob's keys are inevitable. Thus they need to do error correction, ending up with identical keys. This is done by classical communication on an authenticated channel. Because this communication reveals some information about the key, either the communication must be encrypted using previously established key, or additional privacy amplification must be used. Thus it is important to have an effective error correction protocol, revealing as little information about the key as possible. Assuming a block of N bits, containing Nδ errors, the number of bits L lost in error correction is lower bounded by the Shannon limit [7]. For a perfect protocol, working at the Shannon limit, we have

    L = N h(δ).  (1)

Here h(·) is the binary entropy function h(p) = −p log p − (1 − p) log(1 − p).
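As a concrete illustration of (1), the following Python sketch (our own, not part of the original protocol descriptions) evaluates the Shannon-limit loss for a block; the function names and example numbers are illustrative choices.

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def shannon_limit_loss(N, delta):
    """Minimum number of bits lost to error correction, L = N h(delta), eq. (1)."""
    return N * binary_entropy(delta)

# Example: a 10^6-bit block with 2% errors costs at least ~141,000 bits.
print(shannon_limit_loss(10**6, 0.02))
```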
Error correction

Error correction in QKD is generally done by exchange of parity information about Alice's and/or Bob's keys. For processing purposes the key is divided into blocks of N bits, on which error correction is performed while the next block is distributed on the quantum channel. Different protocols can be used for error correction, the most popular being CASCADE [8]. Of significant interest are also protocols using Low Density Parity Check (LDPC) codes [9, 10]. Using the technique of Density Evolution [11] it is possible to construct error correcting codes performing extremely close to the Shannon limit [12]. In addition to being efficient, error correction protocols based on LDPC codes have another advantageous property. Let d_min be the minimum Hamming distance between two codewords in the code, i.e. the minimal number of bit flips needed to turn one codeword into another. Then Alice's and Bob's keys differ in at least d_min bits if the error correction protocol completes without being successful. Finding d_min for a code is not solvable in polynomial time, but one can find a lower bound. A linear code cannot correct more than d_min/2 errors. If the code performs at the Shannon limit this gives

    d_min = 2Nδ.  (2)

Note that for optimal efficiency a different code is needed for each error rate. Because creating good codes is computationally demanding, and therefore a time consuming task, a running QKD system would need a large set of preestablished codes, each optimized for a different error rate.

Both CASCADE and LDPC based protocols require an estimate of the error rate. This error estimation is often done by random sampling: Alice and Bob publicly announce some random bit pairs from their keys to estimate the error rate. However, the estimation can also be done without sacrificing bits. For example, in both protocols the error rate of the previous block is known to Alice and Bob, and can be used as an estimate.

To make sure that all errors have been corrected, Alice and Bob can verify whether their keys are identical. This verification can be done by exchanging parity information [13, 14]. Given V parity sums announced from a key with at least one error, a very good approximation for the probability of an undetected error is

    p_U|E = (1/2)^V.  (3)

As an alternative, we propose to exploit the minimum distance of LDPC codes as follows: after error correction, Alice and Bob publicly announce V randomly selected bit pairs. Since any non-identical keys have at least d_min errors, the probability of not finding any errors, given that there exist some errors, is given by

    p_U|E ≤ (1 − d_min/N)^V ≤ (1 − 2δ)^V.  (4)

This method is simpler and less computationally demanding than exchanging parities, but more verification bits are needed to reach the same p_U|E.

If the actual error rate for a given block is larger than the estimate, Bob might end up with a wrong final key. Thus one should add a buffer ∆ to the original estimate when running the protocol. The chosen value for ∆ depends on the uncertainty in the error estimate and the consequences of decoding to a wrong codeword. If the key is only used to encrypt information going from Alice to Bob, Bob having the wrong key only makes Alice's message unreadable. On the other hand, Bob's key is not necessarily covered by security proofs if it differs from Alice's, so using it to encrypt data would be a breach of security.
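To illustrate the difference between the two verification methods, here is a minimal Python sketch of bounds (3) and (4); the parameter values are illustrative only, and we assume d_min = 2Nδ as in (2).

```python
def p_undetected_parity(V):
    """Bound (3): each announced parity sum misses an existing error with prob. 1/2."""
    return 0.5 ** V

def p_undetected_min_distance(V, delta):
    """Bound (4): with d_min = 2*N*delta, each announced bit pair misses the
    >= d_min differing positions with probability at most 1 - 2*delta."""
    return (1.0 - 2.0 * delta) ** V

# Example: for delta = 0.03 and V = 100, parity exchange gives ~8e-31 while
# the minimum-distance method only gives ~2e-3, so it needs more bits.
print(p_undetected_parity(100))
print(p_undetected_min_distance(100, 0.03))
```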
We can now find expressions for the number of bits lost in error correction with error estimation by random sampling (EERS), and with verification. Assume that block i has Nδ_i errors. Let ε be an upper bound for the probability that the error correction step fails in such a way that Alice and Bob unwittingly end up with different codewords. This bound should be valid under any circumstances and arbitrary attacks by Eve. We assume that the error correction protocol used is based on LDPC codes, and for simplicity we assume that it performs at the Shannon limit for any error rate.
Error estimation by random sampling (EERS):

Random sampling of S bits gives an estimated error rate δ_S, which is approximately binomially distributed with mean δ_i and variance σ_S² = δ_i(1 − δ_i)/S. The loss in the error estimation and error correction is given by

    L_S = S + (N − S) h(δ_S + ∆_S),  (5)

with ∆_S being the buffer parameter. Assuming that sampling only makes a negligible change in the error rate of the N − S remaining bits, the probability of an undetected error is bounded by

    p_U = P(δ_S + ∆_S < δ_i) ≤ max_{δ_i} P(δ_S + ∆_S < δ_i).  (6)

The maximization over all possible values for δ_i is necessary since we have no a priori information about the error rate. Using the normal distribution as an approximation for the binomial we get

    p_U ≤ max_{δ_i} Φ(−∆_S/σ_S) = (1/2)(1 − erf(∆_S √(2S))),  (7)

with Φ being the cumulative normal distribution function. A lower bound for S such that P(δ_S + ∆_S < δ_i) < ε is then

    S ≥ (1/2)(erf⁻¹(1 − 2ε)/∆_S)².  (8)
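A short sketch (our own) of the sample-size bound (8) and the loss (5), using the inverse error function from scipy; the example values for ε and ∆_S are illustrative only.

```python
import math
from scipy.special import erfinv

def binary_entropy(p):
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def min_sample_size(epsilon, Delta_S):
    """Lower bound (8): smallest S such that P(delta_S + Delta_S < delta_i) < epsilon."""
    return 0.5 * (erfinv(1.0 - 2.0 * epsilon) / Delta_S) ** 2

def eers_loss(N, S, delta_S, Delta_S):
    """Loss (5): S sampled bits plus Shannon-limit leakage on the remaining bits."""
    return S + (N - S) * binary_entropy(delta_S + Delta_S)

# Example: epsilon = 1e-10 and a buffer Delta_S = 0.005 already require
# roughly 4e5 sampled bits.
print(min_sample_size(1e-10, 0.005))
```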
Verification:

Assume that Alice and Bob use the error rate of the previous block, δ_V = δ_{i−1}, plus a buffer parameter ∆_V, as their estimate for the error rate. Assuming the worst case scenario, p_U|E = ε, the loss is given by

    L_V = (p_E − ε)N + (1 − p_E + ε)(V + N h(δ_{i−1} + ∆_V)),  (9)

with V being the number of bits used in the verification step, and p_E being the probability that Bob's raw key is transformed into the wrong codeword by the error correction protocol.

Utilizing the minimum distance between codewords, the probability of an error not being detected is given by (4). The probability of an undetected error is p_U = p_U|E · p_E. Since we do not know Eve's actions we have no certain knowledge about the block error rate δ_i, and therefore we cannot bound p_E. Thus

    p_U ≤ (1 − d_min/N)^V.  (10)

Note that this is independent of the actual error rate. Using (2) we find a lower bound for V to ensure that p_U ≤ ε:

    V ≥ log(ε)/log(1 − d_min/N) = log(ε)/log(1 − 2(δ_V + ∆_V)).  (11a)

As noted we can also do the verification by parity exchange. The number of bits used in verification is then, using (3),

    V ≥ log(ε)/log(1/2).  (11b)

For a system where every bit has the same a priori probability of being an error, δ_i and δ_{i−1} are both normally distributed with mean δ and variance σ². In that case we have

    p_E = P(δ_i ≥ δ_V + ∆_V) = Φ(−∆_V/(√2 σ)),  (12)

which we can use in (9) to find the total loss.
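The total verification loss can then be assembled from (9), (11a) and (12). The following sketch is our own, with hypothetical parameter values; it assumes d_min/N = 2(δ_V + ∆_V) as in (11a).

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def Phi(x):
    """Cumulative normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def verification_loss(N, delta_prev, Delta_V, sigma, epsilon):
    """Loss (9), with V from (11a) and p_E from (12)."""
    # Verification bits, bound (11a), with d_min/N = 2*(delta_V + Delta_V):
    V = math.ceil(math.log(epsilon) / math.log(1.0 - 2.0 * (delta_prev + Delta_V)))
    # Probability that the true block error rate exceeds the estimate, (12):
    p_E = Phi(-Delta_V / (math.sqrt(2.0) * sigma))
    # Worst case p_{U|E} = epsilon, as in (9):
    return (p_E - epsilon) * N + (1.0 - p_E + epsilon) * (
        V + N * binary_entropy(delta_prev + Delta_V))

# Example: N = 10^6, previous-block error rate 3%, buffer 0.002, sigma = 5e-4.
print(verification_loss(10**6, 0.03, 0.002, 5e-4, 1e-10))
```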
Numerical results

For performance analysis, we first consider a system running with mean error rate δ and variance between block error rates σ² = δ(1 − δ)/N, i.e. all variance is due to the inherent randomness of the bit values. The loss in error correction is then dependent on three parameters: the error rate δ, the security parameter ε and the block size N.
Figure 1: Loss ratio for different error rates. N = 10^…, ε = 10^−….

We can minimize the loss from the error correction, (5) and (9), for different δ, ε and N, with respect to the buffer parameters ∆_j, j = S, V. Note that when running the error correction protocol, the value of the buffer parameter is chosen according to the estimates δ_S and δ_V, not the error rate δ. Since these estimates are not exact, one will generally choose a suboptimal value for ∆_j, resulting in slightly larger losses than the ones shown in the following results. Also note that the possibility of choosing a suboptimal value for ∆_j is accounted for in the security analysis in the previous section.

We define the excessive loss ratio L_Ej to be

    L_Ej = L_j/N − h(δ),  j = S, V.  (13)

Figure 1 shows that the excessive loss ratio is lower for verification than for EERS for all error rates δ. We also see that the difference between the two methods of verification is small compared to the difference between error estimation and verification, especially for large δ. Since the difference is close to negligible, we consider verification utilizing the minimum distance between codewords in the rest of the discussion. All results also apply to verification by parity exchange unless noted otherwise.

Figure 2: Optimal value of the buffer parameter ∆ for different error rates. N = 10^…, ε = 10^−….

There are two main terms contributing to the difference, both related to the security parameter ε. As mentioned, the probability of undetected errors, bounded by ε, might be of critical importance to the security of the protocol. If we use error estimation we must have a high buffer parameter ∆_S to avoid such errors. However, if we use verification, we have an efficient method to find errors after the error correction. The main purpose of ∆_V is then not to avoid all errors, but only to keep the error probability p_E low, to avoid many blocks being thrown away. We can then choose a buffer parameter ∆_V < ∆_S even though our estimate δ_{i−1} is less reliable than δ_S. Optimal values for ∆_V and ∆_S are shown in Figure 2.

The other reason that verification has a smaller excessive loss than EERS is that to keep ∆_S from growing too large we must use a large sample size S. This sample is much larger than the number of bits V used for verification. Actually, as seen in Figure 3, V does not give a significant contribution to the excessive loss unless we are using the minimum-distance approach on a raw key with very small δ.
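The minimization over the buffer parameter can be reproduced with a simple grid scan. The sketch below (our own, with illustrative parameters) evaluates the excessive loss ratio (13) for EERS over a range of ∆_S; the same scan applies to the verification loss (9).

```python
import math
import numpy as np
from scipy.special import erfinv

def h(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def eers_excessive_loss(Delta_S, N, delta, eps):
    """Excessive loss ratio (13) for EERS: S from (8), L_S from (5)."""
    S = 0.5 * (erfinv(1.0 - 2.0 * eps) / Delta_S) ** 2
    L = S + (N - S) * h(delta + Delta_S)
    return L / N - h(delta)

# Scan the buffer parameter on a grid and keep the minimizer (illustrative values).
N, delta, eps = 10**6, 0.03, 1e-10
grid = np.linspace(4e-3, 3e-2, 200)
ratios = [eers_excessive_loss(D, N, delta, eps) for D in grid]
best = int(np.argmin(ratios))
print(grid[best], ratios[best])
```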
Figure 3: Excessive loss ratio from the sampling procedure. N = 10^…, ε = 10^−….

This again shows that the method one chooses for verification, exchange of parities or utilizing the minimum distance between codewords, is not important when it comes to excessive loss unless δ is very small.

As shown in Figure 4, the block size N is crucial to the excessive loss ratio. For EERS the high loss ratio for small N is mainly due to a large part of the block being used in the sampling process. Using verification this loss is avoided. Here the increased loss ratio for small N is due to the larger variance between block error rates when N is small.

In verification, better security, i.e. decreasing the security parameter ε, demands more bits V used to check for errors after the error correction. However, since V ∼ log ε (11), and additionally V ≪ N h(δ_V + ∆_V), decreasing ε only gives a minimal increase in the loss ratio (9). Thus, as shown in Figure 5, we can increase the security tremendously while sacrificing few extra bits if we use verification. In the EERS scheme, as ε → 0, S increases towards infinity quite fast because of the inverse error function in (8). Since sampling is a significant part of the loss ratio for all but very large N, high security comes with a high excessive loss ratio in this scheme.
Figure 4: Loss ratio for different block sizes. δ = 0.0…, ε = 10^−….
Figure 5: Loss ratio for different security parameters ε. δ = 0.0…, N = 10^….

Figure 6: Excessive loss ratio for different block error rates and block error rate variance (EERS and verification curves for δ = 0.05 and δ = 0.01). The block error rates are assumed to be independent and normally distributed. N = 10^…, ε = 10^−….

Variable error rates
In real setups, external factors like temperature fluctuations and calibration routines may cause greater variation in the block error rate. Then using the error rate of the last block as our estimate for the error rate of the current block is less reliable. To avoid throwing away more blocks due to the less accurate estimates, the buffer parameter ∆_V must be increased. This will lead to increased loss in the protocol. Using an EERS scheme, the loss is independent of the block error rate variance. Thus, as shown in Figure 6, verification is preferable when the block error rate variance is small, while EERS should be considered when the variance is high.

Figure 6 also indicates that the variance for which sampling and verification have equal excessive loss depends only slightly on δ. Thus the important variables are N and ε. As shown in Figure 7, large variance favors EERS, while small block size and high security demands favor verification.

Figure 7: The curves show for which block error rate variance and block size EERS and verification have the same excessive loss (curves for ε = 10^−6 and ε = 10^−15). Verification is the best method for parameters in the area to the left of the curves, while EERS is best for parameters to the right of the curves. δ = 0.0….

In real setups the block error rate is not necessarily normally distributed. For example, Figure 8 shows how the block error rate evolved for a 24-hour run of the IdQuantique system Clavis. In this case it is not obvious how to choose a good value for ∆_j. However, as can be seen from the figure, in this run
∆_V = 0.004 would be enough to avoid any errors. Minimizing (5) with respect to ∆_S for the relevant N = 2.…·10^…, δ ≈ 0.0… and ε = 10^−…, we find the optimal buffer parameter for EERS to be ∆_S ≈ 0.0…. The resulting excessive loss ratios are L_EV = 0.023 and L_ES = 0.0….

Keeping ∆_j close to its optimal value might not be justified for the verification scheme if the block error rate starts to fluctuate in an unexpected way. Then there is a risk of the loss being much larger than expected. The EERS scheme is not prone to this problem, since ∆_S can be estimated fairly accurately from δ_S. Thus EERS is recommended for systems with unknown behavior.

Figure 8: Block error rate for a 24-hour run of the IdQuantique system Clavis.

In this respect the IdQuantique system seems quite stable. Considering groups of 50 consecutive blocks, the error rate varies a lot within each group. However, the distribution of the difference between consecutive blocks is quite similar for all the groups. In particular, the maximal difference between two consecutive blocks, which is the important quantity in finding a good value for ∆_V, is very similar in all the groups. Thus it seems that a verification scheme with ∆_V = 0.004 would work fine also for the next 250 blocks.

For the 24-hour run of the IdQuantique system, 30.9 percent of the raw key was lost in error correction, mostly due to whole blocks being discarded. This gives an excessive loss ratio of 0.189. It is clear that a better error correction scheme would be beneficial to the system's performance.
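One possible way to operationalize this observation is to derive ∆_V from the observed history of block error rates. The heuristic below, including the safety margin, is our own reading of the discussion and not a prescription from the text.

```python
def buffer_from_history(block_error_rates, margin=1.5):
    """Heuristic: set Delta_V from the largest difference between consecutive
    block error rates observed so far, scaled by a safety margin."""
    jumps = [abs(b - a) for a, b in
             zip(block_error_rates, block_error_rates[1:])]
    return margin * max(jumps)

# Example with a fabricated history of block error rates:
history = [0.028, 0.030, 0.029, 0.032, 0.031, 0.030]
print(buffer_from_history(history))  # ~0.0045
```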
Combination of the methods
We have seen that using EERS many bits must be sacrificed in random sampling to achieve high security. On the other hand, when the variance in the block error rate is high, doing verification and using the previous block as an estimate for the error rate also has large excessive loss, since the estimate is not very accurate.
Error rate   Method        ∆        (S + V)/N   L_E
δ = 0.05     EERS          0.0126   0.036       0.075
             Combination   0.0081   0.023       0.053
δ = 0.01     EERS          0.0122   0.038       0.105
             Combination   0.0077   0.025       0.076
Table 1: Results for EERS and a combination of EERS and verification.

Thus, if the block error rate variance is high and we want high security, combining the two methods makes sense. The loss related to error correction using both EERS and verification is, again assuming p_U|E = ε,

    L_C = (p_E − ε)N + (1 − p_E + ε)(S + V + (N − S) h(δ_C + ∆_C)).  (14)

Just like for EERS, the loss is independent of the variance, and the method is robust against wild fluctuations in the block error rate. Using the result of the EERS step as our estimate δ_C, the probability p_E of an error after the error correction step is the same as the probability given in (7), with ∆_C in place of ∆_S. Using verification by parity exchange, the probability of an undetected error is then

    p_U = p_U|E · p_E = (1/2)^(V+1) (1 − erf(∆_C √(2S))).  (15)

For a given security parameter ε, the number of bits used in error estimation is then related to the bits used in verification by V = log₂(1 − erf(∆_C √(2S))) − log₂(ε) − 1. This gives an excessive loss ratio L_EC as in (13) with j = C, which can now be minimized with respect to the buffer parameter ∆_C and the sampling size S.

For ε = 10^−… and N = 10^…, the results are shown in Table 1. We clearly see that using a combination of the methods leads to an improvement in performance compared to EERS alone. We expect this improvement to be even more pronounced if we demand higher security (decreasing ε), or for small block sizes, as these are scenarios where verification significantly outperforms EERS.

To compare the combination method with verification we can compare the results from Table 1 with Figure 6. As the performance of the combination method is independent of the variance, we infer that it outperforms verification when the variance is larger than 0.004, while verification is better for σ < 0.004. For the 24-hour run of the IdQuantique system we find L_EC = 0.054 for the combination of the methods. Thus the variance between block error rates is so small that verification alone seems to be the best approach for this system.
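A sketch of the combined scheme (again our own, with illustrative parameters), evaluating (14) with the number of verification bits V fixed by the security bound (15):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def combination_loss(N, S, Delta_C, delta_C, eps):
    """Loss (14) for EERS followed by parity-exchange verification, with the
    number of verification bits V fixed by the security bound (15)."""
    # 2*p_E, cf. (7); floored to avoid log(0) when erf saturates numerically.
    tail = max(1.0 - math.erf(Delta_C * math.sqrt(2.0 * S)), 1e-300)
    p_E = 0.5 * tail
    V = max(0.0, math.log2(tail) - math.log2(eps) - 1.0)
    return (p_E - eps) * N + (1.0 - p_E + eps) * (
        S + V + (N - S) * h(delta_C + Delta_C))

# Example: N = 10^6 bits, a modest sample of S = 10^5 bits, buffer 0.005.
print(combination_loss(10**6, 10**5, 0.005, 0.03, 1e-10))
```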
Conclusion
Due to the uncertainty about the true value of the block error rate, some bits need to be sacrificed to decrease the probability that Alice and Bob have undetected errors in their keys. This can be done by EERS before the error correction protocol, or by verification after the protocol. We find that verification generally outperforms EERS; however, if the variance in the block error rate is large, EERS is the best choice. To minimize the loss in error correction it is therefore important to have a QKD system with a stable error rate.

We propose a combination of the two methods that generally outperforms EERS. This combination method and EERS are both robust against changes in the behavior of the error rate. If one only does verification, large losses might occur if the block error rate changes unexpectedly. Thus the combination method should be used when the variance of the block error rate is high, or when the change in the error rate between blocks is unknown or susceptible to unpredictable fluctuations.

We also show that utilizing the minimum distance of LDPC codes provides a fast and efficient way to do verification.
References

[1] C. H. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing (Bangalore, India), pp. 175–179, IEEE Press, New York, 1984.
[2] C. H. Bennett, G. Brassard, and J. Robert, "Privacy amplification by public discussion," SIAM J. Comput., vol. 17, pp. 210–229, 1988.
[3] C. H. Bennett, G. Brassard, C. Crépeau, and U. M. Maurer, "Generalized privacy amplification," IEEE T. Inform. Theory, vol. 41, no. 6, pp. 1915–1923, 1995.
[4] P. W. Shor and J. Preskill, "Simple proof of security of the BB84 quantum key distribution protocol," Phys. Rev. Lett., vol. 85, pp. 441–444, 2000.
[5] M. Koashi, "Simple security proof of quantum key distribution via uncertainty principle," J. Phys. Conf. Ser., vol. 36, p. 98, 2006.
[6] M. Koashi, "Simple security proof of quantum key distribution based on complementarity," New J. Phys., vol. 11, p. 045018, 2009.
[7] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379–423, 623–656, 1948.
[8] G. Brassard and L. Salvail, "Secret-key reconciliation by public discussion," in Advances in Cryptology - EUROCRYPT '93, pp. 410–423, Springer-Verlag, 1994.
[9] R. G. Gallager, Low-Density Parity-Check Codes. MIT Press, 1963.
[10] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electron. Lett., vol. 32, pp. 1645–1646, 1996.
[11] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE T. Inform. Theory, vol. 47, pp. 599–618, 2001.
[12] S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, pp. 58–60, 2001.
[13] N. Lütkenhaus, "Estimates for practical quantum cryptography," Phys. Rev. A, vol. 59, pp. 3301–3319, 1999.
[14] C.-H. F. Fung, X. Ma, and H. F. Chau, "Practical issues in quantum-key-distribution post-processing," Phys. Rev. A, vol. 81, p. 012318, 2010.