Qubit metrology for building a fault-tolerant quantum computer

John M. Martinis∗
University of California, Santa Barbara and Google, Inc.
(Dated: October 7, 2015)
Recent progress in quantum information has led to the start of several large national and industrial efforts to build a quantum computer. Researchers are now working to overcome many scientific and technological challenges. The program's biggest obstacle, a potential showstopper for the entire effort, is the need for high-fidelity qubit operations in a scalable architecture. This challenge arises from the fundamental fragility of quantum information, which can only be overcome with quantum error correction [1]. In a fault-tolerant quantum computer the qubits and their logic interactions must have errors below a threshold: scaling up with more and more qubits then brings the net error probability down to the appropriate levels ∼ 10^−18 needed for running complex algorithms. Reducing error requires solving problems in physics, control, materials and fabrication, which differ for every implementation. I explain here the common key driver for continued improvement: the metrology of qubit errors.

We must focus on errors because classical and quantum computation are fundamentally different. The classical NOT operation in CMOS electronics can have zero error, even with moderate changes of voltages or transistor thresholds. This enables digital circuits of enormous complexity to be built, as long as there are reasonable tolerances on fabrication. In contrast, quantum information is inherently error prone because it has continuous amplitude and phase variables, and logic is implemented using analog signals. The corresponding quantum NOT, a bit-flip operation, is produced by applying a control signal that can vary in amplitude, duration and frequency. More fundamentally, the Heisenberg uncertainty principle states that it is impossible to directly stabilize a single qubit, since any measurement of a bit-flip error will produce a random flip in phase. The key to quantum error correction is measuring qubit parities, which detects bit flips and phase flips in pairs of qubits. As explained in the text box, the parities are classical-like, so their outcomes can be known simultaneously.

When a parity changes, one of the two qubits had an error, but which one is not known. To identify it, the encoding must use larger numbers of qubits. This idea can be understood with a simple classical example, the 3-bit repetition code described in Fig. 1. Logical states 0 (1) are encoded as 000 (111), and measurement of parities between adjacent bits A-B and B-C allows the identification (decoding) of errors, as long as no more than a single bit changes. To improve the encoding to detect both order n = 1 and n = 2 errors, the repetition code is simply increased in size to 5 bits, with 4 parity measurements between them. In general, order n errors can be decoded from 2n + 1 bits and 2n parity measurements.

FIG. 1: 3-bit classical repetition code for bits A, B and C, with parity measurements between A-B and B-C. The table shows all combinations of inputs and the resulting parity measurements. For an initial state of all zeros, a unique decoding from the measurements to the actual error is obtained for only the top four entries, where there is no more than a single bit error (order n = 1).

input bits A B C | parity A-B, B-C
           0 0 0 |        0    0
           1 0 0 |        1    0
           0 1 0 |        1    1
           0 0 1 |        0    1
           1 1 0 |        0    1
           1 0 1 |        1    1
           0 1 1 |        1    0
           1 1 1 |        0    0
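As a concrete illustration, the decoding of Fig. 1 can be sketched in a few lines of Python (a minimal sketch of mine; the function names are illustrative only). It reproduces the parity table and shows that each single bit flip on the all-zeros state leaves a unique parity signature:

```python
from itertools import product

def parities(bits):
    # Parity measurements between adjacent bits: A-B and B-C.
    a, b, c = bits
    return (a ^ b, b ^ c)

# Reproduce the full table of Fig. 1.
for bits in product((0, 1), repeat=3):
    print(bits, "->", parities(bits))

# Decoding map, assuming at most one bit flip on the all-zeros state:
# each single error produces a distinct parity signature.
decode = {}
for flip in (None, 0, 1, 2):
    bits = [0, 0, 0]
    if flip is not None:
        bits[flip] ^= 1
    decode[parities(bits)] = flip
print(decode)  # {(0,0): None, (1,0): 0, (1,1): 1, (0,1): 2}
```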
Quantum codes allow for the decoding of both bit- and phase-flip errors given a set of measurement outcomes. As in the above example, they decode the error properly as long as the number of errors is of order n or less. The probability for a decoding error can be computed numerically using a simple depolarization model that assumes a random bit- or phase-flip error of probability ε for each physical operation used to measure the parities. By comparing the known input errors with those determined using a decoding algorithm, the decoding or logical error probability is found to be

P_l ≃ Λ^−(n+1), (1)
Λ = ε_t / ε, (2)

where ε_t is the threshold error, fit from the data. The error suppression factor is Λ, the key metrological figure of merit that quantifies how much the decoding error drops as the order n increases by one. Note that P_l scales with ε^(n+1), as expected for n + 1 independent errors. The key idea is that once the physical errors ε are lower than the threshold ε_t, then Λ > 1 and the logical error shrinks exponentially as the order n is increased; when Λ < 1, adding more qubits only makes the logical error worse.

There are many quantum error correction codes, each with its own threshold ε_t. The best practical choice is the surface code [2, 3], which can be thought of as a two-dimensional version of the repetition code that corrects for both bit and phase errors. A (4n + 1) by (4n + 1) array of qubits performs n-th order error correction, where about half of the qubits are used for the parity measurements. It is an ideal practical choice for a quantum computer because of other attributes: (i) Only nearest-neighbor interactions are needed, making it manufacturable with integrated circuits. (ii) The code is upward compatible to logical gates, where measurements are simply turned off. (iii) The code is tolerant up to a significant density of errors, with Eq. (1) strictly valid only in the operative range Λ ≳ 1, below the surface code threshold ε_t ≃ 1%. A typical factoring algorithm needs ∼ 10^18 operations [3], so we target a logical error P_l = 10^−18. Assuming an improvement Λ = 10 for each order, we need an encoding of order n = 17. The number of qubits for the surface code is then (4 · 17 + 1)^2 = 4761.
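This sizing follows directly from Eqs. (1) and (2). A minimal sketch of mine, assuming the idealized scaling of Eq. (1) holds exactly down to the target error:

```python
def surface_code_size(p_target, Lambda):
    # Smallest order n with Lambda**-(n+1) <= p_target, from Eq. (1),
    # and the corresponding (4n+1)^2 surface code qubit count.
    n = 0
    while Lambda ** -(n + 1) > p_target:
        n += 1
    return n, (4 * n + 1) ** 2

# Example from the text: Lambda = 10 and P_l = 1e-18.
print(surface_code_size(1e-18, 10))   # -> (17, 4761)
print(surface_code_size(1e-18, 100))  # -> (8, 1089), about 4x fewer
```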
For Λ = 100, this number is lowered by about a factor of 4. Although this seems like a large number of qubits from the perspective of present technology, we should remember that a cell phone with 10^9 transistors, now routinely owned by most people in the world, was inconceivable only several decades ago.
FIG. 2: Life cycle of a qubit. Illustration showing the increasing complexity of qubit experiments, built up upon each other, described by technology levels I through VII. Numbers in parentheses show approximate qubit numbers. Key metrics are shown for each level. Errors for 1-qubit, 2-qubit and measurement operations are described by ε_1, ε_2 and ε_m, which lead to an error suppression factor Λ. Fault-tolerant error correction is achieved when Λ > 1. Scaling to large n leads to P_l → 0.

Technology level (qubits)          | Metrics
I: demonstration (1-2)             | T_1, T_2, t_g
II: metrology (4-10)               | ε_1, ε_2, ε_m
III: error correction (3-9)        | Λ
IV: logical qubit (9, 81, 10^3)    | Λ
V: logical Clifford (10^3)         | P_l
VI: logical T & feed-forward (10^4)| P_l
VII: quantum computer (10^8)       | P_l
Hardware requirements can be further understood by separating the entire parity operation into its one-qubit logic, two-qubit logic and measurement components. Assuming errors in only one of these components, the break-even thresholds are respectively 4.3%, 1.25% and 12%: the 2-qubit error is clearly the most important, whereas measurement error is the least important. For the practical case when all components have non-zero errors, I propose the threshold targets

ε_1 ≤ 0.1%, (3)
ε_2 ≤ 0.1%, (4)
ε_m ≤ 0.5%, (5)

which gives Λ ≥ 17.
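As a rough bookkeeping aid (a sketch of mine, not the author's numerical model, which requires simulating the full decoding), one can compare each component error against its single-component break-even threshold; the smallest margin flags the limiting component:

```python
# Single-component break-even thresholds quoted in the text for
# one-qubit, two-qubit and measurement errors.
BREAK_EVEN = {"eps_1": 0.043, "eps_2": 0.0125, "eps_m": 0.12}

def margins(eps_1, eps_2, eps_m):
    # Ratio of break-even threshold to actual error for each component;
    # every ratio must exceed 1, and the smallest flags the limiting part.
    errors = {"eps_1": eps_1, "eps_2": eps_2, "eps_m": eps_m}
    return {name: BREAK_EVEN[name] / err for name, err in errors.items()}

# Proposed targets of Eqs. (3)-(5):
print(margins(eps_1=0.001, eps_2=0.001, eps_m=0.005))
# The two-qubit margin is the smallest, matching the text's emphasis
# on eps_2 as the primary optimization metric.
```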
It is critical that all three error thresholds be met, as the worst performing error limits the logical error P_l. Measurement errors ε_m can be larger because the single-component measurement threshold of 12% is high. The two-qubit error ε_2 is the most challenging because its physical operation is much more complex than for single qubits. This makes ε_2 the primary metric around which the hardware should be optimized. The single-qubit error ε_1, being easier to optimize, should readily be met if the two-qubit threshold is reached. Note that although it is tempting to isolate qubits from the environment to lower one-qubit errors, in practice this often makes it harder to couple them together for two-qubit logic; I call such a strategy "neutrino-ized qubits".

In the life cycle of a qubit technology, experiments start with a single qubit and then move to increasingly more complex multi-qubit demonstrations and metrology. The typical progression [4] is illustrated in Fig. 2, where the technology levels and their metrics are shown together. In level I, one- and two-qubit experiments measure coherence times T_1 and T_2, and show basic functionality of qubit gates. Along with the one-qubit gate time t_g, an initial estimate of gate error can be made. Determining the performance of a two-qubit gate is much harder, since other decoherence or control errors will typically degrade performance. Swapping an excitation between two qubits is a simple method to determine whether coherence has changed. Quantum process tomography is often performed on one- and two-qubit gates [5], which is important as it proves that proper quantum logic has been achieved. In this initial stage it is not necessary to have low measurement errors, and data often have arbitrary units on the measurement axis. This is fine for initial experiments that are mostly concerned with the performance of qubit gates.

In level II, more qubits are measured in a way that mimics the scale-up process. This initiates more realistic metrology tests of how a qubit technology will perform in a full quantum computer. Here, the application of many gates in sequence through randomized benchmarking (RB) enables the total error to grow large enough for accurate measurement, even if each gate error is tiny [6]. Interleaved RB is useful for measuring the error probability of specific one- and two-qubit logic gates, and gives important information on error stability. Although RB represents an average error and provides no information on error coherence between gates, it is a practical metric to characterize overall performance [7]. For example, RB can be used to tune up the control signals for lower errors [8]. Process tomography can be performed for multiple qubits, but is typically abandoned because (i) the number of necessary measurements scales rapidly with increasing numbers of qubits, (ii) information on error coherence is hard to use, and (iii) it is difficult to separate out initialization and measurement errors. Measurement error ε_m is also obtained at this level; a differentiation should be made between measurement that destroys the qubit state and measurement that does not, since the latter is eventually needed in level IV for logical qubits. A big concern is crosstalk between the various logic gates and measurement outcomes, and whether residual couplings between qubits create errors when no interactions are desired. A variety of crosstalk measurements based on RB are useful metrology tools.
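For illustration, a minimal RB fitting step might look as follows (a sketch of mine with made-up fidelity data; the decay model F = A p^m + B and the error-per-gate relation r = (1 − p)(1 − 1/d) are the standard RB conventions rather than details specified in this article):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    # Standard RB model: sequence fidelity versus number of gates m.
    return A * p**m + B

# Hypothetical sequence fidelities for m Clifford gates (made-up data).
m = np.array([1, 5, 10, 20, 50, 100, 200])
F = np.array([0.995, 0.976, 0.952, 0.909, 0.803, 0.683, 0.567])

(A, B, p), _ = curve_fit(rb_decay, m, F, p0=(0.5, 0.5, 0.99))
r = (1 - p) * (1 - 1 / 2)  # average error per gate for one qubit (d = 2)
print(f"decay p = {p:.4f}, error per gate r = {r:.4f}")
```

Even when each gate error is of order 10^−3, sequences of hundreds of gates let the accumulated decay be measured accurately.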
In level III, an error detection or correction algorithm is performed [9], representing a complex systems test of all components. Qubit errors have to be low enough to perform many complex qubit operations. Experiments work to extend the lifetime of an encoded logical state, typically by adding errors to the various components to show improvement of the detection protocol relative to the added errors.
At level IV, the focus is measuring Λ > 1, demonstrating how a logical qubit can have less and less error by scaling up the order n of error correction. The logical qubit must be measured in first and second order, which requires parity measurements that are repetitive in time so as to include the effect of measurement errors. Note that extending the lifetime of a qubit state in first order is not enough to determine Λ. For measuring Λ at order n = 2, a useful initial test is for bit-flip errors, requiring a linear array of 9 qubits. These experiments are important since they connect the error metrics of the qubits, obtained in level II, to actual fault-tolerant performance Λ. As there are theoretical and experimental approximations in this connection, including the depolarization assumption for theory and the RB measurement for experiment, this checks the whole framework for computing fault tolerance. A fundamentally important test is whether n is high enough to convincingly demonstrate an exponential suppression of error. A significant challenge here is to achieve all error thresholds in one device and in a scalable design. An experiment measuring the bit-flip suppression factor Λ_X has been done with a linear chain of 9 superconducting qubits [10], giving Λ_X = 3.2.

In level V, the logical Clifford gates are demonstrated. Level VI adds the non-Clifford logical T gate. Here, state distillation must be demonstrated, and feed-forward from qubit errors conditionally controls a logical S gate [3]. Because logical errors can be readily accounted for in software for all the logical Clifford gates in level V, feed-forward is only needed for this non-Clifford logical T gate. Level VII is the full quantum computer.

The strategy for building a fault-tolerant quantum computer is as follows. At level I, the coherence time should be at least 1000 times greater than the gate time. At level II, all errors need to be less than threshold, with particular attention given to hardware architecture and gate design for the lowest 2-qubit error. The design should allow scaling without increasing errors. Scaling begins at level IV: 9 qubits give the first measurement of fault tolerance with Λ_X, 81 qubits give the proper quantum measure of Λ, and then about 10^3 qubits allow for exponentially reduced errors. At levels V through VII, 10^4 qubits are needed for logical gates, and finally about 10^8 qubits will be used to build a demonstration quantum computer.

The discussion here focuses on optimizing Λ, but having fast qubit logic is desirable to obtain a short run time. Run times can also be shortened by using a more parallel algorithm, as has been proposed for factoring. A 1000 times slower quantum logic can be compensated for with about 1000 times more qubits.

Scaling up the number of qubits while maintaining low error is a crucial requirement for level IV and beyond. Scaling is significantly more difficult than for classical bits, since system performance will be affected by small crosstalk between the many qubits and control lines. This criterion makes large qubits desirable, since more room is then available for separating signals and incorporating integrated control logic and memory. Note this differs from the standard classical scaling of CMOS and Moore's law, where the main aim is to decrease transistor size.

Superconducting qubits have macroscopic wavefunctions and are therefore well suited for the challenges of scaling with control.
I expect qubit cells to be at the 30 µm size scale or above, but clearly any design with millions of qubits will have to properly trade off density with control area based on experimental capabilities.

In conclusion, progress in making a fault-tolerant quantum computer must be closely tied to error metrology, since improvements with scaling will only occur when errors are below threshold. Research should particularly focus on two-qubit gates, since they are the most difficult to operate well with low errors. As experiments are now within the fault-tolerant range, many exciting developments are possible in the next few years.

The author declares no competing financial interests.
Quantum parity.

An arbitrary qubit state is written as Ψ = cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩, where the continuous variables θ and φ are the bit amplitude and phase. A bit measurement collapses the state into |0⟩ (|1⟩) with probability cos²(θ/2) (sin²(θ/2)). Qubit errors are described by a bit flip X̂ (|0⟩ ↔ |1⟩) or a phase flip Ẑ (|1⟩ ↔ −|1⟩). According to the Heisenberg uncertainty principle, it is not possible to simultaneously measure the amplitude and phase of a qubit, so obtaining information on a bit flip induces information loss on phase, equivalent to a random phase flip, and vice versa. This property comes fundamentally from bit and phase flips not commuting, [X̂, Ẑ] = X̂Ẑ − ẐX̂ ≠ 0; the sequence of the two operations matters. Quantum error correction takes advantage of an interesting property of qubits, X̂Ẑ = −ẐX̂, so that a change in sequence just produces a minus sign. With X̂_1 X̂_2 and Ẑ_1 Ẑ_2 corresponding to 2-qubit bit and phase parities, these operators commute because a minus sign is picked up from each qubit:

[X̂_1 X̂_2, Ẑ_1 Ẑ_2] = X̂_1 X̂_2 Ẑ_1 Ẑ_2 − Ẑ_1 Ẑ_2 X̂_1 X̂_2 (6)
= X̂_1 X̂_2 Ẑ_1 Ẑ_2 − (−1)² X̂_1 X̂_2 Ẑ_1 Ẑ_2 (7)
= 0. (8)

The two parities can now be known simultaneously, implying they are classical-like: a change in one parity can be measured without affecting the other.
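These relations are easy to verify numerically; the following few lines (my addition, using the standard Pauli matrix representations of X̂ and Ẑ) check the single-qubit anticommutation and the two-qubit commutation of Eqs. (6)-(8):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip
Z = np.array([[1, 0], [0, -1]])  # phase flip

# Single qubit: X and Z anticommute, so they do not commute.
print(np.allclose(X @ Z, -Z @ X))        # True:  XZ = -ZX
print(np.allclose(X @ Z - Z @ X, 0))     # False: [X, Z] != 0

# Two qubits: the parity operators X1X2 and Z1Z2 pick up a minus
# sign from each qubit, so they commute, as in Eqs. (6)-(8).
XX = np.kron(X, X)
ZZ = np.kron(Z, Z)
print(np.allclose(XX @ ZZ - ZZ @ XX, 0)) # True: [X1X2, Z1Z2] = 0
```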
∗ Electronic address: [email protected]

[1] P. W. Shor, Phys. Rev. A 52, R2493 (1995).
[2] S. B. Bravyi and A. Yu. Kitaev, arXiv:quant-ph/9811052 (1998).
[3] A. G. Fowler, M. Mariantoni, J. M. Martinis and A. N. Cleland, Phys. Rev. A 86, 032324 (2012).
[4] M. H. Devoret and R. J. Schoelkopf, Science 339, 1169 (2013).
[5] J. Benhelm, G. Kirchmair, C. F. Roos and R. Blatt, Nature Physics 4, 463 (2008).
[6] C. A. Ryan, M. Laforest and R. Laflamme, New J. Phys. 11, 013034 (2009).
[7] R. Barends, J. Kelly, A. Megrant, A. Veitia, D. Sank, E. Jeffrey et al., Nature 508, 500 (2014).
[8] J. Kelly, R. Barends, B. Campbell, Y. Chen, Z. Chen, B. Chiaro et al., Phys. Rev. Lett. 112, 240504 (2014).
[9] D. Nigg, M. Müller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz et al., Science 345, 302 (2014).
[10] J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White et al., Nature 519, 66 (2015).