Solving strongly correlated electron models on a quantum computer
Dave Wecker, Matthew B. Hastings, Nathan Wiebe, Bryan K. Clark, Chetan Nayak, Matthias Troyer
Quantum Architectures and Computation Group, Microsoft Research, Redmond, WA 98052, USA
Station Q, Microsoft Research, Santa Barbara, CA 93106-6105, USA
University of Illinois at Urbana-Champaign (UIUC)
Theoretische Physik, ETH Zurich, 8093 Zurich, Switzerland
One of the main applications of future quantum computers will be the simulation of quantum models. While the evolution of a quantum state under a Hamiltonian is straightforward (if sometimes expensive), using quantum computers to determine the ground state phase diagram of a quantum model and the properties of its phases is more involved. Using the Hubbard model as a prototypical example, we here show all the steps necessary to determine its phase diagram and ground state properties on a quantum computer. In particular, we discuss strategies for efficiently determining and preparing the ground state of the Hubbard model starting from various mean-field states with broken symmetry. We present an efficient procedure to prepare arbitrary Slater determinants as initial states and present the complete set of quantum circuits needed to evolve from these to the ground state of the Hubbard model. We show that, using efficient nesting of the various terms, each time step in the evolution can be performed with just O(N) gates and O(log N) circuit depth. We give explicit circuits to measure arbitrary local observables and static and dynamic correlation functions, both in the time and frequency domain. We further present efficient non-destructive approaches to measurement that avoid the need to re-prepare the ground state after each measurement and that quadratically reduce the measurement error.

I. INTRODUCTION
Feynman envisioned that quantum computers would enable the simulation of quantum systems: an initial quantum state can be unitarily evolved with a quantum computer using resources that are polynomial in the size of the system and the evolution time. However, it is not clear that this would enable us to determine the answers to the questions of greatest interest to condensed matter physicists. The properties most readily measured in experiments do not appear to be simple to determine in a quantum simulation, and known algorithms for quantum simulation do not return the quantities of greatest physical interest. In experiments in a solid, one can rarely, if ever, prepare a known simple initial state, nor does one know precisely the Hamiltonian with which it would evolve. Meanwhile, the ground state energy (or the energy of any eigenstate yielded by quantum phase estimation) is not a particularly enlightening quantity and, furthermore, given a quantum state it is not clear how to determine its long-wavelength 'universal' properties. Consider, for instance, the following questions: given a model Hamiltonian for the electrons in a solid, such as the Hubbard model, can we determine if its ground state is superconducting? Can we determine why it is superconducting? In this paper, we show that it is possible, in principle, to answer the first question and, if it has a clear answer (in a sense to be discussed), the second question as well. Moreover, we show that it is not only possible in principle but, in fact, feasible to answer such questions with a quantum computer of moderate size.

The first step in the solution of a quantum Hamiltonian is to map its Hilbert space to the states of a quantum computer.
While the Hilbert spaces of systems of bosons or spins are mapped most naturally to the states of a general-purpose quantum computer, fermionic systems can also be simulated by using a Jordan-Wigner transformation to represent them as spin systems; the cost can be reduced to log(N) using additional qubits or using appropriate ordering techniques discussed below. In order to evolve the system on a general-purpose quantum computer, we decompose the time-evolution operator according to the Trotter-Suzuki decomposition; the number of time intervals in such a decomposition is determined by the desired accuracy. The evolution for each time interval is expressed in terms of the available gates. If the Hamiltonian can be broken into k non-commuting terms, then the gates can be broken into k sets; within each set, the gates can be applied in parallel, and only O(k) time steps are needed for each interval. Time evolution can be used in conjunction with the quantum phase estimation algorithm to find an approximate energy eigenvalue and eigenstate of the Hamiltonian.

The challenge when applying this approach to electronic systems is that the Coulomb Hamiltonian in second-quantized form

H = \sum_{p,q=1}^{N} h_{pq} c_p^\dagger c_q + \frac{1}{2} \sum_{p,q,r,s=1}^{N} h_{pqrs} c_p^\dagger c_q^\dagger c_r c_s   (1)

has k = O(N^4) terms for N orbitals. Here the operators c_p^\dagger and c_p create and annihilate an electron in spin-orbital p (combining orbital index i and spin index \sigma). The quadratic h_{pq} terms arise from the kinetic energy of the electrons and the Coulomb potential due to the nuclei (which are assumed to be classical in the usual Born-Oppenheimer approximation) and the quartic h_{pqrs} terms encode the Coulomb repulsion. In order to estimate the ground state energy of a system of interacting electrons in a large molecule of N orbitals, naive estimates indicated that a number of operations scaling as a high power of N would be needed, due to the large number of non-commuting terms in the Hamiltonian, but more recent analyses indicate that substantially fewer operations may be sufficient. While this makes the simulation of small classically-intractable molecules feasible, the scaling is still too demanding to perform electronic structure calculations for more than a few hundred orbitals. This makes the brute-force simulation of large molecules and crystalline solids impractical.

One approach to reduce the scaling complexity for the simulation of crystalline solids is to focus on effective models that capture only the relevant bands close to the Fermi energy, and simplify the O(N^4) terms of the Coulomb interaction to just a few dominant terms. One of the paradigmatic effective models for strongly correlated fermions, discussed in detail in Sec. II, is the square lattice Hubbard model, which is a radical simplification of the full electronic structure Hamiltonian (1) to arguably the simplest interacting single-band model, described by the Hamiltonian

H_{\rm Hub} = -\sum_{\langle i,j \rangle} \sum_{\sigma} t_{ij} \left( c_{i,\sigma}^\dagger c_{j,\sigma} + c_{j,\sigma}^\dagger c_{i,\sigma} \right) + U \sum_i n_{i,\uparrow} n_{i,\downarrow} + \sum_i \epsilon_i n_i,   (2)

where n_{i,\sigma} = c_{i,\sigma}^\dagger c_{i,\sigma} is the local spin density and n_i = \sum_\sigma n_{i,\sigma} is the total local density. Of all the orbitals in a unit cell, we only keep a single orbital (describing e.g. the Cu d_{x^2-y^2} orbital in a cuprate superconductor), reduce the long-range Coulomb repulsion to just the local repulsion U within this orbital, and restrict the hybridizations t_{ij} = -h_{pq} to nearest neighbors on the lattice. The on-site energies will be called \epsilon_i = h_{ii}. Usually one can consider a translationally-invariant model and then drop the indices in t_{ij} and \epsilon_i since all sites are equivalent.
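As a concrete aside (a sketch of ours, not a circuit from the paper), the Jordan-Wigner mapping mentioned above can be exhibited numerically on a handful of spin orbitals: each fermion operator becomes a string of Pauli Z matrices followed by a raising or lowering operator, and the canonical anticommutation relations can be checked directly. The function name and the tiny system size are illustrative only.

```python
import numpy as np

def jw_annihilation(n, j):
    """Jordan-Wigner annihilation operator c_j on n spin orbitals:
    a string of Z's on sites k < j, a lowering operator on site j."""
    Z = np.diag([1.0, -1.0])
    lower = np.array([[0.0, 1.0], [0.0, 0.0]])  # |1> (occupied) -> |0> (empty)
    I2 = np.eye(2)
    op = np.eye(1)
    for k in range(n):
        factor = Z if k < j else (lower if k == j else I2)
        op = np.kron(op, factor)
    return op

n = 4
c = [jw_annihilation(n, j) for j in range(n)]

# Canonical anticommutation relations {c_i, c_j^dag} = delta_ij
for i in range(n):
    for j in range(n):
        acomm = c[i] @ c[j].conj().T + c[j].conj().T @ c[i]
        expected = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(acomm, expected)

# A hopping term c_0^dag c_1 + h.c. maps to a Hermitian spin operator
hop = c[0].conj().T @ c[1] + c[1].conj().T @ c[0]
assert np.allclose(hop, hop.conj().T)
```

For nearest-neighbor terms on a suitably ordered lattice, the Z strings largely cancel, which is what the term-ordering and nesting techniques discussed below exploit.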
Here we will keep them explicitly since they may not all be the same during the adiabatic evolution.

The computational complexity of the Hubbard model (2) is much reduced compared to the full electronic structure problem (1), since only a single orbital is used per unit cell instead of the dozens needed for the full problem, and the number of terms is linear in N instead of scaling with the fourth power N^4. On a lattice of 20 × 20 unit cells this means a reduction of the total number of terms from about ∼10^10 to ∼10^3, which makes such a simulation feasible on a quantum computer. Furthermore, since the 2D Hubbard Hamiltonian can be expressed as a sum of five non-commuting terms (the on-site interaction terms and four sets of hopping terms, one for each nearest neighbor to a given site), each time step in the Trotter-Suzuki decomposition requires only ∼log N parallel circuit depth (in fact, constant, if it were not for the Jordan-Wigner strings). To achieve this optimal scaling we use a Jordan-Wigner transform combined with optimal term ordering and the "nesting" and Jordan-Wigner cancellation technique of Ref. 11.

The simplifying features of the Hubbard model that afford such advantageous scaling with N are the restriction to nearest-neighbor hopping and on-site interactions. Of course, a real solid will have long-ranged Coulomb interactions and longer-ranged hopping terms. However, the point of a model such as the Hubbard model is not to give a quantitatively accurate description of a solid, but rather to capture some essential features of strongly-correlated electrons in transition-metal oxides. For these purposes, a simplified model that does not include all of the complexity of a real solid should be sufficient (although, for the case of the cuprate high-T_c superconductors, the required simplified model may be a bit more complicated than the Hubbard model; we focus here on the Hubbard model for illustrative purposes).

A central question in the study of a model of strongly correlated electrons is: what is its phase diagram as a function of the parameters in the model? The Hubbard model has the following interesting subquestion: is there a superconducting phase somewhere in the phase diagram?
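The rough counting behind this feasibility claim can be reproduced in a few lines (our back-of-the-envelope estimate; open boundary conditions are assumed for the bond count):

```python
# Term-count comparison for a 20 x 20 lattice: N^4 two-electron integrals
# h_pqrs for the general Hamiltonian (1) vs. a linear-in-N count for the
# Hubbard model (2): hopping terms on nearest-neighbor bonds (two spin
# species) plus one on-site U term per site.
L = 20
N = L * L                      # 400 sites / orbitals
full_terms = N**4              # quartic Coulomb terms: 2.56e10
bonds = 2 * L * (L - 1)        # nearest-neighbor bonds on an open lattice: 760
hubbard_terms = 2 * bonds + N  # hopping (2 spins per bond) + on-site U: 1920
print(f"{full_terms:.1e} vs {hubbard_terms}")  # 2.6e+10 vs 1920
```

The ratio is roughly seven orders of magnitude, consistent with the ∼10^10 to ∼10^3 reduction quoted above.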
Once the phase diagram has been determined, we wish to characterize the phases in it by computing key quantities such as the quasiparticle energy gap and phase stiffness (superfluid density) in a superconducting phase. To answer these questions we will need to determine the ground state wave function for a range of interaction parameters and densities and then measure ground state correlation functions.

We take the following approach to solve the Hubbard model (or related models) on a quantum computer:

1. Adiabatically prepare an approximate ground state |Ψ̃⟩ of the Hubbard model starting from different initial states.

2. Perform a quantum phase estimation to project the true ground state wave function |Ψ⟩ from the approximate state |Ψ̃⟩, and measure the ground state energy E = ⟨Ψ|H|Ψ⟩.

3. Measure local observables and static and dynamic correlation functions of interest.

The general strategy for simulating quantum models has already been discussed previously, starting from Feynman's original proposal. Abrams and Lloyd were the first to discuss time evolution under the Hubbard Hamiltonian, but without giving explicit quantum circuits; in Ref. 14 the same authors discuss how quantum phase estimation can be used to project from a trial state into an energy eigenstate. Refs. 5 and 15 provided more details of how to simulate quantum lattice models, proposed an algorithm to construct arbitrary Slater determinants, and discussed measurements of various observables of interest. The idea of adiabatic state preparation for interacting fermion systems was proposed in Ref. 5, and adiabatic preparation has been considered by a number of authors.

In this paper we go beyond the earlier work in several crucial aspects. Besides presenting full details of the necessary quantum circuits for time evolution (Sec. IV) and measurements (Secs. VI and VII), we address crucial issues that have not received much attention so far. In particular, in Sec.
III we discuss in detail how to adiabatically prepare approximate ground states |Ψ̃⟩ of the Hubbard model starting from different initial states, capturing various proposed broken symmetries, and use this to determine the phase diagram of the Hubbard model. Having thereby deduced the correct initial state, we can cheaply prepare multiple copies of the ground state of the system.

Due to the benign scaling of circuit depth with log N, the ground state of the Hubbard model on large lattices can be prepared in a very short time of less than a second, even assuming logical gate times in the µs range, as we discuss in Sec. V.

We then present, in Sec. VI, details of how to measure various quantities of interest, including both equal-time and dynamic correlation functions. For the latter, we present a new approach, based on importance sampling in the frequency domain, to complement the usual approach of measuring in the time domain.

Since each measurement only gives limited information, the ground state has to be prepared many times, which leads to a significant total runtime of a hypothetical quantum computer. In Sec. VII, we present new approaches to non-destructive measurements of arbitrary observables that substantially reduce the runtime. By using a relatively cheap quantum phase estimation to recreate the ground state after a measurement, we reduce the number of times the ground state needs to be prepared from scratch. Furthermore, their runtime scales as O(ε^{-1}) with the desired accuracy ε instead of the O(ε^{-2}) scaling of the naïve approach to measurements.

For the aficionado of quantum algorithms, the paper contains the following new algorithmic ideas. In Sec. V, we introduce a variant of quantum phase estimation that gives a two-fold reduction in the number of time steps and a four-fold reduction in rotation gates compared to the standard approach.
Section III presents an approach to speeding up adiabatic state preparation by adiabatically changing the Trotter time step. In Sec. VI, we present an algorithm for measuring dynamical correlation functions by importance sampling of the spectral function in the frequency domain. Finally, Sec. VII presents two approaches to non-destructive measurements.

II. THE HUBBARD MODEL

A. Derivation of the model
The square lattice Hubbard model is one of the paradigmatic models for strongly interacting fermions. Although it was originally introduced as a model for itinerant-electron ferromagnetism, it is now studied primarily as a model for the Mott transition and broken-symmetry (and, perhaps, even topological) ordering phenomena in the vicinity of this transition (for recent reviews, see, for instance, Refs. 21 and 22). In particular, it has been hypothesized that the square lattice Hubbard model captures the important ingredients of the cuprate high temperature superconductors. It is a radical simplification of the full Hamiltonian of a solid. A complete description of a solid takes the form:

H = \sum_i \frac{p_i^2}{2m_e} + \sum_a \frac{P_a^2}{2M_a} + \sum_{i>j} \frac{e^2}{|r_i - r_j|} - \sum_{i,a} \frac{Z_a e^2}{|r_i - R_a|} + \sum_{a>b} \frac{Z_a Z_b e^2}{|R_a - R_b|}   (3)

where i, j = 1, ..., N_e and a, b = 1, ..., N_{ions}; N_e and N_{ions} are, respectively, the number of electrons and ions; and Z_a is the atomic number of ion a and M_a its mass. In certain situations, the physics is dominated by electron-electron interactions. The ions form a crystalline lattice, and the vibrations of this lattice (phonons), though quantitatively important, do not, it is hypothesized, play a major qualitative role. Then we may, in a first approximation, ignore the dynamics of the ions and, thereby, obtain a purely electronic Hamiltonian which, in general, will take the form of Eq. (1). Explicitly introducing labels for spin, unit cells and orbitals within a unit cell of a periodic solid, we write this Hamiltonian as

H = -\sum_{i,j;a,b;\sigma} t^{ab}_{ij} \left( c^\dagger_{i,a,\sigma} c_{j,b,\sigma} + h.c. \right) + \sum_{i,a} \epsilon_a \, n_{i,a} + \sum_{i,a} U_a \, n_{i,a}(n_{i,a}-1)/2 + {\sum_{i,j,k,l;a,b,c,d}}' V^{abcd}_{ijkl} c^\dagger_{i,a} c^\dagger_{j,b} c_{k,c} c_{l,d} + \ldots   (4)

Here, i, j label unit cells and a, b label orbitals within a unit cell. The local density in orbital a is n_{i,a} = \sum_\sigma n_{i,a,\sigma}.
The coupling constants have the following meanings: t^{ab}_{ij} is the matrix element for an electron to hop from orbital b in unit cell j to orbital a in unit cell i (possibly two different orbitals within the same unit cell); ε_a is the energy of orbital a in isolation; U_a is the energy penalty for two electrons to occupy the same orbital in the same unit cell. This term can be rewritten equivalently as \sum_{i,a} U_a n_{i,a,↑} n_{i,a,↓}. The prime on the summation in the final line indicates that we have omitted the term with i = j = k = l and a = b = c = d, which has been separately included as the U_a term. The remaining terms include, for instance, the terms with i = k ≠ j = l, a = c ≠ b = d, which give the density-density interaction V^{abab}_{ijij} n_{i,a} n_{j,b}, and the terms with j = l, b = d, which give the correlated hopping term V^{abcb}_{ijkj} c^\dagger_{i,a} c_{k,c} n_{j,b}. These terms will be assumed to be much smaller than t^{ab}_{ij}, U_a and will be dropped.

If the differences between the ε_a's are the largest energy scales in the problem, then there will be, at most, a single partially-filled orbital, and we can ignore all of the other orbitals when studying the physics of the model at temperatures less than |ε_a − ε_b|. These energy differences can be on the order of eV and, therefore, such an approximation will be valid over a very large range of temperatures and energies. Similarly, if t^{ab}_{ii} is a large energy scale, it may be possible to reduce the model to one with a single orbital per unit cell (which, in this case, will be a linear combination of the orbitals in Eq. (4)). If the on-site interactions U in this orbital (we drop the subscript a since there is only a single orbital under consideration) are much larger than the interactions with nearby unit cells (nearest-neighbor, next-nearest neighbor, etc.), then we can drop the latter.
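The equivalence of the two forms of the on-site term quoted above is a quick exhaustive check over the four possible occupations (our sanity check, not the paper's):

```python
# For fermionic occupations n_up, n_dn in {0, 1} and n = n_up + n_dn,
# the on-site energy n (n - 1) / 2 equals n_up * n_dn: both count the
# number of doubly-occupied-orbital pairs, which is 0 or 1.
for n_up in (0, 1):
    for n_dn in (0, 1):
        n = n_up + n_dn
        assert n * (n - 1) // 2 == n_up * n_dn
print("identity holds for all occupations")
```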
Similarly, if the nearest-neighbor hopping is much larger than more distant hopping matrix elements, then we can drop the latter and write down the simplified model of Eq. (2). This is the Hubbard model. In the case of the copper-oxide superconductors, the only orbital that is retained is the Cu d_{x^2−y^2} orbital. (However, it has also been argued that a 3-band model including O p_x and p_y orbitals is necessary to capture the physics of the cuprates.) Usually one can drop the indices in t_{ij}, U_i and ε_i since all sites are equivalent, but we will keep them here since during the adiabatic solution they will not all be the same. Due to the presumed separation of energy scales in (4), the effective model (2) is much simpler than (4) and has a much smaller Hilbert space. As we will discuss in detail in this paper, this simplification makes the simulation of a 20 × 20 unit cell lattice feasible on a quantum computer.

B. The physics of the Hubbard model
At half-filling (one electron per unit cell) the Hubbard model gives a simple account of Mott insulating behavior. For U = 0, the ground state is metallic. There is a single band and it is half-filled; the Fermi surface is the diamond |k_x ± k_y| = π/a, where a is the lattice constant. However, for U > 0, the ground state is insulating. While this is true for all U > 0, the physics is different in the small U ≪ t and large U ≫ t limits. For U ≪ t, the system lowers its energy by developing antiferromagnetic order, as a result of which the unit cell doubles in size. There are now two electrons per unit cell and the system is effectively a band insulator. The antiferromagnetic moment N and the charge gap ∆_charge are both exponentially small, N ∼ ∆_charge ∼ t e^{−ct/U} for some constant c. For large U, i.e. U ≫ t, on the other hand, ∆_charge ∼ U. In this limit, it is instructive to expand about t = 0. Then, the ground state is necessarily insulating: there is one electron on each site, and there is an energy penalty of U to move any electron since some site would have to be doubly-occupied. For small but non-zero t/U, this picture is dressed by small fluctuations in which an electron makes virtual hops to a neighboring site and then returns. As a result of these fluctuations, there is an effective interaction between the spins of the electrons on neighboring sites. Since an electron can only hop to a neighboring site if its spin forms a singlet with the spin of the electron there (due to the Pauli principle), the energy is lowered by such processes if neighboring spins are antiferromagnetically-correlated. We can derive an effective Hamiltonian at energies much lower than U by taking into account these virtual hopping processes. It takes the form:

H = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + O(t^4/U^3)   (5)

where J = 4t^2/U. The O(t^4/U^3) terms are due to virtual processes in which multiple hops occur. If these terms are neglected, this is the antiferromagnetic Heisenberg model. Its ground state has an antiferromagnetic moment of order 1. The temperature scale for the onset of antiferromagnetic correlations is ∼ J, which is a much lower scale than the charge gap ∼ U, unlike in the small-U limit, where both scales are comparable.
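The superexchange scale J = 4t²/U can be verified by exact diagonalization of the smallest nontrivial case, the two-site Hubbard model at half filling (a toy check of ours; the sign convention in the 4 × 4 matrix is one common choice of basis phases):

```python
import numpy as np

t, U = 1.0, 50.0
# S_z = 0 sector of the two-site Hubbard model at half filling; basis
# ordering: |up, dn>, |dn, up>, |updn, 0>, |0, updn>.
H = np.array([[0.0, 0.0, -t, -t],
              [0.0, 0.0, +t, +t],
              [-t,  +t,  U,  0.0],
              [-t,  +t,  0.0, U ]])
E0 = np.linalg.eigvalsh(H).min()
# Exact singlet energy: (U - sqrt(U^2 + 16 t^2)) / 2; the triplet sits
# at 0, so the singlet-triplet splitting -> J = 4 t^2 / U for U >> t.
assert np.isclose(E0, (U - np.sqrt(U**2 + 16 * t**2)) / 2)
print(-E0, 4 * t**2 / U)  # ~0.0798 vs 0.08
```

At U/t = 50 the exact splitting already agrees with 4t²/U to better than one percent, illustrating how quickly the Heisenberg description takes over at strong coupling.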
When the subleading O(t^4/U^3) terms are kept, the system remains insulating, but the spin physics may change considerably. The phase diagram of such spin models is the subject of considerable current research (see, for instance, Refs. 27 and 28).

The situation is even less clear when the system is "doped", i.e. when the density is changed from half-filling. Then, an effective model can be derived to lowest order in t/U. The model is governed by the t−J Hamiltonian:

H = -\sum_{\langle i,j \rangle} \sum_\sigma t_{ij} \left( c^\dagger_{i,\sigma} c_{j,\sigma} + h.c. \right) + J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + O(t^4/U^3)   (6)

together with a no-double-occupancy constraint. As a result of this constraint, the t−J model has a smaller per-site Hilbert space than the Hubbard model, which makes it a much more attractive target for exact diagonalization. However, the t−J model does not capture all of the physics of the Hubbard model (especially in the regime in which t/U is small but not so small that higher-order in t/U terms can be neglected).

Any deviation from half-filling will cause the conductivity to be non-zero in a completely clean system, in which ε_i is independent of i in Eq. (2). However, in any real material, there will be some disorder due to impurities, and the system will remain insulating for densities very close to half-filling. As the density deviates from half-filling, antiferromagnetic order is suppressed and eventually disappears. If the Hubbard model captures the physics of the cuprate superconductors, then superconducting order with d_{x^2−y^2} pairing symmetry must occur for a density of 1 − x electrons per unit cell, with x in the range 0.05 ≲ x ≲ 0.25. Moreover, on the underdoped side of the phase diagram, 0.05 ≲ x ≲ 0.15, other symmetry-breaking order parameters, such as stripe order, d-density wave order, or antiferromagnetic order, may also develop (for recent reviews, see, for instance, Refs. 21 and 22; see, also, Ref. 30 for a recent calculation finding several competing orders). On the other hand, if the Hubbard model is not superconducting in this doping range, then superconductivity in the cuprates must be due to effects that are missing in the Hubbard model, such as longer-ranged interactions, longer-ranged hopping, interlayer coupling, and phonons.

Thus, as far as the cuprates are concerned, the principal aim of an analysis of the Hubbard model is to determine if it has a d_{x^2−y^2} superconducting ground state for U/t ∼ 8 in the doping range 0.05 ≲ x ≲ 0.25 and, if so, whether its properties, such as the gap and the superfluid density (as functions of x), can be compared to experimental measurements. If the Hubbard model does not superconduct, then different models must be considered, restoring terms that have been dropped in the passage to the Hubbard model.

III. ADIABATIC PREPARATION OF THE GROUND STATE
We start the adiabatic preparation from the ground state of some Hamiltonian whose solution is known. For the sake of concreteness, we consider two examples. The first, in Sec. III A, is the quadratic mean-field Hamiltonian whose exact ground state is the BCS ground state for a d-wave superconductor, or the analogous Hamiltonian for any other ordered state such as striped, antiferromagnetic, d-density wave, etc. states. The second, discussed in Sec. III B, is the Hubbard model with hopping matrix elements tuned so that the system breaks into disconnected plaquettes whose ground state can be computed exactly. In either case, each time step of the subsequent adiabatic evolution requires O(N) gates and a parallel circuit depth of only O(log N) operations per time step.

This step is formally similar to adiabatic quantum optimization, but with a different viewpoint. If no phase transition occurs along this adiabatic evolution, then we conclude that we guessed correctly about the phase of the system (for this value of parameters in the final Hubbard model). However, if a phase transition does occur, then the gap will close and we will know that we guessed incorrectly and can repeat the procedure with a different initial state in a different phase. Through a process of elimination, we determine the phase of the system for a given set of parameters and, by repeating this process for different Hubbard model parameters, we map out the phase diagram. In adiabatic quantum optimization, it is assumed that a phase transition occurs, and it is hoped that the minimum gap at the transition point scales polynomially in system size rather than exponentially, thereby allowing the evolution to occur in polynomial time. In our protocol for solving the Hubbard model, however, any gap closing, either polynomial or exponential – corresponding, respectively, to a second- or first-order phase transition – is an invitation to repeat the procedure with a different initial state and correspondingly different annealing path until no gap closing occurs.
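The logic that a slow enough sweep through an open gap keeps the system in its instantaneous ground state can be illustrated on a two-level toy model (our sketch; the Hamiltonian, gap, and ramp times below are arbitrary choices, not the paper's annealing schedule):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ground_state_fidelity(T, steps):
    """Evolve the ground state of H(0) under H(s) = (2s - 1) Z + 0.3 X
    with s = t / T, and return |<ground state of H(1)|psi(T)>|^2.
    The minimum gap, 0.6 at s = 1/2, stays open along the whole path."""
    H = lambda s: (2.0 * s - 1.0) * Z + 0.3 * X
    w, v = np.linalg.eigh(H(0.0))
    psi = v[:, 0].astype(complex)          # start in the ground state of H(0)
    dt = T / steps
    for k in range(steps):
        w, v = np.linalg.eigh(H((k + 0.5) / steps))
        psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
    w, v = np.linalg.eigh(H(1.0))
    return abs(v[:, 0].conj() @ psi) ** 2

fast = ground_state_fidelity(T=1.0, steps=500)     # nearly sudden ramp
slow = ground_state_fidelity(T=50.0, steps=5000)   # adiabatic ramp
assert slow > fast and slow > 0.95
```

In the Landau-Zener picture, the probability of leaving the ground state decays exponentially with the ramp time, so the slow sweep ends with near-unit overlap while the fast one does not; a gap that closes along the path would destroy this behavior, which is precisely the diagnostic used in the protocol above.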
When the initial state is in the same phase as the ground state of the Hubbard model, the adiabatic evolution can be done in constant time up to subpolynomial factors (see Sec. V C for a more detailed discussion of errors in adiabatic evolution; while in most of the paper we treat the time required for adiabatic evolution as scaling with the inverse gap squared, in that section we provide a more precise discussion of diabatic errors, i.e., those transitions out of the ground state that are due to a finite time scale for the evolution, and further show how it is possible to improve the scaling with the gap by using smoother annealing paths).

A. Adiabatic Evolution from Mean-Field States
1. Mean-Field Hamiltonians
Let us suppose that we wish to test whether the ground state of the 2D Hubbard model is superconducting for some values of t, U and density. We can do this by seeing if the Hubbard model is adiabatically connected to some simple superconducting Hamiltonian. It need not be realistic; it merely needs to be a representative Hamiltonian for a superconductor. We can take the BCS mean-field Hamiltonian:

H^{MF}_{DSC} = -\sum_{\langle i,j \rangle} \sum_\sigma t_{ij} \left( c^\dagger_{i,\sigma} c_{j,\sigma} + c^\dagger_{j,\sigma} c_{i,\sigma} \right) - \sum_{\langle i,j \rangle} \Delta^{x^2-y^2}_{ij} \left( c^\dagger_{i\uparrow} c^\dagger_{j\downarrow} - c^\dagger_{i\downarrow} c^\dagger_{j\uparrow} \right) + h.c.,   (7)

where \Delta^{x^2-y^2}_{ij} = \Delta/2 for j = i ± x̂ and \Delta^{x^2-y^2}_{ij} = -\Delta/2 for j = i ± ŷ. This is a mean-field d_{x^2−y^2} superconductor (DSC) with a fixed, non-dynamical superconducting gap of magnitude ∆.

The ground state of this Hamiltonian can be written in the form of a Slater determinant by making a staggered particle-hole transformation on the down-spins: c^\dagger_{i\downarrow} → (−1)^i c_{i\downarrow}. Then the Hamiltonian takes the form:

\tilde{H}^{MF}_{DSC} = -\sum_{\langle i,j \rangle} \sum_\sigma t_{ij} \left( c^\dagger_{i,\sigma} c_{j,\sigma} + c^\dagger_{j,\sigma} c_{i,\sigma} \right) - \sum_{\langle i,j \rangle} (-1)^j \Delta^{x^2-y^2}_{ij} \left( c^\dagger_{i\uparrow} c_{j\downarrow} - c^\dagger_{j\uparrow} c_{i\downarrow} \right) + h.c.   (8)

Under this transformation, the number of down-spin electrons per unit cell is particle-hole transformed, n_{i\downarrow} → 1 − n_{i\downarrow}, while the number of up-spin electrons is unchanged; equivalently, the chemical potential for down-spin electrons is flipped in sign, µ_↓ = −µ_↑. Consequently, the number of down-spin electrons is not equal to the number of up-spin electrons. In addition, the superconducting pair field \Delta^{x^2-y^2}_{ij} has become transformed into a staggered magnetic field in the S_x direction with a d_{x^2−y^2} form factor (a d-wave spin-density wave, in the terminology of Ref. 33).
Thus, the ground state of this quadratic Hamiltonian is a Slater determinant of the form:

|\Psi\rangle = \prod_{\mathbf{k}} \left( u_{\mathbf{k}} c^\dagger_{\mathbf{k},\uparrow} + v_{\mathbf{k}} c^\dagger_{\mathbf{k}+\mathbf{Q},\downarrow} \right) |0\rangle   (9)

Here, the momenta k range over the Brillouin zone, −π/a ≤ k_{x,y} ≤ π/a, where a is the lattice constant, and the functions u_k, v_k are given by:

u_{\mathbf{k}} \equiv \sqrt{\frac{1}{2}\left(1 - \frac{\epsilon_{\mathbf{k}}}{\sqrt{\epsilon_{\mathbf{k}}^2 + |\Delta_{\mathbf{k}}|^2}}\right)}, \qquad v_{\mathbf{k}} \equiv \sqrt{\frac{1}{2}\left(1 + \frac{\epsilon_{\mathbf{k}}}{\sqrt{\epsilon_{\mathbf{k}}^2 + |\Delta_{\mathbf{k}}|^2}}\right)}   (10)

where \epsilon_{\mathbf{k}} \equiv -2t(\cos k_x a + \cos k_y a) and \Delta_{\mathbf{k}} \equiv \Delta(\cos k_x a - \cos k_y a). In Sec. III A 2, we show how to prepare this ground state. For now, we will simply assume that this ground state can be prepared efficiently. In computing the properties of this state, it is important to keep in mind that physical operators have been transformed according to c^\dagger_{i\downarrow} → (−1)^i c_{i\downarrow} and to use the appropriate transformed operators.

We now adiabatically deform this Hamiltonian into a weakly-perturbed Hubbard model. There are two reasons why our final Hamiltonian is a weakly-perturbed Hubbard model, rather than the Hubbard model itself: (1) in the absence of long-ranged Coulomb interactions (which are neglected in the Hubbard model), a superconducting ground state will have gapless Goldstone excitations, and (2) a d_{x^2−y^2} superconductor has gapless fermionic excitations at the nodes of the superconducting gap. Although these gapless excitations are important for the phenomenology of superconductors (e.g. the temperature dependence of the superfluid density and the structure of vortices), they are a nuisance in the present context. Hence, we weakly perturb the Hubbard model in order to give energy gaps to the Goldstone modes and nodal fermions. However, this can be done weakly so that we do not bias the tendency towards superconductivity (or lack thereof).
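The coherence factors in Eq. (10) can be tabulated directly on a discrete Brillouin zone; the grid size and the values of t and Δ below are illustrative choices of ours (with a = 1), checking that u_k² + v_k² = 1 away from the gap nodes:

```python
import numpy as np

L, t, Delta = 20, 1.0, 0.3                     # illustrative parameters, a = 1
k = 2.0 * np.pi * (np.arange(L) - L // 2) / L  # momenta in [-pi, pi)
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))     # band energy eps_k
gap = Delta * (np.cos(kx) - np.cos(ky))        # d_{x^2-y^2} form factor
E = np.sqrt(eps**2 + gap**2)                   # Bogoliubov quasiparticle energy
with np.errstate(divide="ignore", invalid="ignore"):
    u = np.sqrt((1.0 - eps / E) / 2.0)
    v = np.sqrt((1.0 + eps / E) / 2.0)
mask = E > 1e-12                               # exclude the four gap nodes
assert np.allclose((u**2 + v**2)[mask], 1.0)   # normalization u^2 + v^2 = 1
```

The four points where E_k vanishes are exactly the nodal momenta (±π/2, ±π/2) whose gapless fermions motivate the extra d_{xy} perturbation introduced below.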
If the Hubbard model superconducts, then the scale of the d_{x^2−y^2} superconducting gap should be ∼ xJ, where ⟨n_i⟩ = 1 − x is the average occupancy of a site and J = 4t^2/U is the superexchange interaction. (On the other hand, if the Hubbard model does not superconduct, then the superconducting gap in the cuprates must be determined by some other energy scale.) Hence, we apply a fixed d_{x^2−y^2} superconducting gap that is much smaller than xJ in order to pin the phase of any superconducting order that may develop. In addition, we apply a small imaginary d_{xy} superconducting gap to give a small gap to the nodal fermions. Neither gap scales with system size.

More concretely, we adiabatically evolve the Hamiltonian according to:

H_{DSC}(s) = (1-s) H^{MF}_{DSC} + s H_{\rm Hub} - \sum_{\langle i,j \rangle} w \Delta^{x^2-y^2}_{ij} \left( c^\dagger_{i\uparrow} c^\dagger_{j\downarrow} - c^\dagger_{i\downarrow} c^\dagger_{j\uparrow} \right) + h.c. - \sum_{\langle\langle j,k \rangle\rangle} i \Delta^{xy}_{jk} \left( c^\dagger_{j\uparrow} c^\dagger_{k\downarrow} - c^\dagger_{j\downarrow} c^\dagger_{k\uparrow} \right) + h.c.   (11)

Here, w is a small dimensionless parameter that determines the magnitude of a small pair field that remains throughout the adiabatic evolution. At s = 0, it simply increases the superconducting gap by a factor of 1 + w. At s = 1, it applies a small pair field ∝ w to the Hubbard model. If the Hubbard model does not have a superconducting ground state, then w ≠ 0 will induce a small superconducting gap ∝ w. However, if the Hubbard model does have a superconducting ground state, then w will pin the phase of the superconducting gap, which would otherwise fluctuate. The magnitude of the gap will be essentially independent of w for small w. If the Hubbard model is superconducting, then the system will remain superconducting all the way from s = 0 to s = 1 and the gap will remain open. As long as s is varied slowly compared to the minimum gap – which, crucially, is independent of system size – the system will remain in the ground state. The final line of Eq.
(11) is a small imaginary second-neighbor superconducting gap. Here, \Delta^{xy}_{jk} = u\Delta/2 for k = j ± (x̂ + ŷ), \Delta^{xy}_{jk} = -u\Delta/2 for k = j ± (x̂ − ŷ), and u is a small dimensionless number that is independent of system size. Such a term opens a small gap ∝ u at the nodes (assuming that the Hubbard model does not have a superconducting ground state that spontaneously breaks time-reversal symmetry). We take u∆ ≪ xJ so that this term opens a gap at the nodes but is otherwise too small to affect the development of superconductivity away from the nodes.

In summary, Eq. (11) is a path through parameter space from a simple superconducting Hamiltonian with a Slater determinant ground state to a Hubbard model that has been perturbed by small terms that give a gap to order parameter fluctuations and nodal excitations but otherwise do not affect the ground state, as may be checked by decreasing the magnitude of these perturbations.

A similar strategy can be used to test whether the Hubbard model has any other ordered ground state, such as antiferromagnetic order. In this case, our starting Hamiltonian is, instead:

H^{MF}_{AF} = -\sum_{\langle i,j \rangle} \sum_\sigma t_{ij} \left( c^\dagger_{i,\sigma} c_{j,\sigma} + c^\dagger_{j,\sigma} c_{i,\sigma} \right) - \sum_i (-1)^i N \left( c^\dagger_{i\uparrow} c_{i\uparrow} - c^\dagger_{i\downarrow} c_{i\downarrow} \right),   (12)

We then evolve along a path analogous to Eq. (11), but with the symmetry-breaking terms in the last two lines replaced by a weak staggered magnetic field. If the Hubbard model has an antiferromagnetic ground state (for some values of t, U, x), then the gap will not close along this path. However, if the actual ground state of the Hubbard model is superconducting for these values of the couplings, then the gap will become O(1/N^{z/2}), where z is the dynamical critical exponent of the transition (i.e. the gap will close, up to finite-size effects), at the point along the evolution at which superconductivity develops.
There will be a non-zero staggered magnetization throughout, since we never fully remove the staggered magnetic field, but there will be a sharp signature at the point at which superconductivity develops. The presence of a staggered field, which is kept small, may slightly shift the onset of superconductivity but will have no other effect.

This strategy can allow us, in some of the circumstances in which it has a clear answer, to address the question: why does the 2D Hubbard model have a superconducting ground state? In particular, it is possible that the superconducting ground state of the Hubbard model necessarily has some other type of order coexisting with superconducting order, especially in the underdoped regime. These secondary orders play a role in some theories of the cuprate superconductors. If such an order were present in the Hubbard model, evolution from a purely superconducting initial state would necessarily encounter a phase transition (at the point of onset of this additional order). Therefore, we can determine not only if the Hubbard model has a superconducting ground state, but also whether the ground state exhibits a secondary type of long-ranged order over some range of dopings. We will return to the question "why does the Hubbard model superconduct" when we discuss correlation functions.

The virtue of using these mean-field Hamiltonians as starting points for adiabatic evolution is that their physics is fully understood. In particular, their ground states are Slater determinants, which we can efficiently prepare, as we explain in the next section.
2. Preparing Slater Determinants
In this section, we explain an efficient, simple quantum algorithm to prepare Slater determinants. This is then used to prepare the Hubbard ground state by adiabatic evolution from $U = 0$ to $U \neq 0$. The standard algorithm is in Ref. 5. This algorithm suffers from some drawbacks. Given a projector $\rho$, a Slater determinant state is a state
$$\Psi_{SD}(\rho) = \prod_{j=1}^{N_e} b^\dagger_j |0\rangle, \quad (13)$$
where $|0\rangle$ is the no-particle vacuum state, and each operator $b^\dagger_j$ is a linear combination of the $a^\dagger_i$, where the $a^\dagger_i$ are fermion creation operators on a given site $i$. We write $b^\dagger_j = \sum_{i=1}^{n} a^\dagger_i P_{ij}$, where the $P_{ij}$ are complex scalars and are the entries of $P$, $n$ is the number of orbitals, and $N_e$ is the number of electrons. Given any matrix $\rho$, we can find a matrix $P$ such that
$$\rho = P P^\dagger. \quad (14)$$
Then, using this matrix $P$ in the definition of the state gives the state $\Psi_{SD}(\rho)$; note that this definition fixes $\Psi_{SD}(\rho)$ up to a complex phase. The algorithm in Ref. 5 proceeds by noting that $U_m = \exp(i \frac{\pi}{2} (b_m + b^\dagger_m))$, acting on the vacuum, produces the state $b^\dagger_m |0\rangle$, up to a phase. Therefore, if the $b^\dagger_j$ are suitably orthogonalized, successively acting with the unitaries $U_m$ produces the desired state. However, no explicit method for implementing this unitary $U_m$ is given; if the unitary is implemented by a Trotter method, this will be extremely inefficient. We explain a simpler technique.

Let us first assume that $\rho$ is real; the extension to the complex Hermitian case will be straightforward. The idea is first to prepare the Slater determinant state
$$\Psi_{SD}(\rho_0) \equiv \prod_{j=1}^{N_e} a^\dagger_j |0\rangle. \quad (15)$$
This corresponds to the case that $\rho_0$ is an $n$-by-$n$ diagonal matrix, with ones in the first $N_e$ entries and zeroes elsewhere. This state has one electron on each of the first $N_e$ states. It is a product state that can be prepared in linear time (indeed, simply initialize each of the first $N_e$ spins to up and the other spins to down).
Then, we act on this state $\Psi_{SD}(\rho_0)$ with a sequence of Givens rotations. A Givens rotation, to borrow the language of matrix diagonalization, is a rotation matrix that acts on two rows or columns at a time. For any given pair $i, j$, define the operator $R_{i,j}(\theta)$ by
$$R_{i,j}(\theta) = \exp(\theta c^\dagger_i c_j - {\rm h.c.}). \quad (16)$$
This is a simple two-qubit gate (up to Jordan-Wigner strings) and is, therefore, more straightforwardly implemented than the unitary $U_m$ in the standard method described above. Define the $n$-by-$n$ orthogonal matrix $r_{i,j}(\theta)$ by its matrix elements as follows. Let $(r_{i,j}(\theta))_{k,k} = 1$ for $k \neq i, j$ and let $(r_{i,j}(\theta))_{k,l} = 0$ if $k \neq l$ and $k \neq i, j$, or if $k \neq l$ and $l \neq i, j$. Then, let $(r_{i,j}(\theta))_{i,i} = (r_{i,j}(\theta))_{j,j} = \cos(\theta)$ while $(r_{i,j}(\theta))_{i,j} = -(r_{i,j}(\theta))_{j,i} = \sin(\theta)$. Then,
$$R_{i,j}(\theta)\, \Psi_{SD}(\rho) = \Psi_{SD}(r_{i,j}(\theta)\, \rho\, r_{i,j}(\theta)^{-1}), \quad (17)$$
where again the equality holds up to the phase ambiguity. Now, given some desired target $\Psi_{SD}(\rho)$, we claim that one can always find a sequence of at most $n N_e$ Givens rotations that will turn the matrix $\rho$ into the matrix $\rho_0$. Thus, by inverting this sequence of Givens rotations and applying that sequence to the state $\Psi_{SD}(\rho_0)$, we succeed in constructing the desired state. The Givens rotations will be ordered such that the Jordan-Wigner strings can be cancelled as in Ref. 11, so the algorithm takes time $O(N_e n)$. To show our claim that the sequence of Givens rotations exists, note that the effect of a Givens rotation which transforms $\rho$ to $r_{i,j}(\theta)\, \rho\, r_{i,j}(\theta)^{-1}$ can be achieved by transforming $P$ to $r_{i,j}(\theta) P$. Consider a left singular vector of $P$ with singular value equal to 1; we can find a sequence of at most $n$ Givens rotations that rotate that vector to be equal to the vector $(1, 0, 0, \ldots)$. As a result, all the other singular vectors now have amplitude only on sites $2, \ldots, n$.
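The construction just described can be checked concretely. The following numpy sketch (our own illustrative code, not from the paper) reduces a real isometry $P$ to the reference form $[I; 0]$ with at most $n N_e$ Givens rotations on adjacent rows, and then applies the inverted sequence to the reference determinant to recover $P$; each $(i, j, \theta)$ in the list corresponds to one two-qubit gate $R_{i,j}(\theta)$ of Eq. (16):

```python
import numpy as np

def givens_matrix(n, i, j, theta):
    # Dense n-by-n rotation r_{i,j}(theta) mixing rows i and j, as in Eq. (16);
    # each one corresponds to a single two-qubit gate R_{i,j}(theta).
    r = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    r[i, i] = r[j, j] = c
    r[i, j], r[j, i] = s, -s
    return r

def reduction_sequence(P):
    # Reduce the real n x Ne isometry P (columns = occupied orbitals) to the
    # reference form [I; 0], using Givens rotations on adjacent rows and
    # working bottom-up within each column.  Returns the (i, j, theta) list.
    P = P.copy()
    n, ne = P.shape
    seq = []
    for col in range(ne):
        for row in range(n - 1, col, -1):
            a, b = row - 1, row
            theta = np.arctan2(P[b, col], P[a, col])
            P = givens_matrix(n, a, b, theta) @ P
            seq.append((a, b, theta))
    return seq, P

# Occupied orbitals of a random real "hopping" Hamiltonian as the target.
rng = np.random.default_rng(7)
n, ne = 6, 3
h = rng.normal(size=(n, n))
h = h + h.T
P = np.linalg.eigh(h)[1][:, :ne]              # lowest ne orbitals

seq, reduced = reduction_sequence(P)
ref = np.zeros((n, ne))
ref[:ne, :ne] = np.eye(ne)
assert np.allclose(reduced, ref, atol=1e-10)
assert len(seq) <= n * ne                     # at most n*Ne rotations

# Undoing the rotations in reverse order maps the trivial determinant back to
# P, i.e. the circuit R(-theta_k) ... R(-theta_1) prepares the target state.
Q = ref
for (a, b, theta) in reversed(seq):
    Q = givens_matrix(n, a, b, -theta) @ Q
assert np.allclose(Q, P, atol=1e-10)
print("rotations used:", len(seq))
```

Because the rotations act only on adjacent rows in this sketch, they map directly onto nearest-neighbor two-qubit gates in the Jordan-Wigner ordering, with no extra strings.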
Find another sequence to transform some other left singular vector with singular value equal to 1 into the vector $(0, 1, 0, 0, \ldots)$. Continue until all singular vectors with nonzero singular values have amplitudes only on the first $N_e$ sites.

The extension of this procedure to the complex case is simple. Rather than having the Givens rotation $R_{i,j}(\theta)$ depend on only a single angle $\theta$, we need to perform a rotation $\exp(z c^\dagger_i c_j - {\rm h.c.})$ for some complex number $z$. All the time estimates remain unchanged up to constant factors. However, for many systems a much shorter sequence of Givens rotations can be found. Using the same idea as in the fast Fourier transform, if $n$ is a power of 2, there exists a sequence of $n \log_2(n)$ Givens rotations to transform an $n$-by-$n$ matrix to the momentum basis. Thus, we can produce ground states of a system of free fermions on a periodic lattice in time $O(n \log(n))$. This can be easily extended to handle the case of a periodic system with a unit cell larger than a single site. Assume there are states $|i, \vec{x}\rangle$, where $i$ labels an atom in a unit cell and $\vec{x}$ labels a lattice vector. We transform to a momentum basis $|i, \vec{k}\rangle$, and then for each $\vec{k}$ we initialize some Slater determinant in the unit cell in momentum space. If the unit cells have size $O(1)$, the time to initialize each cell is $O(1)$, so the total time is still $O(n \log(n))$.

A further possible optimization is that if $N_e > n/2$, we can instead use Givens rotations to bring the holes to the desired states, rather than the electrons, taking a time $O((n - N_e) n)$ rather than $O(N_e n)$. As an interesting application of this, consider creating the ground state of $N_e = 3$ electrons in $n = 4$ sites, using a hopping Hamiltonian with the sites arranged on a ring. This can be done by initializing a state with electrons on the first 3 sites and then applying a total of three different $R_{i,j}(\theta)$ operators.
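A minimal numerical sketch of this hole-based trick for the ring example (our own illustrative code; the hopping strength is set to 1): since the determinant of $N_e = n - 1$ electrons is fixed by the single empty orbital, rotating that one vector into place with $n - 1 = 3$ adjacent-pair Givens rotations suffices:

```python
import numpy as np

# Hole-based preparation for Ne = 3 electrons on an n = 4 site ring
# (illustrative; hopping strength t = 1).  The Slater determinant is fixed by
# the single *unoccupied* orbital, so only one vector must be rotated into
# place: n - 1 = 3 Givens rotations instead of O(Ne * n).
n = 4
h = np.zeros((n, n))
for i in range(n):
    h[i, (i + 1) % n] = h[(i + 1) % n, i] = -1.0   # ring hopping
hole = np.linalg.eigh(h)[1][:, -1]                 # the one empty orbital

# Reduce the hole vector to the basis vector e_{n-1} with adjacent-pair
# Givens rotations; inverting the sequence rotates the "hole on the last
# site" reference state into the true ground state.
v = hole.copy()
rotations = []
for row in range(n - 1):
    theta = np.arctan2(v[row], v[row + 1])
    c, s = np.cos(theta), np.sin(theta)
    v[row], v[row + 1] = c * v[row] - s * v[row + 1], s * v[row] + c * v[row + 1]
    rotations.append((row, row + 1, theta))

assert len(rotations) == 3                         # three R_{i,j}(theta) gates
assert np.allclose(v, [0, 0, 0, 1], atol=1e-10)
print("hole rotated into place with", len(rotations), "Givens rotations")
```

Note that the degeneracy among the occupied orbitals is irrelevant here: the determinant depends only on the span of the occupied space, which is uniquely fixed by its one-dimensional complement.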
3. Numerical Results
We applied the Slater determinant preparation procedure to the Hubbard model on 8 sites arranged in two rows of 4 sites. We started at $U = 0$ and $\epsilon_i = 0$. All horizontal couplings were set equal to 1, while, to avoid having an exact ground state degeneracy, the vertical couplings were doubled in strength to 2 (note that this requirement depends on the filling factor; for example, at half-filling we would instead prefer to remain at vertical coupling equal to 1 to avoid degeneracies). The ground state of this model is a Slater determinant of free fermion single-particle wavefunctions that we could prepare with a total of 14 Givens rotations for both spin up and spin down. This was done by initializing particles in three out of four sites in the top row of sites, and holes in the remaining sites. Then, 3 Givens rotations were used to create a uniform superposition of the hole in various states in the top row. Finally, 4 Givens rotations were used to create a uniform superposition between top and bottom rows. This was done for both spins, giving $2 \times 7 = 14$ rotations. We then annealed, turning on $U \neq 0$ while reducing the vertical coupling to unit strength. The results are shown in Fig. 1. As seen, increasing the annealing time increases the success probability, until it converges to 1. The annealing times required to achieve substantial (say, 90%) overlap with the ground state are not significantly different from the preparation approach discussed next based on "joining" plaquettes.

Note that for this particular model, we know the ground state energy with very high accuracy from running an exact diagonalization routine, so we know whether or not we succeed in projecting onto it after phase estimation. The system size is small enough that calculating ground state energies is easy to accomplish on a classical computer using Lanczos methods. Of course, on larger systems requiring a quantum computer to simulate, we will not have access to the exact energy.
In this case, it will be necessary to do multiple simulations with different mean-field starting states, as described at the start of this section; then, if the annealing is sufficiently slow and sufficiently many samples are taken, the minimum over these simulations will give a good estimate of the ground state energy. Once we have determined the optimum mean-field starting state and annealing path, we can then use this path to re-create the ground state to determine correlation functions.

B. Preparing $d$-wave resonating valence bond states from plaquettes

An alternative approach to adiabatic state preparation has previously been proposed in the context of ultracold quantum gases in optical lattices. In this approach, $d$-wave resonating valence bond (RVB) states are prepared. RVB states are spin liquid states of a Mott insulator, in which the spins on nearby sites are paired into short-range singlets. RVB states were first introduced as proposed wave functions for spin liquid ground states of Mott insulators, and then conjectured to describe the insulating parent compounds of the cuprate superconductors. In more modern language,
FIG. 1. Success probability as a function of total annealing time for an annealing path in the Hubbard model that begins at $U = 0$ and vertical hopping equal to 2 and ends at $U = 2.5t$ (blue) or $U = 4t$ (red) and all hoppings equal to 1. The initial state is a Slater determinant state (of free fermion single-particle states). Annealing is linear along the path. The $x$-axis represents the total time for the anneal. The $y$-axis represents the probability that the phase estimation routine returns an energy that is within a small fixed tolerance of the ground state energy. This probability is obtained by sampling runs of the phase estimation circuit, giving rise to some noise in the curves.

they are described as $Z_2$ topologically-ordered spin liquid states. The idea was that, upon doping, the singlet pairs would begin to move and form a condensate of Cooper pairs. Although the insulating parent compounds of the cuprates are antiferromagnetically-ordered, it is nevertheless possible that the cuprate superconductors are close to an RVB spin liquid ground state and are best understood as doped spin liquids. The RVB scenario has been confirmed for $t$-$J$ and Hubbard models of coupled plaquettes and ladders (consisting of two coupled chains) and remains a promising candidate for the ground state of the Hubbard model and high-temperature superconductors. Instead of starting from mean-field Hamiltonians, we build up the ground state wave function from small spatial motifs, in this case four-site plaquettes, which are the smallest lattice units on which electrons pair in the Hubbard model. Following the same approach as originally proposed for ultracold quantum gases, we prepare four-site plaquettes in their ground state, each filled with either two or four electrons. These plaquettes then get coupled, either straight to a two-dimensional square lattice, or first to quasi-one-dimensional ladders, which are subsequently coupled to form a square lattice.
2. Preparing the ground states of four-site plaquettes
The first step is preparing the ground state of the Hubbard model on four-site plaquettes filled with either two or four electrons. We start from very simple product states
FIG. 2. Labeling of the sites in a plaquette.
FIG. 3. Coupling of two plaquettes.

and adiabatically evolve them into the ground states of the plaquettes. To prepare the ground states of plaquettes with four electrons we start from a simple product state with two doubly occupied sites, $c^\dagger_{i\uparrow} c^\dagger_{i\downarrow} c^\dagger_{j\uparrow} c^\dagger_{j\downarrow} |0\rangle$, using the labeling shown in Fig. 2. Our initial Hamiltonian, of which this state is the ground state, has no hopping ($t_{ij} = 0$), but already includes the Hubbard repulsion on all sites. To make the state with doubly occupied sites the ground state, we add large on-site potentials $\epsilon = 2U + 3t$ on the empty sites. To prepare the ground state of the plaquette we then

1. start with $t_{ij} = 0$, $U_i = U$ and a large potential $\epsilon$ on the empty sites, so that the initial state is the ground state,

2. ramp up the hoppings $t_{ij}$ from 0 to $t$ during a time $T_1$,

3. ramp the potentials $\epsilon_i$ down to 0 during a time $T_2$.

Time scales $T_1 = T_2$ of order $10\,t^{-1}$ and $\epsilon = U + 4t$ are sufficient to achieve high fidelity. As we discuss below, it can be better to prepare the state quickly and then project into the ground state through a quantum phase estimation than to aim for an extremely high fidelity in the adiabatic state preparation. To prepare a plaquette with two electrons we start from a product state with a single doubly occupied site, $c^\dagger_{i\uparrow} c^\dagger_{i\downarrow} |0\rangle$, and use the same schedule, except that we choose the non-zero potentials on the three empty sites of the plaquette.

FIG. 4. Probability of preparing the ground state as a function of times $T_1$, $T_2$, $T_3$ in units of $t^{-1}$, following the annealing schedule described above. During time $T_3$, a uniform potential $\epsilon$ was ramped from $t$ to 0 on the plaquette with two electrons to break the symmetry between plaquettes.
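The three-step schedule above can be tested at negligible cost on a two-site toy analogue. The sketch below is our own illustration of the same idea (it is not the four-site plaquette simulation of the text); $U = 4t$ and the ramp times are arbitrary choices:

```python
import numpy as np

# Two-site toy version of the three-step schedule (modes: 0 = site0 up,
# 1 = site0 down, 2 = site1 up, 3 = site1 down; qubit |1> = occupied).
def lowering_ops(m):
    I2, Zg = np.eye(2), np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])         # |0><1|
    ops = []
    for p in range(m):
        M = np.array([[1.0]])
        for q in range(m):
            M = np.kron(M, Zg if q < p else (a if q == p else I2))
        ops.append(M)
    return ops

c = lowering_ops(4)
num = [ci.T @ ci for ci in c]
t, U = 1.0, 4.0
eps0 = 2 * U + 3 * t                               # large potential on empty site

def ham(th, eps):
    hop = -th * (c[0].T @ c[2] + c[2].T @ c[0] + c[1].T @ c[3] + c[3].T @ c[1])
    return hop + U * (num[0] @ num[1] + num[2] @ num[3]) + eps * (num[2] + num[3])

def evolve(psi, H, dt):                            # exact step under constant H
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

# Step 1: both electrons on site 0 -- the ground state of ham(0, eps0).
psi = np.zeros(16, dtype=complex)
psi[0] = 1.0
psi = c[0].T @ c[1].T @ psi

T1 = T2 = 20.0
dt = 0.1
for k in range(int(T1 / dt)):                      # step 2: ramp hopping 0 -> t
    psi = evolve(psi, ham((k + 0.5) * dt / T1 * t, eps0), dt)
for k in range(int(T2 / dt)):                      # step 3: ramp eps0 -> 0
    psi = evolve(psi, ham(t, (1 - (k + 0.5) * dt / T2) * eps0), dt)

# Compare with the lowest two-electron eigenstate of the final Hamiltonian.
Ntot = num[0] + num[1] + num[2] + num[3]
w, V = np.linalg.eigh(ham(t, 0.0))
sector = [k for k in range(16) if abs(V[:, k] @ Ntot @ V[:, k] - 2.0) < 1e-6]
fidelity = abs(V[:, sector[0]].conj() @ psi) ** 2
print(f"ground-state fidelity after the anneal: {fidelity:.4f}")
assert fidelity > 0.9
```

The minimum gap along this toy path is of order $t$, so ramp times of a few tens of $t^{-1}$ already yield fidelities close to 1, consistent with the time scales quoted above.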
3. Coupling of plaquettes
After preparing an array of decoupled plaquettes, some of them with four electrons and some with two electrons depending on the desired doping, we adiabatically turn on the coupling between the plaquettes. As a test case we use that of Ref. 17 and couple two plaquettes (see Fig. 3), one of them prepared with two electrons and one with four electrons.

Naively coupling plaquettes by adiabatically turning on the inter-plaquette hoppings will not give the desired result, since the Hamiltonian is reflection symmetric but an initial state with two electrons on one plaquette and four on the other breaks this symmetry. No matter how slowly the anneal is done, the probability of preparing the ground state will not converge to 1. We thus need to explicitly break the reflection symmetry by either first ramping up a small chemical potential on a subset of the sites or, more easily, not completely ramping down the non-zero $\epsilon_i$ values when preparing plaquette ground states. Consistent with Ref. 17 we find that a modest joining time $T_3$ is sufficient to prepare the ground state with high fidelity.

See Fig. 4 for success probabilities depending on the times $T_1$ to ramp up hopping in a plaquette, $T_2$ to ramp down $\epsilon$ in a plaquette, and $T_3$ to join the plaquettes. The specific couplings in these figures are an initial potential $8t$ on the empty sites in the plaquette with four electrons, chemical potential $8t$ on the empty sites in the plaquette with two electrons, and chemical potential $t$ on the occupied site in the plaquette with two electrons. See Fig. 5 for a plot of the spectral gaps for an example annealing schedule.

FIG. 5. Energy levels during annealing. The $y$-axis is energy; the $x$-axis is time. From time 0 to time 16 (in units of $t^{-1}$) the hopping is ramping up, from time 16 to time 36 the non-uniform $\epsilon$ in each plaquette is ramping down (leaving a uniform $\epsilon = 1$ on the plaquette with two electrons), and from time 36 to time 76 the plaquettes are being joined. All levels remain non-degenerate except for a degeneracy that appears at the end of the anneal. After time 76 plaquettes are separated, as described below.

Going from two plaquettes to the full square lattice is straightforward: we bias some plaquettes with small values of $\epsilon_i$ to break any reflection and rotation symmetries, then switch on inter-plaquette hoppings, and switch off the bias potentials. The time scale here is a priori unknown. According to Ref. 17 the ground state on ladders can be prepared within a relatively short time (of order tens of $t^{-1}$). In two dimensions we expect that similar time scales may be sufficient unless we encounter a quantum phase transition to a different phase.
4. Decoupling plaquettes
As proposed in the context of analog quantum simulation, the pairing of holes on plaquettes can be tested by adiabatically decoupling plaquettes. We start by preparing one plaquette with four electrons and a second plaquette with two electrons. We then couple them as described above to end up with a system with two holes relative to the half-filled Mott insulator. After adiabatically decoupling the two plaquettes again, we measure the number of electrons on each plaquette. If holes bind in the ground state, they will end up on the same plaquette, and we measure an even electron number per plaquette. If they don't bind, each hole will prefer to be on a different plaquette to maximize its kinetic energy, and we will end up with three electrons per plaquette.

As shown in Fig. 6, the probability of finding three electrons on a plaquette goes to zero for $U = 2t$ and $U = 4t$ when increasing the time $T_s$ over which we ramp down the inter-plaquette hopping. This indicates pairing and is consistent with the known critical value of $U$ for pairing on four-site plaquettes. On the other hand, for $U = 6t$ and $U = 8t$ we see an increase in the probability of finding three electrons per plaquette, indicating the absence of pairing.

FIG. 6. Shown is the probability $P$ of finding three electrons on each of the plaquettes after decoupling two plaquettes with a total of six electrons, as a function of the time $T_s$ (in units of $t^{-1}$) over which the plaquettes were nearly adiabatically separated. A decrease of $P$ indicates pairing of holes in the ground state of a plaquette.

C. Trotter errors and accelerating adiabatic preparation by using larger time steps
In practice, it suffices for our annealing preparation to obtain a state with sufficiently large overlap onto the ground state that phase estimation will then project onto the ground state with a reasonably large probability. We can thus tolerate some errors in our annealing path. For this reason, we annealed using a larger time step, before doing the phase estimation with a smaller time step.

In fact, this approach of annealing at a larger time step can be turned into an approach which uses a larger time step but still produces a state with very large overlap with the ground state. Note that the larger Trotter-Suzuki step means that we in fact evolve under a modified Hamiltonian, proportional to the logarithm of the unitary describing a single time step. As the time step decreases, this Hamiltonian becomes closer to the desired Hamiltonian. It is then possible to evolve with a larger time step, following the adiabatic path in parameter space, and then at the end of the path one can "anneal in the time step", gradually reducing the time step to zero.

In our simulation of the anneal joining two plaquettes, the system was rather insensitive to Trotter errors. We used a relatively large time step (a few tenths of $t^{-1}$) for the annealing, and used a much smaller time step for the phase estimation that made the errors negligible there. Decreasing the phase-estimation time step further suppressed the relative errors in the energy by additional orders of magnitude. The larger time step used for the adiabatic preparation prevents us from obtaining 100% overlap with the ground state even in the limit of very long annealing times; however, as seen, we still obtain very high overlaps, in the range of 80% and above.

TABLE I. An overview of the quantum gates used in this paper.

Name | Symbol | Matrix Representation
Hadamard | $H$ | $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$
Y basis change | $Y$ | $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$
Phase gate | $T(\theta)$ | $\begin{pmatrix} 1 & 0 \\ 0 & e^{-i\theta} \end{pmatrix}$
Z rotation | $R_z(\theta)$ | $\begin{pmatrix} e^{i\theta/2} & 0 \\ 0 & e^{-i\theta/2} \end{pmatrix}$
Controlled NOT (CNOT) | | $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$

Note that $Y$ gates do not represent the Pauli $Y$; rather, they represent a Clifford operator that interchanges the $Y$ and $Z$ spin orientations. The phase gate $T(\theta)$ can be replaced by an $R_z(\theta)$ rotation, up to a global phase that can be kept track of classically.

IV. IMPLEMENTING UNITARY TIME EVOLUTION OF THE HUBBARD MODEL

A. Circuits for the individual terms
For the Hubbard model, we need to implement unitary evolution under two types of terms: hopping terms $c^\dagger_{p,\sigma} c_{q,\sigma} + {\rm h.c.}$ and repulsion terms $c^\dagger_{p,\sigma} c_{p,\sigma} c^\dagger_{q,\sigma'} c_{q,\sigma'}$. These terms are a subset of those needed for earlier electronic structure calculations. In addition, we will need to implement unitary evolution under a pairing term $c^\dagger_{p,\sigma} c^\dagger_{q,\sigma'} + {\rm h.c.}$

To establish notation, we summarize in Table I the gates used in the quantum circuits shown in this paper.
1. The repulsion and chemical potential terms
The repulsion term is an example of what is called an $H_{pqqp}$ term, and evolution under this term is given in Fig. 7. A chemical potential term, $c^\dagger_{p,\sigma} c_{p,\sigma}$, which we call an $H_{pp}$ term, is not needed for the final Hamiltonian but is useful for annealing. The circuit for this term is shown in Fig. 8. Since all the repulsion terms in the Hubbard model and all the chemical potential terms commute with each other, they can be applied in parallel, thus needing only $O(N)$ gates and $O(1)$ parallel circuit depth.

FIG. 7. Quantum circuit to implement time evolution for a time step $\theta$ under the term $H_{pqqp} = n_{p,\sigma} n_{q,\sigma'} = c^\dagger_{p,\sigma} c^\dagger_{q,\sigma'} c_{q,\sigma'} c_{p,\sigma}$. Top and bottom lines represent qubits for $p,\sigma$ and $q,\sigma'$.

FIG. 8. Quantum circuit to implement time evolution for a time step $\theta$ under the chemical potential term $n_{p,\sigma} = c^\dagger_{p,\sigma} c_{p,\sigma}$.
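A quick numerical check that the circuit of Fig. 7 implements $\exp(-i\theta\, n_{p,\sigma} n_{q,\sigma'})$ up to a global phase, assuming the $R_z$ convention of Table I and the gate angles $\theta/2$, $\theta/2$, $-\theta/2$ appearing in the figure:

```python
import numpy as np

# Verify (up to a global phase) that Rz(theta/2) on each qubit, plus a
# CNOT-conjugated Rz(-theta/2) on the target, equals exp(-i theta n_p n_q).
# Conventions: Rz(theta) = diag(e^{i theta/2}, e^{-i theta/2}) as in Table I,
# qubit |1> = occupied.  Gate angles are those indicated in Fig. 7.

def Rz(theta):
    return np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
I2 = np.eye(2)

theta = 0.731
circuit = (np.kron(Rz(theta / 2), Rz(theta / 2))
           @ CNOT @ np.kron(I2, Rz(-theta / 2)) @ CNOT)

nop = np.diag([0.0, 1.0])                           # number operator n = |1><1|
target = np.diag(np.exp(-1j * theta * np.diag(np.kron(nop, nop))))

phase = circuit[0, 0] / target[0, 0]                # global phase e^{i theta/4}
assert np.allclose(circuit, phase * target)
print("Fig. 7 circuit matches exp(-i theta n_p n_q) up to a global phase")
```

The residual global phase $e^{i\theta/4}$ comes from the identity part of $n_p n_q = (1 - Z_p - Z_q + Z_p Z_q)/4$ and is physically irrelevant for a single term (and can be tracked classically when needed).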
2. The hopping and pairing terms
The unitary evolution under the hopping term, which we call an $H_{pq}$ term, is given by the circuit in Fig. 9, and has also been considered previously. It will also be useful to be able to implement evolution under a superconducting pairing term, $\Delta c^\dagger_{p,\sigma} c^\dagger_{q,\sigma'} + {\rm h.c.}$ This term is similar to the hopping term, since both are quadratic in the fermionic operators. The hopping and pairing term circuits will be very similar to each other, with the changes involving changing some of the basis change gates. The pairing term will be needed for the adiabatic evolution from the BCS mean-field Hamiltonian suggested in subsection III A 1. Also, it will be needed to measure superconducting correlations present in the Hubbard state: in this case the measurement will be of the correlation function between two different superconducting pairing terms on two pairs of sites separated from each other. Measurement will be discussed more later, but our general procedure for measurement will require knowledge of the unitary which implements evolution under the term.

For the case of real $\Delta$, we wish to implement unitary evolution by $\exp[-i\theta(c^\dagger_{p,\sigma} c^\dagger_{q,\sigma'} + {\rm h.c.})]$. Let us write $X_{p,\sigma}$ to denote the Pauli $X$ operator on the qubit corresponding to spin-orbital $p,\sigma$, and similarly let $Y_{p,\sigma}$ and $Z_{p,\sigma}$ denote Pauli $Y$ and $Z$ operators on that spin-orbital. Using the identity that $c^\dagger = \frac{1}{2}(X + iY)$ and $c = \frac{1}{2}(X - iY)$, up to phase factors associated with the Jordan-Wigner strings, we wish to implement
$$\exp[-i \tfrac{\theta}{2} (X_{p,\sigma} X_{q,\sigma'} - Y_{p,\sigma} Y_{q,\sigma'})\, Z_{JW}], \quad (18)$$
where $Z_{JW}$ denotes the product of the Pauli $Z$ operators on the spin-orbitals intervening between $p,\sigma$ and $q,\sigma'$. In general, we wish to implement this unitary operation controlled by some ancilla. In fact, this circuit is the same as the circuit used to implement the term $c^\dagger_{p,\sigma} c_{q,\sigma'} + {\rm h.c.}$, up to a sign change on the second spin rotation, as shown in Fig. 10.

For $\Delta$ imaginary, we wish to implement
$$\exp[-i \tfrac{\theta}{2} (X_{p,\sigma} Y_{q,\sigma'} + Y_{p,\sigma} X_{q,\sigma'})\, Z_{JW}]. \quad (19)$$
This circuit is similar to the other pairing and hopping circuits, except that in the first half of the circuit we apply a Hadamard to the first qubit and a $Y$ basis change to the second, and in the second half we apply the $Y$ basis change to the first qubit and the Hadamard to the second.

B. Optimizing the cost of the Jordan-Wigner transformation
In order to implement the unitary time evolution in the Hubbard model, we use a Jordan-Wigner transformation to turn the model into a spin model. The depth of the circuits described in the previous subsection will depend upon the particular ordering of spin-orbitals chosen for this Jordan-Wigner transform. We choose to order so that all qubits corresponding to spin up appear first in some order, and then all qubits corresponding to spin down. Since the hopping terms preserve spin, this simplifies the Jordan-Wigner strings needed. For a given spin, we order the sites in the "snake" pattern shown in Fig. 11.

With this snake pattern, we group the hopping terms into four different sets, as shown. The terms in any given set all commute with each other. The two horizontal sets of terms have no Jordan-Wigner string required, since they involve hopping between neighboring sites with the given ordering pattern, and so these terms can all be executed in parallel. The vertical terms do not commute with each other. However, they nest as in Ref. 11, so that all vertical terms in a given set can be executed with $O(N)$ gates using a depth $O(\sqrt{N})$, where we assume that the geometry is roughly square so that the number of sites in a given row is $\sqrt{N}$.

For periodic boundary conditions, we must also handle the terms going from the left to the right boundary or from the top to the bottom boundary. The terms going from the left to the right boundary can be executed in parallel with each other. Each term naively requires a depth proportional to the length of a given horizontal row to calculate the Jordan-Wigner string. Assuming a square geometry, this length is $O(\sqrt{N})$. We can use a tree to calculate the fermionic parity of the sites in the middle of the string, reducing the depth to $\log(N)$ if desired. For the terms from the top to the bottom edge, there are Jordan-Wigner strings of length $O(N)$, but these terms nest and so can all be executed in parallel, reducing the depth to $O(\log(N))$.
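The bookkeeping behind the snake ordering can be sketched as follows (our own illustrative code): the Jordan-Wigner string length of a hopping term is the number of modes strictly between the two spin-orbitals in the chosen order, which vanishes for bulk horizontal bonds and grows as $O(\sqrt{N})$ for vertical bonds:

```python
# Snake Jordan-Wigner ordering on an Lx x Ly grid (illustrative bookkeeping).
def snake_index(x, y, Lx):
    # Row y runs left-to-right for even y and right-to-left for odd y.
    return y * Lx + (x if y % 2 == 0 else Lx - 1 - x)

def string_len(i, j):
    # Number of modes strictly between i and j in the Jordan-Wigner order.
    return abs(i - j) - 1

Lx = Ly = 4
horiz = [string_len(snake_index(x, y, Lx), snake_index(x + 1, y, Lx))
         for y in range(Ly) for x in range(Lx - 1)]
vert = [string_len(snake_index(x, y, Lx), snake_index(x, y + 1, Lx))
        for y in range(Ly - 1) for x in range(Lx)]

assert all(s == 0 for s in horiz)      # horizontal bulk bonds: no JW string
assert min(vert) == 0                  # the bond at each snake turn is free too
assert max(vert) == 2 * Lx - 2         # vertical strings grow as O(sqrt(N))
print("max vertical JW string length:", max(vert))
```

This is only the counting argument; the nesting and parity-tree constructions described in the text are what reduce the resulting circuit depth.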
The fermionic parity on the sites on the middle rows (those other than the top and bottom rows) can be calculated using a tree, again in depth $O(\log(N))$. The total depth can then be $O(\log(N)\sqrt{N})$. In fact, using a tree to compute parities, the total depth can be reduced to polylogarithmic in $N$. Another way to reduce the cost of the Jordan-Wigner transform in this geometry is in Ref. 43. This method does require additional qubits.

FIG. 9. Quantum circuit to implement time evolution for a time step $\theta$ under the hopping term $c^\dagger_{p,\sigma} c_{q,\sigma} + c^\dagger_{q,\sigma} c_{p,\sigma}$. Top and bottom lines represent qubits for $p,\sigma$ and $q,\sigma$.

FIG. 10. Quantum circuit to implement time evolution for a time step $\theta$ under the pairing term $c^\dagger_{p,\sigma} c^\dagger_{q,\sigma'} + c_{q,\sigma'} c_{p,\sigma}$. Top and bottom lines represent qubits for $p,\sigma$ and $q,\sigma'$.

FIG. 11. Snake pattern for ordering of sites. Arrow shows order of sites. Dashed and solid lines represent hopping terms. The hopping terms are grouped into four different sets, two sets of vertical and two sets of horizontal hoppings, as shown by different colorations and dash patterns.

V. FINDING THE GROUND STATE BY QUANTUM PHASE ESTIMATION
The annealing algorithm generates a state with good overlap with the ground state if the rate is sufficiently slow. Section V C discusses optimized paths that increase the overlap. However, in fact it is not necessary to obtain a perfect overlap with the ground state; if the overlap with the ground state is reasonably large, then we can apply quantum phase estimation to the state and have a reasonably large chance of projecting onto the ground state. Since the time for the simplest annealing path scales inversely with the gap squared, while the time to use quantum phase estimation to measure energies more accurately than the gap (and thus to determine whether or not one is in the ground state) scales inversely with the gap, up to logarithmic corrections, using quantum phase estimation to project an approximate state onto the ground state may be preferred. The gap will depend upon the annealing path; since we use a symmetry-breaking field until nearly the end, the gap only closes at the end of the anneal. In this section we first give a simple improvement to quantum phase estimation that leads to a reduction in depth. We then give some numerical estimates for annealing times for free fermion systems (the simplest case where one can estimate annealing times for a large system numerically).
A. Reducing the number of rotations in quantum phase estimation
While this general setting of using adiabatic preparation and quantum phase estimation has been considered many times before, we briefly note one simplification of the quantum phase estimation procedure. If the gate set available consists of one- or two-qubit Clifford operations and arbitrary rotations, then this simplification provides a slightly larger than twofold reduction in depth; more importantly, it provides a fourfold reduction in the number of arbitrary rotations, which are expected to be the most costly operations to implement. This simplification is not in any way specific to the Hubbard model, and could be applied in other settings, such as in quantum chemistry. The usual approach to quantum phase estimation is to build controlled unitaries to implement the unitary evolution (by using the ancilla to control the unitaries described above). In this case, if the ancilla qubit is in the $|0\rangle$ state, then no quantum evolution is implemented, while if the ancilla qubit is in the $|1\rangle$ state, then quantum evolution by $U = \exp(-iHt)$ is implemented (up to Trotter-Suzuki error). Then, quantum phase estimation estimates the eigenvalues of this unitary $U$. The simple modification that we propose is that instead, if the ancilla qubit is in the $|0\rangle$ state, then we implement evolution by $\exp(iHt/2)$, while if the ancilla qubit is in the $|1\rangle$ state, then we implement evolution by $\exp(-iHt/2)$. Since the difference between these two evolutions is the same $\exp(-iHt)$, the same phase estimation procedure on the ancilla will return the identical result for the energy (again up to Trotter-Suzuki error).

This approach means that we have then only to implement unitary evolution for half the time (i.e., $t/2$ rather than $t$), so assuming the same time step for each Trotter step means that the number of Trotter steps required is halved. However, to analyze whether or not this improves the gate depth, we need to determine how the gate depth might change in a given Trotter step.
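The claim that controlling between $\exp(+iHt/2)$ and $\exp(-iHt/2)$ leaves the measured phase unchanged can be verified directly on a small random Hermitian $H$ (illustrative sketch):

```python
import numpy as np

# Verify: controlling between exp(+iHt/2) and exp(-iHt/2) gives the ancilla
# the same relative phase e^{-iEt} as the standard controlled-exp(-iHt),
# while each branch only evolves for half the time.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
t = 0.83
E, V = np.linalg.eigh(H)
psi = V[:, 1]                                     # an eigenstate with energy E[1]

def U(tau):                                       # exp(-i H tau)
    return (V * np.exp(-1j * E * tau)) @ V.conj().T

def relative_phase(U0, U1):
    # Ancilla (|0>+|1>)/sqrt(2); branch |0> gets U0, branch |1> gets U1.
    # Phase estimation reads out the phase between the two branches.
    return (psi.conj() @ (U1 @ psi)) / (psi.conj() @ (U0 @ psi))

standard = relative_phase(np.eye(4), U(t))
halved = relative_phase(U(-t / 2), U(t / 2))
assert np.allclose(standard, halved)
assert np.allclose(standard, np.exp(-1j * E[1] * t))
print("same relative phase; evolution time per branch halved")
```

Only the relative phase between the two ancilla branches is observable, which is why the symmetric $\pm Ht/2$ split is interchangeable with the standard controlled evolution.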
First, let us review the usual method of making circuits controlled to implement phase estimation, and then we will explain how to do it for the modification considered here. As an example, let us consider a circuit to implement a hopping term, as in Fig. 9. The usual technique (see references before) to make this circuit controlled (so that if the ancilla is $|0\rangle$ then the identity operation is performed, while if the ancilla is $|1\rangle$ then evolution under the hopping term is performed) is to modify the two $R_z(\theta)$ gates, making them both controlled by an ancilla. A controlled rotation by angle $\theta$ implements no rotation if the ancilla is in the $|0\rangle$ state and implements a rotation by angle $\theta$ if the ancilla is in the $|1\rangle$ state. Different quantum computing architectures may make different elementary gates available. If we have access to arbitrary $Z$ rotations by angle $\theta$, and to Clifford operations, then the controlled rotation by angle $\theta$ is implemented by rotating the target by angle $\theta/2$, then applying a controlled NOT from the ancilla to target, then rotating the target again by angle $-\theta/2$, then applying the controlled NOT again. The net rotation is $\theta/2 - \theta/2 = 0$ if the ancilla is in the $|0\rangle$ state and is $\theta/2 + \theta/2 = \theta$ if the ancilla is in the $|1\rangle$ state. Thus, the controlled rotation is implemented by doing two arbitrary angle rotations, and additionally applying two Clifford operations, for a total of four gates given the gate set mentioned above.

A crucial point is that the only gates which need to be made controlled are the $R_z(\theta)$ gates. The other gates in Fig. 9 do not need to be modified in any way. This property still holds when we consider our proposed method of implementing evolution by either $\exp(iHt/2)$ or $\exp(-iHt/2)$: we need only modify the $R_z(\theta)$ gates to implement a rotation by either a positive or negative angle; we term this a "uniformly controlled rotation": a rotation by angle $+\theta$ if the ancilla is in the $|0\rangle$ state and a rotation by angle $-\theta$ if the ancilla is in the $|1\rangle$ state. This can be done by applying a controlled NOT from the ancilla to target, then rotating the target by angle $\theta$, then again applying a controlled NOT from the ancilla to target. This requires one arbitrary angle rotation and two Clifford operations.

Thus, we find that the total number of gates required to implement the appropriately controlled version of Fig. 9 has been reduced, as one fewer arbitrary rotation is required per controlled rotation, while the same Clifford operations are required. So, the depth of a single Trotter step is slightly reduced. Since the number of Trotter steps has been halved, we find indeed a slightly larger than twofold reduction in depth, as claimed. Further, the number of arbitrary rotations is reduced fourfold. We have described this construction for the hopping term; however, for any term, the usual technique is to replace arbitrary single-qubit rotations by controlled rotations, while we suggest instead replacing them with uniformly controlled rotations.

B. Annealing Time for Free Fermions
To make some estimate of the cost of adiabatically preparing a good approximation to the ground state, it is worth considering the case of free fermions. For any system of free fermions with translation symmetry and k sites per unit cell, the annealing problem decouples into many different problems, one for each Fourier mode, each problem describing annealing of a k-dimensional subspace. For k = 2, this is a Landau-Zener problem. For a single such Landau-Zener problem, the annealing time is expected to scale as 1/∆², where ∆ is the gap of the Hamiltonian. However, we have many Landau-Zener problems in parallel, and we want all Fourier modes to reach the ground state. The problem is least serious if the Fermi surface consists just of isolated points (such as in one dimension or for a semimetal in higher dimensions), as there are only a few Fourier modes with small gap. For systems with a Fermi surface, the density of states at the Fermi surface is larger, meaning that there are more modes with small gap. However, the probability of a diabatic transition for a Landau-Zener system is exponentially small in the ratio of the gap squared to the Landau-Zener velocity. Hence, even if there are a large number of modes with gap ∆, only a logarithmic slowdown in the adiabatic velocity should be required to have a small probability of a diabatic transition. However, one must also worry about effects due to the endpoints of the evolution. See, however, Sec. V C for a detailed discussion of improved annealing strategies that overcome some of the effects of the endpoints.

To put some numbers onto these generalities, we considered a system of spinless free fermions at half-filling in one dimension with periodic boundary conditions (this of course is the most advantageous case, with only two modes with minimum gap).
For each Fourier mode, we compute the evolution using numerically exact techniques (this is a two-dimensional quantum mechanics problem); to determine the probability of the entire system being in the ground state, we multiply the probabilities for each mode. We follow the annealing path

H_s = Σ_{i even} ( c†_i c_{i+1} + h.c. ) + s Σ_{i odd} ( c†_i c_{i+1} + h.c. ),  (20)

starting at a fully dimerized state at s = 0 and moving to a uniform state at s = 1, where c†_i (c_i) creates (annihilates) a spinless fermion on site i. We followed a uniform annealing schedule and used a time discretization with negligible Trotter error. We chose sizes which were not a multiple of 4 to avoid having an exact zero mode.

For a system of 1002 sites, with an annealing time t = 1000, the probability of ending in the ground state is 0.…. Thus, the expected time to end in the ground state is roughly 2000, plus roughly 2 additional phase estimation steps on average. Reducing the annealing time to t = 500, the probability of ending in the ground state is 0.…, taking slightly less expected annealing time, but more phase estimation steps on average. Increasing to 2002 sites, the situation is worse. For t = 1000, the probability of ending in the ground state is 0.…, so the expected time is over 7500, plus over 7 phase estimation steps on average. Because the system is translationally invariant with period 2 for all s, the probability of ending in the ground state is a product over momentum vectors of the probability of ending in the ground state in each momentum vector. Each of these probabilities can be calculated by evolving a 2-by-2 matrix. In all cases, the dominant contribution to the smallness of the probability to end in the ground state was from the lowest energy mode.
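The per-mode calculation can be sketched in a few lines. The sketch below assumes a two-band Bloch Hamiltonian with intra-cell hopping 1 and inter-cell hopping s (one convention consistent with Eq. (20); the paper's exact basis and the quoted probabilities are not reproduced here):

```python
import numpy as np

def mode_hamiltonian(k, s):
    # Two-band Bloch Hamiltonian for the dimerized chain of Eq. (20):
    # intra-cell hopping of strength 1, inter-cell hopping ramped with s.
    # (An illustrative convention; the paper's basis may differ.)
    off = 1.0 + s * np.exp(1j * k)
    return np.array([[0.0, off], [np.conj(off), 0.0]])

def ground_state_probability(n_sites, T, n_steps=2000):
    """Probability that every Fourier mode ends in its lower band after a
    uniform annealing schedule s = t/T, evolving each 2x2 mode exactly
    over piecewise-constant time slices."""
    n_cells = n_sites // 2
    total = 1.0
    dt = T / n_steps
    for k in 2 * np.pi * np.arange(n_cells) / n_cells:
        w, v = np.linalg.eigh(mode_hamiltonian(k, 0.0))
        psi = v[:, 0].astype(complex)            # start in the lower band
        for step in range(n_steps):
            s = (step + 0.5) / n_steps           # midpoint schedule value
            w, v = np.linalg.eigh(mode_hamiltonian(k, s))
            psi = (v * np.exp(-1j * w * dt)) @ (v.conj().T @ psi)
        w, v = np.linalg.eigh(mode_hamiltonian(k, 1.0))
        total *= abs(v[:, 0].conj() @ psi) ** 2  # overlap with final lower band
    return total
```

For a tiny chain a slow sweep ends in the ground state with probability near one, while a fast sweep leaks population into the upper band, mode by mode.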
Thus, small changes in the low energy density of states can lead to dramatic changes in the success probability.

For 1002 sites, the lowest eigenvalue of the free fermion hopping matrix is 0.…, so the gap ∆ of the Hamiltonian is twice that. Hence, in this case, the phase estimation time could be much smaller than the annealing time.

Since rather large Trotter time steps of order 0.25 are sufficient to obtain reasonable overlap with the ground state, we find that about 10⁴ steps are sufficient to prepare the ground state. Using parallel circuits, each step can be performed with a small circuit depth that increases only logarithmically with the system size. Hence a parallel circuit depth of about one million gates is sufficient to prepare the ground state.

C. Improved Annealing Paths
There are broadly two strategies for reducing the error in the annealing path. The typical objective in such optimizations is to find a function f(s) that satisfies f: [0,1] ↦ [0,1] with f(0) = 0 and f(1) = 1, such that the diabatic leakage out of the ground state caused by

U_diab := e^{iT ∫₀¹ E(s) ds} 𝒯 e^{−iT ∫₀¹ H(f(s)) ds}

is minimized for a fixed value of T. Here we use the convention that 𝒯 is the time ordering operator, T is the total evolution time allotted, and s = t/T is the dimensionless time; the phase factor is included to ensure that lim_{T→∞} U_diab = U_adiab is well defined.

The first strategy is known as local adiabatic interpolation. The idea of local adiabatic interpolation is to choose f(s) to increase rapidly when the eigenvalue gap of H(f(s)) is large and increase slowly when the gap is small. This strategy can substantially reduce the time required to achieve a fixed error tolerance, but does not improve the scaling of the diabatic errors in the state preparation, which scale as O(1/T) for Hamiltonians that are sufficiently smooth.

The second strategy, known as boundary cancellation, is often diametrically opposed to local adiabatic optimization. The idea behind it is that instead of moving at a speed designed to minimize diabatic transitions, you choose a path designed to maximize cancellations between them. There are several strategies for exploiting this, but the simplest is to choose f(s) such that ∂_s^k f(s)|_{s=0,1} = 0 for k = 1, …, m. Then if f(s) is at least m+2 times differentiable and H(f(s)) does not explicitly depend on T, the error in the adiabatic approximation is at most

| max_s ‖∂_s^{m+1} H(f(s))‖ / ( ∆(s)^{m+2} T^{m+1} ) | + O(1/T^{m+2}),  (21)

where ∆(s) is the gap at the given value of s. Equation (21) therefore implies that if

f(s) = ∫₀^s y^m (1−y)^m dy / ∫₀¹ y^m (1−y)^m dy,  (22)

then the upper bound on the asymptotic scaling is improved from O(1/T) to O(1/T^{m+1}).
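The schedule of Eq. (22) is easy to tabulate numerically. The sketch below (the helper name `schedule` is ours) evaluates it by trapezoidal quadrature and illustrates the flat endpoints responsible for the boundary cancellation:

```python
import numpy as np

def schedule(s, m):
    """Boundary-cancellation schedule of Eq. (22):
    f(s) = Int_0^s y^m (1-y)^m dy / Int_0^1 y^m (1-y)^m dy,
    evaluated by trapezoidal quadrature on a fine grid."""
    y = np.linspace(0.0, 1.0, 100001)
    w = y**m * (1 - y)**m
    cdf = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2) * (y[1] - y[0])))
    cdf /= cdf[-1]                     # normalize so that f(1) = 1
    return np.interp(s, y, cdf)
```

Because the integrand vanishes to order m at both endpoints, the first m derivatives of f vanish at s = 0 and s = 1, which is exactly the condition used above.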
Furthermore, if an adiabatic error of δ is desired, then it suffices to take

T = O( ( max_s ‖∂_s^{m+1} H(f(s))‖ / ( ∆(s)^{m+2} δ ) )^{1/(m+1)} ).  (23)

This implies that even in the limit of small δ the cost of adiabatic state preparation is not necessarily prohibitive, because m can be increased as δ shrinks to achieve improved scaling with the error tolerance. Furthermore, the scaling of the error with the minimum gap becomes near-quadratically smaller than would be otherwise expected (for δ sufficiently small).

The above argument only discusses the scaling of the evolution time under the assumption that m is fixed. If m grows as δ shrinks, then there may be m-dependent prefactors that are ignored in the above analysis. If the Hamiltonian is analytic in a strip about [0,1] and it remains gapped throughout the evolution, then Lidar et al. show that, for fixed T, the error grows at most polynomially with m. Calculus then shows that the optimal value of m scales logarithmically with the error tolerance, and hence such contributions are subpolynomial.

These two strategies are often mutually exclusive when the avoided crossing does not occur near the boundary, because if H(s) is analytic then Ḣ(s) will have to be larger away from the boundary to compensate for the fact that it is nearly constant in the vicinity of the boundary. This can cause the Hamiltonian to move rapidly through the avoided crossing, which increases the annealing time required. However, the minimum gap often occurs at the end of the evolution when transforming from the initial Hamiltonian (such as the free fermion model, plaquette preparation, or other tractable approximation) to the interacting theory. This means that the two strategies are often compatible here and that boundary cancellation methods will often be well suited for high-accuracy state preparation. Further adiabatic optimizations could also be achieved by altering (22) to approximate the local-adiabatic path in the region away from s = 0, 1.

An important quantity is the total evolution time T needed to ensure U_adiab ≈ U_diab, since it dictates the resources needed to prepare the ground state ψ within a fixed error by adiabatic evolution starting from the free fermion ground state ψ_ff. A related problem is the problem of implementing the projector onto the ground state, P = |ψ⟩⟨ψ|, within a fixed error; we will wish to be able to do this for a method described in Sec. VII B. Given the projector P_ff = |ψ_ff⟩⟨ψ_ff|, we have P = U_adiab P_ff U†_adiab; we describe later how to measure P_ff, and so by conjugating this measurement by U_adiab we can measure P.

At first glance, the use of a Trotter decomposition may appear to be problematic for adiabaticity, because the time-dependent Hamiltonian that describes the decomposition is discontinuous.
Such discontinuities are not actually problematic, as can be seen by using the triangle inequality

|(U_adiab − U^{TS}_diab)|ψ_ff⟩| ≤ ‖U_adiab − U^{TS}_diab‖ ≤ ‖U_adiab − U_diab‖ + ‖U_diab − U^{TS}_diab‖,  (24)

where U^{TS}_diab is a quantum circuit approximation to the finite time evolution U_diab. This shows that the error in the state preparation is at most the sum of the adiabatic error and the error in simulating the adiabatic evolution. We have a similar bound for the error in measuring P:

‖U_adiab P_ff U†_adiab − U^{TS}_diab P_ff U^{TS†}_diab‖ ≤ 2 ( ‖U_adiab − U_diab‖ + ‖U_diab − U^{TS}_diab‖ ).  (25)

In order to make the entire simulation error at most δ, it suffices to set both sources of error to δ/2. This creates an issue because the cost of simulation scales at least linearly with T, which in turn scales with 1/δ. There are a wide array of different Trotter-based formulas that can be used in the simulation, but in all such cases the Trotter number (and in turn the simulation cost) scales, for integer k > 0, as T^{1+1/(2k)}/δ^{1/(2k)}, which using (21) is O(δ^{−(2k+m+2)/(2k(m+1))}). For any fixed m, the value of k that leads to the best scaling with δ can be found by numerical optimization. However, if we approximate this optimal value by taking 2k = m for even m, then the cost of the Trotterized simulation is O(δ^{−2/m}), which leads to subpolynomial (but not polylogarithmic) scaling of the total cost of simulation in 1/δ if 2k = m and the Hamiltonian is sufficiently smooth, because the cost of a Trotterized simulation grows exponentially with k, unlike the error in the adiabatic approximation.

This shows that the cost T of implementing P or preparing ψ within error δ is sub-polynomial in 1/δ. Since O(ε^{−2} log(1/ε)) repetitions of this circuit are needed, and since errors are subadditive, it follows that taking δ ∝ ε³/log(1/ε) suffices to make the total error at most ε. Thus the cost of implementing P scales as ε^{−2−o(1)}, where the o(1) term is the amalgamated cost of the adiabatic preparation and the logarithmic term from phase estimation.

It also should be noted that high-order Trotter-Suzuki formulas may not be needed in practice, as numerical evidence suggests that small Trotter numbers typically suffice for the small Hubbard models that we have considered. This means that low-order methods may often be preferable to their higher-order brethren. Tight error bounds for the second order Trotter-Suzuki formula are given in the appendix.

VI. MEASURING OBSERVABLES
In this section we will discuss the measurement of interesting physical observables. The total energy, which could be measured by phase estimation, is the least interesting quantity. We will instead focus on densities and density correlations, kinetic energies, Green's functions and pair correlation functions.

Such computations will allow us to understand the physics of the ground state. For instance, if the ground state is superconducting, then we can compute the correlation function of the pair field by the methods described in this section. Its long-distance behavior determines the condensate fraction. We can also compute the expectation value of the kinetic energy; its variation with doping can help determine whether the system becomes superconducting as a result of kinetic energy gain compared to the non-superconducting state. Finally, more complex correlation functions, which would be very difficult, if not impossible, to determine in experiments, could determine how these superconducting properties vary in response to perturbations that enhance or suppress effects such as spin or charge fluctuations.
A. Local observables and equal-time correlations
1. Measuring the density, double occupancy and spin and density correlations
The densities n_{i,σ} are trivially measured by measuring the value of the qubit corresponding to the spin-orbital (i, σ). Similarly, double occupancies n_{i,↑} n_{i,↓} can be determined by measuring two qubits, while density correlation functions

n_i n_j = (n_{i,↑} + n_{i,↓})(n_{j,↑} + n_{j,↓})  (26)

and spin correlation functions

S^z_i S^z_j = ¼ (n_{i,↑} − n_{i,↓})(n_{j,↑} − n_{j,↓})  (27)

can be determined directly by measuring the values of four qubits.

By simultaneously measuring all qubits in the computational basis (eigenvectors of the Pauli-Z operator) we can determine density, double occupancy and all spin and density correlations. As an example, suppose we have recorded a sequence of measurement outcomes over several runs (for each run, for every qubit we measure the Pauli-Z operator and we record the measurement outcomes). Given this data, suppose we now wish to estimate a correlation function such as ⟨ψ|n_{i,↑} n_{j,↑}|ψ⟩ for a pair of sites i ≠ j. This is equal to the expectation value of ((Z_{i,↑}+1)/2)((Z_{j,↑}+1)/2). Given the outcomes of the measurements of Z_{i,↑}, Z_{j,↑}, we simply add 1 to each measurement, multiply the results, and divide by four. We then average this over runs. For example, if we consider a two-site Hubbard model at half-filling and U = 0, we will find that the two measurement outcomes Z_{i,↑} and Z_{j,↑} are perfectly anti-correlated (since there is only one electron with spin up, which has equal probability to be on either of the two sites), and so the average of the product will not equal the product of the averages.
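The post-processing just described is a few lines of code. A sketch (the helper name `correlator_from_runs` is ours), assuming outcomes are recorded as ±1 values, one row per run:

```python
import numpy as np

def correlator_from_runs(z_records, i, j):
    """Estimate <n_i n_j> from recorded Pauli-Z outcomes (one row of ±1
    values per run), using n = (Z + 1)/2 as in the text: add 1 to each
    outcome, multiply, divide by four, then average over runs."""
    z = np.asarray(z_records, dtype=float)
    return float(np.mean((z[:, i] + 1) * (z[:, j] + 1) / 4))
```

For the two-site U = 0 example, perfectly anti-correlated records give an estimate of 0 for ⟨n_{i,↑} n_{j,↑}⟩ even though each single-site average is 1/2, so the product of averages (1/4) differs from the average of the product.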
2. General strategy for other observables
Next, we note that there is a general strategy used to measure the expectation value of any unitary operator U, assuming that we can build a circuit that implements a controlled version of this unitary, controlled by some ancilla. Namely, we apply a one-bit phase estimation using the phase estimation circuit of Fig. 12. This is a standard trick; see for example Fig. 9 in Ref. 15. Since we have circuits that implement unitary evolution under various Hamiltonian terms, this enables us to measure these terms.

FIG. 12. General phase estimation circuit to compute the expectation value of any unitary which can be given as a controlled black box. The Z(θ) gate produces a rotation about the Z axis by angle θ; by varying this, real and imaginary parts of the expectation value can be measured.

For example, to measure a term c†_{p,σ} c_{q,σ}, we measure the expectation value of the unitary exp[−iθ(c†_{p,σ} c_{q,σ} + h.c.)]. Since the operator c†_{p,σ} c_{q,σ} + h.c. has eigenvalues −1, 0, +1, the most efficient results are obtained from the phase estimation when we choose θ = π (which perfectly distinguishes in a single measurement between eigenvalue 0 and eigenvalues ±1) or θ = π/2 (which distinguishes eigenvalue +1 from −1).
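A statevector simulation makes the one-bit phase estimation concrete. The sketch below (our helper `hadamard_test_p0`, under the gate conventions assumed in the comments) returns the probability of reading the ancilla as |0⟩:

```python
import numpy as np

def hadamard_test_p0(U, psi, phi=0.0):
    """Statevector simulation of the one-bit phase estimation circuit of
    Fig. 12: H on the ancilla, controlled-U, Z(phi) on the ancilla, H,
    then measure.  Under the conventions assumed here it returns
    P(ancilla = 0) = (1 + Re[e^{i phi} <psi|U|psi>]) / 2."""
    d = len(psi)
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)  # |0>|psi>
    Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Zphi = np.diag([1.0, np.exp(1j * phi)])
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])                      # controlled-U
    on_anc = lambda g: np.kron(g, np.eye(d))
    state = on_anc(Had) @ state
    state = CU @ state
    state = on_anc(Zphi) @ state
    state = on_anc(Had) @ state
    return float(np.linalg.norm(state[:d]) ** 2)
```

Setting phi = 0 probes the real part of ⟨ψ|U|ψ⟩ and phi = π/2 probes the imaginary part, which is how varying Z(θ) in Fig. 12 recovers the full complex expectation value.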
3. The Stabilizer Strategy
The stabilizer strategy is a method for measuring observables of the form exp[−iθ(O₁ + O₂ + …)] or of the form O₁ + O₂ + …, where each O_i is a product of some number of Pauli operators, and [O_i, O_j] = 0. This form includes many operators of interest to us, including terms in the Hamiltonian such as the kinetic energy, as well as other terms such as pair correlations. We call this the stabilizer strategy because of our use of products of Paulis which commute with each other; no assumption is made that any state is a stabilizer state.

If we can measure each O_i, then we succeed in measuring the desired operator. Since they commute, we may measure them in any order without needing to recreate the state after measurement. As each O_i is a product of Paulis, there is some unitary in the Clifford group which maps it onto a Pauli Z operator on some given qubit. Hence, we can measure that O_i by applying that Clifford unitary, then doing a Z-basis measurement, and finally undoing the Clifford unitary. Applying this procedure to terms in the Hamiltonian, for which we have previously given circuits to implement exp[−iθ(O₁ + O₂ + …)] using sequences of controlled Z-basis rotations conjugated by Clifford gates, the measurement circuit amounts to replacing each controlled Z-basis rotation in the evolution circuit with a Z-basis measurement. For example, if we apply this procedure to the H_pq unitary evolution shown in Fig. 9, we arrive at the circuit shown in Fig. 13. Summing the values of the two Z-measurements (treating the answers as ±1) and dividing by two gives the measured value of the hopping operator. The same measurements are needed to measure the pairing operator for ∆ real since, as we have noted, the pairing operator for ∆ real is implemented by the same unitary as the hopping operator, up to a sign change in the second controlled rotation. Thus, the two measurements we have given simultaneously measure the two commuting operators c†_{p,σ} c_{q,σ'} + h.c. and c†_{p,σ} c†_{q,σ'} + h.c. (to measure the pairing operator, one must instead consider the difference of the two Z measurements).
4. The FSWAP Strategy for the kinetic energy and Green's functions
For the kinetic energies we will have to measure one-body Green's functions c†_{i,σ} c_{j,σ} + c†_{j,σ} c_{i,σ}. To do this we first swap qubit j with its neighboring qubits until it is neighboring qubit i in the normal ordering and is at one of the positions k = i ± 1. Taking into account the fermionic nature of the electrons, we need to use a fermionic swap gate FSWAP with matrix

( 1  0  0  0 )
( 0  0  1  0 )
( 0  1  0  0 )
( 0  0  0 −1 ).  (28)

Expressed in standard gates (Eq. (29)), the circuit for F is built from CNOTs and Hadamards; equivalently, FSWAP is an ordinary SWAP combined with a controlled-Z, which supplies the −1 sign on the doubly occupied state |11⟩.

We next change the basis to be able to better measure the kinetic energy on the bond, which is then −t_pq (c†_{p,σ} c_{q,σ} + c†_{q,σ} c_{p,σ}). This basis change is a simple circuit, a Hadamard and a CNOT (Eq. (30)), giving the unitary

( 1/√2   0     1/√2   0   )
( 0     1/√2   0     1/√2 )
( 0     1/√2   0    −1/√2 )
( 1/√2   0    −1/√2   0   ).  (31)

An example circuit containing several measurements in parallel is shown in Fig. 14. We may then measure the Z eigenvalues of the two qubits and call them s₁ and s₂. If we measure s₂ = −1, the outcome s₁ gives the value of the Green's function; the estimator for c†_{p,σ} c_{q,σ'} + c†_{q,σ'} c_{p,σ} is

s₁ δ_{s₂,−1},  (32)

and for the kinetic energy −t_pq (c†_{p,σ} c_{q,σ} + c†_{q,σ} c_{p,σ}) the estimator is

−t_pq s₁ δ_{s₂,−1}.  (33)

Note that if we have N qubits we can do N/2 such measurements in parallel. Similarly, we obtain the estimator for the pairing operator c†_{p,σ} c†_{q,σ'} + h.c. as s₁ δ_{s₂,+1}.
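This bookkeeping can be verified directly. The numpy sketch below assumes the basis change of Eq. (31) is the Bell-type unitary built from a Hadamard and a CNOT (our reading of the circuit in Eq. (30)); it checks that this change of basis simultaneously diagonalizes the hopping and real-∆ pairing operators with the eigenvalues read off by the estimators above:

```python
import numpy as np

s2 = 1 / np.sqrt(2)
Had = np.array([[s2, s2], [s2, -s2]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
FSWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                  [0, 1, 0, 0], [0, 0, 0, -1]], dtype=float)  # Eq. (28)

# Hopping and (real-Delta) pairing operators on two adjacent Jordan-Wigner
# qubits, in the basis |00>, |01>, |10>, |11>
hop = np.zeros((4, 4)); hop[1, 2] = hop[2, 1] = 1.0     # c†_p c_q + h.c.
pair = np.zeros((4, 4)); pair[0, 3] = pair[3, 0] = 1.0  # c†_p c†_q + h.c.

# Basis change: Hadamard on the first qubit followed by a CNOT
# (our reconstruction of the circuit in Eq. (30))
U = CNOT @ np.kron(Had, np.eye(2))

hop_rot = U.T @ hop @ U     # expected: diag(0, +1, 0, -1)
pair_rot = U.T @ pair @ U   # expected: diag(+1, 0, -1, 0)
```

In the rotated basis the hopping eigenvalue is s₁ when s₂ = −1 (and 0 otherwise), and the pairing eigenvalue is s₁ when s₂ = +1, matching Eqs. (32)-(33). FSWAP squares to the identity, as a swap gate should.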
5. The FSWAP Strategy for the pair correlation functions
Pair correlation functions in their most general form are built from the general 2-body terms c†_{i,σ} c†_{j,−σ} c_{k,−σ'} c_{l,σ'}. This expectation value can be measured after fermion-swapping the qubits to be adjacent and then performing two Green's function measurements. We first use FSWAP gates to move the qubits i, k, j, and l to four adjacent positions which we will label 1 to 4. We then apply the same circuit as for the Green's function measurements to both pairs (1,2) and (3,4), and measure the Z component of each qubit m (= 1, 2, 3, or 4) as s_m. The expectation value is then expressed as

⟨c†_{i,σ} c_{k,σ} c†_{j,σ'} c_{l,σ'}⟩ = ⟨s₁ s₃ δ_{s₂,−1} δ_{s₄,−1}⟩.

Care must be taken if two indices are the same, e.g. if i = k in the above term. In that case things simplify, since

⟨c†_{i,↑} c_{i,↑} c†_{j,↓} c_{l,↓}⟩ = ⟨n_{i,↑} c†_{j,↓} c_{l,↓}⟩  (34)

and the measurement just becomes

⟨n_{i,↑} c†_{j,↓} c_{l,↓}⟩ = ⟨n_{i,↑} s₃ δ_{s₄,−1}⟩.  (35)

Similarly, if i = k and j = l we get

⟨c†_{i,↑} c_{i,↑} c†_{j,↓} c_{j,↓}⟩ = ⟨n_{i,↑} n_{j,↓}⟩  (36)

and the measurement is straightforward.

FIG. 13. Circuit for the H_pq measurement.

FIG. 14. Example of a circuit for parallel measurements of a number of Green's function values between different sites.

For the Hubbard model we are interested in singlet pairing, and the specific terms we want are

∆_{ij,kl} = ½ ( c†_{i,↑} c†_{j,↓} − c†_{i,↓} c†_{j,↑} )( c_{k,↑} c_{l,↓} − c_{k,↓} c_{l,↑} ).  (37)

Multiplying this out and sorting by spin we get

∆_{ij,kl} = −½ ( c†_{i,↑} c_{k,↑} c†_{j,↓} c_{l,↓} + c†_{i,↑} c_{l,↑} c†_{j,↓} c_{k,↓} + c†_{j,↑} c_{k,↑} c†_{i,↓} c_{l,↓} + c†_{j,↑} c_{l,↑} c†_{i,↓} c_{k,↓} ),  (38)

which can be measured by individually measuring all terms.

Note that even though there may be N⁴ different choices for the four indices, we do not need to measure all of these four-point functions. As pairs are expected to be tightly bound in the Hubbard model, choosing i close to j and k close to l, but choosing the pairs (i,j) and (k,l) at as large a distance as possible on a given lattice, is sufficient to check for long-range pair correlations. In fact, we can measure c†_{i,↑} c†_{j,↓} + h.c. (or c†_{i,↑} c†_{j,↓} − h.c.) for N/2 pairs (i,j) simultaneously, because the operators commute; then, averaging the result over pairs extracts the long wavelength pair correlation. To measure these with minimum depth, one can use nesting strategies similar to those discussed for measuring hopping terms in Sec. IV B.

B. Dynamic correlation functions and gaps
1. Dynamic response in the time domain
The measurement of dynamic correlation functions

C_{A,B}(t) = ⟨ψ|A†(t) B(0)|ψ⟩ ≡ ⟨ψ|e^{+itH} A† e^{−itH} B|ψ⟩  (39)

for time-independent Hamiltonians such as the Hubbard model is typically presented not in the time domain but, after a Fourier transform, in the frequency domain:

S_{A,B}(ω) = ∫_{−∞}^{∞} dt C_{A,B}(t) e^{iωt}.  (40)

The standard way of measuring them has been explained in several references, most explicitly in Ref. 15. For unitary operators A, B, the product A† e^{+itH} B e^{−itH} is a unitary, and can be measured using the circuit of Fig. 12. One simple improvement is to note that it is not necessary to control the evolutions e^{itH}, e^{−itH}; it suffices to control A†, B. Then, if the ancilla bit is |0⟩, so that the operators A†, B are not applied, the product e^{itH} e^{−itH} is equal to the identity, as desired. The uncontrolled time evolution will require only half as many arbitrary angle rotations as the controlled time evolution for many gate sets (see Sec. V A for a discussion of implementing controlled time evolution).

A further improvement is to note that the final evolution e^{−itH} just gives an overall phase acting on the ground state, and so can be replaced by a Z rotation of the ancilla without being implemented, giving another factor of two reduction in depth. A final improvement combines the idea in Ref. 15 of controlling one operation and anti-controlling another with one of the ideas in Sec. V A of implementing either forward or backward evolution in time. We prepare the ancilla in the state |+⟩. If the ancilla is |0⟩, we implement e^{itH/2} B, and if the ancilla is |1⟩ we implement e^{−itH/2} A. The measurement of the ancilla in the X basis gives the desired expectation value. The controlled time evolution, by either e^{itH/2} or e^{−itH/2} depending on whether the ancilla is |0⟩ or |1⟩, can be done as described in Sec. V A. This requires half the number of time steps required to implement the uncontrolled evolution by e^{itH}, assuming that the same timestep is used for both evolutions; the only additional cost is an additional CNOT gate for each arbitrary angle Z rotation in the evolution, and since these are Clifford gates, the cost is much smaller than the cost of an arbitrary angle rotation for many gate sets.
2. Dynamic response in the frequency domain by phase estimation
A dynamic correlation function of an operator with its adjoint, such as

C_A(t) = ⟨ψ|A†(t) A(0)|ψ⟩ ≡ ⟨ψ|e^{+itH} A† e^{−itH} A|ψ⟩,  (41)

can be simplified after a Fourier transform into the frequency domain:

S_A(ω) = ∫_{−∞}^{∞} dt e^{iωt} C_A(t)  (42)
       = ∫_{−∞}^{∞} dt e^{iωt} ⟨ψ|e^{+itH} A† e^{−itH} A|ψ⟩
       = ∫_{−∞}^{∞} dt e^{iωt} Σ_n e^{−iE_n t} ⟨ψ|e^{+itH} A†|ψ_n⟩⟨ψ_n|A|ψ⟩
       = ∫_{−∞}^{∞} dt Σ_n e^{i(ω−(E_n−E_0))t} ⟨ψ|A†|ψ_n⟩⟨ψ_n|A|ψ⟩
       = 2π Σ_n |⟨ψ_n|A|ψ⟩|² δ(ω − (E_n − E_0)).

Instead of performing a "simple sampling" — measuring C_A(t) for all times t, then Fourier transforming it and hoping to get these delta-functions resolved — a new and better approach is to do "importance sampling" and measure the energies of the eigenstates |ψ_n⟩ with eigenenergies E_n directly, with the weight |⟨ψ_n|A|ψ⟩|² with which they appear in the above sum.

This can be achieved by phase estimation. If we apply phase estimation to the state A|ψ⟩, then eigenstate |ψ_n⟩ will be picked with precisely the weight |⟨ψ_n|A|ψ⟩|², and a histogram of the measured energies thus directly measures the dynamic structure factor S_A(ω).

The retarded correlation function

χ_A(t) = θ(t) ⟨ψ|[A†(t), A(0)]|ψ⟩  (43)

can then be obtained as follows. First, we write the retarded correlation function in terms of its real and imaginary parts, χ_A(ω) = χ'_A(ω) + iχ''_A(ω). The imaginary part, χ''_A(ω), is then obtained from the fluctuation-dissipation theorem: χ''_A(ω) = (1 − e^{−βω}) S_A(ω); at zero temperature, β = ∞, and the second term in parentheses vanishes. The real part χ'_A(ω) is then obtained by the Kramers-Kronig relation:

χ'_A(ω) = P ∫_{−∞}^{∞} (dω'/π) χ''_A(ω')/(ω' − ω).  (44)

By choosing A = Z_{p,σ} we can measure the local dynamical spin density correlations. Explicitly, we find that

⟨Z_{p,σ}(0) Z_{p',σ'}(t)⟩ = 4⟨n_{p,σ}(0) n_{p',σ'}(t)⟩ − 2⟨n_{p,σ}(0)⟩ − 2⟨n_{p',σ'}(t)⟩ + 1.  (45)

Since ⟨n_{p',σ'}(t)⟩ = ⟨n_{p',σ'}(0)⟩, the last three terms do not depend on t and only contribute a constant, which does not give any contribution to the interesting finite-ω structure factor. Local dynamical charge and spin correlations are straightforwardly calculated from the spin density correlations. Similarly, local single-particle excitations can be measured by choosing A = c†_{p,σ} + c_{p,σ}, which under the Jordan-Wigner transformation is an X operator preceded by a string of Z operators on the qubits earlier in the ordering.
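As a cross-check of this spectral decomposition, the exact-diagonalization sketch below (a toy Hamiltonian of our own choosing) computes the excitation energies and the weights |⟨ψ_n|A|ψ⟩|² with which phase estimation applied to A|ψ⟩ would sample them:

```python
import numpy as np

def structure_factor_samples(H, A):
    """Spectral data of S_A(w) = 2π Σ_n |<ψ_n|A|ψ>|² δ(w - (E_n - E_0)) for a
    small Hamiltonian, by exact diagonalization.  Phase estimation applied to
    the state A|ψ> samples the energies E_n with exactly these weights."""
    E, V = np.linalg.eigh(H)
    psi0 = V[:, 0]                      # ground state
    amps = V.conj().T @ (A @ psi0)      # <ψ_n|A|ψ> for all n
    return E - E[0], np.abs(amps) ** 2
```

The weights sum to ⟨ψ|A†A|ψ⟩, which is the normalization of the state fed into phase estimation.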
VII. NON-DESTRUCTIVE MEASUREMENTS

For the small systems that we have explicitly simulated, the time needed to adiabatically prepare a good estimate of the ground state is small compared to the phase estimation time. However, this will change as one increases the system size, and the time to adiabatically prepare the ground state will ultimately dominate, at least if one uses an annealing path that linearly interpolates between the initial and final Hamiltonian. The reason is that the time to adiabatically prepare the ground state is expected to scale as ∆^{−2} for a system with spectral gap ∆, while resolving energies to accuracy ∆ with phase estimation takes time 1/∆, and thus for small ∆ the phase estimation is faster. We note that the improved higher-order annealing paths in Sec. V C do alleviate this problem. This problem is also slightly alleviated by the fact that we are content to prepare by annealing a state with significant overlap (say, 1/2) with the ground state. Once we have prepared the ground state ψ by a combination of annealing and phase estimation, it will be desirable to measure properties of the state without destroying it. Here we present two approaches for such non-destructive measurements. Section VII A discusses an approach based on the Hellmann-Feynman theorem, which not only is non-destructive but also scales better in the error than the simple way of repeatedly preparing the ground state and measuring. An alternate approach based on "recovering" the state after measurements may be more useful in some circumstances and is discussed in Sec. VII B.

A. Hellmann-Feynman based approach
According to the Hellmann-Feynman theorem, the derivative of the ground state energy E(λ) with respect to a perturbation λO is just the expectation value of the operator O in the ground state |Ψ₀(λ)⟩:

dE(λ)/dλ = (d/dλ) ⟨Ψ₀(λ)|H + λO|Ψ₀(λ)⟩ = ⟨Ψ₀(λ)|O|Ψ₀(λ)⟩.  (47)

An alternative, and superior, way of measuring expectation values of arbitrary observables O is therefore to adiabatically add a small perturbation λO to the Hamiltonian H. This opens a way to measure observables without destroying the ground state wave function |Ψ₀(0)⟩ of H. We adiabatically evolve the ground state wave function |Ψ₀(0)⟩ to |Ψ₀(λ)⟩ by slowly increasing λ to its final (small) value, and then perform another quantum phase estimation to determine the energy E(λ). The ground state expectation value can be estimated through

⟨Ψ₀(0)|O|Ψ₀(0)⟩ = dE(λ)/dλ|_{λ=0} = (E(λ) − E(0))/λ + O(λ).  (48)

For example, to measure a Green's function we add a "hopping" term λ(c†_{p,σ} c_{q,σ} + c†_{q,σ} c_{p,σ}) to the Hamiltonian. In order to obtain an estimate with an error ε, we need to choose λ = O(ε), and then measure the energy to an accuracy λε = O(ε²). Phase estimation then requires time O(ε^{−2}), which scales the same as repeated preparation with destructive measurements and thus only gives a constant improvement.

The scaling can be improved by using a symmetric estimator for the derivative,

⟨Ψ₀(0)|O|Ψ₀(0)⟩ = (E(λ) − E(−λ))/(2λ) + O(λ²),  (49)

which reduces the approximation error to O(λ²) and thus allows a larger value of λ = O(ε^{1/2}), which requires phase estimation only to an accuracy of O(ε^{3/2}), and a time scaling as O(ε^{−3/2}).
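As a purely classical illustration of Eq. (49) (a toy matrix example of our own, not a Hubbard Hamiltonian), the symmetric finite difference of the ground state energy reproduces the ground state expectation value up to O(λ²):

```python
import numpy as np

def ground_energy(H):
    return np.linalg.eigvalsh(H)[0]

# Toy Hermitian matrices of our own choosing
H = np.array([[0.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 2.0]])
O = np.array([[1.0, 0.2, 0.0],
              [0.2, -1.0, 0.0],
              [0.0, 0.0, 0.5]])

lam = 1e-3
# Symmetric estimator of Eq. (49): (E(λ) - E(-λ)) / (2λ) = <Ψ0|O|Ψ0> + O(λ²)
estimate = (ground_energy(H + lam * O) - ground_energy(H - lam * O)) / (2 * lam)

w, V = np.linalg.eigh(H)
exact = V[:, 0] @ O @ V[:, 0]   # direct ground-state expectation value
```

On a quantum computer each `ground_energy` evaluation corresponds to one adiabatic-evolution-plus-phase-estimation run at the shifted coupling, which is what sets the scalings quoted above.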
In general, using a k-th order estimator for the derivative requires O(k) energy measurements, but with an error scaling as O(λ^k) we can choose λ = O(ε^{1/k}), and require time O(k ε^{-1-1/k}). Asymptotically for small ε we can thus estimate the expectation value non-destructively with an effort scaling as O(ε⁻¹ log ε⁻¹), obtaining a near-quadratic speedup.

In the Hellmann-Feynman approach, there may be no need to do an adiabatic evolution. In many cases we could simply start in the state |Ψ₀⟩, then apply a phase estimation to the Hamiltonian H + λO, and then apply another phase estimation to estimate the ground state energy of H. The absolute value squared of the overlap between the ground state of H and the ground state of H + λO is equal to 1 − O(λ²/∆²), and so for λ ≪ ∆ the overlap is close to unity. So, with probability close to 1, the energy given by phase estimation of H + λO will in fact be the desired ground state energy, and with again probability close to 1 the second phase estimation will return to the ground state of H.

B. Non-Destructive Measurements With Quadratic Speedup
This annealing procedure becomes less useful for operators O which require a complicated quantum circuit. Suppose we wish to measure the expectation value of a time-dependent correlation such as O = S_z(t) S_z(0), where t > 0 and S_z(t) = exp(iHt) S_z exp(−iHt). In this case, the quantum circuit to measure O (given below) is much more complicated; thus the Hamiltonian H + εO becomes significantly more costly to do phase estimation on.

In this section, we explain a technique to measure arbitrary projection operators in a non-destructive fashion. We then further explain a quadratic speedup of this approach. Our initial approach will give many independent binary measurements with outcome probability determined by the expectation value of the operator. This will require a number of samples proportional to the square of the inverse error. On the other hand, the improved approach will exploit phase estimation to speed this up quadratically (up to log factors).

As we have explained previously, we have quantum circuits that implement projective measurements of all the operators we are interested in, such as the number operator, the hopping operator, and so on. Further, we have a general technique for implementing a projective measurement of any unitary that can be controlled.
1. Recovery Map
Suppose we wish to implement a projective measurement with k possible outcomes, for some given k. We describe these outcomes by projectors Q_1, ..., Q_k, with Σ_i Q_i = I, and we refer to this measurement as "measuring Q". The ground state ψ₀ can be written as a linear combination of states in the range of each of these projectors:

\[ \psi_0 = \sum_i a_i \phi_i, \tag{50} \]

where

\[ Q_i \phi_i = \phi_i, \tag{51} \]

and ‖φ_i‖ = 1. Assume for the moment that we can also implement a measurement P which projects onto the ground state ψ₀ (we describe how to do this to sufficient accuracy below). Then, the algorithm for nondestructive measurement is: first, begin in the ground state. Then, measure Q. Then, measure P. If the result is that one indeed is in the ground state, then we have restored the ground state; at this point we can either re-measure the projector Q to obtain better statistics, or make some other measurement of a different observable. If instead the measurement of P reveals that we are not in the ground state, we re-measure Q and then re-measure P. We repeat this process of re-measuring Q and re-measuring P until we are in the ground state. The basic idea is similar to that in Ref. 48.

The rest of this subsubsection is focused on calculating the probability of returning to the ground state; most readers may wish to skip to the next subsubsection, which gives the more important quadratic speedup. The main reason for introducing the recovery map in this subsubsection is that it can be used after the quadratic speedup (see below). Let us compute the probability of returning to the ground state in this process. If the initial measurement of Q gives outcome i, then the resulting state is φ_i; since |⟨ψ₀|φ_i⟩|² = |a_i|², we have a probability |a_i|² of returning to the ground state after measuring P.
We now analyze the case that this measurement of P reveals that we are not in the ground state; then the resulting state is equal, up to a phase, to

\[ \frac{1}{\sqrt{1-|a_i|^2}} (1 - P) \phi_i = \frac{1}{\sqrt{1-|a_i|^2}} \phi_i - \frac{\bar a_i}{\sqrt{1-|a_i|^2}} \psi_0. \tag{52} \]

Then, the probability that the subsequent measurement of Q gives outcome j is equal to

\[ \Big| \Big\langle \phi_j \Big| \frac{1}{\sqrt{1-|a_i|^2}} \phi_i - \frac{\bar a_i}{\sqrt{1-|a_i|^2}} \psi_0 \Big\rangle \Big|^2 = \frac{|\delta_{i,j} - a_j \bar a_i|^2}{1-|a_i|^2}, \tag{53} \]

and the resulting state is exactly equal to φ_j. Repeating this, we find that after every measurement of Q with outcome i, the probability that the measurement of P will return to the ground state is |a_i|², while if we do not return to the ground state the probability that the next measurement of Q will give outcome j is given by Eq. (53).

The transitions are then governed by a Markov chain with transition probabilities

\[ T_{i \leftarrow i} = (1 - |a_i|^2)^2 = 1 - 2|a_i|^2 + |a_i|^4, \tag{54} \]
\[ T_{j \leftarrow i} = |a_i|^2 |a_j|^2 \quad (j \neq i), \tag{55} \]
\[ T_{0 \leftarrow i} = |a_i|^2, \tag{56} \]

where T_{j←i} is the probability that, starting in state φ_i, a measurement of P reveals that one is not in the ground state and then a measurement of Q gives outcome j, while T_{0←i} is the probability that, starting in state φ_i, the measurement of P does leave one in the ground state. The initial measurement of Q at the start of the Markov chain gives state φ_i with probability |a_i|². We would like to work out the expected number of measurements before the algorithm terminates back in the ground state.

To do this calculation, we re-interpret the probabilities, saying that with probability 2|a_i|² the system makes some transition; half the time this transition leads to state 0 (the ground state), while the other half of the time the system makes a transition from state i to some state j, with state j chosen with probability proportional to |a_j|². This accounts for the factor −2|a_i|² in Eq. (54), which we interpret as the negative of the probability of there being a transition; note also that j may equal i, so some of the "transitions" do not actually lead to a change in state: this is the factor of |a_i|⁴ in Eq. (54). This re-interpretation only makes sense if |a_i|² ≤ 1/2, of course, but it allows us to compute the expected time exactly, and as the expected time is analytic, the calculation then works for all |a_i|. The advantage of this re-interpretation is that if a transition occurs, and if the transition leads to a state other than 0, then the probability that that state is equal to j is proportional to |a_j|², which is the same probability distribution with which the chain began.

Starting in state i, the expected number of steps before a transition is 1/(2|a_i|²). Thus, averaging over states i with the initial probability distribution |a_i|², the expected number of steps before a transition is k/2, where k is the number of possible outcomes of measurement Q. Half of these transitions return to state 0, while the other half do not, leading to some state j ≠ 0 with probability distribution proportional to |a_j|². Thus, the average number of steps before returning to the ground state is

\[ N_{\mathrm{steps}} = k. \tag{57} \]

Note that both the number of phase estimations required and the number of measurements of Q required are equal to k.
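The expected cost N_steps = k can also be checked directly, without the re-interpretation trick, by solving the linear equations for the expected hitting time of the Markov chain of Eqs. (54)-(56). The sketch below uses random amplitudes a_i as illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
a = rng.normal(size=k) + 1j * rng.normal(size=k)
a /= np.linalg.norm(a)             # amplitudes a_i of psi_0 over the k sectors of Q

# M[i, j] = |delta_ij - a_j conj(a_i)|^2 : probability that, after Q gave i,
# the measurement of P fails AND the next Q measurement gives outcome j
# (this reproduces Eqs. (54) and (55))
M = np.abs(np.eye(k) - np.outer(a.conj(), a)) ** 2

# e_i = expected number of Q measurements until recovery, starting after Q gave i
e = np.linalg.solve(np.eye(k) - M, np.ones(k))

# averaging over the first Q outcome (probability |a_i|^2) gives exactly k
E = np.abs(a) ** 2 @ e
assert np.isclose(E, k)
```

Writing p_i = |a_i|², the linear solve yields e_i = 1/(2p_i) + k/2, whose average over the initial distribution p_i is exactly k, independent of the particular amplitudes.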
2. Quadratic Speedup
Suppose we wish to measure a projector Q. Define the unitaries U = 2Q − 1 and V = 2P − 1. Applying any sequence of the operators P or Q to the ground state ψ₀ gives some state in the space spanned by the non-orthogonal basis vectors ψ₀, Qψ₀. Hence, we will work in this two-dimensional space for our analysis (the degenerate cases Qψ₀ = ψ₀ or Qψ₀ = 0 can be handled easily too, and one can verify that the final result is correct in these cases also). In this subspace, in the non-degenerate case, we can write

\[ V = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{58} \]

\[ U = \begin{pmatrix} 2\cos^2(\theta) - 1 & 2\cos(\theta)\sin(\theta) \\ 2\cos(\theta)\sin(\theta) & 2\sin^2(\theta) - 1 \end{pmatrix}, \tag{59} \]

where the angle θ is such that ⟨ψ₀|Q|ψ₀⟩ = cos²(θ). We emphasize that the above two equations are written in the orthonormal basis ψ₀, Z(Qψ₀ − cos²(θ)ψ₀), where Z is a normalization factor. Thus, the unitary UV restricted to the subspace has eigenvalues

\[ \exp(\pm 2 i \theta). \tag{60} \]

We now build a quantum circuit that adds an additional ancilla that controls both U and V, and hence controls whether or not UV is applied. We then implement phase estimation using this ancilla to determine the eigenvalues of UV. We use the ground state ψ₀ as input to the phase estimation circuit. Let T be the time required to implement UV. Using the ancilla to control (UV)ⁿ, for various powers n, one is able to measure the eigenvalue of UV to accuracy ε in a time proportional to T ε⁻¹ log(ε⁻¹); in this case, we take n of order ε⁻¹.

In fact, UV has two distinct eigenvalues, and the phase estimation will return one of the two answers randomly. However, since the eigenvalues differ only in the sign of the phase, and the desired expectation value ⟨ψ₀|Q|ψ₀⟩ is equal to cos²(θ), both eigenvalues give the same answer for this expectation value.

After implementing this phase estimation to the desired accuracy, we are then left with some state in the given subspace. If we then wish to recover the ground state (to measure some other observable), we can apply the recovery map above by alternating measurements of Q and P until the ground state is restored.
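The algebra of Eqs. (58)-(60) is easy to confirm numerically: restricting UV to the invariant subspace spanned by ψ₀ and Qψ₀ yields eigenvalue phases ±2θ, from which cos²θ = ⟨ψ₀|Q|ψ₀⟩ can be read off, exactly as the phase estimation would. A sketch with a random state and a random projector (all stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                       # stand-in for the ground state psi_0
W = np.linalg.qr(rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3)))[0]
Q = W @ W.conj().T                               # a random rank-3 projector
P = np.outer(psi, psi.conj())                    # projector onto psi_0

U = 2 * Q - np.eye(n)                            # U = 2Q - 1
V = 2 * P - np.eye(n)                            # V = 2P - 1

cos2 = (psi.conj() @ Q @ psi).real               # <psi_0|Q|psi_0> = cos^2(theta)
theta = np.arccos(np.sqrt(cos2))

# orthonormal basis of the invariant subspace spanned by {psi_0, Q psi_0}
b2 = Q @ psi - cos2 * psi
b2 /= np.linalg.norm(b2)
B = np.array([psi, b2])
R = B.conj() @ (U @ V) @ B.T                     # 2x2 restriction of UV

phases = np.angle(np.linalg.eigvals(R))
assert np.allclose(np.sort(np.abs(phases)), [2 * theta, 2 * theta])
# either eigenvalue phase recovers the expectation value
assert np.isclose(np.cos(abs(phases[0]) / 2) ** 2, cos2)
```

Both eigenvalues give the same cos²(phase/2), which is why the random outcome of the phase estimation does not matter.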
3. Measuring P

The above procedure relies on the ability to measure P. The key is to define an operation that will only identify whether or not one is in the ground state, without revealing additional information. If instead the measurement gives several bits of information on the energy of the state, the procedure does not work; intuitively, each φ_i is a coherent superposition of different energy eigenstates, and making this more detailed measurement destroys this information.

We now describe how to do this; we call this procedure a "coherent phase estimation". We have the ability to perform a phase estimation on the Hamiltonian H to determine the energy of a state. This phase estimation is given by a quantum circuit in which several control bits are initialized in the state |0⟩. Then, a Hadamard is applied to each control bit. Then, the control bits are used to control the application of the Hamiltonian for a set period of time. Finally, the Hadamard is re-applied, and the control bits are measured in the computational basis. Then, classical post-processing is applied to extract the energy of the state. This process can be performed in serial (using only one control bit) or in parallel; the parallel approach uses more control bits but reduces the depth of the circuit. The measurement of the energy is not deterministic; however, one can employ a larger number of control bits to increase the accuracy.

To measure P, we use this phase estimation in its parallel form, but do not measure the outcomes of the control bits. We initialize an additional "outcome" bit to |0⟩. We then use a unitary quantum circuit to implement the classical post-processing done to determine the energy, storing that energy value coherently. Then a unitary quantum circuit determines if that energy is near the ground state value (which we have determined in advance by a phase estimation before doing any measurements), and if so, it flips the value of the outcome bit.
Finally, we uncompute, reversing the classical post-processing and phase estimation steps, and then at the end we measure the outcome bit. This procedure is essentially that in Ref. 48. While this procedure does not implement a projective measurement, but rather implements a POVM, if sufficient numbers of control bits are used, the measurement can be arbitrarily close to the desired projective measurement.

Performing this measurement of P does require additional qubits. The number of qubits required scales proportionally to the number of bits of accuracy desired; however, since this accuracy is of order ∆, if ∆ is only polynomially small in the system size then this requires only logarithmically many extra bits. Additionally, the more accurately we implement a projective measurement rather than a POVM, the more control bits are required, but this overhead also only scales logarithmically.

An alternate method of implementing P is to note that we can easily implement the projector onto the ground state of the Hamiltonian at the start of the annealing process if our initial Hamiltonian is of a sufficiently simple form, such as a free fermion Hamiltonian or a Hamiltonian with a product ground state. For a general free fermion Hamiltonian, we can use Givens rotations to rotate to the case that the ground state simply has some spin-orbitals occupied and some empty; we then measure whether we have the desired pattern or not. Call this projector P_ff. Then, if U_adiab is the unitary which describes the approximately adiabatic evolution from the free fermion Hamiltonian to the desired final Hamiltonian, we can approximate P ≈ U_adiab P_ff U†_adiab. The accuracy of this approximation depends upon the annealing path; see the discussion in Sec. V C, which shows that we can use this strategy to achieve a (near) quadratic speedup.
The improved annealing paths discussed there are most useful for the particular approach here, where we need very high accuracy in the annealing; in all other applications in the paper, we need only moderate accuracy in the annealing, sufficient to get a large overlap with the ground state so that phase estimation can then project onto the ground state with reasonably large probability.

VIII. CONCLUSIONS
In this paper we have presented a comprehensive strategy for using quantum computers to solve models of strongly correlated electrons, using the Hubbard model as a prototypical example. Similar strategies can be used for generalizations of the Hubbard model and other discrete quantum lattice models, including, but not limited to, the t-J model, frustrated magnets, and bosonic models coupled to (static) gauge fields, which all suffer from negative sign problems in quantum Monte Carlo simulations on classical computers. We go beyond previous papers that discussed simulations of the Hubbard model on quantum computers by giving complete details of all steps needed to learn about the properties of the Hubbard model.

In particular, we presented a strategy to prepare trial ground states starting from various mean-field states or RVB states on decoupled plaquettes. While finding the ground state of quantum lattice models is QMA-hard, this is a statement about the worst case. We expect that, as experience has shown, many experimentally relevant models have ground states with either no long-range order (so-called quantum liquids) or with broken symmetries that are qualitatively well described by mean-field theories. In fact, if a model has such peculiar properties that it is hard to find its ground state on a quantum computer, then a material described by that model will also have a hard time reaching its ground state. Quick low-temperature thermalization in experiments is thus an indication that the ground state of the model describing its properties should also be easy to prepare on a quantum computer. The opposite is the case, e.g., for (classical) spin glasses, where finding the ground state is NP-hard, but spin glass materials also never thermalize nor reach the ground state.

We have furthermore presented an efficient and deterministic quantum algorithm to prepare arbitrary Slater determinants as initial states, which scales better than the algorithms of Refs. 5 and 15.
This can be used to prepare ground states of various candidate mean-field Hamiltonians, from which an adiabatic evolution to the ground state of the Hubbard model can be attempted.

To implement time evolution under the Hubbard and mean-field Hamiltonians we have given explicit quantum circuits for all terms, and discussed the size of the Trotter time step required to achieve sufficiently small errors. We discuss how time evolution under the individual terms in the Hubbard model can be efficiently parallelized, ultimately requiring O(N) qubits and gates with only O(log N) parallel circuit depth for one time step, which allows efficient simulations of very large systems. We gain additional constant factors over previous approaches by optimizing the phase estimation algorithm to reduce the required number of (expensive) rotation gates by a factor of four, assuming that our gate set consists of one- or two-qubit Clifford operations and arbitrary single qubit rotations. We furthermore propose to use a larger Trotter time step for the adiabatic state preparation (whose time scale is controlled by the inverse gap squared), to prepare a very good (but not perfect) guess for the ground state, and then to refine this to the exact ground state by performing only the final quantum phase estimation (whose time scale is shorter, since it is controlled by the inverse gap) with a small time step.

We have finally discussed approaches to obtain a quadratic speedup in measurements by proposing two non-destructive measurement strategies, one based on the Hellmann-Feynman theorem, and another based on recovering the ground state after a (destructive) measurement of a single qubit.
We also introduce a new approach to measuring dynamic structure factors and spectral functions directly in frequency space, using ideas similar to angle-resolved photoemission experiments.

Estimating the gate counts required to simulate the Hubbard model, we find that, with gate times on the scale of microseconds, the ground state of the Hubbard model can be prepared within a few seconds even on large lattices. Small scale quantum computers will thus be a very powerful tool for the investigation of many problems in the field of strongly correlated electron models that are currently out of reach of any classical algorithm.

ACKNOWLEDGMENTS
We acknowledge useful discussions with Bela Bauer. This work was supported by Microsoft Research, by the Swiss National Science Foundation through the National Competence Center in Research NCCR QSIT, and by the ERC Advanced Grant SIMCOFE. The authors acknowledge the hospitality of the Aspen Center for Physics, supported by an NSF grant.

R. P. Feynman, International Journal of Theoretical Physics, 467 (1982).
A. Y. Kitaev, arXiv:quant-ph/9511026.
A. Y. Kitaev, A. H. Shen, and M. N. Vyalyi, Classical and Quantum Computation, Graduate Studies in Mathematics Vol. 47 (American Mathematical Society, 2002).
M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
G. Ortiz, J. Gubernatis, E. Knill, and R. Laflamme, Physical Review A, 022319 (2001).
J. D. Whitfield, J. Biamonte, and A. Aspuru-Guzik, Molecular Physics, 735 (2011).
S. B. Bravyi and A. Y. Kitaev, Annals of Physics, 210 (2002), quant-ph/0003137.
H. F. Trotter, Proc. Amer. Math. Soc., 545 (1959).
M. Suzuki, Communications in Mathematical Physics, 183 (1976).
D. Wecker, B. Bauer, B. K. Clark, M. B. Hastings, and M. Troyer, Phys. Rev. A, 022305 (2014).
M. B. Hastings, D. Wecker, B. Bauer, and M. Troyer, Quantum Info. Comput., 1 (2015).
D. Poulin, M. B. Hastings, D. Wecker, N. Wiebe, A. Doherty, and M. Troyer, Preprint (2014), arXiv:1406.4920.
D. Abrams and S. Lloyd, Physical Review Letters, 2586 (1997).
D. Abrams and S. Lloyd, Physical Review Letters, 5162 (1999).
R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme, Phys. Rev. A, 042323 (2002).
E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science, 472 (2001).
S. Trebst, U. Schollwöck, M. Troyer, and P. Zoller, Phys. Rev. Lett., 250402 (2006).
A. Hamma and D. A. Lidar, Phys. Rev. Lett., 030502 (2008).
M. C. Gutzwiller, Phys. Rev. Lett., 159 (1963).
J. Hubbard, Proc. R. Soc. London Ser. A, 238 (1963).
S. Sachdev, ArXiv e-prints (2010), arXiv:1012.0299 [hep-th].
E. Fradkin, S. A. Kivelson, and J. M. Tranquada, ArXiv e-prints (2014), arXiv:1407.4480 [cond-mat.supr-con].
C. M. Varma, Phys. Rev. B, 14554 (1997).
[Note] A system in which insulating behavior is caused by electron-electron interactions in a half-filled band (which would otherwise be metallic) is called a Mott insulator.
D. J. Thouless, Proc. Phys. Soc. London, 893 (1965).
[Note] In two dimensions, long-ranged magnetic order is impossible at temperatures T > 0; in quasi-two-dimensional materials with intra-layer exchange ∼ J, the Néel temperature at which long-ranged antiferromagnetic order develops is ∼ J⊥, the inter-layer exchange.
G. Misguich, C. Lhuillier, B. Bernu, and C. Waldtmann, Phys. Rev. B, 1064 (1999).
R. G. Melko, A. W. Sandvik, and D. J. Scalapino, Phys. Rev. B, 100408 (2004), cond-mat/0311080.
[Note] In two dimensions, quasi-long-ranged (i.e. algebraically-decaying) superconducting order can occur at 0 < T < T_KT, the Kosterlitz-Thouless temperature.
S. Sachdev, Quantum Phase Transitions (Cambridge University Press, 2011).
P. W. Anderson, Materials Research Bulletin, 153 (1973).
P. W. Anderson, Science, 1196 (1987).
X. G. Wen, Phys. Rev. B, 2664 (1991).
R. M. Fye, D. J. Scalapino, and R. T. Scalettar, Physical Review B, 8667 (1992).
E. Altman and A. Auerbach, Physical Review B, 104508 (2002).
H. Tsunetsugu, M. Troyer, and T. M. Rice, Physical Review B, 16078 (1994).
M. Troyer, H. Tsunetsugu, and T. M. Rice, Physical Review B, 251 (1996).
P. W. Anderson, P. A. Lee, M. Randeria, T. M. Rice, N. Trivedi, and F. C. Zhang, Journal of Physics: Condensed Matter, R755 (2004).
F. Verstraete and J. I. Cirac, J. Stat. Mech., P09012 (2005).
J. Roland and N. J. Cerf, Physical Review A, 042308 (2002).
D. A. Lidar, A. T. Rezakhani, and A. Hamma, Journal of Mathematical Physics, 102106 (2009).
N. Wiebe and N. S. Babcock, New Journal of Physics, 013024 (2012).
N. Wiebe, D. Berry, P. Høyer, and B. C. Sanders, Journal of Physics A: Mathematical and Theoretical, 065203 (2010).
K. Temme, T. Osborne, K. Vollbrecht, D. Poulin, and F. Verstraete, Nature, 87 (2011).
D. Aharonov, D. Gottesman, S. Irani, and J. Kempe, Commun. Math. Phys., 41 (2009).
A. M. Childs, D. Gosset, and Z. Webb, Proceedings of the 41st International Colloquium on Automata, Languages, and Programming, 308 (2014).
J. Huyghebaert and H. De Raedt, Journal of Physics A: Mathematical and General, 5777 (1990).
R. Babbush, J. McClean, D. Wecker, A. Aspuru-Guzik, and N. Wiebe, Physical Review A, 022311 (2015).
Appendix A: Error bounds for time-dependent Trotter formulas

Simulating the dynamics of time-dependent Hamiltonians on quantum computers is a more subtle issue than simulating time-independent dynamics, and results proven for the time-independent case do not necessarily transfer over. This issue is significant here because of issues surrounding the simulation of adiabatic state preparation. Bounds for the error in high-order Trotter formulas are known; however, the scaling of these bounds with the number of terms in the Hamiltonian is known to be loose. Since the Hamiltonians needed for our purposes have a large number of terms, tighter estimates of the error scaling may be important for determining the cost of performing adiabatic state preparation for fermionic systems.

It may be tempting to think that we can neglect the errors in the Trotter formula that are incurred by approximating H(t) by a piecewise constant Hamiltonian, because the time derivatives of the Hamiltonian are small for a slow adiabatic evolution, but this is not necessarily true, because adiabatic theorems only require that the derivatives of the Hamiltonian be small relative to an appropriate power of the minimum eigenvalue gap. This means that in some circumstances adiabatic evolution may actually involve a rapid passage. Furthermore, since we are interested in high accuracy state preparation, even though such errors may be small they will not necessarily be negligible compared to our target error tolerance.

In this section we analyze these errors in more detail. In particular, we will examine the error in the second-order Trotter-Suzuki formula

\[ \mathcal{T} e^{-i \int_0^1 H(u)\, du} \approx \prod_{j=1}^{m} e^{-i H_j(1/2)/2} \prod_{j=m}^{1} e^{-i H_j(1/2)/2}, \tag{A1} \]

where H(u) = \sum_{j=1}^m H_j(u) and we approximate the evolution from time 0 to 1 by a single second-order Trotter-Suzuki step. The bounds that we present are a generalization of those in Ref. 10 to the time-dependent case.
Bounds for the Trotter formula are given in Ref. 51. Our main result is that

\[ \Big\| \mathcal{T} e^{-i \int_0^1 H(u)\, du} - \prod_{j=1}^{m} e^{-i H_j(1/2)/2} \prod_{j=m}^{1} e^{-i H_j(1/2)/2} \Big\| \leq \frac{1}{24} \max_s \| H''(s) \| + \frac{1}{12} \| [H'(1/2), H(1/2)] \| + \sum_{j=1}^{m} \left( \frac{1}{24} \Big\| \Big[ H_j(1/2), \Big[ \sum_{k>j} H_k(1/2), H_j(1/2) \Big] \Big] \Big\| + \frac{1}{12} \Big\| \Big[ \sum_{k>j} H_k(1/2), \Big[ H_j(1/2), \sum_{\ell>j} H_\ell(1/2) \Big] \Big] \Big\| \right). \tag{A2} \]

This result reduces to the bound of Ref. 10, and agrees with the asymptotic error scaling predicted in Ref. 52 up to a use of the triangle inequality and a small multiplicative factor, if H is time-independent.

We prove the bound in two steps. First, we will show that

\[ \| \mathcal{T} e^{-i \int_0^1 H(u)\, du} - e^{-i H(1/2)} \| \leq \frac{1}{24} \max_s \| H''(s) \| + \frac{1}{12} \| [H'(1/2), H(1/2)] \|. \tag{A3} \]

Then, we use the bound of Ref. 10 that

\[ \Big\| e^{-i H(1/2)} - \prod_{j=1}^{m} e^{-i H_j(1/2)/2} \prod_{j=m}^{1} e^{-i H_j(1/2)/2} \Big\| \leq \sum_{j=1}^{m} \left( \frac{1}{24} \Big\| \Big[ H_j(1/2), \Big[ \sum_{k>j} H_k(1/2), H_j(1/2) \Big] \Big] \Big\| + \frac{1}{12} \Big\| \Big[ \sum_{k>j} H_k(1/2), \Big[ H_j(1/2), \sum_{\ell>j} H_\ell(1/2) \Big] \Big] \Big\| \right). \tag{A4} \]

Eq. (A2) then follows from the triangle inequality. We remark that in Ref. 10, a typographical error led to ‖[[A, H(x)], H(x)]‖ ≤ ‖[[A, B], A]‖ + ‖[[A, B], B]‖ being replaced by ‖[[A, H(x)], H(x)]‖ ≤ ‖[[A, B], A]] + [[A, B], B]‖ in the language of that paper, in Appendix B (see the text above Eq. (B3) of that paper). This pair of missing ‖ symbols propagated to later works, although it does not affect any of the scaling results cited in later papers. In the above equation, we have corrected this error, so that the right-hand side is a sum of two distinct norms for each j, rather than a norm of a sum.

To show Eq. (A3), we write H', H'', ... to denote the first, second, ... derivatives of H with respect to time. By Taylor's theorem,

\[ H(u) = H(1/2) + (u - 1/2) H'(1/2) + \int_{1/2}^{u} (u - s) H''(s)\, ds. \]

Hence \( \mathcal{T} e^{-i \int_0^1 H(u)\, du} \) is

\[ \mathcal{T} e^{-i \int_0^1 \left( H(1/2) + (u - 1/2) H'(1/2) + \int_{1/2}^{u} H''(s)(u - s)\, ds \right) du}, \]

and thus

\[ \Big\| \mathcal{T} e^{-i \int_0^1 H(u)\, du} - \mathcal{T} e^{-i \int_0^1 \left( H(1/2) + (u - 1/2) H'(1/2) \right) du} \Big\| \leq \int_0^1 \Big\| \int_{1/2}^{u} H''(s)(u - s)\, ds \Big\|\, du \leq \frac{1}{24} \max_s \| H''(s) \|. \tag{A5} \]

We next bound the difference

\[ \Big\| \mathcal{T} e^{-i \int_0^1 \left( H(1/2) + (u - 1/2) H'(1/2) \right) du} - e^{-i H(1/2)} \Big\|. \]

Transforming to the interaction representation gives

\[ \mathcal{T} e^{-i \int_0^1 \left( H(1/2) + (u - 1/2) H'(1/2) \right) du} = e^{-i (1/2) H(1/2)} \left( \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'_u(1/2)\, du} \right) e^{-i (1/2) H(1/2)}, \]

where we define

\[ H'_u(1/2) \equiv e^{-i (1/2 - u) H(1/2)}\, H'(1/2)\, e^{i (1/2 - u) H(1/2)}. \]

So, by a unitary rotation,

\[ \Big\| \mathcal{T} e^{-i \int_0^1 \left( H(1/2) + (u - 1/2) H'(1/2) \right) du} - e^{-i H(1/2)} \Big\| = \Big\| \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'_u(1/2)\, du} - 1 \Big\| = \Big\| \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'_u(1/2)\, du} - \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'(1/2)\, du} \Big\|, \tag{A6} \]

where the last equality uses \( \int_0^1 (u - 1/2)\, du = 0 \). The difference \( \| H'_u(1/2) - H'(1/2) \| \) can be written as

\[ \Big\| \int_0^1 \partial_x \left( e^{-i (1/2 - u)(1 - x) H(1/2)}\, H'(1/2)\, e^{i (1/2 - u)(1 - x) H(1/2)} \right) dx \Big\|, \]

which can be used to show that

\[ \| H'_u(1/2) - H'(1/2) \| \leq |u - 1/2|\, \| [H'(1/2), H(1/2)] \|, \]

and so

\[ \Big\| \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'_u(1/2)\, du} - \mathcal{T} e^{-i \int_0^1 (u - 1/2) H'(1/2)\, du} \Big\| \leq \int_0^1 |u - 1/2|^2\, \| [H'(1/2), H(1/2)] \|\, du = \frac{1}{12}\, \| [H'(1/2), H(1/2)] \|. \]
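The bound (A2) can be sanity-checked numerically on a small instance. The sketch below (random Hermitian matrices as stand-ins, and with the coefficient assignments 1/24 and 1/12 as reconstructed above) uses a two-term linearly interpolated Hamiltonian, for which H'' = 0, compares a fine-grained midpoint product approximating the time-ordered exponential against a single second-order Trotter step, and checks that the error sits below the bound:

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_herm(n, scale=0.2):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return scale * (A + A.conj().T) / 2

def U(Hm, t=1.0):
    """exp(-i Hm t) for Hermitian Hm via eigendecomposition."""
    w, v = np.linalg.eigh(Hm)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

def comm(A, B):
    return A @ B - B @ A

nrm = lambda M: np.linalg.norm(M, 2)

A, B = rand_herm(4), rand_herm(4)
H1 = lambda u: (1 - u) * A        # two-term time-dependent Hamiltonian,
H2 = lambda u: u * B              # H(u) = H1(u) + H2(u), u in [0, 1]

# time-ordered evolution, approximated by a fine midpoint product
steps = 2000
Uex = np.eye(4, dtype=complex)
for j in range(steps):
    u = (j + 0.5) / steps
    Uex = U(H1(u) + H2(u), 1.0 / steps) @ Uex

# one second-order Trotter step with all terms frozen at the midpoint u = 1/2
h1, h2 = H1(0.5), H2(0.5)
Utr = U(h1, 0.5) @ U(h2, 1.0) @ U(h1, 0.5)

err = nrm(Uex - Utr)
# bound (A2): here H''(u) = 0 and H'(1/2) = B - A
bound = (nrm(comm(B - A, h1 + h2)) / 12
         + nrm(comm(h1, comm(h2, h1))) / 24
         + nrm(comm(h2, comm(h1, h2))) / 12)
assert err < bound
```

For this linear interpolation the leading contribution to the error is the (1/12)‖[H'(1/2), H(1/2)]‖ term, which matches the second-order Magnus correction of the time-ordered exponential.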