Sparse-Hamiltonian approach to the time evolution of molecules on quantum computers
Christina Daniel, Diksha Dhawan, Dominika Zgid, James K. Freericks
EPJ manuscript No. (will be inserted by the editor)

Department of Physics, Georgetown University, Washington, DC 20057, USA
Department of Chemistry, University of Michigan, Ann Arbor, Michigan 48109, USA
Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA
Abstract.
Quantum chemistry has been viewed as one of the potential early applications of quantum computing. Two techniques have been proposed for electronic structure calculations: (i) the variational quantum eigensolver and (ii) the phase-estimation algorithm. In both cases, the complexity of the problem increases for basis sets where either the Hamiltonian is not sparse, or it is sparse, but many orbitals are required to accurately describe the molecule of interest. In this work, we explore the possibility of mapping the molecular problem onto a sparse Hubbard-like Hamiltonian, which allows a Green's-function-based approach to electronic structure via a hybrid quantum-classical algorithm. We illustrate the time-evolution aspect of this methodology with a simple four-site hydrogen ring.
In the variational quantum eigensolver algorithm [1,2], one prepares a trial wavefunction and evaluates the expectation values needed to determine the expectation value of the Hamiltonian with respect to that wavefunction. The number of measurements scales with the number of nonzero terms in the Hamiltonian, which typically grows like the fourth power of the number of spin orbitals used in the basis set for the given calculation. The state is usually prepared with a simple strategy like a unitary coupled-cluster approach [3]. A self-consistent loop optimizes the parameters in the variational ansatz until the required accuracy is achieved. The phase-estimation algorithm [4] instead determines the phase of exp(−iλĤ) and requires many applications of the exponential of the Hamiltonian onto the wavefunction, similar to time evolution, to complete the calculation (λ is a scaling factor). If the initial wavefunction has high overlap with the ground state, then the chance to project onto the ground state with the measurement is high.

In both cases, the complexity of the algorithm grows with the number of nonzero terms in the Hamiltonian matrix: for the variational quantum eigensolver, this is seen in the number of measurements required, while in the phase-estimation algorithm it is in the number of independent Trotter steps required for each application of the exponential of the Hamiltonian (multiplied by a constant). Given that current noisy intermediate-scale quantum (NISQ) computers can only run low-depth circuits, this is problematic for running these algorithms on complex molecules. Even when fault-tolerant quantum computers become available, they may still require low-depth circuits due to drift of the tuning of the machine over extended periods of time (which is not normally corrected by error-correction algorithms).
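The term-count scaling argument above can be made concrete with a small counting sketch (the two functions below are illustrative upper bounds we introduce here, not formulas from the paper): a general molecular Hamiltonian carries O(N^4) two-electron integrals, while a Hubbard-like Hamiltonian with a full hopping matrix but only local interactions carries O(N^2) terms.

```python
# Illustrative term counting (simple upper bounds, not exact integral counts).

def molecular_term_count(n_spin_orbitals: int) -> int:
    """Upper bound on nonzero terms: one-body plus two-body integrals."""
    n = n_spin_orbitals
    return n * n + n * n * n * n

def hubbard_term_count(n_sites: int) -> int:
    """Full hopping matrix (n^2 entries) plus one on-site U per site."""
    return n_sites * n_sites + n_sites

print(molecular_term_count(8))   # 4160
print(hubbard_term_count(4))     # 20
```

Even for this tiny example, the sparse mapping reduces the term count by more than two orders of magnitude, which translates directly into fewer measurements (VQE) or fewer Trotter terms (phase estimation).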
This then implies that methods focused on making the Hamiltonian matrix sparse are critical to the success of quantum-chemistry applications on quantum computers in the near term. In this work, we describe the time-evolution piece of the algorithm to do this. It is based on a simple premise: the electron correlations in the molecule can be efficiently encoded in the self-energy of the molecule. Then, if we can construct a sparse Hamiltonian that approximates the self-energy of the molecule well, we can use it to determine the properties of the molecule. We describe just how such a process can be carried out on a quantum computer with a simple example below. We examine the accuracy of using an approximate unitary coupled-cluster wavefunction to estimate the zero-temperature Green's function of the sparse Hamiltonian, which is the Hubbard Hamiltonian here.
The retarded Green's function in position space is defined to be

G_{ijσ}(t) = −i θ(t) Tr[ e^{−βĤ} {ĉ_{iσ}(t), ĉ†_{jσ}} ] / Z,   (1)

where Z = Tr exp(−βĤ) is the partition function, β is the inverse temperature, and θ(t) is the unit step function. Here, ĉ_{iσ} (ĉ†_{iσ}) is the annihilation (creation) operator for an electron at site i with spin σ. The braces denote the anticommutator, and the time evolution of the operators is given in the Heisenberg representation. The trace is over all many-body states with a fixed number of electrons (that is, we are calculating a canonical, not a grand-canonical, Green's function here). In this work, we focus on T = 0, where the trace includes just one state, the ground state. We can also work in momentum space (we assume the lattice is periodic), where

ĉ_{kσ} = (1/√V) Σ_{j=0}^{V−1} e^{−ikj} ĉ_{jσ},   (2)

V is the number of lattice sites, and we set the lattice constant a = 1. The allowed k values are 0, π/2, π, and 3π/2. The Hamiltonian is the Hubbard model [5],

Ĥ = Σ_{ijσ} t_{ij} ĉ†_{iσ} ĉ_{jσ} + U Σ_i ĉ†_{i↑} ĉ_{i↑} ĉ†_{i↓} ĉ_{i↓}.   (3)

Here, t_{ij} is the hopping matrix and U is the on-site Coulomb interaction. The first term is the kinetic energy and the second term is the potential energy. In this mapping, the hopping matrix is a full matrix, with nonzero coefficients for all hopping terms.

The Hamiltonian can also be written in momentum space as

Ĥ = Σ_{kσ} ε_k ĉ†_{kσ} ĉ_{kσ} + (U/V) Σ_{k k′ q} ĉ†_{k↑} ĉ_{k′↑} ĉ†_{q↓} ĉ_{k−k′+q↓},   (4)

and we will primarily be working with this form. Here, the bandstructure is given by ε_k = Σ_j t_{jj′} e^{ik(j−j′)}, which is independent of j′ due to the translational invariance of the lattice (hydrogen ring).
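The definitions in Eqs. (1) and (3) can be checked classically by brute force for four sites. The sketch below is our own minimal exact-diagonalization illustration, not the paper's code: it uses a Jordan-Wigner encoding, nearest-neighbor hopping only, and placeholder values of t and U (the paper's fitted parameters of Table 1 are not used here).

```python
import numpy as np

# Orbital ordering: (site0_up, ..., site3_up, site0_dn, ..., site3_dn).
N_ORB = 8

def jw_annihilation(p, n=N_ORB):
    """Jordan-Wigner matrix for the annihilation operator c_p on n orbitals."""
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # a|1> = |0>
    ops = [Z] * p + [a] + [I2] * (n - p - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

c = [jw_annihilation(p) for p in range(N_ORB)]
cd = [m.conj().T for m in c]

t_hop, U = -1.0, 2.0   # placeholder parameters, NOT the fitted Table 1 values
dim = 2 ** N_ORB
H = np.zeros((dim, dim))
for i in range(4):
    for s in (0, 4):                       # spin-up block, spin-down block
        j = (i + 1) % 4                    # periodic ring, nearest neighbors
        H += t_hop * (cd[i + s] @ c[j + s] + cd[j + s] @ c[i + s])
    H += U * (cd[i] @ c[i]) @ (cd[i + 4] @ c[i + 4])

evals, evecs = np.linalg.eigh(H)

# Canonical calculation: restrict to the four-electron (half-filled) sector.
Nmat = sum(cd[p] @ c[p] for p in range(N_ORB))
n_per_state = np.real(np.sum(evecs.conj() * (Nmat @ evecs), axis=0))
sector = np.where(np.abs(n_per_state - 4.0) < 1e-6)[0]
psi0 = evecs[:, sector[0]]                 # lowest-energy four-electron state

def G_ret(i, j, t):
    """T=0 retarded G_{ij,up}(t) = -i theta(t) <{c_i(t), c_j^dag}>, t >= 0."""
    U_t = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    c_t = U_t.conj().T @ c[i] @ U_t        # Heisenberg-picture operator
    anti = c_t @ cd[j] + cd[j] @ c_t
    return -1j * (psi0.conj() @ (anti @ psi0))

print(G_ret(0, 0, 0.0))   # -1j up to roundoff, since {c, c^dag} = 1
```

At t = 0 the anticommutator is the identity, so G_{ii}(0) = −i exactly; this is a useful sanity check on any implementation of Eq. (1).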
If the molecule is not a translationally invariant ring, a more complicated single-particle term in the Hamiltonian is needed, but we do not discuss this further here. Note that the hopping matrix also includes diagonal terms with j = j′.

The mapping of the molecular Hamiltonian to the Hubbard Hamiltonian is designed to recover the dynamic part of the finite-temperature self-energy of the parent molecular problem with all two-body interactions present. We call such a mapping the dynamical self-energy mapping (DSEM). Such a mapping was described in Ref. [6], where the effective on-site two-body integrals were chosen to recover the first moment of the frequency-dependent self-energy. Here, as a proof of principle, the given sparse Hamiltonian is created to recover the first moment of the exact self-energy obtained in the exact-diagonalization procedure. In general, the following mapping scheme can be used to design quantum-classical hybrid algorithms, where a classical computer is used to calculate the sparse Hamiltonian that is then employed by the quantum computer. Details of the preparation of such a mapping are described in Ref. [7]. The fitting procedure produces a diagonal term, a nearest-neighbor hopping, and a second-neighbor hopping, along with the on-site repulsion U (see Table 1).

Table 1.
Parameters for the sparse Hubbard Hamiltonian that represents the four-site hydrogen ring. All parameters are in Hartrees.
U | t (diagonal) | t (nearest neighbor) | t (second neighbor)

The exact ground state is found by diagonalizing the Hamiltonian with four electrons. It is a linear combination, with amplitudes α, β, and γ (and relative weights of ±1 and 2 within each group), of ten four-electron determinants of the form ĉ†_{k₁↑} ĉ†_{k₂↑} ĉ†_{k₃↓} ĉ†_{k₄↓} |0⟩ built from the momentum orbitals; in the reference determinant, the level with k = 0 and the level with k = 1 are filled. This state was described in Ref. [8], where the excitation operators for a factorized unitary coupled cluster are given. That approach is generalized here and summarized in Tables 2 and 3.

The factorized form of the unitary coupled-cluster approximation applies each doubles excitation (and de-excitation) operator, in the order given in Table 2, to the initial reference state. The resulting approximate ground state, in terms of the three variational angles, is summarized in Table 3. We use the same notation as Ref. [8]; one of the angles defined there does not appear because it was used for a quadruple excitation that we do not include here.

Table 2. Doubles unitary coupled-cluster operators used in creating the approximate ground state. The operators are applied in order according to the rows of the table. Each of the eight factors is a unitary of the form exp[−θ(T̂ − T̂†)], where T̂ = ĉ†ĉ†ĉĉ is a doubles excitation between momentum orbitals; the third factor uses a fixed angle involving π, while the remaining factors use the variational angles.

Once the analytical coefficients of Table 3 are obtained, numerical values are calculated from them. To do this, equations for the three angles are needed. These equations are given in Ref. [8]; they express the angles through inverse trigonometric functions of combinations of α, β, and γ (for example, one angle equals (1/2) sin⁻¹(4β)). After substituting the values of α, β, and γ into these equations, the approximate ground state with numerical coefficients is obtained (see Table 3). This wavefunction is representative of a generic state that one would obtain after performing a variational-quantum-eigensolver calculation.

Using this approximate ground state, we compute the (approximate) time-dependent Green's function, in which the exact ground-state wavefunction is replaced by the approximate one, and compare it to the exact Green's function computed with the exact ground state. Figure 1 shows these two results (real and imaginary parts). One can see that the two Green's functions are nearly identical; their differences are too small to be visible on the scale of the figure.
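Each factor exp[−θ(T̂ − T̂†)] is exactly unitary because T̂ − T̂† is anti-Hermitian. A two-state caricature makes this concrete (the toy operator T and angle below are our own illustration, not the specific operators of Table 2): with |0⟩ the reference determinant and |1⟩ a doubly excited determinant, the factor simply rotates the reference by θ.

```python
import numpy as np
from scipy.linalg import expm

# T promotes the reference to the excited determinant: T|0> = |1>, T|1> = 0.
T = np.array([[0.0, 0.0],
              [1.0, 0.0]])
theta = 0.3
Ufac = expm(theta * (T - T.T))       # anti-Hermitian generator -> unitary
psi = Ufac @ np.array([1.0, 0.0])    # apply the factor to the reference
print(np.allclose(psi, [np.cos(theta), np.sin(theta)]))  # True
```

Because successive factors with different excitation operators generally do not commute, the order of application in Table 2 matters; the analytical coefficients of Table 3 encode exactly this ordered product.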
The reason is that the square of the overlap of the approximate state with the exact ground state is very high (the fidelity is 0.99979). This value is surprising given that the approximate ground state differs from the exact one by coefficients on the order of 0.01, but there is a cancellation that leads to a higher fidelity.

Unfortunately, for such a small system, it does not make sense to use Dyson's equation to extract the frequency-dependent self-energy, because the data is truncated at too short a time. This leads to inaccuracies here: the system is finite, so the Fourier transform of the Green's function yields a sum over delta functions, and if the Green's function is cut off too early in the time domain, the results of the Fourier transform are significantly distorted. This becomes less of a concern for larger molecules, because they have so many frequencies that they tend to dephase and create a decaying Green's function in the time domain, which can be cut off at the cost of only a small broadening of the delta functions, so the calculations can proceed more normally for larger systems. Most molecules will fall into this category.

Table 3. General form and final numerical values of the coefficients of the approximate ground state after applying the doubles-only excitations, with c_i ≡ cos θ_i and s_i ≡ sin θ_i. Each determinant's analytical coefficient is a product of these cosines and sines. The two dominant determinants have numerical coefficients 0.6902877166375496 and −0.6886258223794277, while the final two have coefficients 0.06687536735226934 and −0.04669803278042805.
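The observation that per-coefficient errors of order 0.01 are compatible with a fidelity near 0.9998 can be illustrated numerically (the random vectors below are generic stand-ins we construct for illustration, not the wavefunctions of Table 3): the fidelity deficit is quadratic in the error vector, so small coefficient errors are strongly suppressed in the overlap.

```python
import numpy as np

rng = np.random.default_rng(0)
exact = rng.normal(size=16)
exact /= np.linalg.norm(exact)                 # "exact" normalized state
approx = exact + 0.01 * rng.normal(size=16)    # coefficient errors ~1e-2
approx /= np.linalg.norm(approx)
fidelity = np.dot(exact, approx) ** 2
print(fidelity)   # close to 1: the deficit is O(|error|^2), not O(|error|)
```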
Fig. 1.
Exact (solid) and approximate (dashed) retarded Green's function for the sparse Hubbard Hamiltonian. The time is in units of ℏ/Hartree.

The algorithm on the quantum computer is now straightforward. After mapping the problem from the molecule to the sparse Hubbard Hamiltonian, we use the factorized form of the unitary coupled-cluster ansatz to create an approximate ground state. In general, such an approach will involve many different types of excitations, but in this simple example, it involves only doubles excitations. The lower the order of the excitation, the lower the depth of the circuit for the quantum computer, so it is likely that many calculations will opt to use only singles and doubles, if possible. Then we would invoke the algorithm from the Los Alamos group [9,10] to calculate the Green's function by measuring the x or y Pauli spin operator on the ancilla qubit to determine the real and imaginary parts of G. Both of these steps can be carried out with relatively low-depth circuits due to the sparsity of the effective Hamiltonian, but they do still require time evolution, which remains beyond the capability of currently available NISQ machines. Once one has determined the Green's function to far enough time on the quantum computer, then we would take the Fourier transform, extract the self-energy from Dyson's equation, and employ it to describe the Green's function of the molecule. This then allows the ground-state energy of the molecule to be determined.

In this work, we described an approach for near-term quantum computers that will allow us to calculate the electronic structure of more complex molecules sooner. The approach maps the molecule onto a sparse Hamiltonian that has a full single-particle hopping matrix, but only local interactions. Due to the significant reduction in the number of nonzero matrix elements, this sparse Hamiltonian becomes much easier to simulate on a quantum computer, and one should be able to determine its Green's function once time evolution becomes possible; this may occur on near-term NISQ machines or may need to wait until fault-tolerant computers are available. Once the Green's function for the sparse Hamiltonian is found on the quantum computer, we extract the self-energy and use it as the self-energy for the molecule in the full molecular problem. We showed how using an approximate form for the ground state leads to an accurate approximation to the exact result for a relatively long period of time.
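The ancilla-measurement idea can be sketched with a statevector simulation of a generic Hadamard test (our own caricature, not the specific circuit of Refs. [9,10], which controls only the creation/annihilation operators rather than the full evolution): ⟨X⟩ and ⟨Y⟩ on the ancilla return the real and imaginary parts of a matrix element of a unitary V.

```python
import numpy as np

def hadamard_test(V, psi):
    """<X> + i<Y> on the ancilla of a Hadamard-test circuit = <psi|V|psi>."""
    d = len(psi)
    # ancilla in |+>, system in psi, then controlled-V on the system
    full = np.kron(np.array([1.0, 1.0]) / np.sqrt(2.0), psi).astype(complex)
    cV = (np.kron(np.diag([1.0, 0.0]), np.eye(d))
          + np.kron(np.diag([0.0, 1.0]), V))
    full = cV @ full
    X = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(d))
    Y = np.kron(np.array([[0.0, -1j], [1j, 0.0]]), np.eye(d))
    return (full.conj() @ X @ full) + 1j * (full.conj() @ Y @ full)

# toy "time evolution" in a two-level eigenbasis (made-up energies)
energies = np.array([0.0, 1.3])
t = 0.7
V = np.diag(np.exp(-1j * energies * t))
psi = np.array([0.6, 0.8])
z = hadamard_test(V, psi)
print(np.allclose(z, psi.conj() @ V @ psi))   # True
```

Repeating such measurements on a grid of times builds up the real and imaginary parts of G(t), which is the data set the classical post-processing consumes.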
Hence, this makes it promising that such an approach can lead to accurate and efficient ways to perform electronic-structure calculations on quantum computers.

In terms of quantum-computing complexity, the factorized form of the UCC state preparation uses only doubles excitations here, and the time evolution needed to compute the Green's function requires controlled operations only for the application of the creation or annihilation operators, not for the time evolution itself. This is contrary to the quantum phase-estimation algorithm, which requires controlled application of the Hamiltonian. Hence, the approach described here has the potential to be quite efficient for implementation on both near-term and fault-tolerant quantum computers.
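The classical post-processing step mentioned above, extracting the self-energy from Dyson's equation, reduces to Σ(ω) = G₀⁻¹(ω) − G⁻¹(ω). A scalar toy model shows the mechanics (all numbers below are made up for illustration; a molecular calculation would use matrix-valued G₀ and G):

```python
import numpy as np

# Toy single-level model: bare level eps0, constant self-energy shift sigma0,
# small broadening eta to keep the retarded functions nonsingular.
eps0, sigma0, eta = -0.5, 0.2, 0.05
w = np.linspace(-2.0, 2.0, 401)
G0 = 1.0 / (w - eps0 + 1j * eta)            # noninteracting Green's function
G = 1.0 / (w - eps0 - sigma0 + 1j * eta)    # "measured" interacting one
Sigma = 1.0 / G0 - 1.0 / G                  # Dyson's equation
print(np.allclose(Sigma, sigma0))           # True: recovers the shift
```

In the actual workflow, G would come from Fourier-transforming the time-domain data measured on the quantum computer, and the extracted Σ(ω) would then be reused in the full molecular problem.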
This work is supported by the National Science Foundation under grant number CHE-1836497. JKF is also funded by the McDevitt bequest at Georgetown University. We thank Manuel Weber for useful discussions.
References
1. A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, J. L. O'Brien, Nature Commun. 5, 4213 (2014)
2. Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferová, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. D. Sawaya, S. Sim, L. Veis, A. Aspuru-Guzik, Chem. Rev. 119, 10856 (2019)
3. R. J. Bartlett, S. A. Kucharski, J. Noga, Chem. Phys. Lett. 155, 133 (1989)
4. A. Kitaev, arXiv:quant-ph/9511026 (1995)
5. J. Hubbard, Proc. Roy. Soc. (London) A 276, 238 (1963)
6. A. A. Rusakov, J. J. Phillips, D. Zgid, J. Chem. Phys. 141, 194105 (2014)
7. D. Dhawan, D. Zgid, unpublished results
8. L. Xu, J. T. Lee, J. K. Freericks, Mod. Phys. Lett. B 34, 2040049 (2020)
9. G. Ortiz, J. E. Gubernatis, E. Knill, R. Laflamme, Phys. Rev. A 64, 022319 (2001)
10. D. Wecker, M. B. Hastings, N. Wiebe, B. K. Clark, C. Nayak, M. Troyer, Phys. Rev. A 92, 062318 (2015)