Condensed Matter Applications of Entanglement Theory

Norbert Schuch
Institute for Quantum Information, RWTH Aachen, 52056 Aachen, Germany
Lecture Notes of the IFF Spring School "Quantum Information Processing", edited by D. DiVincenzo (Forschungszentrum Jülich, 2013).
The aim of this lecture is to show how the tools of quantum information theory, and in particular entanglement theory, can help us to understand the physics of condensed matter systems and other correlated quantum many-body systems. In a certain sense, many-body systems are all around in quantum information: The defining feature of these systems is that they consist of many subsystems of the same dimension, equipped with a natural tensor product structure $(\mathbb{C}^d)^{\otimes N}$, and such structures appear in quantum information as quantum registers in quantum computers, as parties in multipartite protocols, and so on.

Most condensed matter systems exhibit only weak quantum correlations (that is, entanglement) between the individual subsystems. These systems are well described by a mean-field ansatz (i.e., a product state), and their behavior can be understood using Landau theory. However, some of the most exciting phenomena discovered in condensed matter physics in the last decades, such as the fractional quantum Hall effect or high-temperature superconductivity, are based on systems with strong interactions in which entanglement plays an essential role. Given the refined understanding of entanglement which has been developed in the field of quantum information theory, it is thus natural to apply quantum information concepts, and in particular the theory of entanglement, towards an improved understanding of quantum many-body systems. Indeed, an active field of research has grown during the last decade at the interface of quantum information theory and quantum many-body physics, and the aim of this lecture is to give an introduction to this area.

In this lecture, we will highlight two complementary topics at the interface between condensed matter and quantum information: In the first part (Sec. 2, 3), we will show how we can use insights on the entanglement structure of condensed matter systems to develop powerful methods to both numerically simulate and analytically characterize correlated many-body systems; and in the second part (Sec. 4), we will show how the field of quantum complexity theory allows us to better understand the limitations to our (numerical) understanding of those systems.

For clarity of the presentation, we will in the following restrict our attention to quantum spin systems on a lattice (such as a line or a square lattice in 2D), with a corresponding Hilbert space $(\mathbb{C}^d)^{\otimes N}$ (where each spin has $d$ levels, and the lattice has $N$ sites); generalizations to fermionic systems and beyond lattices will be discussed later. (We ignore fermionic systems for the moment, but as it turns out, many of the results described in this lecture also hold for fermionic systems, see Sec. 3.4.) Also, we will for the moment focus on ground state problems, i.e., given some Hamiltonian $H$ acting on our spin system, we will ask about properties of its ground state $|\Psi\rangle$. The approach we pursue will be variational: we will try to obtain a family of states which gives a good approximation of the ground state, for which quantities of interest can be evaluated efficiently, and where the best approximation to the ground state can be found efficiently. For instance, mean-field theory is a variational theory based on the class of product states (for spin systems) or Slater determinants (for electronic systems).

So which states should we use for our variational description of quantum spin systems? Of course, one could simply try to parametrize the ground state as
$$ |\Psi\rangle = \sum_{i_1,\dots,i_N} c_{i_1\dots i_N}\, |i_1,\dots,i_N\rangle, \qquad (1) $$
Fig. 1: Area law: The entropy of the reduced state of a block $A$ scales like the length of its boundary $\partial A$; in one dimension, this implies that the entropy is bounded by a constant.

and use the $c_{i_1\dots i_N}$ as variational parameters. Unfortunately, the number of parameters $c_{i_1\dots i_N}$ grows exponentially with $N$, making it impossible to have an efficient description of $|\Psi\rangle$ for growing system sizes. On the other hand, we know that efficient descriptions exist for physical Hamiltonians: Since $H = \sum_i h_i$ is a sum of few-body terms (even if we don't restrict to lattice systems), a polynomial number $N^k$ of parameters (with $k$ the bodiness of the interaction) suffices to specify $H$, and thus its ground state. That is, while a general $N$-body quantum state can occupy an exponentially large Hilbert space, all physical states live in a very small "corner" of this space. The difficulty, of course, is to find an efficient parametrization which captures the states in this corner of Hilbert space, while at the same time allowing for efficient simulation methods.

In order to have a guideline for constructing an ansatz class, we choose to look at the entanglement properties of ground states of interacting quantum systems. To this end, we consider a ground state $|\Psi\rangle$ on a lattice and cut a contiguous region of length $L$ (in one dimension) or an area $A$ (in two dimensions), cf. Fig. 1. It is a well-known result from entanglement theory [1] that the von Neumann entropy of the reduced density matrix $\rho_A$ of region $A$,
$$ S(\rho_A) = -\mathrm{tr}[\rho_A \log \rho_A], $$
quantifies the entanglement between region $A$ and the rest of the system. For a random quantum state, we expect this entanglement to be almost maximal, i.e., on the order of $|A| \log d$ (where $|A|$ is the number of spins in region $A$). Yet, if we study the behavior of $S(\rho_A)$ for ground states of local Hamiltonians, it is found that $S(\rho_A)$ essentially scales like the boundary of region $A$, $S(\rho_A) \propto |\partial A|$, with possible corrections for gapless Hamiltonians which are at most logarithmic in the volume, $S(\rho_A) \propto |\partial A| \log |A|$. This behavior is known as the area law for the entanglement entropy and has been observed throughout for ground states of local Hamiltonians (see, e.g., Ref. [2] for a review); for gapped Hamiltonians in one dimension, this result has been proven rigorously [3].

Since the entropy $S(\rho_A)$ quantifies the entanglement between region $A$ and its complement, the fact that $S(\rho_A)$ scales like the boundary of $A$ suggests that the entanglement between region $A$ and the rest of the system is essentially located around the boundary between the two regions, as illustrated in Fig. 2.
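As a quick numerical aside, $S(\rho_A)$ is straightforward to evaluate for small systems via the Schmidt decomposition. The following sketch is a minimal illustration of our own (the function names and the random-state example are not from the original text); it shows that a random state on ten qubits has nearly maximal block entropy, in contrast to the area-law behavior of ground states:

```python
import numpy as np

def block_entropy(psi, d, n_A):
    """Von Neumann entropy S(rho_A) of the first n_A sites of a state
    vector psi on a chain of d-level systems."""
    # Reshape the state into a (block A) x (rest of chain) matrix
    M = psi.reshape(d**n_A, -1)
    # Schmidt coefficients = singular values; eigenvalues of rho_A are s^2
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-12]                      # drop numerical zeros
    return -np.sum(p * np.log(p))

# A random state on 10 qubits: the entropy of half the chain is close to
# the maximal value n_A * log(2), i.e., a volume law.
rng = np.random.default_rng(0)
N = 10
psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi /= np.linalg.norm(psi)
print(block_entropy(psi, d=2, n_A=5), 5 * np.log(2))
```

For a product state the same function returns zero, and for 1D area-law states it returns an $N$-independent constant.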
Fig. 2: The area law suggests that the entanglement between two regions is located around the boundary.

We will now construct an ansatz for many-body quantum systems, starting from the insight that the entanglement is concentrated around the boundary of regions; for the moment, we will focus on one-dimensional systems. Clearly, since we want to have this property for any partitioning of the lattice, we cannot just place entangled pairs as in Fig. 2, but we have to choose a more subtle strategy. To this end, we consider the system at each site as being composed of two "virtual" subsystems of dimension $D$ each, as illustrated in Fig. 3a. Then, each of the two subsystems is placed in a maximally entangled state
$$ |\omega_D\rangle = \sum_{i=1}^{D} |i,i\rangle $$
with the corresponding subsystem of the adjacent sites, as shown in Fig. 3b. The maximally entangled states are called "bonds", with $D$ the "bond dimension". This construction already satisfies the area law: For any region we cut, there are exactly two maximally entangled states crossing the cuts, bounding the entanglement of the region by $2 \log D$. Finally, we apply linear maps $P_s : \mathbb{C}^D \otimes \mathbb{C}^D \to \mathbb{C}^d$ at each site $s$, which yields a description of a state on a chain of $d$-level systems, cf. Fig. 3c. (Note that the rank of the reduced density operator of any region cannot be increased by applying the linear maps $P_s$.) The construction can be carried out either with periodic boundary conditions, or with open boundary conditions by omitting the outermost virtual subsystems at the ends of the chain. The total state of the chain can be written as
$$ |\Psi\rangle = (P_1 \otimes \cdots \otimes P_N)\, |\omega_D\rangle^{\otimes N}, \qquad (2) $$
where the maps $P_s$ act on the maximally entangled states as illustrated in Fig. 3c.

This class of states can be rewritten as follows: For each site $s$, define a three-index tensor $A^{[s]}_{i,\alpha\beta}$, $i = 1,\dots,d$, $\alpha,\beta = 1,\dots,D$, where the $A^{[s]}_i$ can be interpreted as $D \times D$ matrices, such that
$$ P_s = \sum_{i,\alpha,\beta} A^{[s]}_{i,\alpha\beta}\, |i\rangle\langle \alpha,\beta|. \qquad (3) $$
Then, the state (2) can be rewritten as
$$ |\Psi\rangle = \sum_{i_1,\dots,i_N} \mathrm{tr}\big[ A^{[1]}_{i_1} A^{[2]}_{i_2} \cdots A^{[N]}_{i_N} \big]\, |i_1,\dots,i_N\rangle, \qquad (4) $$
i.e., the coefficient $c_{i_1\dots i_N}$ in (1) can be expressed as a product of matrices. For this reason, these states are called
Matrix Product States (MPS). For systems with open boundary conditions, the matrices $A^{[1]}_{i_1}$ and $A^{[N]}_{i_N}$ are $1 \times D$ and $D \times 1$ matrices, respectively, so that the trace can be omitted. More generally, $D$ can be chosen differently across each link.

The equivalence of (2) and (4) can be proven straightforwardly by noting that for two maps $P_1$ and $P_2$, and the bond $|\omega_D\rangle$ between them, it holds that
$$ P_1 \otimes P_2\, |\omega_D\rangle = \sum_{i_1,i_2,\alpha,\beta} \big(A^{[1]}_{i_1} A^{[2]}_{i_2}\big)_{\alpha\beta}\, |i_1,i_2\rangle\langle\alpha,\beta|, $$
and iterating this argument through the chain.
Fig. 3: Construction of MPS: a) Each site is composed of two virtual subsystems. b) The virtual subsystems are placed in maximally entangled states. c) Linear maps $P_s$ are applied which map the two virtual systems to the physical system.

The defining equation (4) for Matrix Product States is a special case of a so-called tensor network. Generally, tensor networks are given by a number of tensors $A_{i_1,i_2,\dots,i_K}$, $B_{i_1,i_2,\dots,i_K}$, etc., where each tensor usually only depends on a few of the indices. Then, one takes the product of the tensors and sums over a subset of the indices,
$$ c_{i_1\dots i_k} = \sum_{i_{k+1},\dots,i_K} A_{i_1,i_2,\dots,i_K}\, B_{i_1,i_2,\dots,i_K} \cdots. $$
For instance, in (4) the tensors are the $A^{[s]} \equiv A^{[s]}_{i,\alpha\beta}$, and we sum over the virtual indices $\alpha,\beta,\dots$, yielding
$$ c_{i_1\dots i_N} = \sum_{\alpha,\beta,\gamma,\dots,\zeta} A^{[1]}_{i_1,\alpha\beta}\, A^{[2]}_{i_2,\beta\gamma} \cdots A^{[N]}_{i_N,\zeta\alpha} = \mathrm{tr}\big[A^{[1]}_{i_1} A^{[2]}_{i_2} \cdots A^{[N]}_{i_N}\big]. $$

Tensor networks are most conveniently expressed in a graphical language. Each tensor is denoted by a box with "legs" attached to it, where each leg corresponds to an index; a three-index tensor $A^{[s]} \equiv A^{[s]}_{i,\alpha\beta}$ is then depicted as a box with three legs. Summing over a joint index is denoted by connecting the corresponding legs. In this language, the expansion coefficient $c_{i_1\dots i_N}$ [Eq. (1)] of an MPS (which we will further on use interchangeably with the state itself) is written as a row of boxes whose virtual legs are connected and whose physical legs remain open (5). We will make heavy use of this graphical language for tensor networks in the following.

As it turns out, MPS are very well suited to describe ground states of one-dimensional quantum systems. On the one hand, we have seen that by construction, these states all satisfy the area law. On the other hand, it can be shown that all states which satisfy an area law, such as ground states of gapped Hamiltonians [3], as well as states for which the entanglement of a block grows slowly (such as for critical 1D systems), can be well approximated by an MPS [3, 4]: Given a state $|\Phi\rangle$ on a chain of length $N$ for which the entropy of any block of length $L$ is bounded by $S_{\max}$, $S(\rho_L) \le S_{\max}$, there exists an MPS $|\Psi_D\rangle$ which approximates $|\Phi\rangle$ up to error
$$ \big\| |\Phi\rangle - |\Psi_D\rangle \big\| =: \epsilon \le \mathrm{const} \times \frac{N\, e^{c S_{\max}}}{D^c}. \qquad (6) $$
Note that even if $S_{\max}$ grows logarithmically with $N$, the numerator is still a polynomial in $N$. That is, in order to achieve a given accuracy $\epsilon$, we need to choose a bond dimension $D$ which scales polynomially in $N$ and $1/\epsilon$, and thus, the total number of parameters (and, as we will see later, also the computation time) scales polynomially as long as the desired accuracy is at most $1/\mathrm{poly}(N)$.

Beyond the fact that ground states of gapped Hamiltonians can be efficiently approximated by MPS, there is a large class of states which can be expressed exactly as MPS with a small bond dimension $D$. First of all, any product state is trivially an MPS with $D = 1$. The GHZ state
$$ |\mathrm{GHZ}\rangle = |0\cdots 0\rangle + |1\cdots 1\rangle $$
is an MPS with $D = 2$, with a translationally invariant MPS representation $A^{[s]} \equiv A\ \forall s$, with $A_0 = |0\rangle\langle 0| \equiv \left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ and $A_1 = |1\rangle\langle 1| \equiv \left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)$. The W state
$$ |\mathrm{W}\rangle = |10\cdots 0\rangle + |010\cdots 0\rangle + \cdots + |0\cdots 01\rangle $$
is an MPS with $D = 2$, with $A^{[s]} = A$ for $2 \le s \le N-1$, $A^{[1]} = \langle 0| A$, and $A^{[N]} = A |1\rangle$, where $A_0 = \mathbb{1}$ and $A_1 = |0\rangle\langle 1|$.
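To make Eq. (4) concrete, here is a small self-contained check (an illustration of our own, not from the original text) that the GHZ tensors given above indeed reproduce the GHZ state via $c_{i_1\dots i_N} = \mathrm{tr}[A_{i_1}\cdots A_{i_N}]$:

```python
import numpy as np
from itertools import product

# Translationally invariant MPS tensors for the GHZ state (D = 2):
# A_0 = |0><0|, A_1 = |1><1|
A = np.zeros((2, 2, 2))   # index order: physical i, left alpha, right beta
A[0] = np.diag([1., 0.])
A[1] = np.diag([0., 1.])

N = 6
psi = np.zeros(2**N)
for idx, config in enumerate(product(range(2), repeat=N)):
    M = np.eye(2)
    for i in config:
        M = M @ A[i]
    psi[idx] = np.trace(M)   # c_{i1...iN} = tr[A_{i1} ... A_{iN}], Eq. (4)

ghz = np.zeros(2**N)
ghz[0] = ghz[-1] = 1.        # |0...0> + |1...1> (unnormalized)
print(np.allclose(psi, ghz))  # True
```

Note that the loop over all $2^N$ configurations is only for verification; the whole point of the MPS format is that one never needs to enumerate the coefficients explicitly.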
Other states which have an MPS representation with small $D$ are the cluster state used in measurement-based computation [6, 7] (with $D = 2$), the 1D resonating valence bond state which appears as the ground state of the so-called Majumdar–Ghosh model [8, 9] (with $D = 3$), and the AKLT state which we will get to know in Section 2.3.1.

As we have discussed at the end of Section 2.1.3, MPS approximate ground states of local Hamiltonians efficiently, as the effort needed for a good approximation scales only polynomially in the length of the chain and the desired accuracy. Thus, it seems appealing to use the class of MPS as a variational ansatz to simulate the properties of quantum many-body systems.
(Strictly speaking, the bound (6) only follows from an area law for the Rényi entropy $S_\alpha = \frac{\log \mathrm{tr}[\rho^\alpha]}{1-\alpha}$ for $\alpha < 1$, with $c$ in (6) depending on $\alpha$ [4], which also holds for gapped Hamiltonians [3]. The proof uses the fact that an area-law bound implies a fast decay of the Schmidt coefficients (i.e., the eigenvalues of the reduced density operator), and thus, one can construct an MPS by sequentially doing Schmidt decompositions of the state and discarding all but the largest $D$ Schmidt coefficients [4, 5].)

However, to this end it is not sufficient to have an efficient description of relevant states – after all, the Hamiltonian itself forms an efficient description of its ground state, but it is hard to extract information from it! Rather, a good variational class also requires that we can efficiently extract quantities of interest such as energies, correlation functions, and the like, and that there is an efficient way to find the ground state (i.e., minimize the energy within the variational class of states) in the first place.
Let us start by discussing how to compute the expectation value of a local operator $h$ (such as a term in the Hamiltonian) for an MPS. To this end, note that
$$ \langle\Psi| h |\Psi\rangle = \sum_{\substack{i_1,\dots,i_N\\ j_1,\dots,j_N}} c_{i_1\dots i_N}\, c^*_{j_1\dots j_N}\, \delta_{i_1,j_1} \cdots \delta_{i_{k-1},j_{k-1}}\, h^{j_k j_{k+1}}_{i_k i_{k+1}}\, \delta_{i_{k+2},j_{k+2}} \cdots \delta_{i_N,j_N}, $$
where
$$ h = \sum_{\substack{i_k,i_{k+1}\\ j_k,j_{k+1}}} h^{j_k j_{k+1}}_{i_k i_{k+1}}\, |i_k,i_{k+1}\rangle\langle j_k,j_{k+1}| $$
acts on sites $k$ and $k+1$. Using the graphical tensor network notation, this corresponds to sandwiching $h$ between the ket and bra MPS networks (7).

In order to evaluate this quantity, we have to contract the whole diagram (7). In principle, contracting arbitrary tensor networks can become an extremely hard problem (strictly speaking, #P-hard [10]), as in some cases it essentially requires to determine exponentially big tensors (e.g., we might first have to compute $c_{i_1\dots i_N}$ from the tensor network and from it determine the expectation value). Fortunately, it turns out that the tensor network of Eq. (7) can be contracted efficiently, i.e., with an effort polynomial in $D$ and $N$. To this end, let us start from the very left of the tensor network in Eq. (7) and block the leftmost column (tensors $A^{[1]}$ and $\bar{A}^{[1]}$). Contracting the internal index, this gives a two-index tensor
$$ L_{\alpha\alpha'} = \sum_i A^{[1]}_{i,\alpha}\, \bar{A}^{[1]}_{i,\alpha'}, $$
which we interpret as a (bra) vector with a "double index" $\alpha\alpha'$ of dimension $D^2$; graphically, it is denoted with a doubled line for the "doubled" index of dimension $D^2$. We can now continue this way and define operators (called transfer operators)
$$ (E^{[s]})^{\beta\beta'}_{\alpha\alpha'} = \sum_i A^{[s]}_{i,\alpha\beta}\, \bar{A}^{[s]}_{i,\alpha'\beta'} $$
mapping $\alpha\alpha'$ to $\beta\beta'$ (8). Similarly, we define a transfer operator $E_h$ for the two sites on which $h$ acts (9), as well as a right boundary vector $R$. All of these objects can be computed efficiently (in the parameters $D$ and $N$), as they are vectors/matrices of fixed dimension $D^2$ and can be obtained by contracting a constant number of indices.

Using the newly defined objects $L$, $E$, $E_h$, and $R$, the expectation value $\langle\Psi| h |\Psi\rangle$, Eq. (7), can be rewritten as
$$ \langle\Psi| h |\Psi\rangle = L\, E^{[2]} \cdots E^{[k-1]}\, E_h\, E^{[k+2]} \cdots R. $$
That is, $\langle\Psi| h |\Psi\rangle$ can be computed by multiplying a $D^2$-dimensional vector $O(N)$ times with $D^2 \times D^2$ matrices. Each of these multiplications takes $O(D^4)$ operations, and thus, $\langle\Psi| h |\Psi\rangle$ can be evaluated in $O(N D^4)$ operations. There are $O(N)$ terms in the Hamiltonian, and thus, the energy $\langle\Psi| \sum_i h_i |\Psi\rangle / \langle\Psi|\Psi\rangle$ can be evaluated in time $O(N^2 D^4)$, and thus efficiently; in fact, this method can easily be improved to scale as $O(N D^3)$. (Firstly, one uses that the products $L \cdots E^{[s]}$ and $E^{[s]} \cdots R$ need to be computed only once (this can be simplified even further by choosing the appropriate gauge [11, 12]), reducing the scaling in $N$ to $O(N)$. Secondly, one slightly changes the contraction order: Starting from the left, one contracts $A^{[1]}$, $\bar{A}^{[1]}$, $A^{[2]}$, $\bar{A}^{[2]}$, $A^{[3]}$, etc.; this involves multiplications of $D \times D$ matrices with $D \times dD$ matrices, and $D \times Dd$ matrices with $Dd \times D$ matrices, yielding an $O(dD^3)$ scaling.) Similarly, one can see that e.g. correlation functions $\langle\Psi| P_i \otimes Q_j |\Psi\rangle$ or string order parameters $\langle\Psi| X \otimes X \otimes \cdots \otimes X |\Psi\rangle$ can be reduced to matrix multiplications and thus evaluated in $O(N D^3)$. Exactly the same way, evaluating expectation values for MPS with periodic boundary conditions can be reduced to computing the trace of a product of matrices $E$ of size $D^2 \times D^2$. Each multiplication scales like $O(D^6)$, and using the same tricks as before, one can show that for systems with periodic boundary conditions, expectation values can be evaluated in time $O(N D^5)$.

In summary, we find that energies, correlation functions, etc. can be efficiently evaluated for MPS, with computation times scaling as $O(N D^3)$ and $O(N D^5)$ for open and periodic boundary conditions, respectively.
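The following sketch (a minimal implementation of our own, with the index conventions defined above) evaluates a single-site expectation value for a random open-boundary MPS by multiplying a boundary vector through the chain of transfer operators; for simplicity it uses the plain $O(N D^4)$ scheme rather than the optimized $O(N D^3)$ one:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, D = 20, 2, 4

# Random MPS with open boundaries; A[s] has shape (d, D_left, D_right)
dims = [1] + [D] * (N - 1) + [1]
As = [rng.normal(size=(d, dims[s], dims[s + 1])) for s in range(N)]

def transfer_op(A, O=None):
    """Transfer operator E_O = sum_{ij} <j|O|i> A_i (x) conj(A_j),
    reshaped to a (Dl*Dl') x (Dr*Dr') matrix."""
    if O is None:
        O = np.eye(A.shape[0])
    E = np.einsum('iab,ji,jcd->acbd', A, O, np.conj(A))
    return E.reshape(A.shape[1] ** 2, A.shape[2] ** 2)

def expval(As, O, k):
    """<Psi|O_k|Psi> for a single-site operator O at site k, computed by
    sweeping a vector through the chain of transfer operators."""
    v = np.ones((1, 1))   # trivial (1x1) left boundary
    for s, A in enumerate(As):
        v = v @ transfer_op(A, O if s == k else None)
    return v.item()

sz = np.diag([1., -1.])
norm = expval(As, np.eye(d), k=0)          # O = identity gives <Psi|Psi>
print(expval(As, sz, k=10) / norm)         # <sigma_z> at site 10
```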
As we have seen, we can efficiently compute the energy of an MPS with respect to a given local Hamiltonian $H = \sum_i h_i$. In order to use MPS for numerical simulations, we still need to figure out an efficient way to find the MPS which minimizes the energy for a given $D$. To this end, let us first pick a site $k$ and try to minimize the energy as a function of $A^{[k]}$, while keeping all other MPS tensors $A^{[s]}$, $s \ne k$, fixed. Now, since $|\Psi\rangle$ is a linear function of $A^{[k]}$, we have that
$$ \frac{\langle\Psi| H |\Psi\rangle}{\langle\Psi|\Psi\rangle} = \frac{\vec{A}^{[k]\dagger} X \vec{A}^{[k]}}{\vec{A}^{[k]\dagger} Y \vec{A}^{[k]}} $$
is the ratio of two quadratic forms in $A^{[k]}$. Here, $\vec{A}^{[k]}$ denotes the vectorized version of $A^{[k]}_{i,\alpha\beta}$, where $(i,\alpha,\beta)$ is interpreted as a single index. The matrices $X$ and $Y$ can be obtained by contracting the full tensor network (7) except for the tensors $A^{[k]}$ and $\bar{A}^{[k]}$, which can be done efficiently. The $\vec{A}^{[k]}$ which minimizes this energy can be found by solving the generalized eigenvalue equation
$$ X \vec{A}^{[k]} = E\, Y \vec{A}^{[k]}, $$
where $E$ is the energy; again, this can be done efficiently in $D$. For MPS with open boundary conditions, we can choose a gauge for the tensors such that $Y = \mathbb{1}$. (MPS have a natural gauge degree of freedom, since for any $X_s$ with a right inverse $X_s^{-1}$ we can always replace $A^{[s]}_i \leftrightarrow A^{[s]}_i X_s$ and $A^{[s+1]}_i \leftrightarrow X_s^{-1} A^{[s+1]}_i$ without changing the state; this gauge degree of freedom can be used to obtain standard forms for MPS with particularly nice properties [12, 13].)

This shows that we can efficiently minimize the energy as a function of the tensor $A^{[k]}$ at an individual site $k$. In order to minimize the overall energy, we start from a randomly chosen MPS and then sweep through the sites, sequentially optimizing the tensor at each site. Iterating this a few times over the system (usually sweeping back and forth) quickly converges to a state with low energy. Although in principle such an optimization can get stuck [14, 15], in practice it works extremely well and generally converges to the optimal MPS (though some care might have to be put into choosing the initial conditions).

In summary, we find that we can use MPS to efficiently simulate ground state properties of one-dimensional quantum systems with both open and periodic boundary conditions. This simulation method can be understood as a reformulation of the Density Matrix Renormalization Group (DMRG) algorithm [16, 17], a renormalization algorithm based on keeping the states which are most relevant for the entanglement of the system, which since its invention has been highly successful in simulating the physics of one-dimensional quantum systems (see Refs. [11, 18, 19] for a review of the DMRG algorithm and its relation to MPS, and a more detailed discussion of MPS algorithms).
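Here is a minimal sketch of the local optimization step just described, assuming the effective matrices $X$ and $Y$ have already been obtained by contracting the environment (in this illustration of our own they are simply random; `local_update` is our own name):

```python
import numpy as np
from scipy.linalg import eigh

def local_update(X, Y):
    """One local optimization step: given the effective matrices X
    (Hermitian) and Y (positive definite) for site k, solve the
    generalized eigenvalue problem X a = E Y a and return the
    lowest-energy vectorized tensor A^[k]."""
    E, vecs = eigh(X, Y)        # eigenvalues in ascending order
    return E[0], vecs[:, 0]

# Toy illustration with random Hermitian X and positive-definite Y:
rng = np.random.default_rng(2)
n = 32                          # = d * D^2, the size of vec(A^[k])
M = rng.normal(size=(n, n)); X = (M + M.T) / 2
B = rng.normal(size=(n, n)); Y = B @ B.T + n * np.eye(n)
E0, a = local_update(X, Y)
print(E0, np.allclose(X @ a, E0 * (Y @ a)))   # lowest E, check X a = E Y a
```

In an actual sweep, one would rebuild $X$ and $Y$ at each site from cached left and right environment contractions, move to the next site, and iterate until the energy converges.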
Up to now, we have seen that MPS form a powerful framework for the simulation of one-dimensional systems, due to their ability to efficiently approximate ground states of local Hamiltonians, and due to the fact that the necessary variational minimization can be carried out efficiently. As we will see in the following, MPS can also be used as a framework to construct solvable models whose properties can be understood analytically, which makes them a powerful tool for resolving various analytical problems regarding quantum many-body systems.

Fig. 4: The AKLT model is built from spin-$\frac{1}{2}$ singlets ($S = 0$) between adjacent sites, which are projected onto the spin-one subspace ($P = \Pi_{S=1}$) as indicated.

The paradigmatic analytical MPS model is the so-called AKLT state, named after Affleck, Kennedy, Lieb, and Tasaki [20, 21]. The construction of the translationally invariant AKLT state is illustrated in Fig. 4: The auxiliary systems are spin-$\frac{1}{2}$ particles, which are put into a singlet (i.e., spin $S = 0$) state $|\omega\rangle = |01\rangle - |10\rangle$. The two virtual spin-$\frac{1}{2}$'s at each site have jointly spin $\frac{1}{2} \otimes \frac{1}{2} = 0 \oplus 1$, and the map $P$ projects onto the spin-1 subspace, thereby describing a translationally invariant chain of spin-1 particles. (The singlet can easily be replaced by a normal bond $|00\rangle + |11\rangle$ by applying the transformation $\big(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\big)$ on one side, which can subsequently be absorbed in the map $P$.)

Let us now see what we can say about the reduced density operator $\rho_2$ of two consecutive sites in the AKLT model. The following argument is illustrated in Fig. 5: On the virtual level, we start from two "open" bonds, each of which has spin $\frac{1}{2}$ (while we might have more information about their joint state, we are free to neglect it), as well as one "interior" bond which has spin 0; thus, the virtual state we start from has spin $\frac{1}{2} \otimes 0 \otimes \frac{1}{2} = 0 \oplus 1$. The projections $P$ onto the spin-1 subspace, on the other hand, do not change the total spin (they only affect the weight of different subspaces), which implies that $\rho_2$ has spin 0 or 1 as well. On the other hand, $\rho_2$ is a state of two physical sites, each of which has spin 1; that is, it could have spin $1 \otimes 1 = 0 \oplus 1 \oplus 2$. We therefore find that there is a non-trivial constraint on $\rho_2$ arising from the AKLT construction: $\rho_2$ cannot have spin 2.

We can now use this to construct a non-trivial Hamiltonian which has the AKLT state as its ground state. Namely, let $h_{i,i+1} = \Pi_{S=2}$ be a local Hamiltonian acting on sites $i$ and $i+1$, projecting onto the spin $S = 2$ subspace of the two sites. Then, by the preceding argument, $h_{i,i+1} |\Psi_{\mathrm{AKLT}}\rangle = 0$ with $|\Psi_{\mathrm{AKLT}}\rangle$ the AKLT state, and thus, with
$$ H = \sum_i h_{i,i+1}, $$
we have that $H |\Psi_{\mathrm{AKLT}}\rangle = 0$.
Fig. 5: Construction of the parent Hamiltonian for the AKLT model.
On the other hand, $H \ge 0$, and thus, the AKLT state is a ground state of the AKLT Hamiltonian $H$.

While the argument we have given right now seems specific to the AKLT model, it turns out that in the very same way, we can construct a so-called "parent Hamiltonian" for any MPS. To this end, note that the reduced density operator $\rho_k$ for $k$ consecutive sites lives in a $d^k$-dimensional space, yet it can have rank at most $D^2$ (since the only free parameters we have are the outermost "open" bonds). Thus, as soon as $d^k > D^2$, $\rho_k$ does not have full rank, and we can build a local Hamiltonian $h$ acting on $k$ consecutive sites as $h = \mathbb{1} - \Pi_{\mathrm{supp}(\rho_k)}$ which has the given MPS as its ground state.

Of course, it is not sufficient to have a non-trivial Hamiltonian with the AKLT state (or any other MPS) as its ground state; rather, we want the AKLT state to be the unique ground state of this Hamiltonian. Indeed, it turns out that there is a simple condition which is satisfied for almost any MPS (and in particular for the AKLT state) which allows to prove that it is the unique ground state of a suitably chosen parent Hamiltonian; for the case of the AKLT model, it turns out that this Hamiltonian is the two-body Hamiltonian which we defined originally. (This condition is known as "injectivity" and states that there is a $k$ such that after blocking $k$ sites, the map $X \mapsto \sum \mathrm{tr}[A_{i_1} \cdots A_{i_k} X]\, |i_1,\dots,i_k\rangle$ from the "open bonds" to the physical system is injective, or, differently speaking, that we can infer the state of the bonds by looking at the physical system; the AKLT state acquires injectivity for $k = 2$. It can then be shown that the parent Hamiltonian defined on $k+1$ sites has a unique ground state; in the case of the AKLT model, one can additionally show that the ground space of this three-body parent Hamiltonian and the original two-body one are identical [12, 13, 21–23].) Secondly, it can be shown that the parent Hamiltonian of such an MPS is always gapped, that is, there is a finite gap above the ground space which does not close even as $N \to \infty$.

Together, this proves that the AKLT state is the unique ground state of a gapped local Hamiltonian. It is an easy exercise to show that the AKLT Hamiltonian equals
$$ h_{i,i+1} = \tfrac{1}{2}\, \vec{S}_i \cdot \vec{S}_{i+1} + \tfrac{1}{6}\, \big(\vec{S}_i \cdot \vec{S}_{i+1}\big)^2 + \tfrac{1}{3}, $$
and is thus close to the spin-1 Heisenberg model. Indeed, proving that the spin-1 Heisenberg model is gapped, as conjectured by Haldane, was one motivation for the construction of the AKLT model, showing that MPS allow for the construction of models of interest for which certain properties can be rigorously proven.
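This identity is easy to verify numerically. The following check (our own) builds the spin-1 operators, uses that $\vec{S}_i \cdot \vec{S}_{i+1}$ takes the values $-2, -1, +1$ on the total-spin $0, 1, 2$ subspaces, and confirms that $h_{i,i+1}$ is exactly the projector $\Pi_{S=2}$:

```python
import numpy as np

# Spin-1 operators
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1., 0., -1.])

# Heisenberg coupling S_i . S_{i+1} on two spin-1 sites (real symmetric)
SS = (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)).real

# AKLT two-site term: h = 1/2 S.S + 1/6 (S.S)^2 + 1/3
h = SS / 2 + (SS @ SS) / 6 + np.eye(9) / 3

# Projector onto total spin S = 2, built from the eigendecomposition of
# S.S, whose eigenvalues are -2, -1, +1 on the S = 0, 1, 2 subspaces.
evals, evecs = np.linalg.eigh(SS)
V2 = evecs[:, np.isclose(evals, 1.0)]
P2 = V2 @ V2.T
print(np.allclose(h, P2))   # True: h is the projector onto spin 2
```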
The fact that MPS naturally appear as ground states of local Hamiltonians makes them a very powerful tool in studying the properties of one-dimensional gapped systems. In particular, one can use them to classify phases of one-dimensional systems: On the one hand, we know that the ground state of any gapped one-dimensional Hamiltonian is well approximated by an MPS. This allows us to replace the original Hamiltonian by the parent Hamiltonian of the MPS while keeping a gap (i.e., without crossing a phase transition). Given two MPS characterized by tensors $P_1$ and $P_2$, we can now build a smooth interpolation between the two models (i.e., Hamiltonians) by appropriately interpolating between $P_1$ and $P_2$, due to the very way the parent Hamiltonian is constructed from the tensors. Symmetries can be incorporated into this framework as well: an on-site symmetry (i.e., a unitary representation $U_g$ of a symmetry group acting on each site of $|\Psi\rangle$) implies that $U_g^{\otimes N} |\Psi\rangle = |\Psi\rangle$. For an MPS with such a symmetry, it can be shown [25, 26] that the symmetry is reflected in a local symmetry of the MPS map $P$, $U_g P = P (V_g \otimes \bar{V}_g)$, and classifying the possible inequivalent types of $V_g$ allows to classify the different phases in one-dimensional systems in the presence of symmetries [24, 27, 28].

As we have discussed in the preceding section, MPS form a powerful tool for the construction of solvable models. Let us now have a closer look at what we can say about the behavior of physical quantities. While we have discussed how to extract physically relevant quantities such as energies and correlation functions numerically in Sec. 2.2.1, the purpose of this section is different: Here, we focus on a translationally invariant exact MPS ansatz, and we are asking for simple (and ideally closed) expressions for quantities such as the correlation length.

In Sec. 2.2.1, the central object was the transfer operator $E$. As compared to Eq. (8), we have omitted the site-dependency index $[s]$. It is easy to see that – up to local unitaries – the MPS is completely determined by the transfer operator $E$: If $A$ and $B$ give rise to the same transfer operator, $\sum A_i \otimes \bar{A}_i = \sum B_i \otimes \bar{B}_i$, then we must have $A_i = \sum_j v_{ij} B_j$, with $V = (v_{ij})$ an isometry. (This is nothing but the statement that two purifications of a mixed state are identical up to an isometry on the purifying system, which can easily be proven using the Schmidt decomposition, cf. Sec. 2.5 of Ref. [29]; here, the index $i$ corresponds to the purifying system.) In the following, we would like to see how specific properties can be extracted from the transfer operator.

One property which is of particular interest are two-point correlations. To obtain those, let us first define
$$ E_O = \sum_{ij} \langle j| O |i\rangle\, A_i \otimes \bar{A}_j. $$
Then, the correlation between operator $A$ at site 1 and $B$ at site $j+2$ on a chain of length $N \to \infty$ is
$$ \langle A_1 B_{j+2} \rangle = \frac{\mathrm{tr}\big[E_A\, E^j\, E_B\, E^{N-j-2}\big]}{\mathrm{tr}\big[E^N\big]}. \qquad (10) $$
If we now assume that $E$ has a unique maximal eigenvalue $\lambda_1$, which we normalize to $\lambda_1 = 1$, with eigenvectors $|r\rangle$ and $\langle l|$, then for large $N$, $E^N \to |r\rangle\langle l|$, and thus Eq. (10) converges to $\langle l| E_A E^j E_B |r\rangle$. If we now expand
$$ E^j = |r\rangle\langle l| + \sum_{k \ge 2} \lambda_k^j\, |r_k\rangle\langle l_k| $$
(where $|\lambda_k| < 1$ for $k \ge 2$), we find that
$$ \langle A_1 B_{j+2} \rangle = \langle l| E_A |r\rangle\, \langle l| E_B |r\rangle + \sum_{k \ge 2} \lambda_k^j\, \langle l| E_A |r_k\rangle\, \langle l_k| E_B |r\rangle, $$
and thus decays exponentially with $j$; in particular, the correlation length $\xi$ is given by the ratio of the second largest to the largest eigenvalue of $E$,
$$ \xi = -\frac{1}{\log \big| \lambda_2 / \lambda_1 \big|}, $$
i.e., it is determined solely by simple spectral properties of the transfer operator. Let us note that the same quantity, i.e. the ratio between the two largest eigenvalues, also bounds the gap of the parent Hamiltonian [13, 30].

The proof above that correlations in MPS decay exponentially is not restricted to the case of a unique maximal eigenvalue of the transfer operator: In case it is degenerate, this implies that the correlation function has a long-ranged part (i.e., one which is not decaying), such as in the GHZ state, while any other contribution still decays exponentially. In particular, MPS cannot exhibit algebraically decaying correlations, and in case MPS are used to simulate critical systems, one has to take into account that a critical decay of correlations in MPS simulations is always achieved by approximating it by a sum of exponentials.
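In code, extracting the correlation length from a translationally invariant MPS tensor is a one-liner on top of diagonalizing the transfer operator (a sketch of our own):

```python
import numpy as np

rng = np.random.default_rng(3)
d, D = 2, 3

# A random translationally invariant MPS tensor: d matrices of size D x D
A = rng.normal(size=(d, D, D))

# Transfer operator E = sum_i A_i (x) conj(A_i), a D^2 x D^2 matrix
E = sum(np.kron(A[i], np.conj(A[i])) for i in range(d))

# Correlation length from the two largest eigenvalues (in modulus)
lam = np.linalg.eigvals(E)
lam = lam[np.argsort(-np.abs(lam))]
xi = -1.0 / np.log(np.abs(lam[1] / lam[0]))
print(xi)
```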
As we have seen, MPS are very well suited for simulating ground state properties of one-dimensional systems. But what if we want to go beyond one-dimensional systems and, e.g., study interacting spin systems in two dimensions? Two-dimensional systems can exhibit a rich variety of phenomena, such as topologically ordered states [31, 32], which are states distinct from those in the trivial phase, yet which do not break any (local) symmetry. Moreover, two-dimensional spin systems can be highly frustrated due to the presence of large loops in the interaction graph, and even classical two-dimensional spin glasses can be hard to solve [33]. In the following, we will focus on the square lattice without loss of generality.

A first idea to simulate two-dimensional systems would be to simply use an MPS, by choosing a one-dimensional ordering of the spins in the two-dimensional lattice. While this approach has been applied successfully (see, e.g., Ref. [34]), it cannot reproduce the entanglement features of typical ground states in two dimensions as one increases the system size: As we have discussed in Section 1.1, two-dimensional systems also satisfy an area law, i.e., in the ground state we expect the entanglement of a region $A$ with its complement to scale like its boundary, $S(\rho_A) \sim |\partial A|$. To obtain an ansatz with such an entanglement scaling, we follow the same route as in the construction of MPS: We consider each site as being composed of four $D$-dimensional virtual subsystems, place each of them in a maximally entangled state $|\omega_D\rangle$ with the corresponding subsystem of each of the adjacent sites, and finally apply a linear map $P_s : \mathbb{C}^D \otimes \mathbb{C}^D \otimes \mathbb{C}^D \otimes \mathbb{C}^D \to \mathbb{C}^d$ at each site $s$ to obtain a description of the physical state on a 2D lattice of $d$-level systems; the construction is illustrated in Fig. 6.
Fig. 6: PEPS construction for a 2D square lattice, where we have omitted the site-dependence
$P \equiv P_s$ of the maps $P_s$.

Due to the way they are constructed, these states are called Projected Entangled Pair States (PEPS). Again, we can define five-index tensors $A^{[s]} = A^{[s]}_{i,\alpha\beta\gamma\delta}$, where now
$$ P_s = \sum_{i,\alpha,\beta,\gamma,\delta} A^{[s]}_{i,\alpha\beta\gamma\delta}\, |i\rangle\langle\alpha,\beta,\gamma,\delta|, $$
and express the PEPS in Fig. 6 graphically as a tensor network (where we have omitted the tensor labels). Similar to the result in one dimension, one can show that PEPS approximate ground states of local Hamiltonians well as long as the density of states grows at most polynomially with the energy [35, 36], and thereby provide a good variational ansatz for two-dimensional systems. (Note, however, that it is not known whether all 2D states which obey an area law are approximated well by PEPS.)

Let us next consider what happens if we try to compute expectation values of local observables for PEPS. For simplicity, we first discuss the evaluation of the normalization $\langle\Psi|\Psi\rangle$, which is obtained by sandwiching the ket and bra tensor networks of $|\Psi\rangle$ (11). This can again be expressed using transfer operators $\mathbb{E}$ (the $\mathbb{E}$ should be thought of as being "viewed from the top"), leaving us with the task of contracting a two-dimensional network of $\mathbb{E}$ tensors. [This easily generalizes to the computation of expectation values, where some of the $\mathbb{E}$ have to be modified similarly to Eq. (9).] Different from the case of MPS, there is no one-dimensional structure which we can use to reduce this problem to matrix multiplication. In fact, it is easy to see that independent of the contraction order we choose, the cluster of tensors we get (such as a rectangle) will at some point have a boundary of a length comparable to the linear system size. That is, we need to store an object with a number of indices proportional to $\sqrt{N}$ – and thus an exponential number of parameters – at some point during the contraction, making it impossible to contract such a network efficiently. (Indeed, it can be proven that such a contraction is a computationally hard problem [10].)

This means that if we want to use PEPS for variational calculations in two dimensions, we have to make use of some approximate contraction scheme, which of course should have a small and ideally controlled error. To this end, we proceed as follows [37]: Consider the contraction of a two-dimensional PEPS with open boundary conditions (12). Now consider the first two columns, and block the two tensors in each row into a new tensor $F$ (with vertical bond dimension $D^2$) (13). This way, we have reduced the number of columns in (12) by one. Of course, this came at the cost of squaring the bond dimension of the first column, so this doesn't help us yet. However, what we do now is to approximate the right-hand side of (13) by an MPS with a (fixed) bond dimension $\alpha D$ for some $\alpha$. We can then iterate this procedure column by column, thereby contracting the whole network, and at any point, the size of our tensors stays bounded. It remains to be shown that the elementary step of approximating an MPS $|\Phi\rangle$ [such as the r.h.s.
of (13)] by an MPS $|\Psi\rangle$ with smaller bond dimension can be done efficiently: To this end, it is sufficient to note that the overlap $\langle\Phi|\Psi\rangle$ is linear in each tensor $A^{[s]}$ of $|\Psi\rangle$, and thus, maximizing the overlap
$$ \frac{\big|\langle\Phi|\Psi\rangle\big|^2}{\langle\Psi|\Psi\rangle} $$
can again be reduced to solving a generalized eigenvalue problem, just as the energy minimization for MPS in the one-dimensional variational method. Differently speaking, the approximate contraction scheme succeeds by reducing the two-dimensional contraction problem to a sequence of one-dimensional contractions, i.e., it is based on a dimensional reduction of the problem.

This shows that PEPS can be contracted approximately in an efficient way. The scaling in $D$ is naturally much less favorable than in one dimension: for the most simple approach one finds a scaling of $D^{12}$ for open boundaries, which using several tricks can be improved down to $D^8$. Yet, the method is limited to much smaller $D$ as compared to the MPS ansatz. It should be noted that the approximate contraction method we just described has a controlled error, as we know the error made in each approximation step. Indeed, the approximation is very accurate as long as the system is short-range correlated, and the accuracy of the method is rather limited by the $D$ needed to obtain a good enough approximation of the ground state. Just as in one dimension, we can use this approximate contraction method to build a variational method for two-dimensional systems by successively optimizing over individual tensors [37].
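For intuition, here is a brute-force exact contraction of a tiny $2 \times 2$ PEPS (a toy example of our own, with index conventions chosen for this snippet); since the full state vector has $d^{L^2}$ entries for an $L \times L$ lattice, this is exactly the exponential cost that the approximate scheme above avoids:

```python
import numpy as np

rng = np.random.default_rng(4)
d, D = 2, 2

# A 2x2 open-boundary PEPS: each corner tensor has one physical index
# and two virtual indices.
A = rng.normal(size=(d, D, D))   # site (0,0): (phys, right, down)
B = rng.normal(size=(d, D, D))   # site (0,1): (phys, left, down)
C = rng.normal(size=(d, D, D))   # site (1,0): (phys, right, up)
F = rng.normal(size=(d, D, D))   # site (1,1): (phys, left, up)

# Contract the four bonds to obtain the full 4-site state vector; for an
# L x L lattice this object has d^(L*L) entries, so exact contraction is
# only feasible for very small systems.
psi = np.einsum('iab,jac,kdb,ldc->ijkl', A, B, C, F).reshape(-1)
print(psi @ psi)   # the norm <Psi|Psi>, cf. Eq. (11), for this tiny example
```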
The PEPS construction is not limited to square lattices, but can be adapted to other lattices, higher dimensions, and even arbitrary interaction graphs. Clearly, the approximate contraction scheme we just presented works for any two-dimensional lattice, and in fact for any planar graph. In order to approximately contract systems in more than two dimensions, note that the approximate contraction scheme is essentially a scheme for reducing the dimension of the problem by one; thus, in order to contract e.g. three-dimensional systems, we can nest two layers of the scheme. In cases with a highly connected PEPS graph (e.g., when considering systems with highly connected interaction graphs such as orbitals in a molecule), one can of course still try to find a sequential contraction scheme, though other contraction methods might be more promising.

The contraction method described in Section 3.1.1 is not the only contraction scheme for PEPS. One alternative method is based on renormalization ideas [38–40]: There, one takes blocks of e.g. $2 \times 2$ tensors and tries to approximate them by a tensor with lower bond dimension by appropriate truncation. Finding the best truncation scheme requires exact knowledge of the environment, i.e., the contraction of the remaining tensor network. Since this is as hard as the original problem, heuristic methods to approximate the environment (such as to only contract a small number of surrounding tensors exactly, and imposing some boundary condition beyond that) have been introduced. While these approximations are in principle less accurate and the error is less controlled, their more favorable scaling allows for larger $D$ and thus potentially better approximations of the ground state.

Another approach to speed up PEPS contraction is using Monte Carlo sampling [41–43]: We can always write
$$ \frac{\langle\Psi| O |\Psi\rangle}{\langle\Psi|\Psi\rangle} = \sum_i p(i)\, \frac{\langle i| O |\Psi\rangle}{\langle i|\Psi\rangle}, \qquad (14) $$
where the sum runs over an orthonormal basis $|i\rangle$, and $p(i) = |\langle i|\Psi\rangle|^2 / \langle\Psi|\Psi\rangle$; in particular, we want to consider the local spin basis $i = (i_1,\dots,i_N)$. If we can compute $\langle i|\Psi\rangle$ and $\langle i| O |\Psi\rangle$ (where the latter reduces to the former if $O$ is a local operator), then we can use Monte Carlo sampling to approximate the expectation value $\langle\Psi| O |\Psi\rangle$. In particular, for PEPS $\langle i|\Psi\rangle$ can again be evaluated by contracting a two-dimensional tensor network; however, this network now has bond dimension $D$ rather than $D^2$. Thus, we can apply any of the approximate contraction schemes described before, but we can go to much larger $D$ with the same computational resources; it should be noted, however, that the number of operations needs to be multiplied with the number $M$ of sample points taken, and that the accuracy of Monte Carlo sampling improves as $1/\sqrt{M}$.

Just as MPS, PEPS are also very well suited for the construction of solvable models. First of all, many relevant states have exact PEPS representations: For instance, the 2D cluster state underlying measurement-based computation [6] is a PEPS with $D = 2$ [7], as well as various topological models such as Kitaev's toric code state [9, 22, 32] or Levin and Wen's string-net models [44–46]. A particularly interesting class of exact PEPS ansatzes is obtained by considering classical spin models $H(s_1,\dots,s_N)$, such as the 2D Ising model $H = -\sum s_i s_j$, and defining a quantum state
$$ |\psi\rangle = \sum_{s_1,\dots,s_N} e^{-\beta H(s_1,\dots,s_N)/2}\, |s_1,\dots,s_N\rangle. $$
This state reproduces the correlations of operators diagonal in the $z$ basis as the classical Gibbs state at inverse temperature $\beta$. On the other hand, this state has a PEPS representation (called the "Ising PEPS") with $P = |0\rangle\langle a,a,a,a| + |1\rangle\langle b,b,b,b|$, where $\langle a|a\rangle = \langle b|b\rangle = 1$ and $\langle a|b\rangle = \langle b|a\rangle = e^{-\beta}$ [9]. This implies that in two dimensions, PEPS (e.g., the Ising PEPS at the critical $\beta$) can exhibit algebraically decaying correlations, which implies that any corresponding Hamiltonian must be gapless. (This follows from the so-called "exponential clustering theorem" [47, 48], which states that ground states of gapped Hamiltonians have exponentially decaying correlation functions.) This should be contrasted with the one-dimensional case, where we have seen that any MPS has exponentially decaying correlations and is the ground state of a gapped parent Hamiltonian.

Clearly, parent Hamiltonians can be defined for PEPS just the same way as for MPS: The reduced density operator $\rho_{k \times \ell}$ of a region of size $k \times \ell$ lives on a $d^{k\ell}$-dimensional system, yet can have rank at most $D^{2(k+\ell)}$. Thus, by growing both $k$ and $\ell$ we eventually reach the point where $\rho_{k \times \ell}$ becomes rank deficient, giving rise to a non-trivial parent Hamiltonian $H$ with local terms $h$ projecting onto the kernel of $\rho_{k \times \ell}$. Again, just as in one dimension it is possible to devise conditions under which the ground state of $H$ is unique [49], as well as modified conditions under which the ground space of $H$ has a topological structure [22], which turns out to be related to "hidden" (i.e., virtual) symmetries of the PEPS tensors.
Unlike in one dimension, these parent Hamiltonians are not always gapped (such as for the critical Ising PEPS discussed above); however, techniques similar to those used in one dimension can be used to prove a gap for PEPS parent Hamiltonians under certain conditions [24].

Up to now, our discussion has been focused on ground states of many-body systems. However, the techniques described here can also be adapted to simulate thermal states as well as time evolution of systems governed by local Hamiltonians. In the following, we will discuss the implementation for one-dimensional systems; the generalization to 2D and beyond is straightforward.

Let us start by discussing how to simulate time evolution. (This will also form the basis for the simulation of thermal states.) We want to study how an initial MPS $|\Psi\rangle$ changes under the evolution with $e^{iHt}$; w.l.o.g., we consider $H$ to be nearest-neighbor. To this end, we perform a Trotter decomposition
$$ e^{iHt} \approx \big( e^{iH_{\mathrm{even}} t/M}\, e^{iH_{\mathrm{odd}} t/M} \big)^M, $$
where we split $H = H_{\mathrm{even}} + H_{\mathrm{odd}}$ into even and odd terms (acting between sites $(1,2), (3,4), \dots$ and $(2,3), (4,5), \dots$, respectively), such that both $H_{\mathrm{even}}$ and $H_{\mathrm{odd}}$ are sums of non-overlapping terms. For large $M$, the Trotter expansion becomes exact, with the error scaling like $O(1/M)$. We can now write
$$ e^{iH_{\mathrm{even}} \tau} = \bigotimes_{i=1,3,5,\dots} e^{i h_{i,i+1} \tau} $$
(with $\tau = t/M$), and similarly for $e^{iH_{\mathrm{odd}} \tau}$. Thus, after one time step $\tau$, the initial MPS is transformed into a tensor network whose lowest line is the initial MPS, and whose next two lines are the evolution by $H_{\mathrm{even}}$ and $H_{\mathrm{odd}}$ for a time $\tau$, respectively. We can proceed this way and find that the state after a time $t$ is described as the boundary of a two-dimensional tensor network. We can then use the same procedure as for the approximate contraction of PEPS (proceeding row by row) to obtain an MPS description of the state at time $t$ [5]. A caveat of this method is that it only works well as long as the state has low entanglement at all times, since only then a good MPS approximation of the state exists [4, 50]. While this holds for low-lying excited states with a small number of quasiparticles, it is not true after a quench, i.e., a sudden change of the overall Hamiltonian of the system [51, 52]. However, this does not necessarily rule out the possibility to simulate time evolution using tensor networks, since in order to compute an expectation value $\langle\Psi| e^{-iHt} O e^{iHt} |\Psi\rangle$, one only needs to contract a two-dimensional tensor network with no boundary, which can not only be done along the time direction (row-wise) but also along the space direction (column-wise), where such bounds on the correlations do not necessarily hold; indeed, much longer simulation times have been obtained this way [53].

In the same way as real time evolution, we can also implement imaginary time evolution; and since $e^{-\beta H}$ acting on a random initial state approximates the ground state for $\beta \to \infty$, this can be used as an alternative algorithm for obtaining MPS approximations of ground states.
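The following self-contained sketch (a toy example of our own, using dense matrices on a short transverse-field Ising chain rather than an actual MPS) illustrates the even/odd Trotter splitting and its error decreasing with the number of steps $M$:

```python
import numpy as np
from scipy.linalg import expm

N, t = 6, 1.0
sz = np.diag([1., -1.]); sx = np.array([[0., 1.], [1., 0.]])

def two_site(op1, op2, i, N):
    """op1 (x) op2 acting on sites i, i+1 of an N-site chain."""
    mats = [np.eye(2)] * N
    mats[i], mats[i + 1] = op1, op2
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Ising chain with a transverse field attached to each bond term
terms = [two_site(sz, sz, i, N) + two_site(sx, np.eye(2), i, N)
         for i in range(N - 1)]
H_even = sum(terms[0::2]); H_odd = sum(terms[1::2]); H = H_even + H_odd

U_exact = expm(-1j * H * t)
for M in [1, 4, 16, 64]:
    U_step = expm(-1j * H_even * t / M) @ expm(-1j * H_odd * t / M)
    U_trot = np.linalg.matrix_power(U_step, M)
    print(M, np.linalg.norm(U_trot - U_exact, 2))   # error shrinks ~ 1/M
```

In an MPS simulation, the same splitting is applied, but each $e^{ih_{i,i+1}\tau}$ is absorbed into the tensor network and the bond dimension is truncated after every step.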
In order to simulate thermal states, we use Matrix Product Density Operators (MPDOs) [54],
$$ \rho = \sum_{\substack{i_1,\dots,i_N\\ j_1,\dots,j_N}} \mathrm{tr}\big[ A^{[1]}_{i_1,j_1} \cdots A^{[N]}_{i_N,j_N} \big]\, |i_1,\dots,i_N\rangle\langle j_1,\dots,j_N|, $$
where each tensor $A^{[s]}$ now has two physical indices, one for the ket and one for the bra layer. We can then write the thermal state as $e^{-\beta H} = e^{-\beta H/2}\, \mathbb{1}\, e^{-\beta H/2}$ and use imaginary time evolution, starting from the maximally mixed state $\mathbb{1}$ (the thermal state at $\beta = 0$), which can again be transformed into an MPDO with bounded bond dimension using approximate contraction [54].

There is a number of other entanglement-based ansatzes beyond MPS and PEPS for interacting quantum systems, some of which we will briefly sketch in the following.

Firstly, there is the Multiscale Entanglement Renormalization Ansatz (MERA) [55], which is an ansatz for scale-invariant systems (these are systems at a critical point where the Hamiltonian is gapless, and which have algebraically decaying correlation functions), and which incorporates the scale invariance in the ansatz.
Fig. 7: The Multi-Scale Entanglement Renormalization Ansatz (MERA) in 1D. (The left and right boundaries are connected.)

A first step towards a scale-invariant ansatz would be to choose a tree-like tensor network. However, such an ansatz will not have sufficient entanglement between different blocks. Thus, one adds additional disentanglers which serve to remove the entanglement between different blocks, which gives rise to the tensor network shown in Fig. 7. In order to obtain an efficiently contractible tensor network, one chooses the tensors to be unitaries/isometries in the vertical direction, such that each tensor cancels with its adjoint. It is easy to see that this way, for any local $O$, most tensors in the tensor network for $\langle\Psi| O |\Psi\rangle$ cancel, and one only has to evaluate a tensor network of the size of the depth of the MERA, which is logarithmic in its length [55]. The MERA ansatz is not restricted to one dimension and can also be used to simulate critical systems in 2D and beyond [56].

A different variational class is obtained by studying states for which expectation values can be computed efficiently using Monte Carlo sampling. Following Eq. (14), this requires (for local quantities $O$) that we can compute $\langle i|\Psi\rangle$ efficiently for all $i = (i_1,\dots,i_N)$. One class of states for which this holds is formed by MPS, which implies that we can evaluate expectation values for MPS using Monte Carlo sampling [41, 42] (note that the scaling in $D$ is more favorable, since $\langle i|\Psi\rangle$ can be computed in time $\propto N D^2$); a sketch of such a Monte Carlo estimator for MPS is given below. This can be extended to the case where $\langle i|\Psi\rangle$ is a product of efficiently computable objects, such as products of MPS coefficients defined on subsets of spins: We can arrange overlapping one-dimensional strings in a 2D geometry and associate to each of them an MPS, yielding a class known as string-bond states [41, 57], which combines a flexible geometry with the favorable scaling of MPS-based methods. We can also consider $\langle i|\Psi\rangle$ to be a product of coefficients each of which only depends on the spins $i_k$ supported on a small plaquette, where the lattice is covered with overlapping plaquettes, yielding a family of states known as Entangled Plaquette States (EPS) [58] or Correlator Product States (CPS) [59], which again yields an efficient algorithm with flexible geometries. In all of these ansatzes, the energy is minimized using a gradient method, which is considerably facilitated by the fact that the gradient can be sampled directly without the need to first compute the energy landscape.

In order to simulate infinite lattices, it is possible to extend MPS and PEPS to work for infinite systems: iMPS and iPEPS. The underlying idea is to describe the system by an infinite MPS or PEPS with a periodic pattern of tensors such as ABABAB... (which allows the system to break translational symmetry and makes the optimization more well-behaved). Then, one fixes all tensors except for one and minimizes the energy as a function of that tensor until convergence is reached. For the optimization, one needs to determine the dependence of the energy on the selected tensor, which can be accomplished in various ways, such as using the fixed point of the transfer operator, renormalization methods (cf. Section 3.1.2), or the corner transfer matrix approach. For more information, see, e.g., [60–62].
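As promised above, here is a sketch (our own) of the Monte Carlo estimator of Eq. (14) for an MPS: the amplitude $\langle i|\Psi\rangle$ costs only $O(N D^2)$, and a Metropolis walk over spin configurations samples a diagonal observable:

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, D = 16, 2, 3

# Random translationally invariant MPS with open boundary vectors
A = rng.normal(size=(d, D, D))
l = rng.normal(size=D); r = rng.normal(size=D)

def amplitude(config):
    """<i|Psi>: a pass of D-dimensional matrix-vector products, ~ N D^2."""
    v = l
    for i in config:
        v = v @ A[i]
    return v @ r

# Metropolis sampling of p(i) ~ |<i|Psi>|^2; estimator for the diagonal
# observable sigma_z at site k, following Eq. (14).
k, n_samples = 8, 20000
config = rng.integers(d, size=N)
amp = amplitude(config)
total = 0.0
for _ in range(n_samples):
    new = config.copy()
    new[rng.integers(N)] = rng.integers(d)    # flip a random spin
    new_amp = amplitude(new)
    if rng.random() < (new_amp / amp) ** 2:   # accept with min(1, p'/p)
        config, amp = new, new_amp
    total += 1.0 if config[k] == 0 else -1.0
print(total / n_samples)   # estimate of <sigma_z> at site k
```

As noted above, the statistical error of such an estimate decreases as $1/\sqrt{M}$ with the number of samples $M$.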
Up to now, we have considered the simulation of spin systems using tensor networks. On the other hand, in many cases of interest, such as for the Hubbard model or the simulation of molecules, the underlying systems are fermionic in nature. In the following, we will discuss how tensor network methods such as MPS, PEPS, or MERA can be extended to the simulation of fermionic systems.

In order to obtain a natural description of fermionic systems, the idea is to replace each object (i.e., tensor) in the construction of PEPS or MERA by fermionic operators [63–65]. That is, in the construction of MPS and PEPS, Fig. 3 and Fig. 6, both the maximally entangled bonds and the maps $P_s$ are now built from fermionic operators and need to preserve parity; equally, in the MERA construction, Fig. 7, all unitaries and isometries are fermionic in nature. The resulting states are called fermionic PEPS (fPEPS) and fermionic MERA (fMERA).

Let us now have a closer look at a fermionic tensor network, and discuss how to compute expectation values for those states. E.g., the fPEPS construction yields a state
$$ (P_1 \otimes P_2 \otimes \cdots)(\omega_1 \otimes \omega_2 \otimes \cdots)\, |\Omega\rangle, $$
where $|\Omega\rangle$ is the vacuum state, the $\omega_i$ create entangled fermionic states between the corresponding auxiliary modes, and the $P_s$ map the auxiliary fermionic modes to the physical fermionic modes at site $s$ (leaving the auxiliary modes in the vacuum). While the product of the $\omega_i$ contains only auxiliary mode operators in a given order, the product of the $P_s$ contains the physical and auxiliary operators for each site grouped together. To compute expectation values, on the other hand, we need to move all the physical operators to the left and the virtual operators to the right in the product of the $P_s$; additionally, the virtual operators have to be arranged such that they cancel with the ones arising from the product of the $\omega_i$. Due to the fermionic anti-commutation relations, this reordering of fermionic operators results in an additional complication which was not present for spin systems. Fortunately, it turns out that there are various ways to take care of the ordering of fermionic operators at no extra computational cost: One can use a Jordan–Wigner transformation to transform the fermionic system to a spin system [63, 65]; one can map the fPEPS to a normal PEPS with one additional bond which takes care of the fermionic anticommutation relations [64]; or one can replace the fermionic tensor network by a planar spin tensor network with parity-preserving tensors, where each crossing of lines [note that a planar embedding of a network such as the 2D expectation value in Eq. (11) gives rise to crossings of lines, which correspond to the reordering of fermionic operators] is replaced by a tensor which takes care of the anticommutation rules [66, 67].

In the preceding sections, we have learned about different methods to simulate the physics of many-body systems governed by local interactions, using their specific entanglement structure. While these algorithms work very well in practice, the convergence of these methods can almost never be rigorously proven. But does this mean that we just haven't found the right proof yet, or is there a fundamental obstacle to proving that these methods work – and if yes, why do they work after all?
These questions can be answered using the tools of complexity theory, which deals with classifying the difficulty of problems. (This is going to be a very brief introduction; if you want to learn more, consult e.g. Refs. [68, 69].) Difficulty is typically classified by the resources needed to solve the problem on a computer (such as computation time or memory) as a function of the problem size, that is, the length of the problem description $n$.

Let us start by introducing some relevant complexity classes. To simplify the comparison of problems, complexity theorists usually focus on "decision problems", i.e., problems with a yes/no answer. While this seems overly restrictive, being able to answer the right yes/no question typically enables one to also find an actual solution to the problem. (E.g., one can ask whether the solution is in a certain interval, and divide this interval by two in every step.) So let us get started with the complexity classes:

• P ("polynomial time"): The class of problems which can be solved in a time which scales at most as some polynomial poly($n$) in the size of the problem. For instance, multiplying two numbers is in P – the number of elementary operations scales like the product of the number of digits of the two numbers to be multiplied. Generally, we consider the problems in P to be those which are efficiently solvable.

• NP ("non-deterministic polynomial time"): The class of all decision problems where for "yes" instances, there exists a "proof" (i.e., the solution to the problem) whose correctness can be verified in a time which is poly($n$), while for "no" instances, no such proof exists. Typical problems are decomposing a number into its prime factors (given the prime factors, their product can easily be verified), or checking whether a given graph can be colored with a certain number of colors without coloring two adjacent vertices in the same color (for a given coloring, it can be efficiently checked whether all adjacent vertices have different colors).

• BQP ("bounded-error quantum polynomial time"), the quantum version of P: The class of problems which can be (probabilistically) solved by a quantum computer in time poly($n$); a famous example of a problem in BQP which is not known to be in P is factoring, via Shor's algorithm [70].

• QMA ("quantum Merlin Arthur"), one quantum version of NP: The class of decision problems where for a "yes" instance, there exists a quantum proof (i.e., a quantum state) which can be efficiently verified by a quantum computer, and no such proof exists for a "no" instance. For example, determining whether the ground state energy of a local Hamiltonian $H$ is below some threshold $E$ (up to a certain accuracy) is in QMA – in a "yes" case, the proof would be any state $|\psi\rangle$ with energy below the threshold, and the energy $\langle\psi| H |\psi\rangle$ can be efficiently estimated by a quantum computer, using e.g. phase estimation [71].
Fig. 8: Relevant complexity classes and their relation.

The relation of the classes to each other is illustrated in Fig. 8. All of the inclusions shown are generally believed to be strict, but none of them has up to now been rigorously proven. In particular, whether NP = P is probably the most famous open problem in theoretical computer science.

An important concept is that of complete problems: A problem is complete for a class if it is as hard as any problem in the class. Completeness is established by showing that any problem in the class can be reduced to the complete problem, that is, it can be reformulated as an instance of the complete problem in poly($n$) time. The relevance of complete problems lies in the fact that they are the hardest problems in a class: This is, if a problem is e.g. NP-complete, and we believe that NP ≠ P, then this problem cannot be solved in polynomial time; and on similar grounds, QMA-complete problems cannot be solved in polynomial time by either a classical or a quantum computer. We will discuss physically relevant complete problems relating to quantum many-body systems in the next section.
In the following, we will discuss the computational complexity of quantum many-body problems. The main problem we will consider is to determine whether the ground state energy of a Hamiltonian $H$ which is a sum of few-body terms is below a certain threshold (up to a certain accuracy). As we have argued above, this problem is inside the class QMA. We will now argue that the problem is in fact QMA-complete, which implies that it is hard not only for classical, but even for quantum computers; the following construction is due to Kitaev [71, 72].

To this end, we need to consider an arbitrary problem in QMA and show that we can rephrase it as computing the ground state energy of a certain Hamiltonian. A problem in QMA is described by specifying the quantum circuit which verifies the proof, and asking whether it has an input which will be accepted with a certain probability. The verifying circuit is illustrated in Fig. 9.

Fig. 9: QMA verifier: The proof $|\phi\rangle$, together with some ancillas $|0\cdots0\rangle$, is processed by the verifying circuit $U_T \cdots U_1$; the intermediate state at step $t$ is denoted by $|\psi_t\rangle$. At $t = T$, the first qubit is measured in the computational basis to determine whether the proof is accepted.

Now let us assume that there is a valid proof $|\phi\rangle$, and run the verifier with $|\phi\rangle$ (plus ancillas $|0\cdots0\rangle$) as an input. At each time step $t$, the state of the register the verifier acts on is denoted by $|\psi_t\rangle$, and the proof is accepted if the first qubit of the state $|\psi_T\rangle$ at time $T$ is in state $|1\rangle$. Our aim will now be to build a Hamiltonian which has the "time history state"
$$|\Psi\rangle = \sum_t |\psi_t\rangle_R\, |t\rangle_T \qquad (15)$$
as a ground state. To this end, we will use three types of terms: One term, $H_{\mathrm{init}}$, will ensure that the ancillas are in the state $|0\cdots0\rangle$ at time $t = 0$, by increasing the energy of nonzero ancillas if $t = 0$. A second term, $H_{\mathrm{prop}}$, will ensure correct propagation from time $t$ to $t + 1$, i.e., $|\psi_t\rangle = U_t |\psi_{t-1}\rangle$. Finally, a last term $H_{\mathrm{final}}$ will increase the energy if the proof is not accepted, i.e., if the first qubit of $|\psi_T\rangle$ is not $|1\rangle$. Together,
$$H = H_{\mathrm{init}} + H_{\mathrm{prop}} + H_{\mathrm{final}} \qquad (16)$$
has a ground state with minimal energy if there exists a valid proof [which in that case is exactly the time history state $|\Psi\rangle$ of Eq. (15)]. On the other hand, in a "no" instance in which no valid proof exists, any ground state, which is still of the form (15), must either start with incorrectly initialized ancillas, or propagate the states incorrectly (violating $|\psi_t\rangle = U_t |\psi_{t-1}\rangle$), or be rejected by $H_{\mathrm{final}}$, thereby increasing the energy sufficiently to reliably distinguish "yes" from "no" instances. Together, this demonstrates that finding the ground state energy of the Hamiltonian (16) is a QMA-complete problem, and thus that, unless QMA-complete problems turn out to be tractable, it cannot be solved efficiently even by a quantum computer [71, 72].
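To make Kitaev's construction concrete, here is a small numerical sketch (ours, not from the original notes) for a toy verifier with a single proof qubit, one ancilla serving as the output qubit, and a single gate $U_1$, i.e., $T = 1$. The penalty projectors and the clock convention below are one standard choice: for a "yes" circuit (a CNOT which copies the proof to the output) the Hamiltonian (16) has ground energy zero, with the history state (15) as ground state, while for a "no" circuit (the identity, which never accepts) the ground energy is bounded away from zero.

```python
# Sketch: Kitaev Hamiltonian (16) for a toy verifier with T = 1.
# Register = proof qubit ⊗ ancilla/output qubit, clock = 2 levels,
# so the total Hilbert space is 4 x 2 = 8 dimensional.
import numpy as np

I2 = np.eye(2)
P0 = np.diag([1.0, 0.0])                   # |0><0|
P1 = np.diag([0.0, 1.0])                   # |1><1|
t0, t1 = P0, P1                            # clock projectors |t><t|
raise_t = np.array([[0., 0.], [1., 0.]])   # |t=1><t=0|

def kitaev_hamiltonian(U):
    """H = H_init + H_prop + H_final for a one-gate verifier U (4x4, real)."""
    # H_init: penalize the ancilla (2nd qubit) not being |0> at t = 0
    H_init = np.kron(np.kron(I2, P1), t0)
    # H_prop: standard propagation term for the single step t = 0 -> 1
    H_prop = 0.5 * (np.kron(np.eye(4), t0 + t1)
                    - np.kron(U, raise_t) - np.kron(U.T, raise_t.T))
    # H_final: penalize the output qubit (the ancilla) not being |1> at t = T
    H_final = np.kron(np.kron(I2, P0), t1)
    return H_init + H_prop + H_final

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

E_yes = np.linalg.eigvalsh(kitaev_hamiltonian(CNOT))[0]       # ~0: proof |1> accepted
E_no = np.linalg.eigvalsh(kitaev_hamiltonian(np.eye(4)))[0]   # > 0: no valid proof
print(E_yes, E_no)
```

In the real construction the energy splitting between "yes" and "no" instances shrinks only polynomially in $T$, which is what makes the reduction work for arbitrarily long verifier circuits; the toy example merely displays the qualitative behavior.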
While the Hamiltonian (16) is not spatially local, a sequence of works has proven QMA-hardness for progressively simpler Hamiltonians. In particular, nearest-neighbor Hamiltonians on a 2D square lattice of qubits are
QMA-complete [73], and so are, most notably, nearest-neighbor Hamiltonians on one-dimensional chains [74]; this substantiates the difficulties encountered in searching for convergence proofs for simulation methods even in 1D.

Let us note that there is also a classical version of the above construction, showing NP-completeness of classical ground state problems: In that case, we can use a "history state"
$$|\psi_0\rangle \otimes |\psi_1\rangle \otimes \cdots \otimes |\psi_T\rangle$$
with no coherence between different times, where we think of the states as being arranged in columns. The Hamiltonian is similar to the one before, Eq. (16), except that the individual terms are no longer conditioned on the value of a "time" register, but simply act on the columns of the history state corresponding to the respective time $t$. (Sloppily speaking, the reason that we can do so is that all terms in the Hamiltonian act in a classical basis, and classical information can be cloned.) The resulting Hamiltonian is at most four-local, acting across $2 \times 2$ plaquettes of adjacent spins. Again, it has been shown using different techniques that even classical two-dimensional models with Ising-type couplings and local fields are still NP-complete [33]; an explicit encoding of this type is sketched at the end of this section.

Given all these hardness results for determining ground state properties of classical and quantum spin systems, one might wonder why, in practice, simulation methods often work very well. First, let us note that these hardness results tell us why in many cases we cannot prove that numerical methods work: Such a proof would require identifying conditions which exclude all hard instances while still covering the relevant problems, and then finding a proof relying exactly on these conditions. Indeed, there are many properties which separate practical problems from the hard instances we discussed: For instance, in practice one usually considers translationally invariant (or nearly translationally invariant) systems (though there exist hardness results for translationally invariant systems even in one dimension [75]), and the systems considered will have various symmetries. Even more importantly, all the QMA-complete models known have a spectral gap of at most $1/\mathrm{poly}(n)$, as opposed to the typically constant gap in many problems of interest. On the other hand, the NP-complete Hamiltonian outlined above is completely classical and thus has a constant gap, while still being NP-complete. Overall, while there is no doubt that numerical methods are extremely successful in practice, computational complexity tells us that we have to carefully assess their applicability in every new scenario. It should also be pointed out that the fact that a family of problems (such as ground state problems) is complete for a certain complexity class addresses only the worst-case complexity, and in many cases, typical instances of the same problem can be solved much more efficiently.
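As the promised illustration of the classical side, here is a small sketch (ours; Ref. [33] proceeds via a different, more careful reduction) of the textbook encoding of the NP-hard MaxCut problem into an antiferromagnetic Ising model: a spin configuration partitions the vertices into two sets, each cut edge lowers the energy, so minimizing the Ising energy maximizes the cut, and an efficient ground-state solver would hence solve MaxCut. The brute-force search below is for illustration only and scales exponentially in the number of spins.

```python
# Sketch: encoding MaxCut into a classical Ising ground state problem.
# H(s) = sum over edges (i,j) of s_i * s_j with s_i = +/-1; each cut edge
# contributes -1 and each uncut edge +1, so minimizing H maximizes the cut.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
n = 4

def ising_energy(spins):
    return sum(spins[i] * spins[j] for i, j in edges)

# Brute force over all 2^n configurations (exponential; illustration only).
best = min(product([-1, 1], repeat=n), key=ising_energy)
cut_size = sum(1 for i, j in edges if best[i] != best[j])
print(best, ising_energy(best), cut_size)
```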
In this lecture, we have given an overview of condensed matter applications of quantum information methods, and in particular of entanglement theory. The main focus was on entanglement-based ansatzes for the description and simulation of quantum many-body systems. We started by discussing the area law for the entanglement entropy, which is obeyed by ground states of local interactions, and used this to derive the Matrix Product State (MPS) ansatz, which is well suited to describe the physics of such systems. We showed that the one-dimensional structure of MPS allows for the efficient evaluation of expectation values, and that this can be used to build a variational algorithm for the simulation of one-dimensional systems. We have also shown how MPS appear as ground states of local "parent Hamiltonians", which makes them suitable for constructing solvable models, and thus for the study of specific models as well as of the general structure of quantum phases.

We have then discussed Projected Entangled Pair States (PEPS), which naturally generalize MPS and are well suited for the description of two-dimensional systems, and we have shown how approximation methods can be used to implement efficient PEPS-based simulations. Again, PEPS can be used to construct solvable models in much the same way as MPS. We have subsequently demonstrated that MPS and PEPS can be used to simulate the time evolution and thermal states of systems governed by local Hamiltonians. Moreover, we have discussed other tensor network based approaches, such as MERA for scale-invariant systems or iMPS and iPEPS for infinite systems, and concluded with a discussion of how to apply tensor network methods to fermionic systems.

Finally, we have introduced another field in which quantum information tools are useful for understanding condensed matter systems, namely the field of computational complexity. Complexity tools allow us to understand which problems we can or cannot solve. In particular, we have discussed the class QMA, whose complete problems are hard even for quantum computers, and we have argued that many ground state problems, including those on 2D lattices of qubits and on 1D chains, are complete for this class. While this does not invalidate the applicability of simulation methods to these problems, it tells us that special care has to be taken, and it points out why we cannot hope for convergence proofs without imposing further conditions.
References

[1] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A, 2046 (1996), quant-ph/9511030.
[2] J. Eisert, M. Cramer, and M. Plenio, Rev. Mod. Phys., 277 (2010), arXiv:0808.3773.
[3] M. Hastings, J. Stat. Mech. P08024 (2007), arXiv:0705.2024.
[4] F. Verstraete and J. I. Cirac, Phys. Rev. B, 094423 (2006), cond-mat/0505140.
[5] G. Vidal, Phys. Rev. Lett., 040502 (2004), quant-ph/0310089.
[6] R. Raussendorf and H. J. Briegel, Phys. Rev. Lett., 5188 (2001), quant-ph/0010033.
[7] F. Verstraete and J. I. Cirac, Phys. Rev. A, 060302 (2004), quant-ph/0311130.
[8] C. K. Majumdar and D. Ghosh, J. Math. Phys., 1388 (1969).
[9] F. Verstraete, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, Phys. Rev. Lett., 220601 (2006), quant-ph/0601075.
[10] N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett., 140506 (2007), quant-ph/0611050.
[11] U. Schollwöck, Rev. Mod. Phys., 259 (2005), cond-mat/0409292.
[12] D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Quant. Inf. Comput., 401 (2007), quant-ph/0608197.
[13] M. Fannes, B. Nachtergaele, and R. F. Werner, Commun. Math. Phys., 443 (1992).
[14] J. Eisert, Phys. Rev. Lett., 260501 (2006), quant-ph/0609051.
[15] N. Schuch, I. Cirac, and F. Verstraete, Phys. Rev. Lett., 250501 (2008), arXiv:0802.3351.
[16] S. R. White, Phys. Rev. Lett., 2863 (1992).
[17] F. Verstraete, D. Porras, and J. I. Cirac, Phys. Rev. Lett., 227205 (2004), cond-mat/0404706.
[18] U. Schollwöck, Ann. Phys., 96 (2011), arXiv:1008.3477.
[19] F. Verstraete, V. Murg, and J. I. Cirac, Advances in Physics, 143 (2008).
[20] I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Phys. Rev. Lett., 799 (1987).
[21] I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Commun. Math. Phys., 477 (1988).
[22] N. Schuch, I. Cirac, and D. Pérez-García, Ann. Phys., 2153 (2010), arXiv:1001.3807.
[23] N. Schuch, D. Poilblanc, J. I. Cirac, and D. Pérez-García, Phys. Rev. B, 115108 (2012), arXiv:1203.4816.
[24] N. Schuch, D. Perez-Garcia, and I. Cirac, Phys. Rev. B, 165139 (2011), arXiv:1010.3732.
[25] D. Perez-Garcia, M. Wolf, M. Sanz, F. Verstraete, and J. Cirac, Phys. Rev. Lett., 167202 (2008), arXiv:0802.0447.
[26] M. Sanz, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, Phys. Rev. A, 042308 (2009), arXiv:0901.2223.
[27] F. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B, 064439 (2010), arXiv:0910.1811.
[28] X. Chen, Z. Gu, and X. Wen, Phys. Rev. B, 035107 (2011), arXiv:1008.3745.
[29] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).
[30] B. Nachtergaele, Commun. Math. Phys., 565 (1996).
[31] X.-G. Wen, Quantum Field Theory of Many Body Systems (Oxford University Press, 2004).
[32] A. Kitaev, Ann. Phys., 2 (2003), quant-ph/9707021.
[33] F. Barahona, J. Phys. A, 3241 (1982).
[34] S. Yan, D. A. Huse, and S. R. White, Science, 1173 (2011), arXiv:1011.6114.
[35] M. B. Hastings, Phys. Rev. B, 085115 (2006), cond-mat/0508554.
[36] M. B. Hastings, Phys. Rev. B, 035114 (2007), cond-mat/0701055.
[37] F. Verstraete and J. I. Cirac, (2004), cond-mat/0407066.
[38] H. C. Jiang, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett., 090603 (2008), arXiv:0806.3719.
[39] Z.-C. Gu, M. Levin, and X.-G. Wen, Phys. Rev. B, 205116 (2008), arXiv:0806.3509.
[40] Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett., 160601 (2009), arXiv:0809.0182.
[41] N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett., 40501 (2008), arXiv:0708.1567.
[42] A. W. Sandvik and G. Vidal, Phys. Rev. Lett., 220602 (2007), arXiv:0708.2232.
[43] L. Wang, I. Pizorn, and F. Verstraete, Phys. Rev. B, 134421 (2011), arXiv:1010.5450.
[44] M. A. Levin and X.-G. Wen, Phys. Rev. B, 045110 (2005), cond-mat/0404617.
[45] O. Buerschaper, M. Aguado, and G. Vidal, Phys. Rev. B, 085119 (2009), arXiv:0809.2393.
[46] Z.-C. Gu, M. Levin, B. Swingle, and X.-G. Wen, Phys. Rev. B, 085118 (2009), arXiv:0809.2821.
[47] M. B. Hastings and T. Koma, Commun. Math. Phys., 781 (2006), math-ph/0507008.
[48] B. Nachtergaele and R. Sims, Commun. Math. Phys., 119 (2006), math-ph/0506030.
[49] D. Perez-Garcia, F. Verstraete, J. I. Cirac, and M. M. Wolf, Quantum Inf. Comput., 0650 (2008), arXiv:0707.2260.
[50] N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett., 30504 (2008), arXiv:0705.0292.
[51] P. Calabrese and J. Cardy, J. Stat. Mech. P04010 (2005), cond-mat/0503393.
[52] N. Schuch, M. M. Wolf, K. G. H. Vollbrecht, and J. I. Cirac, New J. Phys., 33032 (2008), arXiv:0801.2078.
[53] M. C. Bañuls, M. B. Hastings, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett., 240603 (2009), arXiv:0904.1926.
[54] F. Verstraete, J. J. Garcia-Ripoll, and J. I. Cirac, Phys. Rev. Lett., 207204 (2004), cond-mat/0406426.
[55] G. Vidal, Phys. Rev. Lett., 110501 (2008), quant-ph/0610099.
[56] G. Evenbly and G. Vidal, Phys. Rev. Lett., 180406 (2009), arXiv:0811.0879.
[57] A. Sfondrini, J. Cerrillo, N. Schuch, and J. I. Cirac, Phys. Rev. B, 214426 (2010), arXiv:0908.4036.
[58] F. Mezzacapo, N. Schuch, M. Boninsegni, and J. I. Cirac, New J. Phys., 083026 (2009), arXiv:0905.3898.
[59] H. J. Changlani, J. M. Kinder, C. J. Umrigar, and G. K.-L. Chan, arXiv:0907.4646.
[60] G. Vidal, Phys. Rev. Lett., 070201 (2007), cond-mat/0605597.
[61] J. Jordan, R. Orus, G. Vidal, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett., 250602 (2008), cond-mat/0703788.
[62] R. Orus and G. Vidal, Phys. Rev. B, 094403 (2009), arXiv:0905.3225.
[63] P. Corboz, G. Evenbly, F. Verstraete, and G. Vidal, Phys. Rev. A, 010303(R) (2010), arXiv:0904.4151.
[64] C. V. Kraus, N. Schuch, F. Verstraete, and J. I. Cirac, Phys. Rev. A, 052338 (2010), arXiv:0904.4667.
[65] C. Pineda, T. Barthel, and J. Eisert, Phys. Rev. A, 050303(R) (2010), arXiv:0905.0669.
[66] P. Corboz and G. Vidal, Phys. Rev. B, 165129 (2009), arXiv:0907.3184.
[67] P. Corboz, R. Orus, B. Bauer, and G. Vidal, Phys. Rev. B, 165104 (2010), arXiv:0912.0646.
[68] C. H. Papadimitriou, Computational Complexity (Addison-Wesley, Reading, MA, 1994).
[69] M. Sipser,