Algorithms for tensor network renormalization
G. Evenbly
Department of Physics and Astronomy, University of California, Irvine, CA 92697-4575 USA
(Dated: September 25, 2015)

We discuss in detail algorithms for implementing tensor network renormalization (TNR) for the study of classical statistical and quantum many-body systems. Firstly, we recall established techniques for how the partition function of a 2D classical many-body system or the Euclidean path integral of a 1D quantum system can be represented as a network of tensors, before describing how TNR can be implemented to efficiently contract the network via a sequence of coarse-graining transformations. The efficacy of the TNR approach is then benchmarked for the 2D classical statistical and 1D quantum Ising models; in particular the ability of TNR to maintain a high level of accuracy over sustained coarse-graining transformations, even at a critical point, is demonstrated.

PACS numbers: 05.30.-d, 02.70.-c, 03.67.Mn, 75.10.Jm
CONTENTS
I. Introduction
II. Tensor network representations of many-body systems
   A. Classical many-body systems
   B. Quantum many-body systems
III. Coarse-graining tensor networks
   A. Local approximations
   B. Truncated singular value decomposition
   C. Projective truncations
IV. Tensor network renormalization
   A. Coarse-graining step of the binary TNR scheme
   B. Optimization of tensors
   C. RG flow of tensors
   D. Algorithmic details
V. Calculation of observables
VI. Benchmark results
   A. 2D classical Ising model
   B. 1D quantum Ising model
VII. Discussion
A. Coarse-graining along a single dimension
B. Ternary TNR scheme
C. Isotropic TNR scheme
D. Reflection symmetry
E. Reduction of computational cost
F. Optimization using a larger environment
G. Achieving a scale-invariant RG flow
References
I. INTRODUCTION
Tensor network renormalization (TNR) is a recently introduced approach for coarse-graining tensor networks, with application to the efficient simulation of classical statistical many-body systems and quantum many-body systems. A key feature of TNR that differentiates it from previous methods for coarse-graining tensor networks, including Levin and Nave's tensor renormalization group (TRG) as well as other subsequently developed approaches, is the use of unitary disentanglers, which allow the removal of all short-ranged correlation at each length scale. This proper removal of short-ranged correlation allows TNR to resolve significant computational and conceptual problems encountered by previous methods.

Despite the success and usefulness of TRG, it is known to suffer a computational breakdown at or near a critical point, where the cost of maintaining an accurate effective description of the system grows quickly with each coarse-graining step, due to the accumulation of short-ranged correlation. The use of disentanglers allows TNR to prevent this accumulation, such that TNR can maintain an accurate description over repeated coarse-graining steps, or equivalently for very large system sizes, without requiring a growth of the computational cost. Previous methods for coarse-graining tensor networks, such as TRG, are also conceptually problematic if they are to be interpreted as generating a renormalization group (RG) flow in the space of tensors, in that they do not reproduce the expected structure of RG fixed points. This flaw was partially resolved with the proposal of tensor entanglement filtering renormalization (TEFR) in Ref. 6, which reproduces the proper structure of gapped RG fixed points. On the other hand, TNR fully resolves these problems, reproducing the proper structure of gapped fixed points as well as producing scale-invariant fixed points when applied to critical systems corresponding to discrete versions of conformal field theories (CFT), thus correctly realizing Wilson's RG ideas on tensor networks. By capturing scale-invariance, and producing a rescaling transformation for the lattice consistent with conformal transformations of the field theory, TNR can produce an accurate description of the fixed-point RG map, from which the critical data characterizing the CFT can then be extracted.

The use of disentanglers in TNR, and their success in preventing the retention and accumulation of short-ranged degrees of freedom, is closely related to the use and success of disentanglers in entanglement renormalization (ER) and in the multi-scale entanglement renormalization ansatz (MERA). This connection was formalized in Ref. 18, which showed that TNR, when applied to the Euclidean path integral of a quantum Hamiltonian H, can generate a MERA for ground, excited and thermal states of H. Thus TNR also provides an alternative to previous algorithms based upon variational energy minimization for obtaining optimized MERA, and also allows methods developed for extracting scale-invariant data from quantum critical systems using MERA to be generalized to classical statistical systems.

In this manuscript we introduce the numerical algorithms required to implement TNR for the study of 2D classical or 1D quantum many-body systems. Due to the use of disentanglers, which are key to the TNR approach, implementation of TNR requires more sophisticated optimization strategies than have been necessary in previous tensor RG approaches. This manuscript is organized as follows.
First we discuss the standard techniques through which the partition function of a classical system or the Euclidean path integral of a quantum system can be expressed as a tensor network. Then we discuss the general principle of local approximations on which tensor RG schemes are based, before detailing the particular projective truncations involved in the TNR approach. Optimization algorithms for the implementation of TNR are then presented, and their performance benchmarked for the 2D classical and 1D quantum Ising models.

II. TENSOR NETWORK REPRESENTATIONS OF MANY-BODY SYSTEMS
In this section we discuss how the partition function of a 2D classical statistical system or the Euclidean path integral of a 1D quantum system can each be expressed as a square-lattice tensor network, which is the starting point for the TNR approach [and for other tensor renormalization methods, such as TRG, in general].

A. Classical many-body systems
Here we describe an approach for expressing the partition function Z at temperature T of a 2D classical statistical system,

Z = ∑_{{σ}} e^{−H({σ})/T},   (1)

as a network of tensors. As a concrete example let us consider the classical Ising model on the square lattice, with Hamiltonian functional

H({σ}) = −∑_{⟨i,j⟩} σ_i σ_j,   (2)

where σ_i ∈ {+1, −1} is an Ising spin on site i. We construct a representation of the partition function as a square-lattice tensor network composed of copies of a four-index tensor A_{ijkl}, where a tensor sits in the center of every second plaquette of Ising spins according to a checkerboard tiling, such that the square lattice of A tensors is tilted 45° with respect to the lattice of Ising spins, see also Fig. 1. Notice that this corresponds to having one tensor A for every two spins. We define the tensor A to encode the four Boltzmann weights e^{σ_i σ_j / T} of the Ising spin interactions on the edges of the plaquette on which it sits,

A_{ijkl} = e^{(σ_i σ_j + σ_j σ_k + σ_k σ_l + σ_l σ_i)/T},   (3)

such that the partition function is then given by the sum over all indices,

Z = ∑_{ijk⋯} A_{ijkl} A_{mnoj} A_{krst} A_{opqr} ⋯.   (4)

This construction for expressing the partition function as a tensor network can be employed for any model with nearest-neighbor interactions on the square lattice, and can also be generalized to other lattice geometries and to models with longer-range interactions.

FIG. 1. (a) A square lattice of classical spins σ ∈ {+1, −1}. (b) The partition function of the classical system can be encoded as a square network (tilted 45° with respect to the spin lattice) of four-index tensors A_{ijkl}, with a tensor sitting in the center of every second plaquette of spins. Here each tensor A encodes the Boltzmann weights associated to the interactions of spins on the edges of the plaquette, see Eq. 3.
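As a concrete illustration of Eqs. 3 and 4, the following sketch builds the plaquette tensor A for the classical Ising model with numpy and contracts a small 2 × 2 periodic patch of the resulting network. The index ordering (top, right, bottom, left) is an assumption made only for this illustration and is not fixed by the text.

```python
import numpy as np
from itertools import product

def ising_plaquette_tensor(T):
    """Four-index tensor A_{ijkl} of Eq. (3): the Boltzmann weights of the four
    nearest-neighbour bonds around a single plaquette. Indices 0, 1 label the
    spin values sigma = +1, -1."""
    spin = [1.0, -1.0]
    A = np.zeros((2, 2, 2, 2))
    for i, j, k, l in product(range(2), repeat=4):
        si, sj, sk, sl = spin[i], spin[j], spin[k], spin[l]
        A[i, j, k, l] = np.exp((si * sj + sj * sk + sk * sl + sl * si) / T)
    return A

A = ising_plaquette_tensor(T=2.5)

# Contract a 2 x 2 patch of A tensors with periodic boundaries, i.e. a small
# finite-size instance of the sum over all indices in Eq. (4). Index order per
# tensor is assumed to be (top, right, bottom, left).
Z_patch = np.einsum('abce,fegb,chai,gifh->', A, A, A, A)
print(Z_patch)
```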
B. Quantum many-body systems

Here we describe how, given a local Hamiltonian H for a 1D quantum system, an arbitrarily precise tensor network representation of the Euclidean time evolution operator e^{−βH} can be obtained using a Suzuki-Trotter decomposition. We assume, for simplicity, that the Hamiltonian H is a sum of identical nearest-neighbor terms h,

H = ∑_r h_{r,r+1}.   (5)

We begin by expanding the time evolution operator as a product of evolutions over some small time step τ,

e^{−βH} = (e^{−τH})^{β/τ}.   (6)

The evolution e^{−τH} over small time step τ may then be approximated,

e^{−τH} ≈ e^{−τH_odd} e^{−τH_even},   (7)

where H_odd and H_even represent the contribution to H given from sites r odd or r even respectively, and an error of order O(τ²) has been introduced. [Note that one can obtain a smaller error, of order O(τ^n) for n > 2, by using a higher-order Suzuki-Trotter decomposition.]
Since H_odd is a sum of terms that act on different sites and therefore commute, e^{−τH_odd} is simply a product of two-site gates, and similarly for e^{−τH_even},

e^{−τH_odd} = ∏_{odd r} e^{−τh_{r,r+1}},   e^{−τH_even} = ∏_{even r} e^{−τh_{r,r+1}}.   (8)

Thus, if one regards each two-site gate e^{−τh} as a four-index tensor and Eqs. 8 and 7 are substituted into Eq. 6, a representation of the Euclidean path integral e^{−βH} as a square-lattice tensor network is obtained, see also Fig. 2(a). Note that this representation of e^{−βH} has incurred an error of order O(βτ), which can be diminished through use of a smaller time step τ.

While this network could potentially serve as the starting point for the TNR approach [or for another algorithm for the renormalization of a tensor network], it is desirable to perform some preliminary manipulations before employing TNR. This initial manipulation involves (i) a transformation that maps to a new square-lattice network tilted 45° with respect to the initial network, followed by (ii) coarse-graining in the Euclidean time direction. Given that the initial tensor network is highly anisotropic for small time step τ, as the operator e^{−τh} is very close to the identity, step (ii) is useful to obtain a tensor network representation of e^{−βH} that is closer to being isotropic [and thus more suitable as a starting point for TNR].

Step (i) is accomplished by performing a modified step of the TRG algorithm as follows. The singular value decomposition (SVD) is taken across a vertical partition of the gate e^{−τh},

e^{−τh} = (u√s)(√s v†),   (9)

where the square root of the singular weights s has been absorbed into each of the unitary matrices u and v, and likewise the eigen-decomposition is taken across a horizontal partition of the gate e^{−τh},

e^{−τh} = (w√d)(√d w†),   (10)

see Fig. 2(d). Here w is a unitary matrix, which follows from e^{−τh} being Hermitian, and d are the eigenvalues [which can be argued to be strictly positive for sufficiently small time step τ]. The SVD and eigen-decompositions are performed throughout the network according to the pattern indicated in Fig. 2(a), and a new square network of tensors A, tilted 45° with respect to the original, is formed by contracting groups of the resulting tensors together as indicated in Fig. 2(b-c).

In step (ii) the network of tensors A is then coarse-grained in the Euclidean time direction using standard techniques, i.e. by combining pairs of rows together and then truncating the resulting squared bond index, similar to the HOTRG method [see Appendix A for details], until the network is sufficiently isotropic in terms of its correlations. One way to examine how close the network is to being isotropic is to compute the spectra of the transfer matrices formed by tracing out the horizontal or vertical indices of a single A tensor, whose decay should match as closely as possible.
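The splitting of a two-site gate used in step (i) can be made concrete with a few lines of numpy/scipy. The particular two-site term h below (a transverse-field Ising interaction) is only an illustrative assumption, and the reshaping conventions are likewise an assumption of this sketch rather than a prescription from the text.

```python
import numpy as np
from scipy.linalg import expm

# An illustrative two-site Hamiltonian term h (transverse-field Ising type).
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
h = -np.kron(sx, sx) - 0.5 * (np.kron(sz, np.eye(2)) + np.kron(np.eye(2), sz))

tau = 0.01
gate = expm(-tau * h)                     # e^{-tau h}, a 4x4 Hermitian matrix

# Partition by lattice site (Eq. 9): group (out_1, in_1) vs (out_2, in_2),
# take the SVD, and absorb sqrt(s) into each factor.
g = gate.reshape(2, 2, 2, 2)              # indices: (out_1, out_2, in_1, in_2)
mat_site = g.transpose(0, 2, 1, 3).reshape(4, 4)
u, s, vh = np.linalg.svd(mat_site)
left = (u * np.sqrt(s)).reshape(2, 2, 4)                 # three-index tensor on site 1
right = (np.sqrt(s)[:, None] * vh).reshape(4, 2, 2)      # three-index tensor on site 2

# Partition in the time direction (Eq. 10): eigen-decomposition of the
# Hermitian gate, with eigenvalues strictly positive for small tau.
d, w = np.linalg.eigh(gate)
top = w * np.sqrt(d)                      # w sqrt(d)
bottom = (w * np.sqrt(d)).conj().T        # sqrt(d) w^dagger
assert np.allclose(top @ bottom, gate)
```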
III. COARSE-GRAINING TENSOR NETWORKS

Consider a tensor network G consisting of copies of a four-index tensor A_{ijkl} that we assume are arranged in an L × L square-lattice network with periodic boundary conditions. Our goal is to contract this network, or perhaps this network with one or more impurity tensors, in order to evaluate the scalar, denoted ⟨G⟩, associated to the network. As an exact contraction of the network G is exponentially expensive in the system size L, one must rely on approximations in order to evaluate a large network. In this section we first describe the generic concept of local approximations that could be employed to approximate such a contraction, then discuss the class of local approximation used in TRG, namely the truncated singular value decomposition (SVD), before introducing the particular class of local approximation that the TNR algorithm is based on, which we call projective truncations.

FIG. 2. (a) The imaginary time-evolution operator e^{−βH} is expressed via Suzuki-Trotter expansion as a product of two-site gates e^{−τh}. Red dashed lines denote how the gates are to be decomposed at the next step. (b) The two-site gates e^{−τh} are decomposed into a product of ternary tensors according to either a horizontal partition (accomplished via singular value decomposition) or a vertical partition (accomplished via eigen-decomposition). (c) Groups of four ternary tensors are contracted together to form four-index tensors A. (d) Depictions of the vertical and horizontal partitions of the two-site gates e^{−τh}, and definition of the four-index tensors A.

A. Local approximations
Let F denote a sub-network of tensors, for example a 2 × 2 block of tensors A, from the full network G. The key idea underlying coarse-graining methods for tensor networks is that of the local approximation: that one can safely replace a sub-network F with a different network of tensors F̃ if they differ by a small amount ε,

ε ≡ ||F − F̃||,   (11)

where we assume for convenience that F has been normalized such that ||F|| = 1. If this condition is fulfilled, then the scalar ⟨G⟩ associated to the contraction of network G will only differ by a small amount O(ε) under replacement of sub-network F by the new sub-network F̃. Note that we use the Hilbert-Schmidt norm,

||A|| = √( tTr(A ⊗ A†) ),   (12)

where 'tTr' denotes the tensor trace, or equivalently the contraction of all indices, between two tensors of equal dimensions [or two networks with matching 'open' indices], see also Fig. 3(a). In general, renormalization schemes for tensor networks, such as TRG or the focus of this manuscript, TNR, employ a pattern of local approximations over all positions on the tensor network, in conjunction with contractions, in order to generate coarser networks of tensors. For example, as illustrated in Fig. 3, local replacement of all 2 × 2 sub-networks F with a new network F̃, consisting of a four-index core tensor surrounded by three-index tensors, can result in a coarser
(L/2) × (L/2) network of new four-index tensors. Assuming that sufficient accuracy could be maintained over repeated steps, this procedure could be iterated O(log L) times, resulting in a network of O(1) linear dimension which could then be exactly contracted.

FIG. 3. (a) Depiction of [the square of] the Hilbert-Schmidt norm of a four-index tensor u. Note that a darker shade is used to represent the conjugate tensor, which is also drawn with opposite vertical orientation. (b) Given a square-lattice tensor network, we wish to replace a 2 × 2 sub-network F from the network with a different sub-network of tensors F̃. (c) The square of the difference between F and F̃ under the Hilbert-Schmidt norm is depicted, where again darker shades are used to depict conjugate tensors, which are drawn with opposite vertical orientation to regular tensors. The replacement in (b) is valid if the difference ||F − F̃|| is sufficiently small. (d) Assuming the square-lattice network is locally homogeneous, one can replace F by F̃ in all 2 × 2 blocks.

B. Truncated singular value decomposition
In principle, any form of local approximation capable of yielding a small error ε in Eq. 11 could be viable as part of a coarse-graining scheme. In the original TRG algorithm proposed by Levin and Nave, and in many of the generalizations and improvements to TRG, the local approximations that are used are based upon a truncated singular value decomposition (SVD) of single tensors [or a generalized form of the SVD known as the higher-order singular value decomposition (HOSVD)], which we now discuss.

Consider a four-index tensor A_{ijkl} where each index is of dimension χ. If the tensor is viewed as a χ² × χ² matrix according to the pairing of indices A_{[ij][kl]} then the SVD can be performed,

A_{ijkl} = ∑_{m=1}^{χ²} u_{ijm} s_{mm} v_{mkl},   (13)

where u and v are unitary according to the grouping of indices u_{[ij][m]} and v_{[m][kl]} respectively, and s is a positive diagonal matrix of singular values λ, i.e. s_{nm} = δ_{nm} λ_m, that we assume are ordered such that λ_m ≥ λ_{m+1}. If we truncate the SVD to retain only the χ′ < χ² largest singular values then the decomposition becomes approximate,

A_{ijkl} ≈ ∑_{m=1}^{χ′} u_{ijm} s_{mm} v_{mkl},   (14)

where the truncation error ε, as defined in Eq. 11, is seen to equal the square root of the sum of the squares of the discarded singular values,

ε = √( ∑_{m=χ′+1}^{χ²} (λ_m)² ).   (15)

Here we have assumed that the tensor A was normalized, ||A|| = 1, or equivalently that the singular values were normalized as √(∑_m (λ_m)²) = 1. The SVD is known to provide the optimally accurate decomposition of a tensor A into the product of a pair of tensors [connected by an index of some rank χ′], thus it has proved vitally useful as the foundation for many previous schemes for the renormalization of tensor networks.
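The truncated SVD of Eqs. 13-15 is straightforward to realize numerically. The sketch below assumes a four-index numpy array and returns the truncated factors (with the square root of the singular weights absorbed into each, one common convention) together with the truncation error of Eq. 15.

```python
import numpy as np

def truncated_svd(A, chi_keep):
    """Split a four-index tensor A_{ijkl}, viewed as the matrix A_{[ij][kl]},
    keeping the chi_keep largest singular values (Eq. 14). Returns two
    three-index factors and the truncation error of Eq. 15, with the singular
    values normalized so that ||A|| = 1."""
    d0, d1, d2, d3 = A.shape
    mat = A.reshape(d0 * d1, d2 * d3)
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    s = s / np.linalg.norm(s)                    # normalize so that ||A|| = 1
    err = np.sqrt(np.sum(s[chi_keep:] ** 2))     # Eq. 15
    left = (u[:, :chi_keep] * np.sqrt(s[:chi_keep])).reshape(d0, d1, chi_keep)
    right = (np.sqrt(s[:chi_keep])[:, None] * vh[:chi_keep, :]).reshape(chi_keep, d2, d3)
    return left, right, err

# Example: truncating a random four-index tensor of bond dimension 6.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6, 6, 6))
left, right, err = truncated_svd(A, chi_keep=6)
print(err)
```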
C. Projective truncations

The TNR approach requires use of a broader class of local approximation than those based upon the SVD, which we term projective truncations. In a projective truncation, a local sub-network F is replaced by a new local network F̃ that consists of a projector P, which satisfies P P† = P² = P, acting on some or all of the open indices of F,

F̃ = F P = F w w†,   (16)

see Fig. 4(a) for an example. Here we have expanded the projector P as the product of an isometric tensor w and its conjugate, P = w w†, where the isometry satisfies w† w = I, see also Fig. 4(b-c). [Note that, as part of the TNR algorithm, we shall also consider cases where the projector is decomposed as a more complicated product of many different isometries, see for example Fig. 6(c).]

Projective truncations are a particularly useful class of local approximation, as the figure of merit for optimizing the projector P takes a very simple form. To see this, we first expand the terms in Eq. 11,

ε² = ||F||² + ||F̃||² − tTr(F ⊗ F̃*) − tTr(F̃ ⊗ F*).   (17)

Notice that, as P P† = P, for a projective truncation we have

||F̃||² = tTr(F ⊗ F̃*) = tTr(F̃ ⊗ F*).   (18)

It follows that the expression for the replacement error ε can be simplified,

ε = √( ||F||² − ||F P||² ) = √( ||F||² − ||F w||² ),   (19)

where we have again made use of P P† = P in reaching the second equality, see also Fig. 4(d).

Let us now turn to the problem of optimizing the projector P such that the error ε of Eq. 19 is minimized. For simplicity we consider the case where the projector P decomposes as a product of a single isometry w and its conjugate, P = w w†, although a key feature of the method we discuss is that it can be applied to the more complicated case, as in Fig. 6(c), where P is represented as a product of several different isometric tensors. Notice that the error ε of Eq. 19 is minimized when ||F w|| is maximized, which follows as ||F w|| ≤ ||F||; we now discuss an iterative strategy for optimizing the isometry w to maximize the expression ||F w||.

The strategy we employ is based upon linearization of the cost function. Given that the expression ||F w||² has quadratic dependence on the isometry w [or, more specifically, it depends on both w and w†], we simplify the problem by temporarily holding w† fixed and then solving the resulting linear optimization for w, iterating these steps until w is sufficiently converged. Note that this follows the same strategy employed to optimize tensors in a MERA as described in Ref. 19, to which we refer the interested reader for more details. We begin by expressing the closed network ||F w||² in a factorized form,

||F w||² = tTr(Γ_w ⊗ w),   (20)

where Γ_w, referred to as the environment of w, represents the contraction of everything in ||F w||² excluding tensor w, see also Fig. 4(e-f). The singular value decomposition of the environment Γ_w, when considered as a matrix according to the same partition of indices for which tensor w is isometric, is then taken,

Γ_w = u s v†,   (21)

as shown in Fig. 4(g).

FIG. 4. (a) In a projective truncation a sub-network F is replaced by a new sub-network F̃, which consists of a projector P applied to the original sub-network, i.e. F̃ = F P. (b) Here we assume that P is decomposed as a product of an isometric tensor w and its conjugate, P = w w†. (c) By definition, the isometry w contracts to identity with its conjugate, w† w = I. (d) The square of the error in a projective truncation is expanded as a sum of four terms; however, given that P² = P, two of the terms cancel, see also Eq. 18. (e) The environment Γ_w of isometry w is defined as the network that results from removing a single instance of w from ||F w||², see also Eq. 20. (f) By construction, the contraction of w with its environment Γ_w is equal to ||F w||². (g) Environment Γ_w is decomposed, via singular value decomposition (SVD), into a product of isometric tensors u, v and diagonal matrix s.
The isometry w is then updated to become

w = v u†,   (22)

where it can be argued that this choice of updated isometry maximizes Eq. 20. However, as the cost function was linearized, such that w† in the environment Γ_w was held fixed, these steps of (i) computing the environment Γ_w and (ii) obtaining an updated w through the SVD of the environment Γ_w must be iterated many times, until the solution converges.
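A minimal numpy realization of the linearized update of Eqs. 20-22 is sketched below. How the environment is computed, and which grouping of its indices corresponds to the rows and columns of the matrix E, is network-specific and is treated here as an assumption.

```python
import numpy as np

def update_isometry(E):
    """Given the environment of an isometric tensor, reshaped into a matrix E
    of shape (dim_in, dim_out) with dim_out <= dim_in, return the isometry W of
    the same shape (W^dag W = I) that maximizes the overlap Re Tr(W^dag E).
    This is the SVD-based update of Eqs. (21)-(22), written in matrix form."""
    u, s, vh = np.linalg.svd(E, full_matrices=False)
    return u @ vh

# Self-consistent optimization then alternates between recomputing the
# environment (a network contraction, not shown here) and updating the
# isometry, for example:
#
#   for it in range(200):
#       E = compute_environment(w, ...)   # placeholder, network-specific
#       w = update_isometry(E)
```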
IV. TENSOR NETWORK RENORMALIZATION

Tensor network renormalization is a class of coarse-graining scheme designed to be compatible with the proper removal of all short-ranged correlations at each RG step. For any given lattice geometry there are many potential TNR schemes that fulfill this requirement. In this manuscript we focus on a particular implementation of TNR for a 2D square-lattice network that we call the binary TNR scheme, as introduced in Ref. 1, which reduces the linear dimension of the network by a factor of 2 with each RG step. In Appendix B we discuss a ternary
TNR scheme, which reduces the linear dimension of the network by a factor of 3 with each RG step, and in Appendix C we discuss an isotropic binary TNR scheme that treats both dimensions of the tensor network equally while also reducing the linear dimension of the network by a factor of 2 at each RG step. Similarly, TNR can also be implemented on other lattice geometries besides the square lattice, including those in higher dimensions. The majority of the algorithmic details we present for implementation of the binary TNR scheme carry over to other TNR schemes.

FIG. 5. The sequence of coarse-graining steps used in the binary TNR scheme in order to map an initial square lattice of tensors A, where every second row of tensors has been conjugated as described in Appendix D, to a coarser square lattice composed of tensors A'. (a) A projective truncation is made on all 2 × 2 blocks of tensors, see Fig. 6(a-c). (b) Conjugate pairs of disentanglers u are contracted to identity. (c) A projective truncation is made on all B tensors, see Fig. 6(d-e). (d) A final projective truncation is made, see Fig. 6(f-g) for details. (e) Conjugate pairs of isometries w are contracted to identity.
A. Coarse-graining step of the binary TNR scheme
The starting point for an iteration of the binary TNR scheme is a square-lattice tensor network G composed of four-index tensors A_{ijkl}, where indices are assumed to be of some dimension χ. As discussed in Sect. II, such a network could represent the partition function of a 2D classical statistical system or the Euclidean path integral of a 1D quantum system. For simplicity we assume that network G is spatially homogeneous, i.e. that all its tensors are copies of a unique tensor A_{ijkl}, while noting that the algorithm can easily be extended to deal with non-homogeneous networks [special examples, including networks with an open boundary or a defect line, can be handled using similar methods as those developed in the context of MERA, and are discussed separately in Ref. 33]. We also assume that G is invariant under complex conjugation plus reflection about the horizontal axis, as discussed further in Appendix D. Again, this assumption is not strictly necessary, but is useful in simplifying the TNR algorithm. In the case that G represents a Euclidean path integral of a quantum Hamiltonian H, the presence of this symmetry follows from H being Hermitian, while in the case that G represents the partition function of a classical system the symmetry is present if the underlying 2D classical statistical model has an axis about which it is invariant under spatial reflection. We now describe the coarse-graining steps involved in an iteration of the binary TNR scheme, which maps network G to the coarser network G' whose linear dimension has been reduced by a factor of 2, before discussing the optimization of the tensors involved and other algorithmic components in more detail.

The first step of the iteration is to apply a particular gauge change on the horizontal indices of every second row of tensors in G, as discussed in Appendix D. Here the gauge change is chosen such that it is equivalent to flipping the top-bottom indices of tensors A and taking the complex conjugate; as such we denote the transformed tensors A†. That such a gauge transformation exists follows from the assumed reflection symmetry. Next, Fig. 5 depicts the remaining steps in transforming network G into the coarser network G'. In Fig. 5(a), a projective truncation is enacted on 2 × 2 blocks of tensors A [where two of the tensors have undergone the aforementioned change of gauge as A†], the details of which are shown in Fig. 6(a). The projector P_u used at this step is represented as a product of two isometries v_L and v_R and a unitary tensor u [and their conjugates] as shown in Fig. 6(c).

FIG. 6. (a) Details of the projective truncation made at the first step of the TNR iteration; here two copies of a projector P_u, which is composed of a product of isometric and unitary tensors, are applied to a 2 × 2 block of A tensors. (b) Definition of the four-index tensor B. (c) Projector P_u is formed from isometries v_L, v_R and disentangler u [and their conjugates]. (d) Details of the projective truncation made at the second step of the TNR iteration. (e) Definition of matrix D. (f) Details of the projective truncation made at the third step of the TNR iteration. (g) Definition of the new four-index tensor A', copies of which comprise the coarse-grained square-lattice tensor network. (h) Delineation of the different dimensions {χ_u, χ_v, χ_w, χ_y} of indices on tensors {u, v_L, v_R, y_L, y_R, w}.
The unitary tensors u, which we call disentanglers, act on two neighboring indices such that, if we regard each index of the network as hosting a χ-dimensional complex vector space V_χ, they describe a mapping between vector spaces,

u : V_χ ⊗ V_χ → V_χ ⊗ V_χ.   (23)

By virtue of being unitary, the disentanglers satisfy u† u = I ⊗ I, where I is the identity operator on V_χ. Conceptually, the role of disentanglers is to remove short-range correlations that would otherwise be missed, as discussed in greater detail in Ref. 1, and they constitute the key difference between TNR and previous tensor renormalization schemes. Isometries v_L and v_R each map two indices of the network, one horizontal and one vertical, to a new index of some chosen dimension χ′ ≤ χ,

v_L : V_{χ′} → V_χ ⊗ V_χ,   v_R : V_{χ′} → V_χ ⊗ V_χ,   (24)

where the new index has been regarded as hosting a χ′-dimensional complex vector space V_{χ′}. By definition, the isometries satisfy v_L† v_L = v_R† v_R = I′, with I′ the identity operator on V_{χ′}. After the coarse-graining step of Fig. 5(a), it is useful to define a new four-index tensor B, which is defined from the block of A tensors and from u, v_L and v_R, as depicted in Fig. 6(b).

After the projective truncation implemented by projector P_u has been enacted on all 2 × 2 blocks, the network consists of B tensors interspersed with groups of isometries v_L and v_R. Next, as depicted in Fig. 5(c) and further detailed in Fig. 6(d-e), a projective truncation is made on the B tensors. Two projectors P_L and P_R are used at this step, acting on the left or right indices of each B tensor respectively, each formed as a product of isometries, P_L ≡ y_L y_L† and P_R ≡ y_R y_R†. Isometries y_L and y_R, which satisfy y_L† y_L = y_R† y_R = I′, each map two indices to a single index also assumed to be of dimension χ′,

y_L : V_{χ′} → V_{χ′} ⊗ V_{χ′},   y_R : V_{χ′} → V_{χ′} ⊗ V_{χ′}.   (25)

We then define the matrix D from enacting isometries y_L and y_R on tensor B, as shown in Fig. 6(e). It is useful, though not strictly necessary, to work in a gauge where the matrix D is diagonal and positive, which can always be achieved through proper choice of gauge on isometries y_L and y_R, such that the matrix can easily be decomposed as D = √D √D.

After projective truncations have been made on the B tensors, a final projective truncation, as shown in Fig. 5(d) and further detailed in Fig. 6(f-g), is made using projector P_w ≡ w w†, with the isometry w mapping two χ-dimensional indices to a single index of dimension χ′,

w : V_{χ′} → V_χ ⊗ V_χ,   (26)

where the isometry satisfies w† w = I′. After projector P_w has been implemented throughout the network, pairs of isometries w from neighboring cells annihilate to identity with their conjugates, as depicted in Fig. 5(e). This final step yields a coarse-grained square lattice G' of four-index tensors A', whose indices are of some specified dimension χ′ ≤ χ.

FIG. 7. The linearized environments of the tensors {v_L, v_R, u, y_L, y_R, w} involved in an iteration of the binary TNR scheme. (a-c) Environments Γ_{v_L}, Γ_{v_R} and Γ_u of the isometries v_L, v_R and disentangler u involved in the first projective truncation of the TNR iteration, as detailed in Fig. 6(a). (d-e) Environments Γ_{y_L} and Γ_{y_R} of isometries y_L and y_R from the second projective truncation of the TNR iteration, as detailed in Fig. 6(d). (f) Environment Γ_w of isometry w from the third projective truncation of the TNR iteration, as detailed in Fig. 6(f).
Notice that the new four-index tensor A' is defined from a product of various tensors obtained throughout the coarse-graining step, namely from {v_L, v_R, y_L, y_R, √D, w}, as shown in Fig. 6(g).
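To fix ideas, the sketch below constructs tensors with the index structure of Eqs. 23-26 (as matrices, i.e. with their input and output index groups already fused) and verifies the isometric constraints; the particular dimensions are arbitrary choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_isometry(dim_in, dim_out):
    """Random isometry of shape (dim_in, dim_out) with W^dag W = I, obtained
    from the QR decomposition of a random matrix (illustrative only)."""
    q, _ = np.linalg.qr(rng.standard_normal((dim_in, dim_out)))
    return q

chi, chi_p = 4, 3                            # bond dimensions chi and chi' <= chi

u  = random_isometry(chi * chi, chi * chi)   # disentangler, Eq. (23): unitary on V x V
vL = random_isometry(chi * chi, chi_p)       # isometry v_L, Eq. (24)
yL = random_isometry(chi_p * chi_p, chi_p)   # isometry y_L, Eq. (25)
w  = random_isometry(chi * chi, chi_p)       # isometry w,  Eq. (26)

assert np.allclose(u.conj().T @ u, np.eye(chi * chi))   # u^dag u = I x I
assert np.allclose(vL.conj().T @ vL, np.eye(chi_p))     # v_L^dag v_L = I'
assert np.allclose(yL.conj().T @ yL, np.eye(chi_p))     # y_L^dag y_L = I'
assert np.allclose(w.conj().T @ w, np.eye(chi_p))       # w^dag w = I'
```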
B. Optimization of tensors

Each coarse-graining iteration of the binary TNR scheme follows from a series of projective truncations. The projectors involved can be optimized using the iterative SVD update strategy described in Sect. III C, as we now discuss in more detail.

The first step of the TNR iteration, as depicted in Fig. 5(a), requires optimization of a projector P_u composed of isometries v_L and v_R and disentangler u, as shown in Fig. 6(c). To update one of these tensors one first computes its environment, where the environments Γ_{v_L}, Γ_{v_R} and Γ_u are shown in Fig. 7(a-c), then updates the tensor from the SVD of the environment as discussed in Sect. III C. Each of the tensors v_L, v_R, u should be updated in turn and the process iterated until all tensors are sufficiently converged [which typically requires of order a few hundred iterations]. The computational cost of computing each of these environments scales as O(χ⁷), assuming the indices involved in the network are all χ-dimensional.

The second projective truncation step of the TNR iteration, as depicted in Fig. 5(c), requires the optimization of isometries y_L and y_R, which again are optimized through alternating, iterative SVD updates based upon calculation of their environments Γ_{y_L} and Γ_{y_R}, as shown in Fig. 7(d,e). Note that, in order to ensure that the reflection symmetry [which was assumed to be present in the initial tensor network] is preserved, it is necessary to symmetrize the environments before performing the SVD, as described in Appendix D. The computational cost of computing each of these environments is O(χ⁶), assuming the indices involved in the network are χ-dimensional.

The third and final projective truncation of the TNR iteration, as depicted in Fig. 5(d), requires optimization of the isometry w, which can be achieved through iterative SVD updates based on the environment Γ_w depicted in Fig. 7(f). The computational cost of computing the environment Γ_w is O(χ⁶), assuming the indices involved in the network are χ-dimensional.
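The alternating optimization of the first projective truncation can be organized as in the following sketch, where `env_funcs` is assumed to supply the (matrix-reshaped) environments Γ_{v_L}, Γ_{v_R} and Γ_u of Fig. 7(a-c); the environment contractions themselves are network-specific and are not shown here.

```python
import numpy as np

def optimize_projector(tensors, env_funcs, n_sweeps=300):
    """Alternating SVD updates for the tensors {v_L, v_R, u} that form the
    projector P_u (Sect. IV B). Each tensor is updated in turn from the SVD of
    its linearized environment, and the sweep is repeated a few hundred times.
    `env_funcs[name](tensors)` must return the environment of the named tensor
    as a matrix whose rows and columns match that tensor's index grouping."""
    for _ in range(n_sweeps):
        for name in ('vL', 'vR', 'u'):
            E = env_funcs[name](tensors)
            uu, _, vvh = np.linalg.svd(E, full_matrices=False)
            tensors[name] = uu @ vvh       # update as in Eqs. (21)-(22)
    return tensors
```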
C. RG flow of tensors

The single iteration of the TNR approach described in Sect. IV A, which mapped the initial tensor network G to the coarser network G', can be repeated many times to generate a sequence of increasingly coarse-grained networks,

G^(0) → G^(1) → G^(2) → ⋯ → G^(s) → ⋯,   (27)

with G^(0) ≡ G as the initial network and G^(s) the tensor network after s iterations of TNR, where the linear dimension of the lattice G^(s) has been reduced by a factor of 2 compared to the previous network G^(s−1). Each network G^(s) consists of copies of a four-index tensor A^(s)_{ijkl}, whose indices are of dimension χ^(s), arranged in a square-lattice configuration. Here the initial dimension χ^(0) is fixed by the starting tensors A^(0)_{ijkl}, while the dimensions χ^(s) at later steps are user specified, and can be increased to improve the accuracy of the calculation at the cost of increasing the computational expense [aspects of computational efficiency will be discussed further in Sect. IV D]. Tensors A^(s+1) are defined from [four copies of] the previous tensor A^(s) under coarse-graining via a product of optimized isometric tensors,

A^(s)  —{v_L^(s), v_R^(s), u^(s), y_L^(s), y_R^(s), w^(s)}→  A^(s+1),   (28)

as depicted in Fig. 6(g), such that we can consider TNR as generating an RG flow in the space of [four-index] tensors,

A^(0) → A^(1) → A^(2) → ⋯ → A^(s) → ⋯.   (29)

Properties of the system under consideration, such as the expectation values of local observables, can be computed from both the coarse-grained tensors A^(s) and the tensors {u^(s), v_L^(s), v_R^(s), y_L^(s), y_R^(s), w^(s)} involved in the coarse-graining, as further discussed in Sect. V.
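At the top level the RG flow of Eq. 29 is simply a loop over coarse-graining iterations. The sketch below assumes a function `tnr_step` implementing one full iteration (Sects. IV A-B) and, as is standard practice in tensor RG codes (an assumption, not something spelled out in the text), normalizes the tensor after each step and records the norms, which are needed to reassemble extensive quantities such as log Z.

```python
import numpy as np

def tnr_rg_flow(A0, num_steps, chi, tnr_step):
    """Generate the sequence A^(0) -> A^(1) -> ... of Eq. (29).
    `tnr_step(A, chi)` is assumed to perform one binary TNR iteration and
    return the coarse-grained four-index tensor A'."""
    A = A0
    log_norms = []
    for s in range(num_steps):
        A = tnr_step(A, chi)
        nrm = np.linalg.norm(A)
        log_norms.append(np.log(nrm))
        A = A / nrm                 # keep the flowing tensor of order one
    return A, log_norms
```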
D. Algorithmic details

While the TNR algorithm discussed thus far is a viable implementation of the approach, we now detail how this basic TNR algorithm can be improved in a number of ways.

In most calculations it is convenient to use the same bond dimension χ^(s) for all RG steps s, i.e. such that χ^(s) = χ for s > 0 [where χ^(0) is fixed by the local dimension of the model under consideration]. However, even when using the same bond dimension throughout different coarse-graining iterations, it can still be useful, in order to maximize the efficiency of the TNR approach, to use different bond dimensions on vertical and horizontal indices of the tensors A^(s), as well as different dimensions on tensors that appear at intermediate steps of each coarse-graining iteration. Following this idea, four different refinement parameters {χ_u, χ_v, χ_w, χ_y} can be defined, each denoting the dimension of an outgoing index on a different isometric tensor as indicated in Fig. 6(h). Here χ_w and χ_y denote the dimensions of vertical and horizontal indices on A tensors respectively, while χ_u and χ_v denote dimensions of indices that only appear at intermediate steps of each TNR iteration. The ratio of dimensions should be adjusted such that the truncation errors ε incurred at the different intermediate steps become roughly equal; in practice such dimensions can be determined heuristically. For several test models it has been observed that the different optimal bond dimensions are, to good approximation, linearly related to one another [i.e. the ratios of different optimal bond dimensions remain roughly constant for a given model]. This implies that the cost scaling of an algorithmic step can be specified unambiguously in terms of a single dimension χ, without detailing the dependence on the different dimensions involved [as they are linearly related]. Note that, in the benchmark results presented in Sect. VI, we label the results of a TNR simulation by a single dimension χ, by which we mean the largest of the bond dimensions {χ_u, χ_v, χ_w, χ_y} that was used in the simulation.

The computational cost of binary TNR can be reduced by implementing an additional projective truncation on the 2 × 2 blocks of A tensors at the start of the TNR iteration, as described in Appendix E. This additional step reduces the cost of computing the environments Γ_{v_L}, Γ_{v_R} and Γ_u, see Fig. 7(a-c), from O(χ⁷) to O(χ⁶), and thus also reduces the cost of optimizing the tensors v_L, v_R and u. When utilizing the ideas of Appendix E the tensor contractions required to implement TNR are all of cost O(χ⁶) or less, thus the efficiency of the algorithm is greatly improved.

The accuracy of TNR, for a fixed bond dimension χ, can be improved by taking a larger environment into account in the optimization of the tensors, using an approach similar to that of Refs. 5 and 7 for improving standard TRG by taking the environment into consideration. Appendix F describes how the tensors involved in the TNR coarse-graining iteration can be optimized using a larger [though still local] environment, and the benefits of doing so. Note that, by also incorporating the ideas of Appendix E, the leading-order computational cost of optimizing using the larger environments remains unchanged. It is also possible to modify TNR to take account of the full environment from the network, similar to how SRG modifies TRG to take account of the full environment.
This modification, which we shall not detail in the present manuscript, requires sweeping the optimization back and forth over different scales of coarse-graining, functioning similarly to the energy minimization algorithm for optimizing a MERA.

V. CALCULATION OF OBSERVABLES
In this section we discuss how TNR can be applied to compute expectation values of local observables in classical statistical or quantum many-body systems. There are many different ways of performing this calculation; one approach could be to construct a MERA from the TNR coarse-graining transformations, as described in Ref. 18, from which expectation values could then be computed using standard MERA techniques. In this manuscript we describe a different approach based upon directly coarse-graining the network with the addition of an impurity tensor.

Let G^(0) be a homogeneous square-lattice tensor network with periodic boundaries that consists of an L × L array of copies of a four-index tensor A^(0). Assume that the sequence of T = log₂(L/2)
TNR transformations have been optimized so as to generate a sequence of coarser lattices, see Eq. 27, where G^(T) is a 2 × 2 network of tensors A^(T) that can be exactly contracted. In order to evaluate the expectation value of a local observable, we must now evaluate the tensor network G^(0) with the addition of an impurity tensor representing the local observable under consideration. Here we consider an impurity tensor M^(0) that replaces a 2 × 2 block of A tensors from G^(0).

To evaluate the impurity network we use the same projective truncations as were used to coarse-grain the homogeneous network G^(0) everywhere except in the immediate vicinity of the impurity, the presence of which is incompatible with the coarse-graining steps used for the homogeneous system, see Fig. 8(a-e). After an iteration of TNR applied to the impurity network, one obtains a new impurity network, equal to the homogeneous network G^(1) except where a new impurity M^(1) replaces a 2 × 2 block of tensors A^(1), see Fig. 8(f). The coarse-grained impurity M^(1) is defined from applying a set of two-body gates, {G_R, G_L, G_U, G_Y}, to the initial impurity tensor M^(0), as depicted in Fig. 8(g), where the gates are functions of A^(0) and the isometries {v_L, v_R, u, y_L, y_R, w} used in the TNR iteration, as depicted in Fig. 8(h). The coarse-graining transformation can be applied to the impurity network multiple times so as to generate a sequence of impurity tensors,

M^(0) → M^(1) → ⋯ → M^(T),   (30)

each embedded in an increasingly coarse-grained square-lattice network. We call the transfer operator that maps impurity tensors from one length scale to the next, which was introduced in Ref. 15, the ascending superoperator R,

M^(s+1) = R(M^(s)),   (31)

as shown in Fig. 8(g). After T = log₂(L/2)
RG steps, the impurity network contains only a single impurity tensor M^(T), in place of the 2 × 2 block of tensors A^(T) that would otherwise have been present in the homogeneous network G^(T), from which the expectation value is evaluated through the appropriate trace of M^(T) as depicted in Fig. 9. Note that the evaluation of two-point correlators, which corresponds to contracting a network with two local impurities, can be handled similarly. In this case each of the impurities will transform individually in the same way as Eq. 31 until they become adjacent to one another in the network, where they will then fuse into a single impurity after the next TNR iteration.

The evaluation of local observables can also be formulated in a more general way, using TNR to map the network on the punctured plane to an open cylinder, analogous to the logarithmic transformation in CFT. This mapping using TNR was originally formulated in Ref. 15, to which we refer the interested reader for more details. A brief summary is included here for completeness. In Fig. 10(a) we consider a finite square-lattice network from which a 2 × 2 block of tensors has been removed, leaving open indices. This network is coarse-grained using the same projective truncations except in the immediate vicinity of the open indices, which must remain untouched. The result of this is shown in Fig. 10(a-d); the square-lattice network [with an open 'hole'] is mapped to a tensor network on a finite-width cylinder. Here one end of the cylinder has free indices, which correspond to those of the open 'hole', and moving along the length of the cylinder corresponds to a change of scale in the original system, where each double row of tensors in the cylinder corresponds to an application of the ascending superoperator R as defined in Fig. 8(g).

Using the same transformations, the tensor network with a local impurity, as depicted in Fig. 10(e), is mapped to a closed cylindrical network after coarse-graining with TNR, as depicted in Fig. 10(f). Contracting this network from bottom to top is equivalent to evaluating the expectation value by coarse-graining the observable as discussed previously, see Eq. 30. However, one could also evaluate the observable by contracting the network from top to bottom, or in some other desired order. In practice, exact contraction of the network shown in Fig. 10(f) is computationally expensive for large bond dimension χ, thus it may be necessary to use an approximate method for this evaluation.

FIG. 8. (a-e) An iteration of the TNR coarse-graining transformation for a square-lattice tensor network that is homogeneous everywhere except for a 2 × 2 impurity M^(0). The projective truncations of the TNR iteration are applied as usual, see Fig. 5, neglecting those which are incompatible with the impurity tensor. (f) The coarse-grained square lattice is homogeneous everywhere except for a 2 × 2 impurity M^(1). (g) The coarse-grained impurity tensor M^(s+1) is given by enacting the ascending superoperator R, defined from two-body gates {G_R, G_L, G_U, G_Y}, on the impurity tensor M^(s). (h) Definition of the two-body gates {G_R, G_L, G_U, G_Y}.

FIG. 9. A finite network with periodic boundaries and a local impurity M^(s) is coarse-grained into a single impurity tensor M^(s+1) under the transformation depicted in Fig. 8. The expectation value of the local observable corresponding to the impurity is given from the trace of M^(s+1) as depicted.

FIG. 10. (a) An 8 × 8 square-lattice tensor network with a 2 × 2 block of tensors removed, leaving open indices. (b-d) The network is coarse-grained onto a finite-width cylinder, where each double row of tensors corresponds to an application of the ascending superoperator R as defined in Fig. 8(g). (e) An 8 × 8 network with a local impurity σ positioned on a link. (f) The network from (e) after two iterations of coarse-graining with TNR.
The approximate method we employ in this manuscript is based upon contracting the network layer by layer from top to bottom, where we approximate the boundary state as a matrix product state (MPS) and use the TEBD algorithm to apply each layer of gates from the cylinder while maintaining an MPS representation.
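At the level of code structure, the impurity calculation described above reduces to iterating the ascending superoperator and closing the final network with a trace, as in the sketch below. Here `ascend` (the gates of Fig. 8(g-h)) is a placeholder for the network-specific contraction, and the index convention (top, right, bottom, left) used in the final periodic trace is an assumption of this sketch.

```python
import numpy as np

def expectation_from_impurity(M0, A_top, layers, ascend):
    """Coarse-grain an impurity tensor through every TNR layer, Eq. (31), and
    evaluate the expectation value from the trace of the final tensors (Fig. 9).
    `ascend(M, layer)` is assumed to apply the ascending superoperator R built
    from that layer's tensors {v_L, v_R, u, y_L, y_R, w}."""
    M = M0
    for layer in layers:
        M = ascend(M, layer)
    # Periodic trace of a single four-index tensor, assuming index order
    # (top, right, bottom, left): top-bottom and left-right pairs are joined.
    num = np.einsum('ijij->', M)
    den = np.einsum('ijij->', A_top)
    return num / den
```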
VI. BENCHMARK RESULTS

A. 2D classical Ising model

In this section we provide benchmark calculations for the binary TNR algorithm, comparing against TRG, for the partition function of the 2D classical Ising model, as defined in Eqs. 1 and 2. We begin by encoding the partition function as a tensor network as discussed in Sect. II A: a four-index tensor A_{ijkl} is used to encode the four Boltzmann weights on the edges of a plaquette, as per Eq. 3, which corresponds to having one tensor A for every two spins and a tensor network with a 45° tilt with respect to the spin lattice, see Fig. 1(a-b). For convenience we then contract a 4 × 4 block of tensors A to form a new tensor A^(0) of bond dimension χ = 16, which serves as the starting point for both the TNR and TRG approaches. We apply up to 20 RG steps of either TNR or TRG, which corresponds to lattices of Ising spins with linear dimension up to L = 4 × 2²⁰ ≈ 4 × 10⁶ spins. Here we define an RG step as one which maps the square lattice to a new square lattice of the same orientation, but of half the linear dimension; note that this corresponds to two steps of TRG as defined in Ref. 2. Each of the approaches generates a sequence of coarse-grained tensors, as per Eq. 29, where copies of tensor A^(s) comprise the coarse-grained network after s RG steps.

In the implementation of TNR we enforce reflection symmetry, as discussed in Appendix D, on both spatial axes, and also employ the ideas discussed in Appendix E to reduce the leading-order cost of the TNR algorithm from scaling as O(χ⁷) to scaling as O(χ⁶) in terms of the bond dimension χ. Furthermore we optimize tensors using the larger environment as discussed in Appendix F, and also use Z₂-invariant tensors [recall that the Ising model has a global Z₂ symmetry: it is invariant under the simultaneous flip σ_k → −σ_k of all the spins], which are employed using standard methods for incorporating global symmetries in tensor networks. The TRG results have been calculated using the square-lattice TRG algorithm as presented in Ref. 2, the cost of which also scales as O(χ⁶) in terms of the bond dimension χ. While the costs of TRG and TNR thus both scale as O(χ⁶), the overall cost of a TNR calculation is greater than that of a TRG calculation at the same bond dimension by a constant factor k.

Fig. 11(a) displays the truncation error ε, as defined in Eq. 11, incurred at RG step s of either the TRG or TNR approach applied to the Ising model at the critical temperature, T_c = 2/log(1 + √2) ≈ 2.269. While increasing the bond dimension χ of TRG reduces the initial truncation error ε, the error always increases quickly as a function of RG step s regardless of the bond dimension in use, a phenomenon described as the breakdown of TRG at criticality in the original work by Levin and Nave. In comparison it is seen that TNR avoids such a breakdown; the truncation errors ε remain constant over many RG steps s when coarse-graining with TNR, beyond a slight increase after the initial RG step.
FIG. 11. (a) Comparison between TRG and TNR of the truncation error ε, as defined in Eq. 11, as a function of RG step s in the 2D classical Ising model at critical temperature T_c. While increasing the bond dimension χ gives smaller truncation errors, the truncation errors still grow quickly as a function of RG step s under TRG. Conversely, truncation errors remain stable under coarse-graining with TNR. (b) Relative error in the free energy per site δf at the critical temperature T_c, comparing TRG and TNR over a range of bond dimensions χ. The error from TRG is seen to diminish polynomially with bond dimension (the inset displays the same TRG data with logarithmic scales on both axes), while the error from TNR diminishes exponentially with bond dimension. Extrapolation suggests that TRG would need bond dimension χ ≈ 750 to match the accuracy of the χ = 42 TNR result.
The relative error in the free energy per site, f = −T log(Z)/N, at the critical temperature T_c is compared for TRG and TNR over a range of bond dimensions χ in Fig. 11(b). Evident is a qualitative difference in the convergence of the free energy between the two approaches. In TRG the free energy f is seen to converge polynomially with χ, while in TNR f is seen to converge exponentially fast with χ. Given that the cost of implementing the two approaches differs only by a roughly constant factor, it is evident that TNR can produce a significantly more accurate free energy than would be computationally viable with standard TRG. For instance, extrapolation suggests that in order to match the accuracy in the free energy of the χ = 42 TNR calculation, which required approximately 12 hours of computation time on a laptop computer, one would need to implement TRG with χ ≈ 750.

FIG. 12. (a) Spontaneous magnetization M(T) of the 2D classical Ising model near the critical temperature T_c, both exact and obtained with TNR with χ = 6. Even very close to the critical temperature, the magnetization M ≈ 0.48 is reproduced to within 1% accuracy. (b) Specific heat, c(T) = −T ∂²f/∂T², both exact and obtained using TNR with χ = 6.
Fig. 12 displays the spontaneous magnetization M(T) and the specific heat c(T) = −T ∂²f/∂T² obtained with TNR for χ = 6, over a range of temperatures T near the critical temperature T_c. Remarkable agreement with the exact results is achieved throughout, even very close to the critical point T_c.

Next we explore the ability of TNR to produce a scale-invariant RG flow in the space of tensors A^(s) for the Ising model at the critical temperature T_c. We apply up to 20 RG steps of TNR to the partition function, employing the gauge-fixing strategy discussed in Appendix G from the third RG step onwards. The difference δ^(s) ≡ ||A^(s) − A^(s−1)|| between tensors at successive RG steps [where tensors have been normalized such that ||A^(s)|| = 1] is displayed in Fig. 13. For the larger χ calculations, the difference δ^(s) reduces after the initial RG steps, before increasing again in the limit of many RG steps. This behavior is to be expected. The main limitation to realizing scale-invariance exactly in the initial RG steps is physical: the lattice system includes RG irrelevant terms that break scale-invariance at short-distance scales, but are suppressed at larger distances. On the other hand, after many RG steps the main obstruction to scale-invariance is the numerical truncation errors, which can be thought of as introducing RG relevant terms, effectively shifting the flow away from criticality and thus away from scale invariance. However, use of a larger bond dimension χ reduces truncation errors, allowing TNR not only to achieve a more precise approximation to scale-invariance, but also to hold it for more RG steps.

In order to demonstrate that the [approximate] fixed-point map given by TNR is representative of the 2D Ising universality class, we extract the critical data from this map.
This calculation was previously carried out in Ref. 15, to which we refer the interested reader for more details. Scaling operators, with their corresponding scaling dimensions, are obtained from diagonalization of the ascending superoperator R, see Fig. 8(g), associated to the s = 4 coarse-graining iteration with TNR. At this level of coarse-graining the RG irrelevant terms in the system have been sufficiently suppressed, such that a good approximation to scale-invariance is realized. The smallest 101 scaling dimensions obtained from a χ = 6 TNR calculation are displayed in Fig. 14, all of which are within 2% of their exact values. Also computed were the operator product expansion (OPE) coefficients for the primary fields, see again Ref. 15 for details, which were found to closely match their correct values from CFT.

FIG. 13. The precision with which TNR approximates a scale-invariant fixed-point tensor for the 2D classical Ising model at critical temperature T_c is examined by comparing the difference between tensors produced by successive TNR iterations, δ^(s) ≡ ||A^(s) − A^(s−1)||, where tensors have been normalized such that ||A^(s)|| = 1. The precision with which scale-invariance is approximated in the initial RG steps [small s] is limited by the presence of RG irrelevant terms in the lattice Hamiltonian that break scale-invariance at short-distance scales, while numerical truncation errors, which can be thought of as introducing RG relevant terms, shift the system from criticality [and thus scale invariance] in the limit of many RG steps s.

FIG. 14. The smallest 101 scaling dimensions of the 2D classical Ising model at critical temperature T_c, obtained by diagonalizing the ascending superoperator R, see Fig. 8(g), using TNR with bond dimension χ = 6. The scaling dimensions are organized according to their parity p = ±1.
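The extraction of scaling dimensions from the ascending superoperator can be sketched as follows. We assume R has been assembled as a matrix acting on the vectorized space of impurity tensors; the prescription Δ_α = −log₂(λ_α/λ₀), with λ₀ the dominant eigenvalue, is the standard one for a coarse-graining that rescales lengths by a factor of 2, and the normalization by λ₀ is an assumption of this sketch rather than a detail given in the text.

```python
import numpy as np

def scaling_dimensions(R_matrix, num=10):
    """Approximate scaling dimensions from the eigenvalues of the ascending
    superoperator R (reshaped into a square matrix). The dominant eigenvalue
    is used to normalize, so the leading scaling dimension is exactly zero."""
    evals = np.linalg.eigvals(R_matrix)
    evals = np.sort(np.abs(evals))[::-1]
    return -np.log2(evals[:num] / evals[0])
```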
B. 1D quantum Ising model

In this section we provide benchmark calculations for the binary TNR algorithm applied to the Euclidean path integral of the 1D quantum Ising model, which has Hamiltonian H_Is. defined as

H_Is. = ∑_k ( σ^x_k σ^x_{k+1} + λ σ^z_k ),   (32)

where σ^x and σ^z are Pauli matrices, and λ represents the magnetic field strength. For convenience we perform an initial blocking step, in which blocks of four spins are combined into effective sites of local dimension d = 16, and then generate the tensor network representation of the Euclidean path integral as explained in Sect. II B. We typically use a time step of τ = 0.002
and coarse-grain in the Euclidean time direction using 10 iterations of the single-dimension coarse-graining scheme explained in Appendix A, before beginning the TNR calculation.

To start with, we compare the performance of the TNR algorithm as a means to optimize a MERA against the energy minimization approach of Ref. 19. The ground state of the Ising Hamiltonian H_Is. at critical magnetic field strength, λ = 1, is represented using a scale-invariant MERA, here consisting of three transitional layers followed by infinitely many copies of a scale-invariant layer, obtained in two different ways. In the first approach, we apply four coarse-graining iterations of TNR to the Euclidean path integral of H_Is., after which the flow in the space of tensors has reached an approximate scale-invariant fixed point. A scale-invariant MERA is then built from certain tensors produced by the TNR calculation, specifically the disentanglers u and isometries w, as described in Ref. 18. Here the first three iterations of TNR generate the three transitional layers, while the fourth iteration of TNR is used to build the scale-invariant layer. In the second approach, we use the energy minimization algorithm of Refs. 19 and 23 to directly optimize a scale-invariant MERA [with three transitional layers preceding the scale-invariant layers] by iteratively minimizing the expectation value of H. Presented in Fig. 15(a) is the comparison between the ground energy error δE obtained by the two methods. For an equivalent bond dimension χ, energy minimization produces a MERA with smaller error δE than TNR, by a factor k ≈ 10
that is roughly independent of χ. However, optimization using TNR is also computationally cheaper than optimization using the energy minimization algorithm: the leading cost of TNR scales as O(χ⁶) in terms of the bond dimension χ, while energy minimization scales with a higher power of χ. In addition, the energy minimization algorithm, which iteratively sweeps over all MERA layers, requires more iterations to converge than does TNR. Taking these considerations into account, TNR can be seen to be the more efficient approach for obtaining a ground-state MERA to within a given level of accuracy in the energy δE, see Ref. 18 for additional details. Thus the TNR approach represents a useful alternative means for optimizing a MERA and, by extension, is promising as a tool for the exploration of ground states of quantum systems.

In Fig. 15(b) the low-energy excitation spectrum of H_Is. at criticality, λ = 1, is plotted as a function of system size L. This was computed using TNR to coarse-grain the Hamiltonian H_Is., as explained in Ref. 18, which was then diagonalized on a finite lattice with periodic boundaries. The excitation spectra reproduce the expected 1/L scaling with system size and match the predictions from CFT, demonstrating that, in addition to the ground state, TNR can accurately approximate the low-energy eigenstates of H_Is..

Next we explore the use of TNR for computing expectation values of finite-temperature thermal states. This is achieved by applying TNR to the tensor network corresponding to e^{−βH_Is.} for inverse temperature β, as discussed in Sect. II B, which is then used to generate a thermal-state MERA as explained in Ref. 18. The thermal energy per site as a function of β is displayed in Fig. 16(a) for several different magnetic field strengths λ. In the gapped regime, λ > 1,
Next we explore the use of TNR for computing expectation values of finite-temperature thermal states. This is achieved by applying TNR to the tensor network corresponding to e^{-βH_Is.} for inverse temperature β, as discussed in Sect. II B, which is then used to generate a thermal-state MERA as explained in Ref. 18. The thermal energy per site as a function of β is displayed in Fig. 16(a) for several different magnetic field strengths λ. In the gapped regime, λ > 1, the thermal energy converges exponentially quickly to the ground energy in the limit of large β [or small temperature T], while at the gapless critical point, λ = 1, the thermal energy converges polynomially quickly to the ground energy with β. The results from χ = 12 TNR accurately reproduce the exact energies over the range of parameters considered. Two-point correlation functions of the critical, λ = 1, system are examined in Fig. 16(b) over a range of inverse temperatures β, where correlations are seen to decay faster at smaller β [or equivalently at higher temperature T] due to thermal fluctuations. Again, the results from χ = 12 TNR accurately reproduce the exact correlations, indicating that TNR can be used to approximate thermal states of quantum systems over a wide range of temperatures.
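As a rough cross-check of such thermal data on small systems (this is not the exact infinite-chain solution used for the continuous lines in Fig. 16, and the function names below are our own), one can compare against full diagonalization of H_Is. on a short periodic chain:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def ising_chain(N, lam):
    """Dense Hamiltonian of Eq. (32) on N sites with periodic boundaries."""
    dim = 2 ** N
    H = np.zeros((dim, dim))
    def op_at(op, site):
        mats = [np.eye(2)] * N
        mats[site] = op
        full = mats[0]
        for m in mats[1:]:
            full = np.kron(full, m)
        return full
    for k in range(N):
        H += op_at(sx, k) @ op_at(sx, (k + 1) % N) + lam * op_at(sz, k)
    return H

def thermal_energy_per_site(N, lam, beta):
    """Thermal energy per site above the ground-state energy, (<H> - E0)/N,
    in the Gibbs state exp(-beta*H)/Z, computed by full diagonalization."""
    e = np.linalg.eigvalsh(ising_chain(N, lam))
    w = np.exp(-beta * (e - e.min()))        # shift for numerical stability
    return float(((e * w).sum() / w.sum() - e.min()) / N)

print(thermal_energy_per_site(N=10, lam=1.0, beta=2.0))
```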
FIG. 15. (a) Relative error in the energy of the scale-invariant MERA optimized for the ground state of the 1D quantum Ising model at criticality, as a function of the bond dimension χ, comparing MERA optimized using TNR to those optimized using variational energy minimization. Energy minimization produces MERA with a more accurate approximation to the ground energy, but is significantly more computationally expensive [with a computational cost that scales as a higher power of χ than for TNR]. (b) Low-energy eigenvalues of the 1D quantum Ising model at criticality as a function of 1/L, computed with χ = 12 TNR. Discontinuous lines correspond to the finite-size CFT prediction, which neglects higher-order finite-size corrections.

FIG. 16. (a) Thermal energy per site (above the ground-state energy) as a function of the inverse temperature β, for the 1D quantum Ising model on an infinite chain, for different values of the magnetic field λ. Data points are computed with χ = 12 TNR while continuous lines correspond to the exact solution. (b) Connected two-point correlators at the critical magnetic field λ = 1, as a function of the distance d, for several values of β. Data points are computed with χ = 12 TNR while continuous lines again correspond to the exact solution.

VII. DISCUSSION

After reviewing the conceptual foundations for real-space renormalization of partition functions and Euclidean path integrals, when expressed as tensor networks, we have provided a self-contained description of the algorithm for employing TNR to study properties of classical and quantum many-body systems.

Benchmark results, provided in Sect. VI, demonstrated some of the advantages of TNR. These include (i) providing a computationally sustainable coarse-graining transformation even for systems at or near a critical point [i.e. one that can be iterated many times without growth of the truncation error], (ii) convergence of the RG flow of tensors to a scale-invariant fixed point for a critical system [which then allows calculation of the critical data directly from the fixed-point RG map], (iii) providing an alternative, more efficient means of optimizing a MERA for the ground state of a quantum system, and (iv) providing a means to accurately study properties of thermal states of quantum systems over a wide range of temperatures T. With regard to (i) above, TNR was demonstrated to overcome a major obstacle of previous schemes for the renormalization of tensor networks, such as TRG, which exhibit a computational breakdown when near or at a critical point. As a consequence, the free energy per site f of the 2D classical Ising model at criticality was seen to converge to the exact value exponentially faster in bond dimension χ with TNR than with the previous TRG approach.

Future work shall include the development and implementation of TNR schemes for the coarse-graining of networks on 3D lattices, which could be applied to study 3D classical statistical and 2D quantum many-body systems. A TNR scheme for 3D lattices would offer an alternative, potentially more efficient, means to optimize a 2D MERA over the previous [often prohibitively expensive] strategies based upon energy minimization. The TNR approach presented in this manuscript can also be used to compute the norm ⟨Ψ|Ψ⟩ of a 2D quantum many-body state encoded in a PEPS, thus it could be incorporated as a key part of an algorithm for the simulation of 2D quantum many-body systems using PEPS.

The author thanks Markus Hauru and Guifre Vidal for insightful comments. The author acknowledges support by the Sherman Fairchild Foundation and by the Simons Foundation (Many Electron Collaboration).
Appendix A: Coarse-graining along a single dimension
In this appendix we describe a scheme that coarse-grains a square-lattice network along one dimension only, which is similar in effect to the higher-order tensor renormalization group (HOTRG) method introduced in Ref. 8. It is useful to perform this coarse-graining to rescale one dimension before applying the TNR algorithm if the initial tensor network is highly anisotropic in the strength of its correlations [which occurs, for instance, when the network represents the Euclidean path integral of a quantum system expanded with a very small time step τ]. The preliminary coarse-graining can generate a network that is more isotropic in the strength of its correlations and thus more suitable as a starting point for the TNR approach, which then rescales both lattice dimensions equally.

FIG. 17. An iteration of a coarse-graining scheme that acts to compress only the vertical dimension of the square-lattice tensor network. (a) A projective truncation, involving isometries w, is employed. (b) Details of the projective truncation from (a); here a projector P, comprised of an isometry w and its conjugate, is applied to a pair of A tensors. (c) The coarse-grained network, comprised of tensors A′, is given via tensor contractions. (d) Definition of the coarse-grained tensor A′.

An iteration of the coarse-graining, which acts to compress the vertical dimension of the square-lattice tensor network by a factor of two, is depicted in Fig. 17. The first step, as shown in Fig. 17(a), involves a projective truncation that acts upon a pair of A tensors, as further detailed in Fig. 17(b). The isometries w that comprise the projector can be optimized in the standard way, as discussed in Sect. III C. The second step, as shown in Fig. 17(c), involves contraction of a pair of A tensors with isometries w, as detailed in Fig. 17(d), to generate the tensors A′ of the coarse-grained network.
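To make the structure of such a step concrete, here is a simplified sketch in which the isometry is obtained from the eigenvectors of a single pair of A tensors (a purely local choice, rather than the environment-based optimization of Sect. III C prescribed above), and in which A is assumed to be left-right reflection symmetric so that one isometry can be reused on both horizontal bonds. Index ordering follows the clockwise-from-top convention (top, right, bottom, left).

```python
import numpy as np

def coarse_grain_vertical(A, chi):
    """One iteration of a HOTRG-like compression of the vertical direction
    (cf. Fig. 17), under the simplifying assumptions stated above."""
    d = A.shape[0]
    # stack two A tensors vertically: bottom leg of the upper with top leg of the lower
    AA = np.tensordot(A, A, axes=([2], [0]))          # (t1, r1, l1, r2, b2, l2)
    AA = AA.transpose(0, 4, 1, 3, 2, 5)               # (t1, b2, r1, r2, l1, l2)
    # isometry for the combined left pair, from its "density matrix"
    M = AA.reshape(d ** 4, d ** 2)                    # rows: (t1,b2,r1,r2), cols: (l1,l2)
    rho = M.conj().T @ M
    _, evecs = np.linalg.eigh(rho)                    # eigenvalues in ascending order
    w = evecs[:, -min(chi, d * d):]                   # (d*d, chi') isometry, w^dag w = 1
    # absorb w and its conjugate to form the coarse-grained tensor
    AA = AA.reshape(d, d, d * d, d * d)               # (top, bottom, right-pair, left-pair)
    Ap = np.tensordot(AA, w.conj(), axes=([2], [0]))  # (top, bottom, left-pair, right')
    Ap = np.tensordot(Ap, w, axes=([2], [0]))         # (top, bottom, right', left')
    return Ap.transpose(0, 2, 1, 3)                   # back to (top, right, bottom, left)

# usage: a random left-right symmetric tensor of bond dimension 2
A = np.random.rand(2, 2, 2, 2)
A = A + A.transpose(0, 3, 2, 1)
print(coarse_grain_vertical(A, chi=4).shape)          # (2, 4, 2, 4)
```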
Appendix B: Ternary TNR scheme

The binary TNR scheme presented in Sect. IV A is one of many possible TNR schemes for the square lattice; here we describe a ternary TNR scheme which reduces the linear dimension of the lattice by a factor of 3 with each iteration [or, equivalently, coarse-grains a 3 × 3 block of tensors A into a single tensor A′]. The steps of an iteration of the ternary TNR scheme are shown in Fig. 18(a-e).

FIG. 18. A depiction of an iteration of the ternary TNR scheme, which maps a square-lattice tensor network to a coarse-grained square lattice of one-third the linear dimension of the original. (a) A projective truncation, detailed in Fig. 19(a). (b) Pairs of disentanglers u annihilate to identity. (c) A projective truncation, detailed in Fig. 19(b). (d) A projective truncation, detailed in Fig. 19(c). (e) Pairs of isometries w annihilate to identity, yielding the coarse-grained square-lattice network.

These steps are as follows: (a) a projective truncation involving isometries v_L, v_R and disentangler u, as detailed in Fig. 19(a); (b) annihilation of conjugate pairs of disentanglers u to identity; (c) a projective truncation involving isometries y_L and y_R, as detailed in Fig. 19(b); (d) a projective truncation involving isometry w, as detailed in Fig. 19(c); (e) annihilation of conjugate pairs of the isometry w to identity. The four-index tensor A′ of the coarse-grained network, as defined in Fig. 19(c), now accounts for a 3 × 3 block of tensors A from the original square-lattice network.

FIG. 19. Overview of the projective truncations involved in the ternary TNR scheme. (a) A projective truncation, implemented by projector P_u composed of isometries v_L, v_R and disentangler u, acts upon a block of A tensors. (b) A projective truncation, formed from isometries y_L, y_R and their conjugates, acts upon B tensors. (c) A projective truncation, formed from isometry w and its conjugate, acts to give the new four-index tensor A′ of the coarse-grained network.

Appendix C: Isotropic TNR scheme
In this appendix we present another TNR scheme forthe square-lattice. Like the binary TNR scheme pre-sented in the main text, this scheme also reduces thelinear dimension of the lattice by a factor of 2 with eachiteration. However, unlike the binary TNR scheme, thisscheme treats both dimensions of the lattice equally, forwhich we refer to it as the isotropic
TNR scheme. Thus, if the starting network is invariant under 90° rotations, such as that corresponding to the partition function of an isotropic 2D classical statistical model, this TNR scheme can preserve the rotational symmetry under coarse-graining.

The isotropic TNR scheme is applied to a square-lattice network with a four-site unit cell, where it is assumed that the network consists of three types of four-index tensor, A_b, A_p and A_r. As shown in Fig. 20, the lattice has a 2 × 2 unit cell; each A_b tensor connects with four A_p tensors in the network, and likewise each A_r tensor also connects with four A_p tensors. It is assumed that A_b and A_r tensors are invariant with respect to 90° rotations, while A_p tensors are invariant with respect to 180° rotations; under this assumption the square network itself is invariant with respect to 90° rotations centered about either an A_b or A_r tensor. Notice that a uniform [one-site unit cell] square lattice is just a special case of this four-site unit-cell lattice, where indices on one of the sub-lattices, such as the indices connected to A_r tensors, are fixed at trivial dimension, i.e. bond dimension χ = 1. Thus this isotropic TNR scheme can be directly applied to the partition function of an isotropic 2D classical statistical model, when it is represented as a uniform square-lattice network of tensors A that are invariant with respect to 90° rotations.

The steps of a coarse-graining iteration of the isotropic TNR scheme are shown in Fig. 20(a-d). The initial step, shown in Fig. 20(a), involves two different projective truncations. One of the projective truncations, as detailed in Fig. 21(a), acts upon a block of four A_p tensors and an A_b tensor, while the other acts upon individual A_r tensors, as detailed in Fig. 21(b). In the second step of the iteration, shown in Fig. 20(b), conjugate pairs of disentanglers u annihilate to the identity. A pair of projective truncations are used in the third step of the iteration, seen in Fig. 20(c). These projective truncations, which involve a projector composed of an isometry w and its conjugate, are detailed in Fig. 21(c-d). In the final step of the iteration, as shown in Fig. 20(d), certain isometries w annihilate to identity with their conjugates, yielding the coarse-grained network.

FIG. 20. Depiction of a single iteration of the isotropic TNR scheme, which maps a square-lattice network, comprised of tensors A_b, A_p and A_r, to a coarser lattice of the same type. (a) Two types of projective truncation are made, as detailed in Fig. 21(a-b). (b) Conjugate pairs of disentanglers u annihilate to identity. (c) Two types of projective truncation are made, as detailed in Fig. 21(c-d). (d) Conjugate pairs of isometries w annihilate to identity, yielding a coarse-grained network comprised of tensors A′_b, A′_p and A′_r.

Notice that the tensors A′_b,
A′_r and A′_p of the coarse-grained network, as defined in Fig. 21, possess the same rotational symmetry as the corresponding initial tensors A_b, A_r and A_p, and also that the coarse-grained network has the same unit cell [in terms of the coarse-grained tensors] as the initial network, but is reduced by a factor of 2 in linear dimension.

FIG. 21. Overview of the projective truncations involved in an iteration of the isotropic TNR scheme. (a) A projective truncation, involving isometries v and disentanglers u, is enacted upon a product of four A_p tensors and an A_b tensor. The coarse-grained tensor A′_b is also defined. (b) A projective truncation, involving isometries y and disentanglers u, is enacted upon A_r tensors. (c) A projective truncation, involving isometries w, is enacted, yielding coarse-grained tensors A′_r. (d) A final projective truncation, again involving isometries w, is applied to a product of A_b and v tensors, yielding coarse-grained tensors A′_p.

Appendix D: Reflection symmetry
In this appendix we describe how, given a tensor network G that is symmetric with respect to spatial reflections [perhaps in conjunction with complex conjugation] along one axis, this symmetry can be preserved under coarse-graining with TNR. In the case that G represents a Euclidean path integral, the presence of this symmetry follows from the Hermiticity of the Hamiltonian, and is thus always present; in the case that G represents a partition function, the symmetry is present if the underlying 2D classical statistical model is invariant under spatial reflection along an axis. Proper exploitation of this symmetry is desirable as it can significantly simplify the TNR algorithm.

We say the homogeneous tensor network G is Hermitian symmetric with respect to the horizontal axis if a row of tensors, which form a matrix product operator (MPO), is invariant with respect to permutation of top-bottom indices in conjunction with complex conjugation, see also Fig. 22. If a row of tensors satisfies the above definition of reflection symmetry, then it can be shown that permutation of top-bottom indices and complex conjugation of tensor A is equivalent to enacting a unitary gauge change on its horizontal indices,

$$A^{\dagger} \equiv (A_{ilkj})^{*} = \sum_{j', l'} x^{*}_{j,j'}\, A_{i j' k l'}\, x_{l',l}, \qquad \mathrm{(D1)}$$

see also Fig. 22(b-c). Here the indices of tensor A_ijkl are labeled clockwise from the top, as per Fig. 1(a), and x is some unitary matrix.

In the first step of the TNR iteration, as discussed in Sect. IV A, it is useful to enact the unitary gauge change x on every second row of tensors in the network G, as depicted in Fig. 22. Let us define tensor Q as the tensor formed by contracting two copies of A and two copies of A† together, as depicted in Fig. 22. The reason that the initial gauge transformation on G is useful is that it allows tensor Q to be Hermitian under exchange of top-bottom indices, thus at the first step of the TNR iteration the same projector P_u may be used both on the top and bottom of Q, see Fig. 6(a-c). The reflection symmetry can be preserved under the TNR iteration, such that the tensors A′ of the coarse-grained network satisfy Eq. D1 for some unitary matrix x′, if the isometries y_L and y_R used in the second step of the TNR iteration satisfy the relation shown in Fig. 22(e-f). This relation states that the isometries should be invariant under complex conjugation in conjunction with permutation of their incoming indices and a unitary gauge change, enacted by unitary matrix x′, on their outgoing index.
Isometries y_L and y_R satisfying this relation can be obtained by symmetrizing the environments Γ_{y_L} and Γ_{y_R} of the isometries during their optimization, similar to previous strategies for preserving reflection symmetry in MERA described in Ref. 31. Finally, we remark that these ideas can be extended such that reflection symmetry along the vertical axis can be preserved [simultaneously with that along the horizontal axis], if both symmetries are present in the initial network.

FIG. 22. (a) The tensor A is Hermitian symmetric if a row of such tensors is invariant with respect to permutation of top-bottom indices in conjunction with complex conjugation. (b) Permutation and complex conjugation of Hermitian symmetric A is equivalent to a gauge change, enacted by some unitary matrix x, on its horizontal indices. (c) Definition of A†. (d) A gauge change is performed on the tensors of every second row of the square-lattice network. (e) Definition of tensor Q. (f) Tensor Q is seen to be Hermitian. (g-h) Constraints on isometries y_L and y_R necessary such that the tensors of the coarse-grained network remain Hermitian symmetric.
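A minimal sketch of the two ingredients just mentioned — the standard SVD update of an isometry from its (matricized) environment, and the symmetrization of that environment — is given below. The function names are ours, and the `reflect` argument [which must implement the appropriate index permutation, complex conjugation and gauge change by x′] is left to the user; this illustrates the idea, not the actual contraction patterns.

```python
import numpy as np

def isometry_from_environment(env):
    """Given a matricized environment env of shape (m, n) with m >= n, return the
    isometry w (w^dag w = 1) maximizing Re tr(w^dag env): w = U V^dag from the SVD
    env = U S V^dag. This is the standard update used in the projective truncations
    of Sect. III C (overall sign conventions depend on how env is defined)."""
    u, _, vh = np.linalg.svd(env, full_matrices=False)
    return u @ vh

def symmetrized_environment(env, reflect):
    """Average an environment with its reflected copy so that the optimized isometry
    inherits the reflection symmetry; `reflect` must implement the relevant
    permutation, conjugation and gauge change for the tensor at hand."""
    return 0.5 * (env + reflect(env))

# trivial usage check (the identity map stands in for a genuine reflection)
env = np.random.rand(6, 3)
w = isometry_from_environment(symmetrized_environment(env, reflect=lambda g: g))
print(np.allclose(w.conj().T @ w, np.eye(3)))   # True: w is an isometry
```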
Appendix E: Reduction of computational cost

The cost of the TNR algorithm as described in Sect. IV scales as O(χ^7) in terms of the bond dimension χ; in this appendix we describe how this cost scaling can be reduced to O(χ^6). This reduction in cost is achieved by performing an additional projective truncation at the start of each TNR iteration; specifically, this projective truncation is enacted on 2 × 2 blocks of A tensors, with two of the tensors conjugated as discussed in Appendix D, before the projective truncation step of Fig. 6(a-c). The projector P involved in this step is composed of isometries q_L, q_R and z [and their complex conjugates], as depicted in Fig. 23, and can be optimized with the standard iterative SVD approach as described in Sect. III C.

After this initial projective truncation the cost of the subsequent step of the TNR iteration is reduced. Fig. 23(c) depicts the new tensor B̄, which differs from the tensor B in Fig. 6(b) only by an amount related to the truncation error ε of the initial projection step, but can be computed with a cost that scales as O(χ^6), as opposed to O(χ^7) for computing B. Likewise the environment Γ_u of disentangler u, when expressed in terms of B̄, see Fig. 23(d), can also be computed with cost O(χ^6) instead of the cost O(χ^7) associated with computing the previous environment of Fig. 7(c). Similarly, the environments of isometries v_L and v_R can also be computed with cost O(χ^6) when using B̄. Thus, when using the results of this appendix, no operation required to implement the binary TNR scheme has a cost scaling greater than O(χ^6).

FIG. 23. (a) Details of an optional, preliminary projective truncation for the binary TNR scheme, where two copies of a projector P, which is formed from a product of isometric tensors q_L, q_R, z and their conjugates, are enacted upon a 2 × 2 block of A tensors. (b) Definition of tensor K. (c) Definition of tensor B̄, which differs from the previous B, see Fig. 6(b), only by an amount related to the truncation error of P in (a). (d) Detail of the environment Γ_u computed from B̄. The cost of contracting this network scales as O(χ^6) in terms of the bond dimension χ, as opposed to O(χ^7) for the previous environment of Fig. 7(c).

Appendix F: Optimization using a larger environment
In this appendix we discuss how the accuracy of the binary TNR scheme, for a given bond dimension χ, can be improved by taking a larger environment into account at each truncation step. This follows similar ideas introduced in Ref. 7 to improve TRG by taking the local environment into consideration at each truncation step.

The TNR approach is based upon the use of projective truncations to implement coarse-graining transformations, as discussed in Sect. III C. A projective truncation involves application of a projector P to a local sub-network of tensors F, where it is desired that P acts on F as an approximate resolution of the identity, see Eq. 19. Use of a larger sub-network F typically allows a more accurate truncation, as the projector P can take into account correlations from a larger region of the network. Fig. 24(a) depicts a larger sub-network, consisting of two copies of the B tensor in addition to v_L and v_R tensors, that can be used in the determination of the projectors P_L, P_R and P_w in the second and third steps of the TNR iteration. The condition that projectors P_L, P_R and P_w act with small truncation error on this sub-network, as shown in Fig. 24(a), is less restrictive than the condition previously imposed on the projectors in Sect. IV [which used a smaller sub-network], and thus potentially allows more accurate projectors to be chosen. The isometric tensors {y_L, y_R, w} that compose these projectors can be chosen to minimize the truncation error ε by optimizing them to maximize ∥Ã∥, with tensor Ã as defined in Fig. 24(b), which effectively replaces the two separate optimizations depicted previously in Fig. 6(d,f).

FIG. 24. (a) Isometric tensors y_L, y_R, w yielding a more accurate coarse-graining transformation can be found by optimizing them to minimize the truncation error when applied to a region of the network, here containing two copies of the B tensor, that is larger than was previously considered in Fig. 6(d,f). Notice that minimizing the truncation error is equivalent to maximizing the norm of Ã. (b) Definition of tensor Ã. (c) The two contributions to the modified environment Γ̃_u for disentangler u, as generated from ∥Ã∥.

We would also like to use the larger sub-network in the optimization of the disentangler u. Given that the B tensors depend on the disentanglers u, as depicted in Fig. 6(b), one could likewise optimize u to maximize ∥Ã∥. However, the dependence of Ã on the disentanglers u is not in a form that is directly compatible with the optimization problem discussed in Sect. III C. To this end, we use a modified environment Γ̃_u of u generated from ∥Ã∥, as depicted in Fig. 24(c), where the modified environment results from the removal of a pair of tensors, either v_L v_L† or v_R v_R†, from Ã. The modified environment Γ̃_u is now compatible with the previous optimization strategy of iterative SVD updates, and thus can be used to directly replace the previous environment Γ_u of u from Fig. 7(c). Note that Γ̃_u can be computed with a cost that scales as O(χ^7) in terms of the bond dimension χ [or O(χ^6) when employing the ideas of Appendix E], which is the same scaling with χ as the basic TNR algorithm discussed in Sect. IV.
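As a small, self-contained illustration of the quantity at stake in these truncations — the error with which a projector P = w w† reproduces a (matricized) sub-network F, in the sense of Eq. 19 — and of the fact that, for a fixed F, minimizing it is a singular value problem, one may write the following. Both function names are ours, and this ignores the alternating, iterative structure of the actual optimization of u, y_L, y_R and w.

```python
import numpy as np

def truncation_error(F, w):
    """Relative error ||F - w w^dag F|| / ||F|| of the projective truncation
    P = w w^dag acting on the matricized sub-network F (truncated index first)."""
    PF = w @ (w.conj().T @ F)
    return np.linalg.norm(F - PF) / np.linalg.norm(F)

def best_isometry(F, chi):
    """For a fixed F, the error above is minimized by the chi leading left singular
    vectors of F (equivalently, by maximizing ||w^dag F||)."""
    u, _, _ = np.linalg.svd(F, full_matrices=False)
    return u[:, :chi]

F = np.random.rand(16, 9)
w = best_isometry(F, chi=4)
print(truncation_error(F, w))
```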
Appendix G: Achieving a scale-invariant RG flow

A novel and useful feature of the TNR approach is that, when applied to a scale-invariant critical system, it can generate an explicitly scale-invariant RG flow, such that tensors at different scales s of coarse-graining are equal,

$$A^{(s+1)} \approx A^{(s)}, \qquad \mathrm{(G1)}$$

up to small differences stemming from truncation errors. However, realization of scale invariance requires fixing the gauge degree of freedom, which we now discuss.

Given a square-lattice network G composed of identical tensors A_ijkl, there exists a local gauge freedom in the network relating to a local change of basis on individual indices of the tensor, implemented by unitary matrices x and y,

$$A_{ijkl} \rightarrow \sum_{i'j'k'l'} A_{i'j'k'l'}\; x_{ii'}\, (x^{\dagger})_{k'k}\; y_{ll'}\, (y^{\dagger})_{j'j}, \qquad \mathrm{(G2)}$$

under which the tensor network remains unchanged. [Note, in general the tensor network is invariant under changes of gauge implemented by invertible matrices; however, the more restrictive class of unitary changes of gauge is sufficient to consider if reflection symmetry is exploited, see Appendix D.] If the gauge degree of freedom is not given proper consideration, application of TNR to a scale-invariant critical system will not, in general, produce an explicitly scale-invariant RG flow, as defined by Eq. G1. Instead an implicitly scale-invariant RG flow may be obtained, where the tensors A^(s+1) and A^(s) differ by a choice of gauge, even though they are representative of the same critical fixed point. [Conversely, it can be demonstrated that TRG does not generate even an implicitly scale-invariant RG flow when applied to a scale-invariant system, as certain gauge-invariant properties of the tensors A^(s) diverge with RG step s, see Ref. 1.] We now explain how the gauge freedom in TNR can be fixed such that an otherwise implicitly scale-invariant RG flow becomes explicitly scale-invariant.

There are many strategies one could employ to fix the choice of gauge on tensor A^(s+1) to be compatible with that on the previous tensor A^(s). One possibility is to include a separate gauge-fixing step after each coarse-graining iteration that minimizes the difference ∥A^(s+1) − A^(s)∥ through optimization of the unitary matrices x and y that implement a change of gauge on A^(s+1), as per Eq. G2. A different possibility, one that we find more convenient, is to include the gauge-fixing as part of the tensor optimization described in Sect. IV B. Let B^(s) be the tensor obtained after the first step of the TNR iteration on A^(s), as per Fig. 6(b), and assume we wish to choose a gauge on the isometries v^(s)_L, v^(s)_R and disentangler u^(s) consistent with the previous TNR iteration, i.e. such that the choice of gauge on B^(s) takes it as close as possible to B^(s−1). Let us define

$$\tilde{B} = B^{(s)} + \delta\, B^{(s-1)}, \qquad \mathrm{(G3)}$$

for some δ > 0. Then, during the optimization of v^(s)_L, v^(s)_R and u^(s), if B̃ is used instead of B^(s) in the calculation of the tensor environments, Fig. 7(a-c), the optimization is biased towards ensuring that the tensors are chosen in the same gauge as those of the previous TNR iteration. Typically we take δ = 1 for the early stages of the optimization of the tensors v^(s)_L, v^(s)_R and u^(s), but reduce δ as the tensors converge. The same strategy can then be employed during the optimization of tensors y^(s)_L, y^(s)_R and w^(s) in the other intermediate steps of the TNR iteration, to ensure that the gauge on A^(s+1) is fixed in a way compatible with A^(s).
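Two small helpers make this recipe concrete: the biasing of Eq. G3, and a diagnostic for explicit scale invariance in the sense of Eq. G1. Both are illustrative sketches with hypothetical names, not part of the algorithm as specified above.

```python
import numpy as np

def biased_tensor(B_current, B_previous, delta):
    """Eq. (G3): bias the environments toward the gauge of the previous RG step.
    delta is typically taken to be 1 early in the optimization and reduced as the
    tensors converge."""
    return B_current + delta * B_previous

def scale_invariance_diagnostic(A_new, A_old):
    """Relative difference between normalized tensors at successive RG steps,
    cf. Eq. (G1). It is small only if the flow is explicitly scale invariant; an
    implicitly scale-invariant flow (tensors equal only up to a gauge change) can
    still give an O(1) value here."""
    a = A_new / np.linalg.norm(A_new)
    b = A_old / np.linalg.norm(A_old)
    return np.linalg.norm(a - b)
```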
∗ [email protected]

G. Evenbly and G. Vidal, arXiv:1412.0732.
M. Levin and C. P. Nave, Phys. Rev. Lett. 99, 120601 (2007).
H. C. Jiang, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett. 101, 090603 (2008).
Z.-C. Gu, M. Levin, and X.-G. Wen, Phys. Rev. B 78, 205116 (2008).
Z.-Y. Xie, H.-C. Jiang, Q.-N. Chen, Z.-Y. Weng, and T. Xiang, Phys. Rev. Lett. 103, 160601 (2009).
Z.-C. Gu and X.-G. Wen, Phys. Rev. B 80, 155131 (2009).
H.-H. Zhao, Z.-Y. Xie, Q.-N. Chen, Z.-C. Wei, J. W. Cai, and T. Xiang, Phys. Rev. B 81, 174411 (2010).
Z.-Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Phys. Rev. B 86, 045139 (2012).
B. Dittrich, F. C. Eckert, and M. Martin-Benito, New J. Phys. 14, 035008 (2012).
A. Garcia-Saez and J. I. Latorre, Phys. Rev. B 87, 085130 (2013).
For a review of the renormalization group see: M. E. Fisher, Rev. Mod. Phys. 70, 653 (1998).
P. Di Francesco, P. Mathieu, and D. Senechal, Conformal Field Theory (Springer, 1997).
M. Henkel, Conformal Invariance and Critical Phenomena (Springer, 1999).
K. G. Wilson, Phys. Rev. B 4, 3174 (1971); K. G. Wilson, Phys. Rev. B 4, 3184 (1971); K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975).
G. Evenbly, R. Myers, and G. Vidal, Local scale transformations on the lattice with tensor network renormalization, in preparation.
G. Vidal, Phys. Rev. Lett. 99, 220405 (2007).
G. Vidal, Phys. Rev. Lett. 101, 110501 (2008).
G. Evenbly and G. Vidal, arXiv:1502.05385.
G. Evenbly and G. Vidal, Phys. Rev. B 79, 144108 (2009).
V. Giovannetti, S. Montangero, and R. Fazio, Phys. Rev. Lett. 101, 180503 (2008).
R. N. C. Pfeifer, G. Evenbly, and G. Vidal, Phys. Rev. A 79, 040301(R) (2009).
G. Evenbly, P. Corboz, and G. Vidal, Phys. Rev. A 81, 010303(R) (2010).
G. Evenbly and G. Vidal, Quantum Criticality with the Multi-scale Entanglement Renormalization Ansatz, chapter 4 in Strongly Correlated Systems: Numerical Methods, edited by A. Avella and F. Mancini (Springer Series in Solid-State Sciences, Vol. 176, 2013), arXiv:1109.5334.
J. C. Bridgeman, A. O'Brien, S. D. Bartlett, and A. C. Doherty, Phys. Rev. B 91, 165129 (2015).
M. Suzuki, Phys. Lett. A 146, 319 (1990); M. Suzuki, J. Math. Phys. 32, 400 (1991).
A. T. Sornborger and E. D. Stewart, Phys. Rev. A 60, 1956 (1999).
L. de Lathauwer, B. de Moor, and J. Vandewalle, SIAM J. Matrix Anal. Appl. 21, 1253 (2000).
G. Evenbly, R. N. C. Pfeifer, V. Pico, S. Iblisdir, L. Tagliacozzo, I. P. McCulloch, and G. Vidal, Phys. Rev. B 82, 161107(R) (2010).
P. Silvi, V. Giovannetti, P. Calabrese, G. E. Santoro, and R. Fazio, J. Stat. Mech. (2010) L03001.
G. Evenbly and G. Vidal, Phys. Rev. B 91, 205119 (2015).
G. Evenbly and G. Vidal, J. Stat. Phys. 157, 931 (2014).
Y.-L. Lo, Y.-D. Hsieh, C.-Y. Hou, P. Chen, and Y.-J. Kao, Phys. Rev. B 90, 235124 (2014).
G. Evenbly and G. Vidal, in preparation.
M. Fannes, B. Nachtergaele, and R. F. Werner, Commun. Math. Phys. 144, 443 (1992).
S. Ostlund and S. Rommer, Phys. Rev. Lett. 75, 3537 (1995).
G. Vidal, Phys. Rev. Lett. 91, 147902 (2003); G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
S. Singh, R. N. C. Pfeifer, and G. Vidal, Phys. Rev. A 82, 050301 (2010).
L. Cincio, J. Dziarmaga, and M. M. Rams, Phys. Rev. Lett. 100, 240603 (2008).
G. Evenbly and G. Vidal, Phys. Rev. Lett. 102, 180406 (2009).
F. Verstraete and J. I. Cirac, arXiv:cond-mat/0407066.
F. Verstraete, J. I. Cirac, and V. Murg, Adv. Phys. 57, 143 (2008).