Scaling dimensions from linearized tensor renormalization group transformations
Xinliang Lyu, RuQing G. Xu, and Naoki Kawashima
Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan
(Dated: February 17, 2021)

We show a way to analyze a renormalization group (RG) fixed point in tensor space: write down the tensor RG equation, linearize it around a fixed-point tensor, and diagonalize the resulting linearized RG equation to obtain scaling dimensions. Tensor RG methods have had great success in producing accurate free energies compared with the conventional real-space RG schemes. However, the above-mentioned canonical procedure for the fixed-point analysis has not been implemented for general tensor-network-based RG schemes. We extend the success of the tensor methods to the extraction of scaling dimensions through the analysis of a fixed-point tensor. This approach is benchmarked in the context of the Ising models in 1D and 2D. The proposed method accomplishes the canonical RG prescription for the tensor RG methods.
I. INTRODUCTION
The renormalization group (RG) is a powerful technique for studying physical systems in which fluctuations on all length scales are important [1]; the most famous example in statistical mechanics is critical phenomena. The main idea behind the RG is to study how a physical system changes as we go from one length scale to another. Conventional RG schemes, such as the ε-expansion [2] and block-spin methods [3–7], aim at a map from the Hamiltonian at a short length scale to that at a longer one, such that the partition function is unchanged [8]. The map is known as an RG equation. A well-behaved RG equation exhibits fixed points, each corresponding to a conformal field theory (CFT) [9, 10]. A critical system is described by a fixed point. By linearizing the RG equation around the critical fixed point, universal properties of the critical system, such as scaling dimensions, can be extracted. This canonical RG prescription of analyzing a fixed point also provides a theoretical framework for understanding universality in critical phenomena. However, for a systematic high-precision study, the Hamiltonian may not be the most efficient representation of the system.

Recently, ideas from quantum information have stimulated a novel type of RG methods in tensor space. They are versatile numerical RG schemes whose approximations are controlled by an integer χ, called the bond dimension. The RG equation is a map from a tensor encapsulating the Boltzmann weights of local configurations at a short length scale to a new tensor at a longer one. The first realization of this new paradigm is the tensor renormalization group (TRG) [11], followed by many variations [12–17]. These TRG-type techniques have excellent performance in calculations of free energy. For example, the higher-order tensor renormalization group (HOTRG) [14] estimates the free energy of the 2D Ising model with an error of order 10^{-7} within a few minutes on a desktop computer.
The estimation error decreases exponentially as χ increases, while the computational cost grows only polynomially. With all of their success in calculations of free energy, however, the TRG-type techniques encounter obstacles in the fixed-point analysis. Early attempts [18–21] show that if the bond dimension χ of the TRG is larger than 8, the tensor never flows to the critical fixed point of the 2D Ising model; this imposes a very strong restriction on the bond dimension in the fixed-point analysis. For χ = 2, 4, either using the TRG or the HOTRG, the estimated scaling dimension of the energy density operator is similar to that of the old potential-moving tricks [18–20], and that of the spin operator is more than a factor of 2 larger than the exact value [21].

Fortunately, in the past ten years, many tricks have been developed to solve the problem of the unsatisfactory tensor RG flows. In 2009, Gu and Wen [22] were the first to deal with this problem. Following Levin's suggestion [11, 23], they focused on a toy model called corner double-line (CDL) tensors, which represent systems with only local correlations. They showed that the CDL tensors are fixed points of the TRG, indicating that local correlations at smaller length scales are carried to larger ones. A crude algorithm was proposed to filter out the CDL tensors, which partially solved the problem of the tensor RG flows; it was followed by an improved algorithm in 2017 [24]. From 2015 to 2017, several similar methods were proposed [25–27]. All of these advanced TRG-type techniques successfully produce critical fixed-point tensors.

With a critical fixed-point tensor in hand, Gu and Wen [22] pointed out that scaling dimensions can be extracted by diagonalizing a transfer matrix constructed from the fixed-point tensor, according to a well-known 2D CFT theorem [28]. Later, Evenbly and Vidal used the tensor network renormalization (TNR) [25, 26] to implement a local scale transformation that maps a plane to a cylinder [29]; the spectrum of a transfer matrix on the cylinder gives the scaling dimensions. These methods have been applied to calculate scaling dimensions since then, while the fixed-point analysis in tensor space
has never been followed up on.

Figure 1. Different ways to extract scaling dimensions using tensor RG methods. The method proposed in this paper corresponds to the path indicated by the thick arrows.

In this paper, we provide the missing piece: the analysis of a fixed point in tensor space at a general bond dimension (see Fig. 1). After laying down the general framework of the canonical RG prescription for tensor RG methods in Sec. II A, we point out two technical obstacles, local correlations and gauge redundancy, in Sec. II B. In Sec. II C, the HOTRG is combined with a recently developed technique, graph-independent local truncation (GILT) [30], to generate tensor RG flows that reach a critical fixed point at a general bond dimension; we call this method GILT-HOTRG. In Sec. II D, we show that most of the gauge redundancy in the tensor description is automatically fixed during GILT-HOTRG, leaving only tractable sign ambiguities. The linearized RG equation for GILT-HOTRG is easy to implement and has a simple pictorial representation; in practice, it can be generated by automatic differentiation once GILT-HOTRG is implemented. The scaling dimensions can then be extracted from this linearized RG equation. In Sec. III, the canonical RG prescription in tensor space is benchmarked with the 1D and 2D classical Ising models. We conclude in Sec. IV.
II. RENORMALIZATION GROUP IN TENSOR NETWORK LANGUAGE
TRG-type methods start from the fact that the partition function of any classical statistical model can be rewritten as a tensor network [11]. Take the square-lattice 2D Ising model as a concrete example. The partition function is

Z = \sum_{\{\sigma(r)\}} e^{K \sum_{\langle i,j \rangle} \sigma_i \sigma_j},   (1)

where σ_i is shorthand for the spin variable σ(r_i) located at lattice point r_i and can take the values ±1, and K = J/k_B T. In this paper, we measure temperature in units of J/k_B, so it becomes a dimensionless number.

Figure 2. Representation of the local Boltzmann weight in terms of tensors. The dots are where the spin variables locate; they form a square lattice slanted by 45°. The larger circles are tensors A encoding the Boltzmann weight of the configurations of the four surrounding spin variables. The square lattice formed by N copies of A is the tensor network representation of the partition function in Eq. (3).

The partition function in Eq. (1) can be rewritten as a tensor network by defining a tensor

A_{\sigma_i \sigma_j \sigma_k \sigma_l} \equiv e^{K(\sigma_i\sigma_j + \sigma_j\sigma_k + \sigma_k\sigma_l + \sigma_l\sigma_i)}.   (2)

Each index of this tensor can take two values ±1, so the bond dimension is χ = 2. It is now possible to rewrite the partition function of the 2D Ising model in Eq. (1) as the tensor product of N copies of A, with all their indices summed over (Fig. 2),

Z = \sum_{\{\sigma(r)\}} \bigotimes_{x=1}^{N} A_{\sigma_i(x)\sigma_j(x)\sigma_k(x)\sigma_l(x)} \equiv \mathrm{tTr}\Big( \bigotimes_{x=1}^{N} A \Big).   (3)

The coarse graining of the tensor network resembles the conventional block-spin methods. We replace a patch of, say, four copies of the original tensor A with one coarse-grained tensor A_c, such that the partition function is approximately described by a coarser tensor network made of N/4 copies of A_c,

Z \approx \mathrm{tTr}\Big( \bigotimes_{x=1}^{N/4} A_c \Big).   (4)

The specific procedure for obtaining A_c from A will be discussed later. The map

A \xrightarrow{\text{RG}} A_c \quad \text{or} \quad A_c = T(A)   (5)

is the tensor RG equation.

A. General framework
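Before developing the general framework, the equivalence between the spin sum in Eq. (1) and the tensor trace in Eq. (3) can be verified numerically on a tiny lattice. The sketch below (a minimal numpy check; the 2 × 2 torus, the leg order left–up–right–down, and the bond labels are our own illustrative choices, not the paper's code) builds the local tensor of Eq. (2), contracts a periodic 2 × 2 network of it, and compares with a brute-force sum over the 2^8 configurations of the 8 shared spins:

```python
import numpy as np
from itertools import product

K = 0.3                      # dimensionless coupling J/(k_B T)
s = np.array([1.0, -1.0])    # index 0 <-> spin +1, index 1 <-> spin -1

# Local tensor of Eq. (2): A[i,j,k,l] = exp(K(si sj + sj sk + sk sl + sl si)).
si, sj, sk, sl = np.ix_(s, s, s, s)
A = np.exp(K * (si * sj + sj * sk + sk * sl + sl * si))

# Tensor trace of Eq. (3) on a torus of 2x2 tensors (8 shared bond spins).
# Leg order is (left, up, right, down); the letters a..h label the bonds.
Z_tn = np.einsum('beaf,agbh,dfce,chdg->', A, A, A, A)

# Independent check: sum the Boltzmann weights of Eq. (1) directly over all
# 2^8 spin configurations, using the same plaquette wiring.
legs = [('b', 'e', 'a', 'f'), ('a', 'g', 'b', 'h'),
        ('d', 'f', 'c', 'e'), ('c', 'h', 'd', 'g')]   # (l,u,r,d) per tensor
Z_bf = 0.0
for conf in product([1, -1], repeat=8):
    spin = dict(zip('abcdefgh', conf))
    E = sum(spin[l]*spin[u] + spin[u]*spin[r] + spin[r]*spin[d] + spin[d]*spin[l]
            for (l, u, r, d) in legs)
    Z_bf += np.exp(K * E)

print(np.isclose(Z_tn, Z_bf))   # -> True
```

The tensor A also inherits the global Z2 symmetry of the model: flipping all four indices leaves its entries unchanged, a fact used later for gauge fixing.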
We first define the canonical RG prescription in tensor space. To this end, it is helpful to start with a review of the old approach in Hamiltonian space (we follow the detailed review [21] and the textbook [31] closely).

It is convenient to explain in terms of a specific physical system: a classical spin system with variables σ ∈ {+1, −1} on a lattice, with general short-ranged interactions. The Hamiltonian (or energy) of the system can be parameterized by a set of coupling constants K = {K_j}, each of which couples to a possible short-ranged interaction term s_j(r),

H = \sum_r \sum_j K_j s_j(r).   (6)

For example, if K_1 is the magnetic field, s_1(r) = σ(r) is the spin variable at lattice point r; if K_2 is the nearest-neighbor interaction along the x direction, s_2(r) = σ(r)σ(r + a ê_x), where ê_x is the unit vector along the x direction and a is the lattice constant. A conventional RG transformation maps the old Hamiltonian H to a new one H' with the same form as Eq. (6) but characterized by a set of new coupling constants K' = {K'_j}. The map from the old Hamiltonian to the new one, H → H' under the RG, is then parametrized explicitly as a transformation from the old coupling constants to the new ones,

K' = T_old(K).   (7)

We require that the RG transformation preserve the partition function of the system and exhibit a fixed-point Hamiltonian H* parameterized by coupling constants K*, such that K* remains unchanged under the RG transformation,

K* = T_old(K*).   (8)

The linearized RG equation around K* is defined in the following way. We perturb the coupling constants around the fixed point, K_p = K* + δK, and perform the RG transformation defined in Eq. (7), K'_p = T_old(K_p).
The new coupling constants K'_p after the RG transformation should be close to K* by continuity, so K'_p = K* + δK'. The linearized RG equation around K* is a matrix R^old telling us how δK' is related to δK,

δK'_i = \sum_j R^{old}_{ij} δK_j.   (9)

The matrix R^old has right and left eigenvectors {ψ^α}, {φ^α} with the same set of eigenvalues {λ_α},

\sum_j R^{old}_{ij} ψ^α_j = λ_α ψ^α_i   and   \sum_i φ^α_i R^{old}_{ij} = λ_α φ^α_j.   (10)

The linear combinations of the δK_i according to the components of the left eigenvector φ^α are known as scaling fields,

h_α = \sum_i φ^α_i δK_i,   (11)

while the linear combinations of the interaction terms s_j(r) according to the components of the right eigenvector ψ^α are known as scaling operators,

o_α(r) = \sum_j s_j(r) ψ^α_j.   (12)

Under an RG transformation with rescaling factor b for a system in dimension d, the scaling fields and the scaling operators transform multiplicatively,

(h_α)' = b^{d−x_α} h_α   and   (o_α)' = b^{x_α} o_α,   (13)

where the x_α are the scaling dimensions of the scaling operators o_α(r). Equations (9) to (11) and (13) give the relation between the scaling dimensions {x_α} and the eigenvalues {λ_α} of the linearized RG equation,

b^{d−x_α} = λ_α.   (14)

Next, we move on to the tensor approach to the canonical RG prescription. In the tensor RG approach, we skip the Hamiltonian description of the system. Instead, we use a tensor network made of copies of a tensor A to represent the partition function Z of the system. The tensor RG equation is a map from the tensor A to the coarser tensor A_c, as shown in Eq. (5). We claim that the components of the tensor A can be thought of as proxies of the coupling constants K (this claim was hinted at in Ref. [22]). To see why this claim is reasonable, note that we can map the partition function of the system with the Hamiltonian in Eq. (6) to a tensor network using the method introduced in Ref. [11].
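As a toy illustration of this linearization machinery (a numpy sketch under our own simplifying assumptions, not the calculation of Sec. III A: we choose a Frobenius normalization to keep the fixed-point tensor finite, and we linearize by finite differences rather than automatic differentiation), consider the exact tensor RG of the 1D Ising model, where the tensor A is the 2 × 2 bond-weight matrix A_ij = e^{K σ_i σ_j} and one coarse-graining step with b = 2 is simply a matrix product:

```python
import numpy as np

def rg_step(A):
    """One exact 1D Ising tensor RG step (b = 2): contract two copies of the
    2x2 bond matrix and normalize. The Frobenius normalization is a choice
    made here so that the flow converges to a finite fixed-point tensor."""
    Ac = A @ A
    return Ac / np.linalg.norm(Ac)

# Flow to the fixed point from the initial tensor A_ij = exp(K si sj).
s = np.array([1.0, -1.0])
A = np.exp(0.9 * np.outer(s, s))
A /= np.linalg.norm(A)
for _ in range(30):
    A = rg_step(A)
# A is now (numerically) the fixed-point tensor A* = [[1,1],[1,1]]/2.

# Linearized RG equation, Eq. (22): R[(i),(j)] = dT(A*)_(i) / dA_(j),
# obtained here by central finite differences over the 4 tensor components.
h = 1e-6
R = np.zeros((4, 4))
for j in range(4):
    dA = np.zeros(4); dA[j] = h
    plus  = rg_step(A + dA.reshape(2, 2)).ravel()
    minus = rg_step(A - dA.reshape(2, 2)).ravel()
    R[:, j] = (plus - minus) / (2 * h)

lams = np.sort(np.abs(np.linalg.eigvals(R)))[::-1]
print(np.round(lams, 6))   # -> approximately [1, 1, 0, 0]
```

One of the zero eigenvalues is the direction along A* itself, which the normalization projects out; the eigenvalue λ = 1 corresponds, via Eq. (14) with b = 2 and d = 1, to x = 1 in this toy, and no relevant direction (λ > b^d-type growth) remains at this trivial high-temperature fixed point.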
Each component of the initial tensor A is the Boltzmann weight of a given local configuration and depends on the coupling constants,

A_{(i)} = f_{(i)}(K),   (15)

where we group all legs of A into a single index, A_{(i)} ≡ A_{i_1 i_2 i_3 i_4}. After coarse graining, the components of A_c are still functions of K but with different functional forms,

(A_c)_{(i)} = (f_c)_{(i)}(K).   (16)

Now we require that each component of the coarser tensor A_c have the same functional form as that of A, but with different coupling constants K',

f_{(i)}(K') = (f_c)_{(i)}(K),   ∀(i).   (17)

In the old Hamiltonian approach, we need to solve Eq. (17) for K' in terms of K, which defines the RG equation from the old K to the new K'. In the tensor approach, however, it is enough to know the existence of such K'. Combining Eq. (16) and Eq. (17), we have

(A_c)_{(i)} = f_{(i)}(K').   (18)

At the fixed point, K = K' = K*, and Eqs. (15) and (18) give

A* \xrightarrow{\text{RG}} A* \quad \text{or} \quad A* = T(A*).   (19)

Taking the total derivative of the tensors A and A_c in Eqs. (15) and (18) and setting K = K' = K*,

δA_{(i)} = \sum_n \big( ∂_{(n)} f_{(i)} \big)\big|_{K=K*} δK_n,   (20)

(δA_c)_{(i)} = \sum_n \big( ∂_{(n)} f_{(i)} \big)\big|_{K'=K*} δK'_n.   (21)

Equations (20) and (21) give the transformation law between the coupling-constant description and the tensor description of the canonical RG prescription, with ∂_{(n)} f_{(i)} evaluated at K* being the change-of-basis matrix. Under this transformation, the linearized RG equation in Eq. (9) becomes

(δA_c)_{(i)} = \sum_{(j)} R_{(i)(j)} δA_{(j)},   (22)

which defines the linearized RG equation in tensor space. Since Eqs. (9) and (22) are the same linear transformation in two different representations, we can equally well diagonalize the matrix R_{(i)(j)} and find the scaling dimensions according to Eq. (14). In Sec.
III A, we will use the 1D Ising model as a concrete example to demonstrate the general argument above.

B. Technical obstacles
There are two major obstacles to the fixed-point analysis in tensor space; they prevent us from obtaining a fixed-point tensor satisfying Eq. (19). The two obstacles are the problem of local correlations and the gauge redundancy of the tensor network language.

Levin and Nave anticipated the first obstacle when looking for fixed points of the RG equation of the TRG [23]. One of the earliest pieces of numerical evidence for the peculiar tensor RG flows of the 2D Ising model was provided by Hinczewski and Berker [18]. Their results indicate that the TRG-type techniques have difficulty integrating out all the local correlations at short distances, so physics at the lattice scale is carried all the way to larger scales. This shortcoming of the TRG-type techniques makes the identification of both non-critical and critical fixed-point tensors very difficult.

To understand how the problem of local correlations at the lattice scale arises in the TRG-type techniques, let us examine the physical picture of the tensor RG transformation. We focus on a concrete example: a tensor network made of 4 × 4 copies of the tensor A shown in Fig. 2, with periodic boundary conditions. The general picture of a tensor RG transformation is similar to the conventional block-spin methods.

Figure 3. (a) The block-tensor transformation A → A_c. The spins shared by two tensors A are summed over according to Eq. (24). The squares are larger after the decimation. (b) The origin of the CDL tensors. When the black square becomes large enough, the spins on one edge are far away from those on another, except for the spins around the four corners. The correlations among the corner spins give rise to the CDL tensors, containing physics at the lattice scale.

For example, we block a square of four tensors by contracting the legs between them and grouping every two legs on the same side into one fat leg. Calling the new tensor A_c,

Z_{4×4} = \mathrm{tTr}\Big( \bigotimes_{x=1}^{16} A \Big) = \mathrm{tTr}\Big( \bigotimes_{x=1}^{4} A_c \Big),   (23)

where A_c is the contraction of a 2 × 2 block of A tensors with the two legs on each side grouped into one fat leg,
(24)

It is enlightening to put the original spin variables back into the tensor network to get a more physical picture of what happens under such a block-tensor RG transformation. We refrain from drawing the legs of A and the dashed lines of the spin lattice in Fig. 2, and surround copies of A with squares on whose sides the spin variables sit. The big picture of the block-tensor transformation in Eq. (23) is shown schematically in Fig. 3(a). The process is similar to the decimation in the conventional approaches. After the spin variables shared by every two A tensors forming the same A_c are summed over, we are left with four bigger squares, with two spin variables sitting on each side of each square. As the squares become large under repeated block-tensor transformations, we expect that, roughly speaking, the spin variables on different edges are far away from each other and thus uncorrelated. The only exception is the spin variables around the four corners. We can use a matrix C, shown in Fig. 3(b), to capture the correlations around the corners; the matrix C must contain physics at the scale of the original lattice constant. Since the spin variables around different corners are far away from each other, the tensor A_CDL corresponding to this black square should factorize into the tensor product of four corner matrices C.

Figure 4. Schematic RG flows of the 2D Ising model. Each point on the dashed line represents the lattice model at a given temperature and is the starting point of an RG transformation. The solid lines with arrows represent different RG flows. (a) The correct RG flow: there are one T = 0 fixed point, one T = ∞ fixed point, and one critical fixed point. (b) The RG flows generated by the TRG and the HOTRG: the two trivial fixed points become two fixed lines, and the critical fixed point disappears.
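The factorization just described is easy to realize explicitly. The sketch below (our own conventions for how the corner-matrix lines pair up; the variable names are illustrative) builds a CDL tensor from a random corner matrix C and checks the hallmark of purely local correlations: the one-site transfer matrix of the network it generates has rank one, i.e., zero correlation length:

```python
import numpy as np

rng = np.random.default_rng(0)
chi = 2                      # dimension of each corner-matrix line
C = rng.standard_normal((chi, chi))

# CDL tensor: each fat leg is a pair of lines, legs ordered (l, u, r, d);
# the four corner matrices connect the lines of adjacent legs (one possible
# pairing convention, chosen here for illustration).
A = np.einsum('ae,fc,dh,gb->abefcdgh', C, C, C, C).reshape(
        chi**2, chi**2, chi**2, chi**2)

# Transfer matrix from the up leg to the down leg, tracing the horizontal
# legs: M[u, d] = sum_l A[l, u, l, d].
M = np.einsum('iuid->ud', A)

# All correlations are confined to single plaquettes, so M factorizes into
# an outer product of two corner chains and has rank one.
print(np.linalg.matrix_rank(M))   # -> 1
```

A rank-one transfer matrix means every correlation function of the CDL network truncates at the plaquette scale, which is why a CDL fixed point carries no universal information beyond lattice-scale physics.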
A tensor with the structure of A_CDL is called a CDL tensor. The CDL tensors are fixed points of the RG equations of the TRG [22, 23, 25, 30] and of the HOTRG [32]. This shows that the TRG and the HOTRG have difficulty integrating out the local interactions among the spin variables around the corners. If we start with two temperatures T_1 ≠ T_2, both larger than the critical temperature T_c of the 2D Ising model, either of these two methods will generate tensors flowing to two different CDL tensors A_CDL,1 ≠ A_CDL,2, as a natural consequence of the fact that these CDL tensors depend directly on the bare interaction constants. At criticality, previous numerical calculations indicate that we never reach a critical fixed-point tensor [18, 25]. These calculations suggest the tensor RG flows shown in Fig. 4(b), where the low- and high-temperature fixed points turn into two fixed lines and the critical fixed point disappears. By comparison, the correct RG flow is shown in Fig. 4(a). We will introduce a way to solve the problem of the CDL tensors for the HOTRG in Sec. II C.

The second obstacle that prevents us from achieving Eq. (19) is that the tensor network representation of the partition function in Eq. (3) has gauge redundancy. If two tensors Ã and A are related through invertible matrices S_x, S_y by the gauge transformation

Ã_{ijkl} = \sum_{m,n,p,q} A_{mnpq} (S_x)_{im} (S_y)_{jn} (S_x^{-1})_{pk} (S_y^{-1})_{ql},   (25a)

or pictorially, with S_x, S_y and their inverses acting on the four legs of A,   (25b)

then the two tensor networks formed by A and Ã represent the same partition function Z. Equation (25) defines an equivalence relation in which all elements of an equivalence class represent the same partition function Z. The gauge redundancy makes the correspondence between a tensor A and the partition function Z no longer one-to-one. In the following, we refer to the equivalence class defined by the gauge transformation in Eq.
(25) as [A].

In comparison, in the conventional Hamiltonian approach, we fix the form of the new Hamiltonian H' to be the same as that of the old H in Eq. (6), with H' characterized by a set of new coupling constants K'. This gives a one-to-one correspondence between a set of coupling constants K = {K_j} and a Hamiltonian H (or a partition function Z).

The gauge redundancy makes the canonical RG prescription in tensor space less straightforward than that in Hamiltonian space. Even if we have reached a representative tensor A* of the fixed-point equivalence class [A*], the tensor RG equation could bring this tensor to another representative of [A*],

A* \xrightarrow{\text{RG}} Ã* \quad \text{or} \quad Ã* = T(A*).   (26)

In general, we must fix the gauge of the tensor during a tensor RG transformation by choosing a preferred set of bases, so that the fixed-point tensor is manifestly fixed, as in Eq. (19). We will show how the gauge is fixed for a HOTRG-like scheme in Sec. II D.

C. GILT-HOTRG
In this subsection, we present a HOTRG-like scheme that solves the first technical obstacle. Compared with the state-of-the-art TRG-type methods [22, 24–27, 30, 33–36] that can generate correct tensor RG flows, our scheme is most easily generalized to dimensions higher than two and is convenient for the subsequent gauge-fixing and linearization procedures. The graph-independent local truncation (GILT) [30] is performed to filter out the problematic local correlations before the coarse graining of the HOTRG. We call this scheme GILT-HOTRG.

The key feature of the GILT is that it is a stand-alone procedure to filter out the local correlations and does
not change the geometry of a given tensor network, so it is very flexible. It has been shown that the TRG combined with the GILT is able to generate correct tensor RG flows for the 2D Ising model [30] and the 2D φ⁴ theory [37].

Figure 5. The process of the GILT. The four copies of the C matrix are the unknown inner structure of the adjacent 4-leg tensors; they are drawn explicitly to make the demonstration clearer. In the first step, a low-rank matrix Q is inserted into a bond. Then Q is split into two pieces using a singular value decomposition; Q is constructed so that it cuts the legs of the corner matrices C during the splitting. Finally, the pieces of Q are absorbed into the two neighboring tensors. The original GILT paper [30] presents a nice way to determine the low-rank matrix Q.

Figure 5 summarizes the basic process of the GILT. The loop of four matrices C inside the plaquette represents the local correlations (see Fig. 3(b) and imagine putting four CDL tensors together to form a plaquette). The first step, which is the most crucial one, is to insert a low-rank matrix Q into the leg we wish to truncate. The remaining two steps are exact: we split Q into two pieces using a singular value decomposition and absorb the two pieces into the two adjacent A tensors. The bond dimension of the leg becomes smaller, and the local correlations on this leg are filtered out.

The low-rank matrix Q is determined by examining the environment E of the bond and performing the singular value decomposition

E = U s V†,   (27)

where we refrain from drawing the unknown C matrices in the plaquette. The environment E of the bond should be thought of as a linear map from the vector space of the legs with ingoing arrows to that of the legs with outgoing arrows. We can use the tensor U and the diagonal matrix s in Eq. (27) to construct the low-rank matrix Q. To this end, we first define a vector t by contracting the two ingoing legs of the tensor U.
(28)

Then we perform a soft truncation of the vector t according to

t'_i = t_i \frac{s_i^2}{s_i^2 + ε_{gilt}^2},   (29)

where the s_i are the singular values and ε_gilt is the hyperparameter of the GILT. Equation (29) says that the components of the vector t are suppressed if the corresponding singular values s_i are much smaller than ε_gilt. The justification for the truncation in Eq. (29) can be found in Ref. [30]. The low-rank matrix Q is constructed from the tensor U† and the truncated vector t',

Q ≡ U† t'.   (30)

It is proved in Ref. [30] that the matrix Q determined in this way is able to filter out the loop of four C matrices shown in Fig. 5.

Next, we move on to explain the HOTRG. The block-tensor transformation in Eqs. (23) and (24) is exact but not practical, since the bond dimension grows exponentially with the original lattice size. The HOTRG is an approximate tensor RG transformation, which keeps the bond dimension from growing. For the HOTRG in the vertical direction, we aim at the following approximation of a local patch of two copies of A put together vertically:

(the vertical pair of A tensors, with each fat horizontal leg of dimension χ² projected by w and w† down to dimension χ̃) ≈ (the vertical pair of A tensors),   (31)

where w is an isometric tensor to be determined and w† is its Hermitian conjugate. The isometry w is a linear map V_χ̃ → V_χ ⊗ V_χ, where V_χ denotes a χ-dimensional vector space, and it satisfies w†w = 1. We will later see that the isometric condition on w makes the gauge fixing in the HOTRG easier. It is shown in Refs. [14, 26] that a good approximation is achieved if the isometry w is the collection of the χ̃ eigenvectors corresponding to the χ̃ largest eigenvalues of the χ²-by-χ² positive semi-definite matrix MM†, with the matrix M defined by grouping the two left legs of the vertical pair of A tensors into its row index,

M_{(l_1 l_2),(u r_1 r_2 d)} = \sum_m A_{l_1 u r_1 m} A_{l_2 m r_2 d}.   (32)

We use this approximation to replace all pairs of A tensors in the tensor network representation of the partition

Figure 6.
The plaquettes and bonds where the GILT is applied before the subsequent HOTRG coarse graining. (a) The two copies of A in the center will be coarse grained vertically. The problematic loops of local correlations are drawn explicitly in the plaquettes to make the demonstration clearer; they are the unknown inner structure of the main tensor A. (b) Copies of the low-rank matrices Q_A, Q_B, determined by the GILT [30], are inserted into the bonds to catch the legs of the loops. (c) Q_A, Q_B are split using singular value decompositions. The GILT ensures that the legs of the loops do not leak out. (d) The pieces of the Q_A, Q_B matrices are absorbed into the copies of the tensor A. The subsequent HOTRG is applied to the patch of tensors in the dashed circle.

function Z_{4×4} in Eq. (23), to get

Z_{4×4} ≈ (the network with every vertical pair of A tensors replaced by its truncated contraction) = (the coarser network of tensors A'),   (33)

where in the second step we contract the two A tensors and w, w† in the dashed circle to get a coarser tensor A',

A' ≡ w† (A A) w   (34)

(the two copies of A are contracted along their shared vertical bond, and each fused pair of horizontal legs is truncated by w or w†). Notice that in the approximation step in Eq. (33) we move the two leftmost w tensors to the right, because we have periodic boundary conditions. Equation (34) defines the HOTRG coarse graining in the vertical direction. We usually choose χ̃ ≤ χ_max in Eq. (31) to prevent the bond dimension from growing.

It is shown in Ref. [32] that the HOTRG in the vertical direction transforms the A_CDL of Fig. 3(b) into a tensor proportional to a CDL tensor built from the four outer C matrices,   (35)

which means that although the HOTRG can detect and project out the four inner C matrices, it can do nothing about the four outer C matrices. Therefore, the GILT should be applied to filter out these four outer C matrices before the HOTRG coarse graining. To this end, we apply the GILT to the plaquettes where the loops of local correlations are drawn explicitly in Fig. 6 and insert two low-rank matrices Q_A, Q_B into the upper and lower bonds of each plaquette. The legs of the unwanted C matrices are truncated after the splitting of Q_A, Q_B.
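The two ingredients above, the construction of Q from Eqs. (27)-(30) and the HOTRG isometry of Eqs. (31)-(32), can be sketched in a few lines of numpy. The function names, the leg order (left, up, right, down), the row/column convention for the environment E, the trace form of the vector t, and the toy rank-deficient environment are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gilt_Q(E, eps):
    """Low-rank bond matrix in the spirit of Eqs. (27)-(30). E is the bond
    environment, reshaped so that its rows are the paired bond indices (a, b)."""
    chi = int(round(np.sqrt(E.shape[0])))
    U, s, _ = np.linalg.svd(E, full_matrices=False)      # Eq. (27)
    t = np.einsum('aai->i', U.reshape(chi, chi, -1))     # Eq. (28), trace of U
    tp = t * s**2 / (s**2 + eps**2)                      # Eq. (29)
    return (U @ tp).reshape(chi, chi)                    # Eq. (30)

def hotrg_isometry(A, chi_new):
    """Isometry w of Eqs. (31)-(32): top eigenvectors of M M^dagger, where M
    groups the two left legs of a vertical pair of A tensors (legs l,u,r,d)."""
    chi = A.shape[0]
    M = np.einsum('aurm,bmsd->abursd', A, A).reshape(chi**2, -1)
    evals, evecs = np.linalg.eigh(M @ M.T)      # ascending eigenvalues
    return evecs[:, ::-1][:, :chi_new]          # keep the chi_new largest

# Sanity check with the Ising tensor at K = 0.44.
s_ = np.array([1.0, -1.0])
si, sj, sk, sl = np.ix_(s_, s_, s_, s_)
A = np.exp(0.44 * (si*sj + sj*sk + sk*sl + sl*si))

w = hotrg_isometry(A, 2)
print(np.allclose(w.T @ w, np.eye(2)))    # isometric condition w^T w = 1 -> True

# For eps much smaller than the retained singular values, inserting Q on the
# bond is equivalent to inserting the identity as far as this environment is
# concerned (here: a toy rank-2 environment of a chi = 2 bond).
rng = np.random.default_rng(1)
E = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 7))
Q = gilt_Q(E, 1e-12)
print(np.allclose(E.T @ Q.ravel(), E.T @ np.eye(2).ravel()))   # -> True
```

The second check illustrates the defining property of the GILT: the soft truncation of Eq. (29) may strongly suppress the components of t associated with negligible singular values of E, yet the contraction of the network is unchanged, because those components lie outside the range of the environment.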
Finally, we apply the ordinary HOTRG in the vertical direction to the local patch of tensors in the dashed circle in Fig. 6 to get the coarser tensor A',

A' ≡ (the contraction of Eq. (34), with the GILT pieces Q_Al, Q_Ar, Q_Bl, Q_Br absorbed into the copies of A beforehand).   (36)

In this way, we remove all horizontal legs of the C matrices: half of them by the GILT and the other half by the contraction in the HOTRG. We repeat the analogous GILT and HOTRG on A' in the horizontal direction. The coarse-graining steps in the two directions together define the RG equation of the GILT-HOTRG,

A_c = (the vertical step of Eq. (36) followed by the analogous horizontal step, with isometries v, v† and low-rank pieces Q'_Au, Q'_Ad, Q'_Bu, Q'_Bd).   (37)

The computational cost of the GILT-HOTRG is O(χ⁷), the same as that of the HOTRG.

The coarse graining defined in Eq. (37) is able to simplify the A_CDL tensor of Fig. 3(b) to a single number,
(38)

Equation (38) shows that the GILT-HOTRG successfully filters out the local correlations among the spin variables around the corners at the lattice scale (see Fig. 3(b)). Since the CDL tensors are no longer fixed points of the RG equation of the GILT-HOTRG, the peculiar fixed lines in Fig. 4(b) generated by the HOTRG collapse to fixed points; we expect the RG equation of the GILT-HOTRG to exhibit the critical fixed-point tensor shown schematically in Fig. 4(a).

D. Gauge fixing and linearization for the GILT-HOTRG
We now show how the gauge is fixed and give the explicit form of the linearized RG equation for the GILT-HOTRG.

Part of the gauge can be fixed if the physical model possesses a global internal symmetry. The global symmetry can be incorporated into the tensor network representation of the model [38–40]; this is a generalization of Schur's lemma from matrices to general tensors. For the 2D Ising model, the Z₂ symmetry can be imposed: each index of the tensor A breaks into an even and an odd sector. Half of the gauge is fixed once A is written in the bases where the states in the even sector transform trivially and the states in the odd sector are multiplied by −1. The remaining gauge freedom of A can be fixed by going to the diagonal bases of the tensor. We show how the S_x gauge redundancy in Eq. (25) is fixed; the S_y redundancy can be dealt with in the same way. Given a tensor A, we first contract its two vertical legs to produce a transfer matrix N_x,

(N_x)_{ik} = \sum_j A_{ijkj}.   (39)

We then find the eigenvalue decomposition of this matrix,

N_x = W_x λ W_x^{-1},   (40)

where λ is the diagonal matrix of eigenvalues. The gauge-fixing transformation in the horizontal direction is defined by acting with W_x^{-1} and W_x on the horizontal legs of the tensor A,

A_{ijkl} → \sum_{m,p} (W_x^{-1})_{im} A_{mjpl} (W_x)_{pk}.   (41)

To see why the gauge-fixing procedure of Eqs. (39) to (41) defines a preferred set of bases, let us examine how the tensor Ã of Eq. (25) transforms under it. The contraction of the two vertical legs of Ã annihilates S_y and S_y^{-1} on the right-hand side of Eq. (25b); the resulting Ñ_x is related to N_x through

Ñ_x = S_x N_x S_x^{-1}.   (42)

It is then straightforward to see that the matrix W̃_x coming from the eigenvalue decomposition of Ñ_x is related to W_x through

W̃_x = S_x W_x d_x,   (43)

where d_x is a diagonal matrix coming from the phase ambiguities of the eigenvectors; its diagonal entries are phases for general complex matrices. For a real symmetric N_x, the diagonal entries of d_x are ±
1. After the horizontal gauge fixing, the tensor Ã becomes

(the gauge-fixed A, with the sign matrices d_x acting on its two horizontal legs and S_y, S_y^{-1} still acting on its vertical legs).   (44)

Comparing Eq. (41) with Eq. (44), we see that the gauge redundancies of the two horizontal legs are fixed up to phase ambiguities. For 2D classical statistical models with spatial reflection symmetries, such as the 2D Ising model, the real matrix N_x can be made symmetric, so the phase ambiguities reduce to sign ambiguities.

The gauge-fixing procedure described in Eqs. (39) to (41) is general for all TRG-type techniques. However, this procedure is not necessary for the GILT-HOTRG applied to systems with spatial reflection symmetries, like the 2D Ising model, since the RG equation of the GILT-HOTRG has a preferred set of bases. As a result, the gauge redundancy in Eq. (25) collapses to phase ambiguities (or sign ambiguities for real tensors) in the GILT-HOTRG. To keep things as simple as possible, we focus on real tensors in the following discussion; the generalization to complex tensors is straightforward.

Write the tensor RG equation in Eq. (37) schematically as A_c = T(A). For two real tensors A, Ã related by the gauge transformation defined in Eq. (25), where we further restrict S_x, S_y to be orthogonal matrices, the new tensors produced by the GILT-HOTRG according to Eq. (37), A_c = T(A) and Ã_c = T(Ã), are equal up to sign ambiguities,

(Ã_c)_{ijkl} = (A_c)_{ijkl} (d_x)_i (d_y)_j (d_x)_k (d_y)_l,   (45)

where d_x, d_y are vectors with components ±
1. Imaginethat we manage to fix the sign ambiguities, then theGILT-HOTRG ensures T ( A ) = T ( ˜ A ) . (46)Since the orthogonal matrices S x , S y are arbitrary, equa-tion (46) says that the whole equivalence class [ A ] willbe mapped into the same tensor A c . This means thatthe GILT-HOTRG, after incorporating the sign fixingstep, will choose a preferred set of bases. It is worth tomention that the TRG has a similar property [21]. Fora fixed-point tensor, equation (46) indicates that we canstart with any representation ˜ A ∗ of the equivalence class[ A ∗ ], and the GILT-HOTRG will bring ˜ A ∗ to the properbases; further GILT-HOTRG coarse graining will satisfyEq. (19), T ( ˜ A ∗ ) = T (cid:16) T ( ˜ A ∗ ) (cid:17) ≡ A ∗ . (47)Let us prove the property of the tensor RG equation ofthe GILT-HOTRG in Eq. (45) and explain how the signis fixed to have Eq. (46). We focus on the equivalencerelation defined as ˜ A = AS x S y S Tx S Ty , (48)where S x , S y are orthogonal matrices. It is sufficient toconsider such orthogonal changes of gauge if we restrictto the representations of the equivalence class [ A ] withspatial reflection symmetries [26], A kjil = (cid:88) j (cid:48) l (cid:48) ( O y ) jj (cid:48) ( O y ) ll (cid:48) A ij (cid:48) kl (cid:48) (49a)and A ilkj = (cid:88) i (cid:48) k (cid:48) ( O x ) ii (cid:48) ( O x ) kk (cid:48) A i (cid:48) jk (cid:48) l , (49b) where O x , O y are orthogonal matrices, also with O x = O y = , and the legs’ order convention is as per Eq. (2).It can be shown that, if we start with a tensor withreflection symmetry, the GILT-HOTRG will preserve thereflection symmetry and will rotate the tensor into theset of bases where O x , O y become diagonal, with theirdiagonal entries ± A is fed into the right hand side of Eq. 
(36), the ˜ A (cid:48) weobtain in the left hand is related with the original A (cid:48) by ˜ A (cid:48) = A (cid:48) d x S y d x S Ty , (50)which means that the gauge redundancy in the horizontallegs will be fixed with only sign ambiguities left during thefirst half of the GILT-HOTRG in the vertical direction.It follows immediately that the full GILT-HOTRG willgive Eq. (45).Let us first figure out the correct ˜ Q A , ˜ Q B matricesin Fig. 6. The environment in Eq. (27) is multiplied byseveral orthogonal matrices, which will not change the sin-gular values, so ˜ s i = s i . It is easy to check that the tensor U in the singular value decomposition becomes (the signambiguities coming from the singular value decompositiondoes not matter here) ˜ U = US Tx S x . (51)The vector ˜ t is thus the same as the original t by itsdefinition in Eq. (28), which further gives ˜ t (cid:48) i = t (cid:48) i sincethe tilde version of the right hand side of Eq. (29) is thesame as the original version. Finally, equation (30) gives˜ Q A (30) = ˜ U † ˜ t (cid:48) = U † S x S Tx t (cid:48) = Q A S x S Tx . (52)Equation (52) means that the low rank matrix Q A trans-forms in a nice way when we perform a gauge transforma-tion of the tensor A defined in Eq. (48). If the singularvalues of Q A do not have degeneracy , after splitting of Q A , we have Q Ar , Q Al transform like (the sign ambigui-ties coming from singular value decomposition of Q A and˜ Q A would kick in and contribute to d x in Eq. (50), butthey are not drawn explicitly in the equation below) Q Ar S x ˜ Q Ar = and ˜ Q Al = Q Al S Tx . (53)0The S x , S Tx matrices that Q Ar , Q Al pick up will cancelthose acting on the A tensor when ˜ Q Ar , ˜ Q Al are con-tracted with the ˜ A tensor in Eq. (48). The same argu-ment works for Q B . Equation (53) indicates that all the S x , S Tx matrices acting on the four horizontal legs of thelocal patch in Eq. (36) will be canceled by the low-rankmatrices used in the GILT process. 
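The invariance and covariance properties used in Eqs. (51)–(53) are generic features of the singular value decomposition, and can be sketched numerically. The toy below (random matrices, not the actual GILT environment of Eq. (27)) shows that an orthogonal transformation of the environment leaves the singular values unchanged and rotates $U$ covariantly, up to per-column sign ambiguities.

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=(4, 6))                   # toy environment matrix
S, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # orthogonal change of gauge

U, s, Vt = np.linalg.svd(E, full_matrices=False)
U2, s2, Vt2 = np.linalg.svd(S @ E, full_matrices=False)

# Singular values are gauge invariant (the s_i = \tilde s_i of the text) ...
assert np.allclose(s, s2)

# ... and U transforms covariantly, up to column-sign ambiguities d:
# U2 = S U d, with d diagonal and entries +1 or -1.
d = np.diag(U.T @ S.T @ U2)
assert np.allclose(np.abs(d), 1.0)
assert np.allclose(U2, S @ U @ np.diag(d))
```

Because a low-rank matrix built from `U` and the singular values inherits these factors, the orthogonal gauge matrices cancel when it is contracted back against the gauge-transformed tensor, which is the content of Eq. (53).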
The above analysis shows that during the first half of the GILT-HOTRG, the gauge in the horizontal legs is fixed with only sign ambiguities left, since the GILT favors the bases chosen by the singular value decompositions of $Q_A$, $Q_B$.

However, there is one more twist. In practice, we observe that the low-rank matrices are projection operators, which are highly degenerate. As a result, the gauge redundancy in the degenerate subspace leaks out and is seen by the subsequent HOTRG process. Luckily, the HOTRG has a feature similar to the GILT process: it favors the bases where the positive semi-definite matrix $MM^{\dagger}$ (see the definition of the matrix $M$ in Eq. (32)) is diagonal (the sign ambiguities coming from the eigenvalue decomposition of $MM^{\dagger}$ similarly kick in here and contribute to $d_x$ in Eq. (50)). It is straightforward to see that the isometry $w$ picks up the suitable $S_x$, $S_x^{T}$ matrices to cancel the gauge transformation leaking out from the GILT process. One may still worry about degeneracies in the eigenvalues of $MM^{\dagger}$; our result in Fig. 9(b) shows, a posteriori, that the potential degeneracy does not cause any problem for the 2D Ising model at criticality.

The sign ambiguities $d_x$, $d_y$ in Eq. (45) can be determined by comparing the signs of the components of $\tilde A_c$ and $A_c$. For example, upon making sure that $(\tilde A_c)_{1111}$ and $(A_c)_{1111}$ are both positive, set $j = k = l = 1$ in Eq. (45) to obtain
$$(\tilde A_c)_{i111} = (A_c)_{i111}\, (d_x)_i. \quad (54)$$
The relative sign of $(\tilde A_c)_{i111}$ and $(A_c)_{i111}$ determines $(d_x)_i$. However, this sign-fixing method breaks down if both $(\tilde A_c)_{i111}$ and $(A_c)_{i111}$ vanish, which occurs whenever there is a symmetry. This is why we first fix part of the gauge by exploiting the global internal symmetry of the physical model; we can then apply Eq. (54) within each symmetry sector of the tensor. The detailed implementation of the sign-fixing procedure for $Z_2$-symmetric tensors can be found in the source code of this paper (see Appendix A).
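A minimal sketch of the Eq. (54)-style sign fixing is given below. It is a simplified variant for a generic tensor whose reference components are nonzero (no symmetry sectors), with 0-based indices standing in for the text's $i = 1$ reference component; the paper's actual $Z_2$-sectored implementation lives in the linked source code.

```python
import numpy as np

rng = np.random.default_rng(1)
chi = 3
A = rng.normal(size=(chi, chi, chi, chi))   # stand-in for A_c; generic, no exact zeros

# Random sign vectors playing the role of d_x, d_y in Eq. (45).
dx = rng.choice([-1.0, 1.0], size=chi)
dy = rng.choice([-1.0, 1.0], size=chi)
A_tilde = np.einsum("ijkl,i,j,k,l->ijkl", A, dx, dy, dx, dy)

# Eq. (54)-style comparison: freeze three legs at the reference basis state
# (index 0 here) and read off the relative signs along the remaining leg.
dx_rec = np.sign(A_tilde[:, 0, 0, 0] / A[:, 0, 0, 0])
dy_rec = np.sign(A_tilde[0, :, 0, 0] / A[0, :, 0, 0])

# Undo the ambiguity; the recovered signs differ from dx, dy at most by a
# reference-component sign, which cancels when applied to all four legs.
A_fixed = np.einsum("ijkl,i,j,k,l->ijkl", A_tilde, dx_rec, dy_rec, dx_rec, dy_rec)
assert np.allclose(A_fixed, A)
```

When a symmetry forces the reference components to vanish, the same comparison is done sector by sector, which is why half of the gauge is fixed by the $Z_2$ structure first.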
This completes the arguments for Eqs. (45) and (46).

After reaching the fixed-point tensor $A^*$ in Eq. (47), the next step is to linearize the RG equation of the GILT-HOTRG in Eq. (37). We substitute $A = A^* + \delta A$ into the right-hand side of Eq. (37) and collect the terms that are first order in $\delta A$ to get $\delta A_c$,
$$\delta A_c = \delta A\, A^* A^* A^* + A^* A^* \delta A\, A^* + \text{two similar terms}, \quad (55)$$
where each term replaces one of the four copies of $A^*$ in the coarse-graining block by $\delta A$. The result resembles the product rule for taking differentials in calculus. Equation (55) provides a simple pictorial representation of the linearized tensor RG equation $R$ in Eq. (22) for the GILT-HOTRG. In practice, once the fixed-point tensor $A^*$, the pieces of low-rank matrices $Q_A$, $Q_B$, and the isometric tensors $w$, $v$ in Eq. (37) are determined, automatic differentiation can linearize Eq. (37) around $A^*$ and generate Eq. (55) for us. Many libraries support automatic differentiation, including PyTorch [42] and JAX [43].

III. BENCHMARKS
We use the classical Ising model in 1D and 2D to demonstrate how to carry out the canonical RG prescription in tensor space. The Ising model in 1D serves as a concrete example to elucidate the general argument in Sec. II A, while the Ising model in 2D provides more nontrivial benchmark results for our method.
A. The Ising Model in 1D
The Ising model in 1D has an exact real-space RG transformation realized via decimation. Even better, the decimation has a natural tensor network representation. This makes the 1D Ising model a nice example for seeing the relation between the old and the new approaches to the canonical RG prescription.

The partition function is
$$Z = \sum_{\{\sigma_j\}} \exp\Big[\sum_{i=1}^{N} H(\sigma_i, \sigma_{i+1})\Big], \quad (56)$$
where the local interaction involves at most nearest-neighbor terms,
$$H(\sigma_1, \sigma_2) = g + \frac{h}{2}(\sigma_1 + \sigma_2) + K\sigma_1\sigma_2. \quad (57)$$
The decimation process is shown in Fig. 7. It is realized by summing over all the even-numbered spins and then renumbering the remaining odd-numbered spins.

[Figure 7. The decimation for the 1D Ising model. The black dots are spin variables. The spins on even sites $\sigma_2, \sigma_4, \ldots$ are summed over, and the remaining spins $\sigma_1, \sigma_3, \ldots$ are renamed $\sigma'_1, \sigma'_2, \ldots$ to become the new spin variables. In the tensor network language, this decimation is nothing but a multiplication of two transfer matrices to form a coarse-grained matrix $A_c = AA$.]

We denote $\sigma'_i = \sigma_{2i-1}$, $s_i = \sigma_{2i}$ and sum over all $s$-spins in the partition function in Eq. (56) to obtain
$$Z = \sum_{\{\sigma'_j\}} \sum_{\{s_j\}} \exp\Big\{\sum_{i=1}^{N/2} \big[H(\sigma'_i, s_i) + H(s_i, \sigma'_{i+1})\big]\Big\}, \quad (58)$$
from which we can define the effective local interaction $H'$ through
$$\exp[H'(\sigma'_1, \sigma'_2)] = \sum_{s=\pm 1} \exp[H(\sigma'_1, s) + H(s, \sigma'_2)], \quad (59)$$
where the effective local interaction has the same form as the old one in Eq. (57) but with new coupling constants $g', h', K'$,
$$H'(\sigma_1, \sigma_2) = g' + \frac{h'}{2}(\sigma_1 + \sigma_2) + K'\sigma_1\sigma_2. \quad (60)$$
The partition function can then be fully described by the new $\sigma'$-spins,
$$Z = \sum_{\{\sigma'_j\}} \exp\Big[\sum_{i=1}^{N/2} H'(\sigma'_i, \sigma'_{i+1})\Big]. \quad (61)$$
Equations (57), (59) and (60) together define the RG equation that maps the old coupling constants $(g, h, K)$ to the new coupling constants $(g', h', K')$. The explicit expression of the RG equation can be found in Kardar's textbook [44]. The RG equation has two fixed points, one for the high-temperature phase and the other for the low-temperature phase. Let us focus on the high-temperature fixed point here, where the coupling constants are $g^* = \log(1/2)$, $h^* = 0$, $K^* = 0$. The linearized RG equation around this fixed point gives $\delta g' = 2\delta g$, $\delta h' = \delta h$, $\delta K' = 0 \times \delta K$; the matrix $R$ is in its diagonal form, with eigenvalues $2, 1, 0$ for $\delta g, \delta h, \delta K$, respectively.

Next, we translate the above decimation process into the tensor network language. We first define the tensor $A$, sitting on the bond connecting two spins as shown in Fig. 7, as
$$A_{\sigma_1\sigma_2} = \exp[H(\sigma_1, \sigma_2)]. \quad (62a)$$
Using the expression for $H$ in Eq. (57), we have
$$A = \begin{pmatrix} e^{\,g+h+K} & e^{\,g-K} \\ e^{\,g-K} & e^{\,g-h+K} \end{pmatrix}, \quad (62b)$$
which is the familiar transfer matrix. Each component of the tensor $A$ is a function of the coupling constants $g, h, K$, as is claimed in Eq. (15). The partition function in Eq. (56) can be rewritten as
$$Z = \sum_{\{\sigma_j\}} \prod_{i=1}^{N} A_{\sigma_i\sigma_{i+1}}. \quad (63)$$
The decimation in the tensor network language is a multiplication of two old $A$ matrices to form a new $A_c$ matrix,
$$A_c = AA. \quad (64)$$
In terms of the new $A_c$ matrix, the partition function is
$$Z = \sum_{\{\sigma'_j\}} \prod_{i=1}^{N/2} (A_c)_{\sigma'_i\sigma'_{i+1}}. \quad (65)$$
Equation (64) is the RG equation in the tensor network language. Each component of $A_c$ is again a function of the coupling constants $g, h, K$, but with a different functional form, as is claimed in Eq. (16).
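The 1D decimation is small enough to verify end to end. The sketch below (a minimal check, assuming only the equations quoted above, with a helper `couplings` that inverts Eq. (62b)) builds the transfer matrix, performs one decimation step $A_c = AA$, checks the well-known $h = 0$ recursion $\tanh K' = \tanh^2 K$, confirms the high-temperature fixed point $A^*A^* = A^*$, and verifies that the finite-difference Jacobian of $T(A) = AA$ at $A^*$ has eigenvalues $2, 1, 1, 0$, consistent with the $2, 1, 0$ found from the coupling-constant linearization above.

```python
import numpy as np

def transfer_matrix(g, h, K):
    """Eq. (62b): A[s1, s2] = exp[g + (h/2)(s1 + s2) + K s1 s2], with s = +1, -1."""
    s = np.array([1.0, -1.0])
    return np.exp(g + 0.5 * h * (s[:, None] + s[None, :]) + K * s[:, None] * s[None, :])

def couplings(A):
    """Invert Eq. (62b): recover (g, h, K) from a symmetric positive 2x2 matrix."""
    lp, lq, lr = np.log(A[0, 0]), np.log(A[0, 1]), np.log(A[1, 1])
    return (lp + lr + 2 * lq) / 4, (lp - lr) / 2, (lp + lr - 2 * lq) / 4

# One decimation step, Eq. (64): A_c = A A.
g, h, K = 0.3, 0.0, 0.7
A_c = transfer_matrix(g, h, K) @ transfer_matrix(g, h, K)
_, h_new, K_new = couplings(A_c)

# Known h = 0 recursion of 1D Ising decimation: tanh K' = tanh^2 K.
assert np.isclose(np.tanh(K_new), np.tanh(K) ** 2)
assert np.isclose(h_new, 0.0)

# High-temperature fixed point: g* = log(1/2), h* = K* = 0, so A* A* = A*.
A_star = transfer_matrix(np.log(0.5), 0.0, 0.0)
assert np.allclose(A_star, 0.5 * np.ones((2, 2)))
assert np.allclose(A_star @ A_star, A_star)

# Finite-difference Jacobian of T(A) = A A at A*: eigenvalues 2, 1, 1, 0.
eps = 1e-6
R = np.zeros((4, 4))
for col, (a, b) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    dA = np.zeros((2, 2))
    dA[a, b] = eps
    R[:, col] = ((A_star + dA) @ (A_star + dA) - A_star @ A_star).ravel() / eps
assert np.allclose(sorted(np.linalg.eigvals(R).real, reverse=True), [2, 1, 1, 0], atol=1e-5)
```

The finite-difference loop is the poor man's version of the automatic differentiation used later for the 2D model; an autodiff library would return the same Jacobian exactly.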
If we further require that $A_c$ have the same form as $A$ in Eq. (62) but with $H$ replaced by $H'$, the new coupling constants $g', h', K'$ can be solved for in terms of the old ones, which is what we do in the conventional approach. The advantage of the tensor network language is that the RG equation in Eq. (64) alone suffices for the canonical RG prescription in tensor space. First, let us set the coupling constants in Eq. (62b) to the high-temperature fixed point $g^* = \log(1/2)$, $h^* = 0$, $K^* = 0$ to get the fixed-point tensor,
$$A^* = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. \quad (66)$$
It can be checked that $A^*A^* = A^*$. The linearized version of Eq. (64) around this fixed-point tensor is
$$\delta A_c = \delta A\, A^* + A^* \delta A = I\,\delta A\, A^* + A^*\,\delta A\, I, \quad (67)$$
where in the last step we insert two identity matrices. Writing Eq. (67) in component form, we have $(\delta A_c)_{ab} = \sum_{\alpha\beta} \big[I_{a\alpha}(\delta A)_{\alpha\beta}(A^*)_{\beta b} + (A^*)_{a\alpha}(\delta A)_{\alpha\beta} I_{\beta b}\big]$. We can read off the matrix of the linearized RG equation as
$$R_{(ab)(\alpha\beta)} = \frac{\partial(\delta A_c)_{ab}}{\partial(\delta A)_{\alpha\beta}} = I_{a\alpha}(A^*)_{\beta b} + (A^*)_{a\alpha} I_{\beta b}, \quad (68)$$
where we group the two indices $a, b$ into a single index $(ab)$, and $\alpha, \beta$ into $(\alpha\beta)$. If we put the grouped index into the order
$$(11) \to 1, \quad (12) \to 2, \quad (21) \to 3, \quad (22) \to 4, \quad (69)$$
the matrix takes the value
$$R = \begin{pmatrix} 1 & 1/2 & 1/2 & 0 \\ 1/2 & 1 & 0 & 1/2 \\ 1/2 & 0 & 1 & 1/2 \\ 0 & 1/2 & 1/2 & 1 \end{pmatrix}. \quad (70)$$
This matrix $R$ is symmetric, and we can find its eigenvalues and eigenvectors: $\lambda_1 = 2$, $v_1 = (1, 1, 1, 1)^T$; $\lambda_2 = 1$, $v_2 = (1, 0, 0, -1)^T$; $\lambda_3 = 1$, $v_3 = (0, 1, -1, 0)^T$; and $\lambda_4 = 0$, $v_4 = (1, -1, -1, 1)^T$. The eigenvalues are the same as those obtained in the conventional method.

The relation between the canonical RG prescription in tensor space and in the space of Hamiltonians can be clarified by noticing that the relation between the coupling constants and the tensor $A$ is given in Eq. (62b). We perturb the coupling constants around the fixed point, $g_p = \log(1/2) + \delta g$, $h_p = \delta h$, $K_p = \delta K$, substitute them into the right-hand side of Eq. (62b), and Taylor expand to get the perturbed tensor,
$$A_p = A^* + \frac{\delta g}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} + \frac{\delta h}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + \frac{\delta K}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} + \text{higher-order terms}. \quad (71)$$
We can read off $\delta A = A_p - A^*$ as
$$\delta A = \frac{\delta g}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} + \frac{\delta h}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + \frac{\delta K}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \quad (72)$$
which is Eq. (20) in practice. Recalling the order convention in Eq. (69), we see the correspondence $v_1 \leftrightarrow \delta g$, $v_2 \leftrightarrow \delta h$, and $v_4 \leftrightarrow \delta K$.

B. The Ising Model in 2D
There is no exact RG transformation for the Ising model in 2D, so we use the GILT-HOTRG developed in Sec. II C to generate an RG flow in tensor space. The partition function is given in Eq. (1), and we translate it into a tensor network in Fig. 2. Let us denote the initial tensor in Eq. (2) as $A^{(0)}$. To prevent a rapid growth of the magnitude of the tensor during the RG transformation, we pull out the Frobenius norm of the tensor, $A^{(0)} = \|A^{(0)}\| \mathcal{A}^{(0)}$, to define a normalized tensor $\mathcal{A}^{(0)}$. The normalized tensor $\mathcal{A}^{(0)}$ is fed into the RG equation of the GILT-HOTRG in Eq. (37), and we denote the output coarse-grained tensor as $A^{(1)}$, from which the norm $\|A^{(1)}\|$ is pulled out and the normalized tensor $\mathcal{A}^{(1)}$ is defined in the same way as in the previous step. The process can be repeated, so we have $A^{(n)} = \|A^{(n)}\| \mathcal{A}^{(n)}$ at the $n$-th step. The RG flow in tensor space can be conveniently visualized by examining the evolution of the norms $\|A^{(n)}\|$ as the RG step $n$ increases.

[Figure 8. The RG flows of the tensor norms $\|A^{(n)}\|$ at temperatures near the estimated critical temperature $T_c^{[\chi]}$. Different markers represent different deviations $|\Delta T|$ from $T_c^{[\chi]}$. Blue solid lines are for $\Delta T < 0$ and the other lines for $\Delta T > 0$. (a) For the GILT-HOTRG with $\chi = 30$, $\epsilon_{\rm gilt} = 6 \times 10^{-\cdots}$, two trivial fixed points are isolated and the critical fixed point can be reached; this corresponds to the schematic RG flows in Fig. 4(a). (b) For the HOTRG with $\chi = 12$, we have fixed lines and there is no exhibition of a critical fixed point; this corresponds to the schematic RG flows in Fig. 4(b).]

[Figure 9. The RG flows of (a) the singular values defined in Eq. (73), with the largest singular value normalized to 1, and of the difference between successive normalized tensors $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$ (b) with sign fixing and (c) without, all at temperature $T_c^{[30]}$ for the GILT-HOTRG with $\chi = 30$, $\epsilon_{\rm gilt} = 6 \times 10^{-\cdots}$.]

The RG flows of the norms $\|A^{(n)}\|$ indicate that the GILT-HOTRG is capable of generating a correct RG flow for the 2D Ising model in tensor space, shown schematically in Fig. 4(a).
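The norm bookkeeping $A^{(n)} = \|A^{(n)}\| \mathcal{A}^{(n)}$ can be sketched with any contraction playing the role of the RG equation. Below, as a toy stand-in for Eq. (37), we use the 1D transfer-matrix squaring of Sec. III A: the normalized tensor converges to the (trivial) high-temperature fixed point while the pulled-out norms stay finite, and the successive differences $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$ are exactly the convergence diagnostic plotted in Fig. 9.

```python
import numpy as np

# Toy initial tensor: the 1D transfer matrix at g = log(1/2), h = 0, K = 0.1,
# standing in for A^(0); the stand-in RG equation T is matrix squaring.
s = np.array([1.0, -1.0])
A = 0.5 * np.exp(0.1 * s[:, None] * s[None, :])

norm = np.linalg.norm(A)          # Frobenius norm ||A^(0)||
A_n = A / norm                    # normalized tensor  \mathcal{A}^(0)

diffs = []
for n in range(10):
    A_new = A_n @ A_n             # A^(n+1) = T(\mathcal{A}^(n))
    norm = np.linalg.norm(A_new)  # pull out ||A^(n+1)||
    A_next = A_new / norm
    diffs.append(np.linalg.norm(A_next - A_n))  # ||\mathcal{A}^(n+1) - \mathcal{A}^(n)||
    A_n = A_next

# The normalized flow settles onto the trivial fixed point of Eq. (66).
assert diffs[-1] < 1e-12
assert np.allclose(A_n, 0.5 * np.ones((2, 2)))
```

For the genuinely critical 2D flow, the same loop is run with the GILT-HOTRG map in place of matrix squaring; the difference is that the critical fixed point there is only approached for a finite window of RG steps.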
For example, for bond dimension $\chi = 30$ and GILT hyper-parameter $\epsilon_{\rm gilt} = 6 \times 10^{-\cdots}$, Fig. 8(a) shows several RG flows of the tensor norms $\|A^{(n)}\|$ at different temperatures. For a given bond dimension $\chi$, there is an estimated critical temperature $T_c^{[\chi]}$ at which the tensor hits the critical surface and flows to the critical fixed-point tensor $(A^{[\chi]})^*_{\rm cr}$. The $T_c^{[\chi]}$ can be determined using the bisection method; for $\chi = 30$ the relative deviation from the exact value, $|T_c^{[30]} - T_c|/T_c$, is of order $10^{-\cdots}$. At temperatures off by $\Delta T = \pm 10^{-\cdots}$ from $T_c^{[30]}$, the tensor flows to the high- and low-temperature trivial fixed-point tensors, respectively, before coming near $(A^{[30]})^*_{\rm cr}$. As $|\Delta T|$ becomes smaller, of order $10^{-\cdots}$, the tensor stays in the vicinity of the critical fixed-point tensor $(A^{[30]})^*_{\rm cr}$ for a while and then flows away to one of the two trivial fixed-point tensors. If $|\Delta T|$ becomes smaller still, of order $10^{-\cdots}$, the tensor stays near $(A^{[30]})^*_{\rm cr}$ even longer. By comparison, the RG flow of $\|A^{(n)}\|$ generated by the HOTRG with bond dimension $\chi = 12$ [45] is displayed in Fig. 8(b). The flow shows that the HOTRG has difficulty exhibiting a critical fixed-point tensor or producing isolated trivial fixed-point tensors. It is interesting to mention that the RG flow generated by the TRG behaves similarly [18] for bond dimensions $\chi > \cdots$.

To check whether the RG flow of $\|A^{(n)}\|$ gives a critical fixed-point tensor $(A^{[30]})^*_{\rm cr}$ at the estimated critical temperature $T_c^{[30]}$, we plot the singular values $s^{(n)}$ of the tensors $\mathcal{A}^{(n)}$, defined as
$$\mathcal{A}^{(n)} \overset{\rm svd}{=} U\, s^{(n)}\, V^{\dagger}. \quad (73)$$
The RG flow of the singular values in Fig. 9(a) indicates that we indeed reach a nontrivial fixed-point tensor. The fixed-point tensor is manifestly fixed numerically after adding the sign-fixing step to the GILT-HOTRG, which can be confirmed by plotting the Frobenius norm of the difference between normalized tensors at successive RG steps, $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$; see Fig. 9(b). The norm of the difference starts to decay systematically at RG step $n = 14$, goes all the way down to order $\sim 10^{-\cdots}$ at $n = 23$, and then increases as the tensor begins to flow away from the critical fixed point. By comparison, we show the RG flow of $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$ without sign fixing in Fig. 9(c); the sign ambiguities in Eq. (45) prevent us from achieving a manifestly fixed tensor, except at RG step $n = 22$, where the tensor happens to have all signs correct by accident.

We use the automatic differentiation implemented in JAX [43] to generate the linearized tensor RG equation $R$ in Eq. (55) at RG steps $n = 14, 15, \ldots, 28$, when the tensor is very close to the critical fixed-point tensor. The scaling dimensions are extracted from the eigenvalues of the matrix $R$ according to Eq. (14), with $b = 2$, $d = 2$. In Fig. 10, we show the first few scaling dimensions; the dashed lines are the exact values [46].

[Figure 10. The scaling dimensions of the 2D Ising model from the canonical RG prescription using the GILT-HOTRG with $\chi = 30$, $\epsilon_{\rm gilt} = 6 \times 10^{-\cdots}$. Dashed lines are the exact values.]

For $\chi = 30$, the RG prescription in tensor space gives correct scaling dimensions up to about 2. The results at $n = 14$ and $28$ are unreliable, since $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$ is of order 1 there (see Fig. 9(b)). The results for $n = 15, 16, \ldots, 27$ indicate that the scaling dimensions from the RG prescription in tensor space are reliable as long as $\|\mathcal{A}^{(n+1)} - \mathcal{A}^{(n)}\|$ is of order $10^{-\cdots}$ or smaller.

In Table I, we show the scaling dimensions for all relevant and marginal operators at RG step $n = 22$, compared with the results obtained by Gu and Wen's method [22]. Both methods have similar accuracy for scaling dimensions less than or equal to 1.125.

[Table I. The scaling dimensions for the relevant and marginal operators of the 2D Ising model at criticality from the canonical RG prescription and from the transfer matrix method à la Gu and Wen [22], both using the GILT-HOTRG with $\chi = 30$, $\epsilon_{\rm gilt} = 6 \times 10^{-\cdots}$ at RG step $n = 22$. Exact values: 0.125, 1, 1.125, 1.125, 2, 2, 2, 2.]

Our first remark is that we exploit the $Z_2$ symmetry of the tensors [38, 39] when generating the RG flow in tensor space, for three reasons. First, only if the $Z_2$ symmetry of the tensor is imposed will the low-temperature fixed-point tensor be stable under the RG; otherwise it eventually flows to the high-temperature fixed point due to numerical errors, which makes the bisection search for the estimated critical temperature $T_c^{[\chi]}$ less convenient. The second merit of symmetric tensors is that half of the gauge redundancy is automatically fixed (see Sec. II D), making the sign-fixing procedure in the GILT-HOTRG easier. The third reason is to speed up the computations. However, we roll back to ordinary tensors when performing the RG prescription in tensor space, since the perturbations around the fixed-point tensor need not preserve the $Z_2$ symmetry (for example, the spin operator).

The second remark is about the improvement of the accuracy as the bond dimension $\chi$ increases. There are two sources of approximation error in the above computations. One comes from the truncation of the CDL tensors during the GILT, which is necessary for producing the critical fixed point; this error is controlled by the hyper-parameter $\epsilon_{\rm gilt}$. The other source is the leg-squeezing step during the HOTRG that prevents the growth of the bond dimension; this error can be reduced by increasing the bond dimension $\chi$. In general, for a given $\chi$, the $\epsilon_{\rm gilt}$ should be as small as possible, provided that the GILT-HOTRG can still exhibit a critical fixed-point tensor. In practice, we tried $\chi = 10, 20, 30$, with $\epsilon_{\rm gilt}$ going down from $6 \times 10^{-\cdots}$ to $6 \times 10^{-\cdots}$ and further to $6 \times 10^{-\cdots}$. The estimated scaling dimensions converge to the exact results in this process.

The third remark is about the overall multiplicative constant in front of the fixed-point tensor. After reaching the critical fixed point, the RG from the $n$-th step to the $(n+1)$-th step is the mapping $\mathcal{A}^* \to A^* = \|A^*\| \mathcal{A}^*$. The shape of $\mathcal{A}^*$ is fixed, but its magnitude still changes under the RG transformation. It has been shown in Ref. [22] that the fixed-point tensor with the correct magnitude is simply given by $A^*_{\rm inv} = \|A^*\|^{-1/3} \mathcal{A}^*$, and we then have $A^*_{\rm inv} \to A^*_{\rm inv}$ under the RG transformation. Our numerical results confirm this statement.

The final remark is that the problem of local correlations can be removed by methods [22, 24–27, 33–36] other than the GILT. For example, the TNR [25, 26] is known to be capable of exhibiting critical fixed-point tensors, with an RG equation similar to that of the GILT-HOTRG, and there is a method to fix its gauge [26]. Considering the unprecedented accuracy of the TNR, its estimates of the scaling dimensions might be much better. We develop the canonical RG prescription in tensor space using the GILT-HOTRG in this paper in order to prepare for further applications to 3D systems.

IV. SUMMARY AND DISCUSSIONS
In this paper, we show how to analyze an RG fixed point in tensor space. The general procedure is summarized as follows: reach a fixed-point tensor using a TRG-type RG equation, fix the gauge redundancy to make the fixed-point tensor manifestly fixed, linearize the RG equation around this fixed-point tensor, and finally calculate the scaling dimensions from the eigenvalues of the linearized tensor RG equation. In practice, we propose the GILT-HOTRG to carry out this canonical RG prescription in tensor space. The benchmark results for the 2D classical Ising model are comparable with the existing method. In future work, we will generalize the GILT-HOTRG and apply the canonical tensor RG prescription to 3D systems, where there are few practical tensor network methods to extract scaling dimensions efficiently.
ACKNOWLEDGMENTS
We thank Satoshi Morita, Shumpei Iino, Takuhiro Ogino, Yuan Yao, and Takeo Kato for fruitful discussions and insightful suggestions, and Glen Evenbly and Guifre Vidal for explanations regarding the TNR and other tensor network methods. We also thank Markus Hauru for clarifying the implementation of the GILT. X.L. and R.G.X. are grateful for the support of the Global Science Graduate Course (GSGC) program of the University of Tokyo. This work is financially supported by a MEXT Grant-in-Aid for Scientific Research (B) (19H01809). The numerical computations were performed on computers at the Supercomputer Center, the Institute for Solid State Physics (ISSP), the University of Tokyo.
Appendix A: Source Code
The source code of this paper can be found at github.com/brucelyu/tensorRGflow. It can be used to reproduce all the results in Sec. III B for the 2D classical Ising model.

[1] K. G. Wilson, The renormalization group and critical phenomena, Rev. Mod. Phys. 55, 583 (1983).
[2] K. G. Wilson and M. E. Fisher, Critical exponents in 3.99 dimensions, Phys. Rev. Lett. 28, 240 (1972).
[3] L. P. Kadanoff, Scaling laws for Ising models near T_c, Physics Physique Fizika 2, 263 (1966).
[4] L. P. Kadanoff, Variational principles and approximate renormalization group calculations, Phys. Rev. Lett. 34, 1005 (1975).
[5] A. Migdal, Recursion equations in gauge field theories, Sov. Phys. JETP 42, 743 (1975).
[6] L. P. Kadanoff, Notes on Migdal's recursion formulas, Annals of Physics 100, 359 (1976).
[7] T. Niemeijer and J. M. J. van Leeuwen, Wilson theory for spin systems on a triangular lattice, Phys. Rev. Lett. 31, 1411 (1973).
[8] T. L. Bell and K. G. Wilson, Nonlinear renormalization groups, Phys. Rev. B 10, 3935 (1974).
[9] J. Polchinski, Scale and conformal invariance in quantum field theory, Nuclear Physics B 303, 226 (1988).
[10] Y. Nakayama, Scale invariance vs conformal invariance, Physics Reports 569, 1 (2015).
[11] M. Levin and C. P. Nave, Tensor renormalization group approach to two-dimensional classical lattice models, Phys. Rev. Lett. 99, 120601 (2007).
[12] Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Second renormalization of tensor-network states, Phys. Rev. Lett. 103, 160601 (2009).
[13] H. H. Zhao, Z. Y. Xie, Q. N. Chen, Z. C. Wei, J. W. Cai, and T. Xiang, Renormalization of tensor-network states, Phys. Rev. B 81, 174411 (2010).
[14] Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Coarse-graining renormalization by higher-order singular value decomposition, Phys. Rev. B 86, 045139 (2012).
[15] D. Adachi, T. Okubo, and S. Todo, Anisotropic tensor renormalization group, Phys. Rev. B 102, 054432 (2020).
[16] D. Kadoh and K. Nakayama, Renormalization group on a triad network (2019), arXiv:1912.02414 [hep-lat].
[17] S. Morita and N. Kawashima, Global optimization of tensor renormalization group using the corner transfer matrix (2020), arXiv:2009.01997 [cond-mat.stat-mech].
[18] M. Hinczewski and A. N. Berker, High-precision thermodynamic and critical properties from tensor renormalization-group flows, Phys. Rev. E 77, 011104 (2008).
[19] K.-I. Aoki, T. Kobayashi, and H. Tomita, Domain wall renormalization group analysis of two-dimensional Ising model, International Journal of Modern Physics B 23, 3739 (2009).
[20] Y. Meurice, Accurate exponents from approximate tensor renormalizations, Phys. Rev. B 87, 064422 (2013).
[21] E. Efrati, Z. Wang, A. Kolan, and L. P. Kadanoff, Real-space renormalization in statistical mechanics, Rev. Mod. Phys. 86, 647 (2014).
[22] Z.-C. Gu and X.-G. Wen, Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order, Phys. Rev. B 80, 155131 (2009).
[23] M. Levin, Real space renormalization group and the emergence of topological order (2007), talk at the IPAM workshop "Topological Quantum Computing".
[24] S. Yang, Z.-C. Gu, and X.-G. Wen, Loop optimization for tensor network renormalization, Phys. Rev. Lett. 118, 110504 (2017).
[25] G. Evenbly and G. Vidal, Tensor network renormalization, Phys. Rev. Lett. 115, 180405 (2015).
[26] G. Evenbly, Algorithms for tensor network renormalization, Phys. Rev. B 95, 045117 (2017).
[27] M. Bal, M. Mariën, J. Haegeman, and F. Verstraete, Renormalization group flows of Hamiltonians using tensor networks, Phys. Rev. Lett. 118, 250602 (2017).
[28] J. L. Cardy, Operator content of two-dimensional conformally invariant theories, Nuclear Physics B 270, 186 (1986).
[29] G. Evenbly and G. Vidal, Local scale transformations on the lattice with tensor network renormalization, Phys. Rev. Lett. 116, 040401 (2016).
[30] M. Hauru, C. Delcamp, and S. Mizera, Renormalization of tensor networks using graph-independent local truncations, Phys. Rev. B 97, 045111 (2018).
[31] J. Cardy, Scaling and Renormalization in Statistical Physics, Cambridge Lecture Notes in Physics (Cambridge University Press, 1996), pp. 28–60.
[32] H. Ueda, K. Okunishi, and T. Nishino, Doubling of entanglement spectrum in tensor renormalization group, Phys. Rev. B 89, 075116 (2014).
[33] K. Harada, Entanglement branching operator, Phys. Rev. B 97, 045124 (2018).
[34] G. Evenbly, Gauge fixing, canonical forms, and optimal truncations in tensor networks with closed loops, Phys. Rev. B 98, 085155 (2018).
[35] L. Ying, Tensor network skeletonization, Multiscale Modeling & Simulation 15, 1423 (2017).
[36] H.-Y. Lee and N. Kawashima, Tensor-ring decomposition with index-splitting, Journal of the Physical Society of Japan 89, 054003 (2020).
[37] C. Delcamp and A. Tilloy, Computing the renormalization group flow of two-dimensional φ⁴ theory with tensor networks, Phys. Rev. Research 2, 033278 (2020).
[38] S. Singh, R. N. C. Pfeifer, and G. Vidal, Tensor network decompositions in the presence of a global symmetry, Phys. Rev. A 82, 050301 (2010).
[39] S. Singh, R. N. C. Pfeifer, and G. Vidal, Tensor network states and algorithms in the presence of a global U(1) symmetry, Phys. Rev. B 83, 115125 (2011).
[40] S. Singh and G. Vidal, Tensor network states and algorithms in the presence of a global SU(2) symmetry, Phys. Rev. B 86, 195114 (2012).
[41] This is because the pieces of low-rank matrices and the isometric tensors in the RG equation of the GILT-HOTRG in Eq. (37) inherit the reflection symmetry of the input tensor A.
[42] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, PyTorch: An imperative style, high-performance deep learning library, in Advances in Neural Information Processing Systems 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019), pp. 8024–8035.
[43] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, JAX: composable transformations of Python+NumPy programs (2018).
[44] M. Kardar,