Tensor-network algorithm for nonequilibrium relaxation in the thermodynamic limit
Yoshihito Hotta
Department of Physics, The University of Tokyo, Komaba, Meguro, Tokyo 153-8505∗
(Dated: October 23, 2018)

Abstract
We propose a tensor-network algorithm for discrete-time stochastic dynamics of a homogeneous system in the thermodynamic limit. We map a d-dimensional nonequilibrium Markov process to a (d+1)-dimensional infinite tensor network by using a higher-order singular-value decomposition. As an application of the algorithm, we compute the nonequilibrium relaxation from a fully magnetized state to equilibrium of the one- and two-dimensional Ising models with periodic boundary conditions. Utilizing the translational invariance of the systems, we analyze the behavior in the thermodynamic limit directly. We estimated the dynamical critical exponent z = 2.

∗ [email protected]

I. INTRODUCTION

It is important to develop algorithms to analyze stochastic processes using tensor networks. Stochastic processes often appear in statistical mechanics; Monte Carlo methods are most often used to simulate them, and the density-matrix renormalization group (DMRG) is also sometimes used [1–6]. In this paper, we develop an algorithm to analyze stochastic processes using tensor networks. Tensor-network algorithms are generalizations of DMRG and transfer-matrix methods to higher dimensions [7–9] and can handle models in two and higher dimensions straightforwardly. We combine our algorithm with the nonequilibrium-relaxation method [10, 11] to evaluate critical exponents. The computational time of our algorithm does not depend on the system size when the system is homogeneous, whereas the computational time of a Monte Carlo simulation generally depends on the system size.

Monte Carlo methods are stochastic processes that are often used in studies of statistical mechanics. Although Monte Carlo methods have advantages, such as wide applicability and ease of implementation, they also have drawbacks, e.g., the dependence of the computational complexity on the system size.
As another drawback, equilibrium Monte Carlo analysis of critical phenomena becomes extremely difficult as the system approaches a critical point because of the divergence of the relaxation time. The nonequilibrium-relaxation method [10, 11], on the other hand, determines critical exponents, including dynamical ones, by observing the relaxation from an ordered state to an equilibrium state. This method is especially suitable for systems with large fluctuations and long relaxation, e.g., frustrated and random systems [12].

Other than Monte Carlo simulations, DMRG is also popular, having been very successful in one-dimensional quantum systems [13]. Recently, developments in the field of quantum information have stimulated extensions of DMRG to higher-dimensional systems. Tensor-network algorithms are such extensions [7–9]. One of the reasons why tensor-network algorithms have attracted attention is that they can handle systems with large degrees of freedom at a small computational cost as long as the system is homogeneous. For example, static critical exponents of the two- and three-dimensional Ising models have been obtained with high accuracy by tensor renormalization group methods [14–17].

DMRG studies of stochastic processes [1–6] had been limited to one-dimensional systems until recently. T. H. Johnson et al. studied nonequilibrium stochastic processes in one and two dimensions using a tensor-network algorithm called time-evolving block decimation [18–20]. It discretizes the time of the dynamics of a finite system, using the Suzuki-Trotter decomposition [21]. They showed that their algorithm can compute with high accuracy large-variance observables that strongly depend on the time-evolution path of the configuration, whereas Monte Carlo methods require a large number of samples for such variables.

The data structure of tensor networks represents the spatial structure of the system and its correlations. It is thus suited to the computation of the time evolution of systems with spatial correlations.
Dynamical critical phenomena are examples of such systems. The dynamical critical exponent is a quantity that characterizes dynamical critical phenomena. We are usually interested in dynamical critical phenomena in two and higher dimensions because the dynamical critical exponent takes nontrivial values there. Its analytical calculation is usually intractable, and we need to rely on numerical methods.

In the present paper, we propose a tensor-network algorithm for discrete-time Markov chains in d-dimensional infinite homogeneous systems. Representing the probability distribution with a tensor-network state and the transition probability with a tensor-network operator, we map d-dimensional nonequilibrium processes to (d+1)-dimensional infinite tensor networks. While other tensor-network algorithms for dynamics usually make use of the Suzuki-Trotter decomposition [18–20], we construct a tensor-network operator of the transition probability in an entirely different way: we construct a tensor-network operator for a sublattice-flip update, using a higher-order singular-value decomposition [22]. Taking advantage of the homogeneity of the systems, we treat infinite systems directly, just as the infinite time-evolving block decimation algorithm does [23, 24]. The correlation length diverges in a critical system, for which we need to study a large system. Our algorithm is especially suitable for the study of dynamical critical phenomena because its computational complexity does not depend on the system size.

We analyze the nonequilibrium relaxation of the magnetization of the one- and two-dimensional Ising models as an application of our algorithm. In particular, we determine the dynamical critical exponent z of the two-dimensional Ising model. Our algorithm of time evolution goes particularly well with the nonequilibrium-relaxation method, for which one prepares a large system and computes the time evolution for a relatively short time.
Our method has these features in common: it prepares a system so large that it can be treated as the thermodynamic limit and computes the time evolution for a short period.

This paper is organized as follows. In Sec. II, we explain the notation and the model that we use as an example for the algorithm that we propose. We consider the Glauber dynamics [25] in discrete time throughout this paper. We borrow notation from quantum mechanics, expressing the probability distribution as a ket and the transition probability as an operator. In Sec. III, we explain how to construct a tensor-network operator for the transition probability, using a higher-order singular-value decomposition. We calculate the relaxation of the magnetization from the all-spin-up state to equilibrium with various bond dimensions and compare the results with an analytic result. In Sec. IV, we analyze a system at the critical point and estimate the dynamical critical exponent z using the nonequilibrium-relaxation method.

II. MODEL AND NOTATION
Throughout this paper, we focus on kinetic Ising models, whose definition we give in this section; we can easily generalize our algorithm to any sublattice-update dynamics with nearest-neighbor interactions on a bipartite graph. We first define the update rule of the spins and next explain how to describe the Markov process, borrowing the notation of quantum mechanics and using diagrams of tensor networks.

Unlike continuous time evolution, we do not need to perform the Suzuki-Trotter decomposition for the time evolution; we instead construct an operator of the time evolution using the higher-order singular-value decomposition.
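Concretely, the single-spin heat-bath rule that we implement (Eq. (2) of the next subsection) amounts to a two-line function. The following is a minimal Python sketch with our own naming, not part of the paper's algorithm itself:

```python
import math

def heat_bath_prob(s_new, h, T):
    """Heat-bath probability of drawing the spin value s_new (= +1 or -1)
    in the local field h produced by the neighboring spins,
    with beta = 1/T and J = k_B = 1 (Eq. (2))."""
    beta = 1.0 / T
    return math.exp(beta * s_new * h) / (2.0 * math.cosh(beta * h))

# the two outcomes are always normalized:
p_up, p_dn = heat_bath_prob(+1, 2, 2.0), heat_bath_prob(-1, 2, 2.0)
assert abs(p_up + p_dn - 1.0) < 1e-12
```

The denominator 2 cosh(βh) is just the sum over the two possible new spin values, so the probabilities of the two outcomes sum to one by construction.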
A. Kinetic Ising model
The Glauber dynamics [25] is a kind of kinetic Ising model whose transition rule is of a heat-bath type. We consider the Glauber dynamics in discrete time, although the original Glauber dynamics is in continuous time. The Glauber dynamics in a computer simulation is usually the one in discrete time because such dynamics is simulated by Monte Carlo methods in many cases. Our strategy is to implement heat-bath algorithms using tensor networks.

We explain the Glauber dynamics taking the one-dimensional case as an example; generalizing it to higher dimensions is straightforward. Let us consider a one-dimensional spin chain of length 2N whose Hamiltonian is given by

H = −Σ_{i=1,...,2N} σ_i σ_{i+1}.  (1)

We use periodic boundary conditions throughout this paper.

We update a single spin with the following heat-bath-type transition probability:

w(σ_i → σ′_i) = e^{βσ′_i(σ_{i−1}+σ_{i+1})} / Σ_{σ″_i=±1} e^{βσ″_i(σ_{i−1}+σ_{i+1})}.  (2)

Here, σ denotes a spin variable, which takes the values ±1, i is the index of the spin that we try to update, and σ_i and σ′_i are the values of the spin i at the previous and new time steps, respectively. We adopt the two-sublattice multi-spin-flip dynamics, in which we divide the whole system into two sublattices and flip all the spins on each sublattice simultaneously. In the odd time steps, the transition probability of the whole system is given as follows:

w(σ_1, ..., σ_{2N}; σ′_1, ..., σ′_{2N}) = Π_{i=1,3,...,2N−1} [ e^{βσ′_i(σ_{i−1}+σ_{i+1})} / Σ_{σ″_i=±1} e^{βσ″_i(σ_{i−1}+σ_{i+1})} ] Π_{i=2,4,...,2N} δ_{σ_i,σ′_i},  (3)

where δ_{ij} is Kronecker's delta. The above transition probability means that the spins at even sites are frozen and act as a heat bath for the odd spins during the time evolution. This is a sufficient condition for the relaxation to equilibrium. In the even time steps, the roles of the spins at odd and even sites are exchanged.
The transition probability becomes

w(σ_1, ..., σ_{2N}; σ′_1, ..., σ′_{2N}) = Π_{i=2,4,...,2N} [ e^{βσ′_i(σ_{i−1}+σ_{i+1})} / Σ_{σ″_i=±1} e^{βσ″_i(σ_{i−1}+σ_{i+1})} ] Π_{i=1,3,...,2N−1} δ_{σ_i,σ′_i}.  (4)

We obtain the probability distribution at any time by applying the transition operators (3) and (4) alternately.

B. Diagrammatic representation of the Markov chain
Consider a spin chain of length M (M ∈ ℕ) and denote the spin variables as σ = (σ_1, ..., σ_M). We express the probability distribution P(σ) of a spin configuration borrowing the notation of quantum mechanics:

P(σ) = ⟨σ|P⟩.  (5)

FIG. 1. (Color online) Diagrams in the tensor-network algorithm for a one-dimensional system. (a) A matrix-product-state representation of the probability distribution. (b) A matrix-product-operator representation of the transition probability. The squares and rhombuses are the Y and X tensors defined in Eqs. (14) and (13), respectively. (c) A diagrammatic representation of the time evolution (11).

We now describe the ket of the probability distribution using a matrix-product state [8]:

|P⟩ = Σ_σ Tr[A^{[1]}_{σ_1} ··· A^{[M]}_{σ_M}] |σ_1 ··· σ_M⟩.  (6)

We can represent this matrix-product state as the diagram in Fig. 1 (a). For example, consider the fully magnetized state, in which all spins point up. We can represent it as a product of 1 × 1 matrices:

A^{[i]}_{σ_i=+1} = 1,  A^{[i]}_{σ_i=−1} = 0  (i = 1, ..., M).  (7)

For another instance, let us consider the state in which all spins are up or all spins are down with the same probability. In this case, the state is more complex than the previous example, and we cannot express it as a product of 1 × 1 matrices; we need 2 × 2 matrices:

A^{[i]}_{σ_i=+1} = 2^{−1/M} ( 1 0 ; 0 0 ),  A^{[i]}_{σ_i=−1} = 2^{−1/M} ( 0 0 ; 0 1 )  (i = 1, ..., M).  (8)

Generally speaking, the more complex a state becomes, the larger the matrices we need to express it as a matrix-product state. The dimensionality of the matrices expressing a state is called the bond dimension of the matrix-product state; the bond dimension of the first example is thus one, and that of the second example is two.

Next, consider the transition probability W(σ → σ′) from a configuration σ to a new configuration σ′; it is a rank-2M tensor, which we can represent as a matrix-product operator:

Ŵ = Σ_{σ,σ′} |σ′⟩⟨σ′| Ŵ |σ⟩⟨σ| := Σ_{σ,σ′} W(σ → σ′) |σ′⟩⟨σ|.  (9)

Figure 1 (b) is a diagrammatic representation of this matrix-product operator. Combining it with the matrix-product-state representation of the probability distribution, we can represent the time evolution of a probability distribution in the form

|P(t+1)⟩ = Ŵ |P(t)⟩,  (10)

which leads to

|P(t)⟩ = Ŵ^t |P(0)⟩.  (11)

The corresponding diagram is Fig. 1 (c).

In a two-dimensional system, the probability distribution and the transition probability become a tensor-network state and a tensor-network operator, respectively (Fig. 2). We can represent any transition probability with a tensor-network operator in principle. However, writing down its form is not a trivial problem in practice and requires ingenuity. For the two-sublattice multi-spin-flip dynamics, we can write down the tensor-network operator explicitly, as we will explain in the following two sections.

III. ONE-DIMENSIONAL KINETIC ISING MODEL
We numerically analyze the one-dimensional kinetic Ising model in this section. The update rule is the one explained in Sec. II A. We prepare all the spins to be σ_i = 1 at the initial time and observe the relaxation to equilibrium. N. Ito et al. derived an asymptotic form of the relaxation of the magnetization analytically [26, 27]. We calculate it numerically and compare the result with their asymptotic form.

FIG. 2. (Color online) Diagrams in the tensor-network algorithm for a two-dimensional system. (a) A tensor-network-state representation of the probability distribution. (b) A tensor-network-operator representation of the transition probability. (c) Time evolution in a single time step, which updates half of the spins (Eq. (10)).
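As a sanity check that is independent of the tensor-network machinery, the two-sublattice dynamics of Eqs. (3) and (4) can be evolved exactly on a tiny ring by propagating the full probability vector. The sketch below is a toy implementation with our own naming; for a bipartite ring the redrawn spins are conditionally independent because their neighbors are frozen, and Eq. (32) below gives the asymptotic decay rate for the infinite chain.

```python
import itertools
import math

import numpy as np

def sublattice_step(P, sub, T, M):
    """One sublattice update of Eqs. (3)-(4) applied to the exact
    distribution P over the 2^M configurations of a ring of M spins:
    the spins listed in `sub` are redrawn from the heat-bath rule,
    Eq. (2), while all other spins stay frozen."""
    beta = 1.0 / T
    confs = list(itertools.product([1, -1], repeat=M))
    index = {c: i for i, c in enumerate(confs)}
    Q = np.zeros_like(P)
    for c, p in zip(confs, P):
        if p == 0.0:
            continue
        # frozen neighbors make the redrawn spins mutually independent
        for new in itertools.product([1, -1], repeat=len(sub)):
            w, cc = p, list(c)
            for s, i in zip(new, sub):
                h = c[(i - 1) % M] + c[(i + 1) % M]   # field of the fixed neighbors
                w *= math.exp(beta * s * h) / (2.0 * math.cosh(beta * h))
                cc[i] = s
            Q[index[tuple(cc)]] += w
    return Q

M, T = 4, 2.0
confs = list(itertools.product([1, -1], repeat=M))
P = np.zeros(2 ** M)
P[0] = 1.0                                        # all-spin-up initial state
xi_t = 1.0 / math.log(1.0 / math.tanh(2.0 / T))   # Eq. (32), infinite chain
for t in range(10):
    P = sublattice_step(P, [0, 2] if t % 2 == 0 else [1, 3], T, M)
    m = sum(p * np.mean(c) for c, p in zip(confs, P))
    print(f"t = {t + 1}: m = {m:.4f}  exp(-t/xi_t) = {math.exp(-(t + 1) / xi_t):.4f}")
```

Probability is conserved at every step, and the magnetization decays from one toward its equilibrium value of zero; a ring of four spins deviates from the infinite-chain asymptote, but the decay is roughly exponential.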
A. Transition matrix as matrix-product operator
We use the transition probabilities given by the rank-4N tensors (3) and (4). We first decompose the local transition probability by using the higher-order singular-value decomposition [22, 28] as

w(σ_i → σ′_i) = e^{(1/T)σ′_i(σ_{i−1}+σ_{i+1})} / Σ_{σ″_i=±1} e^{(1/T)σ″_i(σ_{i−1}+σ_{i+1})} = Σ_{α,β,γ=1,2} S_{αβγ} U_{σ′_i α} V^{(L)}_{σ_{i−1} β} V^{(R)}_{σ_{i+1} γ},  (12)

where S is a rank-3 tensor called the core tensor, while U, V^{(L)}, and V^{(R)} are 2 × 2 matrices. We can represent Eq. (12) using the diagram in Fig. 3.

FIG. 3. A diagrammatic representation of the higher-order singular-value decomposition (HOSVD) of a local transition probability, Eq. (12).

We next define the following rank-4 tensors:

X^{σ′_j σ_j}_{pq} := V^{(R)}_{σ_j p} V^{(L)}_{σ_j q} δ_{σ′_j σ_j},  (13)
Y^{σ′_i}_{βγ} = Y^{σ′_i σ_i}_{βγ} := Σ_{α=1,2} S_{αβγ} U_{σ′_i α}.  (14)

We let Y have an index σ_i because we need Y to have both indices σ_i and σ′_i in the definition of a matrix-product operator below. We obtain matrix-product-operator representations of the whole system by lining up X and Y (Fig. 1 (b)). For instance, in the odd time steps,

W(σ → σ′) = Tr[Y^{σ′_1 σ_1} X^{σ′_2 σ_2} Y^{σ′_3 σ_3} X^{σ′_4 σ_4} ···].  (15)

In the even time steps, the order of X and Y in the trace is interchanged. We thus represent the time evolution by applying the two matrix-product operators alternately to the initial matrix-product state (Fig. 1 (c)). We set the initial state to be the all-spin-up state, which is a matrix-product state of bond dimension D = 1. The bond dimensionality of the matrix-product operator is two.

We calculate |P(t+1)⟩ from |P(t)⟩ using Eq. (10). We can represent |P(t+1)⟩ as a matrix-product state since |P(t)⟩ is a matrix-product state and Ŵ is a matrix-product operator. Our initial state |P(0)⟩ is the matrix-product state of Eq. (7). We can represent the procedure of the time evolution graphically. Suppose that Fig. 1 (a) is the state at t = 0. Applying the matrix-product operator of Fig. 1 (b), we obtain the state at t = 1. Repeating the procedure, we evolve the stochastic process in time; for example, Fig. 1 (c) is the state at t = 4. In the numerical computation, we contract the vertical bonds every time we apply a matrix-product operator to a state. Therefore, the states are always expressed as matrix-product states. The shape of the state is the same as in Fig. 1 (a), but the tensors are no longer all the same for t ≥ 1. In principle, we can express the state at time t exactly as a product of matrices of size 2^t.
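As a concrete check of the decomposition in Eq. (12), the following sketch builds the 2 × 2 × 2 local transition tensor and performs a HOSVD with NumPy (the tensor layout and names are our choices); since the factor matrices are orthogonal, the reconstruction is exact.

```python
import numpy as np

def hosvd3(Wt):
    """Higher-order SVD of a 3-index tensor: the factor matrix of each mode
    is the left singular basis of the corresponding unfolding, and the core
    tensor is obtained by projecting onto those bases (Eq. (12))."""
    factors = []
    for mode in range(3):
        unfold = np.moveaxis(Wt, mode, 0).reshape(Wt.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U)
    S = np.einsum('abc,ai,bj,ck->ijk', Wt, *factors)   # core tensor
    return S, factors

# local heat-bath tensor w[s'_i, s_{i-1}, s_{i+1}] at temperature T
T = 2.0
spins = np.array([1.0, -1.0])
beta = 1.0 / T
h = spins[None, :, None] + spins[None, None, :]        # neighbor field
w = np.exp(beta * spins[:, None, None] * h) / (2.0 * np.cosh(beta * h))

S, (U, VL, VR) = hosvd3(w)
# reconstruction w = sum_{alpha,beta,gamma} S U V^(L) V^(R) is exact
w_rec = np.einsum('ijk,ai,bj,ck->abc', S, U, VL, VR)
assert np.allclose(w, w_rec)
```

Summing w over the new spin value gives one for every neighbor configuration, which is a handy consistency check on the tensor before decomposing it.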
However, as the amount of memory is limited, we of course need to perform an approximation; we truncate the bond dimensions at a dimensionality D by using singular-value decompositions and keep the D largest singular values. When the dimensionality of every connected bond is less than D, we say that the tensor network has bond dimension D.

We explain this procedure in detail here, following Ref. [8]. Let us consider approximating the bond connecting the two tensors A^{[i]} and A^{[i+1]} in Eq. (6). The product before the approximation is

Σ_{j=1}^{D′} A^{[i]σ_s}_{i,j} A^{[i+1]σ_t}_{j,k}.  (16)

The dimensions of the indices σ_s and σ_t are two, and we assume that the dimensions of the indices i, j, and k are D′ (D′ > D). Our task is to approximate the contraction (16), which is a summation of D′ terms, by a summation of D terms. First, we rewrite the tensors with three indices as tensors with two indices (matrices):

A^{[i]σ_s}_{i,j} = A′^{[i]}_{(i,σ_s),j},  A^{[i+1]σ_t}_{j,k} = A′^{[i+1]}_{j,(k,σ_t)}.  (17)

Then the contraction (16) becomes a multiplication of the two new matrices: A′^{[i]} A′^{[i+1]}. We now approximate this product of two rank-D′ matrices by a product of two rank-D matrices as follows. We first decompose A′^{[i]} A′^{[i+1]} by using the singular-value decomposition:

A′^{[i]} A′^{[i+1]} = U Λ V†.  (18)

Here Λ is a diagonal matrix of size D′, and we assume that the singular values are sorted in non-ascending order. We keep only the D largest singular values of Λ and omit the other small singular values:

U Λ V† ≈ Ũ Λ̃ Ṽ†,  (19)

where Λ̃ is a diagonal matrix of size D. We left out the last D′ − D columns of U and V and defined the two new matrices Ũ and Ṽ. We now approximate A′^{[i]} and A′^{[i+1]} as follows:

A′^{[i]} ≈ Ã′^{[i]} := Ũ √Λ̃,  (20)
A′^{[i+1]} ≈ Ã′^{[i+1]} := √Λ̃ Ṽ†.  (21)

We reshape the matrices Ã′^{[i]}_{(i,σ_s),j} and Ã′^{[i+1]}_{j,(k,σ_t)} into tensors with three indices again:

Ã′^{[i]}_{(i,σ_s),j} = Ã^{[i]σ_s}_{i,j},  (22)
Ã′^{[i+1]}_{j,(k,σ_t)} = Ã^{[i+1]σ_t}_{j,k},  (23)

where i and k run over 1, ..., D′, j runs over 1, ..., D, and σ_s and σ_t take ±1. The contraction of the D′ terms is finally approximated by a contraction of D terms:

Σ_{j=1}^{D′} A^{[i]σ_s}_{i,j} A^{[i+1]σ_t}_{j,k} ≈ Σ_{j=1}^{D} Ã^{[i]σ_s}_{i,j} Ã^{[i+1]σ_t}_{j,k}.  (24)

The computational complexity of the time evolution is of O(D³) because the singular-value decomposition takes CPU time of O(mn²) for an m × n (m ≥ n) matrix.

We finally calculate the average magnetization at the odd sites and that at the even sites separately. Let us consider calculating the average of σ_1 as a representative of the odd spins. Suppose that the state at time t is |P(t)⟩ = Σ_σ Tr[A_{σ_1} B_{σ_2} A_{σ_3} B_{σ_4} ···] |σ⟩. We first compute the marginal distribution of σ_1:

P(σ_1; t) := Σ_{σ_2,...,σ_{2N}} Tr[A_{σ_1} B_{σ_2} A_{σ_3} B_{σ_4} ···].  (25)

The average magnetization of σ_1 is then given by

⟨σ_1⟩ = Σ_{σ_1=±1} σ_1 P(σ_1; t) = Tr[(A_{+1} − A_{−1}) Σ_σ B_σ Σ_σ A_σ Σ_σ B_σ ···].  (26)

We can represent this equation as the diagram shown in Fig. 4. Each open circle denotes Σ_{σ=±1} A_σ, each solid circle denotes Σ_{σ=±1} B_σ, and the solid rhombus denotes A_{+1} − A_{−1}. We repeat the renormalization a sufficient number of times until the result converges. We explain the procedure of the renormalization in detail. In the first step, we define

C = (A_{+1} − A_{−1}) Σ_{σ=±1} B_σ,  (27)
E = Σ_{σ_a=±1} A_{σ_a} Σ_{σ_b=±1} B_{σ_b}.  (28)

FIG. 4. (Color online) A diagrammatic representation of the average of the spins at the odd sites. The denominator is the norm of the probability. The right figure shows the definition of each symbol.
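The truncation of Eqs. (16)–(24) can be sketched as follows. This is a minimal version with our own conventions for the index order A[s, i, j]; with D equal to the full bond dimension the contraction is reproduced exactly, and a smaller D keeps the largest singular values.

```python
import numpy as np

def truncate_bond(A, B, D):
    """Truncate the bond shared by A[s, i, j] and B[t, j, k] to dimension D:
    fuse (i, s) and (t, k) into matrix indices (Eq. (17)), take the SVD of
    the product (Eq. (18)), keep the D largest singular values (Eq. (19)),
    and split the square-rooted weights onto the two tensors (Eqs. (20)-(24))."""
    sA, Dl, Dm = A.shape
    tB, _, Dr = B.shape
    M = (A.transpose(1, 0, 2).reshape(Dl * sA, Dm)
         @ B.transpose(1, 0, 2).reshape(Dm, tB * Dr))
    U, lam, Vh = np.linalg.svd(M, full_matrices=False)
    D = min(D, lam.size)
    U, lam, Vh = U[:, :D], lam[:D], Vh[:D, :]
    At = (U * np.sqrt(lam)).reshape(Dl, sA, D).transpose(1, 0, 2)
    Bt = (np.sqrt(lam)[:, None] * Vh).reshape(D, tB, Dr).transpose(1, 0, 2)
    return At, Bt

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3, 8))    # physical index, left bond, shared bond
B = rng.normal(size=(2, 8, 3))
At, Bt = truncate_bond(A, B, D=8)     # D not smaller than the rank: exact
exact = np.einsum('sij,tjk->sitk', A, B)
assert np.allclose(exact, np.einsum('sij,tjk->sitk', At, Bt))
```

The dominant cost here is the SVD of the fused matrix, which is what makes the per-step cost of the time evolution scale polynomially in the bond dimension rather than with the system size.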
In the second step, namely the renormalization step, we update C and E as

C ← CE,  (29)
E ← EE.  (30)

The sum of the probabilities Σ_σ Tr[A_{σ_1} B_{σ_2} A_{σ_3} B_{σ_4} ···] deviates from unity as time evolves because of the truncations by the singular-value decompositions. To preserve the normalization of the probability, we divide the magnetization by the norm of the probability distribution at time t, as in Fig. 4. The average magnetization at the odd sites is

⟨σ_odd⟩ = Tr C / Tr E.  (31)

We repeat this renormalization procedure until the average magnetization ⟨σ_odd⟩ converges. The calculation of the magnetization at the even sites is similar. The magnetization of the whole system is the average of these two averaged magnetizations. The evaluation of the diagram in Fig. 4 takes CPU time of O(D³). The total computational complexity of our algorithm is thus of O(D³).

We show the results of the calculation in Fig. 5. The broken line in the figure is the analytic asymptotic form [26, 27]; the magnetization decays exponentially with the correlation time

ξ_t = 1 / log(coth(2/T)).  (32)

As the bond dimension D increases, the range of time for which we can calculate the magnetization precisely also expands. We were able to do so up to around t ≈ 100 in one dimension with D = 1024.

We update half of the tensors at each time step. An advantage of our algorithm is that we can update half of the system at the same time, making use of the translational invariance of the system. We just need to update a single tensor of a sublattice because the tensors of the same sublattice are all the same. In a Monte Carlo simulation, on the other hand, flipping half of the system takes CPU time that increases linearly with the system size.

FIG. 5. (Color online) The relaxation of the magnetization of the one-dimensional Ising model. The horizontal axis indicates the number of time steps, while the vertical axis indicates the magnetization. We changed the maximum bond dimension D. The broken line is the analytic asymptotic form [11]; the magnetization decays exponentially with the correlation time ξ_t defined in Eq. (32).

IV. TWO-DIMENSIONAL KINETIC ISING MODEL
We numerically analyze the two-dimensional kinetic Ising model in this section. In particular, by using the nonequilibrium-relaxation method, we estimate the dynamical critical exponent z of the two-dimensional Ising model.

A. Transition operator as tensor-network operator

The update rule of a single spin in the one-dimensional Glauber dynamics, Eq. (2), changes in the two-dimensional case to

w(σ_i → σ′_i) = e^{(1/T)σ′_i(σ_L+σ_R+σ_D+σ_U)} / Σ_{σ″_i=±1} e^{(1/T)σ″_i(σ_L+σ_R+σ_D+σ_U)}  (33)
= Σ_{α,β,γ,δ,ε=1,2} S_{αβγδε} U_{σ′_i α} V^{(L)}_{σ_L β} V^{(R)}_{σ_R γ} V^{(D)}_{σ_D δ} V^{(U)}_{σ_U ε},  (34)

where the subscripts L, R, U, and D represent the spins to the left, right, up, and down of the spin σ_i, respectively. As in the one-dimensional case, we perform the higher-order singular-value decomposition (HOSVD) and define the local transition operators:

X^{σ′_j σ_j}_{pqrs} := V^{(R)}_{σ_j p} V^{(L)}_{σ_j q} V^{(U)}_{σ_j r} V^{(D)}_{σ_j s} δ_{σ′_j,σ_j},  (35)
Y^{σ′_i}_{βγδε} := Y^{σ′_i σ_i}_{βγδε} = Σ_{α=1,2} S_{αβγδε} U_{σ′_i α}.  (36)

A diagrammatic representation of the transition operator of the whole system consists of the local transition operators X and Y. The bond dimensionality of the transition operator is two; see Fig. 2 (b). We calculate the time evolution by stacking up the tensor-network operators for the odd and even time steps alternately (Fig. 2 (c)).

We calculate the time evolution of a state by contracting the state and the tensor-network operators from the bottom layers. The state at t = 0 is shown in Fig. 2 (a). Applying the tensor-network operator of Fig. 2 (b) to this state, we evolve the dynamics by one step (Fig. 2 (c)). We then contract the vertical bonds of Fig. 2 (c) and obtain the tensor-network state at t = 1. The shape of the tensor-network state at t = 1 is the same as in Fig. 2 (a), but the tensors are not all the same; the lattice consists of two sublattices, and the tensors of each sublattice share a common tensor.
By repeating the same procedure, we can calculate the state |P(t)⟩ for t ≥ 1. During the calculation of the time evolution, we truncate the bond indices by singular-value decompositions if the bond dimensionality exceeds D. We can do these singular-value decompositions in a time of O(D) with a little ingenuity [29].

We finally compute the magnetization by repeating the renormalization of the tensor network that contains an "impurity tensor" (the solid rhombus in Fig. 4) until convergence, as in the one-dimensional case. We also divide the magnetization, which corresponds to the numerator of Fig. 4, by the norm of |P(t)⟩ (the denominator of the same figure), as in the one-dimensional case. We use the algorithm by C. Wang et al. [29], in which one truncates the bond dimensions by singular-value decompositions during the renormalization, whereas the other parts of the procedure are the same as in the tensor renormalization group with the higher-order singular-value decomposition [16]. The calculation of the average magnetization by the tensor renormalization group takes a time of O(D) [29], and the total computational complexity of our algorithm in two dimensions is also of O(D).

B. Nonequilibrium relaxation
We observe the relaxation from the all-spin-up initial state to the equilibrium state. The initial state, with all spins up, is a two-dimensional tensor-network state of bond dimension D = 1. The asymptotic behavior of the magnetization depends on which phase the system is in. In our case, since we do not apply a magnetic field, the asymptotic decay becomes as follows:

⟨σ⟩ = e^{−t/ξ_t}  (T > T_c);  t^{−λ_m}  (T = T_c);  m_eq + c e^{−t/ξ_t}  (T < T_c),  (37)

where T_c = 2/log(1+√2) ≈ 2.269 is the critical temperature, ξ_t is the relaxation time, m_eq is the spontaneous magnetization, c is a constant, and λ_m is the dynamical critical exponent that characterizes the power-law decay of the magnetization at the critical point. It is related to the standard critical exponents as [10, 11]

λ_m = β / (zν).  (38)

In the two-dimensional Ising model, the critical exponents β and ν have been obtained analytically (β = 1/8 and ν = 1), and hence we can estimate z from the decay of the magnetization at the critical point. This method is called the nonequilibrium-relaxation method [10, 11].

We calculated the relaxation of systems in the high-temperature phase and at the critical point (Fig. 6). In the high-temperature phase, the magnetization decays exponentially in time with the correlation time ξ_t, while at the critical point ξ_t diverges, and the magnetization shows a power-law decay. To calculate the critical exponent z precisely, we calculated "local exponents" and extrapolated the results to the limit of infinite time [10, 11]. We define the local exponents by

λ_m(t) := −d log m(t) / d log t ≈ (t/∆t) ( m(t−∆t)/m(t) − 1 ),  (39)
z(t) := β / (ν λ_m(t)).  (40)

FIG. 6. (Color online) Relaxation of the magnetization from the all-spin-up state in the two-dimensional Ising model. The horizontal axis indicates the time steps and the vertical axis indicates the magnetization, while D denotes the bond dimensionality of the tensors. (a) A semi-logarithmic plot for T = 3 > T_c under no magnetic field. (b) A logarithmic plot at the critical point T = T_c ≈ 2.269.

We choose ∆t = 1 and fit the series z(t) to

z(t) = a/t + z,  (41)

where a is a constant and z is the final estimate of the critical exponent when the number of time steps is extrapolated to infinity. We did the extrapolation by Bayesian linear regression [30] (Fig. 7). The intersection of the line of the best fit and the y-axis is the point estimate of z. We used the 75% credible interval at the origin of the x-axis as the uncertainty of our estimate of z.

We calculated z for various bond dimensions ranging from D = 2 to D = 20 and carried out the analysis described above. In the extrapolation of the local exponent z(t), we did not use the first two data points, corresponding to t = 0, 1. We used five data points (t = 2, ..., 6) for D < 17 and six data points (t = 2, ..., 7) for D ≥ 17.

FIG. 7. Extrapolation of the local exponent z(t) to t → ∞ according to Eq. (41). We performed Bayesian linear regression. The line of the best fit is the maximum a posteriori solution, and the shaded area is the 75% credible interval. The intersection of the line of the best fit and the y-axis is the final estimate of the critical exponent z. The data are for the bond dimension of tensors D = 20.

As Fig. 8 shows, the estimates of z converge as D increases. With D = 20, we estimated the critical exponent z to be 2. The nonequilibrium-relaxation method with a Monte Carlo method [10] estimated z to be 2. Ref. [31] estimated z to be 2.

V. CONCLUSION
We have proposed an algorithm to compute sublattice-flip dynamics on an infinite bipartite graph. We split the local transition probabilities into tensors that depend only on a single spin variable by a higher-order singular-value decomposition and constructed a tensor-network-operator representation of the transition probability of the whole system. We can apply this approach to any sublattice-update dynamics with nearest-neighbor interactions on a bipartite graph. We treated an infinite system utilizing translational invariance and obtained the magnetization in the thermodynamic limit directly without system-size extrapolation. Instead of extrapolating the system size to infinity, we did a bond-dimension extrapolation. We are also able to apply our algorithm to an open-boundary system in principle. The system, however, becomes inhomogeneous, and hence we would need to keep all the tensors on all the sites, with a computational cost depending on the system size. Therefore, studying the dynamics in the thermodynamic limit directly is only possible when we adopt periodic boundary conditions.

Our algorithm goes well together with the nonequilibrium-relaxation method. In the nonequilibrium-relaxation method, one prepares a large system and simulates it for a rather small number of time steps [11]. In the calculation of the time evolution with tensor networks, we cannot compute the dynamics for an arbitrarily long time because the error due to the singular-value decompositions accumulates during the time evolution. We can, however, update an infinite number of spins at once utilizing translational invariance. We indeed calculated the time evolution of an infinite system for a short period with good precision and were able to determine the critical exponent z.

We estimated only z among the critical exponents because the other exponents are known analytically. The nonequilibrium-relaxation method, however, can calculate the other critical exponents as well as the critical temperature, and thus we can use it even for systems for which analytical calculation is intractable. The nonequilibrium-relaxation method combined with our algorithm is a new direction in the study of critical phenomena with tensor networks.

A shortcoming of our algorithm at present is that we can utilize it only for a limited range of time.

FIG. 8. Estimates of the dynamical critical exponent z with various bond dimensions D. The error bars indicate the 75% credible intervals of the predictive distribution at 1/t = 0 in Eq. (41), i.e., the shaded area at the origin of the x-axis in Fig. 7.
This problem is serious in two dimensions; we obtained results only for t < 10 because it became difficult to increase the bond dimensions further as the dimensionality increased. The accuracy of our estimate of the critical exponent z is worse than that of the nonequilibrium-relaxation method with a Monte Carlo simulation [10] because we used a smaller number of time steps. In order to compute for longer times and to improve the accuracy of the estimate, we need to develop a better scheme for truncating the bond dimensions during the time evolution.

ACKNOWLEDGMENTS
I am grateful to my supervisor, Prof. Naomichi Hatano, for reading this manuscript. I also thank Prof. Tota Nakamura for introducing me to the nonequilibrium-relaxation method. This work was supported by the Advanced Leading Graduate Course for Photon Science.

[1] Y. Hieida, Journal of the Physical Society of Japan, 369 (1998).
[2] E. Carlon, M. Henkel, and U. Schollwöck, The European Physical Journal B - Condensed Matter and Complex Systems, 99 (1999).
[3] E. Carlon, M. Henkel, and U. Schollwöck, Physical Review E, 036101 (2001).
[4] Z. Nagy, C. Appert, and L. Santen, Journal of Statistical Physics, 623 (2002).
[5] A. Kemper, A. Schadschneider, and J. Zittartz, Journal of Physics A: Mathematical and General, L279 (2001).
[6] K. Temme and F. Verstraete, Physical Review Letters, 210502 (2010).
[7] R. Orús, Annals of Physics, 117 (2014).
[8] U. Schollwöck, Annals of Physics, 96 (2011).
[9] F. Verstraete, V. Murg, and J. I. Cirac, Advances in Physics, 143 (2008).
[10] N. Ito, Physica A: Statistical Mechanics and its Applications, 591 (1993).
[11] Y. Ozeki and N. Ito, Journal of Physics A: Mathematical and Theoretical, R149 (2007).
[12] Y. Ozeki and N. Ito, Physical Review B, 024416 (2001).
[13] U. Schollwöck, Reviews of Modern Physics, 259 (2005).
[14] M. Levin and C. P. Nave, Physical Review Letters, 120601 (2007).
[15] H. H. Zhao, Z. Y. Xie, Q. N. Chen, Z. C. Wei, J. W. Cai, and T. Xiang, Physical Review B, 174411 (2010).
[16] Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Physical Review B, 045139 (2012).
[17] Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Physical Review Letters, 160601 (2009).
[18] T. H. Johnson, S. R. Clark, and D. Jaksch, Physical Review E, 036702 (2010).
[19] T. Johnson, T. J. Elliott, S. Clark, and D. Jaksch, Physical Review Letters, 090602 (2015).
[20] G. Vidal, Physical Review Letters, 040502 (2004).
[21] M. Suzuki, Progress of Theoretical Physics, 1454 (1976).
[22] L. De Lathauwer, B. De Moor, and J. Vandewalle, SIAM Journal on Matrix Analysis and Applications, 1253 (2000).
[23] G. Vidal, Physical Review Letters, 070201 (2007).
[24] R. Orús and G. Vidal, Physical Review B, 155117 (2008).
[25] R. J. Glauber, Journal of Mathematical Physics, 294 (1963).
[26] N. Ito, Progress of Theoretical Physics, 682 (1990).
[27] N. Ito and T. Chikyu, Physica A: Statistical Mechanics and its Applications, 193 (1990).
[28] T. G. Kolda and B. W. Bader, SIAM Review, 455 (2009).
[29] C. Wang, S. Qin, and H. Zhou, Physical Review B, 174201 (2014).
[30] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis (Taylor & Francis, Boca Raton, 2013).
[31] B. Dammann and J. D. Reger, Zeitschrift für Physik B Condensed Matter, 97 (1995).