Inverse problem for Ising connection matrix with long-range interaction
L.B. Litinskii and B.V. Kryzhanovsky
Center of Optical Neural Technologies, Scientific Research Institute for System Analysis RAS, Nakhimov ave, 36-1, Moscow, 117218, Russia
E-mail: [email protected], [email protected]
Abstract
In the present paper, we examine Ising systems on $d$-dimensional hypercube lattices and solve an inverse problem, where we have to determine the interaction constants of an Ising connection matrix from a known spectrum of its eigenvalues. In addition, we define the restrictions that allow a random number sequence to be the spectrum of a connection matrix. We use the previously obtained analytical expressions for the eigenvalues of Ising connection matrices that account for an arbitrary long-range interaction and suppose periodic boundary conditions.

Keywords: Ising connection matrix, long-range interaction, eigenvalues, inverse problem, Kronecker product.
1. Introduction
In papers [1–3], we calculated the eigenvalues of Ising connection matrices defined on $d$-dimensional hypercube lattices ($d = 1, 2, 3, \ldots$). To provide translation invariance, we imposed periodic boundary conditions. In our calculations, we accounted for interactions not only with the nearest spins but with distant spins too. In papers [1, 2] we analyzed isotropic interactions; we discussed the general case of anisotropic interactions in [3]. We succeeded in obtaining analytical expressions for the eigenvalues of the above-described Ising connection matrices. For the $d$-dimensional system, the eigenvalues are polynomials of degree $d$ in the eigenvalues of the one-dimensional system with long-range interaction (see [2, 3]). The coefficients of these polynomials are the constants of interaction between spins.

In the present paper, we solve an inverse problem formulated as follows. Suppose we know the spectrum of an Ising connection matrix, and we have to answer two questions. First, is it possible to restore the interaction constants that define the connection matrix whose spectrum matches the given one? Second, when may a sequence of random numbers be the spectrum of some connection matrix? In Section 2, we obtain the answers to these questions for the one-dimensional Ising system. Then we use the obtained results to analyze the two- and three-dimensional systems in Sections 3 and 4, respectively. Discussion and conclusions are in Section 5. In that section, we also discuss the possibilities of using the eigenvalues of Ising connection matrices when calculating partition functions.

There is extensive literature on the inverse Ising problem; see, for example, a rather full review published in [7]. When solving inverse Ising problems, the authors examine how, with the aid of statistical inference methods, they can estimate the parameters of the Ising system - the interaction constants and the external magnetic fields - when they know empirical characteristics of a large number of random spin configurations.
We would like to emphasize that although, as in the papers cited in [7], we also restore the parameters of Ising systems, the setting of the problem and the method of its solution differ significantly. In our approach, we invert the exact formulas that express the connection matrix eigenvalues in terms of its matrix elements. When using statistical inference methods, on the contrary, the input data are observables such as magnetizations, correlations, and so on. The solution tools are also different: the Boltzmann equilibrium distribution, the maximum likelihood principle, the Bayes theorem, and so on.
2. One-dimensional Ising model 1)
A one-dimensional Ising system is a linear chain of $L$ interacting spins. To provide translation invariance, let us close the chain in a ring. Then the last spin is also the nearest neighbor of the first spin. This means that each spin has two nearest neighbors (on the left and on the right), two next-nearest neighbors (the distance to which is twice as large), two next-next-nearest neighbors, and so on. To be specific, we suppose that $L$ is odd: $L = 2l + 1$. Consequently, each spin has $l$ pairs of neighbors. Since we have in mind to discuss multidimensional lattices, we use the term "coordination spheres" to describe these pairs: the first coordination sphere, the second coordination sphere, ..., the $l$-th coordination sphere. At the beginning of the next Section, we give a general definition of the coordination spheres.

By $\mathbf{J}^{(k)}$ we denote the connection matrix that defines the interaction of each spin only with the spins from the $k$-th coordination sphere. $\mathbf{J}^{(k)}$ is a symmetric matrix with ones at the $k$-th and $(L-k)$-th diagonals that are parallel to the main diagonal. For example, $\mathbf{J}^{(1)}$ and $\mathbf{J}^{(2)}$ are the circulant matrices whose first rows are $(0, 1, 0, \ldots, 0, 1)$ and $(0, 0, 1, 0, \ldots, 0, 1, 0)$, respectively. We use the set of matrices $\{\mathbf{J}^{(k)}\}_{k=1}^{l}$ to write down the Ising connection matrix $\mathbf{A}$ that accounts for interactions with spins belonging to all the coordination spheres. Let $w_k$ be the constant of interaction with the spins from the $k$-th coordination sphere. Then

$$\mathbf{A} = w_1 \mathbf{J}^{(1)} + w_2 \mathbf{J}^{(2)} + \ldots + w_l \mathbf{J}^{(l)}. \quad (1)$$

When there is no interaction with the spins from the $k$-th coordination sphere, the corresponding constant $w_k$ in Eq. (1) is equal to zero.

The matrices $\mathbf{J}^{(k)}$ are circulants: each next row of such a matrix is obtained by a cyclic shift of the previous row one position to the right. All circulants have the same set of eigenvectors, which may have complex coordinates [5, 6]. In the general case, the eigenvalues of circulant matrices can also be complex.
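As an illustration, the matrices $\mathbf{J}^{(k)}$ and the connection matrix (1) are straightforward to construct numerically. The following sketch (Python/NumPy; the constants $w_k$ are arbitrary example values, not taken from the paper) builds them for a small ring:

```python
import numpy as np

def J(L, k):
    """Circulant matrix J^(k): ones at the k-th and (L-k)-th diagonals,
    i.e. the interaction with the k-th coordination sphere on a ring of L spins."""
    M = np.zeros((L, L))
    for i in range(L):
        M[i, (i + k) % L] = 1.0   # right neighbor at distance k
        M[i, (i - k) % L] = 1.0   # left neighbor at distance k
    return M

L = 7                      # L = 2l + 1 with l = 3
w = [0.5, -0.2, 0.1]       # example interaction constants w_1, ..., w_l
A = sum(w[k - 1] * J(L, k) for k in range(1, 4))   # Eq. (1)
```

Each `J(L, k)` is symmetric with zero diagonal, so $\mathbf{A}$ is too, and its trace vanishes; these are exactly the properties used below when analyzing the spectrum.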
However, since in our problem the matrices $\mathbf{J}^{(k)}$ are symmetric, their eigenvalues are real. By $\{\lambda_\alpha(k)\}_{\alpha=1}^{L}$ we denote the eigenvalues of these matrices. It can be shown that [2, 3]

$$\lambda_\alpha(k) = 2\cos(k\varphi_\alpha), \quad \text{where } \varphi_\alpha = \frac{2\pi(\alpha-1)}{L}, \quad \alpha = 1, 2, \ldots, L, \quad k = 1, 2, \ldots, l.$$

The first eigenvalue of each matrix $\mathbf{J}^{(k)}$ is equal to 2, and the other eigenvalues are twice degenerate:

$$\forall k = 1, 2, \ldots, l: \quad \lambda_1(k) = 2, \quad \lambda_\alpha(k) = \lambda_{L+2-\alpha}(k), \quad \alpha = 2, 3, \ldots, l+1. \quad (2)$$

Consequently, for each $k$ (if we do not take into account the first eigenvalue), the sequence $\lambda_2(k), \lambda_3(k), \ldots, \lambda_{l+1}(k), \lambda_{l+2}(k), \ldots, \lambda_{L-1}(k), \lambda_L(k)$ is mirror-symmetrical about its middle (see Fig. 1). In what follows, we repeatedly use this symmetry property.

Fig. 1. Eigenvalues $\{\lambda_\alpha(k)\}$, $\alpha = 1, 2, \ldots, 41$, of the matrices $\mathbf{J}^{(k)}$ for two values of $k$ ($\ast$ and $\Delta$; the latter corresponds to $k = 20$). The vertical line in the middle shows explicitly the mirror symmetry of the graphs.

The eigenvector $\mathbf{f}^{(1)}$ with equal coordinates corresponds to the first eigenvalue $\lambda_1(k) = 2$: $\mathbf{J}^{(k)}\mathbf{f}^{(1)} = \lambda_1(k)\,\mathbf{f}^{(1)}$, where $\mathbf{f}^{(1)} = (1, 1, \ldots, 1)^T/\sqrt{L}$. We can choose the two eigenvectors $\mathbf{f}^{(\alpha)}$ and $\mathbf{f}^{(L+2-\alpha)}$ corresponding to a degenerate eigenvalue $\lambda_\alpha(k) = \lambda_{L+2-\alpha}(k)$ to be real. It is convenient to write them as

$$f_j^{(\alpha)} = \sqrt{\tfrac{2}{L}}\cos\bigl((j-1)\varphi_\alpha\bigr), \quad f_j^{(L+2-\alpha)} = \sqrt{\tfrac{2}{L}}\sin\bigl((j-1)\varphi_\alpha\bigr), \quad j = 1, \ldots, L, \quad \alpha = 2, \ldots, l+1. \quad (3)$$

Since the eigenvectors of all the matrices $\mathbf{J}^{(k)}$ are the same, it is easy to write down the eigenvalues of the connection matrix $\mathbf{A}$:

$$\lambda_\alpha(\mathbf{A}) = w_1\lambda_\alpha(1) + w_2\lambda_\alpha(2) + \ldots + w_l\lambda_\alpha(l), \quad \alpha = 1, 2, \ldots, L. \quad (4)$$

The expression (4) is a generalization of the formula obtained previously in [4].

The spectrum of the eigenvalues of the connection matrix $\mathbf{A}$ cannot be a set of arbitrary numbers. It has a structure defined by the properties of the summands in Eq. (4). First, because the equalities (2) hold for each $k$, the spectrum $\{\lambda_\alpha(\mathbf{A})\}_{\alpha=1}^{L}$ has to be mirror-symmetrical about its middle (without accounting for the first eigenvalue). Then we have the equalities

$$\lambda_\alpha(\mathbf{A}) = \lambda_{L+2-\alpha}(\mathbf{A}), \quad \alpha = 2, 3, \ldots, l+1. \quad (5)$$

Second, due to the zero-valued elements at the main diagonals of all the matrices $\mathbf{J}^{(k)}$, the sum of the eigenvalues of the matrix $\mathbf{A}$ has to be equal to zero. This means that

$$\lambda_1(\mathbf{A}) = -2\sum_{\alpha=2}^{l+1} \lambda_\alpha(\mathbf{A}). \quad (6)$$

Consequently, only $l$ numbers $\lambda_2(\mathbf{A}), \lambda_3(\mathbf{A}), \ldots, \lambda_{l+1}(\mathbf{A})$ of the set (4) can be arbitrary; the other eigenvalues are expressed through these numbers with the aid of the equalities (5) and (6).

Let us analyze the inverse problem. Suppose we know the spectrum $\{\lambda_\alpha\}_{\alpha=1}^{L}$ of a connection matrix of a one-dimensional Ising system (for example, obtained experimentally). Of course, the sequence $\{\lambda_\alpha\}_{\alpha=1}^{L}$ satisfies the equalities (5) and (6). What are the connections $w_k$ between the spins that provide this spectrum? To determine the unknowns $w_k$, we have to solve the system (4) with the known left-hand side:

$$\lambda_\alpha = w_1\lambda_\alpha(1) + w_2\lambda_\alpha(2) + \ldots + w_l\lambda_\alpha(l), \quad \alpha = 1, 2, \ldots, L. \quad (7)$$

We can obtain the answer in an explicit form. Let us generate an $L$-dimensional vector $\mathbf{\Lambda}$ whose coordinates are the eigenvalues of the experimental spectrum $\{\lambda_\alpha\}_{\alpha=1}^{L}$. We also generate the $L$-dimensional vectors $\mathbf{\Lambda}(k)$ whose coordinates are the eigenvalues of the matrices $\mathbf{J}^{(k)}$:

$$\mathbf{\Lambda} = \bigl(\lambda_1, \lambda_2, \ldots, \lambda_{l+1}, \ldots, \lambda_L\bigr)^T, \quad \mathbf{\Lambda}(k) = \bigl(\lambda_1(k), \lambda_2(k), \ldots, \lambda_{l+1}(k), \ldots, \lambda_L(k)\bigr)^T, \quad k = 1, 2, \ldots, l. \quad (8)$$

Then we can rewrite the system of equations (7) in the vector form: $\mathbf{\Lambda} = w_1\mathbf{\Lambda}(1) + w_2\mathbf{\Lambda}(2) + \ldots + w_l\mathbf{\Lambda}(l)$. It is evident that the vectors $\mathbf{\Lambda}(k)$ and the eigenvectors $\mathbf{f}^{(k+1)}$ are collinear: $\mathbf{\Lambda}(k) \sim \mathbf{f}^{(k+1)}$, $k = 1, 2, \ldots, l$. Consequently, the vectors $\mathbf{\Lambda}(k)$ are mutually orthogonal, and we can calculate the weights $w_k$ as scalar products of the vectors $\mathbf{\Lambda}$ and $\mathbf{\Lambda}(k)$:

$$w_k = \frac{(\mathbf{\Lambda}, \mathbf{\Lambda}(k))}{2L}, \quad k = 1, 2, \ldots, l. \quad (9)$$

By doing that, we solve the inverse problem in the one-dimensional case.
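A minimal numerical check of Eqs. (4) and (9) (Python/NumPy sketch; the constants `w_true` are arbitrary example values): we build a spectrum from known constants and then recover them by the scalar products (9).

```python
import numpy as np

L = 7
l = (L - 1) // 2
phi = 2 * np.pi * np.arange(L) / L      # phi_alpha = 2*pi*(alpha - 1)/L

def Lam(k):
    """Vector Lambda(k) of the eigenvalues of J^(k): 2 cos(k * phi_alpha)."""
    return 2 * np.cos(k * phi)

w_true = np.array([0.5, -0.2, 0.1])     # example constants w_1, ..., w_l
# Eq. (4): the spectrum of A is a linear combination of the Lambda(k)
Lam_A = sum(w_true[k - 1] * Lam(k) for k in range(1, l + 1))

# Eq. (9): recover the interaction constants from the spectrum
w_rec = np.array([Lam_A @ Lam(k) / (2 * L) for k in range(1, l + 1)])
```

The recovered `w_rec` coincides with `w_true`; one can also check the mirror symmetry (5) (the sequence `Lam_A[1:]` is a palindrome) and the sum rule (6).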
3. Two-dimensional Ising model 1)
In this case, the spins are at the nodes of a square lattice of the size $L \times L$. As previously, we set $L = 2l + 1$ and assume periodic boundary conditions. Then each spin has $l$ pairs of neighbors along both the horizontal and the vertical axes. In addition, there are neighbors that are not on the same horizontal or vertical axis as the given spin. The set of spins equally interacting with the given spin belongs to the same coordination sphere. In the case of an isotropic interaction, the coordination spheres consist of spins equally distant from the given spin. Then we can enumerate the coordination spheres in ascending order of the distances to the given spin. In the anisotropic case, the interaction constants, not the distances, define the spins belonging to a given coordination sphere.

When analyzing multidimensional Ising systems, we first have to distribute the spins between the coordination spheres. This step is simple in the one-dimensional case: the pair of spins that are equidistant from the given spin belongs to the same coordination sphere. In the case of a two-dimensional lattice, to describe the interaction between the spins spaced by $m$ steps along the vertical axis and by $k$ steps along the horizontal axis, we introduce the interaction constant $w(m, k)$; the values of $m$ and $k$ change independently from 0 to $l$. If the interaction is anisotropic, $w(m, k) \neq w(k, m)$; in the isotropic case, $w(m, k) \equiv w(k, m)$. The difference between the coordination spheres in the isotropic and anisotropic cases influences the symmetry properties of the spectrum.

Let us make a few necessary comments. Since there is no self-action in the system, we always have $w(0, 0) = 0$. It is convenient to introduce the unit $(L \times L)$-dimensional matrix $\mathbf{J}^{(0)} = \mathrm{diag}(1, 1, \ldots, 1)$. This matrix completes the set of matrices $\{\mathbf{J}^{(k)}\}_{k=1}^{l}$. All the eigenvalues of the matrix $\mathbf{J}^{(0)}$ are equal to one. With the aid of these eigenvalues, we define the $L$-dimensional vector $\mathbf{\Lambda}(0) = (1, 1, \ldots, 1)^T$, which completes the set (8) of the vectors $\mathbf{\Lambda}(k)$: $\{\mathbf{\Lambda}(k)\}_{k=0}^{l}$.
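For the isotropic case, the distribution of the lattice sites over coordination spheres can be sketched as follows (Python; the grouping of torus offsets by squared distance is our illustration, not a formula from the paper):

```python
from collections import defaultdict

def coordination_spheres(L):
    """Group all nonzero offsets (m, k) on an L x L torus by the squared
    distance to the origin; each group is one coordination sphere
    (isotropic case)."""
    spheres = defaultdict(list)
    for m in range(L):
        for k in range(L):
            if (m, k) == (0, 0):
                continue
            dm, dk = min(m, L - m), min(k, L - k)   # distances on the ring
            spheres[dm * dm + dk * dk].append((m, k))
    return dict(sorted(spheres.items()))
```

For $L = 5$, the first sphere (squared distance 1) contains the four nearest neighbors, the second one (squared distance 2) the four diagonal neighbors, and so on.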
In subsection 2), we solve the inverse problem in the case of an anisotropic interaction. The isotropic interaction is the subject of the last subsection of this Section.

2) In paper [3], we showed that the $(L^2 \times L^2)$-dimensional matrix $\mathbf{B}$ that describes the interactions $\{w(m, k)\}_{m,k=0}^{l}$ between the spins has a block-circulant form, and its eigenvectors are the pairwise Kronecker products of the eigenvectors $\mathbf{f}^{(\alpha)}$ defined by Eq. (3). Exactly as in the one-dimensional case, the set of the eigenvectors of the matrix $\mathbf{B}$ does not depend on the interaction constants, and the eigenvalues of this matrix obtained in [3] are

$$\mu_{\alpha\beta} = \sum_{m=0}^{l}\sum_{k=0}^{l} w(m, k)\,\lambda_\alpha(m)\,\lambda_\beta(k), \quad \alpha, \beta = 1, 2, \ldots, L. \quad (10)$$

Let us write Eq. (10) in the vector form using the above-introduced $L$-dimensional vectors $\mathbf{\Lambda}(k)$ (see Eq. (8)). With the aid of these vectors, we generate the $L^2$-dimensional vectors $\mathbf{\Lambda}(m, k)$ that are the Kronecker products of the vectors $\mathbf{\Lambda}(m)$ and $\mathbf{\Lambda}(k)$:

$$\mathbf{\Lambda}(m, k) = \mathbf{\Lambda}(m) \otimes \mathbf{\Lambda}(k) = \bigl(\lambda_1(m)\mathbf{\Lambda}(k), \lambda_2(m)\mathbf{\Lambda}(k), \ldots, \lambda_L(m)\mathbf{\Lambda}(k)\bigr)^T, \quad m, k = 0, 1, \ldots, l. \quad (11)$$

The vectors $\mathbf{\Lambda}(m, k)$ are mutually orthogonal. Let us define an $L^2$-dimensional vector $\mathbf{M}$ whose coordinates are the eigenvalues $\mu_{\alpha\beta}$ defined by Eq. (10):

$$\mathbf{M} = (\mu_{11}, \ldots, \mu_{1L}, \mu_{21}, \ldots, \mu_{2L}, \ldots, \mu_{L1}, \ldots, \mu_{LL})^T. \quad (12)$$

Now we can rewrite the set of equalities (10) in the vector form

$$\mathbf{M} = \sum_{m=0}^{l}\sum_{k=0}^{l} w(m, k)\,\mathbf{\Lambda}(m, k) = w(0,1)\mathbf{\Lambda}(0,1) + \ldots + w(0,l)\mathbf{\Lambda}(0,l) + w(1,0)\mathbf{\Lambda}(1,0) + \ldots + w(l,0)\mathbf{\Lambda}(l,0) + \ldots + w(l,l)\mathbf{\Lambda}(l,l). \quad (13)$$

Since $w(0, 0) = 0$, the term $w(0,0)\mathbf{\Lambda}(0,0)$ is absent in this equation.

Equation (13) allows us to solve the two-dimensional inverse problem easily. Namely, we have to determine the interaction constants $\{w(m, k)\}$ that provide a known eigenvalue spectrum $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$. For example, it might be an experimental spectrum. Let us write the $L^2$-dimensional column vector $\mathbf{M}$ of the form (12) using the "experimental" spectrum components $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ and take into account the mutual orthogonality of the vectors $\mathbf{\Lambda}(m, k)$ (11). Then the desired interaction constants are the scalar products of the $L^2$-dimensional vectors:

$$w(m, k) = \frac{(\mathbf{M}, \mathbf{\Lambda}(m, k))}{(\mathbf{\Lambda}(m, k), \mathbf{\Lambda}(m, k))}, \quad m, k = 0, 1, \ldots, l. \quad (14)$$

Now let us discuss another question. In the same way as in the one-dimensional problem, not any sequence of numbers $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ can be a spectrum of a connection matrix: the symmetry properties of the $L^2$-dimensional vectors $\mathbf{\Lambda}(m, k)$ impose rather severe restrictions on the values of these numbers.

Firstly, from Eq. (13) it follows that the sum of the numbers $\mu_{\alpha\beta}$ has to be equal to zero:

$$\sum_{\alpha=1}^{L}\sum_{\beta=1}^{L} \mu_{\alpha\beta} = 0. \quad (15)$$

Secondly, in the one-dimensional problem the set of the eigenvalues (excluding the first eigenvalue) is mirror-symmetrical about its middle for each $m = 1, \ldots, l$: $\lambda_i(m) = \lambda_{L+2-i}(m)$, where $i = 2, \ldots, l+1$. From Eq. (11), which defines the $L^2$-dimensional vectors $\mathbf{\Lambda}(m, k)$ as the products of the eigenvalues $\lambda_i(m)$ by the vectors $\mathbf{\Lambda}(k)$, it is evident that their last $l \cdot L$ coordinates copy the preceding $l \cdot L$ ones.
Consequently, the same has to be true for the sequence of the numbers $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$. Then it is necessary that the numbers constituting the spectrum satisfy the equalities $\mu_{\alpha\beta} = \mu_{L+2-\alpha,\beta}$, where $\alpha = 2, \ldots, l+1$ and $\beta = 1, \ldots, L$. In other words, the last $l \cdot L$ terms of the sequence of the numbers $\mu_{\alpha\beta}$ are not free parameters.

Thirdly, since the last $l$ coordinates of each $L$-dimensional vector $\mathbf{\Lambda}(k)$ are a mirror image of the preceding $l$ coordinates, not all of the first $(l+1) \cdot L$ coordinates of any vector $\mathbf{\Lambda}(m, k)$ are different. Consequently, the same has to be true for the given sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$: the last $l$ terms of the first group of its $L$ terms have to be a mirror image of the preceding $l$ terms; the last $l$ terms of the second group of its $L$ terms have to be a mirror image of the preceding $l$ terms, and so on. Finally, for the last, $(l+1)$-th, group consisting of the $L$ terms $\{\mu_{l+1,\beta}\}_{\beta=1}^{L}$, the equalities $\mu_{l+1,i} = \mu_{l+1,L+2-i}$ ($i = 2, \ldots, l+1$) have to be fulfilled.

This means that by symmetry reasons only the $(l+1)^2$ numbers

$$\{\mu_{1\beta}\}_{\beta=1}^{l+1}, \quad \{\mu_{2\beta}\}_{\beta=1}^{l+1}, \quad \{\mu_{3\beta}\}_{\beta=1}^{l+1}, \ldots, \{\mu_{l+1,\beta}\}_{\beta=1}^{l+1} \quad (16)$$

of the sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ may be independent parameters. We can rewrite Eq. (15) using only the terms of the sequence (16):

$$\mu_{11} + 2\sum_{\beta=2}^{l+1}\mu_{1\beta} + 2\sum_{\alpha=2}^{l+1}\mu_{\alpha 1} + 4\sum_{\alpha=2}^{l+1}\sum_{\beta=2}^{l+1}\mu_{\alpha\beta} = 0. \quad (17)$$

This equation allows us to express $\mu_{11}$ through the other $l(l+2)$ independent numbers $\mu_{\alpha\beta}$ from the sequence (16). Consequently, the number of the independent values $\mu_{\alpha\beta}$ equals exactly the number of the orthogonal vectors $\mathbf{\Lambda}(m, k)$ taking part in the expansion (13).

3) Finally, let us briefly discuss a two-dimensional Ising system with an isotropic interaction. Evidently, we again can use the equations (13), (14), and (17); however, now the number of distinct interaction constants $w(m, k)$ is not $l(l+2)$ but much less: their number is equal to $l(l+3)/2$. This means that the same has to be the number of independent terms in the given sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ that represents the spectrum of an isotropic connection matrix. Let us, without proof, write down the formulas that replace Eqs. (16) and (17) when the interaction between the spins is isotropic. After removing all the numbers $\mu_{\alpha\beta}$ that due to the symmetry reasons copy other coordinates of the vector $\mathbf{M}$ (see Eq. (12)), in place of (16) we obtain the sequence

$$\{\mu_{1\beta}\}_{\beta=1}^{l+1}, \quad \{\mu_{2\beta}\}_{\beta=2}^{l+1}, \quad \{\mu_{3\beta}\}_{\beta=3}^{l+1}, \ldots, \{\mu_{l\beta}\}_{\beta=l}^{l+1}, \quad \mu_{l+1,l+1}, \quad (18)$$

that includes only $(l+1)(l+2)/2$ numbers. Next, when the interaction is isotropic, we can rewrite the general requirement (15) as

$$\mu_{11} + 4\sum_{\beta=2}^{l+1}\mu_{1\beta} + 4\sum_{\alpha=2}^{l+1}\mu_{\alpha\alpha} + 8\sum_{\alpha=2}^{l}\sum_{\beta=\alpha+1}^{l+1}\mu_{\alpha\beta} = 0,$$

and calculate $\mu_{11}$ with the aid of this equation. As a result, we obtain the correct answer: in the sequence (18) the number of independent values is equal to $l(l+3)/2$.
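The projection formula (14) is easy to verify numerically (Python/NumPy sketch; the constants `w_true` are random example values of our choosing):

```python
import numpy as np

L = 5
l = (L - 1) // 2
phi = 2 * np.pi * np.arange(L) / L

def Lam(k):
    """Eigenvalue vector of J^(k); Lam(0) corresponds to the identity J^(0)."""
    return 2 * np.cos(k * phi) if k > 0 else np.ones(L)

rng = np.random.default_rng(0)
w_true = rng.normal(size=(l + 1, l + 1))
w_true[0, 0] = 0.0        # no self-action: w(0, 0) = 0

# Eq. (13): the L^2-dimensional spectrum vector M
M = sum(w_true[m, k] * np.kron(Lam(m), Lam(k))
        for m in range(l + 1) for k in range(l + 1))

# Eq. (14): recover w(m, k) by projecting M onto the orthogonal Lambda(m, k)
w_rec = np.zeros_like(w_true)
for m in range(l + 1):
    for k in range(l + 1):
        v = np.kron(Lam(m), Lam(k))
        w_rec[m, k] = (M @ v) / (v @ v)
```

Here `w_rec` reproduces `w_true` exactly (including $w(0,0) = 0$), and the sum of the coordinates of `M` vanishes, in agreement with Eq. (15).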
4. Three-dimensional Ising model 1)
We consider a system of spins at the nodes of a cubic lattice of the size $L \times L \times L$ ($L = 2l + 1$), assuming periodic boundary conditions. Then each spin has $l$ pairs of neighbors situated along each of the three independent coordinate axes. In addition, the spins have neighbors that are not on the same coordinate axes as the given spin. The spins equally interacting with the given spin constitute a coordination sphere. Let $w(n, m, k)$ be the constant of interaction between spins shifted with respect to each other by a distance $n$ along the first axis, by a distance $m$ along the second axis, and by a distance $k$ along the third axis. When the interaction is anisotropic, there are $(l+1)^3 - 1$ independent interaction constants $\{w(n, m, k)\}_{n,m,k=0}^{l}$, where the $-1$ appears since there is no self-interaction and $w(0, 0, 0) = 0$. In the case of an isotropic interaction, the number of distinct constants $w(n, m, k)$ is equal to $(l+1)(l+2)(l+3)/6 - 1$.

In paper [3], we showed that the $(L^3 \times L^3)$-dimensional connection matrix $\mathbf{C}$ defined by the interaction constants $\{w(n, m, k)\}_{n,m,k=0}^{l}$ is a block-circulant. Its eigenvectors $\mathbf{F}^{(\alpha\beta\gamma)}$ are the Kronecker products of the eigenvectors $\mathbf{f}^{(\alpha)}$ (see Eq. (3)): $\mathbf{F}^{(\alpha\beta\gamma)} = \mathbf{f}^{(\alpha)} \otimes \mathbf{f}^{(\beta)} \otimes \mathbf{f}^{(\gamma)}$, $\alpha, \beta, \gamma = 1, 2, \ldots, L$. The vectors $\mathbf{F}^{(\alpha\beta\gamma)}$ constitute a full set of the eigenvectors of any connection matrix of the three-dimensional Ising system, and they do not depend on the values of the interaction constants $\{w(n, m, k)\}_{n,m,k=0}^{l}$. Let us write down the eigenvalues of the matrix $\mathbf{C}$ obtained in [3]:

$$\mu_{\alpha\beta\gamma} = \sum_{n=0}^{l}\sum_{m=0}^{l}\sum_{k=0}^{l} w(n, m, k)\,\lambda_\alpha(n)\,\lambda_\beta(m)\,\lambda_\gamma(k), \quad \alpha, \beta, \gamma = 1, 2, \ldots, L. \quad (19)$$

We use the above-introduced $L^2$-dimensional vectors $\mathbf{\Lambda}(m, k)$ (see Eq. (11)) to generate the $L^3$-dimensional vectors $\mathbf{\Lambda}(n, m, k)$ that are the Kronecker products of the vectors $\mathbf{\Lambda}(n)$ and $\mathbf{\Lambda}(m, k)$:

$$\mathbf{\Lambda}(n, m, k) = \mathbf{\Lambda}(n) \otimes \mathbf{\Lambda}(m, k) = \bigl(\lambda_1(n)\mathbf{\Lambda}(m, k), \lambda_2(n)\mathbf{\Lambda}(m, k), \ldots, \lambda_L(n)\mathbf{\Lambda}(m, k)\bigr)^T, \quad n, m, k = 0, 1, \ldots, l. \quad (20)$$

The vectors $\mathbf{\Lambda}(n, m, k)$ are mutually orthogonal. Let us define an $L^3$-dimensional vector $\mathbf{M}$ whose coordinates are the eigenvalues (19):

$$\mathbf{M} = (\mu_{111}, \ldots, \mu_{11L}, \mu_{121}, \ldots, \mu_{12L}, \ldots, \mu_{1LL}, \mu_{211}, \ldots, \mu_{LLL})^T. \quad (21)$$

Then we can rewrite the set of equations (19) in the vector form:

$$\mathbf{M} = \sum_{n=0}^{l}\sum_{m=0}^{l}\sum_{k=0}^{l} w(n, m, k)\,\mathbf{\Lambda}(n, m, k) = w(0,0,1)\mathbf{\Lambda}(0,0,1) + \ldots + w(1,0,0)\mathbf{\Lambda}(1,0,0) + \ldots + w(l,l,l)\mathbf{\Lambda}(l,l,l). \quad (22)$$

The equation (22) allows us to solve the inverse problem and calculate the interaction constants $w(n, m, k)$ that define the given set of the eigenvalues $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{L}$ of the connection matrix. Indeed, let us transform this "experimental" spectrum $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{L}$ into an $L^3$-dimensional column vector $\mathbf{M}$ of the form (21) and use the mutual orthogonality of the vectors $\mathbf{\Lambda}(n, m, k)$. Then we obtain the required interaction constants as the scalar products of the $L^3$-dimensional vectors:

$$w(n, m, k) = \frac{(\mathbf{M}, \mathbf{\Lambda}(n, m, k))}{(\mathbf{\Lambda}(n, m, k), \mathbf{\Lambda}(n, m, k))}, \quad n, m, k = 0, 1, \ldots, l. \quad (23)$$

This formula solves the problem of restoring the interaction constants corresponding to the given spectrum.

Not any sequence of the numbers $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{L}$ can represent the spectrum of a three-dimensional Ising connection matrix. To start with, the equality

$$\sum_{\alpha=1}^{L}\sum_{\beta=1}^{L}\sum_{\gamma=1}^{L} \mu_{\alpha\beta\gamma} = 0 \quad (24)$$

has to hold. As in the two-dimensional problem, the cases of anisotropic and isotropic interactions differ significantly.

When the interaction is anisotropic, it is easy to list the values $\mu_{\alpha\beta\gamma}$ after excluding the numbers repeated due to the symmetry reasons. This list contains $(l+1)^3$ values $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{l+1}$ (compare with Eq. (16)). Because of Eq. (24), the number of independent values in this list is one less. For example, we can express $\mu_{111}$ in terms of the other independent values:

$$\mu_{111} = -2\Bigl(\sum_{\gamma=2}^{l+1}\mu_{11\gamma} + \sum_{\beta=2}^{l+1}\mu_{1\beta 1} + \sum_{\alpha=2}^{l+1}\mu_{\alpha 11}\Bigr) - 4\Bigl(\sum_{\beta,\gamma=2}^{l+1}\mu_{1\beta\gamma} + \sum_{\alpha,\gamma=2}^{l+1}\mu_{\alpha 1\gamma} + \sum_{\alpha,\beta=2}^{l+1}\mu_{\alpha\beta 1}\Bigr) - 8\sum_{\alpha,\beta,\gamma=2}^{l+1}\mu_{\alpha\beta\gamma}. \quad (25)$$

The symmetry reasons allow us to restore all the other numbers $\mu_{\alpha\beta\gamma}$. Consequently, the number of independent values $\mu_{\alpha\beta\gamma}$ equals exactly the number of the basis vectors $\mathbf{\Lambda}(n, m, k)$ (see Eq. (20)) that enter the sum (22) with nonzero coefficients.

When the interaction is isotropic, due to the symmetry restrictions only $(l+1)(l+2)(l+3)/6$ values $\mu_{\alpha\beta\gamma}$ may be independent; they are the values with $1 \le \alpha \le \beta \le \gamma \le l+1$: $\{\mu_{1\beta\gamma}\}$, $\{\mu_{2\beta\gamma}\}$, ..., $\mu_{l+1,l+1,l+1}$. In addition, because of Eq. (24) this number decreases by one. The same as we have done previously (see Eq. (25)), we can express, for example, $\mu_{111}$. Then, using the remaining independent values, with the aid of the symmetry reasons we restore all the other numbers $\mu_{\alpha\beta\gamma}$. Thus, in the given "experimental" set of eigenvalues there must be $(l+1)(l+2)(l+3)/6 - 1$ independent values, and this number exactly matches the number of distinct coefficients $w(n, m, k)$ in the expansion (22).
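The same recovery works in three dimensions via triple Kronecker products, and the counting of distinct constants is easy to check by enumeration (Python/NumPy sketch; `w_true` are random example values):

```python
import numpy as np
from itertools import product

L = 5
l = (L - 1) // 2
phi = 2 * np.pi * np.arange(L) / L

def Lam(k):
    """Eigenvalue vector of J^(k); Lam(0) corresponds to the identity J^(0)."""
    return 2 * np.cos(k * phi) if k > 0 else np.ones(L)

rng = np.random.default_rng(1)
w_true = rng.normal(size=(l + 1,) * 3)
w_true[0, 0, 0] = 0.0      # no self-interaction: w(0, 0, 0) = 0

# Eq. (22): the L^3-dimensional spectrum vector M
M = sum(w_true[n, m, k] * np.kron(Lam(n), np.kron(Lam(m), Lam(k)))
        for n, m, k in product(range(l + 1), repeat=3))

# Eq. (23): recover each w(n, m, k) by projection
w_rec = np.zeros_like(w_true)
for n, m, k in product(range(l + 1), repeat=3):
    v = np.kron(Lam(n), np.kron(Lam(m), Lam(k)))
    w_rec[n, m, k] = (M @ v) / (v @ v)

# Counting distinct constants: (l+1)^3 - 1 anisotropic ones versus
# (l+1)(l+2)(l+3)/6 - 1 isotropic ones (one per sorted triple (n, m, k))
n_aniso = (l + 1) ** 3 - 1
n_iso = len({tuple(sorted(t)) for t in product(range(l + 1), repeat=3)}) - 1
```

For $l = 2$, this gives 26 anisotropic and 9 isotropic constants, in agreement with the formulas above.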
5. Discussion and conclusions
Connection matrices define the most important characteristics of Ising systems, such as the energies of the states and their distribution, the free energy, and all the macroscopic properties defined by the free energy. All these functions crucially depend on the connection matrix, whose main characteristics are its eigenvalues and eigenvectors. In papers [1-3], we obtained the expressions for the eigenvalues of the Ising connection matrix $\mathbf{A} = (A_{ij})_{i,j=1}^{N}$ with an arbitrary long-range interaction in terms of its matrix elements. In the present paper, we solve the inverse problem: we suppose that we know the matrix spectrum, and we have to determine the interaction constants providing this spectrum.

We would like to note that the statement of the problem itself is not obvious. The point is that usually, to calculate the matrix elements of a matrix, we have to know not only the eigenvalues but also all its eigenvectors. Indeed, let $\{\lambda_\alpha\}$ and $\{\mathbf{f}^{(\alpha)} = (f_1^{(\alpha)}, f_2^{(\alpha)}, \ldots)^T\}$ be the eigenvalues and the eigenvectors of a symmetric matrix $\mathbf{A} = (A_{ij})$, respectively. Then its matrix elements are [5]:

$$A_{ij} = \sum_{\alpha=1}^{N} \lambda_\alpha f_i^{(\alpha)} f_j^{(\alpha)}, \quad i, j = 1, 2, \ldots, N. \quad (26)$$

On the other hand, at the beginning of each Section we recall that all connection matrices of any $d$-dimensional Ising model are circulants and, consequently, all these matrices have the same set of eigenvectors [6]. In other words, their eigenvectors are known by default. However, our analysis shows that when calculating the matrix elements, the internal symmetry of the problem allows us to use not Eq. (26) but much simpler and more convenient formulas (see Eqs. (9), (14), and (23)). In addition, using the symmetry reasons, we obtain the number and the positions of the independent values in a given sequence that allow it to be the spectrum of some connection matrix. The problem is solved for the $d$-dimensional hypercube lattices and an arbitrary long-range interaction.

The following question arises: may the connection matrix eigenvalues be useful when calculating the partition function

$$Z = \sum_{i=1}^{2^N} e^{\beta(\mathbf{A}\mathbf{s}_i, \mathbf{s}_i)} = e^{\beta(\mathbf{A}\mathbf{s}_1, \mathbf{s}_1)} + e^{\beta(\mathbf{A}\mathbf{s}_2, \mathbf{s}_2)} + \ldots + e^{\beta(\mathbf{A}\mathbf{s}_{2^N}, \mathbf{s}_{2^N})}\,? \quad (27)$$

The first thing that comes to mind is to improve the transfer-matrix method for the partition function calculation using the expansion (26) of the matrix $\mathbf{A}$ in terms of the eigenvectors and the eigenvalues we obtained. Apparently, this is a hopeless idea. The basis of the transfer-matrix method is a transition from $N$ spin variables to a larger number of spin variables. However, an increase of the dimension of the problem leads to a complex restructuring of the eigenvalues and the eigenvectors of the matrix $\mathbf{A}$ in each term of Eq. (27).

On the other hand, the connection matrix eigenvalues themselves may be useful when calculating the partition functions. Indeed, let us expand each exponential in (27) into a formal Taylor series and rearrange the summands, combining in one sum the terms with the same power of $\beta$. We showed that for matrices with zero diagonals the equation

$$Z = \sum_{i=1}^{2^N}\Bigl[1 + \beta(\mathbf{A}\mathbf{s}_i, \mathbf{s}_i) + \frac{\beta^2}{2!}(\mathbf{A}\mathbf{s}_i, \mathbf{s}_i)^2 + \frac{\beta^3}{3!}(\mathbf{A}\mathbf{s}_i, \mathbf{s}_i)^3 + \ldots\Bigr] = 2^N\Bigl[1 + \beta\,\mathrm{Tr}\,\mathbf{A} + \beta^2\,\mathrm{Tr}\,\mathbf{A}^2 + \frac{4}{3}\beta^3\,\mathrm{Tr}\,\mathbf{A}^3 + \ldots\Bigr] \quad (28)$$

is true [8]. Here $\mathrm{Tr}$ means the trace of the matrix. Since $\mathrm{Tr}\,\mathbf{A} = 0$, the summand $\beta\,\mathrm{Tr}\,\mathbf{A}$ in the right-hand side of Eq. (28) is equal to zero too. Then the partition function is a series in the powers of the inverse temperature $\beta$ whose coefficients $\mathrm{Tr}\,\mathbf{A}^k = \sum_{i=1}^{N} \lambda_i^k$ are the sums of the powers of the connection matrix eigenvalues. In our case, the eigenvalues $\lambda_i$ are determined by the equations (4), (10), or (19). Note that beginning from $k = 4$, the expressions for the sums $\sum_i (\mathbf{A}\mathbf{s}_i, \mathbf{s}_i)^k$ become more complex, including not only $\mathrm{Tr}\,\mathbf{A}^k$ but also some additional terms. Our arguments show that the eigenvalues of the Ising connection matrix may be useful when calculating the partition function.

Acknowledgements
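The combinatorial coefficients in Eq. (28) are easy to check by brute force over all $2^N$ spin configurations for a small system (Python/NumPy sketch; the couplings are example values of our choosing):

```python
import numpy as np
from itertools import product

# Ring of N spins with couplings to the first two coordination spheres
N = 7
A = np.zeros((N, N))
for i in range(N):
    for k, w in [(1, 0.3), (2, -0.1)]:   # example constants w_1, w_2
        A[i, (i + k) % N] = w
        A[i, (i - k) % N] = w

spins = np.array(list(product([-1, 1], repeat=N)))   # all 2^N configurations
q = np.einsum('si,ij,sj->s', spins, A, spins)        # (A s, s) for each state

# Averages over configurations reproduce the trace coefficients of Eq. (28):
# <(As,s)> = Tr A = 0, <(As,s)^2> = 2 Tr A^2, <(As,s)^3> = 8 Tr A^3,
# so Z / 2^N is approximately 1 + beta^2 Tr A^2 + (4/3) beta^3 Tr A^3.
m1, m2, m3 = q.mean(), (q ** 2).mean(), (q ** 3).mean()

beta = 0.05
Z_exact = np.exp(beta * q).sum()
Z_series = 2 ** N * (1 + beta ** 2 * np.trace(A @ A)
                     + (4 / 3) * beta ** 3 * np.trace(A @ A @ A))
```

At small $\beta$, the truncated series `Z_series` agrees with the exact sum up to terms of order $\beta^4$.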
Funding: This work was supported by a Russian Basic Research Foundation grant.
References
[1] Litinskii L, Kryzhanovsky B 2020 Physica A
[2] Kryzhanovsky B V, Litinskii L B 2019 Doklady Physics
[3] Litinskii L, Kryzhanovsky B J. of Phys. A: Math. and Theor.
[4] Dixon J M, Tuszynski J A, Nip M L A 2001 Physica A
[5] ...
[6] ...
[7] Nguyen H C, Zecchina R, Berg J 2017 Advances in Physics 66(3) 197
[8] Kryzhanovsky B, Litinskii L (unpublished result)