Investigation of Finite-size 2D Ising Model with a Noisy Matrix of Spin-Spin Interactions
Boris Kryzhanovsky, Magomed Malsagov and Iakov Karandashev * Scientific Research Institute for System Analysis, Russian Academy of Sciences, Moscow, Russian Federation * Correspondence: [email protected]
Abstract:
We analyze the changes in the thermodynamic properties of a spin system as it passes from the classical two-dimensional Ising model to the spin-glass model, where the spin-spin interactions are random in value and sign. Formally, the transition reduces to a gradual change in the amplitude of the multiplicative noise (distributed uniformly with a mean equal to one) superimposed on the initial Ising matrix of interacting spins. Taking the noise into account, we obtain analytical expressions that are valid for lattices of finite size. We compare our results with computer simulations performed for square N = L x L lattices with linear dimensions L = 50÷1000. We find experimentally the dependencies of the critical values (the critical temperature, the internal energy, the entropy and the specific heat), as well as of the ground-state energy and magnetization, on the amplitude of the noise. We show that when the variance of the noise reaches one, the ground state jumps from the fully correlated state to an uncorrelated one and its magnetization jumps from 1 to 0. At the same time, the phase transition that is present at lower noise levels disappears.
Keywords:
Ising model, noisy connections, ground state, free energy, internal energy, magnetization, specific heat, entropy, critical temperature
1. Introduction
The calculation of the partition function is an essential task of statistical physics and informatics. Only a few conceptual models allow exact solutions [1-6]. Among these, the 2D Ising model [7], though simple, deserves special attention because of its importance for investigating critical effects. The Edwards-Anderson model [8] and the Sherrington-Kirkpatrick model [9], which contributed much to the development of spin-glass theory, are also worth mentioning. However, there are not many models that permit exact solutions, which is why numerical methods are mostly used for tackling complex systems. Of these, two methods are most suitable for our purpose. The first is the Monte Carlo method [10, 11]. It enables us to analyze a system and determine its critical parameters quite accurately [12-16]; thorough treatments of the method can be found in [17, 18]. Unfortunately, it requires a great deal of computation and does not allow direct calculation of the free energy. The second method uses the approach of [19, 20], which has recently given rise to a fast algorithm [21, 22] that finds the free energy by computing the determinant of a matrix. The algorithm is popular because it computes the free energy quite accurately and at the same time determines the energy and configuration of the ground state of the system. The methods of statistical physics help researchers to understand the behavior of complex neural networks and to evaluate the capacity of neural-network storage systems [23-28]. Machine learning and computer-aided image processing need fast calculations of the partition function for specific interconnection matrices [29, 30]. The realization of Hinton's ideas [31, 32] gave rise to algorithms of deep learning and image processing [33-36].
Based on the optimization of the free energy of a spin (neuron) system, these algorithms formally come down to the optimization of the spin correlation in neighboring layers, or within a single layer, of a neural network. It should be understood that the system has a phase transition: the spin correlation grows abruptly at the critical point (the correlation length becomes nearly as great as the size of the whole system). In this case the optimization of the neural network becomes temperature dependent, which makes the learning algorithm almost impracticable. The aim of this paper is to study the properties of a finite spin system whose Hamiltonian is defined as the quadratic functional (1). This functional is often used in machine learning and image processing. The quantities s_i may stand either for the pixel class (object/background) in an image [35] or for the neuron activity indicator in a Bayesian neural network [36]. We will use the physical notation, calling the quantities s_i spins. The model under consideration has two limiting cases: the conventional 2D Ising model with regular interconnections is the first, the Edwards-Anderson model the second. The properties of our model lie somewhere in between. We introduce adjusting parameters into functional (1), which allow us to go from the 2D Ising model to the Edwards-Anderson model in a smooth manner and to investigate the thermodynamic characteristics of the system in the transient regime. To avoid misunderstanding, let us point out two things. First, our interest is in finite systems. For this reason, there is an expected discrepancy with the Onsager results obtained at N -> infinity. Second, we cannot use the results of spin-glass theory to the full extent because the finite system under consideration is ergodic: it does not have multiple phase transitions caused by frustrations, nor does it provide self-averaging [37, 38].
2. Essential expressions, the equation of state
Let us consider a system described by the Hamiltonian

E = -\frac{1}{N}\sum_{\langle i,j \rangle} J_{ij}\, s_i s_j .   (1)

The system consists of N Ising spins s_i = ±1 (i = 1, ..., N) positioned at the nodes of a planar grid, the nodes being numbered by the index i; the sum runs over nearest-neighbor pairs, so only interactions with the four nearest neighbors are taken into account. The spin-spin interactions J_{ij} are random and defined as

J_{ij} = J\,(1 + \eta_{ij}),   (2)

where \eta_{ij} is a random zero-mean variable distributed uniformly over the interval [-\eta, \eta]. We have chosen the uniform distribution of \eta_{ij} to be able to control the sign of J_{ij}: when \eta \le 1, all interactions J_{ij} are positive. For the sake of simplicity, we set J = 1. Our interest is the free energy of the system,

f = -\frac{\ln Z_N}{\beta N},   (3)

where the partition function Z_N = \sum_S e^{-\beta N E(S)} is defined as a sum over all possible configurations S, and \beta = 1/kT is the inverse temperature. Knowledge of the free energy makes it possible to compute the basic measurable parameters of the system:

U = \frac{\partial (\beta f)}{\partial \beta}, \qquad \sigma^2 = -\frac{\partial^2 (\beta f)}{\partial \beta^2}, \qquad C = \beta^2 \sigma^2,   (4)

where U = \langle E \rangle is the ensemble average of the energy at a given \beta, \sigma^2 = \langle E^2 \rangle - \langle E \rangle^2 is the variance of the energy, and C is the specific heat. The properties of the system depend on the dimension N of the system and on the adjusting parameter \eta. Unfortunately, we cannot allow for the effects of both parameters simultaneously, so we consider the contribution of each separately.
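Definitions (1)-(4) can be checked end to end on a toy lattice by brute-force enumeration. The sketch below is our own illustration (names such as `noisy_couplings` and `thermo` are ours, not part of the paper's software); it builds the couplings of eq. (2) and computes f, U and the energy variance exactly:

```python
import itertools
import math
import random

def noisy_couplings(L, eta, J=1.0, seed=0):
    """Nearest-neighbour bonds (a, b, J_ab) on an L x L free-boundary grid,
    with J_ab = J*(1 + eta_ab), eta_ab uniform on [-eta, eta] -- eq. (2)."""
    rng = random.Random(seed)
    bonds = []
    for i in range(L):
        for j in range(L):
            a = i * L + j
            if j + 1 < L:                      # horizontal bond
                bonds.append((a, a + 1, J * (1.0 + rng.uniform(-eta, eta))))
            if i + 1 < L:                      # vertical bond
                bonds.append((a, a + L, J * (1.0 + rng.uniform(-eta, eta))))
    return bonds

def thermo(beta, bonds, N):
    """Exact f, U and Var(E) by enumerating all 2^N configurations."""
    energies = [-sum(Jab * s[a] * s[b] for a, b, Jab in bonds) / N
                for s in itertools.product((-1, 1), repeat=N)]
    weights = [math.exp(-beta * N * E) for E in energies]  # Boltzmann, eq. (3)
    Z = sum(weights)
    U = sum(w * E for w, E in zip(weights, energies)) / Z
    var = sum(w * E * E for w, E in zip(weights, energies)) / Z - U * U
    f = -math.log(Z) / (beta * N)
    return f, U, var
```

On a 3 x 3 grid the finite-difference derivative ∂(βf)/∂β of eq. (4) reproduces the directly averaged U, and -∂²(βf)/∂β² reproduces N·Var(E), with E understood per spin.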
1) The effect of the finite grid dimension.
Let us consider how the finite dimension of the grid affects its properties, taking \eta = 0 as the starting point. In this case the behavior of the system is described by expressions (see reference [39]) that hold for finite systems. They have the form of the Onsager solution with renormalized parameters:

-\beta f = \ln 2 + \frac{1}{2}\ln\cosh^2 2K + \frac{1}{\pi}\int_0^{\pi/2} \ln\left[\frac{1}{2}\left(1+\sqrt{1-a^2\cos^2 z}\right)\right] dz,

U = -J\coth 2K \left[ 1 + \frac{2}{\pi}\left(2\tanh^2 2K - 1\right) \mathrm{K}_1(a) \right],   (5)

\sigma^2 = \frac{4J^2}{\pi}\coth^2 2K \left\{ \mathrm{K}_1(a) - \mathrm{E}_1(a) - \left(1-\tanh^2 2K\right)\left[\frac{\pi}{2} + \left(2\tanh^2 2K - 1\right)\mathrm{K}_1(a)\right] \right\},

where K = \beta J, and the parameters p and a are the finite-size counterparts of \sinh 2K and 2\sinh 2K/\cosh^2 2K:

a = \frac{2p}{1+p^2}, \qquad p = \sinh 2K \left[1 + O(L^{-1})\right],   (6)

the explicit 1/L corrections being given in [39]. Here \mathrm{K}_1(a) and \mathrm{E}_1(a) are the complete elliptic integrals of the first and second kind, respectively:

\mathrm{K}_1(a) = \int_0^{\pi/2} \left(1-a^2\sin^2\theta\right)^{-1/2} d\theta, \qquad \mathrm{E}_1(a) = \int_0^{\pi/2} \left(1-a^2\sin^2\theta\right)^{1/2} d\theta.   (7)

Expressions (5)-(7) are the well-known Onsager solution [7], exact for N -> infinity, modified for the case of finite N. Although derived in the N -> infinity limit, the expressions agree well with the experimental data even at relatively small grid dimensions L: the relative error is less than 0.2% for the smallest lattices we examined, and with growing L the error decreases rapidly, falling within the limits of the experimental error for large L. By way of comparison, Figures 3, 6 and 7 give the plots of functions (5).
Expressions (5) also yield the N-dependences of the critical values of the inverse temperature, internal energy and energy variance of the system: \beta_c(L) and U_c(L) approach their limiting values with corrections of order 1/L, while the critical variance grows logarithmically,

\sigma_c^2(L) \approx 2.4 \ln L - 0.5,   (8)

where \beta_c = \frac{1}{2}\ln(1+\sqrt{2}) \approx 0.4407 is the critical value for L -> infinity [7].
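In the L -> infinity limit the critical point follows from Onsager's condition sinh 2K_c = 1, and the elliptic integrals (7) are elementary to evaluate numerically. A small self-contained check (our own sketch, not the authors' code):

```python
import math

def onsager_beta_c(J=1.0, lo=0.1, hi=1.0, tol=1e-12):
    """Solve Onsager's criticality condition sinh(2*beta*J) = 1 by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.sinh(2.0 * mid * J) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ellip_K1(a, n=100000):
    """Complete elliptic integral of the first kind, K1(a) of eq. (7),
    by midpoint quadrature on [0, pi/2]."""
    h = (math.pi / 2.0) / n
    return sum(h / math.sqrt(1.0 - (a * math.sin((k + 0.5) * h)) ** 2)
               for k in range(n))
```

Bisection recovers \beta_c = \frac{1}{2}\ln(1+\sqrt{2}) \approx 0.4407, the zero-noise reference value used throughout the paper; K_1(0) = \pi/2 serves as a quadrature sanity check.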
2) The effect of noise.
Let us now take into account the random character of the quantities J_{ij}. Let D(E) be the number of states with energy E. Then the partition function can be presented as Z = \sum_E D(E)\, e^{-\beta N E}. Passing from summation to integration, we get (to within an insignificant constant)

Z \sim \int e^{N[\lambda(E) - \beta E]}\, dE,   (9)

where \lambda(E) = \ln D(E)/N. Applying the saddle-point method to integral (9), we get Z \approx e^{-N\beta f}, where

-\beta f = \lambda(E) - \beta E, \qquad \beta = \frac{d\lambda(E)}{dE}.   (10)

The first expression in (10) defines the free energy; the second determines the value of E at the saddle point, where the derivative of \lambda(E) - \beta E turns to zero. The form of the spectral function \lambda(E) is known only for the one-dimensional Ising model. That is why we turn to the so-called n-vicinity method [28] to calculate it. The idea of the method is to divide the whole space of 2^N states into N + 1 classes (n-vicinities) and to approximate the energy distribution in each class by a Gaussian. In brief, the approach is as follows. Let us denote the ground-state configuration as S_0. Let class \Omega_n be the set of configurations S_n that differ from S_0 in that they have n spins directed oppositely to the spins in S_0. The number of configurations in the class is equal to the binomial coefficient C_N^n, all of them having the same relative magnetization m = (S_0 \cdot S_n)/N = 1 - 2n/N. The distribution of state energies within the n-vicinity was shown in [28] to follow the normal law

D_n(E) = \frac{C_N^n}{\sqrt{2\pi}\,\sigma_m}\, \exp\left[-\frac{N (E - E_m)^2}{2\sigma_m^2}\right],   (11)

where

E_m = E_0\, m^2, \qquad \sigma_m^2 = 2\,(1 - m^2)\,\sigma_h^2, \qquad m = 1 - \frac{2n}{N}.   (12)

Here E_0 is the ground-state energy and \sigma_h^2 is the variance of the ground-state local fields; in our case \sigma_h^2 = \sigma^2/(1+\sigma^2), where \sigma^2 = \eta^2/3 is the variance of the interconnection noise \eta_{ij}. The sought-for distribution D(E) is found by summing D_n(E) over all n. Using the Stirling formula and passing from summation to integration with respect to the variable m = 1 - 2n/N, we get

D(E) \approx \frac{N}{2} \int_{-1}^{1} e^{N F(m,E)}\, dm,   (13)

where

F(m, E) = \ln 2 - \frac{1}{2}\left[(1+m)\ln(1+m) + (1-m)\ln(1-m)\right] - \frac{(E - E_m)^2}{2\sigma_m^2}.   (14)

If we evaluate integral (13) by the saddle-point method, for the spectral function we get \lambda(E) = F(\bar m, E), where \bar m is the solution of the equation \partial F(m,E)/\partial m = 0. Let us combine (13)-(14) with (9)-(10). Then the free energy can be written as

-\beta f = F(\bar m, \bar E) - \beta \bar E,   (15)

where the variables \bar m and \bar E are derived from the equations

\frac{\partial F(m,E)}{\partial m} = 0, \qquad \frac{\partial F(m,E)}{\partial E} = \beta.   (16)

It is easy to notice that the set of equations (16) is always solvable with \bar m = 0. Correspondingly, when \beta is less than a certain critical value \beta_c, (16) and (12) give \bar m = 0 and \bar E = -2\beta\sigma_h^2, the free energy taking the paramagnetic form -\beta f = \ln 2 + \beta^2 \sigma_h^2. The phase transition occurs when \beta allows yet another solution of (16) with \bar m \ne 0. Note that substituting the second equation of (16) into the first allows us to eliminate the variable E. Doing so and performing several transformations, we obtain the equation of state, which holds only one variable m:

m = \tanh\left[2\tilde\beta\, m \left(|E_0| - \tilde\beta\,\sigma_h^2\right)\right], \qquad \tilde\beta = r\beta,   (17)

where the adjusting coefficient r is introduced to allow for the finite grid dimension and gives excellent agreement with the experiment for the lattice sizes considered. The critical temperature is defined as the value \beta_c at which a nontrivial solution of (17) appears. This solution has to be found numerically: for \beta > \beta_c we find \bar m satisfying (17) and compute the corresponding energy \bar E = E_{\bar m} - \beta\sigma_{\bar m}^2. Substitution of these values into (15) yields the corresponding value of f. Unfortunately, the n-vicinity method has an essential limitation: it is applicable only when the mean interconnection is sufficiently large compared with the spread of the interconnections [28], which in our case restricts it to relatively small noise amplitudes. For such values of \eta, formulae (15)-(17) give \beta_c and f that predict the experimental results well (see Fig. 1 and Fig. 2).
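The Stirling step that turns the binomial count C_N^n into the entropy term of (14) is easy to verify directly; in the sketch below (our own check, with hypothetical names) the two sides agree to better than 1% already at N = 1000:

```python
import math

def entropy_term(m):
    """Per-spin entropy in F(m, E) of eq. (14):
    ln 2 - [(1+m)ln(1+m) + (1-m)ln(1-m)]/2."""
    return math.log(2.0) - 0.5 * ((1.0 + m) * math.log1p(m)
                                  + (1.0 - m) * math.log1p(-m))

# n-vicinity with N = 1000 spins, n = 300 flipped spins
N, n = 1000, 300
m = 1.0 - 2.0 * n / N                   # relative magnetization, eq. (12)
exact = math.log(math.comb(N, n)) / N   # exact per-spin log-count
```

The residual difference is the (ln N)/N correction dropped in the Stirling approximation; the exact count is always slightly below the entropy estimate, since C_N^n <= 2^{N H(n/N)}.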
3) Evaluating the spectral density.
The algorithm we use allows us to compute the function f = f(\beta) and its derivatives. In turn, this allows us to investigate how the energy distribution D(E) = \exp[N\lambda(E)] varies with the noise amplitude. Indeed, it is easy to derive from formulae (10) the equations for the spectral function,

\lambda(E) = \beta\left[E - f(\beta)\right], \qquad E = \frac{\partial(\beta f)}{\partial\beta},   (18)

and for its derivatives,

\frac{d\lambda}{dE} = \beta, \qquad \frac{d^2\lambda}{dE^2} = \left(\frac{dE}{d\beta}\right)^{-1} = -\frac{1}{\sigma^2}.   (19)

Note that \lambda(E) is the entropy up to a constant, and equations (18) are the well-known Legendre transformations, which are applicable for analyzing the spectral density of finite-dimension models [40, 41]. It follows from these equations that when \beta varies from infinity to 0, E changes from E_0 to 0, and for each value of \beta we have a pair of values E and \lambda(E). In so doing we determine the form of the function \lambda(E) and of its derivatives. The plots of \lambda(E) and its derivatives obtained from the experimental data are given in Figures 6-7. Looking at Figure 7, we can see that the minimum of the function d^2\lambda/dE^2 at the point E = 0 changes into a maximum as the noise amplitude grows. Let us find the \eta at which this occurs. It can be noticed that for E -> 0 the entropy can be presented as the series

\lambda(E) = \ln 2 - \frac{E^2}{2\sigma_J^2} + \kappa\,\frac{E^4}{4!} + \dots, \qquad \sigma_J^2 = 2\,(1+\sigma^2),   (20)

where \kappa is the fourth cumulant of the energy spectrum, whose explicit form in our case follows from [28].   (21)

From (20)-(21) it follows that at the center point of the curve \lambda(E) the quantity d^2\lambda/dE^2 is determined by the expression

\left.\frac{d^2\lambda}{dE^2}\right|_{E=0} = -\frac{1}{\sigma_J^2},   (22)

and the fourth derivative d^4\lambda/dE^4|_{E=0} = \kappa changes its sign at the critical noise amplitude \eta_c for which \kappa(\eta_c) = 0.   (23)
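The Legendre machinery of (18) can be illustrated on a toy system for which everything is known in closed form: N independent spins in an external field h (our own example, not a model from the paper; for it \beta f = -\ln(2\cosh\beta h)). Recovering \lambda(E) through (18) must reproduce the exact binomial entropy:

```python
import math

def legendre_lambda(beta, h=1.0):
    """lambda(E) = beta*(E - f), with E = d(beta*f)/d(beta) -- eq. (18) --
    for independent spins in a field h, where beta*f = -ln(2*cosh(beta*h))."""
    E = -h * math.tanh(beta * h)                  # internal energy per spin
    beta_f = -math.log(2.0 * math.cosh(beta * h))
    return E, beta * E - beta_f

def binomial_entropy(E, h=1.0):
    """Direct per-spin entropy at magnetization m = -E/h."""
    m = -E / h
    return math.log(2.0) - 0.5 * ((1.0 + m) * math.log1p(m)
                                  + (1.0 - m) * math.log1p(-m))
```

The two routes agree to machine precision for any \beta, and the slope d\lambda/dE reproduces \beta, as required by (19).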
3. The experiment description
We make intensive use of the Kasteleyn-Fisher algorithm [19, 20] to compute the free energy of the 2D square spin system. The algorithm gives exact results because the computation of the partition function is reduced to the computation of the determinant of a matrix generated in accordance with the model under consideration. The algorithm permits us to calculate exactly, in polynomial time, the free energy of a spin system on an arbitrary planar graph with arbitrary links. More information about the algorithm can be found in [21]. In this paper we use the implementation [22] of the algorithm, which gives the same results in a shorter time. Using this algorithm, we were able to examine the behavior of the free energy f = f(\beta; \eta) and its derivatives for several lattices of different dimensions N = L x L. Additionally, paper [22] offers an algorithm for finding the ground state. This algorithm helped us to investigate the energy and magnetization of the ground state as functions of the noise amplitude.
Let us point out that both algorithms we use are only applicable to planar lattices. This means that we considered only lattices with free boundary conditions, because lattices with periodic boundary conditions are not planar graphs. The linear dimension of the lattice varied from L = 50 to L = 1000. The free energy is computed to 15-digit accuracy after the decimal point. Because we use the finite-difference method to compute the derivatives, the number of correct digits after the decimal point is about half as large for U and a quarter as large for \sigma^2(\beta). For large grid dimensions L the computation error becomes too big and the plots of the second derivatives start to exhibit oscillations. It is interesting to note that the introduction of a little noise into the grid interconnections allows us to decrease these oscillations.
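The loss of accuracy in the finite-difference derivatives is generic and easy to reproduce. In this sketch (our own illustration) a smooth stand-in for \beta f is known only to about 10^-15 relative accuracy, mimicking the 15-digit free energy; the noise is amplified by 1/h in the first derivative and by 1/h^2 in the second:

```python
import math

EPS = 1e-15   # relative accuracy of the "measured" free energy

def noisy(g):
    """Wrap g so that its values are known only to ~EPS relative accuracy."""
    return lambda x: g(x) * (1.0 + EPS * math.sin(1e6 * x))

g = lambda b: -math.log(2.0 * math.cosh(b))   # smooth stand-in for beta*f
g1 = lambda b: -math.tanh(b)                  # exact first derivative
g2 = lambda b: -1.0 / math.cosh(b) ** 2       # exact second derivative

gn = noisy(g)
b, h = 0.5, 1e-5
d1 = (gn(b + h) - gn(b - h)) / (2.0 * h)             # central first difference
d2 = (gn(b + h) - 2.0 * gn(b) + gn(b - h)) / h ** 2  # central second difference
err1 = abs(d1 - g1(b))
err2 = abs(d2 - g2(b))
```

With h = 10^-5 the first derivative still carries on the order of ten correct digits, while the second retains only a few, consistent with the precision loss reported above.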
4. Experimental results
In the experiments we calculate the free energy and its derivatives and find the ground-state configuration and energy. Particular attention is paid to locating the critical point and the corresponding critical quantities. The location of the maximum of the curve \sigma^2(\beta) is used to find the critical temperature. The most important experimental data are presented in Figures 1-7 and Table 1.
Table 1.
The ground-state energy E_0 and magnetization M, and the critical values \beta_c, f_c, U_c and \sigma_c^2, for different noise amplitudes \eta.

η      E_0      M        β_c     f_c       U_c          σ_c²
0.0   -1.995   1        0.442  -0.6931   -1.978e-05   12.958
0.1   -1.995   1        0.443  -0.6931   -1.986e-05   11.427
0.2   -1.995   1        0.444  -0.6932   -0.0101      12.566
0.3   -1.995   1        0.445  -0.6932   -0.0103      11.627
0.4   -1.996   1        0.452  -0.6933   -0.0211      11.476
0.5   -1.994   1        0.454  -0.6934   -0.0324      10.666
0.6   -1.993   1        0.459  -0.6936   -0.0447       9.719
0.7   -1.994   1        0.465  -0.6939   -0.0581       8.328
0.8   -1.996   1        0.476  -0.6946   -0.0849       7.642
0.9   -1.996   1        0.484  -0.6957   -0.1143       6.518
1.0   -1.993   1        0.503  -0.6979   -0.1599       5.603
1.1   -1.996   0.9998   0.515  -0.7010   -0.2109       4.656
1.2   -1.995   0.9987   0.536  -0.7065   -0.2815       3.629
1.3   -1.994   0.9943   0.562  -0.7156   -0.3747       2.775
1.4   -1.996   0.9839   0.591  -0.7327   -0.5107       1.998
1) The free and internal energy.
The experimental dependencies f = f(\beta) and U = U(\beta) are shown in Figures 1-2. It is seen from Fig. 1 that the curves go down with growing \eta because the ground-state energy deepens. When the noise is small, the curves of the free energy f(\beta) and the internal energy U(\beta) almost merge (Figs. 1-2). At moderate noise the curves U(\beta) demonstrate a cusp (Fig. 2), which corresponds to the phase transition. When \eta ~ 1.7, the cusp disappears, and a further increase of the noise changes only the asymptotic behavior of the curves f(\beta) and U(\beta) in accordance with (26).

Figure 1. The free energy f(\beta) at noise amplitudes \eta = 0; 0.4; 0.8; 1.2; 1.6; 2.0; 2.5; 3. Lower curves correspond to greater values of \eta. The red marks indicate the values found by the n-vicinity method with the aid of formulae (15)-(17) at zero noise amplitude. The grid dimension is N = L x L.

Figure 2. The internal energy U(\beta) at different noise amplitudes: (a) \eta in [0, 1.7], spaced by 0.1 intervals; the red marks indicate the values found by the n-vicinity method with the aid of formulae (15)-(17) at zero noise amplitude; (b) \eta in [1.8, 3.0], spaced by 0.1 intervals; the lower curves correspond to greater \eta. The grid dimension is N = L x L.
2) The energy variance.
The curves \sigma^2(\beta) are shown in Fig. 3. It should be noted that, because the n-vicinity method gives only a piecewise-linear approximation of the energy variance, the red marks in Fig. 3 indicate the values obtained by using the generalization (5) of the Onsager solution to the finite-dimension case. That formula gives perfect agreement with the experimental data, yet it is applicable only in the zero-noise case. The behavior of the curves \sigma^2(\beta) near the point \beta = 0 is as expected for any \eta: at \beta = 0 the energy variance equals \sigma_J^2 and, according to (20), grows gradually in proportion to the noise variance \sigma^2 = \eta^2/3. For large \beta the behavior of the curves \sigma^2(\beta) depends strongly on \eta. It is seen in Fig. 3 that the energy variance peaks corresponding to the phase transition are observed only at moderate noise amplitudes. The peaks become lower with growing \eta and at the same time move to the right. At large \eta the peaks disappear altogether; only the maximum at small \beta remains. It is interesting that all the curves in Fig. 3(a) have a common intersection point. We could not find out why this is so. The intersection moves slowly to the right with growing noise amplitude.

Figure 3. The energy variance \sigma^2(\beta) at different noise amplitudes \eta: (a) \eta in [0, 1.7] and (b) \eta in [1.8, 3.0], in both cases spaced by 0.1 intervals. The red marks indicate the values produced by formula (5). The grid dimension is N = L x L.
3) The critical temperature.
The critical temperature is defined by the location of the maximum of the curve \sigma^2(\beta) or by the presence of a cusp on it. Fig. 4 shows how the location and height of the variance peak vary with growing noise. Holding true only for small \eta, the numerical solution of the equation of state (17) gives \beta_c in perfect agreement with the experimental data. For greater \eta it is possible to use the approximate expressions resulting from the experiment: the critical value \beta_c(\eta) grows linearly with the noise amplitude from the zero-noise critical value given by (8), (24), while the peak height decreases linearly from the zero-noise variance \sigma_c^2 defined in (8), (25). It follows that at a certain noise amplitude \sigma^2(\beta_c) falls to zero: the variance peak disappears, and we can say that the critical temperature becomes zero.
Figure 4. (a) The critical temperature \beta_c and (b) the energy variance at the critical temperature, \sigma^2(\beta_c), as functions of the noise amplitude \eta. The solid lines correspond to formulae (24)-(25). The grid dimension is N = L x L.

4) The ground state.
The results we obtained testify that when the noise amplitude becomes large enough, the system changes qualitatively. The ground-state configuration experiences the most noticeable changes (see Fig. 5). Clearly, with zero noise the ground state is fully correlated, i.e. all spins are the same, s_i = 1. This situation persists as long as all matrix elements J_{ij} are positive, i.e. as long as \eta \le 1. However (see Fig. 5), the ground-state energy proved to remain almost the same even for noticeably larger \eta. It then starts decreasing gradually and approaches an asymptotic value E_\infty, (26), corresponding to the ground-state energy of the Edwards-Anderson model [42]. The ground-state magnetization changes stepwise from 1 to 0 when the standard deviation of the noise, \eta/\sqrt{3}, comes close to unity.

Figure 5. (a) The energy E_0 and (b) the magnetization M of the ground state of the system as functions of the noise amplitude. The grid dimension is N = L x L.
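The statement that the fully correlated configuration remains the ground state while all J_{ij} > 0 (i.e. for \eta < 1) can be confirmed by exhaustive search on a toy lattice. A small sketch under our own naming (for \eta < 1 every bond is ferromagnetic, so the aligned state minimizes each bond term separately):

```python
import itertools
import random

def ground_state(L, eta, seed=0):
    """Exhaustive ground-state search on an L x L free-boundary lattice with
    couplings J_ab = 1 + eta_ab, eta_ab uniform on [-eta, eta]."""
    rng = random.Random(seed)
    bonds = []
    for i in range(L):
        for j in range(L):
            a = i * L + j
            if j + 1 < L:
                bonds.append((a, a + 1, 1.0 + rng.uniform(-eta, eta)))
            if i + 1 < L:
                bonds.append((a, a + L, 1.0 + rng.uniform(-eta, eta)))
    N = L * L
    best, best_E = None, float("inf")
    for s in itertools.product((-1, 1), repeat=N):
        E = -sum(J * s[a] * s[b] for a, b, J in bonds)
        if E < best_E:
            best, best_E = s, E
    return best, best_E / N
```

For any noise amplitude below 1 the returned configuration is fully magnetized (up to the global spin-flip symmetry).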
5) The entropy.
The change of the ground-state configuration and energy results in a change of the energy distribution density D(E). The curves of \lambda(E) and its derivatives are shown in Figures 6-7. The disappearance of the phase transition is easy to notice if we look at the curve of the second derivative d^2\lambda/dE^2. It is seen in Fig. 7(a) that the dip in the middle of the curve rises with growing \eta and, in accordance with (23), the minimum of d^2\lambda/dE^2 at E = 0 turns into a maximum when the noise amplitude exceeds its critical value. The peaks at the points E = U_c move apart with growing \eta and become lower, like the quantity -1/\sigma_c^2, until they disappear completely. At still larger noise the curve d^2\lambda/dE^2 has a noticeably convex shape, and the phase-transition peaks are absent. Moreover, in this case the function d^2\lambda/dE^2 is well described by a simple interpolation expression, (27), which approximates the experimental data to an accuracy of 0.5% over almost the whole energy interval.

Figure 6. (a) The spectral density \lambda(E) and (b) its first derivative for noise amplitudes \eta = 0; 0.3; 0.7; 1.1; 1.5; 1.8; 2.2; 2.5; 3. The marks show the zero-noise curve. The grid dimension is N = L x L.

Figure 7. The second derivative of the spectral density \lambda(E) at (a) \eta in [0, 1.7] and (b) \eta in [1.8, 3], the spacing being 0.1. The marks denote the zero-noise curve in (a) and the curve resulting from (27) in (b). The grid dimension is N = L x L.
5. Conclusions
In this paper we have considered the Ising model on a two-dimensional grid with noise-polluted interconnections. In the limiting case N -> infinity such a system demonstrates the following properties: at low noise the system has all the characteristics of the conventional Ising model, while at high noise it turns into the Edwards-Anderson spin-glass model. The goal of our experiments was to observe the transition between these two limiting cases in a finite-dimension system. It turned out that when the noise is weak, the behavior of the system is indeed much like that of the conventional Ising model. We expected that under heavy noise the behavior of the system would be like that of the Edwards-Anderson model. However, the experimental results differ significantly from this expectation. It turned out that even when the noise is relatively weak (\eta ~ 1), the system undergoes considerable changes. First, at \eta ~ 1 the energy spectrum D(E) changes radically (this is clearly seen in Fig. 7): the curves of d^2\lambda/dE^2 have a two-humped form below the critical noise amplitude and become simply convex above it. Moreover, the ground-state magnetization drops to zero when this threshold is surpassed. It means that the ground-state configuration moves away from the initial state by a Hamming distance of order N/2; in other words, the system undergoes a zero-temperature phase transition. The transition is accompanied by a change of the ground-state energy from E_0 = -2J to the asymptotic value (26). Second, the experimentally found relation between the critical temperature and the noise deviation differs greatly from the well-known expression for kT_c of [8], which in our terms takes the form

\beta_c = \frac{\sqrt{3}}{2}\,\frac{1+\eta}{\eta}.   (28)

We can see that the classical theory predicts that \beta_c should fall with the growing deviation of the noise; moreover, expression (28) predicts finite values of \beta_c for any large \eta. The experiment yields the opposite result: in accordance with (24), \beta_c grows in proportion to \eta. The experiment also shows that \beta_c grows with \eta until the phase transition disappears; conceptually, it can be said that when the threshold value is surpassed, the jump T_c -> 0 occurs. In our opinion, the difference between the experiment and the theoretical predictions is due to the finite dimension of the system. First, the finite system is ergodic and even at low temperatures does not have spontaneous magnetization, which can be tested easily with the help of a Monte Carlo algorithm. Second, the self-averaging principle used for building the theory at N -> infinity is not realizable for finite N. Additionally, the use of the terms "critical temperature" and "phase transition" is not quite correct in the description of finite-dimension systems. Finite-dimension grids are of interest in image processing and machine learning. In our paper the grid dimensions were N = L x L with L = 25÷1000. If we consider a planar grid as a model of a flat pixel image, such dimensions are very common. The main conclusion that can be drawn from our results is that learning algorithms based on free-energy optimization are temperature insensitive under the most common conditions, because there is no observable phase transition in this case.
The work was supported by Russian Foundation for Basic Research (RFBR Project 18-07-00750).
References
1. R.J. Baxter. Exactly Solved Models in Statistical Mechanics. Academic Press, London (1982).
2. H. Stanley. Introduction to Phase Transitions and Critical Phenomena. Clarendon Press, Oxford (1971).
3. R. Becker, W. Döring. Ferromagnetism. Springer, Berlin (1939).
4. K. Huang. Statistical Mechanics. Wiley, New York (1987).
5. R. Kubo. An analytic method in statistical mechanics. Busseiron Kenkyu 1, 1-13 (1943).
6. J.M. Dixon, J.A. Tuszynski, P. Clarkson. From Nonlinearity to Coherence: Universal Features of Nonlinear Behaviour in Many-Body Physics. Clarendon Press, Oxford (1997).
7. L. Onsager. Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev. 65 (3-4), 117-149 (1944).
8. S.F. Edwards, P.W. Anderson. Theory of spin glasses. J. Phys. F: Metal Phys. 5, 965 (1975).
9. D. Sherrington, S. Kirkpatrick. Solvable model of a spin-glass. Phys. Rev. Lett. 35 (26), 1792 (1975).
10. N. Metropolis, S. Ulam. The Monte Carlo method. J. Am. Stat. Association 44 (247), 335-341 (1949).
11. G.S. Fishman. Monte Carlo: Concepts, Algorithms, and Applications. Springer (1996).
12. A.F. Bielajew. Fundamentals of the Monte Carlo Method for Neutral and Charged Particle Transport (2001).
13. W.M.C. Foulkes, L. Mitas, R.J. Needs, G. Rajagopal. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 73, 33 (2001).
14. J.W. Lyklema. Monte Carlo study of the one-dimensional quantum Heisenberg ferromagnet near T = 0. Phys. Rev. B 27 (5), 3108-3110 (1983).
15. M. Marcu, J. Müller, F.-K. Schmatzer. Quantum Monte Carlo simulation of the one-dimensional spin-S xxz model. II. High precision calculations for S = 1/2. J. Phys. A 18 (16), 3189-3203 (1985).
16. R. Häggkvist, A. Rosengren, P.H. Lundow, K. Markström, D. Andrén, P. Kundrotas. On the Ising model for the simple cubic lattice. Advances in Physics 56.
17. K. Binder. Finite size scaling analysis of Ising model block distribution functions. Z. Phys. B: Condensed Matter 43, 119-140 (1981).
18. K. Binder, E. Luijten. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models. Physics Reports 344, 179-253 (2001).
19. P. Kasteleyn. Dimer statistics and phase transitions. J. Math. Phys. 4 (2) (1963).
20. M. Fisher. On the dimer solution of planar Ising models. J. Math. Phys. 7 (10) (1966).
21. Ya.M. Karandashev, M.Yu. Malsagov. Polynomial algorithm for exact calculation of partition function for binary spin model on planar graphs. Optical Memory & Neural Networks (Information Optics) 26 (2) (2017).
22. N. Schraudolph, D. Kamenetsky. Efficient exact inference in planar Ising models. NIPS (2008). https://arxiv.org/abs/0810.4401
23. D. Amit, H. Gutfreund, H. Sompolinsky. Statistical mechanics of neural networks near saturation. Annals of Physics 173, 30-67 (1987).
24. G.A. Kohring. A high precision study of the Hopfield model in the phase of broken replica symmetry. J. Stat. Phys., 1077 (1990).
25. J.L. van Hemmen, R. Kühn. Collective phenomena in neural networks. In: Models of Neural Networks (E. Domany, J.L. van Hemmen, K. Schulten, eds.). Springer, Berlin (1992).
26. O.C. Martin, R. Monasson, R. Zecchina. Statistical mechanics methods and phase transitions in optimization problems. Theoretical Computer Science 265 (1-2), 3-67 (2001).
27. I. Karandashev, B. Kryzhanovsky, L. Litinskii. Weighted patterns as a tool to improve the Hopfield model. Phys. Rev. E 85, 041925 (2012).
28. B.V. Kryzhanovsky, L.B. Litinskii. Generalized Bragg-Williams equation for systems with arbitrary long-range interaction. Doklady Mathematics 90, 784 (2014).
29. J.S. Yedidia, W.T. Freeman, Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. Inf. Theory 51 (7), 2282-2312 (2005).
30. M.J. Wainwright, T. Jaakkola, A.S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Inf. Theory 51 (7), 2313-2335 (2005).
31. G.E. Hinton, R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science 313 (5786), 504-507 (2006).
32. G.E. Hinton, S. Osindero, Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation 18, 1527-1554 (2006).
33. Y. LeCun, Y. Bengio, G. Hinton. Deep learning. Nature 521, 436 (2015).
34. H.W. Lin, M. Tegmark. Why does deep and cheap learning work so well? J. Stat. Phys. 168, 1223-1247 (2017).
35. C. Wang, N. Komodakis, N. Paragios. Markov random field modeling, inference & learning in computer vision & image understanding: A survey. Preprint (2013).
36. A. Krizhevsky, G.E. Hinton. Using very deep autoencoders for content-based image retrieval. European Symposium on Artificial Neural Networks ESANN-2011, Bruges, Belgium.
37. A.N. Gorban, P.A. Gorban, G. Judge. Entropy: The Markov ordering approach. Entropy 12 (5), 1145-1193 (2010).
38. V.S. Dotsenko. Physics of the spin-glass state. Physics-Uspekhi, No 6, 455-485 (1993).
39. I.M. Karandashev, B.V. Kryzhanovsky, M.Yu. Malsagov. The analytical expressions for a finite-size 2D Ising model. Optical Memory and Neural Networks 26 (3), 165-171 (2017).
40. R. Häggkvist, A. Rosengren, D. Andrén, P. Kundrotas, P.H. Lundow, K. Markström. Computation of the Ising partition function for 2-dimensional square grids. Phys. Rev. E 69 (4) (2004).
41. P.D. Beale. Exact distribution of energies in the two-dimensional Ising model. Phys. Rev. Lett., 78-81 (1996).
42.