Rényi Information flow in the Ising model with single-spin dynamics
Zehui Deng
Physics Department, Beijing Normal University, Beijing 100875, China
Jinshan Wu
School of Systems Science, Beijing Normal University, Beijing 100875, China
Wenan Guo∗
Physics Department, Beijing Normal University, Beijing 100875, China and
State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
(Dated: September 3, 2018)

The n-index Rényi mutual information and transfer entropies for the two-dimensional kinetic Ising model with arbitrary single-spin dynamics in the thermodynamic limit are derived as functions of ensemble averages of observables and spin-flip probabilities. Cluster Monte Carlo algorithms with dynamics different from the single-spin dynamics are thus applicable to estimate the transfer entropies. By means of Monte Carlo simulations with the Wolff algorithm, we calculate the information flows in the Ising model with the Metropolis dynamics and the Glauber dynamics, respectively. We find that not only the global Rényi transfer entropy but also the pairwise Rényi transfer entropy peaks in the disordered phase.

PACS numbers: 05.20.-y, 89.70.Cf, 89.75.Fb, 75.10.Hk
I. INTRODUCTION
Information theory has recently found fruitful applications [1–3] in the study of phase transitions and critical phenomena, which traditionally are studied using measures based on two-point correlation functions. This may not be surprising, considering that the concept of entropy, first used by Shannon to quantify information [4], has its roots in thermodynamics. Mutual information (MI) has proved to be a powerful tool for locating thermal and quantum phase transitions and determining their universality classes without knowledge of the order parameter [3, 5–11]. (In the context of quantum critical phenomena, the classical Shannon entropy is related to the von Neumann entropy, and the MI is related to the entanglement entropy [5].)

Besides physical systems, other complex systems of interacting agents, such as stock markets, crowd dynamics, or traffic flow, also show phase-transition-like phenomena. In such more general complex systems, there might be no well-defined order parameter, or even no well-defined driving parameter playing the role that temperature plays for the Ising model. Predicting, or even identifying, phase-transition-like behavior in these systems is very important but hard. MI is therefore very useful in studying such systems, e.g., Vicsek's particle swarm model [12], random Boolean networks [13], and financial markets [14]. The Rényi entropy [15] and the corresponding mutual entropy, as extensions of the Shannon entropy and mutual entropy, also play important roles in these methods.

On the other hand, as human civilization dives deeper and deeper into the era of big data, time-series data of complex systems become more readily accessible. In principle, time-series data before and after the critical point should have qualitatively different features. Methodologies to identify and predict critical points from time-series data in these systems, once established, will be an essential step of progress for research.

∗ [email protected]
Unfortunately, mutual information does not contain dynamical information. However, an alternative information-theoretic measure, the transfer entropy, which shares some of the desired properties of mutual information but takes the information flow into account, has been introduced [16]. For example, consider two Ising spins s₁ and s₂ coupled by an exchange interaction, and let s₁(t) and s₂(t), t = 1, 2, ···, denote the sequences of states of the two spins. If the state of s₂ has no influence on the transitions of s₁, e.g., in the high-temperature limit, we have the Markov property p(s₁(t)|s₁(t−1)) = p(s₁(t)|s₁(t−1), s₂(t−1)), with p denoting the transition probability of s₁ from t−1 to t. This means that there is no information flow from s₂ to s₁. The deviation from this relation is quantified as the transfer entropy [16]. The transfer entropy detects the directed exchange of information between two systems and thus may have more potential applications in the study of dynamic systems with time-series data available [17, 18].

For a complex dynamic system, it is known that the information flow between elements always peaks in an intermediate order regime. However, the peak need not coincide with the phase transition. It was recently conjectured that, by contrast, information flow in such systems generally peaks strictly on the disordered side of a phase transition [19]. This conjecture was verified for the ferromagnetic two-dimensional (2D) kinetic Ising model with the Glauber dynamics [20], in which a global transfer entropy measure attains a maximum in the disordered phase. A pairwise transfer entropy measure, however, does not show such a maximum in the disordered phase [19]. The numerically observed peak of the global transfer entropy on the disordered side can be practically very valuable. In a stock market, the ordered phase, where many stocks move in the same direction, corresponds in a sense to a large bubble or a big crash.
A peak on the disordered side implies that it might be used as an indicator of a critical region in the near future, before the stocks in the market actually start to move in the same direction.

The MI and information flow discussed in Ref. [19] are based on the Shannon entropy. It is natural to extend the theory to the general Rényi entropy [15] and to examine single-spin dynamics other than the Glauber dynamics. In the present work, we define the Rényi pairwise and global MI measures and the corresponding transfer entropy measures. For the 2D kinetic Ising model with general single-spin dynamics, these measures are derived as functions of ensemble averages of observables, including those related to the single-spin flip probabilities, in the thermodynamic limit. We further calculate these measures numerically by Monte Carlo simulations with the Wolff algorithm [21]. We find that the Shannon transfer entropies for the Ising model with the Metropolis dynamics [22] behave similarly to those for the Glauber dynamics. The Rényi pairwise and global MI measures are also found to behave similarly to their Shannon counterparts for both dynamics. However, the Rényi pairwise and global transfer entropies behave differently from their Shannon counterparts. The most evident difference is that the Rényi pairwise transfer entropy peaks in the disordered phase, a peak that is absent for the Shannon pairwise information flow.

The paper is organized as follows: In Sec. II, we define and derive the Rényi entropy based MI and flow measures in the thermodynamic limit. In Sec. III, we calculate the measures for the 2D kinetic Ising model with the Glauber and Metropolis dynamics. We conclude in Sec. IV.

II. RÉNYI MUTUAL INFORMATION AND FLOW
We consider the ferromagnetic 2D Ising model on the square lattice with periodic boundary conditions. The Hamiltonian is given by

H(S) = −J Σ_{⟨i,j⟩} S_i S_j,  (1)

where S = (S_1, ..., S_N), S_i ∈ {+1, −1}, denotes the spin configuration and ⟨i, j⟩ the nearest neighbors. J = 1 sets the energy unit. The Boltzmann-Gibbs probability of a configuration S is

P(S) = (1/Z) e^{−βH(S)},  (2)

where β = 1/T is the inverse temperature with the Boltzmann constant k_B = 1, and Z = Σ_S e^{−βH(S)} is the partition function.

The model is largely solved in the thermodynamic limit [23–25]. We quote the main exact results here for later use. The critical inverse temperature is

β_c = 1/T_c = (1/2) log(1 + √2),  (3)

the magnetization is

m = ±(1 − sinh^{−4} 2β)^{1/8} for T < T_c, and m = 0 for T ≥ T_c,  (4)

the free energy per site is

−βf = log(2 cosh 2β) + (1/π) ∫_0^{π/2} log[(1 + √(1 − κ² sin²θ))/2] dθ,  (5)

and the internal energy per site is

u = −coth 2β [1 + (2/π)(2 tanh² 2β − 1) ∫_0^{π/2} dθ/√(1 − κ² sin²θ)],  (6)

where κ = 2 sinh 2β / cosh² 2β.

Mutual information between random variables is the essential information-theoretic quantity, which can be framed in terms of statistical dependence. Based on the Shannon entropy H(X) of a random variable X, the mutual information I(X : Y | Z) between two random variables X and Y, optionally conditional on a third variable Z, is defined as

I(X : Y | Z) ≡ H(X | Z) − H(X | Y, Z),  (7)

which is equivalent to

I(X : Y | Z) = H(X | Z) + H(Y | Z) − H(X, Y | Z).  (8)

Barnett et al. [19] thus define the pairwise MI measure

I_pw = (1/2N) Σ_{⟨i,j⟩} I(S_i : S_j) = (1/2N) Σ_{⟨i,j⟩} [2H(S_i) − H(S_i, S_j)],  (9)

and the global MI measure as the multi-information

I_gl = Σ_i H(S_i) − H(S).  (10)

The parametric family of entropies called the Rényi entropy was introduced by Alfréd Rényi as a mathematical generalization of the Shannon entropy.
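For later numerical checks it is convenient to evaluate the exact results of Eqs. (3)–(6) directly. A minimal sketch follows; the function names and the plain midpoint quadrature are our choices, not part of the model:

```python
import math

# Exact thermodynamic-limit results for the 2D Ising model, Eqs. (3)-(6).

T_C = 2.0 / math.log(1.0 + math.sqrt(2.0))  # Eq. (3): T_c ≈ 2.269

def magnetization(T):
    """Spontaneous magnetization m(T), Eq. (4) (positive branch)."""
    if T >= T_C:
        return 0.0
    beta = 1.0 / T
    return (1.0 - math.sinh(2.0 * beta) ** -4) ** 0.125

def _kappa(beta):
    """Elliptic modulus kappa = 2 sinh(2*beta) / cosh(2*beta)^2."""
    return 2.0 * math.sinh(2.0 * beta) / math.cosh(2.0 * beta) ** 2

def free_energy(T, steps=20000):
    """Free energy per site f(T), Eq. (5), by midpoint quadrature."""
    beta = 1.0 / T
    k = _kappa(beta)
    h = (math.pi / 2.0) / steps
    integral = h * sum(
        math.log(0.5 * (1.0 + math.sqrt(max(0.0, 1.0 - (k * math.sin((i + 0.5) * h)) ** 2))))
        for i in range(steps))
    return -T * (math.log(2.0 * math.cosh(2.0 * beta)) + integral / math.pi)

def internal_energy(T, steps=20000):
    """Internal energy per site u(T), Eq. (6)."""
    beta = 1.0 / T
    k = _kappa(beta)
    h = (math.pi / 2.0) / steps
    # complete elliptic integral K(kappa), guarded against rounding at kappa = 1
    K = h * sum(1.0 / math.sqrt(max(1e-300, 1.0 - (k * math.sin((i + 0.5) * h)) ** 2))
                for i in range(steps))
    kp = 2.0 * math.tanh(2.0 * beta) ** 2 - 1.0
    return -(1.0 + (2.0 / math.pi) * kp * K) / math.tanh(2.0 * beta)
```

At T = T_c the elliptic integral in Eq. (6) diverges, but its prefactor 2 tanh² 2β − 1 vanishes there, so u stays finite; the crude midpoint rule is adequate away from the immediate vicinity of T_c.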
The Rényi entropy of index n is defined as [15]

H_n(X) = (1/(1−n)) log(Σ_{i∈X} p_i^n),  (11)

where X represents a random variable and p_i is the probability of outcome i ∈ X. The Shannon entropy is recovered in the limit n → 1. The Rényi mutual information I_n(X : Y) between two random variables X, Y can be defined in more than one way [26], e.g., I_n(X : Y) ≡ H_n(X) − H_n(X | Y), or I_n(X : Y) ≡ H_n(X) + H_n(Y) − H_n(X, Y). Following Iaconis et al. [9], we adopt the latter and extend I_pw to the Rényi pairwise MI measure I_Rpw, and I_gl to the Rényi global MI measure I_Rgl, by replacing the Shannon entropy H with the Rényi entropy H_n in Eqs. (9) and (10), respectively. Following Barnett et al. [19], we express them in the thermodynamic limit:

I_Rpw = (2/(1−n)) log(Σ_σ p_σ^n) − (1/(1−n)) log(Σ_{σ,σ′} p_{σσ′}^n)  (12)

and

(1/N) I_Rgl = (1/(1−n)) log(Σ_σ p_σ^n) + (nβ/(1−n)) [f(T/n) − f(T)],  (13)

where the sums are over σ, σ′ = ±1, with

p_σ = (1/2)(1 + σm),  p_{σσ′} = (1/4)[1 + (σ + σ′)m − σσ′u/2],  (14)

n is the index of the Rényi entropy, and m, f, and u are the magnetization, the free energy per site, and the internal energy per site, respectively. Note that for T < T_c the sign of the magnetization m does not affect these two and any subsequent quantities, which is to say that the information measures are invariant under symmetry breaking.

In the thermodynamic limit, I_Rpw and I_Rgl can be computed directly by substituting the exact results, Eqs. (4)–(6), into their analytic expressions (12) and (13). We also note that the second derivative of I_Rgl has singular points at T_c and nT_c, due to the singular behavior of the free energy.

To study the information flow between stationary stochastic processes X(t) and Y(t), the transfer entropy T_{Y→X} ≡ I(X(t) : Y^{(l)}(t) | X^{(l)}(t)) with l-length history is useful. Here X^{(l)}(t) ≡ (X(t−1), ..., X(t−l)). Barnett et al. [19] considered the l = 1 history pairwise and global transfer entropy measures based on the Shannon entropy:

T_pw = (1/2N) Σ_{⟨i,j⟩} T_{S_j→S_i} = (1/2N) Σ_{⟨i,j⟩} [H(S_i(t) | S_i(t−1)) − H(S_i(t) | S_i(t−1), S_j(t−1))]  (15)

and

T_gl = Σ_i [H(S_i(t) | S_i(t−1)) − H(S_i(t) | S(t−1))],  (16)

where S_i(t) denotes the spin i at time t, S_j(t−1) represents the neighboring spin j at time t−1, and H(S_i(t) | S_i(t−1)) is the entropy of S_i(t) conditional on S_i(t−1), and similarly for the others. Starting from these definitions, these measures were calculated for an arbitrary single-spin dynamics of the Ising model in the thermodynamic limit [19], where the exact results, Eqs. (4)–(6), are used. For the sake of completeness, we quote their results as follows:

N T_pw = −q Σ_σ log(q/p_σ) + Σ_{σ′} q_{σ′} Σ_σ log(q_{σ′}/p_{σσ′})  (17)

and

N T_gl = −q Σ_σ log(q/p_σ) + ⟨P_i(S) log P_i(S)⟩,  (18)

where

q = (1/2)⟨P_i(S)⟩,  q_{σ′} = (1/4)[⟨P_i(S)⟩ + σ′⟨S_j P_i(S)⟩],  (19)

with i, j arbitrary nearest neighbors and ⟨S_j P_i(S)⟩ ≡ 0 for T ≥ T_c, since P_i(S) is invariant under a global spin flip. Here P_i(S) is the flipping probability of spin S_i in a given spin configuration S [19]; it describes any single-spin process as long as the process satisfies detailed balance. It is important to notice that the MI measures are independent of the dynamics, while the transfer entropy measures do depend on the dynamics.

We can also generalize the pairwise and global transfer (Shannon) entropy measures to the Rényi pairwise and Rényi global transfer entropy measures. For two stationary stochastic processes X(t) and Y(t), we define the l = 1-length history Rényi transfer entropy

T^R_{Y→X} ≡ H_n(X(t) | X(t−1)) − H_n(X(t) | X(t−1), Y(t−1)),  (20)

which reduces to the Shannon transfer entropy T_{Y→X} in the limit n → 1. The Rényi pairwise and Rényi global transfer entropy measures are then defined by replacing H with H_n in Eqs. (15) and (16), respectively. The expressions in the thermodynamic limit are found to be

T_Rpw = (1/(1−n)) Σ_σ p_σ log{[1 − q/(N p_σ)]^n + [q/(N p_σ)]^n} − (1/(1−n)) Σ_{σ,σ′} p_{σσ′} log{[1 − q_{σ′}/(N p_{σσ′})]^n + [q_{σ′}/(N p_{σσ′})]^n}  (21)

and

T_Rgl = (1/(1−n)) Σ_σ p_σ log{[1 − q/(N p_σ)]^n + [q/(N p_σ)]^n} − (1/(1−n)) ⟨log{[1 − P_i(S)/N]^n + [P_i(S)/N]^n}⟩,  (22)

respectively. Here p_σ, p_{σσ′}, q, and q_{σ′} are defined in Eqs. (14) and (19).

For a large system, N → ∞, we obtain the index n = 2 Rényi T_Rpw and T_Rgl by Taylor expansion, to order O(1/N³):

T_Rpw = −Σ_σ p_σ [−2q/(N p_σ) + (4/3) q³/(N³ p_σ³)] + Σ_{σ,σ′} p_{σσ′} [−2q_{σ′}/(N p_{σσ′}) + (4/3) q_{σ′}³/(N³ p_{σσ′}³)] + O(1/N⁴)
  = −(4/(3N³)) [Σ_σ q³/p_σ² − Σ_{σ,σ′} q_{σ′}³/p_{σσ′}²] + O(1/N⁴)  (23)

and

T_Rgl = −Σ_σ p_σ [−2q/(N p_σ) + (4/3) q³/(N³ p_σ³)] + ⟨−2P_i(S)/N + (4/3) P_i(S)³/N³⟩ + O(1/N⁴)
  = −(4/(3N³)) [Σ_σ q³/p_σ² − ⟨P_i(S)³⟩] + O(1/N⁴).  (24)

One great advantage of these two formulas is that they are expressed in terms of ensemble averages of observables, based only on the Boltzmann-Gibbs distribution. The sensitivity of the transfer entropies to the update scheme is carried by ensemble averages of quantities like ⟨P_i(S)⟩ and ⟨S_j P_i(S)⟩, which can be calculated by efficient MC methods with dynamics other than the single-spin dynamics of the kinetic model, given that the update probability P_i(S) is specified. Simulation results of these two quantities for the two dynamics are presented in Section III A.

III. NUMERICAL RESULTS
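Before turning to specific dynamics, note that Eqs. (23) and (24) reduce the index n = 2 transfer entropies to simple post-processing of measured ensemble averages. A minimal sketch (function and argument names are ours; the pair probabilities follow Eq. (14) with the u/2 convention, and T > 0 is assumed so that no probability vanishes):

```python
def renyi_transfer_entropies(m, u, mean_P, mean_SP, mean_P3, N):
    """Leading-order n = 2 Rényi transfer entropies, Eqs. (23)-(24).

    m, u    : magnetization and internal energy per site
    mean_P  : <P_i(S)>;  mean_SP : <S_j P_i(S)>;  mean_P3 : <P_i(S)^3>
    N       : number of spins
    """
    q = 0.5 * mean_P                                       # Eq. (19)
    p = {s: 0.5 * (1.0 + s * m) for s in (1, -1)}          # Eq. (14)
    pp = {(s, sp): 0.25 * (1.0 + (s + sp) * m - s * sp * u / 2.0)
          for s in (1, -1) for sp in (1, -1)}
    qs = {sp: 0.25 * (mean_P + sp * mean_SP) for sp in (1, -1)}

    single = sum(q ** 3 / p[s] ** 2 for s in (1, -1))
    pair = sum(qs[sp] ** 3 / pp[(s, sp)] ** 2
               for s in (1, -1) for sp in (1, -1))

    T_pw = -4.0 / (3.0 * N ** 3) * (single - pair)         # Eq. (23)
    T_gl = -4.0 / (3.0 * N ** 3) * (single - mean_P3)      # Eq. (24)
    return T_pw, T_gl
```

When the flip probability is independent of the neighbors (⟨S_jP_i⟩ = 0, ⟨P_i³⟩ = ⟨P_i⟩³ for a constant P_i), both expressions vanish, as they should; Hölder's inequality guarantees they are otherwise non-negative.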
The Metropolis algorithm was the first MC algorithm for simulating lattice models [22]. The underlying discrete-time Metropolis spin-flip dynamics is defined as follows: at each time step, a spin i is chosen at random. Consider the energy difference between the state with spin i flipped and the original state, ΔE_i = 2S_i Σ_{j∈ν(i)} S_j, where ν(i) denotes the nearest neighbors of spin i. The flip is accepted with probability 1 if ΔE_i ≤ 0; otherwise, it is accepted with the probability

P_i(S) = e^{−ΔE_i/T}.  (25)

The discrete-time Glauber spin-flip dynamics [20] is slightly different from the Metropolis dynamics: the randomly chosen spin i flips with the probability

P_i(S) = [1 + e^{ΔE_i/T}]^{−1}.  (26)

Both processes satisfy detailed balance.

Since not every term of the transfer entropies has an analytic expression, we make use of the MC method to obtain their behavior. The Wolff cluster algorithm [21] is used to generate microscopic states, and the ensemble average of an observable is calculated as the mean over the samples. This algorithm is much more efficient than single-spin-flip algorithms, such as the Metropolis algorithm and its variations; in particular, it suppresses critical slowing down. In our simulations, typically 10 samples are used to obtain ensemble averages and statistical errors after equilibrating the systems. It is worth noting that we do not study the transfer entropies of a kinetic Ising model with the dynamics of the Wolff algorithm. Instead, the Wolff MC method is used to calculate the information flows, according to Eqs. (17), (18), (23), and (24), in the kinetic Ising model with the Metropolis and the Glauber dynamics, respectively.

A. Shannon entropy based information flow for the Metropolis dynamics
To further verify the conjecture raised in Ref. [19] that information flows peak in the disordered phase, we study the information flows in the Ising model with the Metropolis dynamics.
FIG. 1. (color online) Plot of NT_pw for the Metropolis dynamics against temperature for several system sizes. The statistical errors are much smaller than the symbol sizes. The inset shows the maximum of T_pw as a function of 1/L; the horizontal dashed line indicates T_c = 2/log(1+√2) ≈ 2.269.

We simulate the Ising model on the square lattice of size N = L × L for L = 8, 16, 32, 64, 128, and 256. Figures 1 and 2 show NT_pw and NT_gl as functions of temperature T and linear size L for the Metropolis dynamics, respectively. The results are very similar to those found for the Glauber dynamics in Ref. [19]. As the system size grows, the finite-size effects are reduced, and kinks appear in the T_pw and T_gl versus temperature curves at the exactly known critical point T_c ≈ 2.269.

FIG. 2. (color online) Plot of NT_gl for the Metropolis dynamics against temperature for several system sizes. The dashed vertical line indicates T_c ≈ 2.269.

The inset of Fig. 1 shows the maximum of T_pw as a function of 1/L; it converges to the known critical point T_c very well. This quantity can thus be used to determine the critical point of other systems without knowledge of an analytical solution. The T_gl curves, by contrast, show humps whose maximum remains in the disordered region as N → ∞. In Ref. [19], T_gl was found to attain its maximum slightly above T_c (at about 1.04 T_c) for the Glauber dynamics; here T_gl for the Metropolis dynamics likewise has its maximum in the disordered phase. This agrees with the picture of Barnett et al. [19]: T_pw peaks at T_c, while T_gl has a maximum in the disordered phase. Similar results for a measure related to T_pw have been obtained by directly simulating the Ising model with the Metropolis and Glauber dynamics [18].

It is worth mentioning that in Ref. [19] the authors stressed that the spin-update dynamics of their MC algorithm necessarily had to be the same as the dynamics under discussion. By contrast, we conclude that it is irrelevant which MC algorithm or dynamics is used to update configurations in the simulations, as long as the MC means equal the ensemble averages. This is because the entropy flows have been expressed as ensemble averages of observables in the equilibrated system (see Eqs. (17), (18), (23), and (24)); there is no need to extract time series of the entropies appearing in the definitions of the flows. For example, ⟨P_i⟩ and ⟨S_j P_i⟩ are the two essential dynamics-specific quantities in the above expressions; they can be determined by MC simulations with spin-update algorithms different from the dynamics studied, as long as the Boltzmann-Gibbs distribution is realized by the simulations. Figures 3 and 4 show these two quantities as functions of temperature for the kinetic Ising model with the Metropolis dynamics and with the Glauber dynamics, respectively. The results are obtained by MC simulations with the Wolff algorithm. One sees that singularities develop in the two quantities close to the critical point as the system size becomes large. We have verified this conclusion by repeating Barnett's results using the Wolff algorithm (not shown here).
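For concreteness, here is one way the two averages ⟨P_i⟩ and ⟨S_jP_i⟩ could be accumulated from any set of equilibrated L × L configurations, regardless of how they were generated (a sketch; the array layout and helper names are ours):

```python
import numpy as np

def flip_probability(spins, T, dynamics="metropolis"):
    """P_i(S) for every site of an L x L configuration, Eqs. (25)-(26)."""
    # sum of the four nearest neighbors with periodic boundaries
    nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
           + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
    dE = 2.0 * spins * nbr  # energy cost of flipping each spin
    if dynamics == "metropolis":
        return np.minimum(1.0, np.exp(-dE / T))
    return 1.0 / (1.0 + np.exp(dE / T))  # Glauber

def accumulate(samples, T, dynamics="metropolis"):
    """Estimate <P_i> and <S_j P_i> from equilibrium configurations."""
    mean_P, mean_SP, n = 0.0, 0.0, 0
    for spins in samples:
        P = flip_probability(spins, T, dynamics)
        Sj = np.roll(spins, 1, 0)  # one fixed nearest neighbor j of each i
        mean_P += P.mean()
        mean_SP += (Sj * P).mean()
        n += 1
    return mean_P / n, mean_SP / n
```

Averaging over all sites and one fixed neighbor direction suffices by translation and rotation symmetry of the lattice.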
FIG. 3. (color online) ⟨P_i⟩ (upper panel) and ⟨S_jP_i⟩ (lower panel) plotted against temperature for the 2D Ising model with the Metropolis dynamics. The statistical errors are much smaller than the symbol sizes.
FIG. 4. (color online) ⟨P_i⟩ (upper panel) and ⟨S_jP_i⟩ (lower panel) plotted against temperature for the 2D Ising model with the Glauber dynamics. The statistical errors are much smaller than the symbol sizes.

B. Rényi entropy based mutual information and flow for the Glauber and Metropolis dynamics
We now study the generalized index-2 Rényi MI measures and transfer entropy measures for the Ising model with the Metropolis dynamics and the Glauber dynamics, respectively.

The Rényi MI measures do not depend on the dynamics and thus can be calculated analytically. Substituting the exact m, f, and u into Eq. (12) and Eq. (13), we obtain the results plotted against temperature in Fig. 5. As expected, the Rényi pairwise MI and global MI bear singularities at the exactly known critical point. We also expect singular behavior of I_Rgl at 2T_c, due to the singularity in the free energy (see Eq. (13)). Such a singularity is not directly visible in Fig. 5 (lower panel), but should appear as a logarithmic divergence in the second derivative.

FIG. 5. (color online) Plot of I_Rpw (upper panel) and I_Rgl/N (lower panel) against temperature.

By contrast, the Rényi pairwise MI flow T_Rpw and global MI flow T_Rgl depend on the dynamics of the kinetic Ising model. MC simulations with the Wolff algorithm are used to calculate the two measures. According to Eqs. (23) and (24), the leading terms in T_Rpw and T_Rgl scale as 1/N³. We therefore evaluate N³T_Rpw and N³T_Rgl.
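As a cross-check on the dynamics-independent curves of Fig. 5, Eq. (12) can be evaluated directly from the exact m(T) and u(T). A sketch for index n = 2 (the function name and the u/2 convention in the pair probabilities are our reading of Eq. (14)):

```python
import math

def renyi_pairwise_mi(m, u, n=2):
    """Rényi pairwise mutual information, Eq. (12), from m(T) and u(T)."""
    # single-site and nearest-neighbor pair probabilities, Eq. (14)
    p = [0.5 * (1.0 + s * m) for s in (1, -1)]
    pp = [0.25 * (1.0 + (s + sp) * m - s * sp * u / 2.0)
          for s in (1, -1) for sp in (1, -1)]
    return (2.0 / (1.0 - n)) * math.log(sum(x ** n for x in p)) \
         - (1.0 / (1.0 - n)) * math.log(sum(x ** n for x in pp))
```

It vanishes both at infinite temperature (m = 0, u = 0) and in the fully ordered limit (m = 1, u = −2), where neighboring spins carry no mutual surprise, and is positive in between.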
FIG. 6. (color online) N³T_Rpw (upper panel) and N³T_Rgl (lower panel) plotted against temperature for the 2D Ising model with the Metropolis dynamics for several system sizes L. The statistical errors are much smaller than the symbol sizes.

Figure 6 illustrates the Rényi pairwise transfer entropy T_Rpw and global transfer entropy T_Rgl for the Metropolis dynamics as functions of temperature for system sizes L = 8, 16, 32, 64, 128, 256, and 512; Fig. 7 shows the same quantities for the Glauber dynamics. T_Rpw and T_Rgl share two features: as the system size becomes large, they all develop a kink around T_c, and the curve for each size has a hump in the disordered region. For the Rényi pairwise transfer entropy, the maximum of the curve for the largest size, L = 512, lies slightly above T_c for both the Metropolis and the Glauber dynamics. For the Rényi global transfer entropy, the maximum of the L = 512 curve lies deeper in the disordered phase, near T ≈ 3 for the Metropolis dynamics. The behavior of T_Rgl is thus similar to that of the Shannon global transfer entropy T_gl; however, the Rényi pairwise transfer entropy T_Rpw shows a completely different behavior from the Shannon T_pw: namely, T_Rpw peaks in the disordered region, while T_pw does not.
FIG. 7. (color online) N³T_Rpw (upper panel) and N³T_Rgl (lower panel) as functions of temperature for the 2D Ising model with the Glauber dynamics for several system sizes. The statistical errors are much smaller than the symbol sizes.
IV. CONCLUSIONS AND DISCUSSION
We have extended the Shannon pairwise and global MI and the l = 1 history transfer entropies to their Rényi counterparts. Expressions in terms of thermodynamic quantities and ensemble averages of the dynamic probability are derived in the thermodynamic limit for the 2D kinetic Ising model with arbitrary single-spin dynamics. Cluster Monte Carlo algorithms with dynamics different from the single-spin dynamics are thus applicable to estimate the transfer entropies. As a result, much larger system sizes and higher numerical accuracy can be reached in simulations.

Using Wolff cluster Monte Carlo simulations, we have calculated the transfer entropies for both the Shannon and the Rényi entropy, for the kinetic Ising model with the Glauber and the Metropolis dynamics. The Shannon global transfer entropy is shown to have a maximum in the disordered regime for the Metropolis dynamics, similar to that found [19] for the Glauber dynamics. Also, the Shannon pairwise transfer entropy for the Metropolis dynamics behaves similarly to that for the Glauber dynamics [19]: T_pw peaks at T_c, but has no maximum in the disordered regime.

For the Rényi transfer entropies with index 2, we have found that, in addition to the global transfer entropy T_Rgl, the Rényi pairwise transfer entropy T_Rpw peaks in the disordered phase, for both the Metropolis and the Glauber dynamics. This is different from the behavior of the Shannon pairwise transfer entropy.

T_gl is regarded as a measure of collective information transfer [27], capturing both the pairwise and the higher-order (multivariate) correlations of a site. Its peak is interpreted [19] in terms of conflicting tendencies among these components as the level of disorder increases when the system moves further away from the phase transition. This might also explain the postcritical peak in our Rényi global transfer entropy T_Rgl.
However, we do not have an intuitive explanation for the postcritical peak in our Rényi pairwise transfer entropy T_Rpw, which is absent in the Shannon counterpart. Further investigation is required.
Acknowledgment
This work is supported by the National Science Foundation of China (NSFC) under Grants No. 11175018 (Guo) and No. 11205014 (Wu).

[1] H. Matsuda, K. Kudo, R. Nakamura, O. Yamakawa, and T. Murata, Int. J. Theor. Phys. 35, 839 (1996).
[2] S.-J. Gu, C.-P. Sun, and H.-Q. Lin, J. Phys. A 41, 025002 (2008).
[3] R. G. Melko, A. B. Kallin, and M. B. Hastings, Phys. Rev. B 82, 100409 (2010).
[4] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, Urbana, IL, 1949).
[5] L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. 80, 517 (2008).
[6] P. Calabrese and J. Cardy, J. Stat. Mech. (2004) P06002.
[7] M. A. Metlitski, C. A. Fuertes, and S. Sachdev, Phys. Rev. B 80, 115122 (2009).
[8] R. R. P. Singh, M. B. Hastings, A. B. Kallin, and R. G. Melko, Phys. Rev. Lett. 106, 135701 (2011).
[9] J. Iaconis, S. Inglis, A. B. Kallin, and R. G. Melko, Phys. Rev. B 87, 195134 (2013).
[10] S. Inglis and R. G. Melko, Phys. Rev. E 87, 013306 (2013).
[11] A. B. Kallin, K. Hyatt, R. R. P. Singh, and R. G. Melko, Phys. Rev. Lett. 110, 135702 (2013).
[12] R. T. Wicks, S. C. Chapman, and R. O. Dendy, Phys. Rev. E 75, 051125 (2007).
[13] A. S. Ribeiro, S. A. Kauffman, J. Lloyd-Price, B. Samuelsson, and J. E. S. Socolar, Phys. Rev. E 77, 011901 (2008).
[14] M. Harré and T. Bossomaier, Europhys. Lett. 87, 18009 (2009).
[15] A. Rényi, in Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability (University of California Press, Berkeley, 1961), Vol. 1, p. 547.
[16] T. Schreiber, Phys. Rev. Lett. 85, 461 (2000).
[17] M. Prokopenko, J. T. Lizier, and D. C. Price, Entropy 15, 524 (2013).
[18] D. Marinazzo, M. Pellicoro, G. Wu, L. Angelini, J. M. Cortés, and S. Stramaglia, PLoS ONE 9, e93616 (2014).
[19] L. Barnett, J. T. Lizier, M. Harré, A. K. Seth, and T. Bossomaier, Phys. Rev. Lett. 111, 177203 (2013).
[20] R. J. Glauber, J. Math. Phys. (N.Y.) 4, 294 (1963).
[21] U. Wolff, Phys. Rev. Lett. 62, 361 (1989).
[22] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
[23] L. Onsager, Phys. Rev. 65, 117 (1944).
[24] C. N. Yang, Phys. Rev. 85, 808 (1952).
[25] B. M. McCoy and T. T. Wu, The Two-Dimensional Ising Model (Harvard University Press, Cambridge, MA, 1973).
[26] J. Principe, Information Theoretic Learning: Rényi's Entropy and Kernel Perspectives (Springer Science+Business Media, LLC, 2010).
[27] J. T. Lizier, M. Prokopenko, and A. Y. Zomaya, Chaos 20, 037109 (2010).