Monte Carlo Simulations of BFSS and IKKT Matrix Models
Nikhil Tanwar
A dissertation submitted for the partial fulfillment of the BS-MS dual degree in Science
Indian Institute of Science Education and Research Mohali
June 2020

Certificate of Examination

This is to certify that the dissertation titled
Monte Carlo Simulations of BFSS and IKKT Matrix Models submitted by
Nikhil Tanwar (MS15111) for the partial fulfillment of the BS-MS dual degree programme of the Institute, has been examined by the thesis committee duly appointed by the Institute. The committee finds the work done by the candidate satisfactory and recommends that the report be accepted.

Dr. Kinjalk Lochan        Dr. Sanjib Dey        Dr. Anosh Joseph (Supervisor)

Dated: July 31, 2020

Declaration
The work presented in this dissertation has been carried out by me under the guidance of Dr. Anosh Joseph at the Indian Institute of Science Education and Research Mohali. This work has not been submitted in part or in full for a degree, a diploma, or a fellowship to any other university or institute. Whenever contributions of others are involved, every effort is made to indicate this clearly, with due acknowledgement of collaborative research and discussions. This thesis is a bonafide record of original work done by me, and all sources listed within have been detailed in the bibliography.

Nikhil Tanwar (Candidate)
Dated: July 31, 2020

In my capacity as the supervisor of the candidate's project work, I certify that the above statements by the candidate are true to the best of my knowledge.

Dr. Anosh Joseph (Supervisor)

Acknowledgment
I express my sincere gratitude to my supervisor, Dr. Anosh Joseph, for his patience, motivation, prudent comments, valuable suggestions, and beneficial information, which have helped me remarkably in my research and in writing this thesis. His enormous knowledge and thorough experience in lattice field theory have enabled me to complete this research successfully. I am incredibly thankful to him for sparing his precious time to guide me, clarifying my queries, correcting me during my research, and sharing his workstation, without which the simulations would never have been completed.

Besides my advisor, I wish to thank the rest of my thesis committee members, Dr. Kinjalk Lochan and Dr. Sanjib Dey, for their insightful comments and encouragement.

I extend my sincere thanks to Mr. Arpith Kumar for giving his valuable time for the discussion sessions and for helping me during the simulation of the BFSS matrix model. I also extend my sincere thanks to Mr. Adeeb Mev for the collaboration we made for the simulation of the BFSS matrix model.

I acknowledge IISER Mohali for providing me with the best infrastructure and environment for carrying out this project. I am also thankful to the Department of Science and Technology (DST), Government of India, for supporting me with the institute fellowship during the past five years.

I want to thank my batchmates and friends for interesting peer discussions throughout the five years of the course. I am thankful to Nilangshu Bhattacharyya, Amit Suthar, Adarsh Bhardwaj, Anubhav Jindal, Apoorv Gaurav, Debanjan Chowdhury, Debjit Ghosh, Gaurav Singh, Ishan Sarkar, Paresh Nath Das, Satyam Prakash, Swastik P G, Vidur Sury and Vivek Jadhav. I would also like to thank all my friends listed above for their continuous moral support and motivation, without which this would have been a very tough journey.

I would also like to thank and acknowledge my parents and my brother Dhruv Tanwar for their love and support throughout the years of my study.
This would not have been possible without them.

List of Figures

…    O against Monte Carlo sweep number. 21
3.3  Normalized autocorrelation against Monte Carlo sweep number for observable O. 22
3.4  ⟨O(t)⟩ against lattice site t. 23
3.5  Acceptance rate against Monte Carlo sweep number. 27
3.6  The plot of e^{−ΔH} against the number of accepted states. 27
3.7  Run-time history of observable O against Monte Carlo sweep number. 28
3.8  Normalized autocorrelation against Monte Carlo sweep number. 28
3.9  The observable O against m. 29
4.1  The link variables form the plaquette U_{μν}. 36
4.2  Polyakov loop against temperature T. 45
4.3  Distribution of the eigenvalues of the Polyakov loop operator. 45
4.4  E/N² against temperature T. 46
4.5  Plot of R against temperature T. 46
4.6  The Polyakov loop against temperature T. 50
4.7  The plot of E/N² against temperature T. 51
4.8  The plot of extent of space R against temperature T. 51
5.1  Plot of the eigenvalues of I_{μν} against N. 56
5.2  Extent of spacetime R against N. 57
5.3  Plot of the eigenvalues of I_{μν} against 1/N for the phase-quenched IKKT model. 62

List of Tables

…    O for different m values. 29
4.1  Values of the fitting parameters A, B, T_c and D. 44
4.2  Values of the fitting parameters A, B, T_c and D. 49

Abstract

In this thesis, we studied the bosonic BFSS and IKKT matrix models using Monte Carlo simulations. First, we explored some toy models to check the validity of the numerical simulations. Then we simulated the BFSS matrix model using the Hamiltonian Monte Carlo (HMC) algorithm. In the BFSS matrix model, we used the Polyakov loop as an order parameter to investigate the large-N behaviour of this model at different temperatures. Our simulations confirmed that the model exhibits a confinement-deconfinement phase transition as the temperature of the system is varied. Besides the Polyakov loop, other observables, such as the internal energy and the extent of space, were also computed. In the bosonic IKKT model, we studied the spontaneous symmetry breaking (SSB) of the SO(10) symmetry using the moment of inertia tensor and found that there is no SSB of SO(10) symmetry in this model. Besides the eigenvalues of the moment of inertia tensor, other observables, such as the extent of spacetime, were also computed. We also studied the simulation theory of the phase-quenched IKKT model.

Contents

List of Figures i
List of Tables iii
Abstract iv
1 Introduction 1
4 D = 4 and BFSS Matrix Models 31
4.2 D = 4 Model 38
4.2.1 HMC for D = 4 Model 41
4.2.2 Observables 42
4.2.3 Simulation Details 43
4.3 BFSS Matrix Model 47
4.3.1 HMC for Bosonic BFSS Matrix Model 47
4.3.2 Observables 48
4.3.3 Simulation Details 49

Chapter 1

Introduction
String theory is believed to be a very promising candidate for the theory of everything. It attempts to unify the theory of gravity with the other three fundamental forces of nature: electromagnetism and the strong and weak nuclear forces. It replaces the point-like particles, which are the fundamental assumption of the Standard Model of particle physics, by one-dimensional objects called strings. All the matter particles as well as the force-carrying particles are made up of these strings, and the theory explains how these strings propagate in spacetime and interact with each other. In string theory, each fundamental particle is represented by a unique vibrational frequency of the string.

To date there are five consistent superstring theories: type I, type IIA, type IIB, heterotic SO(32) and heterotic E8 × E8. These theories differ in whether the strings are open or closed, whether the strings are oriented or not, and in how they treat electrical charges. All of these theories live in 10-dimensional spacetime, and all the calculations are done perturbatively in terms of the string length l_s and the string coupling g_s.

It has been found that all of these different string theories are connected to each other through various transformations (T-duality, S-duality, etc.), and this led to the conjecture in 1995 by Edward Witten [Witten 95] that they are part of some bigger theory called M-theory. M-theory is still undiscovered, but it is known that it lives in 11 dimensions and that it is a non-perturbative theory. The M in M-theory is undefined, and it is sometimes taken to stand for "membrane" or "matrix."

String theory was first studied in the late 1960s as a model of the strong interaction; from 1974 onwards the proper study of string theory started, and it has been evolving since then. Even today, there is a debate over its validity, as no part of the theory has been experimentally verified.

1.1 BFSS Matrix Model
The BFSS model was conjectured in 1996 by T. Banks, W. Fischler, S. H. Shenker, and L. Susskind [Banks 97]. This model is a one-dimensional supersymmetric Yang-Mills theory, and it acts as the low energy effective description of D0-branes. In the limit N → ∞ it is believed that this theory represents M-theory.

One of the methods to obtain the action of this theory is by dimensionally reducing the N = 1 super Yang-Mills theory in 9+1 dimensions with gauge group SU(N) to 0+1 dimensions (see Appendix A). The resulting action is given by

S = (1/2g²) ∫ dt Tr( (D_t X_i)² + (1/2) [X_i, X_j]² − i Ψ_α^T C_{αβ} D_t Ψ_β + Ψ_α^T (C γ_i)_{αβ} [X_i, Ψ_β] ),   (1.1)

where D_t = ∂_t − i[A(t), ·]; i, j = 1, 2, . . . , 9; α, β = 1, 2, . . . ; X_i are N × N traceless Hermitian matrices; g is the one-dimensional Yang-Mills coupling; A(t) is the one-dimensional gauge field; Ψ is a 32-component Majorana-Weyl fermion, with each component of the fermion being an N × N traceless Hermitian matrix; C is the charge conjugation matrix in ten dimensions; and γ_i are the gamma matrices in ten dimensions. The action is invariant under the following set of gauge transformations

X_i(t) → V(t) X_i(t) V†(t),   (1.2)
A(t) → V(t) ( A(t) + i ∂_t ) V†(t),   (1.3)
Ψ_α(t) → V(t) Ψ_α(t) V†(t),   (1.4)

where V(t) ∈ SU(N).

One of the motivations to study the BFSS model is the connection of this model with black hole states. The finite temperature states of the BFSS model are related to the black hole states of type IIA supergravity [Klebanov 98] [Banks 98]. The connection between the BFSS matrix model and black hole states has also been discussed in Refs. [Kabat 01a] [Kabat 01b]. In these two references, the authors calculated the entropy of a black hole in strongly coupled quantum mechanics, beginning with the Bekenstein-Hawking entropy of a ten-dimensional non-extremal black hole.
The free energy of the black hole in terms of the parameters of the gauge theory (here the BFSS model) can be written as

F/T = −4.115 N² ( T³ / (g²_ym N) )^{3/5}.   (1.5)

The above formula is of interest for a couple of reasons. First, its dependence on the 't Hooft coupling. The limit N → ∞ corresponds to the region where supergravity is valid, and therefore it is the appropriate limit in which to look for black holes in the BFSS model. The second reason is that the free energy depends on N², which is the expected behaviour of a gauge theory in the deconfined phase.

It would be important to reproduce this formula from the matrix model. To get this behaviour in the matrix model, one must look at strong coupling, which is not accessible in perturbation theory. That is, a regime where it is practically impossible to carry out an analytical calculation to derive this expression. Thus we are left with numerical simulations, as this technique does not have any such restrictions.

For a system with a density of states that grows exponentially,

ρ(E) ∼ e^{β_H E},   (1.6)

there exists an upper limit to the temperature, known as the Hagedorn temperature. Above this temperature the partition function diverges [Furuuchi 03],

lim_{T → T_H^−} Tr( e^{−βH} ) → ∞.   (1.7)

Above this cutoff temperature T_H the partition function does not exist. However, it can be made to exist if we keep N large but finite. Doing so breaks the exponential growth in the asymptotic density of states at some large but finite energy value. For temperatures greater than T_H, the entropy and the energy are dominated by the states at and above the cutoff scale, and the free energy jumps from O(1) to O(N²) [Hadizadeh 05]. Thus

lim_{N→∞} F/N² = 0   (Confined phase),   (1.8)
lim_{N→∞} F/N² ≠ 0   (Deconfined phase).   (1.9)

The transition from the confined phase to the deconfined phase is known as the confinement-deconfinement phase transition, or deconfinement phase transition.
In the confined phase, the quantum states of the Hamiltonian should be singlets under the gauge symmetry [Semenoff 04]. This condition fails as soon as the system reaches the Hagedorn temperature. This type of transition is found in large-N gauge theories, such as weakly coupled Yang-Mills theory, and it is also expected to be found in the BFSS model [Semenoff 04].

This confinement-deconfinement phase transition in the matrix models is associated with the breakdown of the center symmetry, i.e., A(t) → A(t) + k. The order parameter for this symmetry breaking is the Polyakov loop [Polyakov 78]. The Polyakov loop is defined as the trace of the holonomy of the gauge field around the finite temperature Euclidean time circle,

P = (1/N) Tr( P e^{i ∮ dt A(t)} ).   (1.10)

This operator is gauge invariant as it is a special case of the Wilson line operator. The expectation value of this operator is zero in the confined phase, and it jumps to a non-zero value as we cross the phase transition point and enter the deconfined phase. This is simply because the Polyakov loop is a unitary matrix: its eigenvalues are uniformly distributed on the unit circle in the confined phase, while in the deconfined phase the eigenvalues clump towards a single point. We have

⟨P⟩ = 0   (Confined phase),   (1.11)
⟨P⟩ ≠ 0   (Deconfined phase).   (1.12)

We use this as an order parameter for our simulations of the BFSS model.

1.4 IKKT Matrix Model

The IKKT model was conjectured in 1997 by N. Ishibashi, H. Kawai, Y. Kitazawa, and A. Tsuchiya [Ishibashi 97]. This model was proposed as a non-perturbative formulation of superstring theory. It is also known as the type IIB matrix model. The action of this model can be obtained by dimensionally reducing the N = 1 super Yang-Mills theory in 9+1 dimensions with gauge group SU(N) to 0 + 0 dimensions (see Appendix A).
The resulting Euclidean action is given by

S_E = −(1/4g²) Tr( [X_i, X_j]² ) − (1/2g²) Tr( Ψ_α^T (C γ_i)_{αβ} [X_i, Ψ_β] ),   (1.13)

where i, j = 1, 2, . . . , 10; α, β = 1, 2, . . . ; X_i are N × N traceless Hermitian matrices; g is the zero-dimensional Yang-Mills coupling; Ψ is a 32-component Majorana-Weyl fermion, with each component of the fermion being an N × N traceless Hermitian matrix; C is the Euclidean charge conjugation matrix in ten dimensions; and γ_i are the Euclidean gamma matrices in ten dimensions. The action is invariant under the following set of gauge transformations

X_i → V X_i V†,   (1.14)
Ψ_α → V Ψ_α V†,   (1.15)

where V ∈ SU(N).

1.5 Motivation for IKKT Model

One of the main motivations to study the IKKT model is that in the large-N limit spacetime emerges dynamically in this model from the eigenvalues of the ten bosonic matrices, which tells us that dynamical compactification can be viewed as a purely non-perturbative effect [Aoki 98]. The evidence for the same came from simulations of the Lorentzian version of the IKKT model [Nishimura 19, Nishimura 20]. In these simulations it has been found that continuous time emerges dynamically and three-dimensional space undergoes expansion after a critical time, with the other six spatial dimensions remaining small. The expansion is exponential at early times [Ito 14] and becomes a power law at later times [Ito 15]. These two results provide evidence that a realistic cosmological scenario may also arise dynamically from this model.

On the other hand, we can also look for dynamical compactification of extra dimensions in the Euclidean model through spontaneous symmetry breaking (SSB) of the SO(10) symmetry due to the fluctuations of the phase of the Pfaffian for SO(d) symmetric configurations with larger d [Nishimura 00]. It has been shown that SSB does not take place in the phase-quenched model, so the SSB is attributed to the effect of the phase of the Pfaffian [Ambjørn 00].
It was also shown in the IKKT model that the SO(3) symmetric vacuum has the lowest free energy, which implies SSB down to SO(3) [Nishimura 11]. Recent work on this model using the complex Langevin algorithm showed that there is SSB of SO(10) to SO(3) × SO(7) [Anagnostopoulos 20]. They further showed that the SO(7) symmetry is also broken into smaller groups.

In this section we will introduce how the path integrals in quantum field theory and statistical mechanics are connected. We will explain this using the example of a real scalar field. Let us consider a theory containing a scalar field defined by the following action

S[φ(x)] = ∫_0^t dt′ ∫ d³x ( (1/2) ∂_μφ ∂^μφ − V(φ) ),   (1.16)

where V(φ) is the potential functional. Then the probability amplitude to go from the field configuration φ_1(x⃗) to φ_2(x⃗) in time t is given by the path integral G(φ_1(x⃗), φ_2(x⃗); t). We have

G(φ_1(x⃗), φ_2(x⃗); t) = ⟨φ_2(x⃗)| e^{−iHt} |φ_1(x⃗)⟩,   (1.17)

G(φ_1(x⃗), φ_2(x⃗); t) = ∫_{φ(x⃗,0)=φ_1(x⃗)}^{φ(x⃗,t)=φ_2(x⃗)} Dφ e^{iS[φ(x)]},   (1.18)

where H is the Hamiltonian of this system. The partition function for the theory of the scalar field is given by

Z = Tr( e^{−βH} ) = ∫ Dφ_1(x⃗) ⟨φ_1(x⃗)| e^{−βH} |φ_1(x⃗)⟩.   (1.19)

If we perform a Wick rotation (see Appendix A), i.e., t = −i t_E, then we go from the Minkowski signature to the Euclidean signature, and we can use that to write

⟨φ_1(x⃗)| e^{−βH} |φ_1(x⃗)⟩ = ∫_{φ(x⃗,0)=φ_1(x⃗)}^{φ(x⃗,β)=φ_1(x⃗)} Dφ e^{−S_E[φ(x)]} = G(φ_1(x⃗), φ_1(x⃗); −iβ),   (1.20)

where

S_E[φ(x)] = ∫_0^β dt_E ∫ d³x ( (1/2) ∂_μφ ∂_μφ + V(φ) )   (1.21)

is the Euclidean action.
If we substitute this in Eq. (1.19), the partition function becomes

Z = ∫ Dφ_1(x⃗) ∫_{φ(x⃗,0)=φ_1(x⃗)}^{φ(x⃗,β)=φ_1(x⃗)} Dφ e^{−S_E[φ(x)]} = ∮_{PBC} Dφ e^{−S_E[φ]},   (1.22)

where PBC stands for periodic boundary conditions, i.e., φ(x⃗, 0) = φ(x⃗, β). If we perform the same calculation for fermionic fields, we get the same result, but in place of periodic boundary conditions we have anti-periodic boundary conditions, i.e., ψ(x⃗, 0) = −ψ(x⃗, β). Using this formalism we can study the physics of finite temperature quantum field theory. This formalism is also useful in exploring strongly coupled quantum field theories using Monte Carlo simulations of the Euclidean model.

So a general result for fields in flat Minkowski spacetime follows from here: Euclidean quantum field theory in (D + 1)-dimensional spacetime with 0 ≤ t_E < β is the same as quantum statistical mechanics in D-dimensional space [Zee 10].

Chapter 2

Markov Chain Monte Carlo Algorithms
This chapter introduces the computational techniques required to sample data from a probability distribution (PD) that is known only up to some multiplicative constant. We will see how to apply these algorithms to Euclidean path integrals in this and later chapters. To begin, consider the general partition function of a particle in one dimension in some potential V(x), given by

Z = ∫ Dx(t) e^{−S_E[x(t)]},   (2.1)

where the Euclidean action for a general potential V(x) is given by

S_E[x(t)] = ∫_0^β dt ( (1/2) ẋ² + V(x) ),   (2.2)

with periodic boundary conditions x(t + β) = x(t) (for bosonic fields). Since we are using the canonical ensemble, the system is at temperature T = β^{−1}. Now, if we were able to calculate Z, then we could calculate thermodynamic quantities like the free energy, the energy, and other quantities using Z and its derivatives.

But for a general potential V(x) the theory cannot be solved exactly. So we need to resort to other methods to calculate the expectation values. One method is to use perturbative calculations with Feynman diagrams. But this method breaks down for strong-coupling terms present in the potential function, as the contributions of the higher order diagrams are comparable to those of the previous diagrams. So we need to resort to non-perturbative methods in those cases.

One of the methods to evaluate these expectation values is through numerical simulations. But our system is continuous, and so the partition function is to be integrated over the dynamical variables at all times. The standard way to handle this is to divide the time variable into slices and evaluate each spatial integral at a fixed time [Peskin 95]. First we discretize the time variable by dividing it into a lattice of T points with uniform lattice spacing a, with β = aT. Now the dynamical variable x(t) is no longer continuous and exists only on the discrete lattice points x_t. So our partition function on the lattice becomes

Z = ∫ ( ∏_{t=1}^T dx_t ) e^{−S_lat(x_t)},   (2.3)

where S_lat is the lattice action. We can construct this action by discretizing S_E. For this, we need to replace integrals with finite sums and derivatives with finite differences. The prescription to do this is as follows:

x(t) → x_t,   (2.4)
∂x/∂t → (x_t − x_{t−1}) / a,   (2.5)
∫_0^β dt → a ∑_{t=1}^T,   (2.6)
V(x(t)) → V(x_t).   (2.7)

This prescription works well only for bosonic fields.
The prescription to put gauge fields and fermionic fields on the lattice will be discussed in later chapters. Since bosonic fields obey periodic boundary conditions, we also need to maintain that condition on the lattice. On the lattice, the periodic boundary condition becomes x_t = x_{t+T}. With this prescription, the lattice action S_lat is given by

S_lat = a ∑_{t=1}^T [ (1/2) ( (x_t − x_{t−1}) / a )² + V(x_t) ],   (2.8)

with x_0 = x_T. The above method describes the behaviour of the partition function up to some multiplicative constant, provided a large enough number of lattice sites is used so that the system behaves as if it were continuous.

Now we can use the above form of the partition function to calculate the expectation values of observables. For some observable O, the expectation value ⟨O⟩ is given by

⟨O⟩ = [ ∫ ( ∏_{t=1}^T dx_t ) O(x_t) e^{−S_lat(x_t)} ] / [ ∫ ( ∏_{t=1}^T dx_t ) e^{−S_lat(x_t)} ].   (2.9)

Now we can evaluate this expression numerically. But if we use simple numerical integration techniques (e.g., Gaussian quadrature, Simpson's rule), evaluating the above integral becomes a computationally heavy task. We can understand this using a simple argument. For a one-dimensional integral, we need to divide the integration region into grids, let us say N grids. If we now consider a d-dimensional integral, then we would require N^d grids to evaluate the integral numerically. For better accuracy we need N to be large, so the complexity of the algorithm is O(N^d). For example, if we take 20 lattice points and the number of grids to be 100, then we need to sum 100^20 = 10^40 terms, which is a huge number. Apart from this, there will be regions in the integration domain where the contribution to the integral is negligible, as the weight factor here is e^{−S_lat(x_t)}. This way, we waste computational resources on these integration regions.
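Note that while the full integral is intractable, evaluating the integrand itself is cheap: the lattice action of Eq. (2.8) costs only O(T) operations per configuration. A minimal sketch in Python/NumPy (the harmonic potential and the lattice parameters here are illustrative choices, not taken from the thesis):

```python
import numpy as np

def lattice_action(x, a, V):
    """Discretized Euclidean action of Eq. (2.8):
    S_lat = a * sum_t [ (1/2) ((x_t - x_{t-1}) / a)**2 + V(x_t) ],
    with the periodic boundary condition x_0 = x_T."""
    dx = x - np.roll(x, 1)              # x_t - x_{t-1}; np.roll supplies x_0 = x_T
    return a * np.sum(0.5 * (dx / a) ** 2 + V(x))

# Illustrative example: harmonic potential on a lattice with T = 8 sites, beta = 4
T_sites, beta = 8, 4.0
a = beta / T_sites
V = lambda y: 0.5 * y ** 2

x_const = np.ones(T_sites)              # constant path: the kinetic term vanishes
print(lattice_action(x_const, a, V))    # a * T * V(1) = 4 * 0.5 = 2.0
```

The `np.roll` shift implements the periodic wrap, so no special-casing of the t = 1 site is needed.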
So we require cleverer methods to evaluate the above integral: methods that give more importance to the regions of the integration volume that contribute the most to the integral.

In any Monte Carlo integration, the error is proportional to the standard deviation of the integrand and inversely proportional to the square root of the sample size. One way to reduce the error is to reduce the standard deviation, as that does not cost more computational power. This is done by sampling the important region of the integral.

Let us say we have an integral

I = ∫_a^b f(x) dx,   (2.10)

and we have a probability distribution p(x) over this domain. That is,

∫_a^b p(x) dx = 1.   (2.11)

Then we can write the integral in the form

I = ∫_a^b [ f(x) / p(x) ] p(x) dx.   (2.12)

Now if we sample N data points from the probability distribution p(x), then we can approximate the above integral as

I ≈ (1/N) ∑_{i=1}^N f(x_i) / p(x_i),   I = lim_{N→∞} (1/N) ∑_{i=1}^N f(x_i) / p(x_i).   (2.13)

Now we need to choose the probability distribution p(x) such that the ratio f(x)/p(x) varies slowly, as this will decrease the standard deviation. The right choice of p(x) will give a good result even for a small number of data points. For example, if f(x) is sharply peaked at some point, then choosing a probability distribution p(x) which is also sharply peaked near that point will give a better estimate of the integral than a probability distribution which has a vanishing tail in that region.

Our main aim is to evaluate expectation values of observables like Eq. (2.9). Since these are all infinite-dimensional integrals in the continuous case, to evaluate them we put the system on the lattice and obtain a finite-dimensional integral. As already discussed in the introduction of this chapter, the dimension of the integral is large, and we need numerical techniques to evaluate it. So we replace the integral with a sum and try to estimate it.
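Before applying this to lattice configurations, the one-dimensional estimate of Eq. (2.13) can be sketched directly. This is a minimal illustration, assuming a sharply peaked integrand of our own choosing (not one from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sharply peaked integrand: f(x) = exp(-x^2 / (2 * 0.1^2));
# its exact integral over the real line is 0.1 * sqrt(2*pi) ≈ 0.2507.
f = lambda x: np.exp(-x ** 2 / (2 * 0.1 ** 2))
exact = 0.1 * np.sqrt(2 * np.pi)
n = 100_000

# (a) Broad uniform p(x) = 1/10 on [-5, 5]: most samples land where f ≈ 0.
x_uni = rng.uniform(-5.0, 5.0, n)
est_uniform = np.mean(f(x_uni) * 10.0)           # f / p with p = 1/10

# (b) Importance sampling with p(x) = Normal(0, 0.2) matched to the peak.
sigma = 0.2
x_imp = rng.normal(0.0, sigma, n)
p = np.exp(-x_imp ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
est_importance = np.mean(f(x_imp) / p)

print(exact, est_uniform, est_importance)        # all close to 0.2507
```

Both estimators converge to the same value, but in case (b) the ratio f/p varies slowly over the sampled region, which is exactly the criterion described above, so its variance is much smaller.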
Then the quantity in Eq. (2.9) becomes

⟨O⟩ ≈ ⟨O⟩_n = [ ∑_{s=1}^n O({x_t}_s) e^{−S_lat({x_t}_s)} ] / [ ∑_{s=1}^n e^{−S_lat({x_t}_s)} ],   (2.14)

where {x_t}_s is one of the microstates allowed to the system and n is the total number of microstates. In our case the number of microstates is infinite, and we are estimating the integral with only a finite number of terms. So we require some method to sample the important region of the integral.

We will use importance sampling to generate the field configurations which contribute the most to the integral. So we introduce a probability distribution p_s, which denotes the probability of getting the microstate {x_t}_s, and the above average value can be written as

⟨O⟩ = [ ∑_s ( O({x_t}_s) / p_s ) p_s e^{−S_lat({x_t}_s)} ] / [ ∑_s ( e^{−S_lat({x_t}_s)} / p_s ) p_s ].   (2.15)

Now we generate the microstates s with probability p_s and approximate ⟨O⟩ with ⟨O⟩_n as

⟨O⟩_n = [ ∑_{s=1}^n ( O({x_t}_s) / p_s ) e^{−S_lat({x_t}_s)} ] / [ ∑_{s=1}^n e^{−S_lat({x_t}_s)} / p_s ].   (2.16)

If we look back at Eq. (2.15), we can clearly see that the sum will be dominated by those terms whose exponential factor is large, as the function e^{−x} decays very fast. So we can choose the probability distribution p_s ∝ e^{−S_lat({x_t}_s)} as the required probability distribution. The exact probability distribution p_s is given by

p_s = e^{−S_lat({x_t}_s)} / ∑_{s=1}^n e^{−S_lat({x_t}_s)}.   (2.17)

Now if we substitute this in Eq. (2.16), the average value ⟨O⟩_n just becomes the arithmetic average

⟨O⟩_n = (1/n) ∑_{s=1}^n O({x_t}_s).
(2.18)

Now we require some method to sample data according to the probability distribution p_s. Once we have the means of generating field configurations according to the probability distribution p_s, we can calculate the expectation value of any observable using Eq. (2.18).

The Metropolis algorithm [Metropolis 53] is one of the algorithms to sample data from probability distributions which are known only up to some multiplicative constant. The Metropolis algorithm uses a Markov chain to generate these configurations, and in the limit of the Markov chain length going to infinity, it converges to the exact probability distribution. During this process the system passes through the region of configuration space which contains most of the values contributing to the expectation value.

Now let us describe the algorithm. Suppose the probability distribution we want to sample data from is P(x⃗ = {x_1, x_2, . . . , x_n}), where P(x⃗) depends on n variables.

• Step 1: Choose an initial state, say x⃗_0 = {x_1^0, x_2^0, . . . , x_n^0}.

• Step 2: Then we choose a distribution Q(y⃗_1 | y⃗_2) to generate a small random change in the previously accepted state. The distribution Q is symmetric, i.e., Q(y⃗_1 | y⃗_2) = Q(y⃗_2 | y⃗_1). The new proposed state is x⃗_pro = x⃗_n + Δx⃗, where Δx⃗ is distributed according to Q(x⃗_pro | x⃗_n).

• Step 3: Let A(x⃗_pro, x⃗_n) = min{ 1, P(x⃗_pro)/P(x⃗_n) }. A(x⃗_pro, x⃗_n) is the probability of accepting the state x⃗_pro starting from x⃗_n. We accept the state x⃗_pro with probability A(x⃗_pro, x⃗_n). If the state is accepted, then x⃗_{n+1} = x⃗_pro; otherwise x⃗_{n+1} = x⃗_n.
• Step 4: Repeat Steps 2 and 3.

This algorithm makes a random walk in the configuration space, sometimes accepting the proposed state and sometimes remaining in the same place. Intuitively, this algorithm accepts the more probable states by making use of A(x⃗_pro, x⃗_n). States with relatively larger probability will always be accepted, but states with relatively smaller probability will be accepted only with probability A(x⃗_pro, x⃗_n). So more points are sampled from the high-density regions of P(x⃗), while the low-density regions are visited far less often.

Computationally, Step 3 is implemented by choosing a random number α ∈ [0, 1] from a uniform random number generator. We used the ran3 function from the text Numerical Recipes [Vetterling 02] for this purpose. If α ≤ A(x⃗_pro, x⃗_n), then we accept the proposed state; otherwise we reject the proposed state and keep the original one.

Sufficient conditions for this algorithm to converge are the detailed balance condition and ergodicity [Robert 04]. Ergodicity simply means that the system should be able to reach any state from any other state in a finite number of steps. The detailed balance condition is that the probability to go from x⃗_1 to x⃗_2 is the same as the probability to go from x⃗_2 to x⃗_1, i.e., P(x⃗_1) Q(x⃗_2 | x⃗_1) A(x⃗_2, x⃗_1) = P(x⃗_2) Q(x⃗_1 | x⃗_2) A(x⃗_1, x⃗_2). To prove this, first note that for the Metropolis algorithm Q(x⃗_2 | x⃗_1) = Q(x⃗_1 | x⃗_2) and A(x⃗_2, x⃗_1) = min{ 1, P(x⃗_2)/P(x⃗_1) }. If P(x⃗_2) > P(x⃗_1), then

A(x⃗_2, x⃗_1) = 1 and A(x⃗_1, x⃗_2) = P(x⃗_1)/P(x⃗_2),   (2.19)
P(x⃗_1) A(x⃗_2, x⃗_1) = P(x⃗_1),   (2.20)
P(x⃗_2) A(x⃗_1, x⃗_2) = P(x⃗_2) P(x⃗_1)/P(x⃗_2) = P(x⃗_1).   (2.21)

So,

P(x⃗_1) Q(x⃗_2 | x⃗_1) A(x⃗_2, x⃗_1) = P(x⃗_2) Q(x⃗_1 | x⃗_2) A(x⃗_1, x⃗_2).   (2.22)

The proof of the ergodicity condition is given in Ref. [Robert 04].

The Metropolis algorithm is very old, and it has some drawbacks. First, the convergence rate is slow, and second, it produces high autocorrelation in the generated states. To overcome these problems, many algorithms have been developed, e.g., the heat bath algorithm. But these algorithms also have their own problems; they cannot be applied to all models. This is where the Hamiltonian Monte Carlo (HMC) algorithm comes to the rescue.
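The Metropolis steps above can be sketched in a few lines. This is a minimal one-dimensional illustration, assuming a toy target P(x) ∝ e^{−x⁴} of our own choosing, with NumPy's seeded uniform generator standing in for ran3:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(log_p, x0, step, n_steps):
    """Sample from a target known only up to a multiplicative constant.
    log_p: log of the unnormalized target; step: width of the symmetric
    uniform proposal Q. Returns the Markov chain of states."""
    x, chain = x0, np.empty(n_steps)
    for i in range(n_steps):
        x_pro = x + rng.uniform(-step, step)      # Step 2: symmetric proposal
        # Step 3: accept with probability A = min(1, P(x_pro)/P(x)),
        # implemented as alpha <= A with alpha uniform in [0, 1]
        if np.log(rng.random()) <= log_p(x_pro) - log_p(x):
            x = x_pro
        chain[i] = x                               # Step 4: repeat
    return chain

# Toy target P(x) ∝ exp(-x^4); the normalization constant is never needed.
chain = metropolis(lambda x: -x ** 4, x0=0.0, step=1.0, n_steps=50_000)
print(np.mean(chain))                              # ≈ 0 by symmetry of the target
```

Working with log P avoids overflow when the action is large, and the seeded generator makes the run reproducible.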
The Metropolis algorithm uses a random walk to move through the configuration space, but HMC uses directed paths while sampling. It makes use of Hamilton's equations for this purpose.

For this algorithm, we first need to introduce a conjugate momentum variable for each degree of freedom, i.e., for each field variable, and a fictitious time $\tau$ in which the field and conjugate momentum variables evolve according to Hamilton's equations. Say the probability distribution we want to sample from is $P(\vec{x} = \{x_1, x_2, \ldots, x_n\})$. We can also write this probability distribution as $P(\vec{x}) = e^{-\{-\log(P(\vec{x}))\}}$. Now for each variable $x_i$ we introduce a conjugate momentum $p_i$ and define the Hamiltonian of the system as
$$H(\vec{x}, \vec{p}) = \frac{1}{2}\sum_{i=1}^{n} p_i^2 - \log(P(\vec{x})). \quad (2.23)$$
The joint probability distribution is given by $\pi(\vec{x}, \vec{p}) = e^{-H(\vec{x}, \vec{p})}$. If we integrate out the momentum variables $p_i$ from this joint probability distribution $\pi(\vec{x}, \vec{p})$, we recover the original probability distribution $P(\vec{x})$. Hamilton's equations are as follows:
$$\frac{d\vec{x}}{d\tau} = \frac{\partial H}{\partial \vec{p}}, \qquad \frac{d\vec{p}}{d\tau} = -\frac{\partial H}{\partial \vec{x}}. \quad (2.24)$$
Hamilton's equations are used to evolve the field variables and their conjugate momenta in the fictitious time $\tau$ numerically. This is done using symplectic integrators, i.e., numerical algorithms for solving Hamilton's equations which preserve time reversibility and the phase-space volume. The simplest symplectic integrator is the leapfrog integrator. The steps involved in the leapfrog algorithm are
$$p_i\left(\tau + \tfrac{\epsilon}{2}\right) = p_i(\tau) - \frac{\epsilon}{2}\frac{\partial H}{\partial q_i}(\vec{q}(\tau)), \quad (2.25)$$
$$q_i(\tau + \epsilon) = q_i(\tau) + \epsilon\, p_i\left(\tau + \tfrac{\epsilon}{2}\right), \quad (2.26)$$
$$p_i(\tau + \epsilon) = p_i\left(\tau + \tfrac{\epsilon}{2}\right) - \frac{\epsilon}{2}\frac{\partial H}{\partial q_i}(\vec{q}(\tau + \epsilon)).$$
(2.27)

Now let us describe the algorithm. Suppose the probability distribution we want to sample from is $P(\vec{x} = \{x_1, x_2, \ldots, x_n\})$, where $P(\vec{x})$ depends on $n$ variables.

• Step 1: First we form the Hamiltonian as given in Eq. (2.23). Then we choose some initial state, say $\vec{x} = \{x_1, x_2, \ldots, x_n\} = \vec{x}(0)$.

• Step 2: Then we choose each $p_i = p_i(0)$ randomly from $N(0,$
$1)$, i.e., the normal distribution with $\mu = 0$ and $\sigma = 1$. Then we calculate $H_i = H(\vec{x}(0), \vec{p}(0))$.

• Step 3: Now we evolve $\vec{x}$ and $\vec{p}$ using the leapfrog algorithm for $n$ time steps, with $\epsilon$ being the time-resolution parameter. The state $\vec{x}(n\epsilon)$ is the new proposed state. Then we calculate $H_f = H(\vec{x}(n\epsilon), \vec{p}(n\epsilon))$.

• Step 4: Now we apply the Metropolis test,
$$A(\vec{x}(n\epsilon), \vec{x}(0)) = \min\left\{1, \frac{\pi(\vec{x}(n\epsilon), \vec{p}(n\epsilon))}{\pi(\vec{x}(0), \vec{p}(0))}\right\} = \min\left\{1, e^{-(H_f - H_i)}\right\}.$$
We accept the state $\vec{x}(n\epsilon)$ with probability $A(\vec{x}(n\epsilon), \vec{x}(0))$. If the state is accepted then $\vec{x}_{i+1} = \vec{x}_i(n\epsilon)$, otherwise $\vec{x}_{i+1} = \vec{x}_i(0)$.

• Step 5: Repeat Steps 2 to 4.

We know that Hamilton's equations are time-reversible if $H(p, q) = H(-p, q)$, so we build the Hamiltonian Eq. (2.23), which has this property. This property is required for the detailed balance condition [Neal 11]. The phase-space volume also remains conserved in Hamiltonian dynamics. We need this because the joint probability distribution $\pi(\vec{q}, \vec{p})$ remains invariant under the Hamiltonian dynamics, as the Jacobian of the transformation is 1. Since HMC uses the Metropolis test for the Markov chain formation, both the sufficient and necessary conditions are satisfied. The proofs are given in Refs. [Neal 11] and [Betancourt 17].

One of the tests to check whether HMC is implemented properly is to look at the expectation value of $e^{-\Delta H}$ [Ydri 17].
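Steps 1-5, including the leapfrog evolution of Eqs. (2.25)-(2.27), can be sketched as follows for a one-dimensional target. The quadratic target and the values of $\epsilon$ and $n$ are illustrative assumptions, and the sketch also records $e^{-\Delta H}$ for each trajectory so the implementation test discussed here can be applied.

```python
import math
import random

def hmc_step(x, log_p, grad_log_p, eps, n_steps, rng):
    """One HMC update for a 1D target P(x) proportional to exp(log_p(x))."""
    p = rng.gauss(0.0, 1.0)                        # Step 2: momentum from N(0, 1)
    h_i = 0.5 * p * p - log_p(x)                   # H = p^2/2 - log P(x), Eq. (2.23)
    x_new, p_new = x, p
    p_new += 0.5 * eps * grad_log_p(x_new)         # leapfrog half-step, Eq. (2.25)
    for i in range(n_steps):
        x_new += eps * p_new                       # full position step, Eq. (2.26)
        g = grad_log_p(x_new)
        p_new += eps * g if i < n_steps - 1 else 0.5 * eps * g   # Eq. (2.27)
    h_f = 0.5 * p_new * p_new - log_p(x_new)
    w = math.exp(-(h_f - h_i))                     # e^{-dH}, should average to 1
    if rng.random() < min(1.0, w):                 # Step 4: Metropolis test
        return x_new, w
    return x, w                                    # rejected: keep the old state

# Example: sample a standard normal, log P = -x^2/2, and monitor <exp(-dH)>
rng = random.Random(11)
x, samples, exp_dh = 0.0, [], []
for _ in range(5000):
    x, w = hmc_step(x, lambda y: -0.5 * y * y, lambda y: -y,
                    eps=0.2, n_steps=10, rng=rng)
    samples.append(x)
    exp_dh.append(w)
mean_exp_dh = sum(exp_dh) / len(exp_dh)
```

Because the leapfrog integrator is only approximately energy-conserving, $\Delta H$ fluctuates around zero and $\langle e^{-\Delta H}\rangle$ stays close to 1 when the implementation is correct.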
The value of this is given by
$$\left\langle e^{-\Delta H}\right\rangle = \left\langle e^{-(H(\vec{x}\,', \vec{p}\,') - H(\vec{x}, \vec{p}))}\right\rangle = \frac{1}{Z}\int d\vec{p}\, d\vec{x}\; e^{-H(\vec{x}, \vec{p})}\, e^{-(H(\vec{x}\,', \vec{p}\,') - H(\vec{x}, \vec{p}))} \quad (2.28)$$
$$= \frac{1}{Z}\int d\vec{p}\, d\vec{x}\; e^{-H(\vec{x}\,', \vec{p}\,')} \quad (2.29)$$
$$= \frac{1}{Z}\int d\vec{p}\,'\, d\vec{x}\,'\; e^{-H(\vec{x}\,', \vec{p}\,')} \quad \text{as } \frac{\partial(\vec{p}\,', \vec{x}\,')}{\partial(\vec{p}, \vec{x})} = 1, \quad (2.30)$$
$$\left\langle e^{-\Delta H}\right\rangle = \frac{1}{Z}\, Z = 1. \quad (2.31)$$
Eq. (2.31) acts as one of the tests for the HMC algorithm. If the value of $e^{-\Delta H}$ fluctuates away from 1, it is a clear indication that the algorithm is not implemented correctly.

After thermalization, we can use the field configurations to calculate the expectation values of observables using Eq. (2.18). But the error in the expectation value depends on whether the states used to calculate it are correlated or not. If the states $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_N$ are not correlated, then the error in the expectation value of $f$ is given by [Gattringer 09]
$$\delta f = \frac{\sigma}{\sqrt{N}}, \quad (2.32)$$
where $\sigma$ is the standard deviation of $f$. We can obtain uncorrelated states by finding the autocorrelation length and keeping only states separated by at least that length. The autocorrelation length is where the autocorrelation becomes nearly zero.

Certain features remain common to all Monte Carlo simulations. We will discuss some of these basic features here; additional features will be discussed in the later chapters.
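The error estimate of Eq. (2.32) and the autocorrelation-length thinning just described can be sketched as follows. The AR(1) synthetic data below stands in for a Markov-chain time series and is purely illustrative; the 0.05 crossing threshold is likewise an arbitrary choice for the example.

```python
import math
import random

def normalized_autocorrelation(data, tau_max):
    """rho_tau = Gamma_tau / Gamma_0, cf. Eqs. (2.33)-(2.34)."""
    n = len(data)
    mean = sum(data) / n
    def gamma(tau):
        return sum((data[i] - mean) * (data[i + tau] - mean)
                   for i in range(n - tau)) / (n - tau)
    g0 = gamma(0)
    return [gamma(tau) / g0 for tau in range(tau_max + 1)]

def naive_error(data):
    """delta f = sigma / sqrt(N), valid only for uncorrelated data, Eq. (2.32)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(var / n)

# Synthetic correlated time series: AR(1) process with known autocorrelation 0.9^tau
rng = random.Random(3)
data, x = [], 0.0
for _ in range(20000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    data.append(x)

rho = normalized_autocorrelation(data, tau_max=60)
# autocorrelation length: first tau where rho crosses (or nearly reaches) zero
tau_int = next(t for t, r in enumerate(rho) if r < 0.05)
thinned = data[::tau_int]          # keep only (nearly) uncorrelated states
err = naive_error(thinned)
```

Applying Eq. (2.32) to the thinned series gives an honest error bar; applying it to the raw correlated series would underestimate the error.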
We can initialize our simulation with any field configuration for both algorithms, and we will get the same result from any starting point, since the algorithm respects the ergodic property. But for some starting points the algorithm may take a longer time to converge to the desired probability distribution. So, if we have some a priori information regarding the probability distribution, we can use it to select the initial condition. Since we mostly do not have such information, we usually make a cold start for the simulation. A cold start is the initial configuration in which all the fields are set to zero (for bosonic fields only). If we start with a random field configuration, we call it a hot start. Sometimes one uses a cold start for some fields and a hot start for others; this is known as a mixed-field configuration. In our simulations we mostly used cold starts, but for some simulations we used mixed-field configurations as well.

After choosing the initial field configuration, we run the simulation to allow the distribution to reach the equilibrium distribution, and after that we use it to generate/sample data. This process is known as thermalization. Thermalization is achieved when the value of the observable starts to oscillate about some value. Then we can use the data after thermalization to calculate the expectation values. Thermalization occurs faster in HMC compared to the Metropolis algorithm. We will see this in the later chapters.
On the lattice we choose to update only one field variable at one lattice site at a time and repeat this for the other lattice sites. For the Metropolis algorithm it is better to update the field variable at one lattice site multiple times before moving to the next one, so that the system converges to the required distribution faster. For HMC, making multiple changes at one lattice site does not give any better result, so we update the field variables only once at each lattice site.

When the whole lattice has been updated with the new configuration, it is known as a sweep. We collect the values of the observables after each sweep and store this data in a file. All thermalization plots are plotted against the sweep number. To prevent any data loss we store the field configurations after every 1000 sweeps. This way we do not need to thermalize the system again and again, and we can restart the simulation from the previously stored configuration.
It is important to note that the field configurations generated by the Markov process are not independent; each configuration depends on the previous one. So we cannot directly evaluate our observables from these correlated field configurations, as this would increase the error bars, and it would require more field configurations to reduce them. So what we do is run the simulation long enough and store the values of the observables. Then we calculate the normalized autocorrelation to find out how many sweeps are required to generate statistically independent field configurations. The autocorrelation and the normalized autocorrelation are defined as
$$\Gamma_\tau = \frac{1}{N - \tau}\sum_{i=1}^{N-\tau}(f_i - \langle f\rangle)(f_{i+\tau} - \langle f\rangle) \qquad \text{(autocorrelation)}, \quad (2.33)$$
$$\rho_\tau = \frac{\Gamma_\tau}{\Gamma_0} \qquad \text{(normalized autocorrelation)}, \quad (2.34)$$
where $N$ is the number of data points after thermalization and $\tau$ is the number of sweeps for which we are checking the correlation. $\Gamma_0$ is simply the variance. The autocorrelation length is defined as the minimum number of sweeps required to get an uncorrelated field configuration. So we find the autocorrelation length by looking for the point where the normalized autocorrelation crosses 0, or comes very close to 0. After obtaining the autocorrelation length, we calculate the expectation value of the observable and the error bars from these uncorrelated field configurations. From now on we will use the term autocorrelation for the normalized autocorrelation.

Chapter 3

Harmonic Oscillator and Harmonic Oscillator with Commutator Potential
In this chapter, we are going to simulate two models, one using the Metropolis algorithm and the other using the HMC algorithm. From these simulations we will see that the HMC algorithm converges to the required distribution faster than the Metropolis algorithm.

We start this chapter with a simple model: a collection of $N^2$ uncoupled harmonic oscillators. The Euclidean action for this model is
$$S_E = \frac{1}{2}\int_0^\beta dt\; \mathrm{Tr}\left(\dot{X}^2 + m^2 X^2\right), \quad (3.1)$$
where $X$ is an $N \times N$ Hermitian matrix with the periodic boundary condition $X(t) = X(t + \beta)$, with $\beta$ being the period of the system. We can easily see, by expanding the trace, that this is indeed the Euclidean action of $N^2$ uncoupled harmonic oscillators. This model is exactly solvable, but our aim here is to simulate it using the Metropolis algorithm and verify the simulation results against the analytical results. Now, using the discretization procedure from the previous chapter, Eqs. (2.4) - (2.7), the Euclidean action on the lattice takes the form
$$S_{\rm lat} = \frac{a}{2}\sum_{t=1}^{T} \mathrm{Tr}\left[\left(\frac{X_t - X_{t-1}}{a}\right)^2 + m^2 X_t^2\right], \quad (3.2)$$
where $X_0 = X_T$. Our aim is to generate field configurations whose probability distribution is proportional to $e^{-S_{\rm lat}}$. We will use the Metropolis algorithm to sample data from this probability distribution.

3.1 Metropolis Algorithm for Harmonic Oscillator

This section shows how to apply the Metropolis algorithm to generate field configurations according to the given probability distribution. The idea is to propose a random change in the field configuration and then compare it with the previous field configuration. We can either propose a change in the field configuration at one lattice site or propose a change in all the field configurations. It is easier to make a change in a single field variable at a time, since each time we need to calculate the difference in the action. That is,
$$\Delta S = S(X^{\rm pro}_t) - S(X^{\rm old}_t). \quad (3.3)$$
To calculate this, we can see from Eq.
(3.2) that the only terms contributing to this difference are those containing $X_t$, so we do not need to calculate the whole action to obtain this change. We can separate the terms containing $X_t$ from the action, and we get the local part of the action, $S_{\rm loc}$, containing $X_t$:
$$S_{\rm loc} = a\, \mathrm{Tr}\left[\left(\frac{1}{a^2} + \frac{m^2}{2}\right)X_t^2 - \frac{X_t}{a^2}\left(X_{t-1} + X_{t+1}\right)\right]. \quad (3.4)$$
Now we need to propose a random change from a probability distribution which is symmetric, i.e., $Q(x|y) = Q(y|x)$. We can use any distribution with this property, but the easiest one is the uniform distribution. Thus the random change proposed to the field configuration has the form
$$X^{\rm pro}_t = X^{\rm old}_t + \delta\, Y_t, \quad (3.5)$$
where $Y_t$ is a random Hermitian matrix with each element, both real and imaginary parts, taken from a uniform probability distribution between $(-1, 1)$, and $\delta$ is a configurable parameter which allows us to set the acceptance rate for this algorithm. Now we need to compare the proposed state with the old state. For that we need to calculate the acceptance probability, which here is just $e^{-\Delta S}$. Thus the acceptance probability is
$$e^{-\Delta S} = e^{-(S_{\rm loc}(X^{\rm pro}_t) - S_{\rm loc}(X^{\rm old}_t))}. \quad (3.6)$$
We accept the proposed state with probability $e^{-\Delta S}$. For that we take a random number $\alpha$ from a uniform random number generator between $(0, 1)$. If $e^{-\Delta S} \ge \alpha$, then we accept the proposed state; otherwise we reject it and propose a new state. We continue this procedure until we have enough data to calculate the expectation values of observables.

In the above, the parameter $\delta$ controls the efficiency of the algorithm. It is known as the jump parameter. We judge the effectiveness of the algorithm by looking at the acceptance rate, defined as the ratio of the total number of accepted states to the total number of proposed states. If $\delta$ is small, then the acceptance rate will be high, and the algorithm will take small steps in the configuration space.
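The single-site update built from the local action, the Hermitian proposal, and the Metropolis test can be sketched as follows. This is an illustrative toy with plain Python lists for the matrices; the parameter values (and the jump parameter $\delta$) are assumptions chosen for the example, not the thesis production code.

```python
import math
import random

N, T, a, m, delta = 4, 8, 1.0, 1.0, 0.2   # illustrative parameters
rng = random.Random(5)

def random_hermitian(scale):
    """Hermitian matrix: real diagonal, H[j][k] = conj(H[k][j]), entries uniform."""
    h = [[0j] * N for _ in range(N)]
    for j in range(N):
        h[j][j] = complex(rng.uniform(-scale, scale), 0.0)
        for k in range(j + 1, N):
            h[j][k] = complex(rng.uniform(-scale, scale), rng.uniform(-scale, scale))
            h[k][j] = h[j][k].conjugate()
    return h

def s_loc(xt, xm, xp):
    """Local action: a Tr[(1/a^2 + m^2/2) X_t^2 - X_t (X_{t-1} + X_{t+1}) / a^2]."""
    tr = 0.0
    for j in range(N):
        for k in range(N):
            tr += ((1.0 / a**2 + 0.5 * m**2) * (xt[j][k] * xt[k][j]).real
                   - (xt[j][k] * (xm[k][j] + xp[k][j])).real / a**2)
    return a * tr

X = [[[0j] * N for _ in range(N)] for _ in range(T)]   # cold start: zero matrices
accepted, proposed = 0, 0
for sweep in range(200):
    for t in range(T):
        xm, xp = X[(t - 1) % T], X[(t + 1) % T]        # periodic boundary condition
        Y = random_hermitian(1.0)
        x_pro = [[X[t][j][k] + delta * Y[j][k] for k in range(N)] for j in range(N)]
        dS = s_loc(x_pro, xm, xp) - s_loc(X[t], xm, xp)
        proposed += 1
        if rng.random() < math.exp(min(0.0, -dS)):      # Metropolis test
            X[t] = x_pro
            accepted += 1
acc_rate = accepted / proposed
```

Note that the proposal preserves Hermiticity because the sum of two Hermitian matrices is Hermitian, and tuning $\delta$ directly controls the measured acceptance rate.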
This way we need to run the simulation for a longer time to get enough uncorrelated data for the expectation value of the observable. If $\delta$ is large, then the algorithm takes large steps in the configuration space, and the acceptance rate will be low. Again, we need to simulate for a longer time to get the desired result. So we need some value in between to get the acceptance rate between 65% and 85%. An acceptance rate in this range is good enough to get the data sampled according to the probability distribution.

In this section, we discuss the important quantities needed to monitor the reliability of the simulations and to compare the simulation data with the analytical results. In the simulation everything is measured in units of the lattice spacing $a$. Since the action needs to be dimensionless in natural units, $X$ has dimension $[L^{1/2}]$ and the mass $m$ has dimension $[L^{-1}]$. So $a$ is measured in units of $a$, $X$ is measured in units of $a^{1/2}$, and the mass $m$ is measured in units of $a^{-1}$. We use the following parameters for the simulations: $N = 16$, $a = 1$, $m = 1$, $T = 50$ and $\beta = 50$.

Let us discuss the initial conditions for the simulations first. For this model, we started our simulation with a cold start, i.e., all the field configurations are set to zero matrices. We also need to choose the value of the parameter $\delta$ so that we get an acceptance rate between 65% and 85%. In Fig. 3.1 we show the acceptance rate for this simulation. After that, we update the field configuration at a single lattice site multiple times (in our case, 50 times per site) to make the system thermalize faster. We stored the values of the observables after every sweep. Now we need to decide whether the system has thermalized. For that, we simulate the system for a large number of sweeps, say 10000. Then we plot one of the observables against the number of sweeps to see whether the system has thermalized.
When the value of the observable starts to oscillate about some value, then we are sure that the system has thermalized. Let us look at the run-time history of the observable $O$, defined as
$$O \equiv \frac{1}{N^2\beta}\int_0^\beta dt\; \mathrm{Tr}\left(X^2\right) = \frac{1}{N^2 T}\sum_{t=1}^{T}\mathrm{Tr}\left(X_t^2\right) \quad \text{(discretized version)}. \quad (3.7)$$
We collected the data for this observable after the system thermalized. We can see in Fig. 3.2 that the value of $O$ oscillates about 0.45. This observation leads us to the conclusion that the system has thermalized, and we can now collect the data for our calculations.
Figure 3.1: Acceptance rate against Monte Carlo sweep number. We can see that most of the values of the acceptance rate lie between 70% and 85%, which is good enough to get the desired results.
After thermalization, we collected data for the calculations. It is clear from Fig. 3.2 that the value of $O$ after every sweep is not statistically independent; it takes multiple sweeps to get a statistically independent value of the observable. We use the autocorrelation to find out after how many sweeps the values of the observable become statistically independent.

Fig. 3.3 shows the plot of the normalized autocorrelation of the observable $O$. We can see that the plot decreases exponentially and after that just oscillates about zero. The point where the plot crosses zero defines the autocorrelation length. From this autocorrelation plot, we can see that the autocorrelation length for the observable $O$ is 158. Autocorrelation is an observable-dependent quantity; it will be different for other observables, and hence the autocorrelation length will also be different for other observables.

Figure 3.2: Run-time history of the observable $O$ against Monte Carlo sweep number. Here we only plot the run-time history after the system has thermalized and discard all the previous values. We can clearly see that the value of the observable $O$ oscillates around 0.45.

For calculating any expectation value using Eq. (2.18), we need to take a gap of at least the autocorrelation length, so that we only average over uncorrelated data.

One of the observables we want to look at is the two-point correlation function. It is defined as
$$O(t) \equiv \frac{1}{N^2}\mathrm{Tr}\left(X(0)X(t)\right) = \frac{1}{N^2}\mathrm{Tr}\left(X_1 X_t\right) \quad \text{(discretized form)}. \quad (3.8)$$
The theoretical result for the expectation value of $O(t)$ is
$$\langle O(t)\rangle = \frac{e^{-mt} + e^{-m(\beta - t)}}{2m\left(1 - e^{-\beta m}\right)}. \quad (3.9)$$
Fig. 3.4 shows the simulated result along with the theoretical result. We can see that it matches the theoretical result at most of the points, and at the remaining points the theoretical curve lies inside the error range.
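The comparison between the measured correlator of Eq. (3.8) and the theoretical curve of Eq. (3.9) can be reproduced in miniature. The sketch below is a self-contained toy check for a single ordinary oscillator ($N = 1$, so the trace is trivial) with illustrative parameters; at $a = 1$ the lattice correlator agrees with the continuum formula only approximately, which is visible in the loose tolerance.

```python
import math
import random

T, a, m, delta = 16, 1.0, 1.0, 1.5
beta = T * a
rng = random.Random(9)
x = [0.0] * T                                    # cold start

def local_action(xt, xm, xp):
    # terms of the lattice action containing x_t only (single-oscillator case)
    return a * ((1.0 / a**2 + 0.5 * m**2) * xt * xt - xt * (xm + xp) / a**2)

corr = [0.0] * T
n_meas = 0
for sweep in range(30000):
    for t in range(T):
        xm, xp = x[(t - 1) % T], x[(t + 1) % T]
        x_pro = x[t] + delta * rng.uniform(-1.0, 1.0)
        dS = local_action(x_pro, xm, xp) - local_action(x[t], xm, xp)
        if rng.random() < math.exp(min(0.0, -dS)):
            x[t] = x_pro
    if sweep >= 2000:                            # discard thermalization sweeps
        for t in range(T):                       # average over time translations
            corr[t] += sum(x[t0] * x[(t0 + t) % T] for t0 in range(T)) / T
        n_meas += 1
corr = [c / n_meas for c in corr]

def theory(t):
    # continuum result, Eq. (3.9)
    return ((math.exp(-m * t) + math.exp(-m * (beta - t)))
            / (2 * m * (1 - math.exp(-beta * m))))
```

The measured correlator should be symmetric under $t \to \beta - t$ (periodicity) and decay away from $t = 0$, and it should track the continuum curve up to discretization effects of order $a^2 m^2$.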
This result is convincing enough to believe that the algorithm is working correctly.
Figure 3.3: Normalized autocorrelation against Monte Carlo sweep number for the observable $O$. It starts from 1, decreases exponentially towards 0 and then oscillates about 0. It crosses 0 after 158 sweeps, so the autocorrelation length is 158 for $O$.

Now we introduce more complexity into the harmonic oscillator potential by adding a commutator-squared term to the action. The Euclidean action is given by
$$S_E = \frac{N}{\lambda}\int_0^\beta dt\; \mathrm{Tr}\left(\frac{1}{2}\sum_{i=1}^{d}\dot{X}_i^2 - \frac{1}{4}\sum_{i,j=1}^{d}\left[X_i, X_j\right]^2 + \frac{m^2}{2}\sum_{i=1}^{d} X_i^2\right), \quad (3.10)$$
where the $X_i$, $i = 1, \ldots, d$, are $N \times N$ Hermitian matrices and $\lambda$ is the 't Hooft coupling. To simulate this model with HMC, we introduce a conjugate momentum $P^i_t$ for each lattice field variable $X^i_t$ and form the Hamiltonian
$$H = \sum_{t=1}^{T}\sum_{i=1}^{d}\frac{1}{2}\mathrm{Tr}\left(P^i_t\right)^2 + S_{\rm lat}, \quad (3.13)$$
where $S_{\rm lat}$ is the lattice discretization of Eq. (3.10). Written out, this is
$$H = \sum_{t=1}^{T}\mathrm{Tr}\left[\sum_{i=1}^{d}\frac{1}{2}\left(P^i_t\right)^2 + \frac{Na}{\lambda}\left(\sum_{i=1}^{d}\frac{1}{2}\left(\frac{X^i_t - X^i_{t-1}}{a}\right)^2 - \frac{1}{4}\sum_{i,j=1}^{d}\left[X^i_t, X^j_t\right]^2 + \frac{m^2}{2}\sum_{i=1}^{d}\left(X^i_t\right)^2\right)\right]. \quad (3.14)$$
At the start of each trajectory, each momentum component is drawn from the $N(0, 1)$ distribution. Details about how to generate random numbers with a Gaussian distribution can be found in Ref. [Ydri 17]. After that we need to evolve the field and momentum variables according to Hamilton's equations. We need to find the force for each momentum variable. We have
$$\left(\dot{X}^i_t\right)_{rs} = \frac{\partial H}{\partial (P^i_t)_{sr}}, \qquad \left(\dot{P}^i_t\right)_{rs} = -\frac{\partial H}{\partial (X^i_t)_{sr}}, \quad (3.15)$$
where the dot on the field and momentum variables denotes the derivative with respect to the fictitious time $\tau$. We then have the equations of motion
$$\left(\dot{X}^i_t\right)_{rs} = \left(P^i_t\right)_{rs}, \quad (3.16)$$
$$\left(\dot{P}^i_t\right)_{rs} = -\frac{Na}{\lambda}\left[\left(\frac{2}{a^2} + m^2\right)\left(X^i_t\right)_{rs} - \frac{1}{a^2}\left[\left(X^i_{t-1}\right)_{rs} + \left(X^i_{t+1}\right)_{rs}\right] - \sum_{\substack{j=1 \\ j\neq i}}^{d}\left[X^j_t, \left[X^i_t, X^j_t\right]\right]_{rs}\right]. \quad (3.17)$$
In matrix form these equations simply become
$$\dot{X}^i_t = P^i_t, \quad (3.18)$$
$$\dot{P}^i_t = -\frac{Na}{\lambda}\left[\left(\frac{2}{a^2} + m^2\right)X^i_t - \frac{1}{a^2}\left(X^i_{t-1} + X^i_{t+1}\right) - \sum_{\substack{j=1 \\ j\neq i}}^{d}\left[X^j_t, \left[X^i_t, X^j_t\right]\right]\right]. \quad (3.19)$$
We will evolve the above equations using the leapfrog algorithm described in the previous chapter, Eqs. (2.25) - (2.27).
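The force of Eq. (3.19), including the double-commutator term, can be sketched as follows. The matrix helpers, lattice size, and parameter values are illustrative assumptions; a useful correctness check, performed in this sketch, is that the force is Hermitian, so that $X^i_t$ stays Hermitian under the leapfrog update.

```python
import random

N, d, T = 2, 3, 3
a, lam, m = 0.5, 1.0, 0.1
rng = random.Random(13)

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def lin(*terms):
    """Linear combination of matrices: lin((c1, A1), (c2, A2), ...)."""
    return [[sum(c * M[i][j] for c, M in terms) for j in range(N)] for i in range(N)]

def comm(A, B):
    return lin((1, mmul(A, B)), (-1, mmul(B, A)))

def rand_herm():
    H = [[0j] * N for _ in range(N)]
    for i in range(N):
        H[i][i] = complex(rng.uniform(-1, 1), 0.0)
        for j in range(i + 1, N):
            H[i][j] = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
            H[j][i] = H[i][j].conjugate()
    return H

# X[t][i]: Hermitian matrix in direction i at lattice time t (periodic in t)
X = [[rand_herm() for _ in range(d)] for _ in range(T)]

def force(t, i):
    """-dH/dX^i_t: hopping + mass term plus the double-commutator term."""
    hop = lin((2.0 / a**2 + m**2, X[t][i]),
              (-1.0 / a**2, X[(t - 1) % T][i]),
              (-1.0 / a**2, X[(t + 1) % T][i]))
    cterm = [[0j] * N for _ in range(N)]
    for j in range(d):
        if j != i:
            cterm = lin((1, cterm), (1, comm(X[t][j], comm(X[t][i], X[t][j]))))
    return lin((-N * a / lam, hop), (N * a / lam, cterm))

F = force(0, 0)
herm_violation = max(abs(F[p][q] - F[q][p].conjugate())
                     for p in range(N) for q in range(N))
```

The double commutator $[X_j, [X_i, X_j]]$ is Hermitian when the $X$'s are, so the violation should vanish to machine precision.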
The equations are simply
$$P^i_t\left(\tau + \tfrac{\epsilon}{2}\right) = P^i_t(\tau) - \frac{\epsilon}{2}\frac{\partial H}{\partial X^i_t}(X(\tau)), \quad (3.20)$$
$$X^i_t(\tau + \epsilon) = X^i_t(\tau) + \epsilon\, P^i_t\left(\tau + \tfrac{\epsilon}{2}\right), \quad (3.21)$$
$$P^i_t(\tau + \epsilon) = P^i_t\left(\tau + \tfrac{\epsilon}{2}\right) - \frac{\epsilon}{2}\frac{\partial H}{\partial X^i_t}(X(\tau + \epsilon)), \quad (3.22)$$
where $\epsilon$ is a tunable parameter. We also have one more tunable parameter: the number of times we evolve the fields using the leapfrog algorithm. We call this parameter $n$. So we evolve the fields $n$ times, for a total fictitious time $n\epsilon$. Both $\epsilon$ and $n$ must be tuned to get the desired acceptance rate in the simulations.

After evolving the fields and their conjugate momenta, we need to compare them with the initial fields and momenta. For that, we need to calculate the acceptance probability for this algorithm, which is simply $e^{-\Delta H} = \exp\left\{-\left(H(X(n\epsilon), P(n\epsilon)) - H(X(0), P(0))\right)\right\}$. We accept the proposed state with probability $e^{-\Delta H}$. For that, we take a random number $\alpha$ from a uniform random number distribution between $(0, 1)$. If $\alpha \le e^{-\Delta H}$, we accept the proposed state; otherwise we reject it and propose a new state. We continue this procedure until we have enough data to calculate the expectation values of observables. For this algorithm also we require an acceptance rate between 65% and 85% to move through the configuration space optimally. This acceptance rate can be achieved by tuning the parameters $\epsilon$ and $n$ appropriately.

Similar to the Metropolis algorithm, we need to monitor some important quantities for the HMC algorithm as well. We will discuss these in this section and later show some results for this model. In this simulation each variable and parameter is measured in units of the 't Hooft coupling $\lambda$.
Since the Yang-Mills coupling $g$ has dimension $[L^{-3/2}]$ in one dimension, $X$ can be measured in units of $\lambda^{1/3}$, $a$ in units of $\lambda^{-1/3}$ and the mass $m$ in units of $\lambda^{1/3}$. Thus, every variable and parameter appearing in Eq. (3.10) is dimensionless. The values of the parameters used for these simulations are: $N = 8$, $a = 0.5$, $T = 10$, $\beta = 5$, $d = 3$ and $\lambda = 1$. We have taken $m = 0.01$ for the results of the next two subsections.

We started the simulation with a cold start, i.e., all the field configurations are set to zero matrices initially. To explore the configuration space efficiently, we choose the tunable parameters $\epsilon$ and $n$ so that we get an acceptance rate between 65% and 85%. Fig. 3.5 shows the acceptance rate against Monte Carlo sweeps for this simulation. Tuning these parameters is a crucial step for the HMC algorithm; sometimes a slight change in these parameters can make the simulation unstable.

After tuning these parameters, we monitor the quantity $e^{-\Delta H}$. As mentioned in the previous chapter, this is one of the tests to check whether HMC has been implemented properly. Fig. 3.6 shows the plot of $e^{-\Delta H}$ against the number of accepted states. We can clearly see that the data points oscillate about 1 in this plot, and the average value of $e^{-\Delta H}$ comes out consistent with 1 within errors.

Next we look at the run-time history of the observable $O$, defined as
$$O \equiv \frac{1}{N\beta}\int_0^\beta dt \sum_{i=1}^{d}\mathrm{Tr}\left(X_i^2\right) = \frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{d}\mathrm{Tr}\left(X^i_t\right)^2 \quad \text{(discretized form)}. \quad (3.23)$$
Looking at the plot of the run-time history of the observable $O$ (Fig. 3.7), it is clear that the data points are correlated with each other, and we need to skip a certain number of consecutive steps to get uncorrelated data. Fig. 3.8 shows the plot of the normalized autocorrelation against the number of sweeps for the observable $O$. The autocorrelation quickly decays and oscillates about zero after about 15-16 sweeps, which shows that the autocorrelation between the data points is quite small.
The autocorrelation length of this observable comes out to be 15.

Figure 3.5: Acceptance rate against Monte Carlo sweep number. We can see that most of the data points are concentrated between 75% and 85%, which is what we want for good simulation results.

Figure 3.6: The plot of $e^{-\Delta H}$ against the number of accepted states. In this plot we can see that most of the values are concentrated near 1, which is a clear indication that the HMC algorithm has been implemented correctly.

Figure 3.7: Run-time history of the observable $O$ against Monte Carlo sweep number. The data tell us that it took only about 10-12 sweeps for the system to thermalize.

Figure 3.8: Normalized autocorrelation against Monte Carlo sweep number. If we compare it with Fig. 3.3 we see that the data obtained through the HMC algorithm are much less correlated than the data obtained through the Metropolis algorithm.

Table 3.1: The observable $O$ for different $m$ values.

We simulated this model, removing the flat directions, in two ways. First, we simulated the model at $m = 0$ by applying the local trace condition, and calculated the value of the observable $O$ from this simulation. Then we simulated the model with different mass terms added and linearly extrapolated towards the zero-mass limit. Table 3.1 shows the value of $O$ for different masses $m$. Fig. 3.9 shows the plot of $O$ against the mass $m$; the straight line is the linear fit to the data.

Figure 3.9: The observable $O$ against $m$. The plot also contains the linear fit to the data. The equation of the linear fit is given by $O = -0{.}\ldots\, m + 1{.}\ldots\,$.
(3.24)

From the linear fit we get the value of $O$ at $m = 0$ to be about $1.\ldots$, which does not exactly agree with the direct $m = 0$ result given in Table 3.1. This may be because the linear extrapolation is only the first-order correction, and we need to know how the observable behaves for different mass values. If we have some idea of how the observable behaves with the mass, we can use that information for the extrapolation.

In this chapter, we simulated two different models, the harmonic oscillator and the harmonic oscillator with commutator potential, using two different algorithms. The second model is more complicated than the first. We observed that using the HMC algorithm on the second model not only made the system thermalize much faster, but also produced much smaller autocorrelation in the data than the simulation done using the Metropolis algorithm. Although we have not simulated the same model with both algorithms, the second model is the more complicated of the two, so we conclude that the HMC algorithm is more efficient than the Metropolis algorithm. From the next chapter onwards, we will use the HMC algorithm for the simulations.

Chapter 4

Bosonic D = 4 and BFSS Matrix Models

In this chapter, we first discuss how to implement gauge fields on the lattice and then apply these techniques to simulate the bosonic D = 4 and bosonic BFSS models. Simulations including fermions will be discussed in the next chapter.

Most of the contents of this section can be found in Ref. [Gattringer 09]. There these calculations are performed for lattice QCD, but we will do them for the more general gauge group $SU(N)$. In this section we describe how to implement gauge fields on a lattice using the example of Yang-Mills theory coupled to a complex scalar field in four spacetime dimensions. We take the gauge group of the gauge field to be $SU(N)$. The scalar field transforms in the fundamental representation of this gauge group.
The metric of the four-dimensional spacetime is $\eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, -1)$, and the action is
$$S = \int d^4x\,\left\{-\frac{1}{2g^2}\mathrm{Tr}\left(F_{\mu\nu}F^{\mu\nu}\right) + \left(D_\mu\Phi\right)^\dagger D^\mu\Phi - m^2\,\Phi^\dagger\Phi\right\}, \quad (4.1)$$
where
$$D_\mu = \partial_\mu + iA_\mu, \quad (4.2)$$
$$F_{\mu\nu} = \partial_{[\mu}A_{\nu]} + i\left[A_\mu, A_\nu\right]. \quad (4.3)$$
The path integral for this model is given by
$$Z = \int \mathcal{D}A_\mu\, \mathcal{D}\Phi\; \exp\{iS\}. \quad (4.4)$$
After performing the Wick rotation we get the Euclidean action for this model. The details are given in Appendix A. The metric after the Wick rotation becomes $\eta_{\mu\nu} = \mathrm{diag}(1, 1, 1, 1)$, and the Euclidean action is
$$S_E = \int d^4x_E\,\left\{\frac{1}{2g^2}\mathrm{Tr}\left(F^E_{\mu\nu}\right)^2 + \left(D_\mu\Phi\right)^\dagger D_\mu\Phi + m^2\,\Phi^\dagger\Phi\right\}, \quad (4.5)$$
where
$$F^E_{\mu\nu} = \partial_{[\mu}A_{\nu]} + i\left[A_\mu, A_\nu\right], \quad (4.6)$$
$$D_\mu = \partial_\mu + iA_\mu. \quad (4.7)$$
While working in Euclidean spacetime we note that we do not need to change the sign when we move between lower and upper indices, so we represent all tensor quantities using only lower indices. Another change is that both $\mu$ and $\nu$ now run from 1 to 4 instead of 0 to 3. As both fields are bosonic, they satisfy periodic boundary conditions in Euclidean time $t_E$, i.e.,
$$A(\vec{x}, t_E + \beta) = A(\vec{x}, t_E), \quad (4.8)$$
$$\Phi(\vec{x}, t_E + \beta) = \Phi(\vec{x}, t_E). \quad (4.9)$$
From now on we suppress the subscript $E$; the distinction between Euclidean and Minkowski quantities will be clear from the context.

Now we can discretize the above action to put the theory on the lattice. We cannot use a naive lattice discretization, as this theory is invariant under the local gauge transformation
$$\Phi(x) \longrightarrow V(x)\,\Phi(x), \quad (4.10)$$
$$A_\mu(x) \longrightarrow V(x)\,A_\mu(x)\,V^\dagger(x) + i\,\partial_\mu V(x)\,V^\dagger(x).$$
(4.11)

In the above equations $V(x) = \exp(-i\Lambda(x))$, with $\Lambda(x)$ belonging to the Lie algebra $su(N)$, $x \in \mathbb{R}^4$, and $V^\dagger(x) = V^{-1}(x)$.

Our aim is to make the lattice action locally gauge invariant such that in the limit of the lattice spacing $a$ going to zero we recover the action of the continuum theory. To do this, let us first forget about the gauge field and focus on the free scalar field, whose Euclidean action is given by
$$S = \int d^4x\,\left\{\partial_\mu\Phi^\dagger\,\partial_\mu\Phi + m^2\,\Phi^\dagger\Phi\right\}. \quad (4.12)$$
As no gauge field is present, we can simply discretize the action to get the lattice action
$$S_{\rm lat} = a^4\sum_{n\in\Lambda}\left[\sum_{\mu=1}^{4}\frac{\left(\Phi^\dagger_{n+\hat\mu} - \Phi^\dagger_n\right)\left(\Phi_{n+\hat\mu} - \Phi_n\right)}{a^2} + m^2\,\Phi^\dagger_n\Phi_n\right], \quad (4.13)$$
where $\Lambda = \{n = (n_1, n_2, n_3, n_4)\;|\; n_1, n_2, n_3 = 1, 2, \cdots, N_s;\; n_4 = 1, 2, \cdots, T\;\text{and}\; x = na\}$, and $\hat\mu$ is the basis vector on the lattice: $\hat 1 = (1, 0, 0, 0)$ and similarly for $\hat 2$, $\hat 3$ and $\hat 4$. We also apply periodic boundary conditions in all four directions. It is easy to see that the term $\Phi^\dagger_n\Phi_n$ is gauge invariant, as both factors belong to the same lattice point and the gauge transformations cancel each other, leaving this term invariant. Now we need to check whether the term $\Phi^\dagger_{n+\hat\mu}\Phi_n$ remains invariant or not. We do the required calculation by transforming the fields at both of these lattice points:
$$\Phi_n \longrightarrow V(n)\,\Phi_n, \quad (4.14)$$
$$\Phi^\dagger_{n+\hat\mu} \longrightarrow \Phi^\dagger_{n+\hat\mu}\,V^\dagger(n + \hat\mu), \quad (4.15)$$
$$\Phi^\dagger_{n+\hat\mu}\Phi_n \longrightarrow \Phi^\dagger_{n+\hat\mu}\,V^\dagger(n + \hat\mu)\,V(n)\,\Phi_n, \quad (4.16)$$
where both $V^\dagger(n + \hat\mu)$ and $V(n)$ belong to $SU(N)$, and $V(n + \hat\mu)$ and $V(n)$ can in general be different. Thus the above term is not locally gauge invariant, and this term (together with its complex conjugate) is the only term in the action Eq. (4.13) which is not locally gauge invariant. We need to insert something between $\Phi^\dagger_{n+\hat\mu}$ and $\Phi_n$ which cancels these gauge transformations. We know one such object, which transforms in the opposite way to the scalar field.
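The failure of gauge invariance of the hopping term $\Phi^\dagger_{n+\hat\mu}\Phi_n$, the invariance of the on-site term, and the cure provided by the gauge transporter introduced below can all be checked numerically. The sketch uses random SU(2) matrices built from unit quaternions; the scalar values and seed are arbitrary choices for the example.

```python
import math
import random

rng = random.Random(17)

def random_su2():
    """SU(2) matrix from a random unit quaternion a0 + i a.sigma."""
    a = [rng.gauss(0.0, 1.0) for _ in range(4)]
    r = math.sqrt(sum(x * x for x in a))
    a = [x / r for x in a]
    return [[complex(a[0], a[3]), complex(a[2], a[1])],
            [complex(-a[2], a[1]), complex(a[0], -a[3])]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def mvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def inner(u, v):
    """u^dagger v for fundamental-representation scalars."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

phi_n = [complex(0.3, 0.1), complex(-0.2, 0.7)]     # scalar at site n
phi_np = [complex(0.5, -0.4), complex(0.1, 0.2)]    # scalar at site n + mu_hat
V_n, V_np = random_su2(), random_su2()              # independent gauge rotations
U = random_su2()                                    # link field U_mu(n)

# On-site term: invariant under the gauge transformation
onsite = inner(phi_n, phi_n)
onsite_t = inner(mvec(V_n, phi_n), mvec(V_n, phi_n))
# Bare hopping term: NOT invariant under independent rotations at the two sites
hop = inner(phi_np, phi_n)
hop_t = inner(mvec(V_np, phi_np), mvec(V_n, phi_n))
# With the link field transforming as U -> V(n) U V^dagger(n + mu_hat):
U_t = mmul(mmul(V_n, U), dagger(V_np))
covariant = inner(phi_np, mvec(dagger(U), phi_n))
covariant_t = inner(mvec(V_np, phi_np), mvec(dagger(U_t), mvec(V_n, phi_n)))
```

The covariant combination $\Phi^\dagger_{n+\hat\mu} U^\dagger_\mu(n)\Phi_n$ is invariant to machine precision, while the bare hopping term changes under generic independent rotations.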
It is the path-ordered exponential of the line integral of the gauge field along some curve $\gamma$ connecting the points $x$ and $y$. It is called the Wilson line or the gauge transporter. The expression for the operator and its transformation property are
$$U(x, y) = \mathcal{P}\exp\left\{i\int_\gamma dz_\mu\, A_\mu(z)\right\}, \quad (4.17)$$
$$U(x, y) \longrightarrow V(x)\,U(x, y)\,V^\dagger(y), \quad (4.18)$$
where $\mathcal{P}$ stands for path ordering. If we have an analogue of this on the lattice, we can use it to make $\Phi^\dagger_{n+\hat\mu}\Phi_n$ locally gauge invariant. If we replace $x$ with $n$ and $y$ with $n + \hat\mu$, and approximate the line integral by $iaA_\mu(n)$, then to the lowest order of approximation we do not need any path ordering; only higher-order terms involve path ordering. Thus we get
$$U(n, n + \hat\mu) = \exp\left\{iaA_\mu(n)\right\} = U_\mu(n), \quad (4.19)$$
$$U_\mu(n) \longrightarrow V(n)\,U_\mu(n)\,V^\dagger(n + \hat\mu). \quad (4.20)$$
This operator $U_\mu(n)$ is known as the link field. Also note that $U_{-\mu}(n) = U(n, n - \hat\mu) = \exp\left\{-iaA_\mu(n - \hat\mu)\right\} = U^\dagger_\mu(n - \hat\mu)$. We note that $U_\mu(n)$ has a directional nature: $U_\mu(n)$ connects $n$ to $n + \hat\mu$ and $U_{-\mu}(n)$ connects $n$ to $n - \hat\mu$. So if we insert $U(n + \hat\mu, n) = U_{-\mu}(n + \hat\mu) = U^\dagger_\mu(n)$ between $\Phi^\dagger_{n+\hat\mu}$ and $\Phi_n$, we get $\Phi^\dagger_{n+\hat\mu}U^\dagger_\mu(n)\Phi_n$. We can easily check that it is locally gauge invariant. The calculation is provided below:
$$\Phi^\dagger_{n+\hat\mu}U^\dagger_\mu(n)\Phi_n \longrightarrow \Phi^\dagger_{n+\hat\mu}\,V^\dagger(n + \hat\mu)\,V(n + \hat\mu)\,U^\dagger_\mu(n)\,V^\dagger(n)\,V(n)\,\Phi_n = \Phi^\dagger_{n+\hat\mu}U^\dagger_\mu(n)\Phi_n. \quad (4.21)$$
Now we are left with our last task: to check that in the limit $a \longrightarrow 0$ we recover the continuum action. This is done by first replacing $\Phi^\dagger_{n+\hat\mu}$ with $\Phi^\dagger_{n+\hat\mu}U^\dagger_\mu(n)$. The action then becomes
$$S_{\rm sc} = a^4\sum_{n\in\Lambda}\left[\sum_{\mu=1}^{4}\frac{\left(\Phi^\dagger_{n+\hat\mu}U^\dagger_\mu(n) - \Phi^\dagger_n\right)\left(U_\mu(n)\Phi_{n+\hat\mu} - \Phi_n\right)}{a^2} + m^2\,\Phi^\dagger_n\Phi_n\right]. \quad (4.22)$$
From Eq. (4.19) it is clear that $U_\mu(n) = \mathbb{1} + iaA_\mu(n) + \mathcal{O}(a^2)$ and $U^\dagger_\mu(n) = \mathbb{1} - iaA_\mu(n) + \mathcal{O}(a^2)$, as $A_\mu(n) \in su(N)$. Substituting this in Eq.
(4.22) and collecting the terms we get
\[ S_{\rm sc} = a^4 \sum_{n\in\Lambda} \Bigg[ \sum_{\mu=1}^{4} \left( \frac{ \Phi^\dagger_{n+\hat\mu}\{1 - iaA_\mu(n) + O(a^2)\} - \Phi^\dagger_n }{a} \right) \left( \frac{ \{1 + iaA_\mu(n) + O(a^2)\}\Phi_{n+\hat\mu} - \Phi_n }{a} \right) + m^2\Phi^\dagger_n\Phi_n \Bigg] \]
\[ = a^4 \sum_{n\in\Lambda} \Bigg[ \sum_{\mu=1}^{4} \left( \frac{\Phi^\dagger_{n+\hat\mu} - \Phi^\dagger_n}{a} - i\Phi^\dagger_{n+\hat\mu}A_\mu(n) \right) \left( \frac{\Phi_{n+\hat\mu} - \Phi_n}{a} + iA_\mu(n)\Phi_{n+\hat\mu} \right) + m^2\Phi^\dagger_n\Phi_n + O(a) \Bigg]. \tag{4.23} \]
Simplifying further we get
\[ S = \lim_{a\to 0}\, a^4 \sum_{n\in\Lambda} \Bigg[ \sum_{\mu=1}^{4} \left( \frac{\Phi^\dagger_{n+\hat\mu} - \Phi^\dagger_n}{a} - i\Phi^\dagger_{n+\hat\mu}A_\mu(n) \right)\left( \frac{\Phi_{n+\hat\mu} - \Phi_n}{a} + iA_\mu(n)\Phi_{n+\hat\mu} \right) + m^2\Phi^\dagger_n\Phi_n + O(a) \Bigg] \]
\[ = \int d^4x \left\{ \left( \partial_\mu\Phi^\dagger(x) - i\Phi^\dagger(x)A_\mu(x) \right)\left( \partial_\mu\Phi(x) + iA_\mu(x)\Phi(x) \right) + m^2\Phi^\dagger(x)\Phi(x) \right\} \]
\[ = \int d^4x \left\{ (D_\mu\Phi(x))^\dagger D_\mu\Phi(x) + m^2\Phi^\dagger(x)\Phi(x) \right\}. \tag{4.24} \]
If we compare Eq. (4.24) with Eq. (4.5) we see that we have obtained all parts of the action apart from the kinetic term for the gauge field. For the kinetic term we need to introduce one more operator, a combination of four link fields known as the plaquette. It has the form
\[ U_{\mu\nu}(n) = U_\mu(n)\, U_\nu(n+\hat\mu)\, U_{-\mu}(n+\hat\mu+\hat\nu)\, U_{-\nu}(n+\hat\nu) = U_\mu(n)\, U_\nu(n+\hat\mu)\, U^\dagger_\mu(n+\hat\nu)\, U^\dagger_\nu(n). \tag{4.25} \]
Pictorially we can understand $U_{\mu\nu}(n)$ with the help of Fig. 4.1. The trace of $U_{\mu\nu}(n)$ is a gauge-invariant object. We can see this easily:
\[ \operatorname{Tr}(U_{\mu\nu}(n)) \longrightarrow \operatorname{Tr}\left( V(n)\, U_\mu(n) U_\nu(n+\hat\mu) U^\dagger_\mu(n+\hat\nu) U^\dagger_\nu(n)\, V^\dagger(n) \right) = \operatorname{Tr}\left( V^\dagger(n) V(n)\, U_{\mu\nu}(n) \right) = \operatorname{Tr}(U_{\mu\nu}(n)). \tag{4.26} \]
Using the plaquette we can form the kinetic term for the gauge field. This lattice action was first introduced by K. G. Wilson [Wilson 74] in 1974.
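The gauge invariance of the plaquette trace can be verified numerically. The following sketch (an illustration only: it uses generic Haar-random unitaries rather than strictly $SU(N)$ matrices, and a tiny two-dimensional lattice, neither of which affects the algebraic cancellation being checked) builds a plaquette from random link matrices, applies a random gauge transformation to every link, and confirms that the trace is unchanged:

```python
import numpy as np

def random_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix, with column phases fixed,
    # gives a Haar-random unitary matrix.
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d)).conj()

rng = np.random.default_rng(0)
N, L = 3, 4  # N x N link matrices on a tiny L x L two-dimensional lattice

# Link fields U[mu][x][y] and a random gauge transformation V[x][y]
U = [[[random_unitary(N, rng) for _ in range(L)] for _ in range(L)] for _ in range(2)]
V = [[random_unitary(N, rng) for _ in range(L)] for _ in range(L)]

def plaquette(U, x, y):
    # U_{12}(n) = U_1(n) U_2(n+1hat) U_1(n+2hat)^dag U_2(n)^dag, cf. Eq. (4.25)
    xp, yp = (x + 1) % L, (y + 1) % L
    return U[0][x][y] @ U[1][xp][y] @ U[0][x][yp].conj().T @ U[1][x][y].conj().T

before = np.trace(plaquette(U, 1, 2))

# Gauge transform every link: U_mu(n) -> V(n) U_mu(n) V(n+mu)^dag, cf. Eq. (4.20)
Ug = [[[None] * L for _ in range(L)] for _ in range(2)]
for x in range(L):
    for y in range(L):
        Ug[0][x][y] = V[x][y] @ U[0][x][y] @ V[(x + 1) % L][y].conj().T
        Ug[1][x][y] = V[x][y] @ U[1][x][y] @ V[x][(y + 1) % L].conj().T

after = np.trace(plaquette(Ug, 1, 2))
print(abs(before - after) < 1e-10)  # the trace of the plaquette is gauge invariant
```

All internal factors $V^\dagger V$ cancel around the closed loop, leaving only $V(n)\,U_{\mu\nu}(n)\,V^\dagger(n)$ inside the trace, exactly as in Eq. (4.26).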
The Wilson gauge action is given by
\[ S_G = \frac{2}{g^2} \sum_{n\in\Lambda} \sum_{\mu<\nu} \operatorname{Re}\left[ \operatorname{Tr}\left( 1 - U_{\mu\nu}(n) \right) \right], \tag{4.27} \]
where all plaquettes are counted with only one orientation. Our task now is to show that this converges to the continuum action in the limit $a\to 0$. For that we need the Baker-Campbell-Hausdorff formula to evaluate the plaquette. The Baker-Campbell-Hausdorff formula is given by
\[ \exp(X)\exp(Y) = \exp\left( X + Y + \frac{1}{2}[X,Y] + \cdots \right). \tag{4.28} \]
[Figure 4.1: The link variables form the plaquette $U_{\mu\nu}$. The circle indicates the direction we need to follow to form the plaquette. This figure is taken from Ref. [Joseph 20].]
Then, using this,
\[ U_{\mu\nu}(n) = \exp(iaA_\mu(n))\, \exp(iaA_\nu(n+\hat\mu))\, \exp(-iaA_\mu(n+\hat\nu))\, \exp(-iaA_\nu(n)) \]
\[ = \exp\left\{ ia\left( A_\mu(n) + A_\nu(n+\hat\mu) \right) - \frac{a^2}{2}\left[ A_\mu(n), A_\nu(n+\hat\mu) \right] + \cdots \right\} \times \exp\left\{ -ia\left( A_\mu(n+\hat\nu) + A_\nu(n) \right) - \frac{a^2}{2}\left[ A_\mu(n+\hat\nu), A_\nu(n) \right] + \cdots \right\}. \tag{4.29} \]
Expanding the terms,
\[ U_{\mu\nu}(n) = \exp\Bigg\{ ia\Big( A_\nu(n+\hat\mu) - A_\nu(n) - \big( A_\mu(n+\hat\nu) - A_\mu(n) \big) \Big) + \frac{a^2}{2}\Big( [A_\mu(n), A_\mu(n+\hat\nu)] + [A_\nu(n+\hat\mu), A_\nu(n)] + [A_\mu(n), A_\nu(n)] + [A_\nu(n+\hat\mu), A_\mu(n+\hat\nu)] - [A_\mu(n), A_\nu(n+\hat\mu)] - [A_\mu(n+\hat\nu), A_\nu(n)] \Big) + O(a^3) \Bigg\}. \tag{4.30} \]
In the limit $a\to 0$, $U_{\mu\nu}(n)$ takes the form
\[ U_{\mu\nu}(n) = \exp\left\{ ia^2\left( \partial_\mu A_\nu(n) - \partial_\nu A_\mu(n) + i[A_\mu(n), A_\nu(n)] \right) \right\} = \exp\left\{ ia^2 F_{\mu\nu}(n) \right\}. \tag{4.31} \]
Expanding $U_{\mu\nu}(n)$ up to order $a^4$ we get
\[ U_{\mu\nu}(n) = 1 + ia^2 F_{\mu\nu}(n) - \frac{a^4}{2} F^2_{\mu\nu}(n) + \cdots. \tag{4.32} \]
Substituting the value of $U_{\mu\nu}(n)$ in Eq. (4.27) we get
\[ S_G = \frac{2}{g^2}\sum_{n\in\Lambda}\sum_{\mu<\nu} \operatorname{Re}\left[ \operatorname{Tr}\left( -ia^2F_{\mu\nu}(n) + \frac{a^4}{2}F^2_{\mu\nu}(n) \right) \right] = \frac{1}{g^2}\sum_{n\in\Lambda}\sum_{\mu<\nu} \operatorname{Tr}\left( -ia^2F_{\mu\nu}(n) + \frac{a^4}{2}F^2_{\mu\nu}(n) + ia^2F_{\mu\nu}(n) + \frac{a^4}{2}F^2_{\mu\nu}(n) \right). \]
(4.33)
That is,
\[ S_G = \frac{a^4}{g^2} \sum_{n\in\Lambda}\sum_{\mu<\nu} \operatorname{Tr}\left( F^2_{\mu\nu}(n) \right) \tag{4.34} \]
\[ \phantom{S_G} = \frac{a^4}{2g^2} \sum_{n\in\Lambda}\sum_{\mu,\nu} \operatorname{Tr}\left( F^2_{\mu\nu}(n) \right). \tag{4.35} \]
Taking $a \to 0$,
\[ S = \lim_{a\to 0} S_G = \lim_{a\to 0} \frac{a^4}{2g^2}\sum_{n\in\Lambda}\sum_{\mu,\nu} \operatorname{Tr}\left( F^2_{\mu\nu}(n) \right) = \frac{1}{2g^2}\int d^4x\, \operatorname{Tr}\left( F^2_{\mu\nu}(x) \right). \tag{4.36} \]
Thus, in the limit $a\to 0$, the Wilson gauge action converges to the correct continuum gauge action. The full lattice action of the theory is the sum of Eq. (4.22) and Eq. (4.34). That is,
\[ S_{\rm Full} = S_{\rm sc} + S_G. \tag{4.37} \]
Apart from the lattice action we also need to take care of the measure for the integral in the partition function. In the continuum theory the measure of the partition function involves the gauge fields, but on the lattice we have the link fields, which belong to the Lie group $SU(N)$, as opposed to the gauge fields, which belong to the Lie algebra $\mathfrak{su}(N)$. For the invariance of the measure on the lattice we need a measure which remains invariant under both left and right multiplication by group elements. Such a measure for integration over the group manifold $SU(N)$ is the Haar measure. More details on this measure can be found in Ref. [Gattringer 09]. One important property of the Haar measure is
\[ \mathcal{D}U_\mu(n) = \mathcal{D}\left( V(n)U_\mu(n) \right) = \mathcal{D}\left( U_\mu(n)V^\dagger(n+\hat\mu) \right) = \mathcal{D}\left( V(n)U_\mu(n)V^\dagger(n+\hat\mu) \right). \tag{4.38} \]

Bosonic $D = 4$ Model

In this section we discuss the bosonic $D = 4$ model. It is obtained by dimensionally reducing $3+1$-dimensional Yang-Mills theory with gauge group $SU(N)$ to $0+1$ dimensions. The details are provided in Appendix A. The Euclidean action of the model is given by
\[ S_E = \frac{N}{\lambda} \int_0^\beta dt\, \operatorname{Tr}\left\{ \frac{1}{2}\left( D_tX^i \right)^2 - \frac{1}{4}\left[ X^i, X^j \right]^2 \right\}, \tag{4.39} \]
where $D_tX^i = \partial_tX^i - i[A(t), X^i]$ is the covariant derivative.
As always, repeated indices are summed over; $i, j = 1, 2, \dots, d$; $\lambda = Ng^2$ is the 't Hooft coupling, with $g$ the one-dimensional Yang-Mills coupling; the $X^i$ are $N\times N$ traceless Hermitian matrices; and $A(t)$ is the gauge field. We also have periodic boundary conditions for both the scalar and the gauge fields, i.e., $X^i(t+\beta) = X^i(t)$ and $A(t+\beta) = A(t)$. For this model $d = 3$, since the $D = 4$ model has three scalar fields, but we retain $d$ because the BFSS model action is also given by Eq. (4.39), with $d = 9$, i.e., nine scalar fields. The action Eq. (4.39) remains invariant under the gauge transformation
\[ X^i(t) \longrightarrow V(t)\, X^i(t)\, V^\dagger(t), \tag{4.40} \]
\[ A(t) \longrightarrow V(t)\left( A(t) + i\partial_t \right) V^\dagger(t), \tag{4.41} \]
where $V(t) \in SU(N)$. The partition function is given by
\[ Z = \int \mathcal{D}A\, \mathcal{D}X\, \exp\{-S_E\}. \tag{4.42} \]
Now we need to put this theory on the lattice. The whole procedure is described in Ref. [Filev 16]; we use the same arguments here. First, note that we only have the temporal direction, so the lattice is one-dimensional and is given by $\Lambda = \{ n \mid n = 0, 1, \dots, T-1;\ t = na \}$. Imposing the boundary conditions means we identify $n = 0$ with $n = T$, and $\beta = aT$. To discretize the covariant derivative we introduce the link fields
\[ U_{n,n+1} = \mathcal{P}\exp\left( i\int_{na}^{(n+1)a} dt\, A(t) \right) \approx \exp\{ iaA(t_n) \}. \tag{4.43} \]
Discretizing the ordinary derivative gives
\[ \partial_t X^i_n \longrightarrow \frac{X^i_{n+1} - X^i_n}{a}. \tag{4.44} \]
We need to bring the field at $t_{n+1}$ back to $t_n$, and for that we use the link field $U_{n,n+1}$. The discretized ordinary derivative then becomes the discretized covariant derivative,
\[ D_tX^i_n \longrightarrow \frac{U_{n,n+1}X^i_{n+1}U^\dagger_{n,n+1} - X^i_n}{a}. \tag{4.45} \]
Substituting this back in the discretized action we get
\[ S = \frac{Na}{\lambda} \sum_{n=0}^{T-1} \operatorname{Tr}\left[ \frac{1}{2}\left( \frac{U_{n,n+1}X^i_{n+1}U^\dagger_{n,n+1} - X^i_n}{a} \right)^2 - \frac{1}{4}\left[ X^i_n, X^j_n \right]^2 \right]. \]
(4.46)
Simplifying the above equation we get
\[ S = \frac{N}{\lambda}\sum_{n=0}^{T-1} \operatorname{Tr}\left\{ -\frac{1}{a}U_{n,n+1}X^i_{n+1}U^\dagger_{n,n+1}X^i_n + \frac{1}{a}\left( X^i_n \right)^2 - \frac{a}{4}\left[ X^i_n, X^j_n \right]^2 \right\}. \tag{4.47} \]
Now we use the $SU(N)$ symmetry to simplify the action further. We change the fields by the following set of transformations:
\[ X^i_0 \longrightarrow X^i_0, \quad X^i_1 \longrightarrow U^\dagger_{0,1}X^i_1U_{0,1}, \quad X^i_2 \longrightarrow \left( U_{0,1}U_{1,2} \right)^\dagger X^i_2 \left( U_{0,1}U_{1,2} \right), \quad \dots, \quad X^i_{T-1} \longrightarrow \left( U_{0,1}U_{1,2}\cdots U_{T-2,T-1} \right)^\dagger X^i_{T-1} \left( U_{0,1}U_{1,2}\cdots U_{T-2,T-1} \right), \tag{4.48} \]
and also define the holonomy
\[ W = U_{0,1}U_{1,2}\cdots U_{T-2,T-1}U_{T-1,0}. \tag{4.49} \]
Then we can express the action in the following way:
\[ S = -\frac{N}{a\lambda} \operatorname{Tr}\left\{ \sum_{n=0}^{T-2} X^i_{n+1}X^i_n + WX^i_0W^\dagger X^i_{T-1} \right\} + \frac{N}{\lambda}\sum_{n=0}^{T-1} \operatorname{Tr}\left\{ \frac{1}{a}\left( X^i_n \right)^2 - \frac{a}{4}\left[ X^i_n, X^j_n \right]^2 \right\}. \tag{4.50} \]
Since $W$ is unitary, it can be diagonalized by a unitary matrix, and using the residual gauge symmetry of the fields $X^i_n$ we may work with its diagonal form. Let $W$ be diagonalized by the matrix $V$; then
\[ W = VDV^\dagger, \tag{4.51} \]
where
\[ D = \operatorname{diag}\left\{ e^{i\theta_1}, e^{i\theta_2}, \dots, e^{i\theta_N} \right\}. \tag{4.52} \]
If we transform $X^i_n \longrightarrow VX^i_nV^\dagger$ then the action becomes
\[ S = -\frac{N}{a\lambda}\operatorname{Tr}\left\{ \sum_{n=0}^{T-2} X^i_{n+1}X^i_n + DX^i_0D^\dagger X^i_{T-1} \right\} + \frac{N}{\lambda}\sum_{n=0}^{T-1}\operatorname{Tr}\left\{ \frac{1}{a}\left( X^i_n \right)^2 - \frac{a}{4}\left[ X^i_n, X^j_n \right]^2 \right\}. \tag{4.53} \]
We will use the action Eq. (4.53) for the simulations. Now let us take a look at the integration measure for this system. The integration measure for the scalar fields remains the same, but the integration measure for the link fields simplifies considerably. It is given by
\[ \prod_{n=0}^{T-1} \mathcal{D}U_{n,n+1} = \prod_{n=1}^{T-1}\mathcal{D}U_{n,n+1}\; \mathcal{D}U_{0,1}. \tag{4.54} \]
Now using the property of the Haar measure, Eq. (4.38), and
\[ U_{0,1} = W\left( U_{1,2}\cdots U_{T-2,T-1}U_{T-1,0} \right)^\dagger, \tag{4.55} \]
we get $\mathcal{D}U_{0,1} = \mathcal{D}W$. Looking at Eq. (4.53), the action depends on the link fields only through the diagonal matrix $D$, which is the diagonalized form of $W$.
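The diagonalization step $W = VDV^\dagger$ can be illustrated numerically. In the sketch below, a stand-in holonomy $W = \exp(iH)$ is built from a random Hermitian $H$ (in the simulation $W$ is the ordered product of link fields, but any unitary matrix works for this check); the unitary $V$ that diagonalizes $H$ also diagonalizes $W$, and the resulting diagonal entries are the pure phases $e^{i\theta_k}$ whose angles enter the Faddeev-Popov term:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Stand-in holonomy: W = exp(iH) for a random Hermitian H (illustration only).
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2
phi, V = np.linalg.eigh(H)                      # H = V diag(phi) V^dag, V unitary
W = V @ np.diag(np.exp(1j * phi)) @ V.conj().T  # a generic unitary matrix

# W = V D V^dag with D = diag(e^{i theta_k}), cf. Eqs. (4.51)-(4.52)
D = V.conj().T @ W @ V
theta = np.angle(np.diagonal(D))
assert np.allclose(D, np.diag(np.exp(1j * theta)), atol=1e-10)  # D is diagonal
assert np.allclose(V @ D @ V.conj().T, W)                       # W = V D V^dag
print(np.round(theta, 3))  # the angles theta_k entering D
```

Since $W$ is unitary (hence normal), such a unitary diagonalization always exists, and the eigenvalues lie on the unit circle.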
So we can integrate the extra variables out, and the integration measure simplifies to
\[ Z = \int \prod_{n=0}^{T-1}\mathcal{D}U_{n,n+1}\, \mathcal{D}X\, e^{-S[X,D]} = \int \prod_{n=1}^{T-1}\mathcal{D}U_{n,n+1}\, \mathcal{D}W\, \mathcal{D}X\, e^{-S[X,D]} = \left( \operatorname{Vol}\left( SU(N) \right) \right)^{T-1} \int \mathcal{D}W\, \mathcal{D}X\, e^{-S[X,D]} \]
\[ \propto \int \prod_{i=1}^{N} d\theta_i \prod_{j>k}^{N} \left| e^{i\theta_j} - e^{i\theta_k} \right|^2 \mathcal{D}X\, e^{-S[X,D]} \propto \int \prod_{i=1}^{N} d\theta_i \prod_{j>k}^{N} \sin^2\left( \frac{\theta_j - \theta_k}{2} \right) \mathcal{D}X\, e^{-S[X,D]} \propto \int \prod_{i=1}^{N} d\theta_i\, \mathcal{D}X\, e^{-S[X,D(\theta)] - S_{\rm FP}[\theta]}, \tag{4.56} \]
where $S_{\rm FP}[\theta]$ is known as the Faddeev-Popov term, given by
\[ S_{\rm FP}[\theta] = -\sum_{j<k} \ln \sin^2\left( \frac{\theta_j - \theta_k}{2} \right). \tag{4.57} \]
Now we will introduce conjugate momenta for all the field variables. The momentum conjugate to $X^i_n$ is $P^i_n$, where $P^i_n$ is an $N\times N$ traceless Hermitian matrix, and the momentum conjugate to $\theta_k$ is $P_k$. The Hamiltonian for the molecular dynamics part of the HMC is
\[ H = \sum_{n=0}^{T-1} \frac{1}{2}\operatorname{Tr}\left( P^i_n \right)^2 + \frac{1}{2}\sum_{j=1}^{N} P_j^2 + S[X, D(\theta)] + S_{\rm FP}[\theta]. \tag{4.58} \]
Hamilton's equations are given by
\[ \dot X^i_{n,rs} = \frac{\partial H}{\partial P^i_{n,sr}} = P^i_{n,rs}, \qquad \dot P^i_{n,rs} = -\frac{\partial H}{\partial X^i_{n,sr}}, \tag{4.59} \]
\[ \dot\theta_j = \frac{\partial H}{\partial P_j} = P_j, \qquad \dot P_j = -\frac{\partial H}{\partial \theta_j}, \tag{4.60} \]
where the dot represents the derivative with respect to the fictitious time $\tau$. The forces can be calculated from the Hamiltonian, Eq. (4.58), and are given by
\[ -\frac{\partial H}{\partial X^i_{0,sr}} = \frac{N}{a\lambda}\left( X^i_1 - 2X^i_0 + D^\dagger X^i_{T-1}D \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_0, [X^i_0, X^j_0] \right]_{rs}, \]
\[ -\frac{\partial H}{\partial X^i_{n,sr}} = \frac{N}{a\lambda}\left( X^i_{n+1} - 2X^i_n + X^i_{n-1} \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_n, [X^i_n, X^j_n] \right]_{rs} \quad \text{for } n = 1, 2, \dots, T-2, \]
\[ -\frac{\partial H}{\partial X^i_{T-1,sr}} = \frac{N}{a\lambda}\left( DX^i_0D^\dagger - 2X^i_{T-1} + X^i_{T-2} \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_{T-1}, [X^i_{T-1}, X^j_{T-1}] \right]_{rs}, \]
\[ -\frac{\partial H}{\partial\theta_j} = \frac{2N}{a\lambda}\sum_{k=1}^{N} \operatorname{Re}\left( iX^i_{T-1,kj}X^i_{0,jk}\, e^{i(\theta_j - \theta_k)} \right) + \sum_{\substack{k=1 \\ k\neq j}}^{N} \cot\left( \frac{\theta_j - \theta_k}{2} \right). \tag{4.61} \]
Since $D \in SU(N)$, the $\theta$ variables are not all independent; they are constrained by the equation
\[ \sum_{i=1}^{N} \theta_i = 0, \tag{4.62} \]
as $\det(D) = 1$.
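Hamilton's equations above are integrated numerically with a leapfrog scheme during the molecular dynamics part of HMC. The following sketch shows a generic leapfrog integrator applied to a toy harmonic action (not the full matrix-model force of Eq. (4.61)); the two properties it checks, approximate energy conservation and exact reversibility, are what make the Metropolis accept/reject step of HMC exact:

```python
import numpy as np

def leapfrog(q, p, force, eps, steps):
    """Leapfrog integrator for H = p^2/2 + S(q); `force` returns -dS/dq."""
    p = p + 0.5 * eps * force(q)        # initial half step in momentum
    for _ in range(steps - 1):
        q = q + eps * p                 # full step in position
        p = p + eps * force(q)          # full step in momentum
    q = q + eps * p
    p = p + 0.5 * eps * force(q)        # final half step in momentum
    return q, p

# Toy check on S(q) = q^2/2.
S = lambda q: 0.5 * np.sum(q**2)
force = lambda q: -q
rng = np.random.default_rng(2)
q0, p0 = rng.standard_normal(5), rng.standard_normal(5)
q1, p1 = leapfrog(q0, p0, force, eps=0.005, steps=200)

dH = (0.5 * p1 @ p1 + S(q1)) - (0.5 * p0 @ p0 + S(q0))
qb, pb = leapfrog(q1, -p1, force, eps=0.005, steps=200)  # flip momentum, rerun
print(abs(dH) < 1e-3, np.allclose(qb, q0))  # small energy error; reversible
```

The same integrator is applied to the pairs $(X^i_n, P^i_n)$ and $(\theta_j, P_j)$, with the forces of Eq. (4.61) in place of the toy force.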
The observables we are interested in for the $D = 4$ model are the Polyakov loop $|P|$, the extent of space $R^2$, and the internal energy $E$. Their definitions and discretized forms are given below.
- Polyakov loop $|P|$:
\[ \langle |P| \rangle \equiv \frac{1}{N}\left\langle \left| \operatorname{Tr}\, \mathcal{P}\, e^{\,i\oint dt\, A(t)} \right| \right\rangle = \left\langle \left| \frac{\operatorname{Tr}(D)}{N} \right| \right\rangle \quad \text{(on the lattice)}. \tag{4.63} \]
- Extent of space $R^2$:
\[ \langle R^2 \rangle \equiv \left\langle \frac{1}{N\beta}\int_0^\beta dt\, \operatorname{Tr}\left( X^i \right)^2 \right\rangle = \left\langle \frac{1}{NT}\sum_{n=0}^{T-1}\operatorname{Tr}\left( X^i_n \right)^2 \right\rangle \quad \text{(discretized form)}. \tag{4.64} \]
- Internal energy $E$:
\[ \left\langle \frac{E}{N^2} \right\rangle \equiv \left\langle -\frac{3}{4\lambda N\beta}\int_0^\beta dt\, \operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right) \right\rangle = \left\langle -\frac{3}{4\lambda NT}\sum_{n=0}^{T-1}\operatorname{Tr}\left( \left[ X^i_n, X^j_n \right]^2 \right) \right\rangle \quad \text{(discretized form)}. \tag{4.65} \]
In the simulations each variable and parameter is measured in units of the 't Hooft coupling $\lambda$. Since the Yang-Mills coupling $g^2$ has dimension $[L^{-3}]$ in one dimension, $X^i_n$ and $A(n)$ can be measured in units of $\lambda^{1/3}$, and $a$ in units of $\lambda^{-1/3}$. Thus every variable and parameter appearing in Eq. (4.53) is dimensionless. The parameter values used in the simulations are $N = 4$, $T = 20$, $d = 3$ and $\lambda = 1$.
The simulations can be performed in two ways. One way is to fix the lattice spacing $a$ and perform the simulation for different temperatures. However, for large temperature values the lattice size becomes large, and it then takes quite a lot of time to simulate the model. The other way is to fix the lattice size $T$ and vary the lattice spacing $a$. In this method the lattice size remains constant, which ensures that the simulation takes the same amount of time for different temperatures, but the lattice spacing increases with increasing temperature, so we need to choose $T$ such that $a$ remains small enough for the simulation.
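The discretized observables defined above can be measured on a single field configuration $\{X^i_n, \theta_k\}$. A minimal NumPy sketch follows (the array shapes, the random test data, and the $E/N^2$ normalization as reconstructed in this text are assumptions of the illustration, not the production measurement code):

```python
import numpy as np

def polyakov_loop(theta, N):
    # |Tr D| / N with D = diag(e^{i theta_k}), cf. Eq. (4.63)
    return np.abs(np.sum(np.exp(1j * theta))) / N

def extent_of_space(X, N, T):
    # R^2 = (1/(N T)) sum_n Tr (X^i_n)^2, cf. Eq. (4.64)
    return sum(np.trace(X[n, i] @ X[n, i]).real
               for n in range(T) for i in range(X.shape[1])) / (N * T)

def internal_energy(X, N, T, lam=1.0):
    # E/N^2 = -(3/(4 lam N T)) sum_n Tr [X^i_n, X^j_n]^2 (normalization as in the text)
    tot = 0.0
    for n in range(T):
        for i in range(X.shape[1]):
            for j in range(X.shape[1]):
                c = X[n, i] @ X[n, j] - X[n, j] @ X[n, i]
                tot += np.trace(c @ c).real
    return -3.0 * tot / (4.0 * lam * N * T)

# Random Hermitian test data with the shapes used in the runs described here.
rng = np.random.default_rng(3)
N, T, d = 4, 20, 3
A = rng.standard_normal((T, d, N, N)) + 1j * rng.standard_normal((T, d, N, N))
X = (A + np.conj(np.transpose(A, (0, 1, 3, 2)))) / 2
theta = rng.uniform(-np.pi, np.pi, N)
print(polyakov_loop(theta, N), extent_of_space(X, N, T), internal_energy(X, N, T))
```

Note that $\operatorname{Tr}[X^i,X^j]^2 \le 0$ for Hermitian matrices, so the measured energy is non-negative on any configuration.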
We performed the simulations by fixing $T = 20$ and obtained good results.
These simulations can be quite heavy. They take a long time, so we cannot risk running them in a single go without saving the data and field configurations in between. Incidents such as power loss or a system crash would destroy the final field configuration and force us to rerun the whole simulation. To guard against this we wrote a bash script that stops the code after every 100 sweeps, saves the data and field configurations, and then restarts it. In this way we never lose data.
As we have already mentioned, the simulations of this model are quite heavy, so we run the simulation for each temperature on a single core of the computer, running all temperatures simultaneously on multiple cores. This procedure saves quite a lot of time. (Another way to save simulation time would be to write code suitable for parallel computing.)
We ran the simulations for 17 different temperature values and, for each temperature, for 1600K sweeps. The results are provided below. Our results are in excellent agreement with the results given in Ref. [Hanada 07].

Polyakov Loop $|P|$

We have already mentioned in Chapter 1 that the Polyakov loop $|P|$ acts as an order parameter for the confinement/deconfinement phase transition. Fig. 4.2 shows the plot of the Polyakov loop against temperature $T$. The dotted and bold black curves show the leading and next-to-leading terms of the high temperature expansion (HTE) given in Ref. [Kawahara 07a]. The system remains in the confined phase at small temperature, and after crossing the critical temperature $T_c$ the system moves to the deconfined phase. In the limit $N \to \infty$ the plot would be given by the Heaviside step function with the discontinuity at the critical temperature $T_c$.
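At finite $N$ the step is smoothed out, and a smoothed step family can be checked against the $N\to\infty$ expectation directly. The stdlib-Python sketch below evaluates the arctangent family $f(T) = A\tan^{-1}(B(T - T_c)) + D$ with $A = 1/\pi$, $D = 0.5$ and confirms that it approaches the Heaviside step at $T_c$ as $B$ grows (the parameter values are taken from the fitting discussion in this section):

```python
import math

def f(T, A, B, Tc, D):
    # Smoothed step: A * atan(B*(T - Tc)) + D
    return A * math.atan(B * (T - Tc)) + D

Tc = 1.0
for B in (10.0, 1e3, 1e6):
    low = f(0.5, 1 / math.pi, B, Tc, 0.5)    # a point below Tc
    high = f(1.5, 1 / math.pi, B, Tc, 0.5)   # a point above Tc
    print(B, round(low, 4), round(high, 4))
# As B grows, f(T < Tc) -> 0 and f(T > Tc) -> 1: the Heaviside step function.
```

This is why the steepness parameter $B$ obtained from the fit can be read as a finite-$N$ proxy for the sharpness of the transition.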
So for a finite system we fit our Polyakov loop data with a suitable function that converges to the step function in the appropriate limit. Our fitting function is
\[ f(T) = A \tan^{-1}\left( B\left( T - T_c \right) \right) + D, \tag{4.66} \]
where $A$, $B$, $T_c$ and $D$ are the fit parameters. If we choose $A = \frac{1}{\pi}$, $D = 0.5$ and take $B \to \infty$, then
\[ \lim_{B\to\infty} \frac{1}{\pi}\tan^{-1}\left( B\left( T - T_c \right) \right) + 0.5 = \Theta\left( T - T_c \right). \tag{4.67} \]
We see that it does converge to the required limit, so the function $f(T)$ is a suitable choice. The values of the fit parameters obtained from the fit are provided in Table 4.1, from which we read off the critical temperature $T_c$.
[Table 4.1: Values of the fit parameters $A$, $B$, $T_c$ and $D$.]
[Figure 4.2: Polyakov loop against temperature $T$. The dotted and bold black curves show the leading order and next-to-leading order terms of the high temperature expansion (HTE). The blue curve is the fit $A\tan^{-1}(B(T-T_c)) + D$ to the data; the phase transition occurs at $T_c$.]
Fig. 4.3 shows the distribution of the eigenvalues of the Polyakov loop operator in the confined and deconfined phases. In the confined phase the eigenvalues spread uniformly on the unit circle, whereas they tend to cluster near one point in the deconfined phase.
[Figure 4.3: Distribution of the eigenvalues of the Polyakov loop operator in (a) the confined phase and (b) the deconfined phase.]

Internal Energy $E$ and Extent of Space $R^2$

Fig. 4.4 shows the plot of the scaled internal energy $\frac{E}{N^2}$ against temperature $T$. Had we simulated this for larger $N$, we would clearly see a kink near the critical temperature, which would directly show that the system undergoes a phase transition. But for smaller $N$ this feature is not visible in the plot.
The plot agrees well with the high temperature expansion (HTE). In Fig. 4.5 we show the plot of the extent of space $R^2$ against temperature $T$.
[Figure 4.4: $\frac{E}{N^2}$ against temperature $T$. The dotted and bold black curves show the leading order and next-to-leading order terms of the high temperature expansion (HTE).]
[Figure 4.5: Plot of $R^2$ against temperature $T$.]

4.3 BFSS Matrix Model

The action of this model can be obtained by dimensionally reducing $9+1$-dimensional $\mathcal{N} = 1$ super Yang-Mills theory to $0+1$ dimensions. The details of this model are given in Appendix A. The Euclidean action of the BFSS model is given by
\[ S_E = \frac{N}{\lambda}\int_0^\beta dt\, \operatorname{Tr}\left\{ \frac{1}{2}\left( D_tX^i \right)^2 - \frac{1}{4}\left[ X^i, X^j \right]^2 + \psi^T C D_t\psi - \psi^T C\gamma^i\left[ X^i, \psi \right] \right\}, \tag{4.68} \]
where $D_t = \partial_t - i[A(t), \cdot\,]$; $i, j = 1, 2, \dots, 9$; the $X^i$ are $N\times N$ traceless Hermitian matrices; $\lambda$ is the 't Hooft coupling; $\psi$ is a sixteen-component Majorana fermion, each component of which is an $N\times N$ traceless Hermitian matrix; $C$ is the Euclidean charge conjugation matrix in nine dimensions; and the $\gamma^i$ are the Euclidean gamma matrices in nine dimensions. The fermionic part of the action is obtained by making a particular choice for the Majorana-Weyl fermion and the gamma matrices. The details are given in Ref. [Filev 16].
In this section we simulate the quenched BFSS model, i.e., we remove the fermions from the theory and simulate only the bosonic part of the model. The Euclidean action of the bosonic part is given by
\[ S_E = \frac{N}{\lambda}\int_0^\beta dt\, \operatorname{Tr}\left\{ \frac{1}{2}\left( D_tX^i \right)^2 - \frac{1}{4}\left[ X^i, X^j \right]^2 \right\}. \tag{4.69} \]
We see that the bosonic action is the same as that of the $D = 4$ model, but here we have nine scalar fields instead of three. So the whole set of procedures, from discretizing the action to applying the HMC, remains the same. However, we record the important points below.
We introduce the momentum conjugate to $X^i_n$, denoted $P^i_n$, where $P^i_n$ is an $N\times N$ traceless Hermitian matrix, and the momentum conjugate to $\theta_k$, denoted $P_k$. The Hamiltonian for the molecular dynamics part of the HMC is
\[ H = \sum_{n=0}^{T-1}\frac{1}{2}\operatorname{Tr}\left( P^i_n \right)^2 + \frac{1}{2}\sum_{j=1}^{N}P_j^2 + S[X, D(\theta)] + S_{\rm FP}[\theta]. \tag{4.70} \]
Hamilton's equations are given by
\[ \dot X^i_{n,rs} = \frac{\partial H}{\partial P^i_{n,sr}} = P^i_{n,rs}, \qquad \dot P^i_{n,rs} = -\frac{\partial H}{\partial X^i_{n,sr}}, \tag{4.71} \]
\[ \dot\theta_j = \frac{\partial H}{\partial P_j} = P_j, \qquad \dot P_j = -\frac{\partial H}{\partial\theta_j}, \tag{4.72} \]
where the dot represents the derivative with respect to the fictitious time $\tau$. The forces can be calculated from the Hamiltonian, Eq. (4.70), and are given by
\[ -\frac{\partial H}{\partial X^i_{0,sr}} = \frac{N}{a\lambda}\left( X^i_1 - 2X^i_0 + D^\dagger X^i_{T-1}D \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_0, [X^i_0, X^j_0] \right]_{rs}, \]
\[ -\frac{\partial H}{\partial X^i_{n,sr}} = \frac{N}{a\lambda}\left( X^i_{n+1} - 2X^i_n + X^i_{n-1} \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_n, [X^i_n, X^j_n] \right]_{rs} \quad \text{for } n = 1, 2, \dots, T-2, \]
\[ -\frac{\partial H}{\partial X^i_{T-1,sr}} = \frac{N}{a\lambda}\left( DX^i_0D^\dagger - 2X^i_{T-1} + X^i_{T-2} \right)_{rs} + \frac{Na}{\lambda}\left[ X^j_{T-1}, [X^i_{T-1}, X^j_{T-1}] \right]_{rs}, \]
\[ -\frac{\partial H}{\partial\theta_j} = \frac{2N}{a\lambda}\sum_{k=1}^{N}\operatorname{Re}\left( iX^i_{T-1,kj}X^i_{0,jk}\, e^{i(\theta_j - \theta_k)} \right) + \sum_{\substack{k=1 \\ k\neq j}}^{N}\cot\left( \frac{\theta_j - \theta_k}{2} \right). \tag{4.73} \]
Since $D \in SU(N)$, the $\theta$ variables are not all independent; they are constrained by the equation
\[ \sum_{i=1}^{N}\theta_i = 0, \]
as $\det(D) = 1$. The observables we are interested in for the bosonic BFSS model are the Polyakov loop $|P|$, the extent of space $R^2$, and the internal energy $E$.
Their definitions and discretized forms are given below.
- Polyakov loop $|P|$:
\[ \langle |P| \rangle \equiv \frac{1}{N}\left\langle \left| \operatorname{Tr}\, \mathcal{P}\, e^{\,i\oint dt\, A(t)} \right| \right\rangle = \left\langle \left| \frac{\operatorname{Tr}(D)}{N} \right| \right\rangle \quad \text{(on the lattice)}. \tag{4.74} \]
- Extent of space $R^2$:
\[ \langle R^2 \rangle \equiv \left\langle \frac{1}{N\beta}\int_0^\beta dt\, \operatorname{Tr}\left( X^i \right)^2 \right\rangle = \left\langle \frac{1}{NT}\sum_{n=0}^{T-1}\operatorname{Tr}\left( X^i_n \right)^2 \right\rangle \quad \text{(discretized form)}. \tag{4.75} \]
- Internal energy $E$:
\[ \left\langle \frac{E}{N^2} \right\rangle \equiv \left\langle -\frac{3}{4\lambda N\beta}\int_0^\beta dt\, \operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right) \right\rangle = \left\langle -\frac{3}{4\lambda NT}\sum_{n=0}^{T-1}\operatorname{Tr}\left( \left[ X^i_n, X^j_n \right]^2 \right) \right\rangle \quad \text{(discretized form)}. \tag{4.76} \]
In the simulations each variable and parameter is measured in units of the 't Hooft coupling $\lambda$. Since the Yang-Mills coupling $g^2$ has dimension $[L^{-3}]$ in one dimension, we can measure $X^i_n$ and $A(n)$ in units of $\lambda^{1/3}$, and $a$ in units of $\lambda^{-1/3}$. Thus each variable and parameter appearing in Eq. (4.53) is dimensionless. The parameter values used for these simulations are $N = 4$, $T = 10$, $d = 9$ and $\lambda = 1$. All other details of the simulations are the same as for the $D = 4$ model (see Sec. 4.2.3 for those details).
We performed the simulations for 31 different temperature values, each running for 800K Monte Carlo sweeps. The critical temperature $T_c$ is read off from the fit described below.

Polyakov Loop $|P|$

Fig. 4.6 shows the plot of the Polyakov loop $|P|$ against temperature $T$. The values of the parameters for the fitted curve are given in Table 4.2, from which we read off the critical temperature $T_c$.
[Table 4.2: Values of the fit parameters $A$, $B$, $T_c$ and $D$.]

Internal Energy $E$ and Extent of Space $R^2$

In Fig. 4.7 we show the plot of the normalized internal energy $\frac{E}{N^2}$ against temperature $T$. Our plot shows a smooth transition of $\frac{E}{N^2}$ across the different phases, but there should be a kink at $T_c$.
This kink becomes visible for larger $N$, and it is a clear indication of a second-order phase transition. In Ref. [Filev 16] the authors simulated the same model for $N = 16$, and there the kink in the $\frac{E}{N^2}$ plot at $T_c$ is clearly visible. In Fig. 4.8 we show the plot of the extent of space $R^2$ against temperature $T$. We can see that all our plots are in excellent agreement with the high temperature expansion (HTE) predictions.
[Figure 4.6: The Polyakov loop against temperature $T$. The dotted and bold black curves show the leading and next-to-leading terms of the high temperature expansion (HTE) given in Ref. [Kawahara 07a]. The blue curve is the fit to the data; the phase transition occurs at $T_c$.]
[Figure 4.7: The plot of $\frac{E}{N^2}$ against temperature $T$. The dotted and bold black curves show the leading and next-to-leading order terms of the high temperature expansion (HTE).]
[Figure 4.8: The plot of the extent of space $R^2$ against temperature $T$. The dotted and bold black curves show the leading and next-to-leading order terms of the high temperature expansion (HTE).]

Chapter 5
IKKT Matrix Model

In this chapter we discuss the numerical simulations of the bosonic IKKT model, of the model after the inclusion of fermions, and of the full IKKT model. This model is obtained by dimensionally reducing the Euclidean Yang-Mills action with gauge group $SU(N)$ from $9+1$ dimensions to $0+0$ dimensions. The Euclidean action of the model is given by
\[ S_E = -\frac{1}{4g^2}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right), \tag{5.1} \]
where the $X^i$ are $N\times N$ traceless Hermitian matrices, $i, j$ run from $1$ to $10$, and $g$ is the Yang-Mills coupling in $0+0$ dimensions. We can replace $g$ with the 't Hooft coupling $\lambda$, which obeys the relation $\lambda = Ng^2$.
The action then takes the form
\[ S_E = -\frac{N}{4\lambda}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right). \tag{5.2} \]
Now, by rescaling the fields we can absorb $\lambda$ into the field variables. If we rescale the fields by $X^i \to \lambda^{1/4}X^i$, then $\lambda$ is absorbed into the field variables and the action simplifies to
\[ S_E = -\frac{N}{4}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right). \tag{5.3} \]
In the above action all field variables are dimensionless. The action is invariant under the gauge transformation
\[ X^i \longrightarrow VX^iV^\dagger, \tag{5.4} \]
where $V \in SU(N)$. The partition function of this model is given by
\[ Z = \int \mathcal{D}X\, \exp\{-S_E\}. \tag{5.5} \]
Apart from this gauge symmetry, the action Eq. (5.3) also has an $SO(10)$ symmetry. That is, if we change $X^i$ by the transformation
\[ X^i \longrightarrow O^{ik}X^k, \tag{5.6} \]
where $O \in SO(10)$, the action remains invariant. Changing the fields with this transformation, the action becomes
\[ S_E = -\frac{N}{4}\operatorname{Tr}\left( \left[ O^{ik}X^k, O^{jl}X^l \right]\left[ O^{im}X^m, O^{jn}X^n \right] \right) = -\frac{N}{4}O^{ik}O^{im}O^{jl}O^{jn}\operatorname{Tr}\left( \left[ X^k, X^l \right]\left[ X^m, X^n \right] \right) \]
\[ = -\frac{N}{4}\delta^{km}\delta^{ln}\operatorname{Tr}\left( \left[ X^k, X^l \right]\left[ X^m, X^n \right] \right) \quad (\text{using } O^{ik}O^{im} = \delta^{km}) \]
\[ = -\frac{N}{4}\operatorname{Tr}\left( \left[ X^k, X^l \right]^2 \right). \tag{5.7} \]
Thus the action remains invariant under this transformation, telling us that $SO(10)$ is a symmetry of the system. Putting this theory on the lattice is a trivial task, as the model is zero-dimensional: everything happens on a single lattice site. The action Eq. (5.3) itself represents the action of the continuum theory as well as of the lattice theory.
We now introduce conjugate momenta for each of the field variables. The momentum conjugate to $X^i$ is $P^i$, where $P^i$ is an $N\times N$ traceless Hermitian matrix. The Hamiltonian for the molecular dynamics part of the HMC is given by
\[ H = \frac{1}{2}\sum_{i=1}^{10}\operatorname{Tr}\left( P^i \right)^2 - \frac{N}{4}\sum_{i,j=1}^{10}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right). \]
(5.8)
Hamilton's equations are given by
\[ \dot X^i_{rs} = \frac{\partial H}{\partial P^i_{sr}} = P^i_{rs}, \qquad \dot P^i_{rs} = -\frac{\partial H}{\partial X^i_{sr}}, \tag{5.9} \]
where the dot represents the derivative with respect to the fictitious time $\tau$. The forces can be calculated from the Hamiltonian, Eq. (5.8), and are given by
\[ -\frac{\partial H}{\partial X^i_{sr}} = N\sum_{\substack{j=1 \\ j\neq i}}^{10}\left[ X^j, \left[ X^i, X^j \right] \right]_{rs}. \tag{5.10} \]
The observables we are interested in for the IKKT model are the extent of spacetime $R^2$ and the eigenvalues of the moment of inertia tensor $I_{\mu\nu}$. Their definitions are given below.
- Extent of spacetime $R^2$:
\[ \langle R^2 \rangle = \left\langle \frac{1}{N}\operatorname{Tr}\left( X^i \right)^2 \right\rangle. \tag{5.11} \]
- Eigenvalues of the moment of inertia tensor $I_{\mu\nu}$:
\[ I_{\mu\nu} = \frac{1}{N}\operatorname{Tr}\left( X^\mu X^\nu \right). \tag{5.12} \]
The eigenvalues of the moment of inertia tensor are denoted $\lambda_i$, $i = 1, 2, \dots, 10$.
We simulated this model for different values of $N$. Simulating this model is quite easy compared to the BFSS model. Again, to save time, we ran the simulation for each $N$ on a single core of the computer; this way we could run the simulations for all values of $N$ simultaneously on multiple cores. Here the cost of the simulation depends on $N$ itself, so each $N$ takes a different amount of time.
We performed the simulations for 10 different values of $N$, each running for 110K Monte Carlo sweeps. The results are provided below. Our aim is to study how the system behaves at large $N$.

Eigenvalues of $I_{\mu\nu}$

Figure 5.1 shows the plot of the eigenvalues of $I_{\mu\nu}$ against $\frac{1}{N}$. We can see in the plot that the eigenvalues converge towards a single point. This shows that we do not encounter spontaneous symmetry breaking (SSB) of $SO(10)$ in the limit $N \to \infty$. The same conclusion was drawn in Ref. [Hotta 99], although using different criteria.
[Figure 5.1: Plot of the eigenvalues of $I_{\mu\nu}$ against $\frac{1}{N}$. The eigenvalues converge towards a single point at large $N$.]
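The commutator force of Eq. (5.10) can be verified against a finite-difference derivative of the action Eq. (5.3). The sketch below does this for one Hermitian direction of one matrix entry (to keep it fast it uses small $N$ and only four of the ten matrices, which does not affect the identity being checked):

```python
import numpy as np

rng = np.random.default_rng(4)
N, dim = 3, 4   # small test size; the model itself has dim = 10

def action(X):
    # S_b = -(N/4) sum_{i,j} Tr [X^i, X^j]^2, cf. Eq. (5.3)
    s = 0.0
    for i in range(dim):
        for j in range(dim):
            c = X[i] @ X[j] - X[j] @ X[i]
            s += np.trace(c @ c).real
    return -N * s / 4.0

def force(X, i):
    # -dH/dX^i = N sum_{j != i} [X^j, [X^i, X^j]], cf. Eq. (5.10)
    f = np.zeros((N, N), dtype=complex)
    for j in range(dim):
        if j != i:
            inner = X[i] @ X[j] - X[j] @ X[i]
            f += X[j] @ inner - inner @ X[j]
    return N * f

A = rng.standard_normal((dim, N, N)) + 1j * rng.standard_normal((dim, N, N))
X = (A + np.conj(np.transpose(A, (0, 2, 1)))) / 2   # Hermitian test matrices

# Perturb entry (r,s) of X^i along a Hermitian direction and compare.
i, r, s, eps = 0, 1, 2, 1e-5
E = np.zeros((N, N)); E[r, s] = E[s, r] = 1.0
Xp = X.copy(); Xp[i] = X[i] + eps * E
Xm = X.copy(); Xm[i] = X[i] - eps * E
fd = -(action(Xp) - action(Xm)) / (2 * eps)          # -dS/d(eps)
an = (force(X, i)[r, s] + force(X, i)[s, r]).real    # matching force components
print(abs(fd - an) < 1e-4 * max(1.0, abs(an)))
```

Since the force matrix is Hermitian, the two components combine to $2\operatorname{Re}F_{rs}$, which is exactly what the symmetric Hermitian perturbation probes.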
Extent of Spacetime $R^2$

In Fig. 5.2 we show the plot of the extent of spacetime $R^2$ against $\frac{1}{N}$.
We can also arrive at the same conclusion using the extent of spacetime. Here we calculate $R^2_i = \frac{1}{N}\operatorname{Tr}(X^i)^2$, this time with no sum over the repeated index. If we plot these $R^2_i$, we see that all of them converge to the same value in the limit $N\to\infty$. This also indicates that there is no SSB of $SO(10)$ in the limit $N\to\infty$. If there were SSB of $SO(10)$, then some $d$ of the $R^2_i$ values would be smaller than the remaining ones in the limit $N\to\infty$, and we would conclude that $SO(10)$ spontaneously breaks to $SO(d)\times SO(10-d)$.
[Figure 5.2: Extent of spacetime $R^2$ against $\frac{1}{N}$.]

Full IKKT Matrix Model

We now add fermions to the theory and study the full IKKT matrix model. This model is obtained by dimensionally reducing $\mathcal{N} = 1$ super Yang-Mills theory in $9+1$ dimensions to $0+0$ dimensions, or equivalently by dimensionally reducing the BFSS model to $0+0$ dimensions. The Euclidean action is given by
\[ S_E = -\frac{N}{4\lambda}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right) - \frac{N}{\lambda}\operatorname{Tr}\left( \psi_\alpha \left( C\gamma^i \right)_{\alpha\beta}\left[ X^i, \psi_\beta \right] \right), \tag{5.13} \]
where $i, j = 1, 2, \dots, 10$; $\alpha, \beta = 1, 2, \dots, 16$; the $X^i$ are $N\times N$ traceless Hermitian matrices; $\lambda$ is the 't Hooft coupling; $\psi$ is a sixteen-component Majorana fermion with each component an $N\times N$ traceless Hermitian matrix; $C$ is the Euclidean charge conjugation matrix in nine dimensions; and the $\gamma^i$ are the Euclidean gamma matrices in nine dimensions. The fermionic part of the action is obtained by making a particular choice for the Majorana-Weyl fermion. The details are given in Ref. [Filev 16]. The gamma matrices are chosen such that the charge conjugation matrix becomes the identity matrix in this representation. The details of the choice of gamma matrices are given in Ref. [Ambjørn 00].
In this representation of the gamma matrices the action simplifies to
\[ S_E = -\frac{N}{4\lambda}\operatorname{Tr}\left( \left[ X^i, X^j \right]^2 \right) - \frac{N}{\lambda}\operatorname{Tr}\left( \psi_\alpha \gamma^i_{\alpha\beta}\left[ X^i, \psi_\beta \right] \right). \tag{5.14} \]
The action is invariant under the gauge transformation
\[ X^i \longrightarrow VX^iV^\dagger, \qquad \psi_\alpha \longrightarrow V\psi_\alpha V^\dagger, \tag{5.15} \]
where $V\in SU(N)$. Apart from the gauge symmetry, the action also remains invariant under $SO(10)$, with $X^i$ transforming as a vector and $\psi$ as a Majorana-Weyl spinor under this transformation. The partition function of this model is given by
\[ Z = \int \mathcal{D}X\, \mathcal{D}\psi\, \exp\{-S_E\}. \tag{5.16} \]
We can integrate the fermions out of this partition function. For that we first need to form the fermion matrix, which we obtain by simplifying the fermionic part of the action. First we decompose the fields in terms of the $SU(N)$ generators:
\[ X^i = X^i_aT^a, \qquad \psi_\alpha = \psi^a_\alpha T^a. \tag{5.17} \]
Substituting this into the fermionic part of the action, we get
\[ S_f = -\frac{N}{\lambda}\operatorname{Tr}\left( \psi^a_\alpha T^a\, \gamma^j_{\alpha\beta}\, \left[ X^j_bT^b, \psi^c_\beta T^c \right] \right) = -\frac{N}{\lambda}\psi^a_\alpha \gamma^j_{\alpha\beta} X^j_b \psi^c_\beta\, \operatorname{Tr}\left( T^a\left[ T^b, T^c \right] \right) \]
\[ = -\frac{N}{\lambda}\psi^a_\alpha \gamma^j_{\alpha\beta} X^j_b \psi^c_\beta\, \left( if^{bcd} \right)\operatorname{Tr}\left( T^aT^d \right) \quad (\text{using } [T^b, T^c] = if^{bcd}T^d) \]
\[ = \frac{iN}{\lambda}\psi^a_\alpha \gamma^j_{\alpha\beta} X^j_b \psi^c_\beta\, f^{bca} \quad (\text{using } \operatorname{Tr}(T^aT^b) = -\delta^{ab}) \]
\[ = \psi^a_\alpha \left( -\frac{iN}{\lambda}\gamma^j_{\alpha\beta}X^j_c f^{abc} \right)\psi^b_\beta \quad (\text{using the total antisymmetry of } f^{abc}) \]
\[ = \psi^a_\alpha\, M_{\alpha a,\beta b}\, \psi^b_\beta, \tag{5.18} \]
where
\[ M_{\alpha a,\beta b} = -\frac{iN}{\lambda}\gamma^j_{\alpha\beta}X^j_cf^{abc} = \frac{N}{\lambda}\gamma^j_{\alpha\beta}\operatorname{Tr}\left( X^j\left[ T^a, T^b \right] \right), \tag{5.19} \]
with $a, b, c = 1, 2, \dots, N^2-1$ and $\alpha, \beta = 1, 2, \dots, 16$. The fermion matrix $M_{\alpha a,\beta b}$ is a matrix of size $16(N^2-1)\times 16(N^2-1)$. Now, using the result that
\[ \int d\psi\, \exp\left\{ -\psi^TA\psi \right\} = \operatorname{Pf}(A) = \det(A)^{1/2}, \quad \text{where } A \text{ is an antisymmetric matrix}, \tag{5.20} \]
we can integrate the fermions out of the partition function Eq. (5.16). Doing so, we get
\[ Z = \int \mathcal{D}X\, \exp\{-S_b\}\, \operatorname{Pf}(M), \tag{5.21} \]
where $S_b$ is the bosonic part of the action. For this model $\operatorname{Pf}(M)$ turns out to be a complex number.
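The Grassmann integral in Eq. (5.20) rests on the identity $\operatorname{Pf}(A)^2 = \det(A)$ for antisymmetric $A$. This can be checked numerically with the recursive minor expansion of the Pfaffian (the helper `pfaffian` below is a simple illustrative implementation, suitable only for the small matrices of this check):

```python
import numpy as np

def pfaffian(A):
    # Recursive expansion of Pf(A) along the first row. Pf is defined for
    # even-dimensional antisymmetric matrices and vanishes for odd dimension.
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = A[np.ix_(keep, keep)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(minor)
    return total

# Sanity check on the 2x2 case: Pf([[0, a], [-a, 0]]) = a.
assert abs(pfaffian(np.array([[0.0, 2.0], [-2.0, 0.0]])) - 2.0) < 1e-12

# Pf(A)^2 = det(A) for a random complex antisymmetric matrix.
rng = np.random.default_rng(5)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B - B.T
pf, det = pfaffian(A), np.linalg.det(A)
print(abs(pf**2 - det) / abs(det) < 1e-10)
```

Note that for complex antisymmetric $A$, as in the IKKT fermion matrix, the Pfaffian is itself complex in general, which is the origin of the sign problem discussed next.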
Since Pf(M) is a complex number, we cannot treat exp{-S_E} as a probability distribution. This is known as the sign problem. There are methods to deal with this problem using other techniques, such as Lefschetz thimbles and the complex Langevin method based on stochastic quantization.

Another technique is to consider only the absolute value of the Pfaffian in the partition function and monitor the phase of the Pfaffian during the simulations. If the phase of the Pfaffian does not vary much, then the Monte Carlo simulations can be trusted, but if it varies too much, then we cannot trust them. We use this technique for our simulation, so our partition function becomes

Z = \int \mathcal{D}X\, \exp\{-S_b\}\, |\mathrm{Pf}(M)|. \qquad (5.22)

For the computation of \mathrm{Pf}(M) = \det(M)^{1/2} the best available algorithms have time complexity of order O(N^6), since M has size 16(N^2 - 1). Calculating this determinant directly is therefore computationally heavy, so we do not compute it directly; rather, we use a rational approximation to approximate it within a certain error for our simulation. The details are provided in the next section. We can write the absolute value of the Pfaffian as

|\mathrm{Pf}(M)| = \det\left( M^\dagger M \right)^{1/4}, \qquad (5.23)

and using the result

\int d\xi\, d\xi^\dagger\, \exp\left\{ -\xi^\dagger A \xi \right\} \propto \det(A)^{-1}, \quad where A = A^\dagger, \qquad (5.24)

we can write

\det\left( M^\dagger M \right)^{1/4} \propto \int d\xi\, d\xi^\dagger\, \exp\left\{ -\xi^\dagger (M^\dagger M)^{-1/4} \xi \right\}. \qquad (5.25)

So the partition function becomes

Z \propto \int \mathcal{D}X\, d\xi\, d\xi^\dagger\, \exp\{-S_b[X] - S_{psf}\}, \qquad (5.26)

where

S_{psf} = \xi^\dagger (M^\dagger M)^{-1/4} \xi, \qquad (5.27)

with \xi a 16(N^2 - 1)-dimensional complex vector. S_{psf} is known as the pseudo-fermion action. First we note that if we define

\eta = (M^\dagger M)^{-1/8} \xi, \qquad (5.28)

then the pseudo-fermion action becomes S_{psf} = \eta^\dagger \eta. We can therefore simply draw \eta from a Gaussian distribution and invert Eq. (5.28) to obtain \xi. The main question is how to calculate the fractional powers of (M^\dagger M).
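The heat-bath step implied by Eq. (5.28) can be checked directly on a small matrix. The sketch below uses an exact eigendecomposition for the fractional powers, which is only feasible at tiny sizes; the production simulation replaces it with the rational approximation described in the next section. The matrix here is a random stand-in for M†M, not the thesis fermion matrix.

```python
import numpy as np

def matrix_power_hermitian(a, p):
    # Fractional power of a Hermitian positive-definite matrix via
    # eigendecomposition; production codes replace this with the
    # rational (Remez) approximation described in the text.
    w, v = np.linalg.eigh(a)
    return (v * w**p) @ v.conj().T

rng = np.random.default_rng(1)
m = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
mdm = m.conj().T @ m          # stands in for M^dagger M (Hermitian, pos. def.)

eta = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
xi = matrix_power_hermitian(mdm, 1 / 8) @ eta   # heat-bath pseudofermion field

# By construction S_psf = xi^dag (M^dag M)^{-1/4} xi equals eta^dag eta.
s_psf = np.real(xi.conj() @ matrix_power_hermitian(mdm, -1 / 4) @ xi)
assert np.isclose(s_psf, np.real(eta.conj() @ eta))
```

This is exactly why no momentum is later introduced for ξ: the field is refreshed from a Gaussian at the start of every trajectory.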
For this we use the rational approximation, after which we can calculate \xi. The idea is to approximate a rational power of M^\dagger M by the partial-fraction sum

(M^\dagger M)^d = \alpha_0 + \sum_{i=1}^{p} \alpha_i (M^\dagger M + \beta_i)^{-1}, \qquad (5.29)

where d is the rational exponent we require for our calculation, and p depends on the accuracy of the approximation; if we want more accurate results, we use a larger value of p. The Remez algorithm is used to obtain \alpha_0, \alpha_i and \beta_i. The detailed theory of the rational approximation is given in Ref. [Ydri 17], and an open-source implementation of the Remez algorithm is available on GitHub; see Ref. [Clark 05]. To calculate \xi we require d = 1/8, which gives \xi from \eta. We call this algorithm the RHMC (rational hybrid Monte Carlo) algorithm, since a rational approximation is used to approximate (M^\dagger M)^d, and HMC is used to generate the states distributed according to the desired probability distribution.

The momentum conjugate to X_i is P_i, an N \times N traceless Hermitian matrix. We do not introduce momenta for the pseudo-fermion fields, as they are already distributed according to the required probability distribution. The Hamiltonian of the system is given by

H = \frac{1}{2} \mathrm{Tr}\, P_i^2 - \frac{N}{4\lambda} \mathrm{Tr}\, [X_i, X_j]^2 + \xi^\dagger (M^\dagger M)^{-1/4} \xi. \qquad (5.30)

Thus we require two rational approximations, one for d = 1/8 and one for d = -1/4:

(M^\dagger M)^{1/8} = \alpha_0 + \sum_{i=1}^{p} \alpha_i (M^\dagger M + \beta_i)^{-1}, \qquad (5.31)

(M^\dagger M)^{-1/4} = a_0 + \sum_{i=1}^{q} a_i (M^\dagger M + b_i)^{-1}. \qquad (5.32)

Substituting the approximation for (M^\dagger M)^{-1/4} back into Eq. (5.30), we get

H = \frac{1}{2} \mathrm{Tr}\, P_i^2 - \frac{N}{4\lambda} \mathrm{Tr}\, [X_i, X_j]^2 + \xi^\dagger \left( a_0 \xi + \sum_{i=1}^{q} a_i (M^\dagger M + b_i)^{-1} \xi \right). \qquad (5.33)

Hamilton's equations are given by

\dot{X}^i_{rs} = \frac{\partial H}{\partial P^i_{sr}} = P^i_{rs}, \qquad \dot{P}^i_{rs} = -\frac{\partial H}{\partial X^i_{sr}}, \qquad (5.34)

where the dot denotes the derivative with respect to the fictitious time \tau. The forces can be calculated using the Hamiltonian, Eq.
(5.33), and are given by

-\frac{\partial H}{\partial X^i_{sr}} = \frac{N}{\lambda} \sum_{\substack{j=1 \\ j \neq i}}^{10} [X_j, [X_i, X_j]]_{rs} - \frac{\partial S_{psf}}{\partial X^i_{sr}}, \qquad (5.35)

where

-\frac{\partial S_{psf}}{\partial X^i_{sr}} = \sum_{k=1}^{q} a_k\, \xi^\dagger (M^\dagger M + b_k)^{-1}\, \frac{\partial (M^\dagger M)}{\partial X^i_{sr}}\, (M^\dagger M + b_k)^{-1} \xi. \qquad (5.36)

We can calculate \partial (M^\dagger M)/\partial X^i_{sr} using the expression for M. It turns out to be

\frac{\partial (M^\dagger M)_{a\alpha, b\beta}}{\partial X^i_{sr}} = \left( \frac{N}{2\lambda} \right)^2 \left[ \left\{ \gamma^i_{\mu\alpha} [T^c, T^a]_{sr} \right\}^* \gamma^j_{\mu\beta}\, \mathrm{Tr}\left( X_j [T^c, T^b] \right) + \left\{ \gamma^j_{\mu\alpha}\, \mathrm{Tr}\left( X_j [T^c, T^a] \right) \right\}^* \gamma^i_{\mu\beta} [T^c, T^b]_{rs} \right], \qquad (5.37)

with sums over the repeated indices j, c and \mu understood.

Previously we mentioned that we can obtain \xi from \eta; the calculation leading to this statement is

\xi = (M^\dagger M)^{1/8} \eta = \left( \alpha_0 + \sum_{i=1}^{p} \alpha_i (M^\dagger M + \beta_i)^{-1} \right) \eta. \qquad (5.38)

Each term (M^\dagger M + \beta_i)^{-1} \eta = f_i can be evaluated using the multimass conjugate gradient (CG) method; the details are given in Ref. [Ydri 17]. Once \xi is obtained, we can evaluate the Hamiltonian and the forces.

Below we briefly summarize the whole algorithm.

Step 1: Draw \eta randomly from the Gaussian distribution N(0, 1); \eta is a complex-valued 16(N^2 - 1)-dimensional vector, and both the real and imaginary parts of each component are taken from this Gaussian distribution.

Step 2: Evaluate \xi using Eq. (5.38), using multimass CG to evaluate (M^\dagger M + \beta_i)^{-1} \eta = f_i.

Step 3: Apply HMC to this system. Whenever we need to evaluate expressions like (M^\dagger M + b_i)^{-1} \xi = h_i, we use multimass CG; this is required for both the Hamiltonian and the force evaluations.

Step 4: Accept/reject the state according to the Metropolis test.

Repeat Steps 1 to 4 until we have enough data for evaluating the observables.

[Figure 5.3: Plot of the eigenvalues of I_{\mu\nu} against 1/N for the phase-quenched IKKT model. All the eigenvalues converge to a single point. Figure taken from Ref. [Anagnostopoulos 15].]
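Steps 2 and 3 above reduce to solving shifted linear systems (M†M + s) x = b for several shifts s. The sketch below uses one plain conjugate-gradient solve per shift; the thesis uses the multimass CG of Ref. [Ydri 17], which obtains all shifts from a single Krylov space, but an independent CG per shift is the simplest correct stand-in. The random matrix is a placeholder for M†M.

```python
import numpy as np

def cg(apply_a, rhs, shift, tol=1e-10, max_iter=500):
    # Plain conjugate gradient for (A + shift*I) x = rhs, with A Hermitian
    # positive definite. Multimass CG would recycle one Krylov space for
    # every shift b_i simultaneously; this loop treats each shift alone.
    x = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rr = np.real(r.conj() @ r)
    for _ in range(max_iter):
        ap = apply_a(p) + shift * p
        alpha = rr / np.real(p.conj() @ ap)
        x = x + alpha * p
        r = r - alpha * ap
        rr_new = np.real(r.conj() @ r)
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(3)
m = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
mdm = m.conj().T @ m                       # stands in for M^dagger M
xi = rng.standard_normal(10) + 1j * rng.standard_normal(10)

shifts = [0.1, 1.0, 10.0]                  # placeholder values for the b_i
sols = [cg(lambda v: mdm @ v, xi, s) for s in shifts]
# Each sols[i] is h_i = (M^dag M + b_i)^{-1} xi, as needed in Step 3
# and in the force of Eq. (5.36).
```

Only matrix-vector products with M†M are needed, which is why CG-type solvers scale to the large fermion matrices of the full model.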
We also need to monitor the phase of the Pfaffian throughout these steps.

The observables we are interested in are the same observables we computed in the bosonic IKKT model (see Sec. 5.1.2 for details).

5.2.3 Results

Due to lack of time during the final phase of the project, we were not able to produce results for the full IKKT model. We expect results along the lines of those provided in Ref. [Anagnostopoulos 15]; Fig. 5.3 shows the result given in that reference. In the limit N \to \infty the eigenvalues converge to a single point, which clearly shows that there is no SO(10) SSB in the phase-quenched IKKT model.

Recent work on this model using the complex Langevin algorithm showed that there is SSB of SO(10) down to SO(3) \times SO(7); see Ref. [Anagnostopoulos 20]. In that paper the authors further concluded that the SO(7) symmetry is broken even further.

Chapter 6

Conclusion and Future Work

The goal of this thesis was to study the bosonic BFSS model and the IKKT model using Monte Carlo simulations.

In the BFSS model we used the Polyakov loop as an order parameter to investigate the large-N behaviour of the model at different temperatures. We clearly observed the confinement-deconfinement phase transition in the quenched form of this model. From the energy versus temperature plot given in Ref. [Filev 16] it is clear that the phase transition is of second order. All our results are in excellent agreement with the results produced by other authors.

In the bosonic IKKT model we studied the spontaneous symmetry breaking (SSB) of the SO(10) symmetry using the eigenvalues of the moment of inertia tensor and found that the system does not undergo SSB in the bosonic model. This directly suggests that dynamical compactification of extra dimensions is not possible in the bosonic IKKT model. We also attempted to study the phase-quenched IKKT model but did not have enough time to simulate it.

In the future, we can study the phase-quenched IKKT model using the RHMC algorithm and the full IKKT model using the complex Langevin method with stochastic quantization.
These studies have been carried out in Refs. [Ambjørn 00, Anagnostopoulos 20].

Appendix A

Dimensional Reduction

This appendix provides an introduction to obtaining the actions of certain theories by dimensionally reducing from higher dimensions to lower dimensions using Kaluza-Klein compactification. We also discuss Wick rotation, with which one can go from a Lorentzian field theory to a Euclidean one and vice versa.

We discuss how to dimensionally reduce the N = 1 super Yang-Mills theory in 9+1 dimensions to 0+1 dimensions and to 0+0 dimensions, and along the way we describe the process of Wick rotation. The metric is given by \eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, \cdots, -1). The action of N = 1 super Yang-Mills theory in 9+1 dimensions with gauge group SU(N) is

S = \int d^{10}x \left\{ -\frac{1}{2g^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta \right) \right\}, \qquad (A.1)

where the integral over the temporal direction runs from 0 to t. We have

D_\mu \Psi_\beta = \partial_\mu \Psi_\beta - i [A_\mu, \Psi_\beta], \qquad (A.2)

F_{\mu\nu} = \partial_{[\mu} A_{\nu]} - i [A_\mu, A_\nu], \qquad (A.3)

where \mu, \nu = 0, 1, \cdots, 9; \alpha, \beta = 1, 2, \cdots, 32; g is the coupling constant in 9+1 dimensions; and \Psi is the Majorana-Weyl fermion in 9+1 dimensions with 32 components, each component being an N \times N traceless Hermitian matrix. \Psi is in the adjoint representation of SU(N), which is clear from the form of the covariant derivative D_\mu. The above action, Eq. (A.1), is invariant under the gauge transformations

A_\mu(x) \to V(x) \left( A_\mu(x) + i \partial_\mu \right) V^\dagger(x), \qquad (A.4)

\Psi_\alpha(x) \to V(x)\, \Psi_\alpha(x)\, V^\dagger(x). \qquad (A.5)

The path integral for this system is given by

Z = \int \mathcal{D}A_\mu\, \mathcal{D}\Psi\, \exp\{iS\}. \qquad (A.6)

Now we apply Kaluza-Klein compactification to dimensionally reduce this theory to lower dimensions. The idea of Kaluza-Klein compactification is not to treat a dimension as having infinite extent, but rather to compactify it on a circle by introducing periodicity along that direction.
For example, suppose there is only one spatial dimension and objects repeat along it after every distance 2\pi R. If we are close enough we cannot distinguish it from a line of infinite length, but from far enough away we can clearly see that it is a circle of finite circumference. So if we want to compactify one dimension, we identify points after every distance 2\pi R and then take the limit R \to 0. In this limit the fields become independent of the compact coordinate, say x^9. Then

\phi(x^0, x^1, \cdots, x^9) \to \phi(x^0, x^1, \cdots, x^8), \qquad (A.7)

\partial_9\, \phi(x) \to 0, \qquad (A.8)

A_9(x^0, x^1, \cdots, x^9) \to X^9(x^0, x^1, \cdots, x^8) \quad (scalar field). \qquad (A.9)

Now let us compactify the direction x^9 on S^1 of radius R_9 for the action Eq. (A.1). Then A_9(x) becomes the scalar field X^9(x) and \partial_9(\cdot) vanishes, so F_{9\nu} and D_9 \Psi_\beta become

F_{9\nu} = \partial_\nu X^9 + i [X^9, A_\nu] = D_\nu X^9, \qquad F^{9\nu} = -D^\nu X^9, \quad \nu = 0, 1, \cdots, 8, \qquad (A.10)

D_9 \Psi_\beta = i [X^9, \Psi_\beta]. \qquad (A.11)

Substituting these back into the action, we get

S = \int dx^9 \int d^9x \left\{ -\frac{1}{2g^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} + 2 F_{9\nu} F^{9\nu} + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta - 2 \bar{\Psi}_\alpha \gamma^9_{\alpha\beta} [X^9, \Psi_\beta] \right) \right\}
  = \int dx^9 \int d^9x \left\{ -\frac{1}{2g^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} - 2 D_\mu X^9 D^\mu X^9 + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta - 2 \bar{\Psi}_\alpha \gamma^9_{\alpha\beta} [X^9, \Psi_\beta] \right) \right\}
  = 2\pi R_9 \int d^9x \left\{ -\frac{1}{2g^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} - 2 D_\mu X^9 D^\mu X^9 + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta - 2 \bar{\Psi}_\alpha \gamma^9_{\alpha\beta} [X^9, \Psi_\beta] \right) \right\}. \qquad (A.12)

This gives

S = \int d^9x \left\{ -\frac{1}{2g_9^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} - 2 D_\mu X^9 D^\mu X^9 + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta - 2 \bar{\Psi}_\alpha \gamma^9_{\alpha\beta} [X^9, \Psi_\beta] \right) \right\}, \qquad (A.13)

where

\frac{1}{2g_9^2} = \frac{\pi R_9}{g^2}, \qquad (A.14)

and \mu, \nu = 0, 1, \cdots, 8. Equation (A.13) is the action of the theory after compactifying the x^9 direction; note that the coupling constant also changes. Now we compactify the x^8 direction on S^1 of radius R_8. The following changes take place in the action:

F_{8\nu} = D_\nu X^8, \qquad F^{8\nu} = -D^\nu X^8, \quad \nu = 0, 1, \cdots, 7, \qquad (A.15)

D_8 X^9 D^8 X^9 = [X^8, X^9]^2, \qquad (A.16)

D_8 \Psi_\beta = i [X^8, \Psi_\beta], \qquad (A.17)

\frac{1}{2g_8^2} = \frac{\pi R_8}{g_9^2}. \qquad (A.18)
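The claim that the fields become independent of the compact coordinate in the limit R → 0 can be made precise with the standard Kaluza-Klein mode expansion (textbook material, not specific to this thesis):

```latex
% Fourier expansion on the circle x^9 \sim x^9 + 2\pi R:
\phi(x^\mu, x^9) \;=\; \sum_{n=-\infty}^{\infty} \phi_n(x^\mu)\, e^{\,i n x^9 / R},
% the x^9 kinetic term gives each mode a mass of order |n|/R:
-\partial_9^2 \left( \phi_n\, e^{\,i n x^9/R} \right)
   \;=\; \frac{n^2}{R^2}\, \phi_n\, e^{\,i n x^9/R},
% so as R \to 0 every n \neq 0 mode becomes infinitely heavy and decouples,
% leaving only the zero mode \phi_0(x^\mu); this is Eqs. (A.7)-(A.8).
```

Keeping only the zero mode is exactly what replaces A_9(x) by the x^9-independent scalar X^9(x) in Eq. (A.9).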
The action becomes

S = \int d^8x \left\{ -\frac{1}{2g_8^2} \mathrm{Tr}\left( F_{\mu\nu} F^{\mu\nu} - 2 \sum_{i=8}^{9} D_\mu X^i D^\mu X^i - 2 \sum_{\substack{i,j=8 \\ j>i}}^{9} [X^i, X^j]^2 + 2i \bar{\Psi}_\alpha \gamma^\mu_{\alpha\beta} D_\mu \Psi_\beta - 2 \sum_{i=8}^{9} \bar{\Psi}_\alpha \gamma^i_{\alpha\beta} [X^i, \Psi_\beta] \right) \right\}, \qquad (A.19)

where \mu, \nu = 0, 1, \cdots, 7. Now we can clearly see the pattern. If we compactify one more spatial direction, say x^7, then

\sum_{i=8}^{9} D_\mu X^i D^\mu X^i \to \sum_{i=7}^{9} D_\mu X^i D^\mu X^i, \qquad (A.20)

\sum_{\substack{i,j=8 \\ j>i}}^{9} [X^i, X^j]^2 \to \sum_{\substack{i,j=7 \\ j>i}}^{9} [X^i, X^j]^2, \qquad (A.21)

\sum_{i=8}^{9} \bar{\Psi}_\alpha \gamma^i_{\alpha\beta} [X^i, \Psi_\beta] \to \sum_{i=7}^{9} \bar{\Psi}_\alpha \gamma^i_{\alpha\beta} [X^i, \Psi_\beta]. \qquad (A.22)

Applying this procedure until we reach 0+1 dimensions, and absorbing the accumulated numerical factors into the coupling, the theory is dimensionally reduced to 0+1 dimensions with action

S = -\frac{1}{2g^2} \int dt\, \mathrm{Tr}\left( -(D_t X_i)^2 - \frac{1}{2} [X_i, X_j]^2 + 2i \bar{\Psi}_\alpha \gamma^0_{\alpha\beta} D_t \Psi_\beta - \bar{\Psi}_\alpha \gamma^i_{\alpha\beta} [X_i, \Psi_\beta] \right)
  = \frac{1}{g^2} \int dt\, \mathrm{Tr}\left( \frac{1}{2} (D_t X_i)^2 + \frac{1}{4} [X_i, X_j]^2 - i \bar{\Psi}_\alpha \gamma^0_{\alpha\beta} D_t \Psi_\beta + \frac{1}{2} \bar{\Psi}_\alpha \gamma^i_{\alpha\beta} [X_i, \Psi_\beta] \right), \qquad (A.23)

where g is now the coupling constant in 0+1 dimensions, i, j = 1, 2, \cdots, 9, and \bar{\Psi} = \Psi^T C. Then the action becomes

S = \frac{1}{2g^2} \int dt\, \mathrm{Tr}\left( (D_t X_i)^2 + \frac{1}{2} [X_i, X_j]^2 - 2i \Psi^T_\alpha C \gamma^0_{\alpha\beta} D_t \Psi_\beta + \Psi^T_\alpha C \gamma^i_{\alpha\beta} [X_i, \Psi_\beta] \right). \qquad (A.24)

Equation (A.24) represents the action of the BFSS matrix model (see Ref. [Filev 16]).

Now we use Wick rotation to go from the Lorentzian action to the Euclidean action. Under Wick rotation the following changes take place:

t \to \tau = it, \qquad (A.25)

dt \to d\tau = i\, dt, \qquad (A.26)

\partial_t \to \partial_\tau = -i \partial_t, \qquad (A.27)

A_0(t) \to A(\tau) = -i A_0(t). \qquad (A.28)

The path integral becomes the partition function, with \beta = it, and the action becomes the Euclidean action. We take the representation of the gamma matrices and C given in Ref. [Filev 16], apart from \gamma^0. We take the following tensor-product choice for the Majorana-Weyl fermion and for \gamma^0,

\Psi = \psi \otimes \chi, \qquad (A.29)

\gamma^0 = \Gamma \otimes \sigma, \qquad (A.30)

where the explicit forms of the constant two-component spinor \chi and of the factors \Gamma and \sigma are given in Ref. [Filev 16]. Taking these into account, the action and the path integral become

Z = \int \mathcal{D}A_\tau\, \mathcal{D}X\, \mathcal{D}\psi\, \exp\{-S_E\},
(A.31)

with the Euclidean action

S_E = \frac{1}{2g^2} \int_0^\beta d\tau\, \mathrm{Tr}\left( (D_\tau X_i)^2 - \frac{1}{2} [X_i, X_j]^2 - i \psi^T_\alpha C_{\alpha\beta} D_\tau \psi_\beta - \psi^T_\alpha (C \Gamma^i)_{\alpha\beta} [X_i, \psi_\beta] \right), \qquad (A.32)

where D_\tau = \partial_\tau - i [A(\tau), \cdot\,]; \alpha, \beta = 1, 2, \cdots, 16; i, j = 1, 2, \cdots, 9; C is the 9-dimensional Euclidean charge conjugation matrix; and \Gamma^i are the Euclidean gamma matrices in 9 dimensions. The bosonic fields obey periodic boundary conditions, i.e., A(\tau + \beta) = A(\tau) and X_i(\tau + \beta) = X_i(\tau), and the fermionic fields obey anti-periodic boundary conditions, i.e., \psi(\tau + \beta) = -\psi(\tau). Eq. (A.32) represents the Euclidean action of the BFSS matrix model; this equation is used in Chapter 4.

If we dimensionally reduce the above model to 0+0 dimensions, we get the IKKT matrix model. The Euclidean action of the IKKT matrix model is given by

S_E = -\frac{1}{4g^2} \mathrm{Tr}\, [X_i, X_j]^2 - \frac{1}{2g^2} \mathrm{Tr}\left( \psi^T_\alpha (C \Gamma^i)_{\alpha\beta} [X_i, \psi_\beta] \right), \qquad (A.33)

where g is the 0-dimensional Yang-Mills coupling and i, j = 1, 2, \cdots, 10. This equation is used in Chapter 5.

Bibliography

[Ambjørn 00] Jan Ambjørn, Jun Nishimura, Konstantinos N. Anagnostopoulos, Wolfgang Bietenholz & Tomohiro Hotta. Monte Carlo studies of the IIB matrix model at large N. Journal of High Energy Physics, vol. 2000, no. 07, page 011, Jul 2000.

[Anagnostopoulos 15] Konstantinos N. Anagnostopoulos, Takehiro Azuma & Jun Nishimura. Monte Carlo studies of dynamical compactification of extra dimensions in a model of nonperturbative string theory, 2015.

[Anagnostopoulos 20] Konstantinos N. Anagnostopoulos, Takehiro Azuma, Yuta Ito, Jun Nishimura, Toshiyuki Okubo & Stratos Kovalkov Papadoudis. Complex Langevin analysis of the spontaneous breaking of 10D rotational symmetry in the Euclidean IKKT matrix model, 2020.

[Aoki 98] H. Aoki, S. Iso, H. Kawai, Y. Kitazawa & T. Tada. Space-Time Structures from IIB Matrix Model. Progress of Theoretical Physics, vol. 99, no. 5, pages 713-745, May 1998.

[Banks 97] Tom Banks, Willy Fischler, Steven H. Shenker & Leonard Susskind.
M theory as a matrix model: A conjecture. Physical Review D, vol. 55, no. 8, page 5112, 1997.

[Banks 98] Tom Banks, Willy Fischler, Igor R. Klebanov & Leonard Susskind. Schwarzschild black holes in Matrix theory II. Journal of High Energy Physics, vol. 1998, no. 01, page 008, 1998.

[Betancourt 17] Michael Betancourt. A conceptual introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434, 2017.

[Clark 05] M. A. Clark & A. D. Kennedy. AlgRemez, https://github.com/mikeaclark/AlgRemez, 2005.

[Filev 16] Veselin G. Filev & Denjoe O'Connor. The BFSS model on the lattice. Journal of High Energy Physics, vol. 2016, no. 5, May 2016.

[Furuuchi 03] K. Furuuchi, E. Schreiber & G. W. Semenoff. Five-Brane Thermodynamics from the Matrix Model, 2003.

[Gattringer 09] Christof Gattringer & Christian B. Lang. Quantum Chromodynamics on the Lattice: An Introductory Presentation (Lecture Notes in Physics Vol. 788), 2009.

[Hadizadeh 05] Shirin Hadizadeh, Bojan Ramadanovic, Gordon W. Semenoff & Donovan Young. Free energy and phase transition of the matrix model on a plane wave. Physical Review D, vol. 71, no. 6, page 065016, 2005.

[Hanada 07] Masanori Hanada, Jun Nishimura & Shingo Takeuchi. Nonlattice Simulation for Supersymmetric Gauge Theories in One Dimension. Physical Review Letters, vol. 99, no. 16, Oct 2007.

[Hotta 99] Tomohiro Hotta, Jun Nishimura & Asato Tsuchiya. Dynamical aspects of large-N reduced models. Nuclear Physics B, vol. 545, no. 1-3, pages 543-575, Apr 1999.

[Ishibashi 97] Nobuyuki Ishibashi, Hikaru Kawai, Yoshihisa Kitazawa & Asato Tsuchiya. A large-N reduced model as superstring. Nuclear Physics B, vol. 498, no. 1-2, pages 467-491, Aug 1997.

[Ito 14] Y. Ito, S.-W. Kim, Y. Koizuka, J. Nishimura & A. Tsuchiya. A renormalization group method for studying the early universe in the Lorentzian IIB matrix model. Progress of Theoretical and Experimental Physics, vol. 2014, no. 8, page 83B010, Aug 2014.

[Ito 15] Yuta Ito, Jun Nishimura & Asato Tsuchiya.
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model. Journal of High Energy Physics, vol. 2015, no. 11, Nov 2015.

[Joseph 15] Anosh Joseph. Review of lattice supersymmetry and gauge-gravity duality. International Journal of Modern Physics A, vol. 30, no. 27, page 1530054, Sep 2015.

[Joseph 20] Anosh Joseph. Markov Chain Monte Carlo Methods in Quantum Field Theories: A Modern Primer. Springer, Cham, 2020.

[Kabat 01a] Daniel Kabat, Gilad Lifschytz & David Lowe. Black hole thermodynamics from calculations in strongly coupled gauge theory. International Journal of Modern Physics A, vol. 16, no. 05, pages 856-865, 2001.

[Kabat 01b] Daniel Kabat, Gilad Lifschytz & David A. Lowe. Black hole entropy from nonperturbative gauge theory. Physical Review D, vol. 64, no. 12, page 124015, 2001.

[Kawahara 07a] Naoyuki Kawahara, Jun Nishimura & Shingo Takeuchi. High temperature expansion in supersymmetric matrix quantum mechanics. Journal of High Energy Physics, vol. 2007, no. 12, page 103, Dec 2007.

[Kawahara 07b] Naoyuki Kawahara, Jun Nishimura & Shingo Takeuchi. Phase structure of matrix quantum mechanics at finite temperature. Journal of High Energy Physics, vol. 2007, no. 10, page 097, Oct 2007.

[Klebanov 98] Igor R. Klebanov & Leonard Susskind. Schwarzschild black holes in various dimensions from matrix theory. Physics Letters B, vol. 416, no. 1-2, pages 62-66, 1998.

[Metropolis 53] Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller & Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, vol. 21, no. 6, pages 1087-1092, 1953.

[Neal 11] Radford M. Neal et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, vol. 2, no. 11, page 2, 2011.

[Nishimura 00] Jun Nishimura & Graziano Vernizzi. Brane World Generated Dynamically from String Type IIB Matrices. Physical Review Letters, vol. 85, no.
22, pages 4664-4667, Nov 2000.

[Nishimura 11] Jun Nishimura, Toshiyuki Okubo & Fumihiko Sugino. Systematic study of the SO(10) symmetry breaking vacua in the matrix model for type IIB superstrings. Journal of High Energy Physics, vol. 2011, no. 10, Oct 2011.

[Nishimura 19] Jun Nishimura & Asato Tsuchiya. Complex Langevin analysis of the space-time structure in the Lorentzian type IIB matrix model. Journal of High Energy Physics, vol. 2019, no. 6, Jun 2019.

[Nishimura 20] Jun Nishimura. New perspectives on the emergence of (3+1)D expanding space-time in the Lorentzian type IIB matrix model, 2020.

[Peskin 95] Michael E. Peskin & Daniel V. Schroeder. An Introduction to Quantum Field Theory (Boulder, CO), 1995.

[Polyakov 78] Alexander M. Polyakov. Thermal properties of gauge fields and quark liberation. Physics Letters B, vol. 72, no. 4, pages 477-480, 1978.

[Robert 04] Christian Robert & George Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, 2004.

[Semenoff 04] Gordon W. Semenoff. Matrix model thermodynamics. Quantum Theory and Symmetries, Oct 2004.

[Vetterling 02] William T. Vetterling, William H. Press, Saul A. Teukolsky & Brian P. Flannery. Numerical Recipes Example Book (C++): The Art of Scientific Computing. Cambridge University Press, 2002.

[Wilson 74] Kenneth G. Wilson. Confinement of quarks. Physical Review D, vol. 10, no. 8, page 2445, 1974.

[Witten 95] Edward Witten.