ClasSOMfier: A neural network for cluster analysis and detection of lattice defects
Javier F. Troncoso
Atomistic Simulation Centre, Queen's University Belfast, Belfast BT7 1NN, UK
E-mail: [email protected]
Abstract.
ClasSOMfier is a software package to classify atoms into a given number of disconnected groups (or clusters) and detect lattice defects, such as vacancies, interstitials, dislocations, voids and grain boundaries. Each cluster is formed by atoms whose atomic environment can be described by a common pattern. Unlike many methods available in the literature, where these patterns are given in advance and are associated with known lattice structures (i.e. fcc, bcc or hcp), this code implements a Kohonen network, which is based on unsupervised learning and where no information about the atomic environment has to be given in advance. ClasSOMfier accelerates the application of machine learning for cluster analysis by providing an efficient and fast code in Fortran with a user-friendly interface in Python.
Keywords: neural network, Kohonen network, cluster analysis
1. Introduction
Some materials properties, such as the mechanical strength or the thermal conductivity, are determined by the lattice structure. Thus, knowledge of the atomic structure is crucial in the design of materials for specific uses, and the design of defected lattices, including grain boundaries and point defects, has received significant attention recently. To give a concrete example: in PbTe, the thermal conductivity can be reduced by more than 30% in the presence of grain boundaries and point defects [1]. For that reason, it is important to study the dynamics and the interactions of these defects. The use of molecular dynamics (MD) simulations is an efficient strategy for this purpose, but they have to be supplemented by visualization tools to track the evolution of the position and size of all defects in the simulation box and their impact on the total energy of the system. From a computational point of view, these defects can be detected by identifying patterns and crystal symmetries, and several analysis methods have been proposed in the literature for this purpose [2]. The Common Neighbor Analysis (CNA) method and the Ackland–Jones analysis method are the best known: they study the environment of each atom and classify it into a known ideal lattice structure (such as fcc, bcc or hcp) or an amorphous structure [3, 4]. In the CNA method, an atom is assigned to an ideal perfect lattice if its number of neighbors and the interatomic distances match the requirements of the lattice structure. In the Bond Angle Analysis (BAA) method, angles are used instead of distances. The Ackland–Jones analysis method uses both distance and angular distributions. One advantage of these methods is that the number of ideal lattices is limited and the algorithm only needs information about the local environment of each atom, so they work relatively fast and are not limited to small samples.
However, they are not able to characterize amorphous regions, and certain information about the lattice structure has to be known in advance. Other methods use specific properties of each atom, such as the potential energy, to indicate if they are part of a perfect lattice. If one atom is part of a crystal defect, its energy is higher and it can be identified. Similarly, the identification of atoms belonging to crystal defects can be achieved by using the centrosymmetry parameter (CSP), a function of the interatomic distances among one atom and its first N neighbors [5]. The CSP and the potential energy per atom are useful quantities to characterize the local lattice disorder. However, in MD simulations, all atoms are far from their equilibrium positions, and the choice of the threshold energy or the cutoff distance that differentiates atoms that form the perfect lattice is not easy. Finally, geometric methods such as the Voronoi decomposition have also been proposed to identify the lattice structure [6]. In this method, one atom is enclosed by the Voronoi polyhedron formed by the first neighbors to determine the structural type. However, this method is highly sensitive to perturbations of the atomic positions, like those present in MD simulations. All of these methods can identify known lattice structures, but certain information about them has to be known in advance. An alternative is unsupervised cluster analysis, the simplest example being the k-means algorithm [12], which starts from k initial random points, known as seeds: all input vectors are compared with all these seeds by means of the Euclidean distance and assigned to the cluster represented by the closest seed. After each step, the seeds are recalculated as the average position of the objects assigned to the cluster that they represent. However, this method is sensitive to initialization and depends on the choice of the initial seeds.
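As a concrete illustration, the seed-update loop just described can be sketched in a few lines of Python. This is a minimal Lloyd-style iteration on synthetic data, for illustration only; it is not the code used in ClasSOMfier:

```python
import numpy as np

def kmeans(X, k, t_max=100, seed=0):
    """Minimal k-means: assign each vector to the closest seed by Euclidean
    distance, then recompute each seed as the mean of its assigned vectors."""
    rng = np.random.default_rng(seed)
    # k initial random seeds, drawn from the data points
    seeds = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(t_max):
        # assign every input vector to the cluster of its closest seed
        labels = np.argmin(np.linalg.norm(X[:, None] - seeds[None, :], axis=2),
                           axis=1)
        # recalculate each seed as the average of its assigned vectors
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else seeds[j] for j in range(k)])
        if np.allclose(new, seeds):  # seeds stopped moving: converged
            break
        seeds = new
    return labels, seeds

# two well-separated groups of points should end up in two different clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (20, 2)), rng.normal(10.0, 0.05, (20, 2))])
labels, seeds = kmeans(X, k=2)
```

The sensitivity to initialization mentioned in the text enters through the random draw of the initial seeds.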
More complex algorithms, such as Fuzzy c-means, where each input vector can belong to more than one cluster, and artificial NNs, have been proposed in the literature [13, 11]. In the present work, we use a Kohonen network, a type of artificial NN based on unsupervised learning, to study local atomic environments. The utilization of NNs has become a standard tool for regression and classification tasks using both supervised and unsupervised learning. Since the introduction of artificial NNs in 1943 to model signal processing in the brain [14], they have been used in many fields, including Biology, Physics, Engineering, and Economics [15]. Applications of NN techniques in physics cover fields such as Particle Physics, Cosmology, Quantum Computing, and Materials Science [7]. The advantage of Kohonen networks is that they can be easily implemented, provide more robust learning and are less sensitive to noise than the k-means algorithm.
2. Kohonen networks for cluster analysis
An artificial NN is a collection of connected nodes, known as neurons, which reproduce the interactions among neurons in a biological brain. A simple feed-forward NN is represented in Fig. 1. In a feed-forward NN, the data flow from input nodes, which contain the input data, to the output nodes through layers of interconnected nodes (if any), known as hidden layers. In the example in Fig. 1, there are n = 2 input features, h = 3 nodes at the hidden layer and m = 1 output feature. The circles with white background represent bias nodes, which produce a constant input (= 1) and give the NN more flexibility. When the information moves forward, the output of each neuron is computed by some non-linear function f, known as the activation function, of the weighted sum of its inputs. Activation functions determine the final output of the NN and help normalize the output of each neuron, limiting the presence of large weights. The most common activation functions are the sigmoid function, the hyperbolic tangent function, the ReLU function, and the identity function [16]. The NN represented in Fig. 1 corresponds to the analytic form:

y_m = f( Σ_{h=0} w_{mh} f( Σ_{n=0} w_{hn} x_n ) ),   (1)

where w_{mh} and w_{hn} are the weight matrices that connect the layers. NNs are trained using the training data to determine these weights. The advantage of the architecture of a feed-forward NN is that these weights can be easily obtained using the back-propagation algorithm in supervised learning [17]: weights are determined by propagating the errors backward, from the output layer to the input layer. ClasSOMfier implements a Kohonen network. A Kohonen network is a type of self-organizing map (SOM) that was introduced in 1982 by Teuvo Kohonen for image and sound analysis [18].
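As an illustration of Eq. (1), the forward pass of the small network of Fig. 1 can be sketched as follows. The sigmoid activation and the random weight values are illustrative choices, not parameters taken from the paper:

```python
import numpy as np

def sigmoid(z):
    # a common choice for the activation function f
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hn, w_mh):
    """Feed-forward pass of Eq. (1): y_m = f(sum_h w_mh f(sum_n w_hn x_n)).

    x    : input features with the bias term prepended (x_0 = 1)
    w_hn : hidden-layer weights, shape (h, n + 1)
    w_mh : output-layer weights, shape (m, h + 1)
    """
    hidden = sigmoid(w_hn @ x)                    # h hidden activations
    hidden = np.concatenate(([1.0], hidden))      # prepend the hidden bias node
    return sigmoid(w_mh @ hidden)                 # m output features

# example matching Fig. 1: n = 2 inputs, h = 3 hidden nodes, m = 1 output
rng = np.random.default_rng(0)
x = np.concatenate(([1.0], rng.normal(size=2)))   # bias + 2 input features
w_hn = rng.normal(size=(3, 3))
w_mh = rng.normal(size=(1, 4))
y = forward(x, w_hn, w_mh)
```

With a sigmoid output activation, the single output y lies strictly between 0 and 1, which is the normalizing effect mentioned above.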
SOMs are artificial NNs trained using unsupervised learning [17]. In this method, input vectors are mapped into a discrete number of output nodes, and the algorithm is based on competitive learning: all nodes compete among themselves to be activated, and the winning node determines the category of the input vector. The objective of this method is not to predict the output vector, but to determine the output node that best represents the input data. The number of nodes is indicated as an input parameter in ClasSOMfier and depends on the degree of variability that the user wants to detect. In Section 3, we will analyze the performance of this code with a number of clusters between 2 and 5, and we will see that these numbers of nodes are enough to detect lattice defects.

Figure 1. Neural network formed by 2 input features and 1 output feature and a hidden layer formed by 3 nodes. The nodes with white background represent biases. The matrices w_{hn} and w_{mh} connect the layers.

SOMs are feed-forward NNs but with no hidden layers, i.e., the data flow from the input layer directly to the output layer, and the output layer is represented by a 1- or 2-dimensional lattice of nodes. In this work, we use a 1-dimensional lattice like that shown in Fig. 2. In this figure, the input layer is represented by the vector x_n, with n = 1, ..., and the two layers are connected through the weight matrix w_{mn}, with m = 1, ... The training algorithm proceeds as follows:

(i) Initialize each weight w_{mn} to a random value.
(ii) Set t = 1, where t is the number of the current iteration. t_max is the maximum number of iterations (epochs) over all input vectors.
(iii) Select an input vector x.
(iv) For each output node m, calculate the Euclidean distance:

D_m = √( Σ_i (x_i − w_{mi})² ).   (2)

(v) Select the node m that minimizes this distance. This node is known as the Best Matching Unit (BMU). The BMU will determine the category of the input vector.

Figure 2.
Kohonen network formed by 3 input features and 4 output nodes. The input and output layers are connected through the weight matrix w_{mn}.

(vi) In Kohonen learning, all weights are updated following the rule:

w^t_{mn} = w^{t−1}_{mn} + η(t) h_m(t) ( x_n − w^{t−1}_{mn} ),   (3)

where η(t) is the learning rate. Different alternatives have been proposed to ensure convergence and prevent oscillations [19]. In this work, the learning rate decreases with the number of iterations, t, as follows:

η(t) = η₀ / t.   (4)

Different options have also been tested, with success:

η(t) = η₀ e^{−t/t_max},   (5)
η(t) = η₀^{t/t_max}.   (6)

These options are compared in Fig. 3, where η₀ is the maximum learning rate. The function h_m(t) is known as the neighborhood distance weight and determines the variation of the weight after each step. In a Kohonen network, this parameter is higher for all nodes closer to the BMU. In this work, we use the pseudo-Gaussian neighborhood distance weight proposed by Matsushita et al. [20]:

h_m(t) = exp( −d² / σ(t)² ),   (7)

where d is the Euclidean distance between node m and the BMU, using periodic boundary conditions (PBC), and σ(t) is the neighborhood function, which depends on the current iteration t as the learning rate does.

Figure 3.
Dependence of the learning rate, η, on the iteration step t, where t_max is the maximum number of iterations.

(vii) Repeat steps (iii)-(vi) with different input vectors from the training data.
(viii) Stop if the termination criterion is met, or update t = t + 1 and move to step (iii).

ClasSOMfier is a software package for cluster analysis. Thus, knowing the positions of one atom and its neighbors, ClasSOMfier can classify it according to a common pattern shared with all atoms in the same group. The input vector must contain all the relevant information of the atomic environment. Several descriptors of local environments have been proposed in the literature, including Coulomb matrices [21], sine matrices [22], atom-centered symmetry functions [23] and Smooth Overlap of Atomic Positions (SOAP) [24]. These descriptors are invariant to rotations, translations and permutations of atoms of the same type, and provide a reliable description of local environments. In the present work, we use the energy per atom and atom-centered symmetry functions to describe local environments. These symmetry functions have proved effective in the description of atomic environments in the development of NN potentials for atomic interactions [23]. Two radial symmetry functions are used for each atom i [23]:

G¹_i = Σ_j^N f_c(R_ij),   (8)

G²_i = Σ_j^N e^{−α(R_ij − R_s)²} f_c(R_ij),   (9)

where

f_c(R_ij) = 0.5 ( cos(πR_ij/R_c) + 1 ),   (10)

and N is the number of neighbors inside the sphere with cutoff radius R_c centered at atom i. The function f_c(R_ij) is known as the cutoff function and is multiplied in Eq. 9 by Gaussians with width proportional to α and center shifted by a distance R_s. Equations 8 and 9 are radial functions, and R_c, R_s and α are model parameters. The input vector in our calculations consists of a set of 30 radial symmetry functions, G¹_i and G²_i, with six values of R_c proportional to a characteristic length l_c, R_s proportional to R_c, and values of α between 1 and 0.01 Å⁻², plus the energy per atom and the 6 numbers of neighbors in the 6 spheres with radii R_c. The number of input features necessary to describe the local environment of one atom depends on the complexity of the material.
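The radial symmetry functions of Eqs. (8)-(10) can be evaluated directly from a list of neighbor distances. The sketch below uses toy distances and illustrative values of R_c, R_s and α, not the parameters used in the paper:

```python
import numpy as np

def cutoff(Rij, Rc):
    """Cutoff function of Eq. (10): 0.5*(cos(pi*R/Rc) + 1) inside Rc, 0 outside."""
    return np.where(Rij < Rc, 0.5 * (np.cos(np.pi * Rij / Rc) + 1.0), 0.0)

def g1(Rij, Rc):
    """Radial symmetry function of Eq. (8): G1_i = sum_j f_c(R_ij)."""
    return np.sum(cutoff(Rij, Rc))

def g2(Rij, Rc, alpha, Rs):
    """Radial symmetry function of Eq. (9):
    G2_i = sum_j exp(-alpha*(R_ij - R_s)^2) * f_c(R_ij)."""
    return np.sum(np.exp(-alpha * (Rij - Rs) ** 2) * cutoff(Rij, Rc))

# distances from atom i to its neighbors (toy values)
Rij = np.array([0.4, 0.6, 0.9, 1.3])
Rc, Rs, alpha = 1.0, 0.5, 1.0   # illustrative parameters only
G1 = g1(Rij, Rc)
G2 = g2(Rij, Rc, alpha, Rs)
```

Because f_c vanishes smoothly at R_c, the neighbor at R_ij = 1.3 > R_c contributes nothing, and G¹_i and G²_i depend only on distances, which makes them invariant to rotations and translations.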
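The training loop of steps (i)-(viii) can be sketched as follows. This is a minimal illustration, assuming η(t) = η₀/t as in Eq. (4) and taking σ(t) to decay in the same way (the paper does not give σ(t) explicitly); it is not the Fortran implementation of ClasSOMfier:

```python
import numpy as np

def train_som(X, n_nodes, t_max=100, eta0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D Kohonen SOM implementing steps (i)-(viii) of Section 2.

    X       : input vectors, shape (n_samples, n_features)
    n_nodes : number of output nodes (clusters)
    """
    rng = np.random.default_rng(seed)
    # (i) initialize the weights w_mn to random values
    w = rng.uniform(X.min(), X.max(), size=(n_nodes, X.shape[1]))
    nodes = np.arange(n_nodes)
    for t in range(1, t_max + 1):                  # (ii) and (viii)
        eta = eta0 / t                             # learning rate, Eq. (4)
        sigma = sigma0 / t                         # assumed neighborhood decay
        for x in X:                                # (iii) and (vii)
            # (iv) Euclidean distance to every node, Eq. (2)
            D = np.linalg.norm(x - w, axis=1)
            bmu = np.argmin(D)                     # (v) Best Matching Unit
            # lattice distance to the BMU with periodic boundary conditions
            d = np.abs(nodes - bmu)
            d = np.minimum(d, n_nodes - d)
            h = np.exp(-(d / sigma) ** 2)          # neighborhood weight, Eq. (7)
            w += eta * h[:, None] * (x - w)        # (vi) update rule, Eq. (3)
    return w

def classify(X, w):
    """Assign each input vector to the cluster of its BMU."""
    return np.argmin(np.linalg.norm(X[:, None, :] - w[None, :, :], axis=2), axis=1)

# two well-separated sets of toy feature vectors map to two different nodes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 3)), rng.normal(10.0, 0.1, (20, 3))])
w = train_som(X, n_nodes=2)
labels = classify(X, w)
```

Unlike the k-means seeds, the neighborhood weight h_m(t) pulls all nodes toward each input early in training and shrinks over time, which is what makes the method less sensitive to the random initialization.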
3. Results and Discussion
Python is the default interfacing language through which the user interacts with the algorithm, which is implemented in Fortran. The code reads dump files from LAMMPS [25, 26] and returns xyz files that can be visualized in Ovito [27]. In this section, we use ClasSOMfier to detect lattice defects in PbTe and Mg. The structures are generated in LAMMPS after performing energy minimization or MD simulations with the potentials described in Refs. [1] and [28] for PbTe and Mg, respectively. An implementation of the classifier to separate atoms into two different groups according to their local environment is shown in Fig. 4. σ, η and the number of epochs can be added as optional arguments; in this example, their default values are used. The code is available at https://github.com/JaviFdezT/ClasSOMfier.

from classomfier import ClasSOMfier

Figure 4. Python example to run the classifier with default values. l_c is the characteristic length. The input file, dumpgb.file, includes the positions and energies of all atoms in a structure containing grain boundaries. The total number of atoms is 6447.

The performance of the ClasSOMfier package is illustrated for the detection of lattice defects in PbTe and Mg. PbTe presents a rocksalt structure, with cubic symmetry, while Mg presents an hcp structure. In this section, we will study the effect of charged particles in PbTe and characterize complex grain boundaries in Mg.

We demonstrate the use of the classifier to detect point defects and study their region of influence. The input data is generated with LAMMPS. In Fig. 5, we analyze the effect of a single vacancy in PbTe and Mg. We can see that the presence of a single vacancy has a stronger impact in PbTe than in Mg. This is caused by the fact that the total charge is not kept constant in PbTe. In Fig. 5(a), one atom of Pb has been deleted at the center of the simulation box, and a group of atoms around the vacancy is affected by this new defect (red atoms). On the other hand, the blue atoms keep their rocksalt structure and their local environment is different from that of the atoms in red. As a consequence, the two types of atoms are classified into two different groups. In Fig. 5(b), the blue atoms have been deleted to visualize the region of influence of the vacancy. As we can see in Figs. 5(b) and 5(d), the number of atoms affected by the presence of a vacancy is smaller in Mg, because a single vacancy in PbTe is a charged defect. It is important to note that the detection of lattice defects is not affected by the use of the energy per atom, which depends on the atom type, as a feature in the input vector.

In Fig. 6, we study the effect of the charge on the formation of vacancies. We compare the region of influence of a single vacancy (Fig. 6(a)), a Schottky dimer (Fig. 6(b)), and a Schottky pair (Fig. 6(c)) when the atoms forming the ideal lattice structure are not shown. A Schottky pair is formed by a pair of isolated vacancies, and a Schottky dimer is formed by 2 consecutive vacancies. The region of influence of the Schottky dimer is formed by 92 atoms, while that of a Schottky pair is formed by 160 atoms.
This difference is in good agreement with the fact that the formation energy of a Schottky pair is larger than that of a Schottky dimer.

We also demonstrate the use of ClasSOMfier to detect interstitials and study their region of influence. We analyze the effect of a single interstitial in Mg in Fig. 7, and a single interstitial and pairs of interstitials in PbTe in Fig. 8. In Figs. 7(a) and 7(b), a single interstitial is created at the center of a simulation box containing 9600 atoms. In Fig. 8, we compare the effect of a single interstitial (Fig. 8(a)), a pair of interstitials (Fig. 8(b)) and a dimer of interstitials (Fig. 8(c)). We can see that the presence of a single interstitial has a stronger impact in Mg than in PbTe; this difference occurs because Mg presents an hcp lattice, which is not isotropic. We also observe that fewer atoms are affected by dimers than by single interstitials. This difference lies in the fact that dimers are not charged defects, and therefore long-range Coulomb interactions are not present.

Figure 5. Study of the area of influence of a single vacancy in PbTe (a,b) and Mg (c,d). The vacancy is created at the center of a simulation box containing 8000 atoms, and all atoms are classified into two groups. Atoms in blue present the ideal lattice structure and atoms in red are affected by the presence of the vacancy. The red regions are isolated in (b,d).
Figure 6. Study of the area of influence of a single vacancy (a), a Schottky dimer (b), and a Schottky pair (c) in PbTe. The vacancies are created at the center of a simulation box containing 8000 atoms, and all atoms are classified into two groups. Only the atoms affected by the presence of the point defects are shown.

Figure 7. Study of the area of influence of a single interstitial in Mg. The interstitial is created at the center of a simulation box containing 9600 atoms, and all atoms are classified into two groups. Atoms in blue present the ideal lattice structure and atoms in red are affected by the presence of the interstitial. The red region is isolated in (c).

Figure 8. Study of the area of influence of a single interstitial (a), a pair of interstitials (b), and a dimer of interstitials (c) in PbTe. The interstitials are created at the center of a simulation box containing 8000 atoms, and all atoms are classified into two groups. Only the atoms affected by the presence of the point defects are shown.

We used ClasSOMfier to analyze the region of influence of a void, dislocations, and grain boundaries in PbTe in Fig. 9. In Fig. 9(a), a void is created at the center of a simulation box containing 8000 atoms, followed by relaxation using MD with the classical potential defined in Ref. [1]. ClasSOMfier identifies a thin layer around the void (in red) as the region formed by the atoms affected by the presence of the void. In Fig. 9(b), ClasSOMfier is used to characterize the region of influence of a dislocation in PbTe. The edge dislocation was generated with Atomsk [29] and relaxed with LAMMPS. The code can detect and differentiate small deviations from the ideal lattice structure, associated with the presence of the dislocation. In Fig. 9(c), ClasSOMfier differentiates atoms with the ideal lattice structure (blue) and atoms at grain boundaries (red). The polycrystalline structure is formed by randomly generated grains following the Voronoi tessellation [30]. The study and characterization of grain boundaries at the nanoscale is important to understand phenomena such as grain growth and Zener pinning [31]. Grain boundaries are metastable defects and, at finite temperatures, grains grow until a limiting grain size is reached [31]. The analysis of the interaction between grain boundaries and volume defects present in the sample, together with the study of changes in the grain boundary width and the position of the grain boundary, is crucial to understand properties at the meso- and macroscale. ClasSOMfier can isolate grain boundaries to analyze how they evolve in MD simulations.

In Fig. 10, ClasSOMfier is used to analyze a more complex structure. A twin embryo is created in Mg, and this software package is used to detect and characterize the grain boundary. The structure was generated using the Eshelby method [32, 33] and relaxed using the potential described in Ref. [28]. The final structure is shown in Fig. 10(a). The atoms forming the ideal lattice (blue) and the grain boundary (red) are shown in Fig. 10(a), and the atoms at the grain boundary are isolated in Fig. 10(b). Finally, all atoms at the grain boundary are analyzed in Fig. 10(c). There are different types of grain boundaries between the two grains due to the different relative orientations between the lattice vectors of the adjacent grains at each interface. The different types of grain boundaries are shown in Fig. 10(c): two twin boundaries (TBs) and two twin tips (TTs) forming the lateral grain boundaries, and two basal-prismatic (BP) grain boundaries and two prismatic-basal (PB) grain boundaries at the corners. The program is not able to differentiate these regions; however, it is possible to detect specific patterns for each grain boundary. In this figure, the atoms at the grain boundary are classified into five groups, shown in red, yellow, white, blue and green. It is possible to see that the red atoms are dominant at BP and non-existent at PB, while blue atoms are dominant at PB and non-existent at BP. Additionally, TB boundaries are thin and present red atoms at the center, while TT boundaries are thicker and red atoms are not at the center.
4. Conclusions
ClasSOMfier is a new code developed in Python and Fortran for cluster analysis and detection of lattice defects. Its user-friendly interface allows using it in Python within a comprehensive workflow, from LAMMPS to Ovito. It reads dump files from LAMMPS containing atom positions and energies per atom, classifies all atoms into a given number of categories, and writes output files that can be read by Ovito to visualize the different clusters. It implements a Kohonen network, which is a NN based on unsupervised learning. The utilization of NNs has become a standard tool for regression tasks in materials science [8], and we show that it is also a powerful tool for cluster analysis and detection of lattice defects. Its robust architecture can easily find patterns in lattice structures, and the results shown in Section 3 do not depend on the initial conditions, i.e., the initial weights; this problem is often found in k-means clustering.

Radial functions, coordination numbers and the energy per atom are used to build the input vector. These radial functions have been proved to be reliable structural descriptors [23]. The performance of these radial functions and the Kohonen network implemented in ClasSOMfier was studied through the analysis of the effect of point defects, dislocations, voids and grain boundaries.

Figure 9. Study of the area of influence of a void (a), a dislocation (b) and grain boundaries (c) in PbTe. The void is created at the center of a simulation box containing 8000 atoms, followed by relaxation using MD, and all atoms are classified into two groups. Atoms in blue present the ideal lattice structure and atoms in red are affected by the presence of lattice defects.

Figure 10. Study of the area of influence of a grain boundary in Mg. The atoms are classified into several groups according to their local environment. The atoms forming the grain boundary, in red, are isolated in (b), while the atoms from the other groups are shown in blue in (a). In (c), the atoms at the grain boundary are classified into five different groups to detect common patterns.
Acknowledgments
This work was supported by a research grant from Science Foundation Ireland (SFI) and the Department for the Economy Northern Ireland under the SFI-DfE Investigators Programme Partnership, Grant Number 15/IA/3160. We thank Jorge Kohanoff, Vladyslav Turlo, Yang Hu and Eduardo M. Bringa for insightful discussions.
References

[1] Fernandez-Troncoso J, Aguado-Puente P and Kohanoff J 2019 Journal of Physics: Condensed Matter ISSN 0953-8984
[2] Stukowski A 2012 Modelling and Simulation in Materials Science and Engineering ISSN 0965-0393
[3] Honeycutt J D and Andersen H C 1987 Journal of Physical Chemistry 91 4950
[4] Ackland G J and Jones A P 2006 Physical Review B ISSN 1098-0121
[5] Kelchner C L, Plimpton S J and Hamilton J C 1998 Physical Review B 58 11085 URL https://journals.aps.org/prb/pdf/10.1103/PhysRevB.58.11085
[6] Voronoi G 1908 Journal für die reine und angewandte Mathematik
[7] Carleo G, Cirac I, Cranmer K, Daudet L, Schuld M, Tegmark M and Zdeborová L 2019 Reviews of Modern Physics 91 045002 URL https://doi.org/10.1103/RevModPhys.91.045002
[8] Behler J 2016 Journal of Chemical Physics ISSN 0021-9606 URL http://dx.doi.org/10.1063/1.4966192
[9] Rasmussen C E and Williams C K I 2006 Gaussian Processes for Machine Learning (MIT Press) ISBN 026218253X
[10] Cristianini N and Shawe-Taylor J 2000 An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods (Cambridge University Press)
[11] Ceriotti M 2019 Journal of Chemical Physics
[12] MacQueen J 1967 Some methods for classification and analysis of multivariate observations Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics (Berkeley: University of California Press) pp 281–297
[13] Mingoti S A and Lima J O 2006 European Journal of Operational Research
[14] McCulloch W S and Pitts W 1943 The Bulletin of Mathematical Biophysics
[15] Rabuñal J R and Dorado J 2006 Artificial Neural Networks in Real-Life Applications (Idea Group Inc (IGI)) ISBN 1591409020
[16] Fausett L 1994 Fundamentals of Neural Networks: Architectures, Algorithms and Applications (Prentice-Hall) ISBN 0133341860
[17] Larose D T and Larose C D 2014 Discovering Knowledge in Data: An Introduction to Data Mining (Wiley) ISBN 9780470908747
[18] Kohonen T 1982 Biological Cybernetics
[19] 2019 Remote Sensing ISSN 2072-4292 (Preprint arXiv:1903.11114)
[20] Matsushita H and Nishio Y 2010 The 2010 International Joint Conference on Neural Networks (IJCNN)
[21] Rupp M, Tkatchenko A, Müller K R and von Lilienfeld O A 2012 Physical Review Letters ISSN 0031-9007
[22] Faber F, Lindmaa A, von Lilienfeld O A and Armiento R 2015 International Journal of Quantum Chemistry
[23] Behler J 2011 Journal of Chemical Physics ISSN 0021-9606
[24] Bartók A P, Kondor R and Csányi G 2013 Physical Review B (Preprint arXiv:1209.3140)
[25] http://lammps.sandia.gov
[26] Plimpton S 1995 J. Comp. Phys.
[27] Stukowski A 2010 Modelling and Simulation in Materials Science and Engineering ISSN 0965-0393
[28] Dickel D E, Baskes M I, Aslam I and Barrett C D 2018 Modelling and Simulation in Materials Science and Engineering ISSN 1361-651X
[29] Hirel P 2015 Computer Physics Communications URL http://dx.doi.org/10.1016/j.cpc.2015.07.012
[30] Aurenhammer F 1991 ACM Computing Surveys (CSUR)
[31] Humphreys F J and Hatherly M Recrystallization and Related Annealing Phenomena (Elsevier) ISBN 9788578110796
[32] Xu B, Capolungo L and Rodney D 2013 Scripta Materialia
[33] Acta Materialia 194