Time- and ensemble-average statistical mechanics of the Gaussian Network Model
Alessio Lapolla, Maximilian Vossel and Aljaž Godec
Mathematical bioPhysics Group, Max Planck Institute for Biophysical Chemistry, Am Fassberg 11, 37077 Göttingen, Germany

E-mail: [email protected]
Abstract.
We present analytical results (up to a numerical diagonalization of a real symmetric matrix) for a set of time- and ensemble-average physical observables in the non-Hookean Gaussian Network Model (GNM) – a generalization of the Rouse model to elastic networks whose links possess a certain degree of extensional and rotational stiffness. We focus on a set of coarse-grained observables that may be of interest in the analysis of GNM in the context of internal motions in proteins and mechanical frames in contact with a heat bath. A C++ computer code that implements all analytical results is made available.
Submitted to:
J. Phys. A: Math. Theor.
1. Introduction
Proteins utilize their unique dynamic character encoded in internal motions to execute a biological function [1]. These motions span fs to s time-scales, and their study thus requires a multitude of experimental and/or computational methods [1]. The most detailed, atomically resolved information about these motions comes – with a grain of salt because of an underlying approximate, empirical potential energy function – from Molecular Dynamics (MD) simulations [2, 3]. However, even if state-of-the-art hardware and highly parallel algorithms allow one to reach ms time-scales [4], a substantial time-scale gap remains. In addition, the sheer amount of detail in such tour de force simulations [4] often poses a challenge if one aims at extracting minimal, "leading order" physical principles underlying protein internal motions. Moreover, physical or even topological properties alone may accurately predict selected features of protein dynamics [5, 6].

To describe internal motions in proteins on an effective, coarse-grained level disregarding chemical details, Tirion introduced the so-called Elastic Network Model (ENM) [7], akin to the seminal works of Rouse [8] and Flory [9] in polymer physics. The basic idea underlying ENMs is an elastic network connecting those residues, more precisely their Cα atoms, that lie within a cutoff distance typically chosen in the range 7–16 Å. Subsequent works considered various alternative models, e.g. the so-called Gaussian Network Model (GNM) [10, 11] and the Anisotropic Network Model (ANM) [12, 13].

To date, elastic network models in various forms have been successfully applied (and extended) to refine NMR- [14] and X-ray crystallography-derived protein structures [15], derive NMR structural order parameters [16], investigate structural correlations [17], function [18, 19, 20, 21], conformational transitions [22], and allosteric effects [23] in proteins, and to identify and decompose protein domains [24]. Further applications involve improving Molecular Dynamics simulations [25], the study of protein evolution [26], investigations of smart polymers [27, 28], viruses [29], membrane channels [30, 31], and nucleic acids [32], as well as the prediction of rupture points in single-molecule pulling experiments [33].

Most of these works rely on "standard" Normal Mode Analysis (NMA) [34, 35], i.e. on spectral characteristics of the underlying mechanical vibration spectrum. In the particular context of proteins, NMA has been used predominantly to identify the large-scale collective motions encoded in the eigenvector corresponding to the principal eigenvalue of the Hessian. Notably, the low-frequency modes are quite insensitive to the precise value of the cutoff distance [36].

Here we go beyond and present analytical results for time- and ensemble-average characteristics of internal "reaction coordinates" in GNM in contact with a heat bath at a finite temperature. More precisely, we consider the non-Markovian dynamics of internal distances at equilibrium. Our results may be relevant for interpreting single-molecule spectroscopy data or Molecular Dynamics simulations.
2. The Gaussian Network Model
The Rouse model [8] is one of the earliest "elastic network" models of flexible linear polymers (later on extended to more general network structures [9]). It neglects excluded volume effects and hydrodynamic interactions. Within this theoretical framework beads are connected by ideal, Hookean springs with vanishing resting length (i.e. at $T=0$ the beads' positions would coincide). The strength of the springs is proportional to the temperature $T$ of the heat bath. The model does not accurately capture the features of molecules with a non-negligible internal rigidity.

ENMs [7] extend these core ideas by including a non-zero resting length, i.e. at $T=0$ the residues are assumed to have distinct positions that are fixed in space. This idea is consistent with the results of NMR and X-ray crystallography that yield a set of positions $R^0 = \{r^0_i\}$ of the $N+1$ residues, to which we refer as "the structure" of a protein (NMR experiments in fact yield an ensemble of such structures).

In GNMs a pair of residues $i,j$ within a cutoff distance (i.e. $|r^0_i - r^0_j| \le r_c$) is assumed to be connected by identical (for the sake of simplicity) but non-Hookean springs with a constant $K$. The interaction energy as a function of the particles' positions $R = \{r_i\}$ is written as
$$U_{\rm GNM}(\{r_{ij}\}) = K \sum_{\langle i,j\rangle} (r_{ij} - r^0_{ij})^T (r_{ij} - r^0_{ij}),\qquad(1)$$
where $r_{ij} \equiv r_i - r_j$ and the sum spans all connected pairs. We now introduce for convenience the deviation from the (equilibrium) "structure", $\Delta R = \{\Delta r_i \equiv r_i - r^0_i\}$. The main simplifying hypothesis of the GNM is that $\Delta R$ at temperature $T$ corresponds to an isotropic Gaussian random super-vector, i.e.
$$P(\Delta R) = \left[(2\pi)^{3(N+1)} \det(\tilde K\,\boldsymbol{\Gamma})^{-1}\right]^{-1/2} \exp\!\left(-\tfrac{1}{2}\tilde K\,\Delta R^T \boldsymbol{\Gamma}\,\Delta R\right),\qquad(2)$$
where $\tilde K \equiv K/k_BT$ is the dimensionless coupling strength (in units of thermal energy $k_BT$) and $\boldsymbol{\Gamma}$ is a $3(N+1)\times 3(N+1)$ block matrix in which each diagonal block is the connectivity (or Kirchhoff) matrix $\Gamma$ with elements
$$\Gamma_{ij} = \begin{cases} -1, & \text{if } i\neq j \text{ and } |r^0_i - r^0_j| \le r_c,\\ 0, & \text{if } i\neq j \text{ and } |r^0_i - r^0_j| > r_c,\\ -\sum_{j, j\neq i}^{N+1} \Gamma_{ij}, & \text{if } i=j.\end{cases}\qquad(3)$$
The dynamics of the beads' positions (i.e. deviations from the equilibrium "structure") is assumed to follow the Itô equation
$$d\Delta R(t) = -\xi K\,\boldsymbol{\Gamma}\,\Delta R(t)\,dt + \sqrt{2D}\,dW(t),\qquad(4)$$
where $D$ is the diffusion coefficient and $\xi \equiv D/k_BT$ the mobility, both assumed to be equal for all beads, and $dW(t)$ is the increment of the multi-dimensional Wiener process (i.e. Gaussian white noise) with zero mean and covariance $\langle dW_i(t)\,dW_j(t')\rangle = \delta_{ij}\delta(t-t')\,dt\,dt'$.

A discussion of the ANM would require a different matrix $\boldsymbol{\Gamma}$ to account for anisotropic interactions between beads. Moreover, the potential energy $U_{\rm ENM}$ would depend only on the distances between the residues [35]. We do not treat this model here.

Henceforth we measure energy in units of thermal energy $k_BT$ (i.e. $U \to U/k_BT$), distances in units of the cutoff distance $r_c$ (i.e. $\Delta R_i \to \Delta R_i/r_c$) and time in units of the diffusion time $t_D \equiv r_c^2/D$ – the time required for a bead with diffusion coefficient $D$ to diffuse a distance $r_c$ (i.e. $t \to t/t_D$).

It is convenient to pass to normal super-coordinates $Q = \{q_k\}$ via the orthogonal matrix $\hat Q$ that diagonalizes the Kirchhoff matrix, i.e. $\hat Q^T \Gamma \hat Q = \mathrm{diag}(\mu_i)$. For convenience we let $k \in \{0,\cdots,N\}$ with $\mu_0 = 0$ and $q_0$ referring to the center-of-mass motion, while $i \in \{1,\cdots,N+1\}$, such that
$$\Delta r_i = \sum_{k=0}^{N} \hat Q_{ik}\, q_k, \qquad \forall i \in \{1,\cdots,N+1\}.\qquad(5)$$
In this notation the Itô equation corresponds to the Fokker–Planck equation describing $N$ independent isotropic three-dimensional Ornstein–Uhlenbeck processes. Neglecting the center-of-mass motion we have
$$\partial_t G(Q,t|Q_0) = \sum_{k=1}^{N} \left[\nabla^2_{q_k} + \mu_k \nabla_{q_k}\!\cdot q_k\right] G(Q,t|Q_0),\qquad(6)$$
with localized initial condition $G(Q,t=0|Q_0) = \delta(Q-Q_0)$ and natural boundary conditions $\lim_{|Q|\to\infty} G(Q,t|Q_0) = 0$. We solve Eq. (6) by means of an eigendecomposition [37], yielding
$$G(Q,t|Q_0) = \sum_{N} \Psi^R_N(Q)\,\Psi^L_N(Q_0)\,e^{-\Lambda_N t},\qquad(7)$$
where $\Lambda_N$ denote the eigenvalues, $N$ being a multiset of integer triples $\{n_1,\cdots,n_N\}$ with $n_i = \{n_{ix}, n_{iy}, n_{iz}\}$, such that
$$\Lambda_N = \sum_{i=1}^{N} (n_{ix} + n_{iy} + n_{iz})\,\mu_i,\qquad(8)$$
and $\Psi^L_N(Q)$ and $\Psi^R_N(Q)$ are the corresponding left and right eigenfunctions given by
$$\Psi^L_N(Q) = \prod_{i=1}^{N} \psi_{n_i}(q_i), \qquad \Psi^R_N(Q) = P_{\rm eq}(Q) \prod_{i=1}^{N} \psi_{n_i}(q_i),\qquad(9)$$
where $P_{\rm eq}(Q) = \prod_{i=1}^{N} (\mu_i/2\pi)^{3/2}\,e^{-\mu_i q_i^2/2}$ is the equilibrium probability density function of the normal coordinates and
$$\psi_{n_i}(q_i) = \frac{H_{n_{ix}}\!\big(\sqrt{\mu_i/2}\,q_i^x\big)\,H_{n_{iy}}\!\big(\sqrt{\mu_i/2}\,q_i^y\big)\,H_{n_{iz}}\!\big(\sqrt{\mu_i/2}\,q_i^z\big)}{\sqrt{2^{\,n_{ix}+n_{iy}+n_{iz}}\,n_{ix}!\,n_{iy}!\,n_{iz}!}},\qquad(10)$$
where $q^x$, $q^y$ and $q^z$ are the components of the vector $q$, and $H_n(x)$ denotes the $n$th "physicist's" Hermite polynomial [38]. Using Mehler's formula [39],
$$\sum_{n=0}^{\infty} \frac{(y/2)^n}{n!} H_n(x) H_n(z) = \frac{1}{\sqrt{1-y^2}} \exp\!\left(\frac{2xzy - (x^2+z^2)y^2}{1-y^2}\right),\qquad(11)$$
and recalling that $\mu_0 = 0$, we can also write Eq. (7) in the closed form [40]
$$G(Q,t|Q_0) = \prod_{i=1}^{N} \left(\frac{\mu_i}{2\pi(1-e^{-2\mu_i t})}\right)^{3/2} \exp\!\left(-\frac{\mu_i (q_i - q_i^0 e^{-\mu_i t})^2}{2(1-e^{-2\mu_i t})}\right),\qquad(12)$$
where the equilibrium probability density function corresponds to
$$P_{\rm eq}(Q) \equiv \lim_{t\to\infty} G(Q,t|Q_0).\qquad(13)$$
In what follows we will use both forms of the Green's function, i.e. Eq. (7) and Eq. (12).
3. Conformational dynamics
Throughout we are interested in conformational motions encoded in the dynamics of some internal distance, e.g. the distance between two beads $i$ and $j$, $l = |r_i - r_j|$, or the distance between the centers of mass of two sets of beads $\Omega_1, \Omega_2$ with $\Omega_1 \cap \Omega_2 = \{\}$, $l_{\Omega_1,\Omega_2} = |\sum_{i\in\Omega_1} r_i/\mathrm{card}(\Omega_1) - \sum_{j\in\Omega_2} r_j/\mathrm{card}(\Omega_2)|$, where $\mathrm{card}(\Omega_i)$ is the cardinality of the set $\Omega_i$. Without loss of generality we may thus focus on the distance between two arbitrary beads. Note that in the absence of any dynamics, in an equilibrium at $T=0$, such a distance is constant and equal to $d$. Expressed in normal coordinates we in turn have
$$\vec l \equiv r_i - r_j = \sum_{k=1}^{N} (\hat Q_{ik} - \hat Q_{jk})\,q_k + r^0_i - r^0_j \equiv \sum_{k=1}^{N} A_k q_k + \vec d,\qquad(14)$$
where in the second equality we have defined $A_k$ and $\vec d$ and omitted the labels $i,j$ to simplify the notation. Note, moreover, that $l \equiv |\vec l\,|$, and the generalization to $l_{\Omega_1,\Omega_2}$ follows by linear superposition.

We will focus on four types of observables. The first one is the (non-Markovian) conditional probability density of the time series of the coordinate, $l_t$, defined via $G_d(l,t|l_0)\,dl \equiv P(l_t \in dl \mid l_{t=0} \in dl_0)$, i.e.
$$G_d(l,t|l_0) = \frac{\langle \delta(l(Q_t)-l)\,\delta(l(Q_0)-l_0)\rangle_{Q_t}}{\langle \delta(l(Q_0)-l_0)\rangle_{\rm eq}},\qquad(15)$$
with $\lim_{t\to\infty} G_d(l,t|l_0) \equiv P^{\rm eq}_d(l) = \langle \delta(l(Q_0)-l)\rangle_{\rm eq}$, and where in the second equality we have used the law of conditional probability and introduced the expectation over all Markovian paths of the full system evolving from equilibrium, $\langle\cdot\rangle_{Q_t}$, i.e.
$$\langle \mathcal{B} \rangle_{Q_t} \equiv \int dQ \int dQ_0\, \mathcal{B}(Q,Q_0)\, G(Q,t|Q_0)\, P_{\rm eq}(Q_0),\qquad(16)$$
while the expectation of any observable $\mathcal{B}(Q)$ over the equilibrium measure is $\langle \mathcal{B} \rangle_{\rm eq} \equiv \int dQ\, \mathcal{B}(Q)\, P_{\rm eq}(Q)$.
The second observable is the normalized equilibrium autocorrelation function
$$C_d(t) \equiv \frac{\langle l(t) l(0)\rangle - \langle l(t)\rangle\langle l(0)\rangle}{\langle l^2\rangle_{\rm eq} - \langle l\rangle_{\rm eq}^2},\qquad(17)$$
where we have introduced the expectations
$$\langle l(t) l(0)\rangle \equiv \langle l(Q_t)\,l(Q_0)\rangle_{Q_t} = \int_0^\infty dl \int_0^\infty dl_0\; l\,l_0\, G_d(l,t|l_0)\, P^{\rm eq}_d(l_0),$$
$$\langle l(t)\rangle \equiv \frac{\langle l(Q_t)\,\delta(l(Q_0)-l_0)\rangle_{Q_t}}{\langle \delta(l(Q_0)-l_0)\rangle_{\rm eq}} = \int_0^\infty dl\; l\, G_d(l,t|l_0),$$
$$\langle l^n\rangle_{\rm eq} \equiv \langle l^n(Q_0)\rangle_{\rm eq} = \int_0^\infty dl\; l^n\, P^{\rm eq}_d(l).\qquad(18)$$
The third observable is the $3(N+1)\times 3(N+1)$ position-covariance matrix [41] whose elements are defined as
$$C_{ij\alpha\beta}(t,t_0) = \langle (r_{i,\alpha}(t+t_0) - r^0_{i,\alpha})(r_{j,\beta}(t_0) - r^0_{j,\beta})\rangle_{Q_t},\qquad(19)$$
where $r_{i,\alpha}$ is the $\alpha = \{x,y,z\}$ component of the position vector of bead $i$, $r_i$.

The fourth observable is the fraction of occupation time (or "empirical density" [42]) of the time series $l_\tau$ evolving from $l_{\tau=0}$,
$$\theta_d(l;t) \equiv t^{-1} \int_0^t \delta(l_\tau - l)\, d\tau.\qquad(20)$$
Note that all observables defined above are assumed to evolve from equilibrium. However, except for $C_{ij\alpha\beta}(t,t_0)$, the initial distribution in fact corresponds to equilibrium constrained to a given value of the tagged distance $l_0$, i.e. to all those equilibrium configurations drawn from $P_{\rm eq}(Q)$ that are compatible with $l_0$. This introduces memory in the dynamics of $l_t$ [43].

The non-Markovian projected propagator $G_d(l,t|l_0)$ defined in Eq. (15) denotes the probability density that the distance between the two tagged beads is equal to $l$ at time $t$ given that it was initially equal to $l_0$.
Introducing the auxiliary functions
$$\eta_t \equiv \sum_{k=1}^{N} \frac{A_k^2}{\mu_k}\, e^{-\mu_k t}, \qquad \Xi_t(d,l,l') \equiv \mathrm{erfi}\!\left(\frac{d(\eta - \eta_t) + \eta_t(l+l')}{2\sqrt{\eta_t(\eta-\eta_t)}}\right),\qquad(21)$$
with $\eta \equiv \eta_0$, we find (for details of the calculation see Appendix A)
$$P^{\rm eq}_d(l_0)\,G_d(l,t|l_0) = \frac{l\,l_0\,\exp\!\left(-\frac{(l^2+l_0^2)\,\eta_t + (\eta-\eta_t)\,d^2}{\eta_t(\eta-\eta_t)}\right)}{\sqrt{\pi\eta_t}\; d\,(\eta-\eta_t)}\left[\Xi_t(d,-l,-l_0) - \Xi_t(d,-l,l_0) + \Xi_t(d,l,l_0) + \Xi_t(-d,-l,l_0)\right],\qquad(22)$$
where $\mathrm{erfi}(x)$ is the imaginary error function [38], and
$$P^{\rm eq}_d(l) = \sqrt{\frac{2}{\pi\eta}}\,\frac{l}{d}\, e^{-(l^2+d^2)/2\eta}\, \sinh\!\left(\frac{l d}{\eta}\right).\qquad(23)$$
We also derive the spectral expansion of $G_d(l,t|l_0)$, which reads (see Appendix B)
$$G_d(l,t|l_0) = V_{00}(l_0;d)^{-1} \sum_{N} V_{0N}(l_0;d)\, V_{N0}(l;d)\, e^{-\Lambda_N t},\qquad(24)$$
where the overlap elements $V_{0N}$ and $V_{N0}$ admit a closed-form expression that is, however, somewhat complicated and thus given in Appendix B. Note that the "ground state" element is simple and corresponds to $V_{00}(l;d) = P^{\rm eq}_d(l)$.

The (normalized) autocorrelation function defined in Eq. (17) is made explicit by means of the following results:
$$\langle l\rangle_{\rm eq} = \sqrt{\frac{2\eta}{\pi}}\, e^{-d^2/2\eta} + \left(d + \frac{\eta}{d}\right) \mathrm{erf}\!\left(\frac{d}{\sqrt{2\eta}}\right),\qquad(25)$$
$$\langle l^2\rangle_{\rm eq} = d^2 + 3\eta,\qquad(26)$$
while the evaluation of $\langle l(t) l(0)\rangle$ is possible only using the spectral expansion in Eq. (24) and yields
$$\langle l(t) l(0)\rangle = \sum_{N} V^d_{0N}\, V^d_{N0}\, e^{-\Lambda_N t},\qquad(27)$$
$$V^d_{0N} = \int_0^\infty dl\; l\, V_{0N}(l;d), \qquad V^d_{N0} = \int_0^\infty dl\; l\, V_{N0}(l;d).\qquad(28)$$
The analytic expressions for the coefficients $V^d_{0N}$ are lengthy and can be found in Appendix D. Plugging Eqs. (28) and (26) into Eq. (17) delivers an exact analytical result for the equilibrium distance autocorrelation function $C_d(t)$. Alternatively, one may also evaluate $C_d(t)$ by numerical integration of the first line of Eq. (18) using Eq. (22), which may in fact be numerically more convenient than implementing the analytical solution.
In the analysis of atomistic Molecular Dynamics (MD) simulations one often focuses on the position covariance matrix $C_{ij\alpha\beta}(t,t_0)$ [41] and its eigendecomposition. The trajectory derived from an MD simulation is then projected onto the eigenvector (or principal component) corresponding to the largest eigenvalue of the covariance matrix, with the aim to identify the most important (potentially functionally relevant) motion in a protein [41]. To facilitate a comparison between the aforementioned analysis of MD simulations with GNM we compute $C_{ij\alpha\beta}(t,t_0)$ analytically. Passing as before to normal coordinates we find
$$C_{ij\alpha\beta}(t,t_0) = \left\langle \sum_{k=1}^{N} \hat Q_{ik}\, q_{k\alpha}(t+t_0) \sum_{l=1}^{N} \hat Q_{jl}\, q_{l\beta}(t_0)\right\rangle,\qquad(29)$$
where the matrix elements $\hat Q_{ij}$ do not depend on the spatial coordinate because the GNM is isotropic. Each process $q_{k\alpha}$ corresponds to an independent Ornstein–Uhlenbeck process, i.e. the solution of the Itô integral [44] (setting all constants to unity)
$$q_{k\alpha}(t) = \sqrt{2} \int_{-\infty}^{t} e^{-\mu_k (t-s)}\, dW_{k\alpha}(s).\qquad(30)$$
Since by construction (i.e. as a result of isotropy) only the elements of the same spatial coordinate for any given normal mode survive the averaging in Eq. (29), the elements of the covariance matrix read explicitly
$$C_{ij\alpha\alpha}(t,t_0) = \sum_{k=1}^{N} \frac{\hat Q_{ik} \hat Q_{jk}}{\mu_k}\, e^{-\mu_k |t-t_0|}.\qquad(31)$$
Obviously $C_{ij\alpha\alpha}(t,t_0)$ is stationary (i.e. it depends only on the time difference, $C_{ij\alpha\alpha}(t,t_0) = C_{ij\alpha\alpha}(|t-t_0|)$).

Single-molecule experiments typically probe time-averaged observables. For example, Förster resonance energy transfer (FRET) [45] and plasmon ruler experiments [46] have been used to extract information about conformational motions of macromolecules. A fundamental quantity that underlies this kind of observables is the fraction of occupation time, $\theta_d(l;t)$, defined in Eq. (20) [47, 48, 49, 50, 51, 42] – the random fraction of time a time series (in our case an internal distance between two beads or between two centers of mass) of length $t$ attains a given value of $l$.

In previous publications we have shown how to obtain the mean and the variance of $\theta_d(l;t)$ [51, 42]. Along these lines we here focus on the mean, $\langle\theta_d(l;t)\rangle$, and the variance, $\sigma^2_{\theta;d}(l;t) \equiv \langle\theta_d(l;t)^2\rangle - \langle\theta_d(l;t)\rangle^2$, of the occupation time fraction at equilibrium, which read, respectively (for a derivation see Appendix F),
$$\langle\theta_d(l;t)\rangle = P^{\rm eq}_d(l),\qquad(32)$$
$$\sigma^2_{\theta;d}(l,t) = \frac{2}{t} \sum_{N\neq 0} \frac{V_{0N}(l;d)\, V_{N0}(l;d)}{\Lambda_N} \left(1 - \frac{1-e^{-\Lambda_N t}}{\Lambda_N t}\right).\qquad(33)$$
Note that $\langle\theta_d(l;t)\rangle$ corresponds to the equilibrium probability density for all times $t$, since we are considering an ergodic system evolving from equilibrium initial conditions. The variance of the occupation time fraction can equivalently be obtained from (see e.g. [52])
$$\sigma^2_{\theta;d}(l,t) = \frac{2}{t}\, P^{\rm eq}_d(l) \int_0^t (1-\tau/t) \left[G_d(l,\tau|l) - P^{\rm eq}_d(l)\right] d\tau.\qquad(34)$$
The integral in Eq. (34) does not admit an explicit solution. However, it can easily be computed via numerical quadrature. Moreover, it is possible to expand $G_d(l,\tau|l)$ for short times (details are given in Appendix E), yielding the small-deviation limit
$$G_d(l,t|l) \underset{t\to 0}{=} \frac{1}{2\sqrt{\pi}}\left(\frac{1}{\sqrt{\kappa t}} + \frac{\sqrt{\kappa t}}{l^2}\right) + \mathcal{O}(t^{3/2}),\qquad(35)$$
where we have introduced the shorthand notation $\kappa = \sum_{k=1}^N A_k^2$. Plugging Eq. (35) into Eq. (34) and performing the integral in turn yields
$$\sigma^2_{\theta;d}(l,t) \underset{t\to 0}{\simeq} P^{\rm eq}_d(l) \left(\frac{4}{3\sqrt{\pi\kappa t}} + \frac{4}{15\, l^2}\sqrt{\frac{\kappa t}{\pi}} - P^{\rm eq}_d(l)\right).\qquad(36)$$
Since the dynamics of every stable system at equilibrium can be "linearized" for sufficiently small times $t$, the small-deviation asymptotics in Eqs. (36) and (35) are in fact a general result for the (large) fluctuations of $\theta_d(l;t)$ at sufficiently short times.
4. Examples
We now apply the results of the previous section to the analysis of a Gaussian Network Model of a protein called adenylate kinase, and to the analysis of toy-model mechanical frames.

Figure 1.
Panels (a) and (b) depict a cartoon and the molecular surface (gray) of the two protein structures, called (a) "the closed" configuration 1AKE and (b) "the open" configuration 4AKE. Panels (c) and (d) show the corresponding connectivity matrices for 1AKE and 4AKE, respectively. The blue and cyan squares enclose, respectively, the NMP and LID residues. The cutoff distance used to obtain these matrices was 8 Å.
Adenylate kinase (ADK) is an enzyme catalyzing the reversible phosphorylation reaction that transforms adenosine monophosphate (AMP) to adenosine triphosphate (ATP). The structure of ADK has been resolved using X-ray crystallography, which uncovered two distinct conformations of the protein that are deposited in the Protein Data Bank (PDB ID: 1AKE [53] and PDB ID: 4AKE [54]) and shown in Fig. 1. ADK consists of 214 residues divided into three macro-domains, called CORE (residues 1−29, 68−…), NMP and LID (cf. the blue and cyan squares in Fig. 1).

Table 1.
Distance between the centers of mass of the three domains for both structures of ADK. All distances are expressed in units of the cutoff distance $r_c = 8$ Å.

d [r_c]      1AKE   4AKE
CORE-LID     2.…    …
CORE-NMP     …      …
LID-NMP      …      …

The GNMs of both structures were obtained with a cutoff distance $r_c = 8$ Å. The static (zero-temperature) distances between the centers of mass of the three domains in both structures are given in Table 1.

Fig. 2 shows the equilibrium probability density function $P^{\rm eq}_d(l)$ (panels a and b) as well as the autocorrelation function $C_d(t)$ (panels c and d) for all considered distances of the two GNMs representing the two conformational states of ADK. The structure 1AKE is evidently more compact than 4AKE, and its corresponding autocorrelation functions consistently decay faster. Moreover, the CORE-NMP distance autocorrelation function decays faster compared to the other two distances, whose autocorrelation functions are almost identical (see Fig. 2d). This difference in relaxation is a result of differences in the respective projection, i.e. whereas the eigenvalues of the underlying generator are identical (see Eq. (28)), the numerical coefficients $V^d_{0N}$ and $V^d_{N0}$ depend strongly on the particular type of projection and thus modify the relaxation rate substantially [59].

The lines in Figs. 2c) and 2d) have been obtained by means of a numerical integration of the first line of Eq. (18) using Gauss–Kronrod quadrature [60]. Unfortunately, the evaluation of the integrand is challenging for very short times because it is a function sharply peaked along the diagonal of the $(l,l_0)$-plane. This feature prevents us from reliably obtaining (that is, due to numerical imprecision) the autocorrelation function at very short times.

Next we inspect the covariance matrix in Eq. (31) to identify the dominant, potentially functionally important, motions in ADK.
Figure 2. Panels (a) and (b) show the equilibrium probability density function for the three center-of-mass distances for both structures (see Table 1 for the numerical values of $d$). Panels (c) and (d) depict the respective distance autocorrelation functions. Note that the first structure is more compact and its $C_d(t)$ decorrelates faster. Moreover, panel (d) reveals that the CORE-NMP distance decorrelates faster than the remaining two distances.

In order to reduce the information content, while retaining the most essential physics about the extent of local fluctuations and about how strongly the motion of each bead is correlated with the motion of the other beads, we introduce the covariance-time defined as
$$\tau_{ij\alpha} = \int_0^\infty C_{ij\alpha\alpha}(t)\, dt,\qquad(37)$$
which may be interpreted in a manner analogous to the correlation time [61, 62, 63], i.e. as a measure of how much the motion of beads $i$ and $j$ is correlated over time. To measure how much the motion of a single bead is correlated with the rest of the system, we consider the total covariance-time $\tau^{\rm tot}_{i,\alpha} \equiv \sum_{j\neq i} |\tau_{ij\alpha}|$. Conversely, the total variance-time is quantified directly by $\tau_{ii\alpha}$. Note that the model is isotropic and the results are thus independent of $\alpha$. The results are shown in Fig. 3.

Notably, one can immediately observe that those residues that are involved in the large-scale open-closed motion (i.e. residues with a large $\tau_{ii\alpha}$) also participate in correlated motions, denoted by large values of $\tau^{\rm tot}_{i,\alpha}$. For the open structure 4AKE (see Fig. 3 a and b) the two ends of the LID and NMP domains move in a particularly correlated fashion. These residues are in fact those that move towards the core region in the functional open-closed motion of the protein [55, 56]. A remnant of this collective motion can also be seen in the closed structure 1AKE (Fig. 3 c and d), where the same beads as in 4AKE have a larger $\tau^{\rm tot}_{i,\alpha}$. This is likely a result of a higher local connectivity.

Figure 3. (a) and (c) depict $\tau_{ii\alpha}$, and (b) and (d) $\tau^{\rm tot}_{i,\alpha}$, for each bead in the 4AKE and 1AKE structures, respectively. Notably, the beads in the LID and NMP domains in the 4AKE structure display a particularly large covariance-time.
Although GNMs were originally developed to describe proteins, they can in fact be used to model any mechanical system in which some underlying network of links imposes constraints on the positions of nodes while allowing small, Gaussian fluctuations driven by thermal noise. Examples may include nano-machines such as piezoelectric actuators that move probe-tips in atomic force microscopes [64, 65].

In the generic context of "mechanical frames" the theory of structural rigidity deals with the question of whether frames are rigid or not [66]. A frame is said to be rigid if one cannot change the distance between pairs of nodes without simultaneously altering the

Figure 4.
A schematic representation of a stable (left) and an unstable (right) frame that we consider below.

length of at least one connection. A structure that is not rigid is in turn said to allow for inextensional mechanisms. These arise due to too low a number, or a particular arrangement, of links. In addition, in frames with redundant links there exist states of self-stress. Under given circumstances these states of self-stress impart stiffness to inextensional mechanisms [67].

As anticipated by Maxwell, such a classification of mechanical frames is often non-trivial and may require more information than is encoded in the topology of the network [68]. A complete analysis of the mechanisms of a given frame can be obtained by a "singular value decomposition" of the respective Equilibrium Matrix $A$ [69] that relates forces $f$ on the nodes with tensions $t$ in the links,
$$A t = f.\qquad(38)$$
Singular value decomposition of $A$ allows one (amongst other things) to determine the rank $r$ of $A$ and thereby the number of inextensional mechanisms $m$ and states of self-stress $s$ via $s = b - r$ and $m = 3j - 6 - r$, where $j$ is the number of joints and $b$ the number of links in the structure; note that there are in general 6 rigid-body motions in 3 spatial dimensions. Maxwell's well-known formula $b = 3j - 6$ thereby generalizes to
$$3j - b - 6 = m - s.\qquad(39)$$
To illustrate the concept we consider the two toy-model frames depicted in Fig. 4. Both have $j = 4$ nodes and $s = 0$ states of self-stress. The rigid structure with $b = 6$ links has no inextensional mechanism (i.e. $m = 3\cdot 4 - 6 - 6 = 0$), while the structure with $b = 5$ links has exactly $m = 1$ mechanism (i.e. $m = 3\cdot 4 - 6 - 5 = 1$). We tag a pair of beads and consider the fluctuations of the distance between them around its equilibrium value. Notably, in a GNM such distance fluctuations do not depend on the full equilibrium structure $R^0$. Only the equilibrium distance between the tagged beads, $d = |r^0_i - r^0_j|$, is relevant.
Figure 5. Distance autocorrelation function $C_d(t)$ for various values of the rest length $d$ for the rigid (top panel) and non-rigid (bottom panel) frames depicted in Fig. 4. The black dots depict $C_d(t)$ in the Rouse limit $d = 0$ (see Appendix D for details). The vertical dashed lines correspond to the time $t_c$ at which $C_d(t_c) = e^{-1}$. Note that the unstable structure relaxes more slowly.

In turn there is a redundancy – many distinct equilibrium structures $R^0$ may yield the same result, which depends only on the connectivity matrix $\Gamma$ and $d$.

The (normalized) distance autocorrelation function $C_d(t)$ (see Eq. (17)) for the two frames is shown in Fig. 5. For sufficiently small $d$, $C_d(t)$ depends only very weakly on $d$. For larger values of $d$ the relaxation time (see dashed vertical lines in Fig. 5) increases. This observation may be explained by noticing that entropy dominates the motion for small $d$. That is, in the limit of small $d$ the rest length may be neglected and the "Rouse limit" suffices to explain the dynamics essentially quantitatively. Conversely, as $d$ increases a certain "stiffness" emerges in the frame and the (random) oscillations become localized around the equilibrium value $d$. Note that the entropic contribution to $C_d(t)$ is more important for the non-rigid frame as we increase the value of $d$ (see Fig. 5b)). Conversely, the departure from the Rouse limit towards the "large stiffness" case is faster in the stable frame (see Fig. 5a)). A larger $d$ leads to a slower decay of the autocorrelation function $C_d(t)$.

Next we consider the fraction of occupation time $\theta_d(l;t)$ [42]. We assume that the

Figure 6.
Panels (a)-(c) show the equilibrium probability density $P^{\rm eq}_d(l)$ for the stable (full lines) and unstable (dashed lines) structure for several values of $d$. Panels (d)-(f) depict the variance of the occupation time, $\sigma^2_{\theta;d}(l,t)$, for the stable (full lines) and unstable (dashed lines) structure, respectively, for different values of $d$. The length of the trajectory $t$ increases from (d) to (f).

initial condition evolves from equilibrium and therefore $\langle\theta_d(l;t)\rangle = P^{\rm eq}_d(l)$, whereas $\sigma^2_{\theta;d}(l,t)$ depends on time (see Eq. (33) as well as [51, 42]). The aforementioned dominance of the entropic (heat bath) contribution at small values of $d$ is also noticeable in the fluctuations of $\theta_d(l;t)$, as depicted in Fig. 6. Notably, as $d$ increases the support of $\sigma^2_{\theta;d}(l,t)$ progressively shifts towards larger $l$ and concentrates near $d$.

Notably, the variance of the occupation time fraction $\sigma^2_{\theta;d}(l,t)$ changes from a unimodal shape at short times $t$ to a bimodal one at long $t$. Such a behavior is characteristic of stochastic processes in spatial confinement [42], i.e. fluctuations of $\theta_d(l;t)$ are larger in the vicinity of confining boundaries (even if these boundaries are "soft").

Moreover, as $d$ increases the shapes of both $P^{\rm eq}_d(l)$ and $\sigma^2_{\theta;d}(l,t)$ become more symmetric. The reason seems to be that the effect of the confining boundary at $l = 0$ becomes irrelevant as the support of $\sigma^2_{\theta;d}(l,t)$ begins to concentrate near a substantial $d$. In other words, although the projection of the dynamics of a link in 3-dimensional space onto a (1-dimensional) distance destroys the Gaussian behaviour, the latter becomes (partially) restored at large values of $d$.
5. Conclusions
We presented analytical results (up to a numerical diagonalization of a symmetric matrix) for a selection of relevant time- and ensemble-average physical observables in the Gaussian Network Model (GNM). One may think of the GNM as a certain generalization of the Rouse model to networks with links possessing a certain degree of extensional and rotational stiffness. We determined a set of coarse-grained observables – internal distances – that may be of interest in the analysis of GNM in the context of internal motions in proteins or mechanical frames in contact with a heat bath. We hope that our results will enable and motivate a more systematic analysis of GNMs derived from proteins [58]. To this end a C++ computer code that implements all results is provided in the Supplementary material (for more details about the implementation see Appendix G).
Acknowledgments
The authors thank David Hartich and Lars Bock for useful discussions. Financial support from the German Research Foundation (DFG) through the Emmy Noether Program GO 2762/1-1 (to AG) is gratefully acknowledged.
Appendix A. Derivation of the equilibrium probability density
The equilibrium probability density function of any link vector \mathbf{l} is defined by

P^{\rm eq}_{d}(\mathbf{l}) \equiv V_{00}(\mathbf{l};\mathbf{d}) = \int {\rm d}\mathbf{Q}\, \Psi^{R}_{0}(\mathbf{Q})\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_k + \mathbf{d} - \mathbf{l}\Big)\, \Psi^{L}_{0}(\mathbf{Q}).   (A.1)

Applying the Fourier transform \tilde f(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}{\rm d}x\, f(x)\,{\rm e}^{-{\rm i}sx} component-wise to Eq. (A.1) (i.e. \mathbf{l} \to \mathbf{s}) yields

\frac{1}{(2\pi)^3} \int {\rm d}\mathbf{Q} \prod_{k=1}^{N} \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \exp\Big(-\sum_{k=1}^{N} \Big[\frac{\mu_k \mathbf{q}_k^2}{2} + {\rm i}(A_k \mathbf{q}_k + \mathbf{d})\cdot\mathbf{s}\Big]\Big)   (A.2)

= \frac{1}{(2\pi)^3}\, {\rm e}^{-s^2 \sum_{k=1}^{N} A_k^2/(2\mu_k) - {\rm i}\mathbf{d}\cdot\mathbf{s}}.   (A.3)

Inverting the Fourier transform we obtain, defining \eta \equiv \sum_{k=1}^{N} A_k^2/(2\mu_k),

P^{\rm eq}_{d}(\mathbf{l}) = V_{00}(\mathbf{l};\mathbf{d}) = \frac{1}{(2\pi)^3}\Big(\frac{\pi}{\eta}\Big)^{3/2} {\rm e}^{-(\mathbf{l}-\mathbf{d})^2/(4\eta)} = (4\pi\eta)^{-3/2}\, {\rm e}^{-(\mathbf{l}-\mathbf{d})^2/(4\eta)}.   (A.4)

Since we are only interested in the distance and not the direction we marginalize over the angles, i.e.

P^{\rm eq}_{d}(l) = \int_{0}^{\infty} {\rm d}x\, x^2 \int_{0}^{2\pi} {\rm d}\varphi \int_{-1}^{1} {\rm d}(\cos\theta)\, V_{00}(\mathbf{x};\mathbf{d})\, \delta(|\mathbf{x}| - l),   (A.5)

where \theta is the polar angle, \varphi is the azimuthal angle, and without loss of generality we choose a frame of reference such that the vector \mathbf{d} is parallel to the z-axis. The solution of this integral finally gives Eq. (23):

P^{\rm eq}_{d}(l) = \bar V_{00}(l;d) = \frac{1}{\sqrt{\pi\eta}}\, \frac{l}{d}\, {\rm e}^{-(l^2+d^2)/(4\eta)} \sinh\Big(\frac{l d}{2\eta}\Big).   (A.6)

Appendix B. Spectral solution for G_d

In the spectral solution for the Green's function in Eq. (24) we have defined the elements V_{0N}(\mathbf{l};\mathbf{d}) and V_{N0}(\mathbf{l};\mathbf{d}), which are derived as follows. Let

V_{0N}(\mathbf{l};\mathbf{d}) = \int {\rm d}\mathbf{Q}\, \Psi^{R}_{N}(\mathbf{Q})\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_k + \mathbf{d} - \mathbf{l}\Big)\, \Psi^{L}_{0}(\mathbf{Q}),   (B.1)

V_{N0}(\mathbf{l};\mathbf{d}) = \int {\rm d}\mathbf{Q}\, \Psi^{R}_{0}(\mathbf{Q})\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_k + \mathbf{d} - \mathbf{l}\Big)\, \Psi^{L}_{N}(\mathbf{Q}).   (B.2)

Fortunately, the above elements V_{0N} and V_{N0} are identical (cf. Eq. (9)). Therefore what we need to solve for is

V_{N0}(\mathbf{l};\mathbf{d}) = \prod_{k=1}^{N} \int {\rm d}\mathbf{q}_k \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \Big(2^{n_{kx}+n_{ky}+n_{kz}}\, n_{kx}!\, n_{ky}!\, n_{kz}!\Big)^{-1/2} H_{n_{kx}}\Big(\sqrt{\tfrac{\mu_k}{2}}\, q_{xk}\Big) H_{n_{ky}}\Big(\sqrt{\tfrac{\mu_k}{2}}\, q_{yk}\Big) H_{n_{kz}}\Big(\sqrt{\tfrac{\mu_k}{2}}\, q_{zk}\Big)\, {\rm e}^{-\mu_k \mathbf{q}_k^2/2}\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_k + \mathbf{d} - \mathbf{l}\Big).   (B.3)

Performing the Fourier transform \mathbf{l} \to \mathbf{s} we obtain

\frac{{\rm e}^{-{\rm i}\mathbf{d}\cdot\mathbf{s}}}{(2\pi)^3} \prod_{k=1}^{N} \int {\rm d}\mathbf{q}_k \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \Big(2^{n_{kx}+n_{ky}+n_{kz}}\, n_{kx}!\, n_{ky}!\, n_{kz}!\Big)^{-1/2} \prod_{h=1}^{3} H_{n_{kh}}\Big(\sqrt{\tfrac{\mu_k}{2}}\, q_{hk}\Big)\, {\rm e}^{-\mu_k \mathbf{q}_k^2/2 - {\rm i}A_k \mathbf{s}\cdot\mathbf{q}_k}.   (B.4)

Factorizing in the three spatial dimensions, completing the square in the exponential, and changing the variable to t_{hk} = \sqrt{\mu_k/2}\, q_{hk} (h denotes the respective spatial coordinate) we find

\frac{{\rm e}^{-{\rm i}\mathbf{d}\cdot\mathbf{s}}}{(2\pi)^3} \prod_{k=1}^{N} \prod_{h=1}^{3} \frac{1}{\sqrt{\pi\, 2^{n_{kh}}\, n_{kh}!}}\, {\rm e}^{-s_h^2 A_k^2/(2\mu_k)} \int_{-\infty}^{\infty} {\rm d}t_{hk}\, H_{n_{kh}}(t_{hk}) \exp\Big(-\Big[t_{hk} - \Big(-\frac{{\rm i}A_k s_h}{\sqrt{2\mu_k}}\Big)\Big]^2\Big),   (B.5)

and, carrying out the integrals with \int_{-\infty}^{\infty} H_n(t)\,{\rm e}^{-(t-y)^2}{\rm d}t = \sqrt{\pi}\,(2y)^n [70],

\frac{{\rm e}^{-{\rm i}\mathbf{d}\cdot\mathbf{s}}}{(2\pi)^3} \prod_{k=1}^{N} \prod_{h=1}^{3} \frac{1}{\sqrt{n_{kh}!}} \Big(\frac{-{\rm i}A_k s_h}{\sqrt{\mu_k}}\Big)^{n_{kh}} {\rm e}^{-s_h^2 A_k^2/(2\mu_k)}.   (B.6)

It turns out to be convenient to write Eq. (B.6) as

\frac{1}{(2\pi)^3} \Bigg[\prod_{k=1}^{N} \prod_{h=1}^{3} \frac{1}{\sqrt{n_{kh}!}} \Big(\frac{-{\rm i}A_k}{\sqrt{\mu_k}}\Big)^{n_{kh}}\Bigg] \prod_{h=1}^{3} s_h^{\sum_{k=1}^{N} n_{kh}}\, {\rm e}^{-s_h^2 \sum_{k=1}^{N} A_k^2/(2\mu_k) - {\rm i}s_h d_h},   (B.7)

and to define‡

M \equiv \prod_{k=1}^{N} \prod_{h=1}^{3} \frac{1}{\sqrt{n_{kh}!}} \Big(\frac{-{\rm i}A_k}{\sqrt{\mu_k}}\Big)^{n_{kh}} = \Bigg(\prod_{k=1}^{N}\prod_{h=1}^{3} n_{kh}!\Bigg)^{-1/2} (-{\rm i})^{\sum_{k=1}^{N}\sum_{h=1}^{3} n_{kh}} \prod_{k=1}^{N} \Big(\frac{A_k}{\sqrt{\mu_k}}\Big)^{n_{kx}+n_{ky}+n_{kz}}.   (B.8)

We now invert the Fourier transform,

M \prod_{h=1}^{3} \int_{-\infty}^{\infty} {\rm d}s_h\, s_h^{\sum_{k=1}^{N} n_{kh}}\, {\rm e}^{-\eta s_h^2 + {\rm i}s_h (l_h - d_h)}.
(B.9)

Completing the square in the exponential and defining t_h = \sqrt{\eta}\, s_h, we can write

M \prod_{h=1}^{3} \frac{{\rm e}^{-(l_h-d_h)^2/(4\eta)}}{\sqrt{\eta}^{\,\sum_{k=1}^{N} n_{kh}+1}} \int_{-\infty}^{\infty} {\rm d}t_h\, t_h^{\sum_{k=1}^{N} n_{kh}}\, {\rm e}^{-\big(t_h - {\rm i}(l_h-d_h)/(2\sqrt{\eta})\big)^2}.   (B.10)

The integral in the previous equation can be solved analytically [70]:

M \prod_{h=1}^{3} \frac{{\rm e}^{-(l_h-d_h)^2/(4\eta)}}{\sqrt{\eta}^{\,\sum_{k=1}^{N} n_{kh}+1}}\, \sqrt{\pi}\, (2{\rm i})^{-\sum_{k=1}^{N} n_{kh}} (-1)^{\sum_{k=1}^{N} n_{kh}}\, H_{\sum_{k=1}^{N} n_{kh}}\Big(\frac{l_h - d_h}{2\sqrt{\eta}}\Big).   (B.11)

Using the definition of M in Eq. (B.8), and defining N_h \equiv \sum_{k=1}^{N} n_{kh} and \mathcal{N} \equiv N_x + N_y + N_z, we arrive at the following form of Eq. (B.3):

V_{N0}(\mathbf{l};\mathbf{d}) = \frac{1}{8\pi^{3/2}} \Bigg(\prod_{k=1}^{N} n_{kx}!\, n_{ky}!\, n_{kz}!\Bigg)^{-1/2} \prod_{k=1}^{N} \Big(\frac{A_k}{2\sqrt{\mu_k}}\Big)^{n_{kx}+n_{ky}+n_{kz}} \frac{1}{\sqrt{\eta}^{\,\mathcal{N}+3}}\, H_{N_x}\Big(\frac{l_x-d_x}{2\sqrt{\eta}}\Big) H_{N_y}\Big(\frac{l_y-d_y}{2\sqrt{\eta}}\Big) H_{N_z}\Big(\frac{l_z-d_z}{2\sqrt{\eta}}\Big)\, {\rm e}^{-(\mathbf{l}-\mathbf{d})^2/(4\eta)};   (B.12)

in what follows we abbreviate the constant prefactor as C_N \equiv \big(\prod_k n_{kx}!\, n_{ky}!\, n_{kz}!\big)^{-1/2} \prod_k \big(A_k/(2\sqrt{\mu_k})\big)^{n_{kx}+n_{ky}+n_{kz}}. To integrate over the angular part it is convenient to use the following expansion of the Hermite polynomials [38]:

H_n(x) = n! \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^m}{m!\,(n-2m)!}\, (2x)^{n-2m}.   (B.13)

‡ We use the convention 0^0 = 1 for the terms \big(A_k/\sqrt{\mu_k}\big)^{n_{kx}+n_{ky}+n_{kz}}.

Choosing a frame of reference with z \parallel \mathbf{d} (i.e. d_x = 0, d_y = 0 and d_z = d) we find

V_{N0}(\mathbf{l};\mathbf{d}) = \frac{C_N}{8\pi^{3/2}}\, \frac{N_x!\, N_y!\, N_z!}{\sqrt{\eta}^{\,\mathcal{N}+3}}\, {\rm e}^{-(l^2+d^2)/(4\eta)}\, {\rm e}^{\mathbf{l}\cdot\mathbf{d}/(2\eta)} \sum_{a=0}^{\lfloor N_x/2 \rfloor} \sum_{b=0}^{\lfloor N_y/2 \rfloor} \sum_{c=0}^{\lfloor N_z/2 \rfloor} \frac{(-1)^{a+b+c}}{a!\,b!\,c!\,(N_x-2a)!\,(N_y-2b)!\,(N_z-2c)!} \Big(\frac{l_x}{\sqrt{\eta}}\Big)^{N_x-2a} \Big(\frac{l_y}{\sqrt{\eta}}\Big)^{N_y-2b} \Big(\frac{l_z-d}{\sqrt{\eta}}\Big)^{N_z-2c}.   (B.14)

Using the binomial expansion of ((l_z-d)/\sqrt{\eta})^{N_z-2c} we obtain

V_{N0}(\mathbf{l};\mathbf{d}) = \frac{C_N}{8\pi^{3/2}}\, \frac{N_x!\, N_y!\, N_z!}{\sqrt{\eta}^{\,\mathcal{N}+3}}\, {\rm e}^{-(l^2+d^2)/(4\eta)}\, {\rm e}^{\mathbf{l}\cdot\mathbf{d}/(2\eta)} \sum_{a,b,c} \frac{(-1)^{a+b+c}}{a!\,b!\,c!\,(N_x-2a)!\,(N_y-2b)!} \Big(\frac{l_x}{\sqrt{\eta}}\Big)^{N_x-2a} \Big(\frac{l_y}{\sqrt{\eta}}\Big)^{N_y-2b} \sum_{m=0}^{N_z-2c} \frac{1}{m!\,(N_z-2c-m)!} \Big(\frac{l_z}{\sqrt{\eta}}\Big)^{N_z-2c-m} \Big(\frac{-d}{\sqrt{\eta}}\Big)^{m}.   (B.15)

At this point we can integrate over the angles of \mathbf{l} at fixed length l, hence

\bar V_{N0}(l;d) = \frac{C_N}{8\pi^{3/2}}\, \frac{N_x!\, N_y!\, N_z!}{\sqrt{\eta}^{\,\mathcal{N}+3}}\, l^2\, {\rm e}^{-(l^2+d^2)/(4\eta)} \sum_{a,b,c} \frac{(-1)^{a+b+c}}{a!\,b!\,c!\,(N_x-2a)!\,(N_y-2b)!} \Big(\frac{l}{\sqrt{\eta}}\Big)^{N_x+N_y-2(a+b)} \int_{0}^{2\pi} {\rm d}\varphi\, (\cos\varphi)^{N_x-2a} (\sin\varphi)^{N_y-2b} \sum_{m=0}^{N_z-2c} \frac{1}{m!\,(N_z-2c-m)!} \Big(\frac{l}{\sqrt{\eta}}\Big)^{N_z-2c-m} \Big(\frac{-d}{\sqrt{\eta}}\Big)^{m} \int_{0}^{\pi} {\rm d}\theta\, (\sin\theta)^{N_x+N_y-2(a+b)+1} (\cos\theta)^{N_z-2c-m}\, {\rm e}^{l d \cos\theta/(2\eta)}.   (B.16)

The first integral is

\int_{0}^{2\pi} {\rm d}\varphi\, \cos^n\varphi\, \sin^m\varphi = \frac{2\pi\, n!\, m!}{2^{n+m}\, (n/2)!\, (m/2)!\, ((n+m)/2)!},   (B.17)

provided that n and m are even, and vanishes otherwise [71]. Therefore N_x and N_y must be even. The second integral reads [71]

\int_{0}^{\pi} {\rm d}\theta\, (\sin\theta)^n (\cos\theta)^m\, {\rm e}^{k\cos\theta} = \frac{\sqrt{\pi}}{2}\, \Gamma\Big(\frac{n+1}{2}\Big) \Bigg[ \big(1+(-1)^m\big)\, \Gamma\Big(\frac{m+1}{2}\Big)\, {}_1\tilde F_2\Big(\frac{m+1}{2}; \frac{1}{2}, \frac{m+n+2}{2}; \frac{k^2}{4}\Big) + \big(1-(-1)^m\big)\, \frac{k}{2}\, \Gamma\Big(\frac{m+2}{2}\Big)\, {}_1\tilde F_2\Big(\frac{m+2}{2}; \frac{3}{2}, \frac{m+n+3}{2}; \frac{k^2}{4}\Big) \Bigg],   (B.18)

where we have introduced Euler's gamma function \Gamma(x) as well as the regularized hypergeometric function {}_p\tilde F_q(a_1,\dots,a_p; b_1,\dots,b_q; x) [38]. Putting everything together we finally arrive at

\bar V_{N0}(l;d) = \frac{C_N}{8}\, \frac{N_x!\, N_y!\, N_z!}{\sqrt{\eta}^{\,\mathcal{N}+1}}\, {\rm e}^{-(l^2+d^2)/(4\eta)} \sum_{a=0}^{N_x/2} \sum_{b=0}^{N_y/2} \sum_{c=0}^{\lfloor N_z/2 \rfloor} \frac{(-1)^{a+b+c}}{a!\,b!\,c!\,\big(\frac{N_x-2a}{2}\big)!\,\big(\frac{N_y-2b}{2}\big)!\, 2^{N_x+N_y-2(a+b)}} \sum_{m=0}^{N_z-2c} \frac{1}{m!\,(N_z-2c-m)!} \Big(\frac{l}{\sqrt{\eta}}\Big)^{\mathcal{N}-2(a+b+c)-m+2} \Big(\frac{-d}{\sqrt{\eta}}\Big)^{m} \Bigg[ \big(1+(-1)^{N_z-2c-m}\big)\, \Gamma\Big(\frac{N_z-2c-m+1}{2}\Big)\, {}_1\tilde F_2\Big(\frac{N_z-2c-m+1}{2}; \frac{1}{2}, \frac{\mathcal{N}-2(a+b+c)-m+3}{2}; \frac{l^2 d^2}{16\eta^2}\Big) + \big(1-(-1)^{N_z-2c-m}\big)\, \frac{l d}{4\eta}\, \Gamma\Big(\frac{N_z-2c-m+2}{2}\Big)\, {}_1\tilde F_2\Big(\frac{N_z-2c-m+2}{2}; \frac{3}{2}, \frac{\mathcal{N}-2(a+b+c)-m+4}{2}; \frac{l^2 d^2}{16\eta^2}\Big) \Bigg],   (B.19)

which finally allows us to write down the non-Markovian Green's function expressed as an infinite series in Eq. (24). In addition, the series expansion allows us to compute the cross-conditioned Green's function

G_{d,d'}(l,t|l') = \bar V_{00}(l';d')^{-1} \sum_{N} \bar V_{0N}(l;d)\, \bar V_{N0}(l';d')\, {\rm e}^{-\Lambda_N t},   (B.20)

that is, the probability that the distance between the beads i and j is equal to l at time t, conditioned on the distance between the beads k and l having been equal to l' at time 0, assuming that these two distances have rest lengths d and d', respectively. In particular, in order to evaluate \bar V_{N0}(l';d') we need to consider that the distance \mathbf{l}' is expressed via the normal coordinates as

\mathbf{l}' = \mathbf{r}_k - \mathbf{r}_l = \sum_{i=1}^{N} B_i \mathbf{q}_i + \mathbf{d}',   (B.21)

so that the corresponding elements follow with \zeta_t = \sum_{k=1}^{N} B_k^2\, {\rm e}^{-\mu_k t}/(2\mu_k) and \zeta = \sum_{k=1}^{N} B_k^2/(2\mu_k) instead of \eta_t and \eta.

Appendix C. Closed form solution for G_d

In order to obtain the equivalent result in a closed form we consider the following integral:

J_d(\mathbf{l},t;\mathbf{l}_0) = \int {\rm d}\mathbf{Q} \int {\rm d}\mathbf{Q}_0\, G(\mathbf{Q},t|\mathbf{Q}_0)\, P^{\rm eq}(\mathbf{Q}_0)\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_k + \mathbf{d} - \mathbf{l}\Big) \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_{0k} + \mathbf{d} - \mathbf{l}_0\Big).
(C.1)

Performing the first Fourier transform, \mathbf{l} \to \mathbf{u}, the above integral becomes

\frac{1}{(2\pi)^3} \int {\rm d}\mathbf{Q}_0\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_{0k} + \mathbf{d} - \mathbf{l}_0\Big)\, {\rm e}^{-{\rm i}\mathbf{d}\cdot\mathbf{u}} \prod_{k=1}^{N} \int {\rm d}\mathbf{q}_k \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \Big(\frac{\mu_k}{2\pi(1-{\rm e}^{-2\mu_k t})}\Big)^{3/2} \exp\Big[-\frac{\mu_k\big(\mathbf{q}_k - \mathbf{q}_{0k}\,{\rm e}^{-\mu_k t}\big)^2}{2(1-{\rm e}^{-2\mu_k t})}\Big]\, {\rm e}^{-{\rm i}A_k \mathbf{u}\cdot\mathbf{q}_k}\, {\rm e}^{-\mu_k \mathbf{q}_{0k}^2/2},   (C.2)

and the integration over \mathbf{Q} yields

\frac{1}{(2\pi)^3} \int {\rm d}\mathbf{Q}_0\, \delta\Big(\sum_{k=1}^{N} A_k \mathbf{q}_{0k} + \mathbf{d} - \mathbf{l}_0\Big)\, {\rm e}^{-{\rm i}\mathbf{d}\cdot\mathbf{u}} \prod_{k=1}^{N} \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \exp\Big[-\frac{\mu_k}{2}\Big(\mathbf{q}_{0k}^2 + 2{\rm i}\frac{A_k}{\mu_k}\, {\rm e}^{-\mu_k t}\, \mathbf{u}\cdot\mathbf{q}_{0k} + \frac{A_k^2}{\mu_k^2}\big(1-{\rm e}^{-2\mu_k t}\big)\, u^2\Big)\Big].   (C.3)

Performing the second Fourier transform \mathbf{l}_0 \to \mathbf{v} we find

\frac{{\rm e}^{-{\rm i}\mathbf{d}\cdot(\mathbf{u}+\mathbf{v})}}{(2\pi)^6} \prod_{k=1}^{N} \int {\rm d}\mathbf{q}_{0k} \Big(\frac{\mu_k}{2\pi}\Big)^{3/2} \exp\Big[-\frac{\mu_k}{2}\Big(\mathbf{q}_{0k}^2 + 2{\rm i}\frac{A_k}{\mu_k}\big({\rm e}^{-\mu_k t}\mathbf{u} + \mathbf{v}\big)\cdot\mathbf{q}_{0k} + \frac{A_k^2}{\mu_k^2}\big(1-{\rm e}^{-2\mu_k t}\big)\, u^2\Big)\Big],   (C.4)

which reads

\frac{{\rm e}^{-{\rm i}\mathbf{d}\cdot(\mathbf{u}+\mathbf{v})}}{(2\pi)^6} \prod_{k=1}^{N} \exp\Big[-\frac{A_k^2}{2\mu_k}\big(u^2 + v^2\big) - \frac{A_k^2}{\mu_k}\, {\rm e}^{-\mu_k t}\, \mathbf{u}\cdot\mathbf{v}\Big].   (C.5)

It is convenient to define

\eta_t \equiv \sum_{k=1}^{N} \frac{A_k^2}{2\mu_k}\, {\rm e}^{-\mu_k t}, \qquad \eta_t \xrightarrow{t\to 0} \sum_{k=1}^{N} \frac{A_k^2}{2\mu_k} = \eta,   (C.6)

such that

\tilde J_d(\mathbf{v},t;\mathbf{u}) = \frac{1}{(2\pi)^6} \exp\big(-\eta u^2 - \eta v^2 - 2\eta_t\, \mathbf{u}\cdot\mathbf{v} - {\rm i}\mathbf{d}\cdot(\mathbf{u}+\mathbf{v})\big).   (C.7)

The inversion of the two Fourier transforms gives straightforwardly

J_d(\mathbf{l},t;\mathbf{l}_0) = \frac{1}{(4\pi)^3}\, \big(\eta^2 - \eta_t^2\big)^{-3/2} \exp\Big[-\frac{\eta(\mathbf{l}-\mathbf{d})^2 + \eta(\mathbf{l}_0-\mathbf{d})^2 - 2\eta_t(\mathbf{l}-\mathbf{d})\cdot(\mathbf{l}_0-\mathbf{d})}{4(\eta^2-\eta_t^2)}\Big].   (C.8)

We now marginalize over the angles,

\bar J_d(l,t;l_0) \equiv l^2 l_0^2 \int {\rm d}\Omega \int {\rm d}\Omega_0\, J_d(\mathbf{l},t;\mathbf{l}_0),   (C.9)

by moving to a frame of reference where \mathbf{d} is parallel to the z-axis and expressing all the vectors in spherical coordinates; d in the new frame of reference is then just a scalar.
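For completeness, with \mathbf{d} \parallel \hat z the scalar products entering the marginalization read, in spherical coordinates (here \theta, \theta' denote the polar and \varphi, \varphi' the azimuthal angles of \mathbf{l} and \mathbf{l}_0; our notation),

```latex
\mathbf{d}\cdot\mathbf{l} = d\,l\cos\theta, \qquad
\mathbf{d}\cdot\mathbf{l}_0 = d\,l_0\cos\theta', \qquad
\mathbf{l}\cdot\mathbf{l}_0 = l\,l_0\big(\cos\varphi\cos\varphi'\sin\theta\sin\theta'
 + \sin\varphi\sin\varphi'\sin\theta\sin\theta' + \cos\theta\cos\theta'\big),
```

which is the origin of the angular integrand appearing below.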
By doing so we obtain

\bar J_d(l,t;l_0) = \frac{l^2 l_0^2}{(4\pi)^3}\, \big(\eta^2-\eta_t^2\big)^{-3/2} \exp\Big(-\frac{\eta l^2 + \eta l_0^2 + 2(\eta-\eta_t) d^2}{4(\eta^2-\eta_t^2)}\Big) \int_{0}^{2\pi} {\rm d}\varphi \int_{0}^{2\pi} {\rm d}\varphi' \int_{-1}^{1} {\rm d}(\cos\theta) \int_{-1}^{1} {\rm d}(\cos\theta')\, \exp\Big[\frac{(\eta-\eta_t)\, l d}{2(\eta^2-\eta_t^2)}\cos\theta + \frac{(\eta-\eta_t)\, l_0 d}{2(\eta^2-\eta_t^2)}\cos\theta' + \frac{\eta_t\, l l_0}{2(\eta^2-\eta_t^2)}\big(\cos\varphi\cos\varphi'\sin\theta\sin\theta' + \sin\varphi\sin\varphi'\sin\theta\sin\theta' + \cos\theta\cos\theta'\big)\Big].   (C.10)

The two integrals over \varphi and \varphi' (keeping in mind that \cos(\varphi-\varphi') = \cos\varphi\cos\varphi' + \sin\varphi\sin\varphi') give us

\frac{l^2 l_0^2}{16\pi}\, \big(\eta^2-\eta_t^2\big)^{-3/2} \exp\Big(-\frac{\eta l^2 + \eta l_0^2 + 2(\eta-\eta_t) d^2}{4(\eta^2-\eta_t^2)}\Big) \int_{-1}^{1} {\rm d}(\cos\theta) \int_{-1}^{1} {\rm d}(\cos\theta')\, \exp\Big[\frac{(\eta-\eta_t)\, l d}{2(\eta^2-\eta_t^2)}\cos\theta + \frac{(\eta-\eta_t)\, l_0 d}{2(\eta^2-\eta_t^2)}\cos\theta' + \frac{\eta_t\, l l_0}{2(\eta^2-\eta_t^2)}\cos\theta\cos\theta'\Big]\, I_0\Big(\frac{\eta_t\, l l_0}{2(\eta^2-\eta_t^2)}\sqrt{1-\cos^2\theta}\,\sqrt{1-\cos^2\theta'}\Big),   (C.11)

where I_0(x) is the modified Bessel function of the first kind. The integral over \cos\theta' is solvable [70], and by changing the variable \cos\theta \to x we are left with

\frac{l^2 l_0^2}{8\pi}\, \big(\eta^2-\eta_t^2\big)^{-3/2} \exp\Big(-\frac{\eta l^2 + \eta l_0^2 + 2(\eta-\eta_t) d^2}{4(\eta^2-\eta_t^2)}\Big) \int_{-1}^{1} {\rm d}x\, {\rm e}^{\frac{(\eta-\eta_t) l d}{2(\eta^2-\eta_t^2)}\, x}\; \frac{\sinh\Big(\frac{\sqrt{(\eta-\eta_t)^2 l_0^2 d^2 + \eta_t^2 l^2 l_0^2 + 2\eta_t(\eta-\eta_t)\, l\, l_0^2\, d\, x}}{2(\eta^2-\eta_t^2)}\Big)}{\frac{\sqrt{(\eta-\eta_t)^2 l_0^2 d^2 + \eta_t^2 l^2 l_0^2 + 2\eta_t(\eta-\eta_t)\, l\, l_0^2\, d\, x}}{2(\eta^2-\eta_t^2)}}.
(C.12)

The remaining integral yields

\bar J_d(l,t;l_0) = \frac{l^2 l_0^2}{16\sqrt{\pi}}\, \big(\eta^2-\eta_t^2\big)^{-3/2} \exp\Big(-\frac{\eta l^2 + \eta l_0^2 + 2(\eta-\eta_t) d^2}{4(\eta^2-\eta_t^2)}\Big)\, \frac{{\rm e}^{-ab/c - c/(4a)}}{\sqrt{ac}} \Bigg[ {\rm erfi}\Big(\frac{2a\sqrt{b-c} - c}{2\sqrt{ac}}\Big) - {\rm erfi}\Big(\frac{2a\sqrt{b-c} + c}{2\sqrt{ac}}\Big) + {\rm erfi}\Big(\frac{c - 2a\sqrt{b+c}}{2\sqrt{ac}}\Big) + {\rm erfi}\Big(\frac{c + 2a\sqrt{b+c}}{2\sqrt{ac}}\Big) \Bigg],   (C.13)

having defined

a = \frac{(\eta-\eta_t)\, l d}{2(\eta^2-\eta_t^2)},   (C.14)

b = \frac{(\eta-\eta_t)^2\, l_0^2 d^2 + \eta_t^2\, l^2 l_0^2}{4(\eta^2-\eta_t^2)^2},   (C.15)

c = \frac{\eta_t(\eta-\eta_t)\, l\, l_0^2\, d}{2(\eta^2-\eta_t^2)^2};   (C.16)

the direct substitution of these auxiliary variables gives, upon division by P^{\rm eq}_d and some simplification, Eq. (22).

Appendix D. Derivation of equilibrium autocorrelation function
In order to compute the autocorrelation function in Eq. (28) the following integrals must be evaluated:

\bar V_d \equiv \int_{0}^{\infty} {\rm d}x\, x\, \bar V_{0N}(x;d), \qquad \bar V_d^{N0} \equiv \int_{0}^{\infty} {\rm d}x\, x\, \bar V_{N0}(x;d).   (D.1)

These two integrals are identical, and the integration yields [70]

\bar V_d = \frac{C_N}{16}\, \frac{N_x!\, N_y!\, N_z!}{\sqrt{\eta}^{\,\mathcal{N}+1}}\, {\rm e}^{-d^2/(4\eta)} \sum_{a=0}^{N_x/2} \sum_{b=0}^{N_y/2} \sum_{c=0}^{\lfloor N_z/2 \rfloor} \frac{(-1)^{a+b+c}}{a!\,b!\,c!\,\big(\frac{N_x-2a}{2}\big)!\,\big(\frac{N_y-2b}{2}\big)!\, 2^{N_x+N_y-2(a+b)}} \sum_{m=0}^{N_z-2c} \frac{1}{m!\,(N_z-2c-m)!} \Big(\frac{-d}{\sqrt{\eta}}\Big)^{m} \Bigg[ \big(1+(-1)^{N_z-2c-m}\big)\, 2^{\mathcal{N}-2(a+b+c)-m+4}\, \eta\, \Gamma\Big(\frac{N_z-2c-m+1}{2}\Big) \Gamma\Big(\frac{\mathcal{N}-2(a+b+c)-m+4}{2}\Big)\, {}_2\tilde F_2\Big(\frac{N_z-2c-m+1}{2}, \frac{\mathcal{N}-2(a+b+c)-m+4}{2}; \frac{1}{2}, \frac{\mathcal{N}-2(a+b+c)-m+3}{2}; \frac{d^2}{4\eta}\Big) + \big(1-(-1)^{N_z-2c-m}\big)\, 2^{\mathcal{N}-2(a+b+c)-m+3}\, d\sqrt{\eta}\, \Gamma\Big(\frac{N_z-2c-m+2}{2}\Big) \Gamma\Big(\frac{\mathcal{N}-2(a+b+c)-m+5}{2}\Big)\, {}_2\tilde F_2\Big(\frac{N_z-2c-m+2}{2}, \frac{\mathcal{N}-2(a+b+c)-m+5}{2}; \frac{3}{2}, \frac{\mathcal{N}-2(a+b+c)-m+4}{2}; \frac{d^2}{4\eta}\Big) \Bigg],   (D.2)

where C_N and \mathcal{N} are defined as in Appendix B. The corresponding expressions for the second link follow with d \to d', \{A_k\} \to \{B_k\} and \eta \to \zeta.

Appendix D.1. Rouse-limit autocorrelation function
In Fig. 5 we showed how the autocorrelation for a GNM compares to the autocorrelation in the Rouse limit (i.e. d \to 0), where

C(t) = \frac{\langle l(t)\, l(0)\rangle - \langle l\rangle^2}{\langle l^2\rangle - \langle l\rangle^2};   (D.3)

\langle l(t)\, l(0)\rangle = \frac{4\Big[3\eta_t\sqrt{\eta^2-\eta_t^2} + \big(\eta^2 + 2\eta_t^2\big)\arctan\big(\eta_t/\sqrt{\eta^2-\eta_t^2}\big)\Big]}{\pi\eta_t},   (D.4)

\langle l\rangle = 4\sqrt{\eta/\pi}, \qquad \langle l^2\rangle = 6\eta.   (D.5)

Appendix E. Short-time expansion of G_d

Introducing the auxiliary variable \varphi(t) = \eta_t/\eta in Eq. (22) we can write the return joint density J_d(d,t;d) in terms of \varphi(t). Expanding to linear order in t we have

\varphi(t) \underset{t\to 0}{\simeq} 1 - \sum_{k=1}^{N} \frac{A_k^2\, t}{2\eta}, \qquad \varphi(t)^2 \underset{t\to 0}{\simeq} 1 - \sum_{k=1}^{N} \frac{A_k^2\, t}{\eta}.   (E.1)

We find the partial limits

\exp\Big(-d^2\, \frac{2\varphi(t)^2 + (1-\varphi(t))^2}{4\eta\, \varphi(t)\big(1-\varphi(t)^2\big)}\Big) \underset{t\to 0}{\sim} {\rm e}^{-1/t} \to 0,   (E.2)

{\rm erfi}\Bigg(\frac{\pm 2d\varphi(t) + d(1-\varphi(t))}{2\sqrt{\eta\, \varphi(t)\big(1-\varphi(t)^2\big)}}\Bigg) \underset{t\to 0}{\sim} {\rm erfi}\big(\pm t^{-1/2}\big) \to \pm\infty,   (E.3)

{\rm erfi}\Bigg(\frac{d(1-\varphi(t))}{2\sqrt{\eta\, \varphi(t)\big(1-\varphi(t)^2\big)}}\Bigg) \underset{t\to 0}{\sim} {\rm erfi}\big(\sqrt{t}\big) \to 0,   (E.4)

where all the convergences are of exponential order. Therefore, while we can neglect the erfi in Eq. (E.4), we need to retain the product between the exponential and the two diverging erfis, and only then expand. Thus, considering the expansion of erfi for large and real arguments [38],

{\rm erfi}(x) \underset{x\to\pm\infty,\, x\in\mathbb{R}}{\simeq} \mp{\rm i} + \Big(\frac{1}{x} + \frac{1}{2x^3} + O(x^{-5})\Big) \frac{{\rm e}^{x^2}}{\sqrt{\pi}},   (E.5)

and explicitly multiplying by the remaining exponentials, Eq. (22) becomes (note that P^{\rm eq}_d(l)\, G_d(l,t|l) \equiv \bar J_d(l,t;l))

\bar J_d(l,t;l) \underset{t\to 0}{\simeq} \frac{l^2}{2\pi d} \Bigg\{ \Bigg[\frac{1}{2\eta}\sqrt{\frac{1+\varphi(t)}{1-\varphi(t)}}\, \frac{1}{2l\varphi(t) + d(1-\varphi(t))} + \frac{\varphi(t)\sqrt{1-\varphi(t)}\,\big(1+\varphi(t)\big)^{3/2}}{\big(2l\varphi(t) + d(1-\varphi(t))\big)^3}\Bigg]\, {\rm e}^{-(l-d)^2/[2\eta(1+\varphi(t))]} + \Bigg[\frac{1}{2\eta}\sqrt{\frac{1+\varphi(t)}{1-\varphi(t)}}\, \frac{1}{d(1-\varphi(t)) - 2l\varphi(t)} + \frac{\varphi(t)\sqrt{1-\varphi(t)}\,\big(1+\varphi(t)\big)^{3/2}}{\big(d(1-\varphi(t)) - 2l\varphi(t)\big)^3}\Bigg]\, {\rm e}^{-(l+d)^2/[2\eta(1+\varphi(t))]} \Bigg\}.   (E.6)

Using Eq. (E.1), expanding around t = 0 and introducing \kappa \equiv \sum_{k=1}^{N} A_k^2, we finally arrive at Eq. (35).

Appendix F. Evaluation of the variance of the occupation time fraction
The direct implementation of Eq. (33) suffers from slow convergence. We suspect that this problem has its roots in the (well-known) slow convergence of series involving Hermite polynomials [72]. We therefore combine the analytical short-time asymptotics in Eq. (36) with the spectral solution. Defining a small cutoff time t_s \ll t we write

\sigma_d^2(l,t) = \frac{2 P^{\rm eq}_d(l)}{t} \int_{0}^{t_s} {\rm d}\tau\, (1-\tau/t) \big[G_d(l,\tau|l) - P^{\rm eq}_d(l)\big] + \frac{2 P^{\rm eq}_d(l)}{t} \int_{t_s}^{t} {\rm d}\tau\, (1-\tau/t) \big[G_d(l,\tau|l) - P^{\rm eq}_d(l)\big].   (F.1)

We can explicitly evaluate the first addend using Eq. (36) and evaluate the second term using the spectral expansion (24). Note that the first term in the series (with \Lambda_0 = 0) must be treated in a manner different than the rest. Therefore \sigma_d^2(l,t) can be conveniently written (and implemented) in the form

\sigma_d^2(l,t) = \frac{2 P^{\rm eq}_d(l)}{t} \int_{0}^{t_s} {\rm d}\tau\, (1-\tau/t)\, G_d^{(36)}(l,\tau|l) + \frac{2}{t^2} \sum_{N\neq 0} \bar V_{N0}(l;d)\, \bar V_{0N}(l;d) \Bigg[\frac{(t-t_s)\,{\rm e}^{-\Lambda_N t_s}}{\Lambda_N} - \frac{{\rm e}^{-\Lambda_N t_s} - {\rm e}^{-\Lambda_N t}}{\Lambda_N^2}\Bigg] + P^{\rm eq}_d(l)^2 \Big(\frac{t_s}{t} - 2\Big)\frac{t_s}{t},   (F.2)

where G_d^{(36)} denotes the short-time asymptotics in Eq. (36), for which the first integral can be carried out in closed form, and the last term collects the \Lambda_0 = 0 contribution.

Appendix G. Notes on the numerical implementation of the results
Accompanying this article is a C++ implementation of all analytical results. The code allows the computation of the Green's function G_d, the mean \langle\theta_t(l,d)\rangle and variance \sigma_d^2(l,t) of the occupation time fraction, as well as the autocorrelation function C_d(t) for a generic Gaussian network. The connectivity matrix \Gamma of the network must be provided as a plain text file and is diagonalized using the Armadillo library [73, 74].

A closed-form expression of the joint density in Eq. (22) is implemented in the available C++ code. However, for numerical stability and speed of computation it is convenient to implement Eq. (C.12) and perform the final integral numerically using a Gauss-Kronrod quadrature routine [60].

Few numerical libraries implement the regularized hypergeometric functions {}_p\tilde F_q. A notable exception is the Arb library [75], which implements several "special" functions using arbitrary-precision arithmetic. The reliable evaluation of such functions is challenging and often requires several different methods to cover the entire domain [76]. Unfortunately, this higher reliability comes at a higher computational cost compared to machine-precision arithmetic. However, the hypergeometric series converge on the entire complex plane if p \le q [76], and we only need to evaluate these functions when all the parameters are positive real numbers. We therefore implemented the series definitions of these functions directly, since in our case they converge reasonably fast to the desired accuracy as long as the parameters are not too large.

Many of our results, in particular the autocorrelation function and the variance of the fraction of occupation time, can only be expressed analytically using the eigendecomposition of the Fokker-Planck operator. Unfortunately, the computational effort required to generate all terms necessary to achieve convergence is large, and the number of terms scales non-polynomially with the number of beads in the network. Therefore the attached program should be used with care, as it does not generate reliable results when the size of the network becomes too large.

References

[1] Katherine Henzler-Wildman and Dorothee Kern. Dynamic personalities of proteins. Nature, 450(7172):964–972, Dec 2007.
[2] Wilfred F. van Gunsteren and Herman J. C. Berendsen. Computer simulation of molecular dynamics: Methodology, applications, and perspectives in chemistry.
Angew. Chem. Int. Ed., 29(9):992–1023, Sep 1990.
[3] John L. Klepeis, Kresten Lindorff-Larsen, Ron O. Dror, and David E. Shaw. Long-timescale molecular dynamics simulations of protein structure and function.
Curr. Opin. Struct. Biol., 19(2):120–127, Apr 2009.
[4] David E. Shaw, Paul Maragakis, Kresten Lindorff-Larsen, Stefano Piana, Ron O. Dror, Michael P. Eastwood, Joseph A. Bank, John M. Jumper, John K. Salmon, Yibing Shan, and Willy Wriggers. Atomic-level characterization of the structural dynamics of proteins.
Science, 330(6002):341–346, 2010.
[5] Marco Baiesi, Enzo Orlandini, Flavio Seno, and Antonio Trovato. Exploring the correlation between the folding rates of proteins and the entanglement of their native states.
J. Phys. A: Math. Theor., 50(50):504001, December 2017.
[6] Cristian Micheletti, Jayanth R. Banavar, Amos Maritan, and Flavio Seno. Protein structures and optimal folding from a geometrical variational principle.
Phys. Rev. Lett., 82:3372–3375, Apr 1999.
[7] Monique M. Tirion. Large Amplitude Elastic Motions in Proteins from a Single-Parameter, Atomic Analysis.
Phys. Rev. Lett., 77(9):1905–1908, August 1996.
[8] Prince E. Rouse. A Theory of the Linear Viscoelastic Properties of Dilute Solutions of Coiling Polymers.
J. Chem. Phys., 21(7):1272–1280, July 1953.
[9] P. J. Flory. Statistical thermodynamics of random networks.
Proc. R. Soc. Lond. A, 351(1666):351–380, November 1976.
[10] Ivet Bahar, Ali Rana Atilgan, and Burak Erman. Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potential.
Folding and Design, 2(3):173–181, June 1997.
[11] Turkan Haliloglu, Ivet Bahar, and Burak Erman. Gaussian Dynamics of Folded Proteins.
Phys. Rev. Lett., 79(16):3090–3093, October 1997.
[12] Pemra Doruker, Ali Rana Atilgan, and Ivet Bahar. Dynamics of proteins predicted by molecular dynamics simulations and analytical approaches: Application to α-amylase inhibitor. Proteins, 40(3):512–524, 2000.
[13] A.R. Atilgan, S.R. Durell, R.L. Jernigan, M.C. Demirel, O. Keskin, and I. Bahar. Anisotropy of Fluctuation Dynamics of Proteins with an Elastic Network Model.
Biophys. J., 80(1):505–515, January 2001.
[14] M. Delarue and P. Dumas. On the use of low-frequency normal modes to enforce collective movements in refining macromolecular structural models.
Proc. Natl. Acad. Sci. USA, 101(18):6957–6962, May 2004.
[15] Gunnar F. Schröder, Axel T. Brunger, and Michael Levitt. Combining Efficient Conformational Sampling with a Deformable Elastic Network Model Facilitates Structure Refinement at Low Resolution.
Structure, 15(12):1630–1641, December 2007.
[16] Dengming Ming and Rafael Brüschweiler. Reorientational Contact-Weighted Elastic Network Model for the Prediction of Protein Dynamics: Comparison with NMR Relaxation.
Biophys. J., 90(10):3382–3388, May 2006.
[17] Qian-Yuan Tang, Yang-Yang Zhang, Jun Wang, Wei Wang, and Dante R. Chialvo. Critical Fluctuations in the Native State of Proteins.
Phys. Rev. Lett., 118(8):088102, February 2017.
[18] Kay Hamacher, Joanna Trylska, and J. Andrew McCammon. Dependency Map of Proteins in the Small Ribosomal Subunit.
PLoS Comput. Biol., 2(2):e10, February 2006.
[19] A. J. Rader, G. Anderson, B. Isin, H. G. Khorana, I. Bahar, and J. Klein-Seetharaman. Identification of core amino acids stabilizing rhodopsin.
Proc. Natl. Acad. Sci. USA, 101(19):7246–7251, May 2004.
[20] Ozlem Keskin, Stewart R. Durell, Ivet Bahar, Robert L. Jernigan, and David G. Covell. Relating Molecular Flexibility to Function: A Case Study of Tubulin. Biophys. J., 83(2):663–680, August 2002.
[21] Jingwei Weng, Jianpeng Ma, Kangnian Fan, and Wenning Wang. The Conformational Coupling and Translocation Mechanism of Vitamin B12 ATP-Binding Cassette Transporter BtuCD.
Biophys. J., 94(2):612–621, January 2008.
[22] Ines Putz and Oliver Brock. Elastic network model of learned maintained contacts to predict protein motion.
PLoS ONE, 12(8):e0183889, August 2017.
[23] Wenjun Zheng and Bernard Brooks. Identification of Dynamical Correlations within the Myosin Motor Domain by the Normal Mode Analysis of an Elastic Network Model.
J. Mol. Biol., 346(3):745–759, February 2005.
[24] Sibsankar Kundu, Dan C. Sorensen, and George N. Phillips. Automatic domain decomposition of proteins by a Gaussian Network Model.
Proteins, 57(4):725–733, December 2004.
[25] Zhiyong Zhang, Yunyu Shi, and Haiyan Liu. Molecular Dynamics Simulations of Peptides and Proteins with Amplified Collective Motions.
Biophys. J., 84(6):3583–3593, June 2003.
[26] K. Hamacher. Relating sequence evolution of HIV1-protease to its underlying molecular mechanics.
Gene, 422(1-2):30–36, October 2008.
[27] Jinjun Zhang, Bonsung Koo, Yingtao Liu, Jin Zou, Aditi Chattopadhyay, and Lenore Dai. A novel statistical spring-bead based network model for self-sensing smart polymer materials.
Smart Mater. Struct., 24(8):085022, August 2015.
[28] Jinjun Zhang, Bonsung Koo, Nithya Subramanian, Yingtao Liu, and Aditi Chattopadhyay. An optimized cross-linked network model to simulate the linear elastic material response of a smart polymer.
J. Intell. Mater. Syst. Struct., 27(11):1461–1475, July 2016.
[29] Florence Tama and Charles L. Brooks. The Mechanism and Pathway of pH Induced Swelling in Cowpea Chlorotic Mottle Virus.
J. Mol. Biol., 318(3):733–747, May 2002.
[30] Indira H. Shrivastava and Ivet Bahar. Common Mechanism of Pore Opening Shared by Five Different Potassium Channels.
Biophys. J., 90(11):3929–3940, June 2006.
[31] Nicolas Bocquet, Hugues Nury, Marc Baaden, Chantal Le Poupon, Jean-Pierre Changeux, Marc Delarue, and Pierre-Jean Corringer. X-ray structure of a pentameric ligand-gated ion channel in an apparently open conformation.
Nature, 457(7225):111–114, January 2009.
[32] Giovanni Pinamonti, Sandro Bottaro, Cristian Micheletti, and Giovanni Bussi. Elastic network models for RNA: a comparative assessment with molecular dynamics and SHAPE experiments.
Nucleic Acids Res., 43(15):7260–7269, September 2015.
[33] Joanna I. Sułkowska, Andrzej Kloczkowski, Taner Z. Sen, Marek Cieplak, and Robert L. Jernigan. Predicting the order in which contacts are broken during single molecule protein stretching experiments.
Proteins, 71(1):45–60, April 2008.
[34] Herbert Goldstein, Charles P. Poole, and John L. Safko.
Classical Mechanics. Addison Wesley, San Francisco, NJ, 3rd edition, 2002. OCLC: 248389949.
[35] Ivet Bahar, Timothy R. Lezon, Ahmet Bakan, and Indira H. Shrivastava. Normal Mode Analysis of Biomolecular Structures: Functional Mechanisms of Membrane Proteins.
Chem. Rev., 110(3):1463–1497, March 2010.
[36] S. Nicolay and Y.-H. Sanejouand. Functional Modes of Proteins Are among the Most Robust.
Phys. Rev. Lett., 96(7):078104, February 2006. Publisher: American Physical Society.
[37] Gerald Wilemski and Marshall Fixman. Diffusion-controlled intrachain reactions of polymers. I Theory.
J. Chem. Phys., 60(3):866–877, February 1974.
[38] Milton Abramowitz and Irene A. Stegun, editors.
Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables. Dover Books on Mathematics. Dover Publ., New York, NY, 9th Dover printing edition, 2013. OCLC: 935935300.
[39] Dominique Foata. Some Hermite polynomial identities and their combinatorics.
Adv. Appl. Math., 2(3):250–259, 1981.
[40] Hannes Risken and Till Frank.
The Fokker-Planck Equation: Methods of Solution and Applications. Springer Series in Synergetics. Springer-Verlag, Berlin Heidelberg, 2nd edition, 1996.
[41] Andrea Amadei, Antonius B. M. Linssen, and Herman J. C. Berendsen. Essential dynamics of proteins.
Proteins, 17(4):412–425, December 1993.
[42] Alessio Lapolla, David Hartich, and Aljaž Godec. Spectral theory of fluctuations in time-average statistical mechanics of reversible and driven systems.
Phys. Rev. Research, 2(4):043084, October 2020.
[43] Alessio Lapolla and Aljaž Godec. Manifestations of Projection-Induced Memory: General Theory and the Tilted Single File.
Front. Phys., 7, 2019.
[44] C. W. Gardiner.
Handbook of Stochastic Methods for Physics, Chemistry and Natural Sciences. Springer-Verlag, second edition, 1985.
[45] Kevin Truong and Mitsuhiko Ikura. The use of FRET imaging microscopy to detect protein–protein interactions and protein conformational changes in vivo.
Curr. Opin. Struct. Biol., 11(5):573–578, September 2001.
[46] Weixiang Ye, Markus Götz, Sirin Celiksoy, Laura Tüting, Christoph Ratzke, Janak Prasad, Julia Ricken, Seraphine V. Wegner, Rubén Ahijado-Guzmán, Thorsten Hugel, and Carsten Sönnichsen. Conformational Dynamics of a Single Protein Monitored for 24 h at Video Rate.
Nano Lett., 18(10):6633–6637, October 2018.
[47] M. Kac. On distributions of certain Wiener functionals.
Trans. Amer. Math. Soc., 65:1–13, 1949.
[48] Ju-Yi Yen and Marc Yor.
Local times and excursion theory for Brownian motion: a tale of Wiener and Itô measures. Number 2088 in Lecture Notes in Mathematics. Springer, Cham, 2013. OCLC: 862993594.
[49] Satya N. Majumdar and Alain Comtet. Local and Occupation Time of a Particle Diffusing in a Random Medium.
Phys. Rev. Lett., 89(6):060601, July 2002.
[50] Satya N. Majumdar. Brownian Functionals in Physics and Computer Science. In
The Legacy of Albert Einstein, pages 93–129. World Scientific, December 2006.
[51] Alessio Lapolla and Aljaž Godec. Unfolding tagged particle histories in single-file diffusion: exact single- and two-tag local times beyond large deviation theory.
New J. Phys., 20(11):113021, November 2018.
[52] Irina V. Gopich and Attila Szabo. Single-Macromolecule Fluorescence Resonance Energy Transfer and Free-Energy Profiles.
J. Phys. Chem. B, 107(21):5058–5063, May 2003.
[53] Christoph W. Müller and Georg E. Schulz. Structure of the complex between adenylate kinase from Escherichia coli and the inhibitor Ap5A refined at 1.9 Å resolution.
J. Mol. Biol., 224(1):159–177, March 1992.
[54] C. W. Müller, G. J. Schlauderer, J. Reinstein, and G. E. Schulz. Adenylate kinase motions during catalysis: an energetic counterweight balancing substrate binding.
Structure, 4(2):147–156, February 1996.
[55] Katherine A. Henzler-Wildman, Vu Thai, Ming Lei, Maria Ott, Magnus Wolf-Watz, Tim Fenn, Ed Pozharski, Mark A. Wilson, Gregory A. Petsko, Martin Karplus, Christian G. Hübner, and Dorothee Kern. Intrinsic motions along an enzymatic reaction trajectory.
Nature, 450(7171):838–844, December 2007.
[56] J. A. Hanson, K. Duderstadt, L. P. Watkins, S. Bhattacharyya, J. Brokaw, J.-W. Chu, and H. Yang. Illuminating the mechanistic roles of enzyme conformational dynamics.
Proc. Natl. Acad. Sci. USA, 104(46):18055–18060, November 2007.
[57] Yuqing Zheng and Qiang Cui. Multiple Pathways and Time Scales for Conformational Transitions in apo-Adenylate Kinase.
J. Chem. Theory Comput., 14(3):1716–1726, March 2018.
[58] A. Bakan, L. M. Meireles, and I. Bahar. ProDy: Protein Dynamics Inferred from Theory and Experiments.
Bioinformatics, 27(11):1575–1577, June 2011.
[59] Alessio Lapolla and Aljaž Godec. A Toolbox for Quantifying Memory in Dynamics Along Reaction Coordinates. arXiv:2101.05237 [cond-mat, physics:physics]
[61] Giorgio Lipari and Attila Szabo. Model-free approach to the interpretation of nuclear magnetic resonance relaxation in macromolecules. 1. Theory and range of validity. J. Am. Chem. Soc., 104(17):4546–4559, 1982.
[62] Angelo Perico, Roberto Pratolongo, Karl F. Freed, Richard W. Pastor, and Attila Szabo. Positional time correlation function for one-dimensional systems with barrier crossing: Memory function corrections to the optimized Rouse–Zimm approximation.
J. Chem. Phys., 98(1):564–573, 1993.
[63] Alessio Lapolla and Aljaž Godec. Single-file diffusion in a bi-stable potential: Signatures of memory in the barrier-crossing of a tagged-particle.
J. Chem. Phys., 153(19):194104, 2020.
[64] Y. Tian, B. Shirinzadeh, and D. Zhang. Design and dynamics of a 3-DOF flexure-based parallel mechanism for micro/nano manipulation.
Microelectron. Eng., 87(2):230–241, February 2010.
[65] Dannelle P. Sierra, Nathan A. Weir, and James Frank Jones.
A review of research in the field of nanorobotics.
Sandia National Laboratories, October 2005.
[66] Henry Crapo. Structural rigidity.
Structural Topology, núm. 1, 1979. Publisher: Université du Québec à Montréal.
[67] C.R. Calladine. Buckminster Fuller's "Tensegrity" structures and Clerk Maxwell's rules for the construction of stiff frames.
Int. J. Solids Struct., 14(2):161–172, 1978.
[68] J. Clerk Maxwell. L. On the calculation of the equilibrium and stiffness of frames.
Lond. Edinb. Dubl. Phil. Mag., 27(182):294–299, April 1864.
[69] S. Pellegrino and C.R. Calladine. Matrix analysis of statically and kinematically indeterminate frameworks.
Int. J. Solids Struct., 22(4):409–428, 1986.
[70] I. S. Gradshteyn and I. M. Ryzhik.
Table of Integrals, Series, and Products. Elsevier/Academic Press, Amsterdam, seventh edition, 2007.
[71] Wolfram Research, Inc. Mathematica, Version 12.0, 2019.
[72] John P. Boyd. The rate of convergence of Hermite function series.
Math. Comp., 35(152):1309–1309, January 1980.
[73] Conrad Sanderson and Ryan Curtin. Armadillo: a template-based C++ library for linear algebra.
J. Open Source Softw., 1(2):26, 2016. Publisher: The Open Journal.
[74] Conrad Sanderson and Ryan Curtin. A User-Friendly Hybrid Sparse Matrix Class in C++. In James H. Davenport, Manuel Kauers, George Labahn, and Josef Urban, editors,
Mathematical Software – ICMS 2018, pages 422–430, Cham, 2018. Springer International Publishing.
[75] Fredrik Johansson. Arb: Efficient Arbitrary-Precision Midpoint-Radius Interval Arithmetic.
IEEE Trans. Comput., 66(8):1281–1292, August 2017.
[76] Fredrik Johansson. Computing Hypergeometric Functions Rigorously.