Iterative trajectory reweighting for estimation of equilibrium and non-equilibrium observables
John D. Russo, Jeremy Copperman,∗ and Daniel M. Zuckerman†
Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR
(Dated: June 18, 2020)

We present two algorithms by which a set of short, unbiased trajectories can be iteratively reweighted to obtain various observables. The first algorithm estimates the stationary (steady-state) distribution of a system by iteratively reweighting the trajectories based on the average probability in each state. The algorithm applies to equilibrium or non-equilibrium steady states, exploiting the 'left' stationarity of the distribution under dynamics – i.e., in a discrete setting, when the column vector of probabilities is multiplied by the transition matrix expressed as a left stochastic matrix. The second procedure relies on the 'right' stationarity of the committor (splitting probability) expressed as a row vector. The algorithms are unbiased, do not rely on computing transition matrices, and make no Markov assumption about discretized states. Here, we apply the procedures to a one-dimensional double-well potential, and to a 208 µs atomistic Trp-cage folding trajectory from D.E. Shaw Research.

∗ [email protected]
† [email protected]

I. INTRODUCTION
The inability of molecular dynamics (MD) simulation to reach timescales pertinent to complex phenomena in biology and other fields [1–5] has motivated the development of numerous methods to enhance sampling in both equilibrium [6–8] and non-equilibrium [9–13] contexts. Markov state models (MSMs) effectively "stitch together" shorter trajectories dispersed in configuration space [14, 15], from which both equilibrium and non-equilibrium observables can be computed – e.g., state populations or kinetic properties. MSMs can be applied to transition phenomena even when no full, continuous trajectory of a particular transition is present in the original set of trajectories.

This article presents a simple, alternative method for reweighting MSM-like trajectory sets that provides both equilibrium and non-equilibrium information without bias. The essence of the strategy is to exploit the stationarity of a distribution or property to enable the calculation of that observable in a self-consistent way via iteration.
The key ingredient is the use of continuous trajectories as the sole basis for analysis, intrinsically accounting for all properties of the underlying dynamics.
Iteration is employed to reach a fully self-consistent stationary solution. Trajectory reweighting has previously been applied to biased trajectories (e.g., [16]) as well as to unbiased trajectories in MSM construction, albeit without self-consistent iteration and under a Markov assumption [17].

Observables that can be computed through the iterative approach, without any Markov assumption or lag-time limitation, include the equilibrium distribution, the distribution in a non-equilibrium steady state (NESS), the committor or splitting probability, and the mean first-passage time (MFPT) associated with arbitrary macrostates. The only error in the procedures described below, besides statistical noise, arises from the discretization of phase space into bins. We emphasize that no Markov assumption is made.

The approach can be understood in the context of estimating the equilibrium distribution based on a set of unbiased trajectories initiated from an arbitrary set of initial configurations, presumably out of equilibrium. For example, the trajectories may be initiated from an approximately uniform distribution in the space of some coordinate of interest. We assume that a classification of the space into bins has already been performed, whose populations are a proxy for the distribution. Given that the equilibrium distribution does not change in time, if we can assign suitable weights (probabilities) to each of a set of trajectories – such that the weighted distribution is in equilibrium based on only the initial points of each trajectory – that distribution must remain in equilibrium thereafter. Although the weights are unknown in advance, they can be set to arbitrary initial values and refined by iteration.

Continuing the equilibrium example, imagine that each trajectory is initially assigned an equal weight, with all weights summing to one. Now each bin can be monitored over time, and the average weight in each bin is recorded.
This average weight is the first non-trivial estimate of the equilibrium probability in the bin. Physically, bins that attract more trajectories will be assigned larger weights, as expected. In each iteration, the time-averaged probability from the prior iteration is divided among the trajectories which start in that bin. Time-averaged bin probabilities are recomputed and trajectory weights reassigned at each iteration until convergence to steady values. This procedure is described in Algorithm 1.

The same procedure can be applied for non-equilibrium reweighting. To obtain the NESS distribution, external and/or boundary conditions must be properly accounted for in preparing trajectories for analysis (Algorithm 0), but this is not a significant complication. With the NESS distribution, the MFPT can be obtained from the Hill relation [18, 19].

The committor, also known as the splitting probability [18, 20–22], exhibits a different type of stationarity [23] and is estimated by a different but equally simple type of iterative procedure that averages over trajectories instead of bins (Algorithm 2). Defined as the probability to proceed from a designated initial phase point to a "target" macrostate prior to reaching a different "off-target" macrostate, the committor can be naively estimated by the fraction of trajectories from the initial point that first reach the target. In an iterative approach operating in the space of bins, we can exploit the committor's stationarity: at any fixed time, the average committor of all downstream trajectories emanating from a given bin must match that bin's committor value. Procedurally, each bin not in the target state is assigned a trivial initial committor estimate of, say, zero.
A bin's estimate is updated at each iteration as the average over every time step of every trajectory after visiting the bin, with the "boundary conditions" that all time points after entering the target macrostate are evaluated as one or, after entering the off-target state, as zero.

We emphasize that these trajectory averaging and reweighting processes make no Markov assumption and are unbiased at the shortest available time discretization. As with any method, however, the approach is limited by the amount of data, which in turn will dictate the sizes of bins which can be used. More data enables smaller bins and higher phase-space resolution. Because the dynamics of individual trajectories continually update observable estimates, the discretization error may be less than would naively be expected from spatial discretization.

II. ALGORITHMS

A. Trajectory preparation
In order to demonstrate our algorithms, we extracted a set of trajectory fragments from one or more long trajectories according to Algorithm 0. Trajectory fragments may be of fixed length, or of variable length if strict absorbing boundary conditions are used. Source-sink boundary conditions use spliced fixed-length trajectories.

The analyses performed below are most easily understood based on trajectory fragments sorted by the starting configuration (phase-space point) of each fragment. These fragments are "copied" from the original long trajectory and hence may have overlapping sequences. For example, fragment 1 may consist of time steps 2–101 of the original trajectory, and fragment 2 might be steps 7–106. Correlations are thus introduced, but we estimate statistical uncertainty using fully independent datasets.
Algorithm 0
Trajectory fragment selection

Begin with one or more trajectories, discretized according to a set of bins i (or "microstates" in MSM terminology). For simplicity, we will assume a single long trajectory is used, with t denoting the discrete time index.

For each bin i, generate a list of possible start points t_s, which are the time indices of every configuration or phase point within that bin. The set of trajectory starts in bin i – denoted {t_s}_i – is not indexed, to avoid complex notation. That is, each of K_i start points indexed by k = 1···K_i in bin i is fully denoted as t_s(i, k).

if no absorbing ('open') boundaries then
  The fragments associated with each bin i consist of time points t_s, t_s+1, ..., t_s+M−1 for each t_s in {t_s}_i. These fixed-length fragments each have M steps.
else if strict absorbing boundary conditions then
  Two macrostates consisting of sets of bins should be defined, such that no bin is in more than one macrostate and some bins are "intermediate" – i.e., not in either macrostate. The fragments start only from intermediate bins and consist of time points t_s, t_s+1, t_s+2, ... for each start point t_s. Each fragment is terminated upon reaching either macrostate or at the end of the original trajectory, whichever comes first.
else if source-sink boundary conditions then
  Two macrostates consisting of sets of bins should be defined, such that no bin is in more than one macrostate and some bins are "intermediate" – i.e., not in either macrostate. One macrostate will be the sink (a.k.a. target) and the other is the source state. Define a time-independent source distribution γ over source bins such that Σ_j γ_j = 1 with γ_j ≥ 0.
  The fragments initially consist of time points t_s, t_s+1, t_s+2, ..., t_s+M−1. If the target is reached prior to the final point, let t_t be the time the target is first reached. Fragments reaching the target are spliced to fragments starting at the source. That is, to make a full segment of M steps, the initial list t_s, t_s+1, t_s+2, ..., t_t−1 is continued by a segment beginning at some t_s(j, k) with j ∈ source; this segment is re-indexed to start at t_t. The particular segment is chosen uniformly among the {t_s}_j for bin j, after j is selected according to γ.
end if

B. Equilibrium distribution

Trajectories can be reweighted into the equilibrium distribution. Our procedure can be seen as a non-Markovian, fully self-consistent extension of the single-iteration trajectory reweighting recently proposed in a Markov context [17]. Reweighting is an old idea [24] which is limited by the well-known overlap problem [4]. Overlap remains a concern in any reweighting, but the present strategy uses additional information ignored in many other methods, namely, the dynamical information intrinsic to trajectories. Algorithm 1 infers a conformational distribution consistent with the underlying continuous dynamics without any Markov assumption. Discretization necessarily introduces some error, but because continuous trajectories evolve irrespective of bin boundaries, this error may be reduced. That is, trajectory dynamics automatically account for intra-bin landscape features.

Algorithm 1 uses stationarity of the equilibrium distribution to re-assign weights of trajectory fragments in a self-consistent manner. In every iteration, the weight of the fragments starting in a given bin is replaced by the time-averaged weight in the bin. Stationarity is enforced in a self-consistent way because the initial bin probability must match the time average.
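The fragment bookkeeping of Algorithm 0 is simple in the open-boundary case: every saved time point seeds a fixed-length window, grouped by starting bin. A minimal sketch (function and variable names are ours, not from the paper, and the bin trajectory is assumed precomputed):

```python
import numpy as np

def extract_fragments(bin_traj, M):
    """Open-boundary branch of Algorithm 0: every time point t_s is a
    fragment start; each fragment is the M-step window of bin indices
    beginning at t_s, stored under the bin of its starting point."""
    fragments = {}
    T = len(bin_traj)
    for t_s in range(T - M + 1):
        i = bin_traj[t_s]  # bin of the start point t_s
        fragments.setdefault(i, []).append(np.asarray(bin_traj[t_s:t_s + M]))
    return fragments
```

As in the text, consecutive fragments overlap heavily (e.g., steps 2–101 and 7–106), so they are correlated; uncertainty must come from independent datasets.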
Algorithm 1
Stationary distribution calculation

Prepare a set of fixed-length trajectory fragments with open boundary conditions (for equilibrium) or with source-sink conditions (for NESS) following Algorithm 0. Bins not visited by any fragment will be assigned zero probability. Note that sink/target bins have zero probability by definition.

Assign each trajectory fragment an initial weight. Initial weights are arbitrary, so long as the total weight (probability) sums to 1, a condition which is preserved at every time step in every iteration. Here we assign initial weights so that each bin has equal total initial weight, which is evenly divided among fragments starting in the bin.

repeat
  for all bins do
    Sum the weights of all fragments in the bin at each time step, and average over time. The averaged-over-time bin weight is divided equally among trajectory fragments starting in that bin for the next iteration.
  end for
until a user-defined convergence threshold is met

The entire iterative procedure can be repeated for trajectory sets generated by progressively trimming the first time-point from each trajectory (to decrease initial-state bias), creating a basis for a final estimate averaged over trimmed trajectory sets. This protocol was not used to generate the data shown.
For NESS, the entire iterative procedure can likewise be repeated for trajectory sets generated by progressively trimming the first time-point from each trajectory (to decrease initial-state bias), creating a basis for a final estimate averaged over trimmed trajectory sets. Additionally, the source-sink splicing of Algorithm 0 must be applied when preparing each trajectory set.
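The reweighting loop of Algorithm 1 can be sketched in a few lines. This is a schematic implementation for the equilibrium (open-boundary) case, with names of our own choosing; it assumes every visited bin contains at least one fragment start, which holds when every saved time point seeds a fragment as in Algorithm 0:

```python
import numpy as np

def reweight_stationary(fragments, n_bins, n_iter=200, tol=1e-12):
    """Sketch of Algorithm 1: iterate until each bin's starting weight
    matches its time-averaged weight.

    fragments: list of equal-length integer arrays of bin indices."""
    starts = np.array([f[0] for f in fragments])
    start_bins = np.unique(starts)
    # initial weights: equal probability per starting bin, split evenly
    # among the fragments that start there
    w = np.zeros(len(fragments))
    for b in start_bins:
        idx = np.flatnonzero(starts == b)
        w[idx] = (1.0 / len(start_bins)) / len(idx)
    M = len(fragments[0])
    p = np.zeros(n_bins)
    for _ in range(n_iter):
        # time-averaged weight in each bin (each step carries w_i / M)
        p_new = np.zeros(n_bins)
        for f, wi in zip(fragments, w):
            np.add.at(p_new, f, wi / M)
        converged = np.max(np.abs(p_new - p)) < tol
        p = p_new
        if converged:
            break
        # re-assign: each bin's probability is split among its starters
        for b in start_bins:
            idx = np.flatnonzero(starts == b)
            w[idx] = p[b] / len(idx)
    return p
```

Total probability is conserved at every iteration because each fragment contributes exactly its weight, averaged over its M steps, before the weights are redistributed.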
C. Non-equilibrium steady-state
The probability distribution of a non-equilibrium steady state (NESS) can be computed in the same way (Algorithm 1), except that suitable boundary conditions must be enforced. We focus here on a source-sink NESS because that is most pertinent to rate-constant estimation. Such a NESS requires defining (i) the absorbing source and sink macrostates, which shall consist strictly of non-overlapping sets of bins, and (ii) the source, or feedback, distribution γ which describes how probability reaching the sink macrostate is redistributed at the source [25]. In a discrete picture, we let γ_i be the fractional probability to be initiated (or fed back) to bin i, such that Σ_i γ_i = 1. No bin with γ_i > 0 may belong to the sink, but the γ values do not in themselves necessarily define the source macrostate. For example, in the important special case of the source-sink NESS which maintains an equilibrium distribution within the source macrostate (only), bins not on the surface of the macrostate strictly require γ = 0 [26]. In any case, our approach applies to arbitrary choices of the source distribution γ.

D. Committor calculation
The committor is not a probability distribution per se and exhibits a different kind of stationarity that has been noted previously [20–23]. The committor Π(x) for a phase-space point x is defined to be the probability of trajectories initiated from x reaching a 'target' macrostate before reaching a different 'initial' macrostate, both of which can be arbitrarily defined if non-overlapping. We assume dynamics are stochastic and Markovian in the continuous phase space. Discrete bins used for calculation in the algorithm are not assumed to behave as Markov states.

The iterative algorithm can be understood by first considering 'brute force' committor estimation by initiating a large number, N, of trajectories from x and computing the fraction which reach the target first. However, instead of waiting for all such trajectories to be absorbed at one state or the other, we can imagine examining the distribution of phase points p(x_t) at finite time t which evolved from x – that is, from trajectories initiated at t = 0 from x with absorbing boundary conditions at initial and target states. If t is sufficiently short, such that no trajectories have yet been absorbed by either state, the expected fraction that eventually will be absorbed to the target is by definition given by the average committor of current phase points x_t [23]. That is, with trajectories indexed by i, the committor can be estimated by

  Π(x) ≐ (1/N) Σ_i Π(x_{t,i}) .    (1)

This same expression can be used at longer t, when some trajectories have been absorbed, if we introduce the 'overloaded' definitions Π(x_{t,i}) ≡ 1 if trajectory i was absorbed to the target and zero if absorbed to the initial state. With this adjustment, the estimator (1) is applicable at any time t.

Algorithm 2 implements the preceding formulation using an iterative process for self-consistency. Because the committor average is stationary, we can use (1) at any time or by averaging over all times.
Here, committor values are updated based on following trajectories passing through a given phase point, approximated as a discrete bin, and calculating time averages over all the visited 'downstream' bins. By contrast, distribution estimation in Algorithm 1 averages over time for each bin separately, and does not follow trajectories. For convenience, trajectories which reach a macrostate are 'padded' with committor values of zero or one depending on the macrostate. Once again, we expect a slight discretization error, but using trajectories leverages the maximum possible information about intra-bin dynamics. Bins are not assumed to exhibit Markovian behavior.

Algorithm 2
Committor calculation

Begin with a set of absorbing-boundary-condition trajectory fragments, as described in Algorithm 0.

Assign each bin within the target macrostate a committor of 1. All other bins are initialized to 0, including in the initial macrostate.

for all trajectory fragments do
  if fragment reaches target or initial macrostate then
    Pad the trajectory: assign fixed committor values of 1 or 0, respectively, to all time points starting from the absorbing event and ending at the chosen fixed length M.
  end if
end for

repeat
  for all bins do
    if bin is within a macrostate then
      Do not change the committor – it remains 0 or 1.
    else
      The next estimated committor value is the average committor over all bins subsequently visited by all trajectories starting in this bin.
    end if
  end for
until change between iterations is below a user-defined convergence threshold
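Algorithm 2 can be sketched as follows. This is a schematic implementation with names of our own choosing; fragments are assumed already 'padded' as described above, so that every step after absorption carries a bin index inside the absorbing macrostate:

```python
import numpy as np

def iterate_committor(fragments, n_bins, target, initial,
                      n_iter=500, tol=1e-10):
    """Sketch of Algorithm 2: self-consistent committor estimation.

    fragments: arrays of bin indices over time (padded after absorption).
    target, initial: sets of bin indices defining the two macrostates."""
    pi = np.zeros(n_bins)
    for b in target:
        pi[b] = 1.0  # target bins fixed at 1; initial bins fixed at 0
    free_bins = [b for b in range(n_bins)
                 if b not in target and b not in initial]
    # group fragments by starting bin
    by_start = {}
    for f in fragments:
        by_start.setdefault(int(f[0]), []).append(f)
    for _ in range(n_iter):
        pi_new = pi.copy()
        for b in free_bins:
            downstream = [pi[f[1:]] for f in by_start.get(b, [])]
            if downstream:
                # average committor over all subsequently visited bins
                pi_new[b] = np.mean(np.concatenate(downstream))
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi
```

Because padded steps inside the macrostates contribute fixed values of 0 or 1, the estimator remains valid even after trajectories have been absorbed, mirroring the 'overloaded' definition in Eq. (1).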
III. SYSTEMS AND RESULTS

A. Systems
The iterative equilibrium distribution estimation technique is first applied to a set of simulated trajectories in a one-dimensional (1D) double-well potential with a 5 k_B T barrier, shown in Fig. 1 and simulated using overdamped Langevin dynamics.

Motion under overdamped Langevin dynamics obeys

  x_{i+1} = x_i − (Δt/mγ) dV/dx |_{x_i} + Δx_rand    (2)

where γ is the friction coefficient, m is set to 1, and Δx_rand is a stochastic displacement with its magnitude drawn from a Gaussian distribution centered at 0 with σ = sqrt(2 k_B T Δt/mγ), where k_B T is set to 1 and Δt is the timestep. The double-well potential used is given by

  V(x) = k_B T [ (a x/x_0)^4 − (b x/x_0)^2 ]    (3)

where x_0 is an arbitrary reference length and the dimensionless constants a and b set the 5 k_B T barrier.

The full dataset consisted of 32 trajectories of equal length. We used 130 equal-width states, of which 80 were in the intermediate region and 25 were in each of macrostates A and B, shown in Fig. 1.

The other system analyzed is a 208 µs atomistic molecular dynamics simulation of Trp-cage folding saved with 200 ps resolution [27]. This trajectory is notable for being very long and well-sampled.

B. Equilibrium distribution
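The overdamped Langevin update of Eq. (2) takes only a few lines to implement. The sketch below uses illustrative parameter values (not the paper's), the noise amplitude sqrt(2 k_B T Δt/mγ) consistent with fluctuation-dissipation, and function names of our own choosing:

```python
import numpy as np

def simulate_overdamped(V_prime, x0, n_steps, dt, gamma,
                        m=1.0, kT=1.0, seed=0):
    """Euler-Maruyama integration of Eq. (2):
    x_{i+1} = x_i - (dt/(m*gamma)) V'(x_i) + dx_rand,
    with dx_rand Gaussian of std sqrt(2*kT*dt/(m*gamma))."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * kT * dt / (m * gamma))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        # deterministic drift down the potential gradient, plus noise
        x[i + 1] = x[i] - dt / (m * gamma) * V_prime(x[i]) \
                   + sigma * rng.normal()
    return x
```

A quick sanity check is to integrate a harmonic potential V = x²/2 (so V' = x) and confirm that the sampled variance approaches the Boltzmann value k_B T.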
Fig. 2 illustrates the convergence of the iteratively estimated equilibrium distribution, and Fig. 3 demonstrates the final result of the iterative calculation in the 1D double-well system. In general, the final converged iteration reproduces the Boltzmann distribution precisely and without bias.

Applying the iterative equilibrium distribution estimator to the Trp-cage folding trajectory fragments (Fig. 4) similarly shows reasonable agreement with simple counts. The right-most bin is a notable exception and warrants further investigation.
FIG. 1. Double-well potential used for overdamped Langevin dynamics simulations. Macrostate A is comprised of states at x/x_0 < −10, and B of states at x/x_0 > 10.

FIG. 2. Plot of the iterative equilibrium distribution estimator's convergence. Some intermediate iterations have been omitted for clarity. Warmer colors show later iterations, and the black line is the initial weight in each bin.
FIG. 3. Equilibrium distributions for the double-well potential system. Since the exact form of the potential is known, the Boltzmann distribution (red) provides reference equilibrium probabilities. Shown are the distribution after one iteration (green) and the distribution after the convergence criterion was met (blue). Error bars indicate one standard deviation across 5 independent trials.
FIG. 4. Equilibrium distributions for the Trp-cage folding trajectory fragments, shown on a log and a linear scale. Shown are the distribution after one iteration (green), the distribution after the convergence criterion was met (blue), and counts in each bin from the original full trajectory (red), averaged across independent trials based on sub-dividing the full Shaw trajectory into five segments. Bins have been coarse-grained from 1000 initial bins for visualization. Error bars represent minima and maxima among five independent trials.
FIG. 5. Validation of iterative committor estimation in a one-dimensional model. Committor estimates are shown for the brute-force/naive calculation (green line) as well as the iterative approach (blue line), for the one-dimensional model of the potential in Eq. (3). Error bars indicate one standard deviation across 5 independent trials.
C. Committor calculation
As before, we first apply the committor estimator to the 1D double-well potential. With this simple 1D system we are able to directly compute the committor through a "brute-force" technique, where a number of trajectories are initialized from each point and stopped when they reach a macrostate. Although the computational cost of this would be prohibitive for a more complex system, it provides an unbiased reference.

Fig. 5 shows the result of the iterative committor estimator along with the brute-force reference for the 1D system. The committor profile follows the expected sigmoid shape between the two wells, with a value of 0.5 at the peak of the barrier. The iterative approach is thus validated as unbiased, by comparison to the brute-force computation.

We also applied the iterative scheme to estimating the committor for the Trp-cage system. Once again, brute-force reference committor values were obtained by following trajectory fragments originating in each bin until they reached a macrostate; the fraction reaching state B before state A determined the committor. As seen in Fig. 6, the iterative committor estimation algorithm yields results for the Trp-cage data that track these brute-force estimates well, especially near the macrostates.

FIG. 6. Scatter plot of brute-force committor values vs iterative committor values for Trp-cage. A line of slope 1 is shown in blue. Error bars represent a single standard deviation among independent trials based on sub-dividing the full Shaw trajectory into five segments.
IV. CONCLUSIONS
We have introduced algorithms that employ two well-known principles, iteration and stationarity, to estimate key observables from a trajectory or set of trajectories. In principle, the input trajectories need not follow any prescribed distribution. The procedures described do not rely on a Markov assumption. Although discrete bins are used for "accounting," the continuous trajectories embody all details of the landscape and dynamics which, in turn, are included implicitly in the analyses.

Subsequent work will show that the procedures described here are formally equivalent to 'power method' [28] evaluation of the stationary distribution of a non-standard transition matrix that accounts for trajectory dynamics over all available timescales, as pointed out to us by David Aristoff and Gideon Simpson.
ACKNOWLEDGMENTS
We appreciate helpful discussions with David Aristoff and Gideon Simpson. We thank D.E. Shaw Research for sharing the protein folding trajectory with us and the NIH for support through Grant GM115805.

[1] S. A. Hollingsworth and R. O. Dror, Molecular dynamics simulation for all, Neuron, 1129 (2018).
[2] A. Grossfield and D. M. Zuckerman, Quantifying uncertainty and sampling quality in biomolecular simulations, Annual Reports in Computational Chemistry, 23 (2009).
[3] A. Grossfield, P. N. Patrone, D. R. Roe, A. J. Schultz, D. W. Siderius, and D. M. Zuckerman, Best practices for quantification of uncertainty and sampling quality in molecular simulations [Article v1.0], Living Journal of Computational Molecular Science (2018).
[4] D. M. Zuckerman, Equilibrium sampling in biomolecular simulations, Annual Review of Biophysics, 41 (2011).
[5] L. Weng, S. L. Stott, and M. Toner, Exploring dynamics and structure of biomolecules, cryoprotectants, and water using molecular dynamics simulations: implications for biostabilization and biopreservation, Annual Review of Biomedical Engineering, 1 (2019).
[6] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman, The weighted histogram analysis method for free-energy calculations on biomolecules. I. The method, Journal of Computational Chemistry, 1011 (1992).
[7] E. Darve, D. Rodríguez-Gómez, and A. Pohorille, Adaptive biasing force method for scalar and vector free energy calculations, The Journal of Chemical Physics, 144120 (2008).
[8] Y. Sugita and Y. Okamoto, Replica-exchange molecular dynamics method for protein folding, Chemical Physics Letters, 141 (1999).
[9] G. A. Huber and S. Kim, Weighted-ensemble Brownian dynamics simulations for protein association reactions, Biophysical Journal, 97 (1996).
[10] C. Dellago, P. Bolhuis, and P. L. Geissler, Transition path sampling, Advances in Chemical Physics, 1 (2002).
[11] T. S. van Erp, D. Moroni, and P. G. Bolhuis, A novel path sampling method for the calculation of rate constants, The Journal of Chemical Physics, 7762 (2003).
[12] A. K. Faradjian and R. Elber, Computing time scales from reaction coordinates by milestoning, The Journal of Chemical Physics, 10880 (2004).
[13] R. J. Allen, D. Frenkel, and P. R. ten Wolde, Simulating rare events in equilibrium or nonequilibrium stochastic systems, The Journal of Chemical Physics, 24102 (2006).
[14] J. D. Chodera and F. Noé, Markov state models of biomolecular conformational dynamics, Current Opinion in Structural Biology, 135 (2014).
[15] G. R. Bowman, V. S. Pande, and F. Noé, An Introduction to Markov State Models and Their Application to Long Timescale Molecular Simulation, Vol. 797 (Springer Science & Business Media, 2013).
[16] D. M. Zuckerman and T. B. Woolf, Efficient dynamic importance sampling of rare events in one dimension, Phys. Rev. E, 016702 (2000).
[17] H. Wan and V. A. Voelz, Adaptive Markov state model estimation using short reseeding trajectories, The Journal of Chemical Physics, 24103 (2020).
[18] T. Hill, Free Energy Transduction and Biochemical Cycle Kinetics, Dover Books on Chemistry (Dover Publications, 2005).
[19] D. Bhatt, B. W. Zhang, and D. M. Zuckerman, Steady-state simulations using weighted ensemble path sampling, The Journal of Chemical Physics, 014110 (2010).
[20] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry, and the Natural Sciences (Springer-Verlag, Berlin, New York, 1985).
[21] N. G. Van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier, Amsterdam, Boston, London, 2007).
[22] W. E and E. Vanden-Eijnden, Towards a theory of transition paths, Journal of Statistical Physics, 503 (2006).
[23] J.-H. Prinz, M. Held, J. C. Smith, and F. Noé, Efficient computation, sensitivity, and error analysis of committor probabilities for complex dynamical processes, Multiscale Modeling & Simulation, 545 (2011), https://doi.org/10.1137/100789191.
[24] A. M. Ferrenberg and R. H. Swendsen, New Monte Carlo technique for studying phase transitions, Physical Review Letters, 2635 (1988).
[25] J. Copperman, D. Aristoff, D. E. Makarov, G. Simpson, and D. M. Zuckerman, Transient probability currents provide upper and lower bounds on non-equilibrium steady-state currents in the Smoluchowski picture, The Journal of Chemical Physics, 174108 (2019).
[26] D. Bhatt and D. M. Zuckerman, Beyond microscopic reversibility: Are observable nonequilibrium processes precisely reversible?, Journal of Chemical Theory and Computation, 2520 (2011).
[27] K. Lindorff-Larsen, S. Piana, R. O. Dror, and D. E. Shaw, How fast-folding proteins fold, Science, 517 (2011).
[28] Power iteration, https://en.wikipedia.org/wiki/Power_iteration