Adaptive stochastic continuation with a modified lifting procedure applied to complex systems
Clemens Willers, Uwe Thiele, Andrew J. Archer, David J. B. Lloyd, Oliver Kamps
Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany
Center for Nonlinear Science (CeNoS), Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany
Center for Multiscale Theory and Computation (CMTC), Westfälische Wilhelms-Universität, 48149 Münster, Germany
Department of Mathematical Sciences, Loughborough University, Loughborough LE11 3TU, United Kingdom
Interdisciplinary Centre for Mathematical Modelling, Loughborough University, Loughborough LE11 3TU, United Kingdom
Department of Mathematics, University of Surrey, Guildford, GU2 7XH, United Kingdom
(Dated: October 7, 2020)

Many complex systems occurring in the natural or social sciences or economics are frequently described on a microscopic level, e.g., by lattice- or agent-based models. To analyze the states of such systems and their bifurcation structure on the level of macroscopic observables, one has to rely on equation-free methods like stochastic continuation. Here, we investigate how to improve stochastic continuation techniques by adaptively choosing the parameters of the algorithm. This allows one to obtain bifurcation diagrams quite accurately, especially near bifurcation points. We introduce lifting techniques which generate microscopic states with a naturally grown structure, which can be crucial for a reliable evaluation of macroscopic quantities. We show how to calculate fixed points of fluctuating functions by employing suitable linear fits. This procedure offers a simple measure of the statistical error. We demonstrate these improvements by applying the approach in analyses of (i) the Ising model in two dimensions, (ii) an active Ising model, and (iii) a stochastic Swift-Hohenberg model. We conclude by discussing the abilities and remaining problems of the technique.
I. INTRODUCTION
In many fields of science – such as physics, chemistry, biology, economics, and sociology – the behavior of various complex systems is captured by nonlinear mathematical models that describe the time evolution of the system. The models are formulated in terms of state variables and depend on specific parameters. They can exhibit various stable and unstable solutions corresponding, e.g., to steady states, time-periodic states, or chaotic states of the system. Such a general classification holds for model types ranging from discrete agent-based models to continuum models. It directly applies to deterministic models, while for stochastic models it applies when considering certain statistical quantities [1–6]. In this context, we regard molecular dynamics (MD) simulations and similar systems as stochastic systems. Although formally they are deterministic, in practice their very many degrees of freedom cause deterministic chaos that renders them stochastic [7–9].

Nonlinear models describe a rich variety of qualitatively different system behavior and related (possibly abrupt) transitions that may occur when conditions change: States change their stability, additional states emerge, or several states exist at identical parameter values (multistability, hysteresis). For instance, much attention in the fields of natural science and engineering has focused on "tipping points" where small changes of a control parameter result in large changes in system behavior [10]. All this information can be summarized in bifurcation diagrams that present how the various states of the studied system or solutions of the developed model emerge, change, and vanish when varying a control parameter [11–13]. For this, a real quantity is defined as the order parameter or solution measure which characterizes and distinguishes the different states.

∗ [email protected]; ORCID ID: 0000-0001-5777-0514
‡ ORCID ID: 0000-0002-4706-2204
§ ORCID ID: 0000-0002-1902-0007
¶ ORCID ID: 0000-0003-0986-0878
This quantity – which can, e.g., be a physical observable, a statistical quantity, or a norm – is plotted against the control parameters [14, 15]. Here we restrict ourselves to analyzing steady states of the system, i.e., the solution measure does not involve averaging over time.

For deterministic models consisting, e.g., of systems of ordinary or partial differential equations, a well-developed and efficient tool for calculating bifurcation diagrams is numerical path continuation [16–18]. With these methods one can directly follow branches of solutions and detect changes in their stability and, in consequence, bifurcations where additional solution branches emerge that one can switch onto. In contrast to procedures based on numerical time simulations, continuation can detect and follow branches of both stable and unstable states. It is used in many examples from classical dynamical systems [11, 13] to spatially extended systems such as those occurring in fluid dynamics or reaction-diffusion systems [17, 19–22].

To apply numerical continuation, one normally needs an evolution equation at the particular scale of interest in a closed form. That is, for the order parameter X and a parameter λ (we only regard one-dimensional parameter spaces), one considers a differential equation,

    Ẋ = f_λ(X),   (1)

or an iterative mapping,

    X_{t+τ} = F_λ(X_t),   (2)

where τ is the time step of the mapping. Here we are interested in steady states X_s(λ) at different values of λ, i.e., in solutions of the equation f_λ(X_s(λ)) = 0 or F_λ(X_s(λ)) = X_s(λ), respectively. Note that a differential equation can be transformed into an iterative map using time discretization and integration. So, generally speaking, we need to obtain the roots of a function

    G_λ(X_s(λ)) := F_λ(X_s(λ)) − X_s(λ) = 0.   (3)

Hence, continuation means finding zeros of the function G_λ for subsequent values of the parameter λ.
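This root-finding view of continuation can be made concrete with a minimal sketch. The toy model below (our own choice, not the authors' code) follows the nontrivial steady-state branch of the pitchfork normal form Ẋ = λX − X³, reusing the zero found at one parameter value as the initial guess at the next:

```python
# Minimal natural-parameter continuation for the map X_{t+tau} = F_lambda(X_t).
# The model dX/dt = lam*X - X**3 and all parameter values are illustrative.

def F(X, lam, tau=0.1):
    """One explicit Euler step of dX/dt = lam*X - X**3."""
    return X + tau * (lam * X - X**3)

def G(X, lam):
    """G_lambda(X) = F_lambda(X) - X; steady states are zeros of G."""
    return F(X, lam) - X

def newton_root(g, x0, h=1e-6, tol=1e-12, max_iter=100):
    """Plain Newton iteration with a finite-difference derivative."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        dg = (g(x + h) - g(x - h)) / (2 * h)
        x -= gx / dg
    return x

def continue_branch(lams, X0):
    """Follow a branch of steady states: the zero found at one value of
    lambda serves as the initial guess at the next (trivial predictor)."""
    branch, X = [], X0
    for lam in lams:
        X = newton_root(lambda x: G(x, lam), X)
        branch.append((lam, X))
    return branch

# Follow the upper branch X_s(lam) = sqrt(lam) from lam = 0.5 to 1.5.
branch = continue_branch([0.5 + 0.1 * k for k in range(11)], X0=0.7)
```

Because the previous solution seeds the next root search, each Newton iteration starts close to the branch; this is exactly the prediction-correction idea described in the text, with the prediction step kept trivial.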
A complete bifurcation diagram is calculated by following all solution branches step by step. In every step, zeros obtained in the previous step or several previous steps can be used to predict a next value. This value is then used as the initial guess for the correcting root finding procedure (see, e.g., chapter 10 of Ref. [11]). Advancing step by step in λ in this prediction-correction mode represents the simplest realization of numerical path continuation.

However, there exist many examples where such equations are only implicitly given. In this case, one can still apply path continuation in an equation-free way. That is, one uses a time stepper instead of explicit evolution equations [23–28]. This technique is called stochastic continuation and makes it possible to apply continuation techniques to a much wider range of systems than those captured by Eqs. (1) or (2), including real-world experiments [29–31].

Typical examples where the equation-free approach is applied are lattice-based or agent-based models [25, 27], MD simulations [32], and kinetic Monte Carlo methods [33]. In all these examples, the model is defined on a microscopic level whereas one is interested in the dynamics of a macroscopic quantity for which an analytical description is not available. The role of the macroscopic quantity is often taken by a statistical quantity, i.e., one analyzes ensembles of states. This is the origin of the term stochastic continuation; i.e., the method is not limited to stochastic contexts. Variants are also applied to stochastic partial differential equations [34].

In summary, stochastic continuation refers to continuation methodologies that one can apply without having direct access to the dynamics of the quantity of interest. An indirect access is made possible by three steps: lifting, evolving, and restricting. First, in the lifting step one creates a suitable microscopic state (or an ensemble of such states) belonging to a specific value of the macroscopic quantity.
Then, the state is evolved via a time stepper, i.e., the microscopic time evolution is advanced. Finally, in the restricting step the macroscopic quantity belonging to the newly evolved microscopic state is calculated. Taken together, these three steps realize the required time evolution on the macroscopic level and replace the function F_λ.

The present work improves several aspects of the stochastic continuation method. To begin with, in most cases, the output of the time stepper in the evolving step contains statistical fluctuations. We propose an alternative way of handling this issue. Namely, instead of directly applying a root finding method to the fluctuating function G_λ, we propose to evaluate a few (between 10 and 50) function values in a suitable neighborhood of the initial guess and perform a linear fit; see Ref. [35]. The root of the linear fit is found to be a good approximation for the desired steady state. This procedure moderates the influence of fluctuations and works in a simple and stable manner.

Furthermore, the stochastic continuation algorithm involves quite a large number of numerical parameters, e.g., the length of the microscopic time evolution in the evolving step, the number of microscopic realizations employed, and the step size of the path continuation step. The corresponding parameter settings have a crucial influence on the quality and precision of the continuation result. In Ref. [27], one possible way to systematically determine these parameters is described. It is based on an analysis of the probability distribution of the order parameter and how it changes during the microscopic time evolution step. This analysis is done during a configuration step prior to the continuation run. As a further improvement, we suggest here to adaptively adjust certain specific parameters on the fly.
Especially near bifurcation points, this leads to more accurate results.

We also consider the lifting procedure, which is a critical part of stochastic continuation. Being a one-to-many mapping, it is not uniquely defined. In most cases described in the literature, a simple constrained random lifting procedure is used, i.e., a microscopic state with the correct macroscopic observable is chosen randomly without any control of its internal structure [23, 25, 27, 33]. Other lifting procedures found in the literature concern specific examples. In Ref. [32], a lifting technique for MD simulations of dense fluids is described where the positions and velocities of atoms are chosen according to appropriate probability distributions based on the values of the macroscopic quantities. In addition, a detailed study of lifting errors for this particular situation is given. Another example involving MD simulations (water in carbon nanotubes) is discussed in Ref. [36]. There, the lifting is done using a constrained MD simulation with an artificial bias potential. In Ref. [37] a lifting for a grand canonical Monte Carlo simulation of the Larson model for micelle formation is proposed. The lifting is based on a database of microscopic structures which are obtained by performing lengthy time simulations.

Under the assumption that there is a clear separation of the slow macroscopic and the fast microscopic timescales, possible lifting errors can be "healed" during the microscopic time evolution performed during the evolving step. Considering an ensemble of microscopic states, a similar assumption is made: The ensemble is generated according to the low-order moment(s) of the appropriate distribution that are known from the macroscopic observable(s). The unknown higher-order moments are expected to establish their values much more rapidly than the low-order ones.
Through this slaving, errors made in forming the ensemble are healed [27, 33, 38]. The healing process is supposed to constitute a minor part of the algorithm's computation time.

However, in many important cases, the microscopic states show an internal spatial structure that is important for the macroscopic dynamics (see Sec. III). The healing process does develop these internal structures, but this can sometimes only occur after a very long microscopic time evolution, i.e., at a huge numerical cost. In such cases, stochastic continuation has no advantage for stable states over calculations based on just direct time simulations.

In the literature, various ways to tackle this problem are presented. Reference [39] uses an algorithm which creates initial states for the evolving step that are very close to a slow manifold of the system. However, this only works for specific coupled differential equations. In Ref. [40] a single suitable fixed reference state is used and adjusted according to the values of the macroscopic quantities. However, this does not allow the character of the microscopic states to change over the course of the continuation process and results in similar problems as a constrained random lifting. The lifting techniques used for the MD simulation of water in carbon nanotubes [36] and of micelle formation [37] mentioned above are able to produce naturally grown microscopic structures. Yet, they require a vast numerical effort because they are based on long-time simulations at every single lifting step and on the creation of a database of clustered structures, respectively.

Here, we propose a solution to this problem that consists of a lifting technique which uses a flexibly varying reference state. This facilitates the creation of suitably structured subsequent microscopic states for different values of the macroscopic quantity as one follows the continuation path.
In particular, the continuation of stable branches gives much more accurate results with this procedure, which we refer to as structure lifting.

After introducing our proposed improvements to the stochastic continuation procedure, we illustrate the approach by applying it to three distinct systems and discuss the results. This allows us to focus on the strengths and weaknesses of our method, but we do not give an exhaustive analysis of the individual example systems.

The first system we apply the approach to is the Ising model in two dimensions (2D). This is a simple model for ferromagnetism [41]. It consists of a 2D lattice containing N sites that are each occupied by a spin which can be in one of two possible states, s_i ∈ {−1, 1}, with i = 1, ..., N. Every spin interacts with its four nearest neighbors with interaction strength J. The Hamiltonian reads

    H(s) = − Σ_nn J s_i s_j,   (4)

where s = {s_1, s_2, ..., s_N} and Σ_nn denotes the sum over nearest-neighbor pairs. The macroscopic observable is the magnetization m, which is the mean value of any of the spins, i.e., m := ⟨s_i⟩, where ⟨s_i⟩ denotes the time (or ensemble) average value of s_i. The microscopic dynamics of the model is given by the usual Metropolis Monte Carlo algorithm [42]. One is interested in the stable and unstable states, which are characterized by their magnetization value, which is a function of the reduced temperature T′ := k_B T / J, where T is the temperature and k_B is Boltzmann's constant. Onsager's analytical solution shows that above a critical temperature T_c the magnetization is zero while the system shows a spontaneous magnetization below T_c [43].

The second example we consider is the active Ising model introduced in Ref. [44]. Here the spins can be associated with particles that not only can flip their spin state but can also move. The preferred direction of motion of a spin depends on its orientation.
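For the first of these examples, the single-spin-flip Metropolis dynamics generated by the Hamiltonian (4) can be sketched as follows; the lattice size, reduced temperature, and number of sweeps below are illustrative choices of ours, not values from the text:

```python
import math
import random

def metropolis_sweep(s, Tp, rng=random):
    """One Metropolis sweep: attempt L*L single-spin flips at reduced
    temperature Tp = k_B*T/J on the lattice s (list of lists of +/-1)."""
    L = len(s)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Energy change (in units of J) for flipping spin (i, j), using
        # periodic boundary conditions and the four nearest neighbors.
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / Tp):
            s[i][j] = -s[i][j]
    return s

def magnetization(s):
    """Restriction to the macroscopic observable m = <s_i>."""
    L = len(s)
    return sum(sum(row) for row in s) / (L * L)

random.seed(0)
L, Tp = 32, 1.5                    # well below the critical T' ~ 2.269
s = [[1] * L for _ in range(L)]    # start fully magnetized
for _ in range(50):
    metropolis_sweep(s, Tp)
m = magnetization(s)               # remains large at low temperature
```

The `magnetization` function is exactly the restriction step discussed later: it maps a microscopic spin configuration to the macroscopic order parameter m.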
Depending on the mean particle density, different phases are observed. At low densities, the mean orientation and the mean velocity are zero. This is called the "gas phase." At high densities, the spins are ordered, i.e., there emerges an overall magnetization which is connected to an overall directed motion. This is called the "liquid phase." At intermediate densities, a phase separation occurs into different regions containing the gas and the liquid states. The total fraction containing the liquid can be used as the macroscopic order parameter (see Sec. IV for details).

Finally, as a third example, we perform stochastic continuation of the steady inhomogeneous solutions of a stochastic partial differential equation, namely a stochastic version of the Swift-Hohenberg equation [45, 46]. It incorporates an additive noise field η multiplied by a parameter D which determines the strength of the noise term. The time evolution equation for the field Ψ(x, y, t), which varies with position (x, y) and time t, is

    ∂_t Ψ = ε Ψ − Ψ³ − (1 + ∇²)² Ψ + √D η.   (5)

Different implementations of stochasticity (i.e., the noise field η) are employed in the literature: One finds additive Gaussian spatiotemporal noise [47–49], independent Gaussian additive and multiplicative noise [50], and spatially global Gaussian noise [45]. Here we take the noise to be normally distributed and uncorrelated in space and time. Depending on the control parameter ε and the magnitude of D, this equation has various stable steady states which correspond, e.g., to stripe or square patterns of the field Ψ. A macroscopic quantity which can be used as a solution measure (order parameter) in a bifurcation diagram is the L² norm of the deviations of Ψ from its mean value.

This paper is organized as follows: In Sec. II, we introduce the details of stochastic continuation and the corresponding notations. Additionally, we explain some difficulties with the method and present our proposed improvements.
We illustrate the improved method with three examples: the classical Ising model in 2D in Sec. III, an active Ising model in Sec. IV, and a stochastic Swift-Hohenberg equation in Sec. V. Finally, in Sec. VI, we summarize the capabilities of the proposed approach and discuss some remaining questions.

II. METHOD

A. Numerical continuation and the pseudoarclength method
The method of stochastic continuation extends the standard equation-based continuation method for deterministic systems. Therefore, we start by briefly explaining the essence of continuation methods for one-dimensional deterministic systems [16–18]. These have a state variable (or order parameter) X that is a real macroscopic quantity. The dynamics is defined by an ordinary differential equation (ODE) of the form in Eq. (1) or by an iterative map of the form in Eq. (2), which depend on a control parameter λ (we only consider one-dimensional parameter spaces). Note that a dynamics described by a partial differential equation (PDE) or an integrodifferential equation can be transformed into a system of ODEs, e.g., by discretizing in real space using finite differences or spectral methods. Further, one may employ time discretization and integration to transform a differential equation into an iterative map. Therefore, here we only consider the latter.

Of central interest is how steady states X_s(λ) depend on the value of the control parameter λ; i.e., we neglect time-periodic and chaotic states. In other words, the aim is to search for states which satisfy Eq. (3). Presented in a bifurcation diagram as a function of λ, these zeros form different branches (this is a consequence of the implicit function theorem [16]). Applying the continuation method, each branch can be calculated by following it step by step. In every step, a predictor-corrector scheme is applied, whereby different prediction and correction methods are used in the literature.

In the simplest version of continuation, one explicitly changes the parameter λ in small steps. The corresponding steady states X_s(λ) are obtained by a root finding procedure applied to the function G_λ in Eq. (3) for a fixed value of λ. In this approach the solution at each step in λ can be obtained by using the solution from the previous step as the initial guess for the solution at the next step. Thinking of this as a "prediction" followed by a "correction," the prediction step is trivial and is represented by a horizontal line in the bifurcation diagram, while the corrections correspond to vertical displacements.

FIG. 1. Sketch illustrating the basis of numerical continuation. Shown are schemes consisting of a secant predictor and (i) a pseudoarclength corrector and (ii) an orthogonal corrector. The secant predictor uses two previous points on the solution branch to obtain an initial guess for the corrector via a linear extrapolation by an arclength ds. The pseudoarclength corrector employs Newton steps on a circle of radius ds around the previous point to obtain the next point on the branch. The orthogonal corrector searches on a line perpendicular to the direction of the secant.

Taking two previous steps into account and using a linear extrapolation, it is possible to use this secant predictor to obtain a better initial state in the predictor step. Again, the corrector is a root finding process at fixed λ. The correction at fixed λ is problematic at saddle-node bifurcations because one cannot follow the bifurcation curve through the turning point (or fold). Instead, the predictor goes beyond the bifurcation point and the corrector either does not converge to any steady solution or converges to a solution on a branch different to the desired one.

To follow a solution branch through a fold, one may instead use the pseudoarclength method [51]. In this method, a branch is not tracked using the control parameter λ as the main continuation parameter. Instead, an intrinsic and unique property of the bifurcation curve, the arclength s along the branch, is used. In particular, one employs a secant predictor advancing an arclength ds. Then, the corrector consists of Newton steps at fixed arclength, i.e., the true position of the next point on the branch is searched for along a circle with radius ds around the previous point, which is sketched as option (i) in Fig. 1.
That is, the next point is not searched for at fixed λ but at fixed s, while λ is adapted in the Newton steps and its exact value becomes part of the solution. With the resulting predictor-corrector scheme, it is possible to follow a branch through a saddle-node bifurcation.

Another alternative is to make the correction along a line which is perpendicular to the direction defined by the secant predictor, as sketched as option (ii) in Fig. 1. We refer to this scheme as a secant predictor with an orthogonal corrector and use it below in our continuation algorithm. It is simpler to realize than the pseudoarclength predictor-corrector method and approximates it with a negligible error.

We parametrize the perpendicular line on which the correction is done with a real number u such that the line is given by the set of points (λ_u, X_u). We search for a zero of the function G_λ along this line. That is, we search for the zero of the real-valued function G̃(u) := G_{λ_u}(X_u). If we denote a stable steady state as X*, then when the state of the system X < X*, we have X < F_λ(X) and so G_λ(X) is positive. On the other hand, when X > X*, then G_λ(X) is negative. Hence, near the steady state, the function G_λ has a negative slope. Near an unstable steady state, it has a positive slope.

We cannot transfer this idea to the function G̃ directly, because, on moving along a curved solution branch, the perpendicular line can have many different orientations. Only for a horizontal segment of a branch and a vertical perpendicular line can the above consideration be directly applied to the function G̃. Along curved branches the meaning of the sign of the slope of the function G̃ may change. Nevertheless, via the orientation of the perpendicular line, there is a connection between this sign and the stability of the solutions on the branch being continued.
So, if a change in the sign of the slope of G̃ is detected during the continuation, then one has to be aware that a bifurcation point may have been passed and consider the possibility of other branches existing in the bifurcation diagram. This is discussed further in the outlook in Sec. VI.

The numerical continuation schemes just described form a well-developed method to calculate bifurcation diagrams for deterministic systems formulated as ODEs, PDEs, or as iterative maps. Many different software packages exist which implement this method, like auto-07p [51–53] or pde2path [54]. Furthermore, these contain many additional helpful and sophisticated tools that enable the detection of many types of bifurcations, branch switching to steady and time-periodic states, and two-parameter continuation of loci of bifurcation points, like saddle-node, Hopf, and period-doubling bifurcations. This makes it possible to efficiently determine complex bifurcation diagrams.

B. Stochastic continuation
The continuation methods described in the previous subsection are well suited to investigate deterministic systems. We now explain how the basic concepts can be applied to stochastic systems. For such systems we must distinguish between the macroscopic and the microscopic descriptions and how these are related. In general, we use the terms "macroscopic quantity" and "macroscopic dynamics" for the quantity and dynamics we are interested in on the level of a macroscopic or ensemble-averaged description. These are the quantities that characterize the steady states in the context of bifurcation diagrams. By "microscopic quantity" and "microscopic state" we refer to individual representations of the stochastic system. Note that the terms "microscopic" and "macroscopic" may not refer literally to length scales in the system. For instance, the macroscopic state can be the mean value of a function of the microscopic state (e.g., the mean magnetization of the spin lattice in the Ising model) or the mean over an ensemble of microscopic states (e.g., a time average of a quantity described by a Langevin equation). In the numerical continuation for deterministic systems described above, an equation defining the macroscopic dynamics is explicitly given.

FIG. 2. Scheme indicating how the lifting, evolving, and restricting steps are used to evaluate a function call of F_λ in the method of stochastic continuation: X_t is lifted to an ensemble {x^i_t}, which is evolved to {x^i_{t+τ}} and then restricted to X_{t+τ} = F_λ(X_t).

[Footnote: A straight line can be parametrized in two different directions. So, to be specific, the relation between the sign of the slope of the function G_λ and the stability of the steady state depends on the direction of the parametrization. For one direction, we have stability for a positive slope, while for the other direction stability corresponds to a negative slope.]
However, for a stochastic system the macroscopic dynamics is given only implicitly, and the connection between the microscopic and the macroscopic level is often (but not always) of a statistical nature, since it arises from the stochastic dynamics of the microscopic states. Numerical continuation for stochastic systems, referred to as "stochastic continuation," continues the steady states of the macroscopic quantities and determines their bifurcation diagrams without explicit knowledge of the time evolution of the macroscopic state. Note that by "steady states" of a stochastic system, we mean that the macroscopic quantity obtained by averaging is approximately steady. While microscopic states are not steady in a stochastic system, the solution measure or order parameter used to distinguish different types of microscopic states is always obtained by some averaging procedure (e.g., the mean orientation of all spins located on the Ising lattice at a particular time). Because in practice systems are always of finite size, the macroscopic quantity still slightly fluctuates. This important issue we will discuss further in Sec. II C.

In principle, numerical continuation can be done in all cases where one has access to the function F_λ. For a stochastic system this function is not explicitly given, but one can instead use the microscopic time stepper; see Refs. [23–27]. To do this we have to determine a way to create for each value of the macroscopic quantity X a corresponding microscopic state x or an ensemble of such states {x^i}. Using such a mapping, one is able to bypass the problem of not having an explicit form for the function F_λ. To obtain the value of the function F_λ at X_t, we apply the following three steps: First, a corresponding microscopic state x_t or a corresponding ensemble {x^i_t} is created in the so-called lifting step.
Second, the time evolution is advanced over a time interval τ employing the explicitly known microscopic dynamics to obtain the state x_{t+τ} or the ensemble {x^i_{t+τ}}. This is the evolving step. Finally, in the restricting step, one returns to the macroscopic level by evaluating X_{t+τ}, which is identical to F_λ(X_t). This sequence of steps is shown schematically in Fig. 2 and bypasses the task of evaluating the unknown function F_λ(X_t). Thus, every time the numerical continuation algorithm calls the macroscopic function F_λ – in the procedure to find the roots of the function G_λ or in any other situation – the call of F_λ is replaced by the three-step bypass illustrated in Fig. 2.

Note that the algorithm just described includes two important additional parameters. The first is the time interval τ of the microscopic time evolution. The second is the size M of the microscopic ensemble. Rules for the choice of these are discussed below in Sec. II C 1 and Sec. II C 2, respectively.

Another important issue relates to the specific definition of the lifting map that is used. This depends on the specific example being considered and its microscopic dynamics. We present some general ideas on how to do this in Sec. II C 3 below.

A further issue for stochastic systems relates to the fact that G̃ is also a fluctuating quantity, i.e., to find a zero of G̃ one must deal with the statistical fluctuations. Here we use the C_R method introduced in Ref. [27] to prevent the occurrence of outliers which can lead to poor continuation results. In this method, every approximation which lies outside of a suitably defined interval around the initial guess is repeated a predefined number of times. This greatly reduces the probability of outliers from statistical fluctuations leading to poor results. This is used in combination with the pseudoadaptive parameter adjustment introduced in the next section.
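The lifting-evolving-restricting bypass of Fig. 2 can be sketched as follows for a toy one-dimensional Langevin dynamics; the microscopic model, the constrained random lifting, and all parameter values are our own illustrative choices, made only so that the sketch is runnable:

```python
import random
import statistics

def lift(X, M, spread=0.05, rng=random):
    """Constrained random lifting: M microscopic states whose mean is X."""
    xs = [X + rng.gauss(0.0, spread) for _ in range(M)]
    shift = X - statistics.fmean(xs)  # enforce the macroscopic constraint
    return [x + shift for x in xs]

def evolve(xs, lam, tau, dt=0.01, noise=0.02, rng=random):
    """Microscopic time stepper: Euler-Maruyama steps for the toy
    Langevin dynamics dx = (lam*x - x**3) dt + noise dW."""
    for _ in range(int(tau / dt)):
        xs = [x + dt * (lam * x - x**3) + noise * dt**0.5 * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

def restrict(xs):
    """Return the macroscopic observable of the evolved ensemble."""
    return statistics.fmean(xs)

def F(X, lam, tau=1.0, M=50):
    """Stand-in for the unknown macroscopic map F_lambda:
    lift, evolve, restrict."""
    return restrict(evolve(lift(X, M), lam, tau))

random.seed(1)
X_new = F(0.9, lam=1.0)  # ensemble mean drifts toward the stable state at X = 1
```

Here τ and M appear exactly as the two additional algorithm parameters named in the text: the evolution time of the microscopic stepper and the ensemble size over which the restriction averages.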
C. Challenges of stochastic continuation – Towards an effective procedure
The stochastic continuation approach presented in the previous section faces a number of important challenges. Here, we discuss these and give methods to remedy the difficulties.
FIG. 3. To determine the next point on the branch of steady solutions, the function G̃ is evaluated at several points within an interval around the initial guess (indicated by small crosses) on the line perpendicular to the line defined by the secant predictor. The length of the interval is controlled by the parameter σ that is chosen in such a way that the function G̃ is sufficiently linear and that the required root is contained in it.
1. Fluctuations in G̃ – Using linear fits

When dealing with stochastic systems, the system size and the ensemble size that can be simulated are always finite. As a consequence, the function G̃ exhibits statistical fluctuations which complicate the root finding and induce uncertainties.

Here we solve this problem by employing linear fits. Namely, we evaluate a finite number N_fit of function values of G̃, equidistantly spaced on the line perpendicular to the secant prediction that defines the correction step in the orthogonal corrector (see Sec. II A); here we use 10 ≤ N_fit ≤ 50. This is done up to a suitable distance from the initial guess, as illustrated in Fig. 3, and yields the function G̃ evaluated in the form of a cloud of points. The cloud is then approximated by a linear best fit whose root is employed as the approximation for the zero of G̃. A typical example is displayed in Fig. 4. This procedure is very robust if an appropriate range for the evaluation of G̃ and for the fitting is defined. Of course, the function G̃ should be sufficiently linear over this range. Thus, on the one hand, the range considered should be sufficiently small for this to be true; on the other hand, the root should lie within this range, because an extrapolation beyond the range considered can lead to poorly controlled uncertainty. As a result, we must choose the step size of the prediction step to be sufficiently small so that the initial guess for the correction and the zero of G̃ are sufficiently close together. We define a parameter σ that specifies the range to be considered, as indicated in Fig. 3. A good value turns out to be σ = 0.… ds, as we show next.

FIG. 4. The plot shows a typical point cloud obtained through evaluations of the function G̃(u) = G_{λ_u}(X_u) on an interval as illustrated in Fig. 3 with σ = 0.05, together with the resulting linear fit used to determine the zero of the fluctuating function. The standard deviation of the horizontal distances of all points to the fit line is indicated as a horizontal red bar; it serves as a measure for the statistical error in the zero. In this particular example, the data are from stochastic continuation of the branch of stable steady states of the Ising model at the reduced temperature T′ = 3.1; here T′ takes the role of λ. The size of the spin lattice is 100 × 100 with τ = 496, M = 39, σ = 0.05, ds = 0.1, and N_fit = 20. The values of τ, M, and σ are obtained by our adaptive parameter choice (as explained in Sec. II C 2) with Δ_d = 0.005 and θ_d = 1.… The slope of the resulting fit is θ = 0.93 and the error in the zero is Δ = 0.…

The fitting procedure works in a stable way because it moderates the fluctuations of the function G̃. Additionally, it has the advantage of allowing us to estimate the size of the statistical error ±Δ in the calculated zero. As a measure for Δ we use the standard deviation of the horizontal distances of all the calculated points to the fitted line – see Fig. 4. With N evaluated values of the function G̃ with the abscissae u_1, ..., u_N and the ordinates g̃_1, ..., g̃_N, and the linear fit G̃ = θu + b, we obtain

    Δ := √[ (1/N) Σ_{i=1}^{N} ( (g̃_i − b)/θ − u_i )² ].   (6)

Without our fitting technique, simple numerical root finding methods generally fail because of the fluctuations in G̃. Independently of the particular fitting technique, it is possible to determine the zeros by applying more advanced root finding methods which take derivatives of the considered function into account [27].
2. Divergence of timescales – Adaptive choice of numericalparameters
Roughly speaking, one can view the dynamics of a typical system to be considered as being composed of a deterministic part and a stochastic part. The deterministic part controls the slope θ of the function G̃, while the stochastic part causes it to fluctuate. The larger the influence of the deterministic part, the larger is the absolute value of the slope |θ|. For small |θ|, fluctuations dominate and any information we have about the location of the zero is not reliable (cf. the case in Fig. 5). For large |θ|, the uncertainties are small (cf. the case in Fig. 4).

FIG. 5. Comparison of this figure and Fig. 4 shows the influence of the size of the time span τ used for the microscopic time evolution. Here the point cloud is obtained by evaluating the function G̃(u) = G_{λ_u}(X_u) with τ = 10, which is too small. All the other parameters are as in Fig. 4. The slope of the straight-line fit is θ = 0.09; the error in the zero is Δ = 0.…

It is well known that on approaching a bifurcation point (sometimes called a "tipping point" [17]), the typical timescales of the system dynamics diverge – often referred to as "critical slowing down." This implies that if a fixed time span τ is used for the microscopic time evolution, the slope of G̃ tends to zero when approaching the bifurcation point. This causes a significant loss of precision at exactly these parameter values, which is unfortunate because they concern the (generally) most important details of a bifurcation diagram.

Our remedy for this issue consists of dynamically adjusting the value of τ to the timescale typical for the system at any given point in the bifurcation diagram. In other words, when the dynamics of the macroscopic observable slows down, the microscopic time evolution is prolonged by adapting τ. For example, Fig. 5 displays results for G̃ with τ = 10, which is too small, while Fig. 4 shows the corresponding result obtained by adapting τ to 496.

Based on our accumulated experience, it is possible to define suitable values of τ "by hand"; we call this approach pseudoadaptive continuation. We have done this for the first continuation results of the Ising example (cf. Sec. III). Combined with the C_R method and the lifting procedure presented in the next section, very good results can be obtained. However, the pseudoadaptive method is very labor intensive and depends greatly on the user's experience. This can be avoided by using the fully adaptive adjustment of the parameter τ discussed next.

Furthermore, due to the law of large numbers, the size of the microscopic ensemble M also has an influence on the magnitude of the fluctuations in G̃ and on the precision of the continuation results. For example, Fig. 6 shows G̃ evaluated for M = 2, which is too small. This should be compared with the corresponding result obtained by adapting M to 39, displayed in Fig. 4. This comparison illustrates that the parameters τ and M greatly influence the exactness of the method. They also have a large effect on the numerical cost, since the ensemble of microscopic time evolutions usually forms the largest contribution to the computation time. Thus, these parameters should be chosen as small as possible and as large as needed. Note also that the parameter σ discussed in Sec. II C 1 is important in this context.

FIG. 6. Comparison of this figure and Fig. 4 shows the influence of the size of the microscopic ensemble M used for the microscopic time evolution. Here the point cloud is obtained by evaluating G̃(u) = G_{λ_u}(X_u) with M = 2, which is too small. All the other parameters are as in Fig. 4. The slope of the fit is θ = 1.08; the error in the zero is Δ = 0.…

Our proposed adaptive adjustment takes the three parameters τ, M, and σ into account. For every parameter, we specify a condition which should be fulfilled to effect a reliable determination of the root of the function G̃, and we implement a rule for modifying the parameter value when this condition is not fulfilled. For the parameters τ and M we also define upper limits (τ_max and M_max) that reflect the available computing resources. Since these rules are based on estimates, they are applied iteratively until all of the conditions are met. As initial values for the parameters, we use the adjusted parameter values from the previous continuation step. To make the adjustment of τ more efficient, we initially extrapolate the change between the previous two continuation steps.
Namely, if the adaptive adjustments in the previously performed continuation steps produced the series of values τ_1, τ_2, ..., τ_{n−1}, then we use

    τ_n^guess := τ_{n−1} + (τ_{n−1} − τ_{n−2})   (7)

as the initial value for τ_n. It is useful to start the continuation procedure with low values of τ and M in a region of the bifurcation diagram where no complications are expected, e.g., on a stable branch far away from bifurcation points, and to let the adaptive control work while approaching bifurcation points. Figure 7 gives a flow chart that illustrates how our algorithm adjusts the various parameters. This is discussed below in the order of the parameter updates.

Most importantly, the interval of width σ in which we fit G̃(u) must be appropriate for a linear approximation. Furthermore, the zero should lie within the interval, because extrapolation outside the interval leads to uncontrolled errors. Thus, if the absolute value of the calculated zero u_0 is greater than σ, then we set σ = 1.…|u_0|. However, σ should not be greater than the continuation step size ds. If this condition cannot be fulfilled, then we instead decrease the value of ds. If this is not possible and |u_0| remains greater than σ despite all adjustments of the other parameters, then we define the root to be zero; that is, we do not change the direction for the next step along the branch of the bifurcation diagram. This can occur close to a bifurcation point, since in this situation a reliable result cannot be achieved with an acceptable numerical effort. In this case, one blind step in the previous direction is the best option.

To adjust the parameter τ, we use the slope θ of the linear fit. This slope should lie in the interval (0.…θ_d, 1.…θ_d), where θ_d is the desired slope. Based on experience, a value of θ_d = 1 leads to good results. However, it may be reasonable to choose a slightly lower value (down to θ_d = 0.5) to reduce the numerical cost. For τ and θ we assume a linear relation as the first estimate, i.e., if θ does not lie in the above interval, we set

    τ_new := τ θ_d/θ.   (8)

To adapt the parameter M, we take the error Δ into account. We determine a maximal error (denoted by Δ_d) and change M if Δ lies outside of the interval (Δ_d/…, Δ_d). If one considers N independent and identically distributed random variables, then the standard deviation of their mean is the standard deviation of one variable times N^{−1/2}. Thus, we suppose a linear relation between Δ and M^{−1/2}. Including a safety margin, we use

    M_new = 1.… M (Δ/Δ_d)².   (9)

If the error Δ is still greater than Δ_d after the adjustment of M, then we decrease the parameter σ using

    σ_new = 0.… σ Δ_d/Δ.   (10)

For the parameters τ and M, it can be useful to set their minimum values to 10, for example. Otherwise, a small adjustment by a factor close to 1 has no effect, since τ and M are natural numbers.

FIG. 7. Flow chart for the algorithm employed to adaptively adjust the parameters τ, M, and σ: starting from the values of the previous continuation step (with τ extrapolated), the algorithm calculates u_0, Δ, and θ, checks the conditions |u_0| ≤ σ, θ ∈ (0.…θ_d, 1.…θ_d), Δ ∈ (Δ_d/…, Δ_d), and σ < ds, and applies the update rules (8)–(10), bounded by τ_max, M_max, and the minimal values, accepting u_0 once all conditions hold (with u_0 := 0 if |u_0| > σ persists). In each continuation step, the adjustments are applied iteratively until every condition is fulfilled.
3. Lifting is a one-to-many mapping – Structure lifting
The final issue to discuss concerns the lifting procedure needed to initiate microscopic states from a known macroscopic state (see Fig. 2). Obviously, lifting is not unique but corresponds to a one-to-many mapping. This gives a large procedural freedom and requires a detailed discussion of how the lifting is done. In fact, the structure of the microscopic state can influence the resulting dynamics of the macroscopic observable, as illustrated in Fig. 8. In the best case, errors from an inconvenient lifting procedure are healed over a small initial interval within the time span τ of the evolving step. In the worst case, lifting errors crucially impact the macroscopic dynamics, leading to wrong results in the root finding process. Eventually, this also leads to uncertainties in the choice of the parameter τ. Further, for reasons of computational efficiency, this parameter has to be chosen as small as possible. Consequently, the quality and precision of the evolving step strongly depend on having an effective lifting procedure.

FIG. 8. A sketch illustrating some potential problems with the lifting procedure due to it being a one-to-many mapping. The different possibilities to define a microscopic state belonging to one value of the macroscopic quantity are indicated by the axis with the label "hidden value." The lines depict different trajectories in phase space and show that large excursions are possible before randomness is "healed out."

In fact, the ambiguity of the lifting procedure is potentially the most dangerous problem. Often, microscopic (fluctuating) steady states exhibit specific spontaneously formed structures which are crucial to the time evolution of the macroscopic observable. However, their formation from unstructured microscopic states may take a time span that is too long to be accommodated within the chosen τ.
This problem can be tackled if one bases the stochastic continuation on something more refined than simple time simulation from unstructured initial states.

We explain our approach to solving this problem employing our first example, the Ising model. There, the microscopic state corresponds to a particular state of a lattice of spins {s_i}, while the corresponding macroscopic observable is the magnetization m = ⟨s_i⟩ obtained by averaging over the microscopic state. When the magnetization is near zero and the temperature is near or below T_C, a typical spin configuration exhibits a clustered structure, which has an influence on the dynamics of the magnetization (examples can be seen below in Sec. III).

A naive mapping in the lifting procedure is to create a spin lattice with the orientations randomly chosen to give the correct magnetization. However, this leads to spin orientations without spatial correlations. We call this random lifting and illustrate it in Fig. 9. During the time evolution of the evolving step, this disordered structure starts to form clusters. However, the time span τ is not sufficiently large to complete this process. Furthermore, the correlation information is forgotten after the microscopic time evolution is terminated, and in the next application of the lifting a disordered structure is again produced.

FIG. 9. Scheme of the standard random lifting procedure (also called structure lifting of 0th order): In continuation step n the microscopic dynamics is initiated with a random state that corresponds to the predicted value of the macroscopic observable X*.

Instead, we introduce structure lifting, a lifting technique that takes information from previous microscopic time evolutions into account. In particular, the final states of the evolving process are kept, which include all the information concerning their internal structure. These states are then employed in the lifting procedure at the next value of the control parameter. Because successive applications of the lifting step are always done at a value of the macroscopic observable that does not differ strongly from the previous value, only minor adjustments of the microscopic states have to be made. In the simplest case, these can be realized in a spatially random way, as illustrated in Fig. 10. We call this particular procedure first-order structure lifting.

FIG. 10. Scheme of the structure lifting of first order: In continuation step n the microscopic dynamics is initiated with a structured microscopic state obtained by adapting a final state of the microscopic dynamics from the previous continuation step n − 1. The adaptation is done randomly and brings the macroscopic observable X* to the value requested by the prediction step.

To further improve the efficiency of the scheme, it is possible to go one step further and perform a second-order structure lifting. Here, information is kept concerning the microscopic structures that have resulted from the evolving steps in the two previous continuation steps. This allows one to extract information about the typical changes of the structure along the already calculated part of the solution branch. This information is then used to form the initial microscopic state in the next lifting procedure in a more precise way, as illustrated in Fig. 11. Namely, the necessary random changes are concentrated in regions where most changes occurred between the two previous steps. In the example of the Ising model, this means that the minor adjustments are not uniformly distributed in space, as in the first-order structure lifting; instead, with the highest probability, they are located at the interfaces between the clusters. We achieve this in practice by employing a Gaussian kernel to increase the probability density of changes around the locations where the structure changed previously.

FIG. 11. Scheme of the structure lifting of second order: In continuation step n the microscopic dynamics is initiated with a structured microscopic state obtained using information from the final states of the microscopic dynamics from the two previous continuation steps n − 1 and n − 2. This allows one to adapt the microscopic state mainly in regions where most changes occurred between steps n − 1 and n − 2.

In principle, this concept for the lifting procedure can be extended to nth-order structure lifting. This method produces naturally evolved microscopic states without the need for long relaxation times in every evolving step. Of course, to employ structure lifting, at the beginning one needs to provide the algorithm with one or more microscopic states. Depending on the specific situation, these can be generated using random lifting or a long relaxation.

Next, we illustrate the application of the proposed amendments to stochastic continuation via three examples: the Ising model in Sec. III, which we have touched on already; an active Ising model in Sec. IV; and a stochastic Swift-Hohenberg equation in Sec. V.

FIG. 12. The exact (Onsager) bifurcation diagram of the two-dimensional Ising model. The solid and dashed lines indicate the branches of stable and unstable states, respectively. The inset gives a typical spin configuration near the critical temperature (at T′ = 2.…).

III. ISING MODEL
The Ising model Hamiltonian is given above in Eq. (4). As mentioned there, the macroscopic observable is the magnetization m := ⟨s_i⟩. Depending on the dimensionless temperature T′ := k_B T/J, different stable and unstable states exist. For the 2D Ising model, an exact solution in terms of the free energy is given by Onsager [43], while Yang first derived the formula for the spontaneous magnetization [55] (also announced by Onsager at a conference in 1949 [41]). As the exact result is available, this example provides a valuable test for the precision of our approach.

The exact solution is displayed in Fig. 12. It shows a pitchfork-like bifurcation at the critical temperature T_c, whose numerical value is given by T′_c = k_B T_c/J ≈ 2.269. At T_c two branches of states of finite magnetization emerge from the trivial state of zero magnetization. For temperatures above T_c, the zero-magnetization state is the only state, i.e., it is globally stable. For T < T_c, this state is unstable and the states on the two branches emerging at T_c are stable. They are related by symmetry, i.e., they have the same energy. This implies that below T_c the system undergoes a spontaneous symmetry breaking. The exact solution for the magnetization m is given by

    m = ± [ 1 − ( sinh(2/T′) )^{−4} ]^{1/8}   (11)

for temperatures T′ < T′_c, while for T′ ≥ T′_c we have m = 0. Close to T′_c, one can write Eq. (11) as

    m ≈ m_0 |1 − T′/T′_c|^β,   (12)

with the critical exponent β. According to the exact solution, β = 1/8. Simple mean-field approximations predict the critical temperature to be T′_c = 4 and give the critical exponent to be β = 1/2, much larger than the exact value [41].

To simulate the microscopic dynamics, we use Metropolis-type spin-flip dynamics: we randomly pick a lattice site i and then calculate the difference in the energy,

    ΔE = 2J Σ_{nn of i} s_i s_j,   (13)

resulting from flipping the spin at the chosen site. This spin flip is then accepted with a probability of

    p = min( e^{−ΔE/k_B T}, 1 ).   (14)

Otherwise, the flip is rejected and the spin configuration remains the same. Note that we use periodic boundary conditions. We then randomly pick another lattice site and repeat the above process. We define one time step as the time needed to attempt N spin flips, i.e., one attempt per spin on the lattice.

We now present results obtained using the stochastic continuation algorithms described above applied to the Ising model. Those obtained using the pseudoadaptive and the adaptive parameter choice (see Sec. II C 2) are shown in Figs. 13 and 14 and in Figs. 15 and 17, respectively. Note that we treat a change in the size of the microscopic ensemble as being equivalent to changing the lattice size. When we apply the pseudoadaptive parameter choice, the lattice size remains fixed at 1000 × 1000, while for the adaptive parameter choice we use lattices of 100 × 100 and 500 × 500 because this allows for more flexible adjustment. Note that for all lattice sizes used here, finite-size effects on the values of all quantities studied are negligible; they only become important for smaller lattices, e.g., of size 10 × 10 or 20 × 20.

To generate a random initial spin configuration of magnetization m, we choose the orientation of each spin according to

    s_i = +1 with probability (1 + m)/2,  s_i = −1 with probability (1 − m)/2,   (15)

resulting in an uncorrelated configuration with average magnetization m. Stochastic continuation yields the bifurcation diagram shown in Fig. 13. Overall, there is good agreement, except for the stable m ≠ 0 branches near the critical temperature. Extrapolating these to m = 0, we find that this continuation method gives the critical temperature to be T′_c,cont = 2.…

FIG. 13. Bifurcation diagram of the steady states of the 2D Ising model obtained using the presented stochastic continuation algorithm in the case with random lifting. The blue circles with red error bars are the continuation results, while the green dashed lines represent the exact solution. The C_R method is used together with a pseudoadaptive adjustment of τ: namely, τ is linearly increased from 1 to τ_max while approaching the bifurcation point (here, a rough idea of the location of the bifurcation point is first obtained by a continuation run with a fixed value of τ). For the nontrivial branches with m ≠ 0 we chose τ_max = 70, while for the trivial m = 0 branch τ_max = 50 (τ_max = 100) when approaching the critical point from the left (right). The remaining parameters are ds = 0.…, N_fit = 100, M = 1, and σ = 0.05. The size of the spin lattice is 1000 × 1000.

FIG. 14. Zoom into the critical region of Fig. 13, showing the m ≠ 0 steady states of the 2D Ising model obtained using random lifting (blue circles) and first-order structure lifting (green diamonds). The remaining line styles, settings of the continuation algorithm, and parameters are as in Fig. 13.

Employing first-order structure lifting, one can further improve this result (see Fig. 14), getting an extrapolated value of the critical temperature of T′_c,cont = 2.… This improvement highlights the importance of the information regarding the spatial structure of the microscopic state. It also shows the influence this structure has on the dynamics of the microscopic state. In the Ising example, the clustering of the spin orientations is particularly important for the dynamics near the bifurcation point. The inset of Fig. 12 gives a typical spin configuration with cluster formation at a temperature near T′_c.

However, continuation with this first-order structure lifting encounters problems along the trivial branch of zero magnetization. Namely, the algorithm is not able to follow the unstable solution branch (see the blue circles for T′ < T′_c in Fig. 15). Alternatively, employing the second-order structure lifting, which places adjustments with higher probability near the borders of the clusters, results in microscopic states which are more natural and exhibit a reliable microscopic dynamics. The results obtained are shown in Fig. 15. There we see that the second-order structure lifting gives a better result for the unstable part of the trivial m = 0 branch.

FIG. 15. Bifurcation diagram of the trivial steady states of the 2D Ising model as obtained using first-order structure lifting (blue circles) and second-order structure lifting (green diamonds). The dashed lines belong to the exact solution. The fully adaptive choice of τ, M, and σ is used. The remaining parameters are ds = 0.…, N_fit = 10, Δ_d = 0.04, and θ_d = 1.0. The size of the spin lattice is 500 × 500.

This example shows that the first-order structure lifting exhibits a specific problem when applied to microscopic states that are strongly clustered. As explained above, the adjustments made to the spin configuration during the lifting process are spatially random. Thus, it is very likely that the adjustments occur at lattice sites which are in the midst of the spatially ordered regions. Then, with a high probability [see Eq. (14)], the microscopic dynamics reverses these adjustments, resulting in a state with the wrong average magnetization. Hence, the continuation can be led astray. With second-order structure lifting, this issue is mitigated.

We return now to the nontrivial m ≠ 0 branch. Continuation of this branch using first-order structure lifting and adaptive parameter choice with a small value of Δ_d leads to a result with small statistical errors, shown in Fig. 16 using orange triangles. The curve exhibits very few fluctuations and has very small error bars. Yet, the agreement with the exact solution is not as good as for the pseudoadaptive first-order structure lifting discussed before, though it is still much better than the results of random lifting. Note that the error bars do not reach the exact solution. This highlights two important points. First, not surprisingly, our automatic adaptive parameter choice is not as good as a parameter choice made by an operator with some physical insight. Second, our error measure Δ [cf. Eq. (6)] only represents the statistical errors of the stochastic continuation algorithm; it does not include other errors, e.g., those occurring in the lifting procedure. This implies that there are still lifting errors. Of course, our structure lifting can only be an approximation of a perfect lifting operator.

FIG. 16. The critical region from Fig. 14, showing results of the pseudoadaptive continuation of nontrivial steady states of the 2D Ising model using random lifting (blue circles) and first-order structure lifting (green diamonds) compared to the exact solution (dashed line). The additional line (orange triangles) represents the result of a continuation using first-order structure lifting and adaptive choice of τ, M, and σ with parameters ds = 0.…, N_fit = 20, Δ_d = 0.…, θ_d = 1.0, and spin lattice size 100 × 100.

FIG. 17. The m ≠ 0 branch in Fig. 16 obtained with first-order structure lifting and adaptive parameter choice (blue circles), presented in a log-log plot (natural logarithm) with a corresponding linear fit of slope 0.… ≈ 1/8. For comparison, the red dotted line shows a slope of 1/2. T_c is obtained by an extrapolation of the continuation result.

In Fig. 17 we present the results of this last continuation, which has small statistical errors, in a log-log plot. A linear fit then gives the critical exponent β = 0.…, close to the exact value 1/8 and far from the mean-field value 1/
2. Hence, thecontinuation gives a much better result than the mean-field approximation, both for T c and also for β . To obtaina clear result for the critical exponent, further improve-ments of the continuation technique are needed. IV. AN ACTIVE ISING MODEL
The second example we consider is the active Ising model introduced by Solon and Tailleur in Ref. [59] to capture essential features of the flocking transition occurring in many discrete and continuous models for the collective behavior of active particles [60–64]. The active Ising model is a nonequilibrium version of a ferromagnetic model, with the spin variable having the meaning of a (preferred) state of motion. Here we consider the 2D version from Ref. [44] (whose notation we follow), where the particles' motion in the positive and negative x directions is influenced by the spin value. Our lattice has size L_x × L_y = 100 × 100 and contains N moving particles that carry a spin of ±1. Every site i of the lattice can be occupied by an arbitrary number of particles, denoted by n_i^± for particles with spin ±1. Consequently, for every lattice site i we have a local particle density ρ_i := n_i^+ + n_i^− and a local magnetization m_i := n_i^+ − n_i^−. Depending on the spin orientations, every particle can move to a neighboring lattice site or flip its spin at specific rates. The rules controlling the dynamics are as follows:

• A particle located at site i and having spin s flips its spin at a rate

    W(s → −s) = exp( −sβ m_i/ρ_i ),   (16)

where β = 1/T is an inverse nondimensional temperature.

• A particle with spin s moves with rate D to the upper or lower lattice site (unbiased motion in the y direction), with rate D(1 + sε) to its right, and with rate D(1 − sε) to its left (spin-biased motion in the x direction, where ε measures the strength of the bias).

With these rates, a continuous-time Markov process is defined. For the realization of the dynamics, we use a random-sequential-update algorithm. For this, we discretize time in steps of Δt. In every step of the algorithm, a particle is chosen randomly and one of the following actions is performed:

• With probability W(s → −s)Δt, its spin is flipped.
• With probability DΔt, it is moved upward.
• With probability DΔt, it is moved downward.
• With probability D(1 + sε)Δt, it is moved to the right.
• With probability D(1 − sε)Δt, it is moved to the left.
• With probability 1 − [4D + W(s → −s)]Δt, nothing is done.

After this, the time is incremented by Δt/N. In order to keep all probabilities smaller than 1 and to minimize the probability that nothing happens, following [59], Δt is chosen to be (4D + exp(β))^{−1}.

Here we choose β = 2, ε = 0.9, and D = 1, and employ the developed stochastic continuation algorithm to analyze the dynamics of the system and how it depends on the average density ρ = N/(L_x L_y), which is determined by N.
At low densities, a trivial homogeneous unpolarized state emerges that has zero mean magnetization (sometimes called "polarization"), i.e., ⟨m_i⟩ ≈ 0; we call this the gas state. At high densities, a homogeneous polarized state with ⟨m_i⟩ ≠ 0 exists: left-right parity is broken and the particles show a collective motion either to the right or to the left. We call this the "active liquid state." At intermediate densities, a phase separation between gas and liquid state is observed and regions of the two states coexist. In particular, one finds dense bands of moving particles, as illustrated in the inset of Fig. 18, which shows a snapshot of the local magnetizations. The main panel presents the spatial profile of the local magnetization averaged along the band, i.e., averaged in the y direction. Employing a trapezoidal piecewise-linear fit, we determine the mean m_l of the local magnetization in the band of active liquid phase, i.e., the "height" of the plateau. This is then used to define the macroscopic observable

    Φ = ⟨m⟩/m_l = (1/(m_l L_x L_y)) Σ_i m_i,   (17)

which quantifies the liquid-gas ratio. Varying the average density ρ changes the width of the band of the liquid phase, while the value m_l remains constant in the entire coexistence region.

FIG. 18. Example of a microscopic magnetization state of the 2D active Ising model with mean density ρ = 3.98, which is in the parameter region where the gas and the liquid state coexist. The inset shows a typical snapshot of the local magnetization, clearly showing that the collective behavior results in a band structure: the central band represents the liquid phase, which moves to the left through a gaseous background. The main plot gives the spatial profile of the local magnetization averaged along the band (black solid line) together with a trapezoidal piecewise-linear fit (blue solid line). The thin dashed horizontal lines give the magnetization values of the background gas phase (m = 0), of the band of liquid phase (m = m_l), and the mean ⟨m⟩. The fit is used to extract the macroscopic observable defined as the liquid-gas fraction Φ = ⟨m⟩/m_l. The remaining parameters are L_x = L_y = 100, β = 2, ε = 0.9, and D = 1.

To analyze the system behavior employing the developed stochastic continuation algorithm, we define a random lifting procedure in a straightforward manner: given the total particle number N, we choose random lattice sites to place the particles and assign a random spin orientation to each of them, using a probability that guarantees a specific value of ⟨m⟩.

For densities belonging to coexistence states, we use a hybrid structure lifting: we modulate a coexistence state given by the previous continuation step by adding or removing particles at random lattice sites as well as by changing the spin orientations of random particles. However, to maintain the stable coexistence structure, we only make these random adjustments in the region of the boundary between the liquid and the gas phase. The phase boundary is well defined by the sloped parts of the trapezoidal fits (see Fig. 18). This procedure is a targeted first-order structure lifting which acts like a higher-order structure lifting. In Fig. 19 we present continuation results for just the branches of stable states, in particular for the transition between the banded state and the liquid state.

FIG. 19. Bifurcation diagram of stable steady states of the 2D active Ising model obtained using stochastic continuation. The continuation is initialized on the left and uses second-order structure lifting and adaptive choices for τ, M, and σ. The other parameters of the continuation algorithm are ds = 0.…, N_fit = 15, Δ_d = 0.01, and θ_d = 0.7. The (rather small) errors are indicated in red.

We initiate the continuation with a banded state obtained from a time simulation at ρ = 3.… and employ continuation to follow the stable branch toward higher mean densities. The result in Fig. 19 is in very good agreement with the results obtained by direct time simulation in Ref. [44]: the branches corresponding to the banded state and the liquid state are well approximated by straight lines and meet at a bifurcation point at ρ ≈ ….4. In particular, it seems that at comparable system sizes the continuation approach can capture the bifurcation point much better than the direct simulations of Ref. [44]: their Fig. 10 (left) shows that for a system size of 200 × 100 there is still a jump in Φ of nearly 20% in the region where the bifurcation point is expected, and they have to go to rather large systems to capture the bifurcation point well.

Note, however, that our approach works less well when initiating the continuation at a high particle density in the liquid phase and descending in ρ. Then, when passing the bifurcation point into the coexistence region, the stable band structure needs a very long time to form. Thus, even with structure lifting, the banded stable states cannot be found with an acceptable numerical effort when passing the bifurcation point. This agrees with the general observation that the continuation method works better when it is initialized sufficiently far away from potential bifurcation points, which are then subsequently approached. In other words, problems should be expected when crossing a bifurcation point onto a branch having a different symmetry.

V. SWIFT-HOHENBERG EQUATION
The third and final example to illustrate the application of the developed stochastic continuation algorithm concerns a stochastic PDE, namely the stochastic version of the Swift-Hohenberg equation in Eq. (5) [45, 46]. The corresponding deterministic Swift-Hohenberg equation, which is obtained when D = 0, represents a generic model capturing the dynamics of pattern formation in the vicinity of a monotonic short-wave instability, i.e., a Turing instability [65]. Depending on the control parameter ε, the field Ψ has a variety of different stable states. For ε < 0, the homogeneous state is stable. For a 1D system, a pitchfork bifurcation occurs at ε = 0 and the homogeneous state is unstable for ε > 0. With the present purely cubic nonlinearity, the bifurcation is supercritical, i.e., a branch of stable periodic states emerges toward ε > 0. Other nonlinearities may result in a subcritical bifurcation and, in general, more intricate behavior [66].

FIG. 20. The bifurcation diagram for the deterministic Swift-Hohenberg equation [Eq. (5) with D = 0] obtained by standard continuation. The solid and dashed lines are branches of stable and unstable states, respectively. The branch with ‖δΨ‖ = 0 (blue line) is the trivial, homogeneous state, which changes its stability at ε = 0, where two branches emerge supercritically. These are a branch of stable stripe patterns (green line) and a branch of unstable square patterns (orange line). The insets show a stripe and a square state, both at ε = 0.….

Here we investigate the 2D case for a square domain of size 2π × 2π. Then, at ε > 0, branches of stripe and square patterns exist, and as a reference we determine them by standard numerical continuation employing the package pde2path [22]. Figure 20 gives the corresponding bifurcation diagram, employing as solution measure the L² norm of the deviations from the mean, i.e.,

‖δΨ‖ := √⟨(Ψ − ⟨Ψ⟩)²⟩.   (18)

Note that for the present square domain, the square patterns are always unstable while the stripes are stable. In a rectangular domain suitable for hexagonal patterns (ratio of side lengths of √3) one finds stripes and two types of hexagonal patterns and more intricate stability behavior [65].

FIG. 21. The bifurcation diagram obtained using stochastic continuation applied to the deterministic Swift-Hohenberg equation [Eq. (5) with D = 0] in 2D. The circles and diamonds represent stable and unstable branches, respectively. For comparison, the dashed line gives the standard continuation results obtained with pde2path. For the branch of homogeneous (stripe) states, the continuation is started on the left at ε = −0.… (at ε = 0.…). Adaptive choices for τ and σ are used. However, M_max is fixed at 1 because no ensemble of microscopic states is needed for the deterministic system. The other parameters of the continuation algorithm are ds = 0.… (ds = 0.…), N_fit = 6, Δd = 0.… ds, and θ_d = 1.0. The size of the spatially discrete representation of the field Ψ is 128 × 128.

Although there exist many efficient numerical continuation tools to determine bifurcation diagrams for deterministic equations such as Eq. (5) with D = 0, here we show that stochastic continuation can also be used. This allows us to check the quality and precision of our method. In Fig. 21 we compare our results with the results obtained with pde2path. To bring the deterministic system into a form suitable for stochastic continuation, we need to define macroscopic observables and microscopic states. We take the norm ‖δΨ‖ as the macroscopic observable and the spatially discretized version of the field Ψ as the microscopic state. We compute the microscopic dynamics via a standard pseudospectral method on a grid of 128 ×
128 points with periodic boundaries.

Here our random lifting consists of assigning a normally distributed random variable to every lattice site. By choosing a suitable variance, we obtain a random field for a specific given value of ‖δΨ‖. When using first-order structure lifting, we adjust the final state of the evolving procedure from the previous continuation step by multiplying it by an appropriate factor which lies near 1.

The continuation with first-order structure lifting of the branch belonging to the homogeneous state works without any problems. In this case we have a horizontal branch, and the stability is directly given by the slope of the function G̃. Also the branch of stripe states is obtained in good agreement with the result from standard continuation in Fig. 21. However, this is only possible with the structure lifting. With random lifting, during the short-time evolution of the evolving step the Laplace operator results in diffusion on short scales and smooths out noisy structures. Therefore, at the very beginning of the time evolution, a state generated with random lifting always moves toward a homogeneous state, allowing one to follow the unstable trivial state. The slope of G̃ then indicates that on short timescales the state appears to be stable.

The unstable branch belonging to the square pattern cannot automatically be followed using our structure lifting algorithm because, for positive values of ε, it only "sees" the dynamics going from the unstable homogeneous solution to the stable stripe pattern. As the square pattern does not lie "in between" the homogeneous and stripe states, the algorithm does not detect it. However, one can devise other types of structure lifting that make it easier to find unstable solutions like the square pattern; see Sec. VI for a detailed discussion.

FIG. 22. Results from stochastic continuation applied to the stochastic Swift-Hohenberg equation in 1D, Eq. (19). The continuation is initiated at ε = 0.45, uses first-order structure lifting and fully adaptive choices for τ, M, and σ. The other parameters of the continuation algorithm are ds = 0.…, N_fit = 20, Δd = 0.004, and θ_d = 1.0. The field Ψ(x, t) is discretized on a spatial grid of 128 cells. The dotted black line represents the continuation result for the corresponding deterministic system [Eq. (19) with D = 0].

Next we consider the case D > 0. In 1D, Eq. (5) reads

∂_t Ψ = εΨ − Ψ³ − (1 + ∂_x²)² Ψ + √D η(x, t),   (19)

where the noise of strength D is normally distributed and uncorrelated in space and time, i.e., ⟨η⟩ = 0 and ⟨η(x′, t′)η(x, t)⟩ = δ(x′ − x)δ(t′ − t). We use the value D = 1 and a domain size of 2π. Again, we employ a pseudospectral method with periodic boundaries for the computation of the microscopic dynamics, whereby the field Ψ(x, t) is discretized on a spatial grid of 128 cells. The stochastic continuation is initiated on the stable branch of periodic states at ε = 0.45 and proceeds toward smaller ε. The results are displayed in Fig. 22. We see that far from the pitchfork bifurcation of the deterministic system, the branch of periodic states is hardly influenced by the already rather strong noise. On the other hand, the branch of stable trivial states slightly deviates from ‖δΨ‖ = 0 near the pitchfork bifurcation, which has become an imperfect pitchfork bifurcation, as expected when noise is present [49].

VI. CONCLUSION AND OUTLOOK
In the present work we have developed ways to improve the stochastic continuation of stable and unstable steady states in several respects, mainly concerning the corrector step of the continuation algorithms. In all our work we have employed a secant predictor; however, this can be replaced by any commonly used predictor. Our proposals for the corrector all concern its lifting and evolving steps. The restricting step always consists of a straightforward averaging procedure.

First, we have amended the root finding procedure for the fluctuating function G_λ that results from the microscopic time stepper in the evolving step. Instead of directly applying a root finding method to the fluctuating function G_λ, we evaluate a set of function values in a suitable neighborhood of the initial guess provided by the predictor and perform a linear fit that is then used to determine the root. We have found that this procedure moderates the fluctuations and works in a stable way.

Our second improvement concerns the choice of numerical parameters. The stochastic continuation algorithm involves quite a large number of numerical parameters, e.g., the length of the microscopic time evolution in the evolving step, the number of employed microscopic realizations, and the step size of the path continuation step. As their choice has a crucial influence on the quality and precision of the continuation results, we suggest adaptively adjusting specific parameters during the continuation run. In particular, in the vicinity of bifurcation points, this has led to more accurate results.

In the third amendment we have considered the lifting procedure, which forms a very critical part of stochastic continuation. In most cases in the literature, a simple constrained random lifting is used, i.e., a microscopic state with the correct macroscopic observable imposed is chosen randomly.
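The constrained random lifting just described can be illustrated with a minimal sketch for an Ising-type lattice, taking the mean magnetization ⟨m⟩ as the macroscopic observable. The function name and interface are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def random_lifting(m_target, shape, seed=None):
    """Constrained random lifting for an Ising-type lattice.

    Each spin is drawn independently as +1 with probability p_up and
    -1 otherwise, where p_up is chosen such that the expected mean
    magnetization equals m_target.
    """
    rng = np.random.default_rng(seed)
    p_up = 0.5 * (1.0 + m_target)  # E[s] = 2 * p_up - 1 = m_target
    return np.where(rng.random(shape) < p_up, 1, -1)

# Usage: a 100 x 100 microscopic state with <m> close to 0.4
state = random_lifting(0.4, (100, 100), seed=1)
```

Note that only the ensemble average of such lifted states matches the target exactly; each realization fluctuates around it, which is one reason the restricting step averages over several realizations.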
In contrast, we argue that one should exercise some control over its internal spatial structure, which is important for the macroscopic dynamics. In particular, we have developed first- and second-order structure lifting procedures. In the case of the first-order procedure, final states from the microscopic evolution of the previous continuation step are randomly adapted to the new value of the macroscopic observable. In the case of the second-order procedure, final states from the microscopic evolution of the two previous continuation steps are used to obtain a state at the new value of the macroscopic observable through random adjustments only in the region most changed over the previous continuation steps. This procedure can be further refined into an nth-order procedure.

Furthermore, in specific examples it can be useful to incorporate information about particular solution behavior into the lifting and/or restricting procedure. This is done by our hybrid lifting for the active Ising model (Sec. IV) and by the weighted lifting and restriction for the agent-based model discussed in Ref. [25].

The abilities and shortcomings of the original method and of our various proposed amendments have then been discussed using three examples: (i) the classical Ising model in 2D, (ii) an active Ising model, i.e., a kinetic Monte Carlo model for the collective behavior of active self-propelled particles, and (iii) the Swift-Hohenberg equation, a generic PDE model for short-scale pattern formation close to a Turing bifurcation. The final example has been used, on the one hand, to show that our stochastic continuation algorithm can be employed to continue solutions of a deterministic equation (the deterministic Swift-Hohenberg equation in 2D) and, on the other hand, to continue nontrivial steady states of a stochastic PDE (the Swift-Hohenberg equation with additive noise in 1D).
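As an illustration of the kind of microscopic time stepper underlying example (iii), a minimal integrator of Eq. (19) can be sketched as follows. The pseudospectral treatment, the 128-cell grid, and the periodic boundaries follow the text; the semi-implicit Euler-Maruyama scheme and the noise discretization √(D Δt/Δx) per cell are our assumptions for this sketch, not necessarily those of the actual computations:

```python
import numpy as np

def sh_step(psi, eps, D, dt, dx, rng):
    """One semi-implicit Euler-Maruyama step of the 1D stochastic
    Swift-Hohenberg equation, Eq. (19), pseudospectral in space with
    periodic boundaries. The stiff linear operator eps - (1 - k^2)^2
    is treated implicitly; the cubic term and the noise explicitly."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # wavenumbers
    lin = eps - (1.0 - k**2) ** 2              # linear operator in Fourier space
    noise = np.sqrt(D * dt / dx) * rng.standard_normal(n)
    rhs = np.fft.fft(psi - dt * psi**3 + noise)
    return np.real(np.fft.ifft(rhs / (1.0 - dt * lin)))

# Usage: evolve a small random initial condition on 128 cells, domain 2*pi
rng = np.random.default_rng(0)
n = 128
dx = 2.0 * np.pi / n
psi = 0.01 * rng.standard_normal(n)
for _ in range(2000):
    psi = sh_step(psi, eps=0.45, D=1.0, dt=0.01, dx=dx, rng=rng)
```

The macroscopic observable ‖δΨ‖ is then obtained in the restricting step by averaging the rms deviation from the mean over an ensemble of such realizations.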
These examples have shown that our improvements to the stochastic continuation method are rather important for the quality and precision of the resulting bifurcation diagrams.

We highlight that in example (i) stochastic continuation now provides a bifurcation diagram quite closely resembling Onsager's analytical solution. Our root finding procedure has allowed for a much closer approach to the bifurcation point; compare Fig. 15 of Ref. [27] with the present Fig. 14. Also the fluctuations in the branch of trivial states are much reduced (see Fig. 15). In particular, we have obtained an estimate for the critical exponent that is much closer to the exact value than the mean-field result. However, especially near the critical point the result is not very precise, as no clear unique power law emerges. To obtain a more accurate value for the critical exponent, further specific adaptations of the continuation technique are needed; e.g., one could specifically adapt the routines to look for scaling laws. In the Ising example, one could incorporate into the method the knowledge that a scaling law exists for states close to the critical point and in this way use prior information about the form of the branch. Technically, this could be done by using a scaling law with an adaptive exponent for an effective determination of initial guesses for subsequent points on the branch being continued. This is, however, beyond the scope of the present work.

Further improvements to the method concern the lifting procedure, as the structure lifting still fails if the microscopic dynamics of the system is too slow. We have seen this in example (ii), where it was not possible to follow the stable branch through the bifurcation point when initiating the continuation at large values of the control parameter.

Also, there can be problems when using the structure lifting to follow unstable steady states.
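For concreteness, the first-order structure lifting used for the Swift-Hohenberg examples can be sketched as follows. Here we rescale the deviations from the mean so that the observable ‖δΨ‖ of Eq. (18) matches the new target exactly; this preserves ⟨Ψ⟩ and reduces to multiplication by a factor near 1 for small continuation steps. Names are illustrative:

```python
import numpy as np

def structure_lifting_first_order(psi_prev, target_norm):
    """First-order structure lifting: adapt the final state of the
    previous continuation step to a new value of the macroscopic
    observable ||delta Psi|| (the rms deviation from the mean)."""
    mean = psi_prev.mean()
    delta = psi_prev - mean
    current_norm = np.sqrt(np.mean(delta ** 2))
    factor = target_norm / current_norm  # close to 1 for small steps
    return mean + factor * delta

# Usage: rescale a test profile to ||delta Psi|| = 0.5
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
psi_new = structure_lifting_first_order(np.cos(x) + 0.3, 0.5)
```

Unlike random lifting, the naturally grown spatial structure of the previous state is retained, which is what allows, e.g., the stripe branch to be continued.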
In principle, it should be possible to follow such states, since unstable steady states of spatially extended systems normally correspond to saddle points in a very high-dimensional space of solutions. Within this space there usually exist only one or a few slow unstable modes among many fast stable modes. Therefore, on short timescales one can expect a healing process in the evolving step during which unstable solutions are approached. This forms the basis of stochastic continuation of unstable states.

We have seen related issues in the deterministic Swift-Hohenberg system [example (iii)], where we have not been able to find the branch of the unstable square pattern, which lies between the unstable branch belonging to the homogeneous solution and the stable branch of the stripe pattern. The reason for this is that, during continuation with structure lifting, we let the microscopic pattern evolve on its own. As a result, the stable stripe pattern is favored over the unstable square pattern. To remedy this, one could feed in the missing square pattern once as a basis for the continuation of the missing branch. The idea is to use it only for the first lifting process, which is then followed by structure liftings as before.

In the example of the deterministic Swift-Hohenberg equation, it is easy to obtain the missing unstable pattern from a pde2path solution, for example. In other examples, where only stochastic continuation can be applied, the issue of how one obtains information about potentially existing unstable steady states, such as the square patterns in the Swift-Hohenberg example, is very important. An approach to this problem could be the following.
As a system evolves toward a stable solution, its state often gets close to unstable solutions, because they behave like saddle points. So, observing the time evolution starting from an arbitrary state can give clues about possible unstable states. For example, the stripe pattern of the Swift-Hohenberg system evolves out of a homogeneous state for positive values of the parameter ε. But before the stripes emerge, one typically sees many localized peaks. These peaks are connected into stripes later on, but they are very similar to the peaks which form the square pattern.

We expect that the Eckhaus instability for a stochastic Swift-Hohenberg equation [46] would be another good example to learn how to deal with unstable branches, as there exist many unstable branches and branches that change their stability. It would be interesting to see whether the different unstable branches can be distinguished from each other and how the method performs along a branch of stripes which changes its stability.

Here we have focused on the continuation of steady states. However, the continuation of periodic, quasiperiodic, and chaotic dynamics may also be important for the understanding of specific systems. In principle, a stochastic continuation of such solutions should be possible if they are captured by appropriate solution measures which clearly separate different types of solutions. Of course, in particular for chaotic dynamics, it might be more difficult to find appropriate parameters for the continuation algorithm. Another difficulty will be the definition of a suitable lifting procedure. Molecular dynamics simulations, for example, show chaotic dynamics and are analyzed in an equation-free framework in Ref. [32], which in particular discusses the lifting procedure and related problems.

Furthermore, our adaptive adjustments of the numerical parameters make it possible to obtain more precise results for the zeros of the function G̃.
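The linear-fit root finding recalled above can be sketched as follows. Here G stands for any fluctuating black-box function, and the error estimate derived from the fit residuals is one simple choice; names and the exact error formula are illustrative:

```python
import numpy as np

def root_by_linear_fit(G, x_guess, half_width, n_samples):
    """Estimate a root of a fluctuating function G by evaluating it at
    n_samples points around x_guess and fitting a straight line
    G(x) ~ a*x + b. Returns the root -b/a of the fit, a crude
    statistical error derived from the fit residuals, and the slope."""
    x = np.linspace(x_guess - half_width, x_guess + half_width, n_samples)
    g = np.array([G(xi) for xi in x])
    a, b = np.polyfit(x, g, 1)  # slope a and intercept b of the fit
    root = -b / a
    residuals = g - (a * x + b)
    err = np.std(residuals) / abs(a) / np.sqrt(n_samples)
    return root, err, a

# Usage with a noisy linear test function whose root lies at x = 2
rng = np.random.default_rng(3)
G = lambda x: (x - 2.0) + 0.01 * rng.standard_normal()
root, err, slope = root_by_linear_fit(G, x_guess=1.8, half_width=0.5, n_samples=25)
```

The fitted slope returned here is the same quantity whose sign is related to the stability of the continued branch.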
Yet, this canbe improved further. Choosing the parameter values byhand in a pseudoadaptive way (as we have done for theresults in Fig. 14, for example) is a laborious process.However, human experience (physical insight) accumu-lated over time still allows one to effectively gauge theobtained plots and linear fits and to adjust parametersby hand and to obtain better results than all of the auto-mated algorithms presented. In the future one can envis- age employing machine learning, e.g., using neural net-works, for this task. Currently, such methods are quitesuccessful in the solution of visual problems.Finally, emphasize again the connection betweenthe slope of the function (cid:101) G and the stability of thecorresponding solution branch. Moreover, we have seenthat the occurrence of a bifurcation should coincide witha slope of zero. However, these connections depend onthe direction of the orthogonal corrector which varieswhile following a curved solution branch. In the futurethis issue should be elaborated in detail to enable one toperform a systematic stability analysis of all states onthe branches being followed.The data that support the findings of this study areopenly available [67]. ACKNOWLEDGMENTS
We thank Tobias Frohoff-Hülsmann for helpful discussions and for providing the pde2path continuation runs for the deterministic Swift-Hohenberg equation.

[1] A. Deutsch, P. Maini, and S. Dormann,
Cellular Automaton Modeling of Biological Pattern Formation: Characterization, Applications, and Analysis, Modeling and Simulation in Science, Engineering and Technology (Birkhäuser, Boston, 2007).
[2] B. Chopard and M. Droz, Cellular Automata Modeling of Physical Systems, Aléa-Saclay (Cambridge University Press, Cambridge, 2005).
[3] P. Tu, Dynamical Systems: An Introduction with Applications in Economics and Biology (Springer-Verlag, Berlin, 1994).
[4] H. Haken, Information and Self-Organization: A Macroscopic Approach to Complex Systems, Springer Series in Synergetics (Springer, Berlin, 2006).
[5] L. M. Pismen, Patterns and Interfaces in Dissipative Dynamics, Springer Series in Synergetics (Springer-Verlag, Berlin, 2006).
[6] P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability and Fluctuations (Wiley, New York, 1971).
[7] K. Kaneko, Phys. Rev. Lett., 1391 (1990).
[8] S. Sinha, Phys. Rev. Lett., 3306 (1992).
[9] A. S. Pikovsky and J. Kurths, Phys. Rev. Lett., 1644 (1994).
[10] P. Ashwin, S. Wieczorek, R. Vitolo, and P. Cox, Philos. Trans. R. Soc. A, 1166 (2012).
[11] Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, 3rd ed. (Springer, New York, 2010).
[12] S. H. Strogatz, Nonlinear Dynamics and Chaos (Westview Press, New York, 2014).
[13] J. Argyris, G. Faust, M. Haase, and R. Friedrich, An Exploration of Dynamical Systems and Chaos: Completely Revised and Enlarged Second Edition (Springer, Berlin, 2015).
[14] H. Poincaré, Acta Math., 259 (1885).
[15] L. Landau, in Collected Papers of L. D. Landau, edited by D. Ter Haar (Pergamon, London, 1965), pp. 193–216.
[16] B. Krauskopf, H. M. Osinga, and J. Galan-Vioque, eds., Numerical Continuation Methods for Dynamical Systems (Springer, Dordrecht, 2007).
[17] H. A. Dijkstra, F. W. Wubs, A. K. Cliffe, E. Doedel, I. F. Dragomirescu, B. Eckhardt, A. Y. Gelfgat, A. Hazel, V. Lucarini, A. G. Salinger, E. T. Phipps, J. Sanchez-Umbria, H. Schuttelaars, L. S. Tuckerman, and U. Thiele, Commun. Comput. Phys., 1 (2014).
[18] E. L. Allgower and K. Georg, Introduction to Numerical Continuation Methods, Classics in Applied Mathematics (Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987).
[19] Z. Mei, Numerical Bifurcation Analysis for Reaction-Diffusion Equations (Springer, Berlin, 2000).
[20] J. Sanchez, F. Marques, and J. M. Lopez, J. Comput. Phys., 78 (2002).
[21] J. Sanchez Umbria and M. Net, Eur. Phys. J. Spec. Top., 2465 (2016).
[22] S. Engelnkemper, S. V. Gurevich, H. Uecker, D. Wetzel, and U. Thiele, in