Applications of optimal nonlinear control to a whole-brain network of FitzHugh-Nagumo oscillators
Teresa Chouzouris, Nicolas Roth, Caglar Cakan, Klaus Obermayer
Institut für Softwaretechnik und Theoretische Informatik, Technische Universität Berlin, Marchstraße 23, 10587 Berlin, Germany
Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, 10115 Berlin, Germany
(Dated: March 3, 2021)

We apply the framework of optimal nonlinear control to steer the dynamics of a whole-brain network of FitzHugh-Nagumo oscillators. Its nodes correspond to the cortical areas of an atlas-based segmentation of the human cerebral cortex, and the inter-node coupling strengths are derived from Diffusion Tensor Imaging data of the connectome of the human brain. Nodes are coupled using an additive scheme without delays and are driven by background inputs with fixed mean and additive Gaussian noise. Optimal control inputs to nodes are determined by minimizing a cost functional that penalizes the deviations from a desired network dynamic, the control energy, and spatially non-sparse control inputs. Using the strength of the background input and the overall coupling strength as order parameters, the network's state space decomposes into regions of low and high activity fixed points separated by a high amplitude limit cycle, all of which qualitatively correspond to the states of an isolated network node. Along the borders, however, additional limit cycles, asynchronous states, and multistability can be observed. Optimal control is applied to several state-switching and network synchronization tasks, and the results are compared to controllability measures from linear control theory for the same connectome. We find that intuitions from the latter about the roles of nodes in steering the network dynamics, which are solely based on connectome features, do not generally carry over to nonlinear systems, as had been previously implied. Instead, the role of nodes under optimal nonlinear control critically depends on the specified task and the system's location in state space.
Our results shed new light on the controllability of brain network states and may serve as an inspiration for the design of new paradigms for non-invasive brain stimulation.
I. INTRODUCTION
The widespread use of noninvasive electrical brain stimulation in clinical applications has sparked ongoing interest in studying the effects of external inputs on brain activity. Stimulation with electric fields in the range of 1 − …

(∗ The first two authors contributed equally to this work. † Correspondence to [email protected])

… functional segregation, dividing the brain into distinct areas [14, 15]. The relative connectivity strength between these areas is defined by the number of axonal fibers, which can be estimated using structural neuroimaging scans of individual subjects. This results in a network model of the human brain where the edges reflect the relative strengths of axonal fiber bundles and the nodes represent individual brain regions. These nodes are then equipped with a dynamical system modeling the average neuronal activity in that region [16]. It has been repeatedly shown that, when the parameters of brain network models are fitted to functional brain data, the optimal operating points lie close to the bifurcation lines of these models. This ensures that the model is in a state in which noise fluctuations can be amplified and produce realistic spatial correlation structures which are similar to empirical functional connectivity measurements [17–20].

Previous theoretical investigations into the impact of external perturbations mostly relied on the assumption of linear node dynamics, allowing the application of methods from linear control theory [21]. Thus, one can draw conclusions about the effects of external inputs to the system based on the network topology and independent of its dynamical state [22, 23]. A drawback of linear node dynamics, on the other hand, is that it cannot reproduce the dynamics of neural processes close to bifurcations [17, 24]. In order to describe the transitions from one dynamical regime to another, Muldoon et al. [25] consider nonlinear node dynamics. They conclude that the effects of stimulation-based control can be predicted by diagnostics from linear
control theory based only on the structural connectivity of the network.

In this work we go beyond linear control theory and explore the framework of optimal nonlinear control [26] for assessing the impact of perturbations on networks of coupled nonlinear systems. Optimal control is an optimization method that derives control policies based on the minimization of a cost functional which depends on the state and control variables. Here, we define a cost functional that penalizes the deviation from the desired network dynamics, the control energy, and spatially non-sparse control inputs. The sparsity term allows us to find optimal control signals that apply to only a few control sites, as introduced by [27] and further studied by [28]. We use methods presented by Tröltzsch in [29] for their calculation, while the handling of the stochastic term is based on the work of Stannat et al. [30]. The resulting mathematical formulation is analogous to the formulation presented by Casas et al. [31] for partial differential equations.

We evaluate the framework of optimal control for a brain network of FitzHugh-Nagumo (FHN) [32, 33] oscillators with additive coupling and white Gaussian noise, where each oscillator represents a brain region and where the connections between them were chosen according to the human connectome derived from diffusion tensor imaging (DTI) measurements. FHN oscillators are well-studied models for neural dynamics, and detailed analyses of the dynamical states exist for single oscillators [34] and various network configurations, like two coupled units [35, 36], lattices [37], rings and hierarchical architectures [38], as well as random and small-world topologies [39, 40] and multiplex networks [41–43].
Similar empirical DTI-measured brain connectivities were used with FHN dynamics to model highly synchronized epileptic-seizure-like states [44, 45], unihemispheric sleep [46], and the functional organization of the resting brain [47–49].

We consider two different classes of control problems: targeted attractor switching between multistable network states and increasing network synchronization. The solutions obtained for given energy, precision, and sparseness constraints are well interpretable and result in intuitively sensible optimal control inputs for all network nodes over time.

We then correlate the control energies with controllability measures from linear control theory. We confirm the findings of Muldoon et al. [25] for the investigated state transition (from the low fixed-point state to the oscillatory regime). For other control tasks, however, we show that diagnostics from linear control theory do not correlate with results from optimal nonlinear control. Conclusions drawn from the structural connectivity alone lead to contradictions, which can only be resolved if the dynamical state of the network and the nonlinear interactions between nodes are taken into account. Applying nonlinear control theory to whole-brain models enables us to investigate control mechanisms also close to bifurcations. It thus may help in the search for more effective paradigms for realistic transcranial brain stimulation protocols.

II. NONLINEAR OPTIMAL CONTROL

A. Network model and control inputs
We consider networks of N equivalent d-dimensional and "noisy" nodes with additive and zero-delay internode coupling. The internode coupling strength is described by an N × N adjacency matrix A, which is scaled by a global coupling strength σ. The equations describing the network dynamics thus read:

d/dt x(t) = h(x(t)) + σ (A ⊗ G) x(t) + η (I_N ⊗ D) ξ(t),    (1)

where ⊗ denotes the Kronecker product. The state of the network is described by a vector x = (x_1, ..., x_N), where the x_i = (x_{i1}, ..., x_{id}) characterize the individual states of the nodes. The local node dynamics is given by h(x) = (h(x_1), ..., h(x_N)) with h(x_i) = (h_1(x_i), ..., h_d(x_i)). All nodes additionally receive Gaussian white noise of equal intensity η, which may be correlated within but is uncorrelated across the nodes of the network. The stochastic variables ξ = (ξ_1, ..., ξ_N) with ξ_i = (ξ_{i1}, ..., ξ_{id}) are independently drawn from a Gaussian distribution, ξ_{ij} ∼ N(0, 1). Within-node noise correlations are described by the d × d dimensional local noise scheme D, while the across-node statistical independence is assured by the N-dimensional identity matrix I_N. The internode coupling term consists of the Kronecker product of the adjacency matrix A and the d × d dimensional local coupling scheme G, where the purpose of the former is to describe the relative interaction strength between nodes, while the purpose of the latter is to distribute the between-node interactions across the d variables which describe the local node dynamics. Initial conditions are denoted by x(t = 0) = x_0.

We then consider instantaneous and additive control input to the network, which is given by N × d independent control variables u = (u_1, ..., u_N) with u_i = (u_{i1}, ..., u_{id}). The equations for the dynamics of the controlled network thus read:

d/dt x(t) = h(x(t)) + σ (A ⊗ G) x(t) + (B ⊗ K) u(t) + η (I_N ⊗ D) ξ(t).    (2)
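For illustration, the network dynamics of Eq. (2) can be integrated numerically. The sketch below is our own and not part of the original study; all function and variable names are our choices. It builds the coupling and noise operators with NumPy's Kronecker product and uses a simple Euler-Maruyama scheme for the stochastic term:

```python
import numpy as np

def simulate_network(h, A, G, D, sigma, eta, u=None, x0=None,
                     dt=0.01, T=10.0, seed=0):
    """Euler-Maruyama integration of the controlled network dynamics, Eq. (2).

    h  : callable mapping the stacked (N*d,) state to its local drift h(x)
    A  : (N, N) adjacency matrix; G, D : (d, d) local coupling/noise schemes
    u  : optional callable t -> (N*d,) control input (here B = I_N and K is
         assumed to be folded into u)
    """
    rng = np.random.default_rng(seed)
    N, d = A.shape[0], G.shape[0]
    C = sigma * np.kron(A, G)            # internode coupling sigma * (A (x) G)
    Dn = eta * np.kron(np.eye(N), D)     # node-local noise scheme eta * (I_N (x) D)
    n_steps = int(round(T / dt))
    x = np.zeros(N * d) if x0 is None else np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1, N * d))
    traj[0] = x
    for step in range(n_steps):
        drift = h(x) + C @ x
        if u is not None:
            drift = drift + u(step * dt)
        xi = rng.standard_normal(N * d)  # independent N(0, 1) increments
        x = x + dt * drift + np.sqrt(dt) * (Dn @ xi)
        traj[step + 1] = x
    return traj
```

Setting `eta = 0` recovers the deterministic dynamics; the study itself integrates with a fourth-order Runge-Kutta scheme, for which only the drift update would change.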
The N × N matrix B allows for the control of multiple nodes with different strengths, while the d × d dimensional matrix K implements the local control scheme, which is the same for every node in the network.

B. The cost functional and the optimality condition
The control u is considered to be optimal (u = u*) if it minimizes an appropriate cost functional. To this end, we construct a cost functional F(x(u), u) for a state-switching task, where the control input drives the network model from one stable state to another (see Section IV A), and for a node synchronization task, where the control input increases the degree of synchronization among its nodes (see Section IV B). For finite noise, i.e., for finite values of η in Eq. (2), the overall cost functional F is defined as a mean over noise realizations [30],

F(x(u), u) = ⟨F_n(x(u), u)⟩ = ⟨F^x_n(x)⟩ + F^u(u),    (3)

where F_n denotes the cost functional for one noise realization n and the angle brackets ⟨·⟩ denote the mean. The state-dependent term F^x_n(x) depends only implicitly on the control and penalizes the deviation from the desired output. It will be different for the switching, F^x_{n,sw}, and synchronization, F^x_{n,syn}, tasks (see below). The control-dependent term F^u(u) accounts for the cost of the control itself.

For the state-switching task, we consider a noise-free system (η = 0), and the state-dependent cost functional F^x_{n,sw} is defined in terms of the deviation of the controlled state from a predefined target state x_T(t):

F^x_{n,sw}(x, t) = F^x_sw(x, t) = ∫_0^T [I_p(t)/2] ‖x(t) − x_T(t)‖²_{L²(0,N)} dt,    (4)

where we consider the control to be active within the time period 0 ≤ t ≤ T.
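On a time-discretized trajectory the two parts of the cost functional reduce to plain sums. The sketch below is our own illustration (names and the rectangular-rule quadrature are assumptions); the control cost anticipates the energy and directional-sparsity terms of Eq. (8) introduced below:

```python
import numpy as np

def switching_cost(x, x_target, I_p, dt):
    """State cost of Eq. (4) as a Riemann sum:
    int_0^T I_p(t)/2 * ||x(t) - x_T(t)||^2 dt.
    x, x_target: (n_steps, N*d) sampled trajectories; I_p: (n_steps,) weights."""
    dev = np.sum((x - x_target) ** 2, axis=1)   # ||x(t) - x_T(t)||^2 per sample
    return float(np.sum(0.5 * I_p * dev) * dt)

def control_cost(u, I_e, I_s, dt):
    """Control cost of Eq. (8): quadratic energy term plus the directional
    sparsity term I_s * sum_k sqrt(int_0^T u_k(t)^2 dt).
    u: (n_steps, N) sampled control inputs."""
    E_k = np.sum(u ** 2, axis=0) * dt           # nodewise control energies, Eq. (13)
    return float(0.5 * I_e * np.sum(E_k) + I_s * np.sum(np.sqrt(E_k)))
```

The square root over each node's integrated energy is what makes the second term promote node-wise (directional) sparsity rather than point-wise sparsity of u(t).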
In order to penalize the precision only towards the end of the controlled time interval rather than during the transition between the initial and target states, the precision-penalizing variable I_p can be time-dependent (see Section IV A).

The state-dependent cost functional for the synchronization task, F^x_{n,syn}, is defined in terms of the deviations of the normalized pairwise cross-correlations R_ij for all nodes i and j from R_T, the desired mean cross-correlation in the synchronized target state:

F^x_{n,syn}(x, t) = (I_p / N²) Σ_{i,j=1}^N (R_ij − R_T)².    (5)

The cross-correlation in component m is defined as

R^m_ij = (1/T) ∫_0^T [(x_im(t) − ⟨x_im⟩_t)(x_jm(t) − ⟨x_jm⟩_t)] / (σ_{x_im} σ_{x_jm}) dt,    (6)

with i, j ∈ [1, ..., N], and where ⟨·⟩_t and σ denote the temporal mean and the standard deviation. The mean R^m over all values R^m_ij,

R^m = (1/N²) Σ_{i=1}^N Σ_{j=1}^N R^m_ij,    (7)

is the component-wise network cross-correlation. Later we specialize to m = 1 and suppress the index m.

The input-dependent cost functional F^u(u) penalizes the energy of the control signal and enforces its directional sparsity [27, 28]. It is given by

F^u(u) = (I_e/2) ∫_0^T ‖u(t)‖²_{L²(0,N)} dt + I_s ‖(∫_0^T u²(t) dt)^{1/2}‖_{L¹(0,N)},    (8)

where I_e and I_s are the weights for the energy and sparsity terms. Typically, increasing the sparsity weight reduces the number of controlled nodes, i.e., the nodes for which the control input is non-zero for at least part of the time period 0 ≤ t ≤ T, while a higher penalty on the energy term typically leads to an overall reduction of control strengths.

Our goal is to find the optimal control u* that minimizes the cost functional for chosen I_p, I_e, and I_s, leading to the minimization problem

u* = arg min_u ⟨F_n(x(u), u)⟩.    (9)

Similarly to Tröltzsch et al. and Casas et al.
[29, 31, 50], we analyze the gradient g = ∇_u F of the cost functional, which has to vanish when evaluated at the optimal control u*. By applying the method of Lagrange multipliers, we obtain an expression for the optimality condition that depends on the adjoint states φ(x, u, t) corresponding to the Lagrange multipliers. The control u is optimal if

g(t) = (B ⊗ K)^T ⟨φ(x, u, t)⟩ + I_e u(t) + I_s λ(t) = 0    (10)

for 0 ≤ t ≤ T, where λ(t) is the derivative of the sparsity term, Eq. (8), with respect to the control inputs u. The adjoint states are governed by a set of linear differential equations:

−φ̇(t) = [D_x(h) + σ (A ⊗ G)]^T φ(t) + ∇_x f^x_n(x, t),    (11)

where D_x(h) is the Jacobian matrix of the state equations of the dynamical system, and f^x_n(x) is the integrand of the cost functional, F^x_n(x) = ∫_0^T f^x_n(x) dt. The adjoint state satisfies the boundary condition φ(T) = 0, and the differential equation is solved backwards in time.

Following Ref. [28], λ(t) is given by

λ_k(t) = u_k(t) / √E_k                             if E_k ≠ 0,
λ_k(t) = −(1/I_s) [(B ⊗ K)^T ⟨φ(x, u, t)⟩]_k       otherwise.    (12)

Here, k ∈ [1, N], and E_k is the nodewise control energy of the resulting optimal control u*, which is defined as

E_k = ∫_0^T u_k²(t) dt,    (13)

resulting in the total control energy E = Σ_k E_k. A detailed derivation of Eqs. (10) and (11) is provided in Appendix A.

C. Numerical solution of the minimization problem
The optimization problem is numerically solved using the conjugate gradient method. We integrate the equations for the network state and the adjoint state given in Eqs. (2) and (11) with the fourth-order Runge-Kutta method. The direction d_k for each step of the conjugate gradient algorithm is defined by the Polak-Ribiere method [51], while its step size s_k is derived using simple bisection [52]. We apply the conjugate gradient scheme of Fletcher and Reeves [53] as presented in the following.

We initialize at iteration k = 0 by choosing an initial control u_0 and drawing 20 noise realizations ξ_n(t). The corresponding states x(t), Eq. (2), and adjoint states φ(t), Eq. (11), are calculated for every noise realization. Then g_0(t) as given in Eq. (10) is evaluated. The descent direction is initialized with d_0(t) = −g_0(t). We then iterate until convergence:

while ‖g_k(t)‖_∞ > ε:
1. compute the step size s_k using bisection
2. set u_{k+1}(t) = u_k(t) + s_k d_k(t)
3. calculate the states x_{k+1}(t) and the adjoint states φ_{k+1}(t) for every noise realization and compute the mean ⟨φ_{k+1}(t)⟩
4. evaluate the gradient g_{k+1}(t)
5. compute β_k using the Polak-Ribiere method:
β_k = g_{k+1}^T (g_{k+1} − g_k) / ‖g_k‖²
6. set the direction d_{k+1}(t) = −g_{k+1}(t) + β_k d_k(t)
7. if d_{k+1}(t) is not a descent direction, set d_{k+1}(t) = −g_{k+1}(t)
8. set k = k + 1

III. WHOLE-BRAIN NETWORK
In the following, we apply the described algorithm to control the dynamics of a whole-brain network without delays. This assumption holds when considering brain oscillations that are slow compared to the transmission delays (approx. 5-15 ms [54]) between brain areas, for example the slow oscillation regime (< …).

A. Simplified dynamics of a neuronal population
We consider a single FitzHugh-Nagumo (FHN) oscillator with the activity variable x_1 and the linear recovery variable x_2:

h(x) = (h_1(x), h_2(x)) = (ẋ_1, ẋ_2) = (R(x_1) − x_2 + µ, (x_1 − δ x_2)/τ),    (14)

where µ is a node-independent, constant background input and R(x_1) = −α x_1³ + β x_1² − γ x_1. The parameters in R are chosen to obtain the bifurcation diagram shown in Fig. 1. This bifurcation diagram, as a function of the background input µ, shows three distinct states. In the down state, the node is in a stable fixed point and has a low constant value of the activity variable x_1 (in blue). In the oscillatory state, the activity variable x_1 oscillates at an input-dependent frequency (in red). In the up state, the node is again in a stable fixed point with a high constant value of the activity variable x_1.

The succession of these states in the single-node dynamics – a supercritical Andronov-Hopf bifurcation from the down state to the oscillatory state and another supercritical Andronov-Hopf bifurcation from the oscillatory state to the up state [34, 57] – resembles the states found in large random networks of excitatory and inhibitory spiking neuron models and their corresponding mean-field description [58].

FIG. 1: Bifurcation diagram of an uncoupled FitzHugh-Nagumo oscillator, as given in Eq. (14), with the parameters α = 3, β = 4, γ = 1.5, δ = 0.5, and τ = 20. The blue line shows the minimal and maximal values of the activity variable x_1 (identical in fixed points) for the respective background input µ to the node. The red line shows the frequency of the oscillation (time and therefore also frequency are measured in arbitrary units).

B. The brain network model
The topology of the whole-brain network model is derived from diffusion tensor imaging (DTI) data of 12 human subjects (all male and 26-30 years old) from the Human Connectome Project [59] (see Supplemental Material for subject IDs). The N = 94 nodes in the network correspond to the cortical and subcortical regions defined by the AAL2 atlas-based segmentation [14] (cerebellum excluded). We performed probabilistic fiber tracking (using FSL [60]; details on the processing pipeline are provided in the Supplemental Material) to determine the relative connection strengths (edges) between these brain regions (nodes). The pairwise structural connectivity is summarized in the weighted adjacency matrix A shown in Fig. 2.

FIG. 2: Weighted adjacency matrix A describing the topology of the whole-brain network. Color denotes the relative connection strength of the structural connectivity for every pair of the N = 94 nodes.

As motivated in Section III A, the dynamics of each node in the network is described by a FHN oscillator [Eq. (14)]. The general network dynamics, Eq. (2), then simplifies to

d/dt x_{k1}(t) = h_1(x_k(t)) + σ Σ_{i=1}^N A_ki x_{i1}(t) + η ξ_{k1}(t) + u_{k1}(t),
d/dt x_{k2}(t) = h_2(x_k(t)),    (15)

for all nodes k ∈ [1, N]. This implies setting the control matrix B = I_N, which means that all individual nodes can potentially receive independent control inputs. The local coupling, control, and noise schemes are set to G = K = D = [[1, 0], [0, 0]], such that only the activity variable x_{k1} (rather than the linear recovery variable x_{k2}) is affected by the activity of distant brain areas coupled via axonal fiber bundles (G) [61], by external electrical stimulation (K) [62], or by noise (D).

C. State space exploration
In this section, we describe and characterize the different dynamical states that can emerge in a whole-brain network of coupled FHN oscillators for a large range of parameter configurations. Having such an overview of the dynamical landscape of the system will allow us to formulate well-defined control tasks in Section IV. Throughout the state space exploration we do not apply any control input [u_k(t) = 0 ∀ k, t].

Initially we consider the noise-free case (η = 0). Here, only two free model parameters remain: the global coupling strength σ and the time-independent background input µ, which is the same for each FHN oscillator. The state space of the FHN network is explored by simulating the network dynamics for wide ranges of these parameters. We integrate the network dynamics given in Eq. (15) using the fourth-order Runge-Kutta method with a small step size of h = 0.1 in order to ensure numeric stability and small truncation errors. The initial conditions x_0 are drawn randomly from a uniform distribution in the interval [0, 1) or taken as the state vector x(t_end) of the last time step of the previous simulation with the same σ and slightly smaller µ (continuation). In order to avoid analyzing transient effects, we show and evaluate the network states only after a sufficiently long transient time t̄.

If oscillations are present, the dynamical state is characterized by the strength of synchronization, quantified by the total cross-correlation, Eq. (7), and by the dominant frequency f_dom. The latter is given by the frequency of the highest peak of the combined power spectrum of all nodes,

f_dom = arg max_f Σ_{i=1}^N S_{xx,i}(f),    (16)

where S_{xx,i}(f) = |(F x_{i1})(f)|², with Fourier transform F, denotes the power spectral density of an individual node.

Figure 3 provides an overview of the state space of the FHN network. Similar to the case of a single FHN oscillator (cf. Fig. 1), the network shows two regions in parameter space for which a stable fixed point (FP) exists. For small values of µ and σ the activity variable x_{k1} has a low value for all nodes [down state, cf. Fig. 3c(1)]; for large values of µ and σ the activity value is high [up state, cf. Fig. 3c(8)]. Figure 3a shows that the dynamics can transition from a down state to an oscillatory state, as well as from an oscillatory state to an up state, by either increasing µ or σ. For convenience, we call these transitions low and high bifurcation, respectively.

Within the oscillatory regime, we observe network states that are qualitatively different in their appearance, which we explain in the following by their different underlying mechanisms. A single, uncoupled FHN node oscillates in its limit cycle (LC) if it receives a background input in the range of µ ∈ [0. , .
33] (cf. Fig. 1). We therefore expect a node in the network to show sustained oscillation (indiv. LC) – and consequently to affect the dynamics of the network – if its combined input is in this interval. We thus call a node "to be in its individual LC regime" if

0.… ≤ µ + σ Σ_{i=1}^N A_ki x_{i1}(t) ≤ …,    (17)

for at least one point in time within the considered time interval. This criterion is further motivated by observations of the dynamics (cf. Fig. S1 in the Supplemental Material), which show qualitative changes in behavior depending on whether it is met by all, some, or none of the nodes in the network.

FIG. 3: Overview of the state space of the brain network model, Eqs. (14) and (15), for the noise-free case (η = 0). Time t is measured in arb. units, and simulations of each parameter configuration are started with initial conditions x_0 based on continuation and evaluated [except for (c1) and (c8)] after t̄ = 5000 (see text for details). (a) Each pixel corresponds to a network with a particular parameter configuration for µ ∈ [0. , .4] and σ ∈ [0, . ]; the labeled points mark parameter configurations that are explored in Section IV B. (b) One-dimensional bifurcation diagram for a network as a function of µ for a fixed coupling strength of σ = 0.08. The dominant network frequency (red line) is calculated according to Eq. (16). Minimal and maximal values of the activity variable x_{k1} are plotted for each node individually, where line color indicates the weighted degree (green to blue indicates high to low values). (c) Example traces of the activity of all nodes in the network for different values of µ [see (b)] and for σ = 0.08, showing different dynamical states. Line color indicates the weighted degree of the nodes [see (b)]. Traces at the FPs (1, 8) are shown after t̄ = 0 (see text for details) to show the stability of the respective fixed point.

If all nodes fulfill Eq. (17), we observe the network to be in a synchronous, high amplitude limit cycle [haLC in Fig. 3a, cf. Fig. 3c(5)]. If, however, no node fulfills Eq. (17), the network either inherits the fixed point solution from the individual nodes, or a new state emerges due to network effects. We observe both cases: The first one is true in both the down and up states of the network in Fig. 3a. The second case appears for low and intermediate coupling strengths σ at the transition from the down or up state to the oscillatory state. In these states the FP of the network dynamics is unstable due to coupling effects, and a self-sustained low amplitude limit cycle [laLC in Fig. 3a, cf. Fig. 3c(2,7)] emerges. The geometric interpretation of this laLC is a rotation around the FP that would be transient for an isolated node, but is prevented from converging to a fixed point by the network couplings.

It is also possible that only a fraction of the network nodes fulfill the criterion in Eq. (17), which can lead – even in the absence of noise and without network delays – to asynchronous and apparently aperiodic behavior [mixLC in Fig. 3a, cf. Fig. 3c(3,6); further indications for aperiodicity are provided in the Supplemental Material]. On the network level, this can be seen as the result of the different frequencies of the two coexisting network limit cycles, laLC and haLC, interacting. Close to the bifurcation, the frequencies of the individual nodes (cf. Fig. 1), and their trajectories (cf. Fig. S3 in the Supplemental Material), are particularly sensitive to their respective inputs. If the additive coupling input between the nodes is not sufficient to entrain a common frequency, this may result in asynchronous and potentially aperiodic network oscillations. States that are classified as mixLC are, however, not necessarily asynchronous.
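The classification above – all, some, or none of the nodes meeting the individual-LC criterion of Eq. (17) – is straightforward to evaluate on a simulated trajectory. In the sketch below (our own implementation), `mu_lo` and `mu_hi` are placeholders for the bounds of the oscillatory input range of an isolated node, which are read off the bifurcation diagram of Fig. 1:

```python
import numpy as np

def nodes_in_individual_lc(x1, A, mu, sigma, mu_lo, mu_hi):
    """Evaluate the individual-LC criterion of Eq. (17) for every node:
    mu_lo <= mu + sigma * sum_i A[k, i] * x1[t, i] <= mu_hi
    for at least one sampled time point.

    x1           : (n_steps, N) activity variable of every node
    mu_lo, mu_hi : bounds of the input range in which an isolated FHN node
                   oscillates (placeholder values, cf. Fig. 1)
    """
    total_input = mu + sigma * x1 @ A.T          # combined input per time and node
    in_range = (total_input >= mu_lo) & (total_input <= mu_hi)
    return in_range.any(axis=0)                  # True where Eq. (17) holds at some t
```

Whether the returned mask is all true, partially true, or all false then distinguishes the haLC, mixLC, and FP/laLC regimes discussed above.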
If the driving force of the nodes that fulfill the criterion in Eq. (17) is high enough, it will lead to frequency entrainment. The oscillations of the driven nodes [where Eq. (17) is not fulfilled] are then similar to the ones of the driver nodes, resulting in dynamics that are similar to the haLC state [cf. Fig. 3c(4,5)].

The distinction between different dynamical states and types of network oscillations (laLC, mixLC, haLC) is particularly interesting since the transitions between these are the regions in state space where we find multistable network states. Multistability is detected by simulating the network dynamics for 21 different initial conditions x_0 and comparing the resulting time series by calculating their node-wise correlation in the activity variable. We ignore the first t̄ = 5000 arb. units of each time series to avoid analyzing transient effects and then compare the interval t ∈ [5000, …]. Example traces of the activity variable x_{k1} from these regions are shown in Fig. 4. The automatic detection of multistability might not capture all multistable states, especially because some of the states may be very similar (cf. Fig. 4b, bottom panel). More detailed analyses could thus also uncover bistabilities between the up state and mixLC, as well as between mixLC and haLC at the low bifurcation. We inspected the detected multistable states visually to assure that their differences are not due to transient effects and confirmed their assignment to the specified multistable regions in Fig. 3a.

Adding noise to the network most strongly affects the dynamics of the states close to the bifurcations. The influence of noise on synchronous, mono-stable oscillatory states is, as expected, comparably small (cf. Figs. S4d, e in the Supplemental Material). If the network dynamics is in a FP far away from the bifurcation line, the additional Gaussian white noise results in uncorrelated fluctuations in the activity variable (cf. Fig. S4i in the Supplemental Material).
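The multistability test described above (several runs from random initial conditions, compared after the transient via node-wise correlations of the activity variable) can be sketched as follows. The correlation threshold separating "same attractor" from "different attractor" is our assumption, and phase-shifted oscillatory runs would additionally require a phase-alignment step:

```python
import numpy as np

def runs_match(x1_a, x1_b, threshold=0.8):
    """Compare two post-transient runs of shape (n_steps, N) via the mean
    node-wise Pearson correlation of the activity variable. The threshold
    separating 'same attractor' from 'different attractor' is an assumption."""
    corrs = []
    for k in range(x1_a.shape[1]):
        a, b = x1_a[:, k], x1_b[:, k]
        if a.std() < 1e-12 or b.std() < 1e-12:   # fixed-point trace: compare levels
            corrs.append(1.0 if abs(a.mean() - b.mean()) < 1e-6 else 0.0)
        else:
            corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs)) >= threshold

def detect_multistability(runs, threshold=0.8):
    """True if any pair of runs (each started from a different random initial
    condition and truncated to post-transient times) settles on a different state."""
    return any(not runs_match(runs[i], runs[j], threshold)
               for i in range(len(runs)) for j in range(i + 1, len(runs)))
```

As in the text, such an automatic test can miss attractors whose traces are very similar, so visual inspection of the flagged states remains useful.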
Parametrizations that, without noise, would lead to a stable FP close to the bifurcation show oscillations when sufficient noise is introduced (cf. Figs. S4a, h in the Supplemental Material). The clear distinction between the FP and laLC network states in the noise-free case (cf. Fig. 3a) is therefore blurred for noisy dynamics. For asynchronous mixLC states, the additional noise can have a synchronizing effect (cf. Fig. S4c in the Supplemental Material). This can be explained by more nodes fulfilling the criterion in Eq. (17), resulting in a stronger drive for the remaining nodes and therefore more synchronous oscillation. These effects lead to a shift of the bifurcation lines compared to the noise-free case and consequently also affect the location of multistable network states.

In the case of multistability, we observe that this additional stochastic input can lead to perturbations that are sufficient to drive the dynamics from one attractor to the other. This results in noise-induced state switching, as shown in Fig. 4c, at the bifurcation lines of the noisy state space. Such noise-induced transitions are a known phenomenon in systems of coupled oscillators [63] and may be exploited for the control of neural dynamics in practice [64].

IV. OPTIMAL CONTROL OF THE BRAIN NETWORK DYNAMICS

A. Switching between bistable network states
In this section, we present optimal control inputs that induce a switch between previously identified multistable states. All control inputs optimize the cost functional given in Eq. (3), with the state-dependent term given in Eq. (4) and the energy and sparsity terms given in Eq. (8). Energy and sparsity terms are evaluated over the whole time interval during which the control is active. The cost functional itself, however, considers the deviation from the target state only at the end of the control period. For this we set I_p(t) = I*_p for (T − τ) ≤ t ≤ T and I_p = 0 otherwise. The convergence criterion of the numerical solution of the minimization problem, as described in Section II C, is set to ε = 10⁻… for all our applications. We ensured for all presented examples that the method actually converges and that results do not change for lower values of ε. We computed the optimal control for different phase shifts of the initial and target states with respect to the control onset and present the results for the phase shift which leads to the smallest total control energy E.

Figure 5 shows the result of applying optimal control at point A1 in Fig. 3a, where a low activity fixed point coexists with an oscillatory mixed state. The weight I_s for the sparseness term in Eq. (8) was set to zero. Since the target state in this task is always stable, the network remains in this state once it is reached, also after the control is turned off. If the initial state is the down-state FP, the obtained optimal control input oscillates synchronously for all nodes with increasing amplitude. The frequency of the control input [f_dom(u) = 35.
0] corresponds to the frequency of the fixed point's focus [f_dom(x_FP) = 35.…]. … node oscillations and the respective optimal control inputs in state space.

FIG. 4: Example traces of the activity variable x_{k1} of each node from the regions of bi- and multistability indicated by the labeled points in Fig. 3a. Plots show the dynamics for random initial conditions after a sufficiently long transient time (t̄ = 5000 arb. units). Line color indicates the weighted degree (green to blue indicates high to low values). (a) Traces at the low bifurcation; top: bistability between laLC (left) and mixLC (right) in point A1 (µ = 0.…, σ = 0.…, η = 0); bottom: bistability between states in mixLC in point A3 (µ = 0.…, σ = 0.…, η = 0). (b) Traces at the high bifurcation; top: bistability between mixLC (left) and haLC (right) in point B1 (µ = 1.…, σ = 0.…, η = 0); bottom: multistability between states in mixLC in point B3 (µ = 1.…, σ = 0.…, η = 0). The main difference between the left and middle panels is the trajectory of the node with the smallest in-degree (highlighted in red). (c) Traces showing noise-induced state switching. Locations N1 and N2 are also close to the bifurcations but are not shown in Fig. 3a since the state space changes under the influence of noise. Top: example at the low bifurcation in point N1 (µ = 0.…, σ = 0.…, η = 0.…); bottom: example at the high bifurcation in point N2 (µ = 1.…, σ = 0.…, η = 0.…).

FIG. 5: (a, b) The activity variables x_{k1} of each node k are shown over time. The colored lines show the controlled activities, with different colors denoting different nodes. The gray lines correspond to the uncontrolled activities. (c, d) Corresponding optimal control inputs u_k to each node. The red shaded area indicates the interval during which the control is active (from t = 0 to t = 400). The deviation from the target state is penalized in the last τ = 25 time units of the control. Parameters were: µ = 0.…, σ = 0.…, η = 0, I*_p = 0.…, I_e = 1, I_s = 0.

When switching from the oscillatory to the down state, as in Fig.
5d, the optimal control input consists of one short biphasic pulse applied to all of the nodes, followed by minor corrections. It is a known result from control theory that applying a biphasic control pulse around an extreme point is an efficient way to drive an oscillating system to a stop. Interestingly, the optimal control strategy in this example is not to apply the pulse at an extreme point of the activity variables x_k, but rather in a way that the gradient of the control is highest when the phase velocity of the limit cycle oscillation is lowest (best observed in Video 2 in the Supplemental Material). Although the deviation from the target state is only penalized at the end of the control interval, the pulse is applied early in order to give the system enough time to come to rest.

Figure 6 shows the result of applying optimal control at point B1 in Fig. 3a, where a low amplitude limit cycle (laLC) around the high activity up-state coexists with an oscillatory mixed state (mixLC, cf. Section III). When switching from the laLC to the mixLC state (Figs. 6a, c), the frequency of the control input [f_dom(u) = 30.0] is again adapted to the frequency of the initial state [f_dom(x_laLC) = 30.…]. While all nodes share the same dominant frequency (f_k = f_dom = 30.… ∀k), their phase speed is not the same at all times [reflected by the lower synchrony R(x_laLC) = 0.…; R(u) = 0.…, R(u) = 0.…].

FIG. 6: (a, b) Activities x_k of each node k are shown over time. The colored lines show the controlled activities with different colors denoting different nodes. The gray lines correspond to the uncontrolled activities. (c, d) Corresponding optimal control input u_k to each node. The red shaded area indicates the interval during which the control is active (from t = 0 to t = 400). The deviation from the target state is penalized in the last τ = 25 time units of the control. Parameters were µ = 1.…, σ = 0.…, η = 0.…, I_p* = 7 × 10⁻…, I_e = 1.0, I_s = 0.

[…] of the network oscillation decreases to almost zero (for t ∈ [200, …]). By adjusting the weight I_s of the cost functional, we can tune the number of controlled nodes. Figure 7 shows the control energy E_k [Eq. (13)] of the optimal control input u_k to each node k as a function of the sparsity parameter I_s. For low values of I_s, all nodes receive a finite control signal. When I_s is increased, fewer nodes are controlled, until I_s becomes so large that the target state can no longer be reached. As expected, decreasing the number of controlled nodes needs to be compensated for with a higher control energy.

The results also show that at the low bifurcation (Figs. 7a, c), nodes with a high weighted degree receive, independent of the switching direction, a stronger control signal and remain controlled even for high values of I_s. This shows that at the low bifurcation it is most important to control the network hubs, since they act as driver nodes for attractor switching. At the high bifurcation (Figs. 7b, d), on the other hand, nodes with low weighted degree receive the strongest control inputs.

This different behavior can be explained based on the additive coupling scheme between the nodes (cf. Section III), which causes nodes with higher degree to typically receive stronger inputs. When choosing parameters close to the low bifurcation, these high degree nodes therefore have higher oscillation amplitudes (cf. Fig. 1) and are consequently driving the oscillation of the remaining nodes in the network. Close to the high bifurcation, the nodes with lower degree, which receive less input, are more likely to remain in the limit cycle regime. Consequently, and in contrast to the dynamics close to the low bifurcation, the low degree nodes drive the oscillation of the network hubs. Videos 1 and 3 (Supplemental Material) illustrate how the control inputs on the respective driver nodes force the network dynamics to the high-amplitude oscillation target states.

FIG. 7: State switching with sparse optimal control. The nodes are sorted from lowest (bottom) to highest (top) weighted degree. The length of the bars indicates up to which value of the spatial sparsity parameter I_s the node still receives finite control input. Their color denotes the corresponding optimal control energies E_k [Eq. (13)] for each node. (a) Switching from the down state (FP) to the oscillatory state (mixLC) with parameters close to the low bifurcation as in Fig. 5, and (b) switching from the low-amplitude (laLC) to the high-amplitude oscillatory state (mixLC) with parameters close to the high bifurcation as in Fig. 6. (c) Same as (a) but switching from mixLC to FP. (d) Same as (b) but switching from mixLC to laLC.

The optimal control inputs to the control sites have well interpretable shapes. When the objective is to transition from a low-amplitude state to one with a higher amplitude (mixLC, Figs.
5a and 6a), the optimal control strategy utilizes the characteristics of the flow field in state space and synchronously drives the network at its resonant frequency from one attractor towards the other. Switching in the opposite direction, and therefore leaving this basin of attraction, is achieved most efficiently with one strong biphasic pulse. This deflects the system from its initial mixLC trajectory and causes it to drop to the attractor with a lower amplitude.

FIG. 8: Synchronizing the noise-free network with optimal control inputs. (a) Synchronization task at point S1 (low bifurcation) in Fig. 3a. Activities x_k of each node k are shown over time. The colored lines show the controlled activities (average cross-correlation R = 0.995 [Eq. (7)]) with different colors denoting different nodes. The gray lines show the uncontrolled activities (R = 0.289). (b) Same at point S2 (high bifurcation) (R = 0.996; uncontrolled: R = 0.368). (c, d) Optimal control inputs u_k to each node. The red shaded area indicates the interval during which the control is active (from t = 0 to t = 500). The vertical black dashed line indicates the critical time t_c (see text). Parameters were: point S1: µ = 0.…, σ = 0.…; point S2: µ = 1.…, σ = 0.…; η = 0, I_p = 0.…, I_e = 1.0, I_s = 0.

B. Synchronizing the network dynamics
In this section, we apply the nonlinear control method to find the optimal inputs to synchronize the dynamics of the individual nodes. For this application we use the state-dependent cost functional given in Eq. (5) to penalize the deviation from the target cross-correlation (R_T = 1, fully synchronous state) and the terms in Eq. (8) to penalize the energy of the control and enforce its sparsity in space.

We parameterize the system to be in the two asynchronous regions marked by the labels S1 and S2 in Fig. 3a, close to the low and high bifurcation lines. The optimal control method is then applied to synchronize the dynamics for the noise-free (Fig. 8) and noisy (Fig. 10) case. Both figures show the controlled (synchronous) and uncontrolled (asynchronous) time series and the corresponding optimal control inputs. Since the dynamics in points S1 and S2 are monostable, the synchronous state is not a stable solution, and the system returns to its original state as soon as the control is switched off. The network dynamics and the optimal control strategies are best seen in Videos 5-8 (Supplemental Material).

Figure 8 shows how the optimal control inputs synchronize the oscillation of all nodes in the network. In the uncontrolled state (in gray, Figs. 8a, b), the nodes at both bifurcations oscillate with similar frequencies (mean and standard deviation of oscillation frequencies across nodes at S1: ⟨f⟩_N = 32.…, σ_⟨f⟩_N = 1.…, and at S2: ⟨f⟩_N = 27.…, σ_⟨f⟩_N = 0.…). The control operates in two phases. First, the phases of the node oscillations are aligned until a critical time t_c (see Supplemental Material for details about the computation of t_c). Second, a periodic input maintains the synchronous oscillation during the period the control is active.

For times t < t_c, Fig. 9a shows that at the low bifurcation the energies E_k of the control inputs to the nodes are positively correlated with the weighted node degrees. Thus, it is beneficial to focus the control input on the network hubs for alignment.
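For concreteness, the synchrony measure can be sketched in code. The following is an illustrative implementation (not the authors' code) of an average pairwise cross-correlation in the spirit of Eq. (7); the exact normalization of the paper's R may differ:

```python
import numpy as np

def average_cross_correlation(x):
    """Average pairwise Pearson correlation of the node activities, a
    common synchrony measure: R = 1 for fully synchronous dynamics.
    x: array of shape (N_nodes, N_timesteps)."""
    C = np.corrcoef(x)                     # N x N pairwise correlations
    iu = np.triu_indices(C.shape[0], k=1)  # distinct node pairs only
    return C[iu].mean()
```

A fully synchronous network gives R = 1, while uncorrelated node activities give R close to zero.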
A possible interpretation for this strategy is that the optimal control utilizes the influence of the network hubs on the remaining nodes to force them onto the synchronous limit cycle trajectory. However, we again observe a different behavior at the high bifurcation, as shown in Fig. 9b. Here, E_k is negatively correlated with the weighted node degree for times t < t_c. In this case, the network coupling is much smaller compared to the background input µ, and the nodes' phase space oscillations are similar to the trajectory of the limit cycle of an uncoupled node (cf. Video 6 in the Supplemental Material). The negative correlation in Fig. 9b suggests that if the dynamics of the nodes are similar in the first place, it is beneficial to focus the control on the more weakly coupled nodes and align their phase space trajectories with the trajectory of the network hubs. Similar to the attractor switching at the high bifurcation (cf. Figs. 7b, d), the control of the low degree nodes drives the network oscillation towards the target state.

In the second phase, for t ≥ t_c, the high degree nodes without control would have a higher (or lower) phase velocity than the nodes with intermediate degree, while the opposite is true for nodes with low degree. The optimal control strategy is thus to act on the low and high degree nodes in opposite directions, which can be observed as the antiphase control inputs shown in Figs. 8c and d, where nodes with intermediate degree receive only small control inputs. (Videos 5 and 6 in the Supplemental Material show how the control inputs keep the nodes together in phase space.) This is reflected in the arch-like form of the mean energies E_k of the control inputs at both bifurcations, as seen in Figs. 9c and d, with high control energies for nodes with low or high degree. As a result, the optimal control achieves a mean network cross-correlation of R = 0.995 in the time interval t ∈ [t_c, T], compared to the uncontrolled scenario with R = 0.289 at point S1 (Fig. 8a), and R = 0.996 compared to R = 0.368 at point S2 (Fig. 8b).

FIG. 9: Mean energies E_k of the control inputs [Eq. (13)] as a function of the weighted node degree d_k for the noise-free synchronization task. The error bars show the standard deviation over 10 different initial conditions of the network dynamics. (a, b) The control signal for time t ∈ [0, t_c) is considered. Linear regression (red line) coefficients are r = 0.…, p < 10⁻… and r = −0.…, p < 10⁻…. (c, d) The control signal for time t ∈ [t_c, T] is considered. (a, c) Point S1 (low bifurcation) in Fig. 3a. (b, d) Point S2 (high bifurcation) in Fig. 3a. All parameters are as in Fig. 8.

Figure 10 shows results for the application of optimal control to the synchronization task for the case of finite noise. We obtain a mean network cross-correlation of R = 0.83 (analysis of the deviation from R_T in the Supplemental Material; uncontrolled: R = 0.25) at point S1 (Fig. 10a) and R = 0.84 (uncontrolled: R = 0.…) at point S2 (Fig. 10b). […] weighted node degree d_k. This change in control strategy is also reflected in Fig. 11, which shows the control energy E_k per node for different noise levels. The control energy increases for all nodes with increasing noise strength, both at the low (Fig. 11a) and the high bifurcation (Fig. 11b). The critical time t_c is not defined for finite noise, since fully synchronous dynamics are not achieved, and hence E_k is calculated over the whole interval during which the control is active. With increasing noise level η, we observe a gradual transition of the optimal control strategy. Instead of balancing out the phase velocity differences between nodes towards the phase of nodes with intermediate degree (cf. Videos 5 and 7 in the Supplemental Material), the control in the noisy case forces the system onto a limit cycle with a slightly higher oscillation amplitude (cf. Videos 6 and 8 in the Supplement).

FIG. 10: Synchronizing the noisy network with optimal control inputs. (a) Synchronization task at point S1 (low bifurcation) in Fig. 3a. Activities x_k of each node k are shown over time with different colors denoting different nodes (average cross-correlation R = 0.83; uncontrolled: R = 0.25). (b) Same at point S2 (high bifurcation) (R = 0.84; uncontrolled: R = 0.…). (c, d) Optimal control inputs u_k to each node. The red shaded area indicates the interval during which the control is active (from t = 0 to t = 500). Parameters were η = 0.…, I_p = 0.…, I_e = 1.0, I_s = 0; all other parameters are as in Fig. 8.

FIG. 11: Mean energies E_k of the optimal control inputs [Eq. (13)] as a function of the weighted node degree d_k for different noise levels η for the synchronization task. The error bars show the standard deviation over 10 different initial conditions of the network dynamics, with 20 independent noise realizations of the optimization each. (a) Point S1 (low bifurcation) in Fig. 3a. (b) Point S2 (high bifurcation) in Fig. 3a. Parameters were I_p = 0.…, I_e = 1.0, I_s = 0; all other parameters are as in Fig. 8.
This strategy requires a higher control energy but is independent of the specific realization of the noise.

By imposing sparsity constraints on the system, we can investigate to what extent synchronization can be achieved with fewer control sites. Figure 12 shows the relation of the synchrony in the network, measured by the average cross-correlation R [Eq. (7)], to the sparsity parameter I_s. The average cross-correlation R decreases with increasing sparsity parameter I_s at both locations in state space and for all noise levels. This shows that, under the imposed constraints, it is not possible to fully synchronize the network with sparse control. Optimally synchronizing aperiodic states is a collective effort. Unlike state switching, it cannot be achieved by controlling only a few sites, because the reduced number of control sites cannot sufficiently be compensated for by a higher control energy. Especially in the case of noisy network dynamics, where the control must drive all nodes, the network's synchrony quickly drops to the value of the uncontrolled network when I_s is increased. For fixed I_s, the number of controlled nodes and the control sites optimal for synchronizing the network change for each initial condition x_0 (cf. Fig. S1 in the Supplemental Material), as they depend on details of the exact network dynamics.

FIG. 12: Average network cross-correlation R [Eq. (7)] with sparse optimal control as a function of the sparsity parameter I_s [Eq. (8)] for different noise levels η. The mean and standard deviation are shown for 5 different initial conditions of the network dynamics, each with 20 independent noise realizations. Horizontal lines indicate the cross-correlation R for the uncontrolled case. (a) Point S1 (low bifurcation) in Fig. 3a. (b) Point S2 (high bifurcation) in Fig. 3a. Parameters were I_p = 0.…, I_e = 1.0; all other parameters are as in Fig. 8.

V. LINEAR VS. NONLINEAR CONTROL
When relating functional properties of a neural system to properties of the underlying connectome, neural activity is often approximated by linear dynamical systems [23, 65, 66]. This has obvious benefits for analyzing the effects of perturbations, since calculating optimal control inputs for linear systems, as in Refs. [12, 22], can be done analytically and with little computational effort. The linearity assumption allows for the use of established measures from linear control theory, such as the modal and average controllability, and as Gu et al. [23] point out, linear systems indeed accurately approximate the behavior of the nonlinear case in certain scenarios. In general, however, optimal control inputs and sites also depend on the control task and on the specifics of the nonlinear dynamical system.
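To illustrate why the linear case is computationally cheap: for a controllable discrete-time linear system x(t+1) = A x(t) + B u(t), the minimum-energy input steering the state to a given target has a closed form via the finite-horizon controllability Gramian. The sketch below shows this standard textbook construction; it is an illustration, not the specific method of Refs. [12, 22]:

```python
import numpy as np

def min_energy_control(A, B, x0, x_target, T):
    """Closed-form minimum-energy input steering x0 to x_target in T
    steps for x(t+1) = A x(t) + B u(t). Returns u of shape (T, n_inputs)."""
    n = A.shape[0]
    W = np.zeros((n, n))          # finite-horizon controllability Gramian
    Apow, mats = np.eye(n), []
    for _ in range(T):
        AtB = Apow @ B            # matrix power times B: A^t B, t = 0..T-1
        mats.append(AtB)
        W += AtB @ AtB.T          # W = sum_t A^t B B^T (A^t)^T
        Apow = A @ Apow           # after the loop: T-step propagator A^T
    lam = np.linalg.solve(W, x_target - Apow @ x0)
    # optimal input: u(t) = B^T (A^{T-1-t})^T W^{-1} (x_target - A^T x0)
    return np.array([mats[T - 1 - t].T @ lam for t in range(T)])
```

Substituting the returned inputs into the dynamics reproduces the target state exactly after T steps, which is the sense in which the linear problem is solved analytically rather than by iterative optimization.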
Modal controllability refers to the ability of a node to control each evolutionary mode of a dynamical network [67], and the average controllability is given by the average control input energy to the respective node over all possible target states [23, 25] (mathematical description in Appendix B). For linear systems, the average controllability of a node is known to be strongly correlated with its weighted degree d_k (r = 0.…, p < 10⁻… for our structural connectivity matrix), while the modal controllability is known to have a strong inverse correlation (r = −0.…, p < 10⁻…; see Fig. S7 in the Supplemental Material). Both control diagnostics are predictive for the role of network nodes in a control setting. Nodes with high average controllability require only a low energy input to move the (linear) system into "easy-to-reach" states, while nodes with high modal controllability require a high control energy input to have an effect on the dynamics but are crucial when the target state of the system is "difficult-to-reach" [21, 23, 68].

Figure 13 shows the correlations of the energies E_k of the optimal control inputs with the average and modal controllability for the attractor switching tasks. At the low bifurcation A1 (Figs. 13a, c) we find that the energy of the control input for a node is positively correlated with its average controllability and negatively correlated with its modal controllability, both irrespective of the switching direction. Therefore, the most efficient strategy for switching between the attractors is to control the network hubs with high average controllability, which then force the other nodes towards the target state. These results can be considered consistent with predictions from linear control theory [23] and previous results on the global impact of stimulation [25].

At the high bifurcation at location B2 (Figs. 13b, d), however, we observe the opposite trend, with the control energy being negatively correlated with the average and positively correlated with the modal controllability. Predictions based on linear control theory alone are indifferent to the actual dynamics and therefore not able to distinguish between the two cases. Yet, nodes with high modal controllability play a much more important role in affecting the global dynamics at this location in state space.

For the synchronization task, we observe a similar pattern. While there is a clear correlation between the optimal node-wise control energies E_k and the average controllability when aligning the phases (i.e., t < t_c) at the low bifurcation (Fig. 14a), no correlation is observed at the high bifurcation (Fig. 14b). (The corresponding results involving modal controllability are shown in Fig. S8 in the Supplemental Material.) In the case of maintaining the synchronous network dynamics (t ≥ t_c, Figs. 14c, d), we observe, at both bifurcations, a similar but less obvious arc-like shape as in Figs. 9c, d. For noisy network dynamics, however, E_k is negatively correlated with the average controllability at both bifurcations (cf. Fig. S9 in the Supplemental Material).

FIG. 13: Mean energies E_k of the optimal control inputs [Eq. (13)] plotted against measures from linear control theory for state switching tasks from the down state / low amplitude oscillation towards the high amplitude oscillation (black) and the opposite direction (red). (a) E_k vs. average controllability when switching at the low bifurcation (point A1 in Fig. 3a). (b) Same as (a) but at the high bifurcation (point B2 in Fig. 3a). (c) E_k vs. modal controllability at A1 and (d) at B2. Solid lines denote the results of a linear regression with the following obtained coefficients: (a) r = 0.…, p < 10⁻… for the down-to-up switch (black) and r = 0.…, p < 10⁻… for the up-to-down switch (red); (b) r = −0.…, p = 0.001 (black) and r = −0.…, p = 0.…; (c) r = −0.…, p < 10⁻… (black) and r = −0.…, p < 10⁻… (red); (d) r = 0.…, p = 0.… (black) and r = 0.…, p = 0.04 (red). Same parameters as in Figs. 5 and 6, for low and high bifurcation, respectively.

This shows that in our nonlinear setting, linear controllability measures do not provide additional insights compared to the weighted node degree, and that intuitions based on these measures can be misleading. The prediction of which nodes should get the strongest control input depends not only on the structural network connectivity, but also on the location in state space, the control task, and other factors like the amount of noise.
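For reference, both diagnostics can be computed directly from a structural connectivity matrix. The sketch below follows the standard definitions for a discretized, stabilized linear system (in the spirit of Gu et al. [23]); the normalization of A is one common choice and may differ from Appendix B in detail, and a symmetric connectome is assumed:

```python
import numpy as np

def controllability_measures(A):
    """Average and modal controllability per node for the discrete-time
    linear system x(t+1) = A_norm x(t) + e_k u(t), with A symmetric.
    Returns (average, modal), each of shape (N,)."""
    # common stabilizing normalization: divide by 1 + largest singular value
    A_norm = A / (1.0 + np.linalg.svd(A, compute_uv=False)[0])
    lam, V = np.linalg.eigh(A_norm)        # eigenmodes of the network
    # modal controllability: phi_k = sum_j (1 - lam_j^2) V[k, j]^2
    modal = ((1.0 - lam**2)[None, :] * V**2).sum(axis=1)
    # average controllability: trace of the infinite-horizon Gramian for
    # single-node input at k: trace(W_k) = sum_j V[k, j]^2 / (1 - lam_j^2)
    average = (V**2 / (1.0 - lam**2)[None, :]).sum(axis=1)
    return average, modal
```

On a simple star graph this reproduces the correlations quoted above: the hub has the highest average controllability and the lowest modal controllability, which is exactly the degree dependence the text argues does not generalize to the nonlinear setting.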
VI. DISCUSSION
In this contribution we apply techniques from the optimal control of nonlinear dynamical systems to the dynamics of brain network models. Nodes were equipped with FitzHugh-Nagumo oscillators, since they are simple and well studied nonlinear models for neural dynamics. Changing the background input of the FHN nodes or the global coupling strength of the network can both lead to transitions between two different stable fixed points and two different limit cycle attractors (laLC and haLC). The interaction between nodes in different oscillation states (mixLC) can lead to asynchronous network dynamics. At the bifurcations, we also find different coexisting stable states.

FIG. 14: Mean energies E_k of the optimal control inputs [Eq. (13)] plotted against the average controllability for the synchronization task (cf. Fig. 9). (a) E_k vs. average controllability, with the control signal considered during the initial time interval t ∈ [0, t_c), at the low bifurcation (point S1 in Fig. 3a). (b) Same as (a) but at the high bifurcation (point S2 in Fig. 3a). Solid red lines denote the linear regression results with the following obtained coefficients: (a) r = 0.…, p < 10⁻…; (b) r = −0.…, p = 0.12 (not significant). (c) E_k vs. average controllability, with the control signal considered for time t ∈ [t_c, T], at S1. (d) Same as (c) but at S2. All parameters as in Fig. 9.

The general mathematical framework of the optimal control of partial differential equations [29] is adapted for noisy dynamical systems on graphs, where the local network dynamics, the noise level, the network connectivity, and the local coupling schemes [Eq. (2)] can be freely chosen. The state-dependent part of the cost functional is averaged over multiple noise realizations and penalizes the deviation of the network dynamics from a task-dependent control target [Eq. (4) for attractor switching, Eq. (5) for synchronization]. The part of the cost functional which depends on the control input [Eq. (8)] penalizes the control energy and non-sparse solutions. The presented method is applicable to any network that can be described in the form of Eq. (2), including models of power grids, social networks, or climate dynamics.

A common problem for gradient based methods is the presence of local optima of the cost functional, which may prevent the convergence to a globally optimal solution. To alleviate this problem, we performed the minimization as described in Section II C with different initial conditions u for the control time series and chose the result with minimal cost. The set of initial conditions included u = 0 as well as valid control time series taken from different parametrizations.

We use the optimal control method to cause targeted attractor switching between previously identified coexisting stable states. When no sparsity is enforced, we show that it is optimal to resonantly drive all nodes to transition to the high amplitude oscillatory state.
When the task is to switch to a state with lower amplitude or no oscillation, the optimal strategy is to apply a precisely timed biphasic pulse. The nodes that receive the largest control energy are the same for both of these switching directions and are also the ones that are still controlled when we enforce sparsity in space. Depending on the location in state space, either nodes with high degree (at the low bifurcation) or low degree (at the high bifurcation) are the ones that most efficiently drive the network dynamics from the initial to the target state. When sparsity is enforced, controlling only a small number of these driving nodes with increased control energies E_k is sufficient to switch from one attractor to another in an optimal way.

In the second application, we show that our method can also be used to control global properties of the network dynamics, such as the average cross-correlation between node activities in the oscillating regime. Immediately with control onset, the control signal acts on all nodes to align their phases. As soon as this is achieved, the control maintains the synchronous oscillation with periodic control signals. Which nodes receive larger control input for the initial alignment again depends on the location in state space. Both at the low and the high bifurcation, individually adapted control inputs lead to a successful synchronization of the network dynamics. While the average cross-correlation R is increased also with sparse control, the synchronous target state is only achieved in an optimal way when all nodes in the network receive a finite control input. This suggests that synchronizing all nodes needs a collective intervention, while attractor switching can be caused by controlling only a few selected nodes. The introduction of noise to the system makes the dynamics of each node less predictable, resulting in a loss of specificity and more similar control inputs to the nodes.
Consequently, the optimal control of noisy network dynamics requires a higher total control energy E (cf. Fig. 11) while resulting in a lower precision (cf. Fig. 12 for I_s = 0).

The information on the different states and bifurcations, which we show to be a decisive factor for choosing the optimal control sites in our applications, is lost when techniques from linear control theory are applied. While the predictions do qualitatively agree for certain dynamical systems, control tasks, and locations in state space (cf. [25]), this agreement does not hold in a general setting. It would, however, be worthwhile to investigate for which classes of dynamical systems and control tasks "controllability measures" can be defined that depend only on the properties of the connectome.

In this contribution, the techniques of nonlinear optimal control were applied to a simplified model of the global brain dynamics. The bifurcations in our FHN model (cf. Fig. 1) phenomenologically capture the state transitions found in more complex, biophysically motivated network models [58], as we have adapted the FHN parameters to resemble their dynamics. Locations at the lower bifurcation of the network model had been implicated as proper "operating points" for modelling the brain's resting state activity [17–19], while locations at the high bifurcation may be considered a cartoon model for the global brain dynamics in certain sleep states [20]. Delays, however, have been neglected so far. Conduction delays in human white matter have been estimated to be of the order of approximately 5-15 ms [54]. In lieu of a biologically more detailed model of the effects of control inputs on whole-brain dynamics, delays should thus be included as soon as oscillations with frequencies above a few Hz become relevant. The formalism of nonlinear optimal control summarized in Section II can be extended to cover finite delays to model finite signal speed in the brain. To this end, the coupling term in Eq. (2) has to be adapted, e.g. to σ(A ⊗ G)x(t − d) for the case of constant delays. The adapted coupling term reappears when computing the adjoint state with the differential equation [Eq. (11)], and thus the iterative algorithm for the numerical optimization (cf. Section II C) has to be adapted accordingly in the calculation of step 3. Extending the formalism in such a way may then help to assess the role of brain areas in steering the global brain dynamics in the context of specific control goals, for example, in achieving task-specific changes in functional connectivity patterns as observed in human subjects.

Here we used a simplified model of the global brain dynamics to showcase the applicability of nonlinear optimal control and its added value beyond methods from linear control theory. When applied to biophysically more realistic models of whole-brain activity (cf. [20]), nonlinear optimal control may serve as a tool to evaluate the impact of external stimulation and to facilitate the design of new brain stimulation protocols. Non-invasive brain stimulation like transcranial current stimulation [69–71] is a highly promising technique to perturb the global brain dynamics with the goal to improve sensory [72], motor [73], and cognitive abilities [74] of human subjects. Using a cost functional which penalizes spatial sparseness and control energy may, for example, help to reduce the required current applied to a subject's brain as well as to focus the electrical perturbation on the relevant brain areas only. While exact target trajectories are unlikely to be available in a practical setting, state-dependent cost functionals which refer to global quantities [cf. Eq. (5) for the synchronization task] may well be. In particular, reformulating the control formalism in frequency space will be beneficial here.
It would, for example, allow for computing optimal interventions for changing the power of certain brain rhythms, which can be monitored by electroencephalography and which are common control targets in clinical settings (cf. [75]). As shown in other computational and physiological studies [58, 76] and emphasized again by our results, the timing of the control inputs may also be crucial for successful interventions. Here we see a high potential of nonlinear optimal control in guiding electrical brain stimulation under electro- or magnetoencephalography [77, 78], a setting which allows for such precisely timed control inputs. Results of the nonlinear optimal control method may thus inspire physiological studies on brain stimulation to explore new control paradigms for clinical stimulation protocols with potentially better control results, reduced energy expenditure, and higher spatial sparseness.

VII. ACKNOWLEDGMENTS
We thank Dr. Michael Scholz, Cristiana Dimulescu, and Lena Salfenmoser for scientific discussions, and Prof. Dr. Eckehard Schöll for his comments on the manuscript. This work was supported by the German Research Foundation (163436311 – SFB 910, 327654276 – SFB 1315, and 390523135 – EXC 2002/1 "Science of Intelligence"). Human connectivity data were provided by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657), funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University.
Appendix A: Mathematical formulation of the optimal control problem

1. Optimization problem: general formulation
In this section, the mathematical foundations for controlling the network dynamics are laid out. We use nonlinear sparse optimal control. All derivations below are similar to the ones in Refs. [29, 31], where sparse optimal control was applied to a system of partial differential equations with diffusive coupling.

We consider a network of N nodes with d-dimensional dynamics each. Its dynamics is defined by a system of N stochastic differential equations,

  d/dt x(t) = h(x(t)) + σ(A ⊗ G) x(t) + (B ⊗ K) u(t) + η(I_N ⊗ D) ξ(t),   (A1)

with the state vector x = (x_1, ..., x_N) and x_i = (x_i1, ..., x_id). ⊗ denotes the Kronecker product. The local node dynamics is governed by h(x) = (h(x_1), ..., h(x_N)) with h(x_i) = (h_1(x_i), ..., h_d(x_i)). The coupling term consists of the N × N-dimensional adjacency matrix A and the d × d-dimensional local coupling scheme G. u = (u_1, ..., u_N) with u_i = (u_i1, ..., u_id) denotes the vector of control inputs, B is a diagonal N × N-dimensional control matrix, and K the d × d-dimensional local control scheme. The noise term consists of the noise intensity η and the Kronecker product of the N-dimensional identity matrix I_N and the d × d-dimensional local noise scheme D. The stochastic process is given by ξ. Boundary conditions are given by

  x(t = 0) = x_0.   (A2)

Since our system is stochastic while the control is deterministic, the optimal control must minimize a cost functional which is an expectation over many realizations of the noise. It is defined as

  ⟨F(x(u), u)⟩ = ∫₀ᵀ ⟨f(x(u), u)⟩ dt,   (A3)

where the angle brackets denote the expectation. Our goal is to find the optimal control u* that minimizes the above mean cost functional. Hence, the variational inequality

  ⟨F(x(u_i), u_i)⟩ − ⟨F(x(u_i*), u_i*)⟩ ≥ 0   (A4)

must hold for each component u_i separately.
We linearize around the optimal solution and write \delta u_i = (u_i - u_i^*) to obtain

\langle F(x(u_i), u_i) \rangle - \langle F(x(u_i^*), u_i^*) \rangle \approx \frac{\partial}{\partial u_i} \langle F(x(u_i), u_i) \rangle \Big|_{u_i^*} \delta u_i \geq 0.   (A5)

Since the above inequality (A5) has to hold for all \delta u_i and -\delta u_i, we find the stronger necessary condition

\langle F(x(u_i), u_i) \rangle - \langle F(x(u_i^*), u_i^*) \rangle = 0   (A6)

for the optimal solution. Inserting the expression from the right side of Eq. (A3) yields

\int_0^T \big[ \langle f(x(u_i), u_i) \rangle - \langle f(x(u_i^*), u_i^*) \rangle \big] dt \approx \int_0^T \frac{\partial}{\partial u_i} \langle f(x(u_i), u_i) \rangle \Big|_{u_i^*} (u_i - u_i^*) \, dt = 0.   (A7)

Since Eq. (A7) must hold for all components i, we write

\int_0^T \nabla_u \langle f(x(u), u) \rangle \circ (u - u^*) \, dt = 0,   (A8)

where \circ denotes the element-wise multiplication (Schur product). Here we assume that the cost functional can be separated into a part F_u(u), which depends on the control inputs only and which is deterministic, and a part \langle F_x(x(u)) \rangle, which is a function of the network state x and only indirectly depends on the applied control, i.e.,

\langle F(x(u), u) \rangle = F_u(u) + \langle F_x(x(u)) \rangle.   (A9)

Equivalently, one can write

\int_0^T \langle f(x(u), u) \rangle \, dt = \int_0^T \big[ f_u(u) + \langle f_x(x(u)) \rangle \big] dt.   (A10)

Applying the chain rule in Eq. (A8) yields

\int_0^T \nabla_u \langle f(x(u), u) \rangle \circ (u - u^*) \, dt = \int_0^T \big[ \nabla_u f_u(u) + \langle D_u^T(x(u)) \nabla_x f_x(x(u)) \rangle \big] \circ (u - u^*) \, dt = 0,   (A11)

where D_u(x(u)) is the Jacobian matrix of the state x with respect to the control vector u, evaluated at the optimal control u^*. To calculate the above derivative we must find an expression for the Jacobian matrix D_u(x(u)). However, the dependence of the network state on the control inputs cannot be provided in closed form. The Lagrange formalism enables us to derive an alternative expression for the second term on the right-hand side of Eq. (A11), which can be evaluated to find the optimal control.
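For reference, the network dynamics of Eq. (A1) can be integrated numerically with an Euler-Maruyama scheme. The sketch below is a minimal example that assumes the classic textbook FitzHugh-Nagumo form for the local dynamics h and couples and drives only the activity variable; parameter values are illustrative and are not the parametrization used in the paper.

```python
import numpy as np

def fhn(x):
    """Classic FitzHugh-Nagumo local dynamics h(x_i) for all nodes.
    x has shape (N, 2): column 0 is the activity variable, column 1 the
    recovery variable. Parameters are illustrative textbook values."""
    u, v = x[:, 0], x[:, 1]
    du = u - u**3 / 3.0 - v + 0.7          # activity variable
    dv = 0.08 * (u + 0.7 - 0.8 * v)        # recovery variable
    return np.column_stack([du, dv])

def simulate(A, x0, T=200.0, dt=0.05, sigma=0.5, eta=0.05, ctrl=None, seed=0):
    """Euler-Maruyama integration of Eq. (A1).
    G (coupling) and D (noise scheme) act on the activity variable only."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    G = np.array([[1.0, 0.0], [0.0, 0.0]])   # local coupling scheme
    D = np.array([[1.0, 0.0], [0.0, 0.0]])   # local noise scheme
    x = x0.copy()
    n_steps = int(round(T / dt))
    traj = np.empty((n_steps + 1, N, 2))
    traj[0] = x
    for step in range(n_steps):
        u_t = np.zeros((N, 2)) if ctrl is None else ctrl(step * dt)
        coupling = sigma * (A @ x) @ G.T     # (A kron G) x in node-major layout
        noise = eta * np.sqrt(dt) * rng.standard_normal((N, 2)) @ D.T
        x = x + dt * (fhn(x) + coupling + u_t) + noise
        traj[step + 1] = x
    return traj
```

With x stored node-major as an (N, 2) array, the Kronecker products reduce to the matrix products shown in the comments, which avoids forming the full (N·d) × (N·d) matrices.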
2. The method of Lagrange multipliers
We can use the formalism of Lagrange multipliers to formulate conditions for the optimization of the cost functional, Eq. (A3), subject to the constraint given in Eq. (A1) and the boundary conditions given in Eq. (A2). Our approach is similar to the method used in [68], and we will obtain a result that resembles what is found in [31] for a system of reaction-diffusion equations. We define the mean Lagrange function \langle L(x(t), u(t)) \rangle as

\langle L(x(t), u(t)) \rangle = \Big\langle F(x, u) - \int_0^T \Big[ \frac{d}{dt} x - h(x) - \sigma (A \otimes G) x - (B \otimes K) u - \eta (I_N \otimes D) \xi \Big]^T \phi(t) \, dt \Big\rangle
= \Big\langle \int_0^T \Big[ f(x, u) - \Big( \frac{d}{dt} x - h(x) - \sigma (A \otimes G) x - (B \otimes K) u - \eta (I_N \otimes D) \xi \Big)^T \phi(t) \Big] dt \Big\rangle.   (A12)

Equation (A1) for the dynamics of the network state x has to be satisfied for all times 0 \leq t \leq T, which is ensured by the time integration of the constraint. \phi(t) = (\phi_1(t), ..., \phi_N(t)) with \phi_i(t) = (\phi_{i1}(t), ..., \phi_{id}(t)) is the (d \cdot N)-dimensional vector of time-dependent Lagrange multipliers. Linearizing around the optimal solution [cf. Eqs. (A4), (A8)], we obtain the optimality conditions

\Big\langle \int_0^T \nabla_u L(x(u^*), u^*) \circ \delta u \, dt \Big\rangle = 0   (A13)

and

\Big\langle \int_0^T \nabla_x L(x(u^*), u^*) \circ \delta x \, dt \Big\rangle = 0,   (A14)

with \delta u = u - u^* and \delta x = x(u) - x(u^*). After inserting the Lagrange function, Eq. (A12), into Eq. (A14), we apply partial integration and the chain rule, and obtain

\langle \nabla_x L \circ \delta x \rangle = \Big\langle \int_0^T \nabla_x f(x(u^*), u^*) \circ \delta x \, dt - \int_0^T \nabla_x \Big( \frac{d}{dt} x(u^*) \Big)^T \phi \circ \delta x \, dt - \int_0^T \nabla_x \Big[ \Big( -h(x(u^*)) - \sigma (A \otimes G) x(u^*) - (B \otimes K) u^* - \eta (I_N \otimes D) \xi \Big)^T \phi \Big] \circ \delta x \, dt \Big\rangle
= \Big\langle \int_0^T \Big[ \nabla_x f_x(x) + \frac{\partial}{\partial t} \phi + \big( D_x(h) + \sigma (A \otimes G) \big)^T \phi \Big] \circ \delta x \, dt - \phi(T) \circ \delta x(T) \Big\rangle = 0.   (A15)

The boundary term

\phi(T) \circ \delta x(T) = 0   (A16)

must hold for every realization. Since \delta x(T) could be finite, the boundary condition for the vector of Lagrange multipliers is

\phi(T) = 0.   (A17)

Equation (A15) is fulfilled if the integrand vanishes. We thus obtain a (d \cdot N)-dimensional system of linear ordinary differential equations for the so-called adjoint states \phi,

-\frac{d}{dt} \phi(t) = \big( D_x(h) + \sigma (A \otimes G) \big)^T \phi(t) + \nabla_x f_x(x),   (A18)

where D_x(h) denotes the Jacobian matrix of the local dynamics with respect to the network state x. The adjoint states are obtained for every realization by solving Eq. (A18) backwards in time, obeying the boundary conditions (A17).

Inserting the Lagrange function [Eq. (A12)] into Eq. (A13), we obtain

\langle \nabla_u L \circ \delta u \rangle = \Big\langle \int_0^T \nabla_u f(x(u^*), u^*) \circ \delta u \, dt - \int_0^T \frac{d}{dt} \big[ D_u(x(u^*)) \big]^T \phi \circ \delta u \, dt - \int_0^T \nabla_u \Big[ \Big( -h(x(u^*)) - \sigma (A \otimes G) x(u^*) - (B \otimes K) u^* \Big)^T \phi \Big] \circ \delta u \, dt \Big\rangle = 0.   (A19)

The first term in Eq. (A19) vanishes [see Eq. (A8)]. Furthermore, after applying partial integration and the chain rule, the above Eq. (A19) simplifies to

\langle \nabla_u L \circ \delta u \rangle = \Big\langle \int_0^T \big( D_u(x(u^*)) \big)^T \frac{d}{dt} \phi \circ \delta u \, dt + \int_0^T \Big[ \big( D_u(x(u^*)) \big)^T \big( D_x(h) + \sigma (A \otimes G) \big)^T \phi + (B \otimes K)^T \phi \Big] \circ \delta u \, dt \Big\rangle
= \Big\langle \int_0^T \Big[ \big( D_u(x(u^*)) \big)^T \Big( \frac{d}{dt} \phi + \big( D_x(h) + \sigma (A \otimes G) \big)^T \phi \Big) + (B \otimes K)^T \phi \Big] \circ \delta u \, dt \Big\rangle = 0,   (A20)

where the boundary term of the partial integration was set to zero because of the boundary conditions imposed on \phi and x. Inserting the differential equations (A18) that govern the dynamics of the adjoint states into Eq. (A20), we obtain

\Big\langle \int_0^T \Big( -D_u^T(x(u^*)) \nabla_x f_x(x(u^*)) + (B \otimes K)^T \phi \Big) \circ \delta u \, dt \Big\rangle = 0.   (A21)

The first term corresponds to the expression found in Eq. (A11). We can solve Eq. (A21) for this term and insert it into Eq. (A11) to obtain the optimality condition

\int_0^T \Big( \nabla_u f_u(u^*) + (B \otimes K)^T \langle \phi \rangle \Big) \circ \delta u \, dt = \int_0^T g(t) \circ \delta u \, dt = 0.   (A22)

Equation (A22) is fulfilled if the stronger optimality condition

g(t) = \nabla_u f_u(u^*) + (B \otimes K)^T \langle \phi \rangle = 0   (A23)

holds for all times t \in [0, T]. This equation is similar to the equation derived by Casas et al. [31] for a system of reaction-diffusion equations.

Appendix B: Definitions of average and modal controllability
We consider a simplified linear network dynamics of the form

x(t+1) = A x(t) + B u(t),   (B1)

with activity vector x, adjacency matrix A, control matrix B and control inputs u. In order to calculate the average controllability of node k in the network, we choose the input matrix B^{(k)} to target a single node, i.e., B^{(k)}_{ij} = 1 for i = j = k and B^{(k)}_{ij} = 0 otherwise. The controllability Gramian of this system is defined as

W^{(k)} = \sum_{\tau=0}^{\infty} A^\tau B^{(k)} B^{(k)T} (A^\tau)^T   (B2)

and the average controllability as its trace (cf. [23]),

c_{av}^k = \mathrm{Tr}\, W^{(k)}.   (B3)

Network nodes with high average controllability have a large impact on the network dynamics. This makes them important control sites, since by controlling these nodes a large number of possible activity vectors x can be reached with little control energy. They are consequently said to be able to move the system to many easy-to-reach states [23].

The modal controllability is defined based on an eigenvalue decomposition of the network adjacency matrix A,

c_{mod}^k = \sum_{j=1}^{N} \big( 1 - \lambda_j^2 \big) v_{kj}^2,   (B4)

where v_{kj} are the elements of the eigenvector matrix V = [v_{kj}] and \lambda_1, ..., \lambda_N are the corresponding eigenvalues [23]. Nodes with high modal controllability tend to be sparsely connected. They are capable of pushing the network dynamics towards activity vectors x which require substantial control energy to be reached. Thus, nodes with high modal controllability are said to be capable of steering the network into difficult-to-reach states [23].

For a detailed description and derivation of these regional controllability measures please refer to Refs. [23, 25, 68]. We use the MATLAB code provided with the publication of Ref. [23] to compute the controllability measures: https://complexsystemsupenn.com/codedata. For their calculation, only the adjacency matrix A (cf. Fig. 2) is required.

[1] T. Zaehle, S. Rach, and C.
Herrmann, Transcranial alternating current stimulation enhances individual alpha activity in human EEG, PLoS ONE, e13766 (2010).
[2] T. Neuling, S. Rach, and C. Herrmann, Orchestrating neuronal networks: sustained after-effects of transcranial alternating current stimulation depend upon brain states, Frontiers in Human Neuroscience, 161 (2013).
[3] R. Polanía, M. A. Nitsche, C. Korman, G. Batsikadze, and W. Paulus, The importance of timing in segregated theta phase-coupling for cognitive performance, Current Biology, 1314 (2012).
[4] D. Strüber, S. Rach, S. Trautmann-Lengsfeld, A. Engel, and C. Herrmann, Antiphasic 40 Hz oscillatory current stimulation affects bistable motion perception, Brain Topography, 158 (2014).
[5] J. Ladenbauer, J. Ladenbauer, N. Külzow, R. de Boor, E. Avramova, U. Grittner, and A. Flöel, Promoting sleep oscillations and their functional coupling by transcranial stimulation enhances memory consolidation in mild cognitive impairment, Journal of Neuroscience, 7111 (2017).
[6] W. Sun, W. Fu, W. Mao, D. Wang, and Y. Wang, Low-frequency repetitive transcranial magnetic stimulation for the treatment of refractory partial epilepsy, Clinical EEG and Neuroscience, 40 (2011).
[7] F. Fregni, P. Otachi, A. Valle, P. Boggio, G. Thut, S. Rigonatti, A. Pascual-Leone, and K. Valente, A randomized clinical trial of repetitive transcranial magnetic stimulation in patients with refractory epilepsy, Annals of Neurology, 447 (2006).
[8] A. Hasan, P. Falkai, and T. Wobrock, Transcranial brain stimulation in schizophrenia: targeting cortical excitability, connectivity and plasticity, Current Medicinal Chemistry, 405 (2013).
[9] J. Cole, C. Bernacki, A. Helmer, N. Pinninti, and J. O'Reardon, Efficacy of transcranial magnetic stimulation (TMS) in the treatment of schizophrenia: A review of the literature to date, Innovations in Clinical Neuroscience, 12 (2015).
[10] I.
Fried, Brain stimulation in Alzheimer's disease, Journal of Alzheimer's Disease, 789 (2016).
[11] M. S. George, J. J. Taylor, and E. B. Short, The expanding evidence base for rTMS treatment of depression, Current Opinion in Psychiatry, 13 (2013).
[12] D. S. Bassett and O. Sporns, Network neuroscience, Nature Neuroscience, 353 (2017).
[13] K. Amunts, M. Hawrylycz, D. Van Essen, J. Van Horn, N. Harel, J.-B. Poline, F. De Martino, J. Bjaalie, G. Dehaene-Lambertz, S. Dehaene, P. Valdes-Sosa, B. Thirion, K. Zilles, S. Hill, M. Abrams, P. Tass, W. Vanduffel, A. Evans, and S. Eickhoff, Interoperable atlases of the human brain, NeuroImage, 525 (2014).
[14] E. T. Rolls, M. Joliot, and N. Tzourio-Mazoyer, Implementation of a new parcellation of the orbitofrontal cortex in the automated anatomical labeling atlas, NeuroImage, 1 (2015).
[15] S. B. Eickhoff, B. T. Yeo, and S. Genon, Imaging-based parcellations of the human brain, Nature Reviews Neuroscience, 672 (2018).
[16] M. Breakspear, Dynamic models of large-scale brain activity, Nature Neuroscience, 340 (2017).
[17] G. Deco and V. K. Jirsa, Ongoing cortical activity at rest: criticality, multistability, and ghost attractors, Journal of Neuroscience, 3366 (2012).
[18] G. Deco, A. Ponce-Alvarez, D. Mantini, G. L. Romani, P. Hagmann, and M. Corbetta, Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations, Journal of Neuroscience, 11239 (2013).
[19] M. Demirtaş, J. B. Burt, M. Helmer, J. L. Ji, B. D. Adkinson, M. F. Glasser, D. C. Van Essen, S. N. Sotiropoulos, A. Anticevic, and J. D. Murray, Hierarchical heterogeneity across human cortex shapes large-scale neural dynamics, Neuron, 1181 (2019).
[20] C. Cakan, C. Dimulescu, L. Khakimova, D. Obst, A. Flöel, and K. Obermayer, A deep sleep model of the human brain: how slow waves emerge due to adaptation and are guided by the connectome, arXiv preprint arXiv:2011.14731 (2020).
[21] E. Tang and D. S.
Bassett, Colloquium: Control of dynamics in brain networks, Reviews of Modern Physics, 031003 (2018).
[22] S. Gu, R. F. Betzel, M. G. Mattar, M. Cieslak, P. R. Delio, S. T. Grafton, F. Pasqualetti, and D. S. Bassett, Optimal trajectories of brain state transitions, NeuroImage, 305 (2017).
[23] S. Gu, F. Pasqualetti, M. Cieslak, Q. K. Telesford, B. Y. Alfred, A. E. Kahn, J. D. Medaglia, J. M. Vettel, M. B. Miller, S. T. Grafton, et al., Controllability of structural brain networks, Nature Communications, 1 (2015).
[24] M. Golos, V. Jirsa, and E. Daucé, Multistability in large scale models of brain activity, PLoS Computational Biology, e1004644 (2015).
[25] S. F. Muldoon, F. Pasqualetti, S. Gu, M. Cieslak, S. T. Grafton, J. M. Vettel, and D. S. Bassett, Stimulation-based control of dynamic brain networks, PLoS Computational Biology, e1005076 (2016).
[26] L. D. Berkovitz and N. G. Medhin, Nonlinear Optimal Control Theory (CRC Press, 2012).
[27] R. Herzog, G. Stadler, and G. Wachsmuth, Directional sparsity in optimal control of partial differential equations, SIAM Journal on Control and Optimization, 943 (2012).
[28] E. Casas, R. Herzog, and G. Wachsmuth, Analysis of spatio-temporally sparse optimal control problems of semilinear parabolic equations, ESAIM: Control, Optimisation and Calculus of Variations, 263 (2017).
[29] F. Tröltzsch, Optimal Control of Partial Differential Equations: Theory, Methods, and Applications, Vol. 112 (American Mathematical Soc., 2010).
[30] W. Stannat and L. Wessels, Deterministic control of stochastic reaction-diffusion equations, arXiv preprint arXiv:1905.09074 (2019).
[31] E. Casas, C. Ryll, and F. Tröltzsch, Sparse optimal control of the Schlögl and FitzHugh–Nagumo systems, Computational Methods in Applied Mathematics, 415 (2013).
[32] R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophysical Journal, 445 (1961).
[33] J. Nagumo, S. Arimoto, and S.
Yoshizawa, An active pulse transmission line simulating nerve axon, Proceedings of the IRE, 2061 (1962).
[34] T. Kostova, R. Ravindran, and M. Schonbek, FitzHugh–Nagumo revisited: Types of bifurcations, periodical forcing and stability regions by a Lyapunov functional, International Journal of Bifurcation and Chaos, 913 (2004).
[35] E. Schöll, G. Hiller, P. Hövel, and M. A. Dahlem, Time-delayed feedback in neurosystems, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 1079 (2009).
[36] S. Eydam, I. Franović, and M. Wolfrum, Leap-frog patterns in systems of two coupled FitzHugh-Nagumo units, Physical Review E, 042207 (2019).
[37] I. Shepelev and T. Vadivasova, Variety of spatio-temporal regimes in a 2D lattice of coupled bistable FitzHugh-Nagumo oscillators. Formation mechanisms of spiral and double-well chimeras, Communications in Nonlinear Science and Numerical Simulation, 104925 (2019).
[38] S. Plotnikov, J. Lehnert, A. Fradkov, and E. Schöll, Synchronization in heterogeneous FitzHugh-Nagumo networks with hierarchical architecture, Physical Review E, 012203 (2016).
[39] J. Lehnert, T. Dahms, P. Hövel, and E. Schöll, Loss of synchronization in complex neuronal networks with delay, EPL (Europhysics Letters), 60013 (2011).
[40] C. Cakan, J. Lehnert, and E. Schöll, Heterogeneous delays in neural networks, The European Physical Journal B, 1 (2014).
[41] D. Nikitin, I. Omelchenko, A. Zakharova, M. Avetyan, A. L. Fradkov, and E. Schöll, Complex partial synchronization patterns in networks of delay-coupled neurons, Philosophical Transactions of the Royal Society A, 20180128 (2019).
[42] I. Omelchenko, T. Hülser, A. Zakharova, and E. Schöll, Control of chimera states in multilayer networks, Frontiers in Applied Mathematics and Statistics, 67 (2019).
[43] G. Ruzzene, I. Omelchenko, J. Sawicki, A. Zakharova, E. Schöll, and R. G.
Andrzejak, Remote pacemaker control of chimera states in multilayer networks of neurons, Physical Review E, 052216 (2020).
[44] T. Chouzouris, I. Omelchenko, A. Zakharova, J. Hlinka, P. Jiruska, and E. Schöll, Chimera states in brain networks: Empirical neural vs. modular fractal connectivity, Chaos: An Interdisciplinary Journal of Nonlinear Science, 045112 (2018).
[45] M. Gerster, R. Berner, J. Sawicki, A. Zakharova, A. Škoch, J. Hlinka, K. Lehnertz, and E. Schöll, FitzHugh–Nagumo oscillators on complex networks mimic epileptic-seizure-related synchronization phenomena, Chaos: An Interdisciplinary Journal of Nonlinear Science, 123130 (2020).
[46] L. Ramlow, J. Sawicki, A. Zakharova, J. Hlinka, J. C. Claussen, and E. Schöll, Partial synchronization in empirical brain networks as a model for unihemispheric sleep, EPL (Europhysics Letters), 50007 (2019).
[47] A. Ghosh, Y. Rho, A. McIntosh, R. Kötter, and V. Jirsa, Cortical network dynamics with time delays reveals functional connectivity in the resting brain, Cognitive Neurodynamics, 115 (2008).
[48] V. Vuksanović and P. Hövel, Dynamic changes in network synchrony reveal resting-state functional networks, Chaos: An Interdisciplinary Journal of Nonlinear Science, 023116 (2015).
[49] A. Messé, M.-T. Hütt, P. König, and C. C. Hilgetag, A closer look at the apparent correlation of structural and functional connectivity in excitable neural networks, Scientific Reports, 7870 (2015).
[50] R. Buchholz, H. Engel, E. Kammann, and F. Tröltzsch, On the optimal control of the Schlögl-model, Computational Optimization and Applications, 153 (2013).
[51] E. Polak and G. Ribiere, Note sur la convergence de méthodes de directions conjuguées, ESAIM: Mathematical Modelling and Numerical Analysis, 35 (1969).
[52] J. Nocedal and S. Wright, Numerical Optimization (Springer Science & Business Media, 2006).
[53] R. Fletcher and C. M.
Reeves, Function minimization by conjugate gradients, The Computer Journal, 149 (1964).
[54] R. Caminiti, F. Carducci, C. Piervincenzi, A. Battaglia-Mayer, G. Confalone, F. Visco-Comandini, P. Pantano, and G. M. Innocenti, Diameter, length, speed, and conduction delay of callosal axons in macaque monkeys and humans: comparing data from histology and magnetic resonance imaging diffusion tractography, Journal of Neuroscience, 14501 (2013).
[55] M. Steriade, A. Nunez, and F. Amzica, A novel slow (1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components, Journal of Neuroscience, 3252 (1993).
[56] M. Mölle and J. Born, Slow oscillations orchestrating fast oscillations and memory consolidation, Progress in Brain Research, 93 (2011).
[57] E. M. Izhikevich, Dynamical Systems in Neuroscience (MIT Press, 2007).
[58] C. Cakan and K. Obermayer, Biophysically grounded mean-field models of neural populations under electrical stimulation, PLOS Computational Biology, 1 (2020).
[59] D. C. Van Essen, S. M. Smith, D. M. Barch, T. E. Behrens, E. Yacoub, K. Ugurbil, W.-M. H. Consortium, et al., The WU-Minn Human Connectome Project: an overview, NeuroImage, 62 (2013).
[60] M. Jenkinson, C. F. Beckmann, T. E. Behrens, M. W. Woolrich, and S. M. Smith, FSL, NeuroImage, 782 (2012).
[61] M. H. Mohajerani, A. W. Chan, M. Mohsenvand, J. LeDue, R. Liu, D. A. McVea, J. D. Boyd, Y. T. Wang, M. Reimers, and T. H. Murphy, Spontaneous cortical activity alternates between motifs defined by regional axonal projections, Nature Neuroscience, 1426 (2013).
[62] D. Reato, A. Rahman, M. Bikson, and L. C. Parra, Low-intensity electrical stimulation affects network dynamics by modulating population rate and spike timing, Journal of Neuroscience, 15067 (2010).
[63] Noise-induced transitions in physics, chemistry, and biology, in Noise-Induced Transitions: Theory and Applications in Physics, Chemistry, and Biology (Springer Berlin Heidelberg, Berlin, Heidelberg, 1984) pp.
164–200.
[64] S. Schmidt, M. Scholz, K. Obermayer, and S. A. Brandt, Patterned brain stimulation, what a framework with rhythmic and noisy components might tell us about recovery maximization, Frontiers in Human Neuroscience, 325 (2013).
[65] C. Honey, O. Sporns, L. Cammoun, X. Gigandet, J.-P. Thiran, R. Meuli, and P. Hagmann, Predicting human resting-state functional connectivity from structural connectivity, Proceedings of the National Academy of Sciences, 2035 (2009).
[66] R. F. Galán, On how network architecture determines the dominant patterns of spontaneous neural activity, PLoS ONE, e2148 (2008).
[67] A. Hamdan and A. Nayfeh, Measures of modal controllability and observability for first- and second-order linear systems, Journal of Guidance, Control, and Dynamics, 421 (1989).
[68] E. Tang, H. Ju, G. L. Baum, D. R. Roalf, T. D. Satterthwaite, F. Pasqualetti, and D. S. Bassett, Control of brain network dynamics across diverse scales of space and time, Physical Review E, 062301 (2020).
[69] M. A. Nitsche and W. Paulus, Transcranial direct current stimulation – update 2011, Restorative Neurology and Neuroscience, 463 (2011).
[70] A. Antal, K. Boros, C. Poreisz, L. Chaieb, D. Terney, and W. Paulus, Comparatively weak after-effects of transcranial alternating current stimulation (tACS) on cortical excitability in humans, Brain Stimulation, 97 (2008).
[71] D. Terney, L. Chaieb, V. Moliadze, A. Antal, and W. Paulus, Increasing human brain excitability by transcranial high-frequency random noise stimulation, Journal of Neuroscience, 14147 (2008).
[72] J. R. Behrens, A. Kraft, K. Irlbacher, H. Gerhardt, M. C. Olma, and S. A. Brandt, Long-lasting enhancement of visual perception with repetitive noninvasive transcranial direct current stimulation, Frontiers in Cellular Neuroscience, 238 (2017).
[73] M. Moisa, R. Polania, M. Grueschow, and C. C.
Ruff, Brain network mechanisms underlying motor enhancement by transcranial entrainment of gamma oscillations, Journal of Neuroscience, 12053 (2016).
[74] L. Marshall, M. Mölle, M. Hallschmid, and J. Born, Transcranial direct current stimulation during sleep improves declarative memory, Journal of Neuroscience, 9985 (2004).
[75] J. Ladenbauer, N. Külzow, S. Passmann, D. Antonenko, U. Grittner, S. Tamm, and A. Flöel, Brain stimulation during an afternoon nap boosts slow oscillatory activity and memory consolidation in older adults, NeuroImage, 311 (2016).
[76] S. Alagapan, S. L. Schmidt, J. Lefebvre, E. Hadar, H. W. Shin, and F. Fröhlich, Modulation of cortical oscillations by low-frequency direct cortical stimulation is state-dependent, PLoS Biology, e1002424 (2016).
[77] G. Thut, T. O. Bergmann, F. Fröhlich, S. R. Soekadar, J.-S. Brittain, A. Valero-Cabré, A. T. Sack, C. Miniussi, A. Antal, H. R. Siebner, et al., Guiding transcranial brain stimulation by EEG/MEG to interact with ongoing brain activity and associated functions: a position paper, Clinical Neurophysiology, 843 (2017).
[78] T. O. Bergmann, Brain state-dependent brain stimulation, Frontiers in Psychology, 2108 (2018).
[79] G. Gong, P. Rosa-Neto, F. Carbonell, Z. J. Chen, Y. He, and A. C. Evans, Age- and gender-related differences in the cortical anatomical network, Journal of Neuroscience, 15684 (2009).

Supplemental Material

We use the Diffusion Tensor Imaging (DTI) data of 12 human subjects from the Human Connectome Project (Young Adults HCP) [59] with the following IDs: 101309, 121416, 211215, 211619, 212116, 213522, 219231, 220721, 268749, 284646, 303119, 329844. All subjects are male, healthy, and 26-30 years old.
Data acquisition and preprocessing is described in Ref. [59]. We used BEDPOSTX (Bayesian Estimation of Diffusion Parameters Obtained using Sampling Techniques) from the FSL toolbox [60] for building up distributions on diffusion parameters, which automatically determines the number of crossing fibres per voxel. Based on these diffusion parameters at each voxel, we then applied the PROBTRACKX (probabilistic tractography) algorithm from the same toolbox with 5000 samples per voxel. The resulting connectivity matrix A^s for each subject s was normalized by dividing the connections between any two regions by the number of voxels in the source region multiplied by 5000. Self-connections were deleted, A^s_{ii} = 0 for all i, and we averaged the structural connectivity matrices of the 12 individual subjects to obtain the adjacency matrix A. Probabilistic fiber tracking does not provide information about directionality, hence the structural connectivity matrix was symmetrized (A ← A + A^T). Since application of the PROBTRACKX algorithm typically results in a certain number of false positive fibers and thus assigns non-zero connection strengths to all pairs of brain regions, we enforced a sparsity of 20% [79] by discarding all connections with a relative connection strength below a threshold value.

Figure S1 shows traces of the activity variables x_k and the respective total inputs for each node over time for different background inputs µ (see bifurcation diagram shown in Fig. 3b of the main paper). The total input

µ + σ \sum_{i=1}^{N} A_{ki} x_i(t) + η ξ_k(t)   (S1)

to each node consists of the sum of the background input µ, the input σ Σ_i A_{ki} x_i(t) from the other nodes in the network, and the input η ξ_k(t) from the noise process. We expect qualitatively different dynamics if the total input of a node lies inside vs. outside the interval given by Eq. (17) of the main paper (dashed blue lines in Fig. S1), because total inputs inside this interval correspond to an individual FHN oscillator being in its limit cycle.

For small µ, as shown in Fig. S1a, the total input is below the lower threshold, and the network is in a stable fixed point. If µ is larger, with the total input still below but closer to the lower threshold, a low-amplitude limit cycle (laLC) emerges (Fig. S1b). These oscillations cannot be explained by the dynamics of the individual FHN oscillator but emerge as a result of network effects. If the total inputs of some of the nodes enter the described interval, as happens for some (Fig. S1c) or all (Fig. S1d) of the maxima of activity, the network dynamics exhibits high-amplitude oscillations. The elevated amplitudes for nodes with total input within the interval increase the coupling inputs to other nodes in the network, and this creates oscillations in nodes that are not in their individual LC. We call this regime the mixed limit cycle (mixLC) state. If µ is high enough so that all nodes satisfy the condition in Eq. (17), as shown in Fig. S1e, the network is in the high-amplitude limit cycle (haLC) state. For higher µ, the total input of some nodes exceeds the upper threshold and we observe mixLC states, as in Fig. S1f, that show similar aperiodic behavior as mixLC states at the low bifurcation (cf. Fig. S1c). We also observe a laLC at the high bifurcation (Fig. S1g) and a stable fixed point (Fig. S1h) for even higher values of the background input µ.

The interval criterion for the node input alone is not sufficient to explain the asynchronous or aperiodic network dynamics observed in Figs. S1c and f, but in these traces we see that the network dynamics change qualitatively depending on whether some of the node inputs fall into the considered interval. Consequently, small changes in the input to individual nodes can result in large effects on the network dynamics.
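The connectome preprocessing described above (averaging over subjects, removing self-connections, symmetrizing, and sparsity thresholding) can be sketched as follows. The per-subject PROBTRACKX normalization is assumed to have been applied already, and the function name is illustrative.

```python
import numpy as np

def build_adjacency(subject_matrices, density=0.2):
    """Average subject connectomes and enforce a given connection density.

    subject_matrices: list of per-subject connectivity matrices (already
    normalized by source-region voxel count times the number of samples).
    """
    A = np.mean(subject_matrices, axis=0)     # average over subjects
    np.fill_diagonal(A, 0.0)                  # delete self-connections
    A = A + A.T                               # symmetrize (tractography is undirected)
    # Keep only the strongest connections so that the requested fraction of
    # off-diagonal entries remains non-zero (sparsity thresholding).
    off_diag = A[~np.eye(A.shape[0], dtype=bool)]
    thresh = np.quantile(off_diag, 1.0 - density)
    A[A < thresh] = 0.0
    return A
```

Because the thresholding is applied after symmetrization, the surviving connections remain symmetric, and the diagonal stays zero.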
Figure S2a shows the activity variables x_k of three nodes plotted against each other for the asynchronous mixed state at the low bifurcation in Fig. S1c (equivalent to Fig. 3c(3) of the main paper). We observe open trajectories, which differ between different oscillation periods. This is a strong indication that the analyzed network state is not only asynchronous but also aperiodic. Figure S2b shows the equivalent phenomenon for the trajectories of the investigated asynchronous state at point S1 (cf. Fig. 3a of the main paper).

Figure S3a shows the phase space oscillations in the (x_{k1}, x_{k2}) plane of three other network nodes, k ∈ {71, 33, 72} (lowest, intermediate, and highest weighted degree d_k in the network), for point S1 (cf. Fig. 3a of the main paper). The large differences in the ranges of the trajectories are caused by total inputs [Eq. (S1)] of different strength due to the different weighted in-degrees of the nodes. Even small changes in the input to a node can have a large effect on the shape of the limit cycle, especially close to the bifurcations (see also Fig. S3b). All trajectories are plotted for t ∈ [5000, …].

Figures S4a–h show the activity variables x_k and the total inputs [Eq. (S1)] for each node for the same parametrizations of the network dynamics as Figs. S1a–h, with additional Gaussian white noise. Noise increases the average network cross-correlation from R = 0.47 (η = 0) to R = 0.58 in the noisy case. If a random fluctuation drives a node into its respective individual LC, this increases the coupling input to the other nodes in the network. With more nodes oscillating in the individual LC, the driving input to the remaining nodes is increased, which can then be sufficient to entrain a synchronous network oscillation. Close to the high bifurcation in Fig. S4f, a similar effect counteracts the aperiodic collapsing of the network oscillation (cf. Fig. S1f). The periodic and synchronous states in Figs. S4d, e are robust against the influence of noise, showing no qualitative change in their dynamics compared to the noise-free case in Figs. S1d, e. Figure S4h shows, similar to Fig. S4a at the low bifurcation, a self-sustained noise-induced network oscillation at the high bifurcation. Figure S4i shows that if the network dynamics are in a fixed point far away from a bifurcation, we observe uncorrelated fluctuations in the activity variable and the combined input, instead of the emerging oscillations in Figs. S4a and h.

When evaluating the results of the optimal control method for the synchronization task in the case of finite noise, it is unreasonable to expect that the optimal control algorithm will achieve the target mean network cross-correlation of R_T = 1. In the synchronization task in Section IV B of the main paper, the optimal control method increases the average network cross-correlation from R = 0.25 in the uncontrolled case to R = 0.83. This should be compared to the effect of noise on R for a previously synchronous network state. Figure S4j shows the effect of noise on an uncontrolled, synchronous state with the same low global coupling strength as S1: the cross-correlation R = 0.99 of the noise-free (η = 0) case is substantially reduced to R = 0.75 when noise is introduced.

Figure S1: Activity traces over time of all nodes in the network (different colors denote different nodes) for different external input µ, as indicated in the respective title. Parameter values of (a)–(h) correspond to Figs. 3c(1–8) of the main paper with σ = 0.08 and η = 0 (noise-free case). Top panels show the activity variables x_k as a function of time for random initial conditions after a sufficiently long transient (t̄ = 10000). Bottom panels show the respective time series of the total input [Eq. (S1)] to the network nodes. Blue dashed lines indicate the input values for which a single FHN oscillator transitions into its individual LC [0.73 and 1.33, see Eq. (17) of the main paper].

Figure S2: Trajectories of activity variables x_k for different node combinations in the mixLC state without noise (η = 0). Trajectories are shown after t̄ = 10000 (to avoid transient effects) for a time interval of length 10000 arb. units. The open trajectories indicate aperiodic network dynamics. (a) Network parameters are σ = 0.08, µ = 0.61 (same as Fig. S1c or Fig. 3c(3) of the main paper). (b) Parameters correspond to point S1 (µ = 0.7, cf. Fig. 3a of the main paper).

Figure S3: (a) Trajectories in the (x_{k1}, x_{k2}) phase plane for 3 of the 94 nodes in the network at location S1 (µ = 0.7, cf. Fig. 3a of the main paper). All nodes have the same start and end time and are plotted for 64 time units, which is equivalent to roughly 2 cycles (after a transient of t̄ = 5000). Cyan to pink corresponds to node 71, which has the lowest weighted degree d_k in the network. Black to orange corresponds to node 33 with an intermediate value of d_k, and green to yellow corresponds to node 72, which has the highest value of d_k in the network. Color changes along the trajectory in discrete time. (b) Shape of the LC of a single, uncoupled node for seven increasing values of the background input µ (bottom left to top right).

The network dynamics as shown in Fig. 6b of the main paper does not achieve the target state (cf. initial state in Fig. 6a of the main paper) within the time interval in which the control is active. The biphasic pulse of the optimal control method occurs around t = 150, after which the network oscillation almost comes to rest. Since, however, no stable fixed point exists in the absence of control input, the oscillation amplitude grows until it reaches the laLC target state. Figure S5 shows that the laLC target state is stable and is reached about 550 time units after the control pulse. Although the control target is eventually reached, this trajectory results in a nonzero precision cost, which is evaluated in the last τ = 25 time units of the control interval. Due to energy constraints, however, this is still the optimal control solution.

Computation of the critical time t_c

The critical time t_c is defined as the first point in time when all nodes are "in synchrony". The cross-correlation measure for synchrony [Eq. (7) of the main paper] is defined for time intervals and does not allow us to determine synchrony at a given point in time. We therefore compute the Kuramoto order parameter r,

r(t) = \frac{1}{N} \Big| \sum_{k=1}^{N} e^{i\theta_k(t)} \Big|,   (S2)

for the N nodes of the network with their respective phases θ_k. We calculate the phases for node k by assigning θ_k(t̂) = 0 to all times t̂ that correspond to its oscillation maxima. We then linearly interpolate the phases θ_k ∈ [0, 2π) between subsequent maxima. The critical time t_c is then defined as the first point in time in the control period for which r(t) exceeds a threshold value.
999 holds.
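The peak-based phase assignment and the order parameter of Eq. (S2) can be sketched as follows (a minimal Python sketch; the function names and the use of scipy.signal.find_peaks to locate oscillation maxima are our own illustrative choices, not from the paper):

```python
import numpy as np
from scipy.signal import find_peaks

def peak_phases(x, t):
    """Assign theta = 0 at each oscillation maximum of x(t) and
    interpolate linearly on [0, 2*pi) between subsequent maxima.
    Times outside the first/last maximum are left as NaN."""
    peaks, _ = find_peaks(x)
    theta = np.full(t.shape, np.nan)
    for a, b in zip(peaks[:-1], peaks[1:]):
        theta[a:b] = 2.0 * np.pi * (t[a:b] - t[a]) / (t[b] - t[a])
    return theta

def kuramoto_r(thetas):
    """r(t) = |(1/N) sum_k exp(i*theta_k(t))| over the N nodes, Eq. (S2).
    thetas has shape (N, n_times)."""
    return np.abs(np.mean(np.exp(1j * thetas), axis=0))

def critical_time(thetas, t, thr=0.999):
    """First time with r(t) >= thr (NaN entries never satisfy this)."""
    idx = np.flatnonzero(kuramoto_r(thetas) >= thr)
    return t[idx[0]] if idx.size else None
```

For two identical oscillators the defined part of r(t) equals 1 and t_c is the first interpolated time point; for two antiphase oscillators r(t) stays near 0, as expected.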
Figure S6 provides additional information to Fig. 12 of the main paper by showing the fraction of controlled nodes as a function of the sparsity parameter I_s and the resulting average cross-correlation R. As expected, the fraction of controlled nodes decreases with an increasing penalty on sparsity in space. Plotting the average cross-correlation R against the fraction of controlled nodes intuitively shows how many nodes have to receive a finite control input in order to achieve a certain synchrony for different noise levels. It is not possible to achieve a significantly higher synchrony compared to the uncontrolled dynamics by controlling less than 15% of the network nodes.

Figure S4: (a)-(i) Activity traces over time of all nodes in the network (different colors denote different nodes) for different external input µ, as indicated in the respective title. Parameters for (a)-(h) are the same as in Figs. S1a-h, except for added Gaussian white noise with variance η = 0.[…]. Top panels show the activity x_k as a function of time for random initial conditions after a sufficiently long transient (t̄ = 10000). Bottom panels show the respective time series of the total input [Eq. (S1)] to the network nodes. Blue dashed lines indicate the input values for which a single FHN oscillator transitions into its individual LC (0.73 and 1.33, see Eq. (17) of the main paper). (j) Activity traces over time of all nodes for µ = 0.[…], σ = 0.025 and different noise levels; top: noise-free case (η = 0, resulting in R = 0.[…]); bottom: η = 0.[…], R = 0.[…].

Figure S5: […] (τ = 25), and together with the red shaded area it indicates the time interval in which the control is active. The dashed line at t = 675 indicates the first point in time when the cross-correlation with the laLC target state is larger than 0.[…].

Figure S6: […] for different noise levels η, equivalent to Fig. 12 of the main paper. Top row: Fraction of nodes in the network that receive a finite control input as a function of the sparsity parameter I_s. Bottom row: Achieved synchronization measured by the average cross-correlation R as a function of the fraction of controlled nodes. The mean and standard deviation are shown for 5 different initial conditions of the network dynamics, each with 20 independent realizations of the noise. Horizontal lines denote the average cross-correlations R for the uncontrolled case. Points S1 (S2) are located close to the low (high) bifurcation. Parameters as in Fig. 12 of the main paper.

Figure S7: Linear controllability measures plotted against the weighted node degrees d_k. Red solid lines denote the linear regression results. Left: Average controllability (r = 0.[…], p < 10^{−[…]}). Right: Modal controllability (r = −0.[…], p < 10^{−[…]}).

Videos

Videos can be viewed at https://github.com/rederoth/nonlinearControlFHN-videos. The states (x_k, y_k) of all network nodes k are plotted as dots in the (x_k, y_k) phase plane. The color of each node corresponds to its weighted degree (green to blue indicates high to low values). The optimal control inputs ū_k are indicated by black arrows, whose direction denotes the sign and whose length denotes the absolute strength (see top right corner for scale). The time t and the duration of the control interval are indicated in the bottom right corner. If applicable, the limit cycle of a single node (no coupling, σ = 0) is indicated in light gray.
• Video 1: Attractor switching at point A1 (low bifurcation) from down-state to oscillatory state
• Video 2: Attractor switching at point A1 (low bifurcation) from oscillatory state to down-state
• Video 3: Attractor switching at point B2 (high bifurcation) from low-amplitude oscillation to high-amplitude oscillation
• Video 4: Attractor switching at point B2 (high bifurcation) from high-amplitude oscillation to low-amplitude oscillation
• Video 5: Synchronization at point S1 (low bifurcation) with η = 0 (noise-free network)
• Video 6: Synchronization at point S2 (high bifurcation) with η = 0 (noise-free network)
• Video 7: Synchronization at point S1 (low bifurcation) with η = 0.024 (noisy network)
• Video 8: Synchronization at point S2 (high bifurcation) with η = 0.024 (noisy network)
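For concreteness, the kind of dynamics visualized in the videos, additively coupled FHN nodes driven by a background input µ, a global coupling strength σ, and Gaussian noise η, can be sketched with a textbook FitzHugh-Nagumo parameterization. Note that this generic form and its constants (a, b, τ) as well as the Euler-Maruyama scheme are illustrative assumptions and may differ from the exact model equations of the main paper:

```python
import numpy as np

def simulate_fhn_network(W, mu, sigma, eta, T=1000.0, dt=0.1,
                         a=0.7, b=0.8, tau=20.0, seed=0):
    """Euler-Maruyama integration of N additively coupled FHN nodes.

    Textbook FHN form (illustrative; the paper's parameterization may differ):
        dx_k = (x_k - x_k^3/3 - y_k + sigma * sum_j W_kj x_j + mu) dt + eta dW_k
        dy_k = ((x_k + a - b * y_k) / tau) dt
    W: weighted adjacency (connectome), mu: background input mean,
    sigma: global coupling strength, eta: noise strength."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    n_steps = int(round(T / dt))
    x = rng.uniform(-1.0, 1.0, N)   # random initial activity
    y = np.zeros(N)
    xs = np.empty((n_steps, N))
    for i in range(n_steps):
        coupling = sigma * (W @ x)  # additive coupling, no delays
        noise = eta * np.sqrt(dt) * rng.standard_normal(N)
        x = x + dt * (x - x**3 / 3.0 - y + coupling + mu) + noise
        y = y + dt * (x + a - b * y) / tau
        xs[i] = x
    return xs
```

Returned activity traces xs correspond to the per-node time series shown in Figs. S1 and S4; the noise-free case is recovered with eta = 0.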
The considered linear controllability measures are known to be closely related to the weighted node degree d_k [23, 25]. Figure S7 shows the average and modal controllability measures as a function of d_k for each of the N = 94 network nodes; the average controllability shows a positive, and the modal controllability a negative, correlation with d_k. Nodes with a high average controllability tend to have a small modal controllability.

Figure S8: Mean of the optimal node-wise control energy E_k plotted against the modal controllability for the synchronization task (equivalent to Fig. 14 of the main paper). (a, b) The control signal from time t = 0 until time t = t_c is considered. Solid red lines denote the linear regression results with the following obtained coefficients: (a) r = −0.[…], p < 10^{−[…]}; (b) r = 0.[…], p = 0.[…]. (c, d) The control signal from time t = t_c until time t = T is considered. (a, c) Point S1 (low bifurcation). (b, d) Point S2 (high bifurcation). All parameters as in Fig. 14 of the main paper.

Figure S8 shows the correlation of the optimal node-wise control energies E_k of the control inputs with the modal controllability for the synchronization task discussed in Fig. 14 of the main paper. Since nodes with a high average controllability show a low modal controllability, the positive correlation at the low bifurcation for t < t_c (Fig. 14a of the main paper) is reversed, while there is again no significant correlation at the high bifurcation (cf. Fig. S8b). In the second phase of the control, with t ≥ t_c, we again observe that nodes with an intermediate modal controllability receive smaller control energies E_k (analogous to Figs. 9c, d and 14c, d of the main paper).

For the synchronization task under finite noise, we observe a change in the control strategy with increasing noise levels (cf. Section II B of the main paper). Figure S9 shows that the influence of noise also affects the correlation of the node-wise control energy E_k with the introduced measures from linear control theory.
At the low bifurcation (Fig. S9a), E_k is not correlated with the average controllability over the whole time interval of the control for η = 0 (combination of Figs. 14a and c of the main paper). With increasing noise levels, however, we observe an increasingly negative correlation. At the high bifurcation (Fig. S9b), the correlation of E_k with the average controllability is even inverted, from positive for η = 0 to increasingly negative for η = 0.012 and η = 0.[…].

Figure S9: Mean of the optimal node-wise control energy E_k plotted against the average (a, b) and modal controllability (c, d) for the synchronization task with different noise levels η. (a, c) Results at the low bifurcation (point S1). (b, d) Results at the high bifurcation (point S2). Solid lines denote the linear regression results, where black corresponds to the noise-free case (η = 0) and the remaining colors to increasing noise levels. (a) η = 0 (r = −0.[…], p = 0.63) not significant; η = 0.012 (r = −0.[…], p < 10^{−[…]}); η = 0.[…] (r = −0.[…], p < 10^{−[…]}). (b) (r = 0.[…], p = 0.[…]); (r = −0.[…], p = 0.[…]); (r = −0.[…], p < 10^{−[…]}). (c) (r = 0.[…], p = 0.49) n.s.; (r = 0.[…], p < 10^{−[…]}); (r = 0.[…], p < 10^{−[…]}). (d) (r = −0.[…], p = 0.[…]); (r = 0.[…], p = 0.[…]); (r = 0.[…], p < 10^{−[…]}). All parameters as in Fig. 11 of the main paper.
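For reference, the average and modal controllability measures discussed above are commonly computed from a stabilized, discrete-time linearization of the connectome. The sketch below follows one common convention from the linear network-control literature (node-wise controllability Gramians for average controllability, and an eigenmode-weighted measure for modal controllability); the exact normalization used in the main paper may differ:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def controllability_measures(A):
    """Average and modal controllability for a symmetric weighted
    adjacency matrix A (one common convention; normalizations vary).

    Stabilization: divide A by (1 + |lambda_max|) so that the
    discrete-time system x(t+1) = A x(t) + B u(t) is Schur stable."""
    lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))
    A = A / (1.0 + lam_max)
    N = A.shape[0]

    # Average controllability of node k: trace of the controllability
    # Gramian W_k solving  W_k = A W_k A^T + e_k e_k^T.
    avg = np.empty(N)
    for k in range(N):
        B = np.zeros((N, 1))
        B[k, 0] = 1.0
        W_k = solve_discrete_lyapunov(A, B @ B.T)
        avg[k] = np.trace(W_k)

    # Modal controllability of node k: sum_j (1 - lambda_j^2) * v_kj^2,
    # weighting how strongly node k participates in poorly controllable
    # (fast-decaying) eigenmodes.
    lam, V = np.linalg.eigh(A)
    modal = ((1.0 - lam**2)[None, :] * V**2).sum(axis=1)
    return avg, modal
```

With this convention, the average controllability is at least 1 for every node (the t = 0 term of the Gramian) and the modal controllability lies in (0, 1], which makes the opposite signs of their correlations with d_k in Fig. S7 easy to check numerically.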