Data-Driven Inference of High-Accuracy Isostable-Based Dynamical Models in Response to External Inputs
DData-Driven Inference of High-Accuracy Isostable-Based DynamicalModels in Response to External Inputs
Dan Wilson Department of Electrical Engineering and Computer Science, University of Tennessee,Knoxville, TN 37996, USAFebruary 10, 2021
Abstract
Isostable reduction is a powerful technique that can be used to characterize behaviors ofnonlinear dynamical systems in a basis of slowly decaying eigenfunctions of the Koopman op-erator. When the underlying dynamical equations are known, previously developed numericaltechniques allow for high-order accuracy computation of isostable reduced models. However, insituations where the dynamical equations are unknown, few general techniques are available thatprovide reliable estimates of the isostable reduced equations, especially in applications wherelarge magnitude inputs are considered. In this work, a purely data-driven inference strategyyielding high-accuracy isostable reduced models is developed for dynamical systems with a fixedpoint attractor. By analyzing steady state outputs of nonlinear systems in response to sinu-soidal forcing, both isostable response functions and isostable-to-output relationships can beestimated to arbitrary accuracy in an expansion performed in the isostable coordinates. De-tailed examples are considered for a population of synaptically coupled neurons and for theone-dimensional Burgers’ equation. While linear estimates of the isostable response functionsare sufficient to characterize the dynamical behavior when small magnitude inputs are consid-ered, the high-accuracy reduced order model inference strategy proposed here is essential whenconsidering large magnitude inputs. a r X i v : . [ m a t h . D S ] F e b ata-driven identification of low-order dynamical models is of primary importancein the physical, chemical, and biological sciences. Many well-established model identi-fication strategies have been developed to identify linear models from data, however,such strategies often fail in situations where nonlinear terms dominate. In this work,the Koopman operator paradigm is leveraged to develop a high-accuracy model in-ference strategy for dynamical systems with stable fixed points. By analyzing steadystate outputs of nonlinear systems in response to sinusoidal forcing it is shown that ageneral, nonlinear dynamical model can be inferred. Input-to-output relationships ofthe resulting model can be obtained to arbitrary accuracy in an asymptotic expansionperformed in a reduced coordinate basis. In detailed numerical examples with appli-cations to both neuroscience and fluid dynamics, the proposed strategy identifies alow-order model that accurately captures fundamentally nonlinear system behaviors. With an ever-growing abundance of data, there is an increasing demand for data-driven strategiesthat can be used to both predict and control the behaviors of dynamical systems. While a varietyof projection based techniques [1] are well-suited for dynamical systems that can be accuratelyapproximated by a local linearization, Koopman analysis has emerged as an essential techniquein the study of dynamical systems with fundamentally nonlinear behaviors. The key feature ofKoopman analysis is that it allows the observables of a nonlinear dynamical system to be studiedin terms of a linear, but possibly infinite-dimensional operator [2], [3]. As such, complicatednonlinear behaviors can be understood in terms of the evolution of eigenfunctions that propagate inaccordance with their associated eigenvalues [4], [5], [6]. Using the Koopman operator framework,it is generally possible to make predictions about the evolution of observables by considering theirbehavior on a low-dimensional manifold comprised of a finite number of Koopman eigenmodes [7],[8]. While Koopman eigenmodes can often be identified if the underlying dynamical equations areknown, however, when the underlying dynamical equations are unknown, the Koopman operatormust be approximated in a data-driven setting. Dynamical mode decomposition (DMD) is one suchdata-driven method [9], [10], which approximates eigenmodes from a timeseries of system observ-ables. Extended DMD [11], deep learning approaches [12], [7], and delay embedding [6], [8], [13]techniques have also been useful in certain applications. A commonality among these data-drivenKoopman analysis strategies is that they attempt to identify a low-dimensional representation ofthe infinite-dimensional Koopman operator.Because the Koopman framework often results in a low-dimensional dynamical representationfor the evolution of system observables, there has been a growing interest in applying this strategyto control problems. Time-varying inputs make the system behavior non-autonomous, and insuch systems, the Koopman eigenmodes and associated eigenvalues are time-dependent adding anadditional layer of complexity to the analysis. With this in mind, a number of Koopman-basedcontrol strategies have been developed that augment the system measurements with knowledgeof the applied control in order to characterize the dynamical behavior in response to inputs [14],[15], [16], [17]. Related strategies have been successful that consider a finite number of constantcontrols, computing the resulting Koopman eigenfunctions, and switching between control states asnecessary [18]. An alternative strategy is to identify Koopman eigenfunctions of the autonomous,2nforced system and work directly in the resulting eigenfunction basis [19]. This is the approachtaken by the isostable reduction framework [20], [21], [22], [23], [24] that uses a basis of the mostslowly decaying eigenfunctions associated with the Koopman operator. In many applications anaccurate reduced order model can be obtained using a small number of isostable coordinates [22],[24], [25] making control and analysis possible in a substantially reduced order setting.Characterizing the response of isostable coordinates to applied input is a fundamental challengewhen considering control applications in the isostable reduced coordinate basis. This can be done,for instance, by approximating the gradient of the isostable coordinate field with respect to theapplied inputs. Previous work has focused on identifying linear approximations of the isostableresponse functions with respect to a fixed point attractor [26], [27], or along specific trajectories[21], [22]. These strategies work in the limit that the applied inputs are small in magnitude butsuffer when larger magnitude inputs are considered. This limitation can be addressed by consid-ering high order expansions of the isostable response functions in a basis of isostable coordinates.Recent work [28] details such a method for approximating these isostable response functions in os-cillatory dynamical systems to arbitrary accuracy, however, this strategy requires knowledge of theunderlying model equations. Currently, no existing strategies have been developed for identifyinghigh-accuracy approximations of the isostable response functions using data-driven techniques.In this work, a purely data-driven strategy is developed for identifying reduced order isostable-based models in dynamical systems with stable fixed points. By analyzing steady-state modeloutputs in response to sinusoidal forcing, both the isostable response functions and the isostable-to-output relationships can be estimated to arbitrary accuracy in an expansion performed in the basisof isostable coordinates. This proposed method can be considered an extension of the techniquesuggested in [26], which considered only the first order accurate dynamics of the isostable reducedequations. The current work allows allows for estimates of higher order accuracy terms which arecritically important when considering large magnitude inputs. The organization of this paper is asfollows: Section 2 gives necessary background on the isostable reduced coordinate framework andalso reframes the numerical isostable reduction techniques from [28] for use in dynamical systemswith stable fixed points. Section 3 details a data-driven strategy for inferring isostable-basedreduced order models using information about steady state model outputs in response to sinusoidalforcing. Section 4 provides examples that include systems with relevance to both neuroscientificand fluid flow applications, and Section 5 gives concluding remarks.
Consider an ordinary differential equation˙ x = F ( x ) + U ( t ) , (1)where x ∈ R N denotes the state, F represents the nominal dynamics, and U ( t ) is an externalinput. Let φ ( t, x ) denote the unperturbed flow of (1), and suppose that (1) has a stable fixed pointdenoted by x . Near the fixed point, local linearization yields∆ x = J ∆ x + O ( | ∆ x | ) , (2)where ∆ x ≡ x − x and J is the Jacobian of F evaluated at x . While (2) is only accurate ina close vicinity of the fixed point, it nonetheless can be used in conjunction with the isostablecoordinate framework to characterize the infinite-time decay of solutions in the fully nonlinearbasin of attraction of x . To do so, let w k , v k , and λ k to be left eigenvectors, right eigenvectors,3nd eigenvalues of J , respectively, ordered so that | Re( λ k ) | ≤ | Re( λ k +1 ) | . Isostable coordinatesassociated with a subset eigenvalues with the smallest magnitude real components can be definedexplicitly according to ψ j ( x ) = lim t →∞ (cid:0) w Tj ( φ ( t, x ) − x ) exp( − λ j t ) (cid:1) , (3)where T indicates the matrix transpose. Intuitively, in Equation (3), the term w Tj ( φ ( t, x ) − x ) selectsfor the component of the decay in the v j direction – multiplying by the exponentially growing terman taking the limit as time approaches infinity yields the isostable coordinate. The definition (3) isclosely aligned with the one given in [22], however, other definitions that compute Fourier averagesof observables [20], [21] can also be used. As a point of emphasis, the constructive definition ofisostable coordinates (3) is only possible for a subset of eigenvalues with the smallest magnitudereal components [29]. Isostable coordinates associated with larger magnitude eigenvalues must bedefined implicitly as level sets of Koopman eigenfunctions with decay rates that are governed bytheir associated λ j .The utility of the transformation to isostable coordinates relies on the fact that many dynamicalsystems have behavior that is governed by their slowest decaying modes of the Koopman operator.By truncating the rapidly decaying components and retaining only a small number of the slowlydecaying isostables, a reduced order model can be obtained. Such isostable coordinate systemshave been applied recently to a variety of control applications [23], [30], [31], [32], [33], [34]. When U ( t ) = 0, under the evolution of ˙ x = F ( x ), all isostable coordinates decay exponentiallyaccording to ˙ ψ k = λ k ψ j in the entire nonlinear basin of attraction of the fixed point. In theanalysis to follow, the dynamics of the M slowest decaying isostable coordinates will be explicitlyconsidered and the rest will be truncated (i.e., by taking all truncated isostable coordinates tobe equal to zero at all times). The reduced order dynamics of Equation (1) on a hypersurfacedetermined by the non-truncated isostable coordinates then follows˙ ψ k = λ k ψ k + I k ( ψ , . . . , ψ M ) · U ( t ) ,k = 1 , . . . , M,x = x + G ( ψ , . . . , ψ M ) , (4)where I k denotes the gradient of ψ k , G maps the isostable coordinates to the state, and the dotdenotes the dot product. For low dimensional systems (with N ≤ x and infer the functions I k and G numerically [20],[35]. This approach, however, does not scale well to in higher dimensions. Instead,it is usually more feasible to Taylor expand both I k and G in powers of the isostable coordinates as G ( ψ , . . . , ψ M ) ≈ M (cid:88) k =1 (cid:104) ψ k g k (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k g jk (cid:105) + M (cid:88) i =1 i (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ i ψ j ψ k g ijk (cid:105) + . . . , (5) I n ( ψ , . . . , ψ M ) ≈ I n + M (cid:88) k =1 (cid:104) ψ k I kn (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k I jkn (cid:105) + M (cid:88) i =1 i (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ i ψ j ψ k I ijkn (cid:105) + . . . . (6)For the purposes of this work Equation (4) will be considered a j th order accurate model if theexpansion of G from (5) and and each I n from (6) are computed to j th and j − th order accuracyin the isostable coordinates, respectively. This convention generally assumes that U ( t ) and each4 n are O ( (cid:15) ) terms where 0 < (cid:15) (cid:28) O ( (cid:15) j ) accuracy. Note that these models often perform well even when the when themagnitude of input is larger than O ( (cid:15) ) (see, for example, [28]).Using methods adapted from [28] that are given in Appendix A, it is possible to directly solvefor each g ijk... and I ijk...n directly when the underlying model equations are known. As a concreteexample, a simple dynamical system is considered˙ x = µx + u x ( t ) , ˙ x = λ ( − x + x + x + x ) , (7)with λ = − µ = − .
05. The model (7) has a finite-dimensional Koopman invariant subspace[36] making it an ideal testbed for developing nonlinear model identification strategies. Equation(7) has a stable fixed point at the x = x = 0 with associated eigenvalues λ = − .
05 and λ = − x nullcline as shown in Figure 1. Due to the separationbetween the eigenvalues, a single isostable reduced model can be determined by identifying theunknown g ijk and I ijk terms from equations (5) and (6), respectively, using methods detailed inAppendix A. Level sets of the isostable coordinates and the gradient of the isostable coordinatesare shown in panels B and C of Figure 1. Approximations of G ( ψ ) valid to various orders of ψ inthe expansion from (5) are shown in panel D. The resulting reduced dimension model of the form(4) simulated in response to a sinusoidal input shown in panels E, corresponding outputs are givenin panel F. For (7), third order accuracy is sufficient to match the full model behavior.The procedure described in Figure 1 explicitly requires knowledge of the underlying modelequations. In practical applications, underlying model equations are generally not known a priori .Rather, time series of model observables are typically all that can be measured. With this in mind,the following sections will illustrate a purely data-driven nonlinear model identification strategy. In the analysis to follow, input-output relationships will be used to identify the necessary termsof an isostable reduced model based on (4). To proceed, it will be assumed that U ( t ) = Au ( t ),with A = (cid:2) A . . . A N (cid:3) T and u ( t ) ∈ R ; in other words, U ( t ) is explicitly assumed to be a rank-1input. Higher rank inputs can be considered with appropriate modifications. Sinusoidal inputsof the form u ( t ) = (cid:15) sin( ωt ) will be used to measure the input-output dynamics where ω givesthe natural frequency of the input and (cid:15) is assumed to be a small constant. Intuitively, when asinusoidal input is applied to a linear model, the steady state output is a phase-shifted sinusoid atthe same frequency. As will be shown in the analysis to follow, a nonlinear reduced order modelcan be directly related to higher-frequency Fourier modes comprising the steady-state output –this information can then be used to infer nonlinear isostable reduced models that characterize theresponse to inputs.To proceed, using the assumed structure on U ( t ), the isostable coordinates reduction from (4)will be written as ˙ ψ k = λ k ψ j + I k,A ( ψ , . . . , ψ M ) · u ( t ) , (8) k = 1 , . . . , M,y = y + G y ( ψ , . . . , ψ M ) . (9)5igure 1: A Model dimension reduction applied to a simple model (7) with nominal dynamicsrepresented in panel A. If the full model equations are known, it is straightforward to computethe necessary terms of the reduced order models from Equations (5) and (6) that only include theslowest decaying isostable dynamics; these terms are shown in panels B through D for a singleisostable coordinate. Once the terms of the reduced order model are identified, it can be used topredict the response to inputs, provided the terms of G and I are computed to sufficiently highorder accuracy in the isostable coordinate.Above, y ∈ R ζ is a collection of observables and G y maps the isostable coordinates to the systemobservables. As with (4), G y will be Taylor expanded in the basis of isostable coordinates G y ( ψ , . . . , ψ M ) ≈ M (cid:88) k =1 (cid:104) ψ k g ky (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k g jky (cid:105) + M (cid:88) i =1 i (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ i ψ j ψ k g ijky (cid:105) + . . . . (10)Similarly, the isostable response to inputs from (8) will be written in terms of the Taylor expansion I n,A ( ψ , . . . , ψ M ) ≈ I n,A + M (cid:88) k =1 (cid:104) ψ k I kn,A (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k I jkn,A (cid:105) + M (cid:88) i =1 i (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ i ψ j ψ k I ijkn,A (cid:105) + . . . , (11)where, for instance, I ijkn,A = I ijkn · A . As with Equation (4), the reduced order model (8) with output(9) will be considered a j th order accurate model if the terms of G y and the terms of each I n,A arecomputed to j th and j − th order accuracy in the isostable coordinates, respectively.In order to aid in the analysis, each ψ n will be asymptotically expanded in orders of (cid:15) as ψ n ( t ) = ψ (0) n ( t ) + (cid:15)ψ (1) n ( t ) + (cid:15) ψ (2) n ( t ) + . . . . (12)6ubstituting (12) into (8) and collecting the O (1) terms yields ˙ ψ (0) n = λ n ψ (0) n ; consequently inthe limit that time approaches infinity, ψ (0) n tends to zero. With this in mind, for simplicity ofexposition, it will be assumed that each ψ (0) n is zero for all time. For the moment, it will be assumed that ζ = 1, i.e., that the observable y is of dimension 1. Higherdimensional outputs will be considered in Section 3.6. A similar computation strategy for firstorder accurate models was considered in [26]. To begin, u ( t ) = (cid:15) sin( ωt ) will be applied and onlythe steady state dynamics will be considered. Collecting the O ( (cid:15) ) terms after substituting (12) into(8) yields ˙ ψ (1) n = λ n ψ (1) n + I n,A sin( ωt ) . (13)The steady state, the solution to (8) is ψ (1) n,ss = − I n,A ω + λ n ( λ n sin( ωt ) + ω cos( ωt ))= s ,n ( ω ) I n,A sin( ωt ) + c ,n ( ω ) I n,A cos( ωt ) , (14)where s ,n ( ω ) ∈ C and c ,n ( ω ) ∈ C are defined appropriately. Considering (9), to leading order in (cid:15) the output is y ss ( w, t ) = y + (cid:15) M (cid:88) k =1 (cid:20) s ,n ( ω ) g ky I k,A sin( ωt ) + c ,n ( ω ) g ky I k,A cos( ωt ) (cid:21) + O ( (cid:15) ) , (15)Multiplying both sides of (15) by either sin( ωt ) or cos( ωt ), integrating over one period, and con-sidering the steady state outputs that result when using q different frequencies ω , . . . , ω q , one canwrite the matrix equation 1 (cid:15) Γ = π Ξ Υ + O ( (cid:15) ) , (16)whereΓ = ω (cid:82) π/ω y ss ( w , t ) sin( ω t ) dtω (cid:82) π/ω y ss ( w , t ) cos( ω t ) dt ... ω q (cid:82) π/ω q y ss ( w q , t ) sin( ω q t ) dtω q (cid:82) π/ω q y ss ( w q , t ) cos( ω q t ) dt ∈ C q × , Ξ = s , ( ω ) . . . s ,M ( ω ) c , ( ω ) . . . c ,M ( ω )... s , ( ω q ) . . . s ,M ( ω q ) c , ( ω q ) . . . c ,M ( ω q ) ∈ C q × M , and Υ = (cid:2) g y I ,A . . . g My I M,A (cid:3) T ∈ C M × . The individual terms of Υ can then be estimated,for instance by taking Υ = 1 (cid:15)π Ξ † Γ . (17)A similar strategy was used in [26] to fit a linear model to the isostable reduced dynamical model.As explained in [26], isostable reduced models of the form (8) and (9) can always be scaled so that I n,A can be chosen arbitrarily. For this reason, it will be assumed that I n,A = 1 for all n so thatthe relationship (16) can be used to identify the unknown g y , . . . , g My terms.7 .2 Inferring Second Order Accurate Models for Single Output Systems Once the O ( (cid:15) ) dynamics have been determined, the steady state behavior of the O ( (cid:15) ) terms of(12) can be used to determine the second order accurate terms of (10) and (11). To this end,recalling that u ( t ) = (cid:15) sin( ωt ) one can substitute the asymptotic expansions (11) and (12) into (8)and collect O ( (cid:15) ) terms to yield˙ ψ (2) n = λ n ψ (2) n + M (cid:88) k =1 ψ (1) k I kn,A sin( ωt ) . (18)The steady state dynamics of each ψ (1) k are given by (14) so that in steady state, (18) can be writtenas ˙ ψ (2) n,ss = λ n ψ (2) n,ss + M (cid:88) k =1 (cid:2) s ,k ( ω ) I k,A sin( ωt ) + c ,k ( ω ) I k,A cos( ωt ) (cid:3) I kn,A sin( ωt ) . (19)Using the trigonometric product-to-sum identities from (B3) in Appendix B, one can rewrite (19)as ˙ ψ (2) n,ss = λ n ψ (2) n,ss + 12 M (cid:88) k =1 (cid:20) I kn,A I k,A (cid:0) sin(2 ωt ) c ,k ( ω ) − cos(2 ωt ) s ,k ( ω ) + s ,k ( ω ) (cid:1)(cid:21) . (20)Because (20) is a periodically forced linear equation, one can verify that the steady state solutionis ψ (2) n,ss = − λ n + 4 ω ) M (cid:88) k =1 (cid:20) sin(2 ωt ) I kn,A I k,A (cid:0) λ n c ,k ( ω ) + 2 ωs ,k ( ω ) (cid:1) + cos(2 ωt ) I kn,A I k,A (cid:0) − λ n s ,k ( ω ) + 2 ωc ,k ( ω ) (cid:1)(cid:21) − M (cid:88) k =1 (cid:20) s ,k ( ω ) I kn,A I k,A λ n (cid:21) = M (cid:88) k =1 (cid:20) I kn,A s k ,n ( ω ) sin(2 ωt ) + I kn,A c k ,n ( ω ) cos(2 ωt ) − s ,k ( ω ) I kn,A I k,A λ n (cid:21) , (21)where s k ,n ( ω ) ∈ C and s k ,n ( ω ) ∈ C are defined appropriately. Once again, considering the outputgiven by (9), to leading order the O ( (cid:15) ) terms of the output equations are y ss ( ω, t ) − y = O ( (cid:15) ) + M (cid:88) j =1 (cid:2) ψ (2) j g jy (cid:3) + M (cid:88) j =1 j (cid:88) k =1 (cid:2) ψ (1) j ψ (1) k g jky (cid:3) + O ( (cid:15) )= O ( (cid:15) ) + (cid:15) M (cid:88) j =1 g jy M (cid:88) k =1 (cid:20) I kj,A s k ,j ( ω ) sin(2 ωt ) + I kj,A c k ,j ( ω ) cos(2 ωt ) − s ,k ( ω ) I kj,A I k,A λ j (cid:21) + M (cid:88) j =1 j (cid:88) k =1 (cid:20) g jky (cid:0) s ,j ( ω ) I j,A sin( ωt ) + c ,j ( ω ) I j,A cos( ωt ) (cid:1) × (cid:0) s ,k ( ω ) I k,A sin( ωt ) + c ,k ( ω ) I k,A cos( ωt ) (cid:1)(cid:21) + O ( (cid:15) ) , (22)8here the O ( (cid:15) ) terms are given by (15). Once again, trigonometric product-to-sum identities from(B3) can be used to rewrite (22) according to y ss ( ω, t ) − y = O ( (cid:15) ) + (cid:15) M (cid:88) j =1 g jy M (cid:88) k =1 (cid:20) I kj,A s k ,j ( ω ) sin(2 ωt ) + I kj,A c k ,j ( ω ) cos(2 ωt ) − s ,k ( ω ) I kj,A I k,A λ j (cid:21) + 12 M (cid:88) j =1 j (cid:88) k =1 g jky (cid:20) − s ,j ( ω ) I j,A s ,k ( ω ) I k,A cos(2 ωt ) + c ,j ( ω ) I j,A c ,k ( ω ) I k,A cos(2 ωt )+ s ,j ( ω ) I j,A c ,k ( ω ) I k,A sin(2 ωt ) + c ,j ( ω ) I j,A s ,k ( ω ) I k,A sin(2 ωt )+ s ,j ( ω ) I j,A s ,k ( ω ) I k,A + c ,j ( ω ) I j,A c ,k ( ω ) I k,A (cid:21) + O ( (cid:15) ) , (23)Notice that the O ( (cid:15) ) terms are a linear combination of sines, cosines, and constants. Also recallthat terms of the form g jy and I j,A can be estimated independently for all j using (16). Thus, therelationships (23) can be rewritten as y ss ( ω, t ) − y = O ( (cid:15) ) + (cid:15) M (cid:88) j =1 M (cid:88) k =1 I kj,A (cid:0) β j,k, ( ω ) sin(2 ωt ) + β j,k, ( ω ) cos(2 ωt ) + β j,k, ( ω ) (cid:1) + M (cid:88) j =1 j (cid:88) k =1 g jky (cid:0) β j,k, ( ω ) sin(2 ωt ) + β j,k, ( ω ) cos(2 ωt ) + β j,k, ( ω ) (cid:1) + O ( (cid:15) ) , (24)where β j,k, ( ω ) , . . . , β j,k, ( ω ) ∈ C are defined so that (23) and (24) are identical. Finally, recallingthat the O ( (cid:15) ) terms from (24) are a linear combination of sines and cosines with frequency ω ,employing a similar approach that was used to identify (16) one can multiply both sides of (24) byeither sin(2 ωt ), cos(2 ωt ), or 1 and integrate over one period using q different frequencies, ω , . . . , ω q to yield an equation of the form 1 (cid:15) Γ = π Ξ Υ + O ( (cid:15) ) , (25)where Γ = ω (cid:82) π/ω y ss ( w , t ) sin(2 ω t ) dtω (cid:82) π/ω y ss ( w , t ) cos(2 ω t ) dtω (cid:82) π/ω y ss ( w , t ) dt − x ... ω q (cid:82) π/ω q y ss ( w q , t ) sin(2 ω q t ) dtω q (cid:82) π/ω q y ss ( w q , t ) cos(2 ω q t ) dtω q (cid:82) π/ω q y ss ( w q , t ) dt − x ∈ C q × , Υ ∈ C (cid:16) M M (cid:17) × is a vector containing all of the terms in the second order accurate expansion,i.e., terms of the form I kj,A and g jky , and Ξ ∈ C q × (cid:16) M M (cid:17) . Much like for (16), the second orderaccurate terms of the isostable reduced model (8) that comprise Υ can be estimated according toΥ = 1 (cid:15) π Ξ † Γ . (26)9 .3 Inferring Reduced Models to Arbitrary Orders of Accuracy for Single Out-put Systems Sections 3.1 and 3.2 detail a strategy for inferring first and second order accurate reduced orderisostable models from output data. As illustrated in Appendix C, it is possible to identify the termsof the expansions (10) and (11) to arbitrary orders of accuracy using linear relationships of the form(C6). Intuitively this is made possible by Lemma C.1 which states that under the application ofan input u ( t ) = (cid:15) sin( ωt ), the steady state output can be written as y ss ( ω, t ) − y = j − (cid:88) k =0 (cid:20) τ k ( ω ) sin( kωt )+ σ k ( ω ) cos( kωt ) (cid:21) + (cid:15) j (cid:20) τ j ( ω ) sin( jωt )+ σ j ( ω ) cos( jωt ) (cid:21) + O ( (cid:15) j +1 ) , (27)where τ j ( ω ) ∈ C and σ j ( ω ) ∈ C . In other words, the oscillatory components of the response withfrequency jω first appear in the O ( (cid:15) j ) terms. These specific terms can be isolated by multiplying(27) by either sin( jωt ) or cos( jωt ) and taking the integral over a single period. Subsequently,relationships between the terms that comprise each τ k ( ω ) and σ j ( ω ) can be derived to infer theunknown coefficients required for a j th order reduction as long as the coefficients of the first, second, . . . , j − th and j − th order reductions have been identified. Explicit details of this fitting strategyare given in Appendix CAs a final note, as the order of accuracy required increases, the complexity of the necessary termsgrows rapidly. For instance, the O ( (cid:15) ) terms of the steady state response are written succinctly in(15); the O ( (cid:15) ) terms of the steady state output (24) are much more complicated. For reductionsthat are valid to higher orders of accuracy it advisable to compute the terms of each matrix Ξ j ,and R j from (C6) using a symbolic computational package. Knowledge of the eigenvalues of the Jacobian from (2) (i.e., the decay rates of isostable coordinates)is necessary to identify the terms of reduced order models. This can be accomplished in a two stepprocedure that involves first obtaining a coarse estimate of the eigenvalues from noisy data andsubsequently refining these estimates using the relationship (16). To obtain a coarse estimationof the linearized eigenvalues as described below, it is assumed that low intensity background noiseprevents the system from reaching the stable fixed point in the absence of stimulation. If nobackground noise is present, one could alternatively apply a small magnitude, random input thatsimulates a white noise process.
A coarse approximation of the linearized eigenvalues can be obtained using noisy, unperturbedmodel output. This strategy is based on similar strategies implemented in [13] and [27]. Thisstrategy represents a variation of the delay-embedding approach proposed in [6]. To begin, consideran isostable reduced model (8) with output (9). It will be assumed that the output y ( t ) = O ( (cid:15) ).It will also be assumed that each eigenvalue is not repeated. Consider a time series of output datataken every ∆ t seconds. This time series can be stored as a matrix Y ∈ R K,L where Y ≡ (cid:2) y t y t . . . y t L − (cid:3) , (28)10ith y t i ∈ R K defined according to y t i = y ( Ki ∆ t ) − y y ( Ki ∆ t + ∆ t ) − y ... y ( Ki ∆ t + ∆ t ( K − − y . (29)As a preliminary step in the analysis, proper orthogonal decomposition (POD) [37], [38] can beemployed to decompose the time series contained in Y into a set of representative modes. Assumingthat L > K , the POD modes φ i , can be computed by finding the eigenvectors of the covariancematrix Y Y T . The corresponding eigenvalues λ POD i give a sense of the importance of the temporalfluctuations captured by the associated POD mode. In order to accurately represent the outputdata, a total of N POD modes can be chosen so that (cid:80) N POD j =1 λ POD j / (cid:80) Kj =1 λ POD j ≈
1. DefiningΦ ∈ R K × N POD , such that Φ = (cid:2) φ . . . φ N POD (cid:3) , any column of Y can be projected onto the newPOD basis as y t i − y = N POD (cid:88) j =1 (cid:0) φ j p ij (cid:1) + res POD i = Φ µ i + res POD i , (30)where µ i = (cid:104) p ij . . . p N i POD (cid:105) T is a vector of POD coefficients associated with y t i and res POD i is a small residual term that stems from the fact that some of the POD modes are truncated.Simultaneously, considering (10), to leading order (cid:15) the output is given by y ( t ) − y = M (cid:88) k =1 ψ k ( t ) g ky . (31)Recalling that ˙ ψ k = λ k ψ k so that ψ k ( t ) = exp( λ k t ) ψ k (0), one can write y t i − y = P Ψ i + res ISOi (32)where Ψ i = (cid:2) ψ ( Ki ∆ t ) . . . ψ M ( Ki ∆ t ) (cid:3) T , P ∈ C K × M where the k th row and j th column areequal to g jy exp( λ j ∆ t ( k − ISOi is a small residual terms that results from truncating boththe O ( (cid:15) ) terms and the most rapidly decaying isostable coordinates. Assuming both residuals arenegligible in (30) and (32) and noticing that the left hand sides are identical, one can write µ i = Φ T P Ψ i , (33)Ψ i = P † Φ µ i , (34)where † is the Moore-Penrose pseudoinverse. Above, equation (33) uses the fact that the PODmodes are orthogonal and Equation (34) assumes that the columns of P are linearly independentso that P † P gives the identity matrix. Letting Λ = diag(exp( λ K ∆ t ) , . . . , ( λ M K ∆ t )) note thatΨ i = ΛΨ i +1 Using this information, along with (33) and (34), one finds that µ i +1 = Φ T P Λ P † Φ µ i = A µ µ i . (35)Additionally, as discussed in [13], when residuals res ISO i and res ISO i are small enough that they arenegligible, A µ shares eigenvalues with Λ. This allows each λ m term to be estimated from data byestimating the matrix A µ according to A µ = X + X −† (36)11here X + = (cid:2) µ . . . µ L − (cid:3) and X − = (cid:2) µ . . . µ L − (cid:3) . Subsequently, each eigenvalue λ µ of A µ canbe related to the decay rates of the isostable coordinates from (8) according to λ j = λ µ,j K ∆ t . (37)Note that the accuracy of (37) will be influenced by the magnitude of the residuals res ISOi andres
PODi which depend on factors such as noise, truncation of the POD modes, and truncation ofthe O ( (cid:15) ) terms from (31). In practice, the above approach gives a coarse approximation of theeigenvalues that comprise the reduced order model (8). In some cases, the coarse eigenvalue estimates from the previous section can still yield accurateresult when fitting reduced order models of the form (8) and (9) using the methods described inSections 3.1-3.3. In other cases, however, it can be useful to refine these estimates in conjunctionwith relationships derived in Section 3.1. To do so, recalling that each term I n,A is assumed tobe equal to 1, consider the relationship (16) that can be used to estimate the first order accurateterms of the reduction h ( g y , . . . , g My , λ , . . . , λ M ) = π Ξ Υ − (cid:15) Γ . (38)When λ , . . . , λ M are known exactly, it is possible to find values of each g y , . . . , g My so that h ( g y , . . . , g My , λ , . . . , λ M ) ≈
0, for instance, by using Equation (17). Supposing that only an esti-mate of λ , . . . , λ M is available (perhaps using the method from Section 3.4.1) with a correspondingestimate of g y , . . . , g My , denote Z = (cid:2) g y . . . g My λ . . . λ M (cid:3) T and let Z be the initial estimate.One can update this estimate of Z according to the Newton iteration Z = Z + ∆ Z where∆ Z = − (cid:18) ∂h∂Z (cid:12)(cid:12)(cid:12)(cid:12) Z (cid:19) † h ( Z ) . (39)Above, ∆ Z represents the least squares solution to (cid:0) ∂h∂Z (cid:12)(cid:12) Z (cid:1) ∆ Z = − h ( Z ) and attempts to updatethe estimated parameters so that || h ( Z ) + (cid:0) ∂h∂Z (cid:12)(cid:12) Z (cid:1) ∆ Z || ≈ || h ( Z + ∆ Z ) || ≈ || · || denotesthe Euclidean norm. The update (39) can be repeated until convergence is achieved. Note thatconvergence is not guaranteed; like any Newton iteration, this strategy works best when startingwith a good initial estimation of Z . Additionally, if convergence is not achieved it may be necessaryto change the number of isostable coordinates considered in the reduction. The following summarizes a list of steps required to estimate the terms of (8) and (9) with theexpansions (10) and (11) used for G y and each I k,A , respectively, computed to arbitrary orderaccuracy in the isostable coordinates.Step 1) Using the strategy given in Section 3.4.1, decide on the number of isostable coordinates touse in the reduction and obtain a coarse estimate for their unperturbed decay rates. It isadvisable to choose a number of POD modes, N POD so that (cid:80) N POD j =1 λ POD j / (cid:80) Kj =1 λ POD j ≈ N POD so that this ratio is closer to 1 will allow more of the dynamical behaviorto be captured but will also increase the risk of overfitting. Step 1 can be skipped if theeigenvalues are known a priori . Step 1 can also be skipped by making an initial guesss forthe number of isostable coordinates and their associated decay rates and updating theseguesses with the Newton iteration from Step 4.12tep 2) Once the number of isostable coordinates has been determined, use a symbolic computa-tional package to compute the terms of Ξ j for each j up to the desired order of accuracy.These matrices are defined as part of (16) and (25) for first order and second order ac-curacy relationships, respectively. Third order accuracy terms and higher are defined in(C15). For third order accurate reductions and higher, associated terms of the vectors R j as defined in (C15) will also need to be computed symbolically.Step 3) For various frequencies ω , apply an input u ( t ) = (cid:15) sin( ωt ). After transients die out, use theresulting steady state responses to determine the terms of each vector Γ j defined accordingto Equations (16), (25), and (C7) to be used in determining the first, second, and higherorder accuracy terms of the expansions, respectively. As discussed in the examples fromthe following sections, it can be helpful to use larger values of (cid:15) for higher order accuracyterms, although, this is not absolutely necessary.Step 4) Apply the strategy from (3.4.2) to refine the eigenvalue estimates obtained in Step 1. If theNewton iteration does not converge, one can repeat Steps 1 and 2 using a smaller numberof isostable coordinates (or by using a different guess for the initial isostable coordinates).Step 5) Once a refined estimate for the eigenvalues associated with the decay rates of the isostablecoordinates is obtained, this information can be used to evaluate Ξ and obtain an estimatefor the first order accurate terms using (17). This information can be used to computea numerical approximation for Ξ and subsequently estimate the second order accurateterms of the expansion according to (26). In general, once the j th order accuracy terms arecomputed, as discussed in Appendix C, numerical approximations of Ξ j +1 and R j +1 canbe computed and the relationship (C6) can be used to estimate the j + 1 th order terms.This process can be repeated until the terms of (10) and (11) are computed to the desiredorders of accuracy. Systems with multiple outputs can be considered with straightforward modifications to the modelidentification algorithms. To do so, suppose that the state dynamics of a general differential equa-tion of the form (1) can be characterized by an isostable coordinate framework according to (4).As in (8), it will be assumed that a rank-1 input can be applied of the form U ( t ) = Au ( t ), A = (cid:2) A . . . A N (cid:3) T and u ( t ) ∈ R yielding isostable dynamics of the form (8). Additionally,suppose that there are N y outputs of the form y m = y m, + G m,y ( ψ , . . . , ψ M ) ,m = 1 , . . . , N y . (40)Analogous to (10), each G m,y can be Taylor expanded in terms of the isostable coordinates G m,y ( ψ , . . . , ψ M ) ≈ M (cid:88) k =1 (cid:104) ψ k g km,y (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k g jkm,y (cid:105) + M (cid:88) i =1 i (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ i ψ j ψ k g ijkm,y (cid:105) + . . . , (41)for m = 1 , . . . , N y . The isostable dynamics in the multiple output scenario can still be representedaccording to (11); adding extra outputs does not change the isostable coordinate dynamics. In-stead, each additional coordinate yields an additional G m,y ( ψ , . . . , ψ M ) for which the terms of theexpansion from (41) must be fit. 13o identify a first order accurate model with multiple outputs, for example, each g jm,y can be fitto the output data. To do so, the techniques from Sections 3.1-3.3 can be used to identify equationsof the form (16) 1 (cid:15) Γ m, = π Ξ Υ m, ,m = 1 , . . . , N y , (42)where similar to the definition given as part of (16), Γ m, is a vector calculated considering thesteady state behavior of y m when applying input at various frequencies and Υ is a vector of termscharacterizing the the first order accurate isostable dynamics from (11) and the first order accurateterms of the expansion of G m,y from (41). Note that the terms of Ξ are identical for all outputrelationships and were defined as part of (16). As done in Section 3.1, one can assume that I n,A = 1for all n , so that the relationship (42) can be used to identify g m,y , . . . , g Mm,y for all m .Once the first order terms are identified, the second order accurate terms can be inferred byidentifying equations of the form (25) for each output1 (cid:15) Γ m, = π Ξ Υ m, ,m = 1 , . . . , N y , (43)where Γ m, and Υ m, are defined analogously to Γ and Υ from (42) for each output y m . Notethat Ξ was defined as part of (25). All relationships of the form (43) for m = 1 , . . . , N y can beused to infer the second order terms of each I n,A from (11) and of each G m,y from (41). In general,once the j th order accurate terms have been estimated, a similar strategy can be used to infer the j + 1 th order terms of the expansion by identifying equations of the form (C15) for each outputΓ m,j +1 = π(cid:15) j +1 (cid:0) Ξ j +1 Υ m,j +1 + R m,j +1 (cid:1) + O ( (cid:15) j +2 ) , (44)where Γ m,n +1 , Υ m,j +1 , and R m,j +1 are defined analogously to Γ j +1 , Υ j +1 , and R j +1 from (C15)for each input y m and Ξ j +1 is defined as part of Equation (C15).As a final consideration, note that the number of terms to fit grows in proportion to the numberof outputs. In situations where there are a large number of outputs, it can be useful to first apply areduction strategy such as POD to the outputs and consider a subset of the most important PODcoefficients to be the system output rather than fitting coefficients for the full set of outputs. Thisis intuitively similar to Galerkin projection methods for obtaining numerical solutions of partialdifferential equations, for example, as performed in [39]. This suggested approach will be illustratedin the example from Section 4.3. As a preliminary illustration of the proposed reduced order model identification strategy, considerthe simple dynamical system from (7) with white noise added to the x variable˙ x = µx + u x ( t ) + √ Dη ( t ) , ˙ x = λ ( − x + x + x + x ) , (45)14here λ = − µ = − .
05, and η ( t ) is an independent and identically distributed zero-mean whitenoise process with intensity D = 0 . y ( t ) = x ( t ). Notethat the only difference between (7) and the above equation is the addition of noise. In Section2.1 a reduced order model was determined with knowledge of the underlying model Equations (7).Here, a reduced order model of the form (8) with output of the form (9) will be inferred strictlyfrom measurements of model output without any assumed knowledge of the underlying systemdynamics. To do so, (45) is simulated for 20,000 time units with a time step of 0.1 using thestochastic simulation algorithm presented in [40]. The resulting data is stored in a matrix of theform (28) using K = 100 and POD is performed on the resulting data. Using a single POD mode, λ POD1 / (cid:80) Kj =1 ( λ POD j ) = 0 .
87. The coarse eigenvalue estimation strategy detailed in Section 3.4.1results in an initial eigenvalue estimate of λ = − . u x ( t ) = (cid:15) sin( ωt ) is applied taking ω ∈ { . , . , . , . , . } . Using (cid:15) = 0 .
01, the steady state output is averaged over 100 cycles in order to compute the terms of Γ according to (16). An example input-output relationship in steady state is shown in Panels A andB of Figure 2. The terms of Γ (from (25)) and Γ (from (C7)) are computed taking (cid:15) = √ . √ .
01, respectively. Choosing (cid:15) in this manner helps to emphasize the contributions of thehigher order terms when estimating the associated coefficients of the expansions (10) and (11).Auxiliary matrices Ξ , Ξ , Ξ and R used to fit the first, second, and third order coefficientsof the reduction are computed symbolically. After obtaining a preliminary estimate of g y , theNewton iteration suggested by (39) is employed to refine the estimate of the principal eigenvalueof λ = − . λ = − .
05 for the noiseless system(7)). Second and third order terms are computed in succession using (26) and (C6), respectively.Panels C and D show estimates of dψ /dx and x ( ψ ) obtained from first, second, and third orderaccurate estimates of the reduced isostable dynamics for models of the form (8) with output (9).These estimates are similar to results from panels C and D from Figure 1 that were obtainedwith knowledge of the full model equations. The resulting estimates of the reduced order modelparameters can be used to predict the response to other inputs. Starting from an initial condition at x = x = 0, the model (45) is simulated taking u x ( t ) = 0 . . t ) + sin(0 . t ) + sin(0 . t )) asshown in panel E. The full model output is compared to the outputs from reduced models of variousorders of accuracy. For results shown here, the reduced models do not contain any noise terms.Much like the results presented in Figure 1, the third order accurate reduced model accuratelyreflects the output from the full simulations. The first and second order accurate models performdecently, especially in moments when the magnitude of the input is small, but are not sufficient toreplicate the full system behavior. Note here that u x ( t ) is comprised of higher frequency sinusoidsthan those used to infer the reduced order models. Additionally, the applied input is periodic, buthas a period that is significantly larger than the sinusoids used to infer the reduced order model.15igure 2: Results from applying the model inference strategy described in Section 3 to the simplemodel (45). Panel A shows an example of a sinusoidal input with the black line in panel B givingthe corresponding model output after allowing the transient dynamics to decay. Fourier coefficientsare computed for the first, second, and third harmonics by averaging the output over 100 cyclesand the associated Fourier series representation is shown as a red line. Using inputs of variousfrequency and magnitude, the corresponding outputs are measured and this information is used toidentify reduced order models valid to various orders of accuracy as described in Section 3. Curvesfrom panels C and D characterize the response of the isostable coordinate to inputs and the outputas a function of the isostable coordinate, respectively. Notice that these curves are similar to thoseshown in panels C and D of Figure 1 which were obtained with full knowledge of the underlyingdynamical equations. The input shown in panel E is applied to the model equations (45) and theoutput in panel F is compared to the output predicted from the reduced order models valid todifferent orders of accuracy. Next, the proposed reduced model inference strategy is applied to a more complicated modeldescribing the spike rates of a population of synaptically coupled neurons from [41]. The model16quations are C ˙ V i = − I L ( V i ) − I Na ( V i , h i ) − I K ( V i , h i ) − I T ( V i , r i )+ I bi − g syn N N (cid:88) j =1 s j ( V i − E syn ) + √ Dη i ( t ) + u ( t ) , ˙ h i = ( h ∞ ( V i ) − h i ) /τ h ( V i ) , ˙ r i = ( r ∞ ( V i ) − r i ) /τ r ( V i ) , ˙ s i = a (1 − s )1 + exp( − ( V i − V T ) /σ T ) − bs i . (46)Here, N = 1000 gives the total number of neurons in the population, V i , s i , h i , and r i repre-sent the transmembrane voltage, a synaptic variable, and two gating variables assigned to neuron i , respectively, the conductance g syn sets the coupling strength, E syn = −
100 mV is the rever-sal potential of the neurotransmitter resulting in inhibitory coupling, u ( t ) is an injected currentcommon to all neurons, C = 1 µ F / cm is the membrane capacitance, √ Dη i ( t ) is an independentand identically distributed zero-mean white noise process with intensity D = 1, and I bi is thebaseline current of neuron i and is drawn from a normal distribution with a mean of 5 and a vari-ance of 1 µ A / cm . The characteristics of the synaptic current are determined by the parameters a = 3 , V t = − , σ T = 0 . , and β = 1. The reader is referred to [41] for a full explanationof the remaining functions that determine the ionic currents I L , I Na , I K , I T and the behavior ofthe gating variables. The observable for (46) is taken to be the firing rate R ( t ) where neuron i isdefined to fire at the moment V i crosses -25 mV with a positive slope. The firing rate defined overa sliding window F R ( t ) = Total Number of Neurons That Fired in the Interval [t-W,t] W , (47)where W = 1 . t = 0 .
015 ms fora total of 5000 ms. The resulting data is stored in a matrix of the form (28) taking K = 50 and PODis performed on the resulting data. Using two POD modes yields (cid:80) j =1 ( λ POD j ) / (cid:80) Kj =1 ( λ POD j ) =0 .
93. Using the coarse eigenvalue estimation strategy from Section 3.4.1 in conjunction with thesetwo POD modes yields two complex conjugate eigenvalues λ , = − . ± . i .A reduced order model is obtained by considering the firing rate in response to sinusoidal input u ( t ) = (cid:15) sin( ωt ) with ω ∈ { . , . , . , . , . , . , . } . Taking (cid:15) = 0 . µ A /µ F, initial transients areallowed to decay and the steady state output is averaged over 700 ms to compute the terms of Γ from (16). In a similar manner, (cid:15) = 1 µ A /µ F is used to compute the terms of Γ from (25). Terms ofthe auxiliary matrices Ξ and Ξ from (16) and (25), respectively, are computed symbolically. Afterobtaining preliminary estimates of the first order terms, the Newton iteration from (39) is employedto obtain a refined estimate of λ , = − . ± . i for the principal eigenvalues. Second order termsare subsequently computed using (26). Third order accurate terms were also considered, however,the resulting reduced order models did not perform better than the second order accurate models,and the results are not shown. 17igure 3: Panel A shows a sinusoidal inputs to (46) with the resulting voltage traces from a subsetof neurons and firing rates shown in panels B and C, respectively. The same information is shownin panels D-F using a larger magnitude input with a different frequency. For small inputs, F R ( t )appears somewhat sinusoidal. When larger magnitude inputs are used the resulting firing rates arenon-sinusoidal indicating the importance of nonlinear models to characterize the output in responseto inputs.The predictive capability of the resulting reduced order model is considered with two differentinputs shown in Figure 4. In both the full (46) and reduced model equations, no input is applied forthe first 200 ms of simulation to allow for any transient behavior to die out. Subsequently, a smallmagnitude input u ( t ) = sin(0 . t ) + cos(0 . t ) + sin( − . t ) is applied for 100 ms and is shown inpanel A. Resulting outputs from the full and reduced models are shown in panel B with the absoluteerror between the reduced and full models shown in panel C. Analogous results are shown in panelsD-F for a larger magnitude input u ( t ) = 2 . . t ) + cos(0 . t ) + sin(0 . t )). Note that eachneuron from the full model equations (46) has a white noise term added to the voltage equationswhile the reduced order models are deterministic – this contributes to a jagged appearance of thefull model outputs. First and second order reduced models have comparable performance whenweaker input is applied. As the inputs become stronger, the performance of the first order accuratemodel begins to degrade while the second order accurate model still performs well. Note that whilethe terms of the reduced order models were determined by considering the steady state behavior inresponse to purely sinusoidal forcing, they accurately replicate the transient response to the inputsconsidered in panels A and D (which are not periodic on the time scales considered).18igure 4: For the relatively weak input shown in panel A, the output of the first and secondorder accurate reduced order models in panel B (red and blue lines, respectively) closely resemblesthe output from the full model equations (black line). Panel C shows the corresponding absoluteerror between the full and reduced order simulations. Analogous results are shown for a largermagnitude input in panels D-F. While both the first and second order accurate models performwell, the second order accurate model performs significantly better when larger magnitude inputsare considered. As a final example, consider the 1-D Burgers’ equation ∂w∂t = 1Re ∂ w∂x − w ∂w∂x , (48)Here, w is the state on the spatial domain x ∈ [0 , x = 0 is definedto be w L ( t ) = 0 . u ( t ) where 0.3 is the nominal value and u ( t ) serves as a time-varying input.The boundary condition at x = 1 is held fixed at a value of 0.3. Because of its similar structureto the Navier-Stokes equation, the Burgers’ equation is often often used as a test bed for modelreduction techniques in fluid flow applications [42], [43].In this example, RE = 10 is used and the model output is taken to be the state on the entirespatial domain. Towards determining appropriate POD modes to represent the model outputs, aninput w L ( t ) = 0 . . t /
20) is applied for t = 100 time units in order to excite system modesover wide range of frequencies. The model response to the time-varying output are shown in panelsA and B of Figure 5, respectively. POD is performed on the resulting snapshots taken at ∆ t = 0 . (cid:80) j =1 ( λ POD j ) / (cid:80) j =1 ( λ POD j ) = 0 . u ( t ) = 0 is shown for reference as a black line.A reduced order model using three isostable coordinates will be used with an initial guess of λ = − λ = −
2, and λ = −
3. As discussed in Section 3.6, the POD coefficients associatedwith the five POD modes from Figure 5 are taken as the system output. A reduced order model isobtained by considering the steady state model output (i.e., the time course of the POD coefficientsin steady state) in response to input u ( t ) = (cid:15) sin( ωt ) with ω = { . , . , . . . , . } . Taking (cid:15) = 0 . (resp. Γ ) from (16) (resp. (25)) are computed. Terms of auxiliarymatrices Ξ and Ξ from (16) and (25), respectively, are computed using a symbolic computationalpackage.The Newton iteration from (39) is applied to obtain a refined estimate of the principal eigenval-ues. This Newton iteration is performed as described in Section 3.4.2 using the first POD mode asthe output. This iteration converges to λ = − . λ = − .
62, and λ = − .
06. The eigenvaluesthat result from the Newton iteration are robust to changes in the initial guess for the eigenval-ues. First and second order terms of the reduced functions (i.e., the functions that characterizethe isostable dynamics and the isostable-to-output relationships for each POD coefficient) are thencomputed using relationships of the form (42) and (43). Reduced order models beyond secondorder accuracy are not considered for this example.Figure 6 compares the outputs predicted by the resulting reduced order models with isostabledynamics of the form (8) and model outputs (41) to the full model dynamics from (48). Here, w red ( x, t ) = (cid:80) j =1 φ j ( x ) p j ( t ) where φ j ( x ) and p j ( t ) are the j th POD modes and POD coeffi-cients, respectively. Starting from steady state the left boundary condition is taken to be w L ( t ) =0 . . . t ) + sin(0 . t ) + sin(0 . t )). The logarithm of the L error defined as L E ( t ) = (cid:82) ( w red ( x, t ) − w full ( x, t )) dx is shown in panel B, where w red and w full comes from the reducedand full model simulations, respectively. The solution profiles at t = 50 are shown in panel C.Panels D-F give analogous results using a larger magnitude input w L ( t ) = 0 . . . t ) +sin(0 . t ) + sin(0 . t )). In both cases, the second order accurate models result in L errors thatare approximately two orders of magnitude smaller than the L errors from the first order accuratemodels. The differences in errors between these reduced order models are particularly pronouncedwhen using stronger inputs. 20igure 6: The input w L ( t ) shown in panel A is applied to the full model eqrefburgmod as well asthe first and second order reduced models. Panel B shows the resulting L error between full andreduced order models. Panel C gives a snapshot of the solution profiles for each model at t = 50.Analogous results are shown in panels D-F when the stronger input from panel D is applied. Thesecond order model output has about two orders of magnitude less error than the first order model.These differences are more pronounced when larger magnitude inputs are used as evidenced bypanel F. In applications where the underlying model equations are known, it is relatively straightforward toto numerically compute isostable-based reduced order models directly from the dynamical equa-tions. However, in most experimental applications, model equations are not known precisely anddata-driven techniques must be used instead. In this work, a general, data-driven strategy isdeveloped for inferring nonlinear isostable reduced models of the form (8) and (9).The strategy proposed in this work leverages an isostable-based coordinate system that char-acterizes the behavior of the level sets of the slowest decaying Koopman eigenfunctions. Using thisisostable coordinate basis as a foundation, one can show that in response to inputs of the form u ( t ) = (cid:15) sin( ωt ), steady state model outputs can be written in terms of a Fourier series expansionwith a fundamental frequency ω . Furthermore, components of the output with frequency jω areshown to be O ( (cid:15) j ) terms. These key insights are used to develop the proposed model inferencestrategy discussed in Sections 3.1-3.3. Using this strategy, it is possible to consider the steady-stateoutput in response to sinusoidal input at various frequencies to infer high-accuracy reduced ordermodels of the form (8) and (9). The proposed model inference strategy is illustrated for threeexample systems. In each example considered, a linear model that only considers the first orderterms in the expansions of (10) and (11) performs reasonably well when small magnitude inputs areconsidered. However, when considering larger magnitude inputs, nonlinear reduced order modelsare necessary to capture the dynamical behavior.While it is technically possible to infer the terms of the expansions (10) and (11) to arbitraryorders of accuracy in the isostable coordinates using the proposed method, the robustness of the21odel fitting procedure suffers as the desired order of accuracy increases. Indeed, considering thestructure of (C6), the output measurements that comprise Γ j are multiplied by 1 /(cid:15) j ; this amplifiesthe influence of any errors or uncertainties associated with the measurements of the steady stateoutput. This effect can be partially mitigated by increasing the amplitude of the sinusoidal inputsas higher order accuracy terms are considered (this strategy was used in the examples considered inthis work). As an additional consideration, the j th order accuracy terms depend on the j − th orderaccuracy terms, so any errors in the computation of lower order accuracy terms will be propagatedwhen computing higher order accuracy terms. For the simple model (45) from Section 4.1 thethird order terms of (10) and (11) were inferred reliably. In the more complicated neural spike rateexample From Section 4.2, the performance of the reduced models was significantly improved bythe addition of the second order terms, but the third order terms did not improve the performance– in this example it is possible that the contribution from the third order terms is simply negligible.It would be worthwhile to investigate other statistical techniques to infer the reduced order termsfrom the relationships (16), (25), and (C15) for the first, second, and arbitrary order accuracyterms, respectively, perhaps involving either approximate Bayesian computation [44], [45] or deeplearning approaches [7], [12].Future work will investigate the possibility of extending the proposed methods for use in oscil-latory dynamical systems. The ‘direct method’ [46], [47] is a well-established technique that can beused to determine reduced order equations of the form ˙ θ = ω + Z ( θ ) u ( t ) from experimental data,where θ ∈ S is the phase, Z ( θ ) captures the effect of an input u ( t ) on the phase, and ω is thenominal frequency of oscillation. The reduced order dynamical models considered in this work fromEquation (4) are similar in structure to the models that were investigated in [28] for oscillatorydynamical systems. It would be of interest to investigate whether the data-driven model inferencetechniques proposed here can be adapted for use in applications that involve limit cycle oscillators. Acknowledgment
This material is based upon work supported by the National Science Foundation Grant CMMI-2024111.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author uponreasonable request.
Appendix A Asymptotic Expansions of Isostable Reduced Mod-els
In [28], a strategy was devised for computing asymptotic expansions of the state and the gradi-ent of the isostable coordinates for periodic orbit attractors in terms of the isostable coordinatesthemselves. A nearly identical approach can be used to compute the individual terms of (5) and(6) for stable fixed point attractors. The following results can be obtained by following the samearguments as those presented in [28], but taking the attractor to be a fixed point instead of aperiodic orbit.First, assuming U ( t ) = 0, the individual terms of G ( ψ , . . . , ψ M ) from (5) can be found by22aking the time derivatives of ∆ x = G ( ψ , . . . , ψ M ) yielding d ∆ xdt = M (cid:88) k =1 (cid:104) g k λ k ψ k (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) g jk ( λ j + λ k ) ψ j ψ k (cid:105) + . . . (A1)Above, the relationship ˙ ψ k = λ k ψ k is used when taking time derivatives using the product rule.Letting F ( x ) = (cid:2) f . . . f N (cid:3) T , a direct expansion of (1) in terms of ∆ x yields d ∆ xdt = J ∆ x + (cid:80) ∞ i =2 1 i ! (cid:20) i ⊗ ∆ x T (cid:21) vec( f ( i )1 )... (cid:80) ∞ i =2 1 i ! (cid:20) i ⊗ ∆ x T (cid:21) vec( f ( i ) N ) (A2)where ⊗ is the Kronecker product, vec( · ) is an operator that stacks each column of a matrix intoa single column vector, and higher order partial derivatives are defined recursively as f ( k ) j = ∂ vec (cid:0) f ( k − j (cid:1) ∂x T ∈ R N ( k − × N . (A3)Above, all partial derivatives are evaluated at x and the notation ⊗ ∆ x T is shorthand for ∆ x T ⊗ ∆ x T ⊗ ∆ x T ∈ R × N . By equating (A1) and (A2), substituting ∆ x from (5) into (A2), andmatching the resulting powers in the isostable coefficients, one finds the resulting relationship0 = ( J − ( λ i + λ j + λ k + . . . )Id) g ijk... + q ijk... , (A4)where Id is an appropriately sized identity matrix and q ijk... ∈ R N is a function of the lower orderterms of the expansion, for example, q (a third order term) might depend on g or g (a firstor second order term) but will not depend on g (a third order term). Equation (A4) is nearlyidentical to Equation (18) from [28] except for the fact that the time derivative of g ijk... is zerosince the attractor is a fixed point. For this reason, g ijk... can be computed as the solution to alinear matrix equationThe individual terms of I n from (6) can be computed in a similar fashion. Starting with equation(39) from [28], for any isostable-based coordinate system the following holds for any state in thebasin of attraction of the associated attractor: dI n dt = − (cid:18) ∂F∂x T (cid:12)(cid:12)(cid:12)(cid:12) x − λ n Id (cid:19) I n . (A5)Direct differentiation of (6) provides the time derivative of the gradient of the isostable coordinate dI n dt = M (cid:88) k =1 (cid:104) ψ k λ k I kn (cid:105) + M (cid:88) j =1 j (cid:88) k =1 (cid:104) ψ j ψ k ( λ j + λ k ) I jkn (cid:105) + . . . . (A6)Above, the relationship ˙ ψ k = λ k ψ k is used when taking time derivatives using the product rule.Additionally, letting J T ≡ ∂F∂x T = (cid:104) ∂f ∂x T · · · ∂f N ∂x T (cid:105) , where all partial derivatives are evaluated23t x , near the fixed point one can write the asymptotic expansion ∂F∂x T (cid:12)(cid:12)(cid:12)(cid:12) x +∆ x = J T + (cid:2) a · · · a N (cid:3) ,a i = ∞ (cid:88) j =1 j ! (cid:18)(cid:20) j ⊗ ∆ x T (cid:21) ⊗ Id (cid:19) vec( f ( j +1) i ) , (A7)where a i is a column vector and f ( i +1) i was defined in (A3). Following the strategy from [28], onecan substitute (A7), (A6), and (6) into (A5), replace any remaining ∆ x terms with G ( ψ , . . . , ψ M )from (5), and match the resulting terms of the isostable coefficients to find the relationships0 = − ( J T + ( − λ n + λ i + λ j + λ k + . . . )Id) I ijk...n + q ijk...ψ n , (A8)where q ijk...ψ n , is comprised of only lower order terms of the expansion of I n ( ψ , . . . , ψ M ), for example, q ψ n (a third order term) might depend on I n or I n (a first or second order term) but will notdepend on I n (a third order term). Equation (A8) is nearly identical to Equation (44) from [28],except for the fact that the time derivative of I ijk...n is zero since the attractor is a fixed point. Onceagain, each I ijk...n can be found as the solution to a linear matrix equation.A detailed description of strategies for computation of the terms q ijk... and q ijk...ψ n from equations(A4) and (A8), respectively, is discussed in [28] and is not repeated here. Appendix B Steady State Behavior of the Isostable CoordinateExpansion In Response to Sinusoidal Input
The relationships (14) and (21) illustrate how the steady state behavior of the first and second orderaccurate terms of the isostable coordinate expansions in response to sinusoidal input u ( t ) = (cid:15) sin( ωt )can be written as a Fourier series with a fundamental frequency ω using a finite number of terms.Theorem B.1 below shows that this pattern holds for all orders of accuracy in the expansion. Theorem B.1 : Under the application of u ( t ) = (cid:15) sin( ωt ), in steady state, any ψ ( j ) n,ss term from (12)can be written in the following form, ψ ( j ) n,ss = j (cid:88) k =0 (cid:20) α kn,j ( ω ) sin( kωt ) + β kn,j ( ω ) cos( kωt ) (cid:21) , (B1)where α kn,j ( ω ) ∈ C and β kn,j ( ω ) ∈ C . Proof . Theorem B.1 can be proven by induction. From equations (14) and (21), this relationshipholds for each ψ (1) n,ss and ψ (2) n,ss , respectively. Suppose that (B1) holds for each ψ (1) n,ss , ψ (2) n,ss , ψ ( j − n,ss , . . . , ψ ( j ) n,ss and for all n . Then considering the dynamics specified by (8) and using the asymptotic expansions(11) and (12), for any ψ n one can collect all O ( (cid:15) j +1 ) terms giving the steady state dynamics˙ ψ ( j +1) n,ss = λ n ψ ( j +1) n,ss + M (cid:88) b =1 (cid:18) sin( ωt ) ψ ( j ) b ,ss I b n,A (cid:19) + (cid:88) a + a = j (cid:18) sin( ωt ) (cid:20) M (cid:88) b =1 b (cid:88) b =1 (cid:2) ψ ( a ) b ,ss ψ ( a ) b ,ss I b b n,A (cid:3)(cid:21)(cid:19) + . . . + (cid:88) a + a + ··· + a j = j (cid:18) sin( ωt ) (cid:20) M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) ψ ( a ) b ,ss . . . ψ ( a j ) b j ,ss I b ...b j n,A (cid:3)(cid:21)(cid:19) . (B2)24onsider, any term of the form sin( ωt ) ψ ( a ) b ,ss . . . ψ ( a m ) b m ,ss for which a + a + · · · + a m = j from (B2).From (B1), terms of this form can be written as the product of sines and cosines with frequenciesthat are multiplies of ω . Considering the trigonometric product-to-sum identitiessin( u ) sin( v ) = 12 (cos( u − v ) − cos( u + v )) , cos( u ) cos( v ) = 12 (cos( u − v ) + cos( u + v )) , sin( u ) cos( v ) = 12 (sin( u + v ) + sin( u − v )) . (B3)By converting all products to sums, and noting, for instance, that ψ ( a ) b ,ss terms have a maximumfrequency of a ω , one can writesin( ωt ) ψ ( a ) b ,ss . . . ψ ( a m ) b m ,ss = j +1 (cid:88) k =0 (cid:20) α k ( ω ) sin( kωt ) + β k ( ω ) cos( kωt ) (cid:21) , (B4)where α k ( ω ) ∈ C and β k ( ω ) ∈ C . Using the product-to-sum formulas, all terms containing sinesand cosines in (B2) can be decomposed into the form (B4). Therefore, (B2) can be rewritten inthe form ˙ ψ ( j +1) n,ss = λ n ψ ( j +1) n,ss + j +1 (cid:88) k =1 (cid:20) γ kn,j +1 ( ω ) sin( kωt ) + δ kn,j +1 ( ω ) cos( kωt ) (cid:21) , (B5)where each γ kn,j ( ω ) ∈ C and δ kn,j ( ω ) ∈ C . Finally, noticing that (B5) is simply a periodically forcedlinear ordinary differential equation, one finds that the steady state dynamics are ψ ( j +1) n,ss ( t ) = j +1 (cid:88) k =1 − γ kn,j +1 ( ω )( kω cos( kωt ) + λ n sin( kωt )) − δ kn,j +1 ( ω )( λ n cos( kωt ) − kω sin( kωt )) λ n + ( kω ) , (B6)which is of the form (B1) thereby completing the proof. Remark . Theorem B.1 says that in response to an input u ( t ) = (cid:15) sin( ωt ), that the O ( (cid:15) j ) termscan be represented exactly as a Fourier series with a fundamental frequency ω and a maximumfrequency jω . This can be exploited to identify the terms of the expansions from (10) and (11) asexplained in the following section. Appendix C Fitting Isostable Reduced Models to Arbitrary Or-der Accuracy using Input-Output Relationships
Linear relationships (16) and (25) can be used to fit the first and second order terms of the expansion(10) and (11), respectively, when considering a system with a single output. Similar relationshipscan be obtained for any desired orders of accuracy. To show how this can be done, first considerthe following Lemma:
Lemma C.1 : Under the application of u ( t ) = (cid:15) sin( ωt ), for a single output system, in steady statethe output relationship from (9) can be written in the following form y ss ( ω, t ) − y = j (cid:88) k =0 (cid:20) τ k ( ω ) sin( kωt ) + σ k ( ω ) cos( kωt ) (cid:21) + O ( (cid:15) j +1 ) , (C1)25here each τ j ( ω ) ∈ C and σ j ( ω ) ∈ C ) is an O ( (cid:15) m ) term with m ≥ j . Remark . Lemma C.1 highlights that under the application of u ( t ) = (cid:15) sin( ωt ), oscillatory compo-nents of frequency jω of the steady state output are O ( (cid:15) m ) terms where m ≥ j . Proof . Consider any term from the Taylor expansion of G y ( ψ , . . . , ψ M ) from (10) of the form (cid:80) Mb =1 (cid:80) b b =1 · · · (cid:80) b p − b p − =1 (cid:80) b p − b p =1 (cid:2) ψ b . . . ψ b p g b ...b p y (cid:3) , where p ∈ N . Considering the asymptotic ex-pansions of the isostable coordinates from Equation (12), in steady state one can rewrite thesesums as M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ b ,ss . . . ψ b p ,ss g b ...b p y (cid:3) = j − (cid:88) k =1 (cid:18) (cid:88) a + a + ··· + a p = k (cid:20) (cid:15) k M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ ( a ) b ,ss . . . ψ ( a p ) b p ,ss g b ...b p y (cid:3)(cid:21)(cid:19) + (cid:88) a + a + ··· + a p = j (cid:20) (cid:15) j M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ ( a ) b ,ss . . . ψ ( a p ) b p ,ss g b ...b p y (cid:3)(cid:21) + O ( (cid:15) j +1 )(C2)Above, all terms have been rewritten to emphasize the orders of (cid:15) in the Taylor expansion. Using(B1), in steady state, terms of the form ψ ( j ) n,ss can be written as a Fourier series expansion withfundamental frequency ω and maximum frequency of jω . Considering the product-to-sum identitiesfrom (B3), this implies Equation (C2) can be rewritten as M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ b ,ss . . . ψ b p ,ss g b ...b p y (cid:3) = j − (cid:88) k =0 (cid:20) τ k, ( ω ) sin( kωt ) + σ k, ( ω ) cos( kωt ) (cid:21) + (cid:88) a + a + ··· + a p = j (cid:20) (cid:15) j M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ ( a ) b ,ss . . . ψ ( a p ) b p ,ss g b ...b p y (cid:3)(cid:21) + O ( (cid:15) j +1 ) , (C3)where each τ k, ( ω ) and σ k, ( ω ) ∈ C contain the contributions from the O ( (cid:15) ) , . . . , O ( (cid:15) j − ) termsof the expansion (C2). To arrive at the simplification (C3) one can note that when using theproduct-to-sum identities (B3) when simplifying each O ( (cid:15) k ) term for k < j , kω is the maximumpossible frequency of the resulting sinusoids. Considering the O ( (cid:15) j ) terms in (C3), continuing thesimplification in this manner yields M (cid:88) b =1 b (cid:88) b =1 · · · b p − (cid:88) b p − =1 b p − (cid:88) b p =1 (cid:2) ψ b ,ss . . . ψ b p ,ss g b ...b p y (cid:3) == j − (cid:88) k =0 (cid:20) τ k, ( ω ) sin( kωt ) + σ k, ( ω ) cos( kωt ) (cid:21) + (cid:20) τ j, ( ω ) sin( jωt ) + σ j, ( ω ) cos( jωt ) (cid:21) + O ( (cid:15) j +1 ) . (C4)where terms that are proportional to sin( kωt ) (resp. cos( kωt )) are absorbed into the new constants τ k, ( ω ) ∈ C (resp. σ k, ( ω ) ∈ C ). Above, note that τ j, ( ω ) and σ j, ( ω ) are O ( (cid:15) j ) terms. Because any26f the terms of the expansion (10) can be written in the form (C4) it is possible to write y ss ( ω, t ) − y in the form of (C1) where the terms proportional to sin( jωt ) and cos( jωt ) are O ( (cid:15) j ) terms whichcompletes the proof.Lemma C.1 will used to show how higher order terms of the expansions (10) and (11) can bedetermined from the steady state behavior. Towards this goal, (B1) will be rewritten as ψ ( j ) n,ss ( t ) = α jn,j ( ω ) sin( jωt ) + β jn,j ( ω ) cos( jωt ) + j − (cid:88) k =0 (cid:20) α kn,j ( ω ) sin( kωt ) + β kn,j ( ω ) cos( kωt ) (cid:21) , (C5)in order to emphasize the highest frequency terms in each ψ ( j ) n,ss . The the following theorem cannow be stated: Theorem C.1 : Consider an isostable coordinate reduction of the form (8) with output equation (9)with a single output and M isostable coordinates where all λ n for n = 1 . . . M are known. Supposefor all k = 1 , . . . , j that the coefficients α kn,k ( ω ) and β kn,k ( ω ) from (C5) have been computed;these represent the coefficients of (C5) that are proportional to the highest frequency terms ofeach ψ ( j ) n,ss ( t ). Suppose also that all terms g b y , g b b y , . . . , g b ...b j y and I n , I b n , . . . , I b ,...,b j − n have beenestimated for all n , i.e., all terms associated with the j th and j − th order accurate expansions inthe isostable coordinates from (10) and (11), respectively, have been estimated. ThenStatement C1: By applying a series of q inputs of the form u = (cid:15) sin( ωt ) with w = w , . . . , w q and measuring the resulting steady state behavior y ss ( ω, t ), all terms of the form g b ...b j +1 y and I b ,...,b j n for all n can be estimated using the linear relationshipΥ j +1 = Ξ † j +1 (cid:18) (cid:15) j +1 π Γ j +1 − R j +1 (cid:19) + O ( (cid:15) ) , (C6)where Γ j +1 = ω (cid:82) π/ω y ss ( w , t ) sin(( j + 1) ω t ) dtω (cid:82) π/ω y ss ( w , t ) cos(( j + 1) ω t ) dt ... ω q (cid:82) π/ω q y ss ( w q , t ) sin(( j + 1) ω q t ) dtω q (cid:82) π/ω q y ss ( w q , t ) cos(( j + 1) ω q t ) dt ∈ C q , (C7)where Υ j +1 ∈ C η j +1 is a vector containing η j +1 total unknown elements and iscomprised solely of terms of the form g b ...b j +1 and I b ,...,b j n , and Ξ j +1 ∈ C q × η j +1 , R j +1 ∈ C q are appropriately defined matrices comprised solely of previously esti-mated terms, and † denotes the pseudoinverse.Statement C2: The terms α j +1 n,j +1 ( ω ) and β j +1 n,j +1 ( ω ) of ψ ( j +1) n,ss from the expansion (C5) can be esti-mated for all n once Υ j +1 has been estimated according to (C6). Proof of Statement C1.
Consider the steady state dynamics of ψ ( j +1) n,ss governed by (B2). Thesedynamics can be rewritten as a sum of sines and cosines according to (B5).˙ ψ ( j +1) n,ss = λ n ψ j +1 n,ss + γ j +1 n,j +1 ( ω ) sin(( j + 1) ωt ) + δ j +1 n,j +1 ( ω ) cos(( j + 1) ωt )+ j (cid:88) k =1 (cid:20) γ kn,j +1 ( ω ) sin( kωt ) + δ kn,j +1 ( ω ) cos( kωt ) (cid:21) , (C8)27n the analysis to follow, the main focus will be on the terms proportional to sin(( j + 1) ωt ) andcos(( j + 1) ωt ) in the above equation. When substituting (C5) into (B2) and subsequently applyingthe product-to-sum identities from (B3), the only way to yield terms that are proportional tosin(( j + 1) ωt ) and cos(( j + 1) ωt ) is to consider the highest frequency terms of each ψ ( k ) n,ss (i.e., thehighest frequency terms only depend on α kn,k ( ω ) and β kn,k ( ω ) for k = 1 , . . . , j ). With this in mind,considering the structure of (B2) and comparing to the structure of (C8), one finds that the terms γ j +1 n,j +1 and δ j +1 n,j +1 can be written as γ j +1 n,j +1 ( ω ) = M (cid:88) b =1 (cid:2) χ n,s ( ω ) I b n,A (cid:3) + M (cid:88) b =1 b (cid:88) b =1 (cid:2) χ b b n,s ( ω ) I b b n,A (cid:3) + . . . + M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) χ b ...b j n,s ( ω ) I b ...b j n,A (cid:3) ,δ j +1 n,j +1 ( ω ) = M (cid:88) b =1 (cid:2) χ b n,c ( ω ) I b n,A (cid:3) + M (cid:88) b =1 b (cid:88) b =1 (cid:2) χ b b n,c ( ω ) I b b n,A (cid:3) + . . . + M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) χ b ...b j n,c ( ω ) I b ...b j n,A (cid:3) , (C9)where each term of the form χ b b ...n,s ( ω ) ∈ C and χ b b ...n,c ( ω ) ∈ C only depend on the highest frequencyterms of each ψ ( k ) n,ss (i.e., only depend on α kn,k ( ω ) and β kn,k ( ω ) for k = 1 , . . . , j which are known byassumption). Because (C8) is of the form (B5) (i.e., a linear, periodically forced ordinary differentialequation) the solution ψ ( j +1) n,ss ( t ) is given by (B6). Note that from γ j +1 n,j +1 ( ω ) and δ j +1 n,j +1 ( ω ) in (C9),the only unknown terms are of the form I b ...b j n,A . Substituting (C9) into (B6), and recalling that α j +1 n,j +1 ( ω ) and β j +1 n,j +1 ( ω ) are the coefficients proportional to the highest frequency terms of ψ ( j +1) n,ss ,one can write α j +1 n,j +1 ( ω ) = M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) z b ...b j n,α ( ω ) I b ...b j n,A (cid:3) + k j +1 n,α ( ω ) ,β j +1 n,j +1 ( ω ) = M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) z b ...b j n,β ( ω ) I b ...b j n,A (cid:3) + k j +1 n,β ( ω ) , (C10)where each term of the form z b b ...n,α ( ω ) ∈ C , z b b ...n,β ( ω ) ∈ C , k j +1 n,α ( ω ) ∈ C , and k j +1 n,β ( ω ) ∈ C can bedetermined from the terms of the reduction that are already known by assumption.Next, invoking Lemma C.1 and considering terms up to and including O ( (cid:15) j +1 ), multiplyingboth sides of (C1) by either sin(( j + 1) ωt ) or cos(( j + 1) ωt ) and integrating over one period yields (cid:34)(cid:82) π/ω y ss ( ωt ) sin(( j + 1) ωt ) dt (cid:82) π/ω y ss ( ωt ) cos(( j + 1) ωt ) dt (cid:35) = πω (cid:20) τ j +1 ( ω ) σ j +1 ( ω ) (cid:21) + O ( (cid:15) j +2 ) , (C11)where τ j +1 ( ω ) and σ j +1 ( ω ) are O ( (cid:15) j +1 ) terms. Focusing on the terms τ j +1 ( ω ) and σ j +1 ( ω ), letting (cid:15) j ( y ss ( ω, t ) ( j ) ) denote the O ( (cid:15) j ) terms of the steady state output, one can collect all O ( (cid:15) j +1 ) terms28rom the output using the asymptotic expansion (10) to yield y ( j +1) ss ( ω, t ) = y + M (cid:88) b =1 ψ ( j +1) b ,ss g b y + (cid:88) a + a = j +1 (cid:20) M (cid:88) b =1 b (cid:88) b =1 (cid:2) ψ ( a ) b ,ss ψ ( a ) b ,ss g b b y (cid:3)(cid:21) + . . . + (cid:88) a + a + ··· + a j +1 = j +1 (cid:20) M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j =1 b j (cid:88) b j +1 =1 (cid:2) ψ ( a ) b ,ss . . . ψ ( a j +1 ) b j +1 ,ss g b ...b j +1 y (cid:3)(cid:21) . (C12)By converting all products to sums using (B3) and recalling from (B1) that each ψ ( a ) b ,ss has termswith a maximum frequency of a ω , one can write y ( j +1) ss ( ω, t ) = (cid:20) τ j +1 ( ω ) sin(( j + 1) ωt ) + σ j +1 ( ω ) cos(( j + 1) ωt ) (cid:21) + j (cid:88) k =0 (cid:20) r k,s ( ω ) sin( kωt ) + r k,c ( ω ) cos( kωt ) (cid:21) , (C13)where r k,c ( ω ) ∈ C and r k,s ( ω ) ∈ C are appropriately defined O ( (cid:15) j +1 ) constants that are proportionalto the lower frequency terms. Note that above, the τ j +1 ( ω ) and σ j +1 ( ω ) are the same as theconstants from (C1), i.e., they are the O ( (cid:15) j +1 ) terms that are proportional to sin(( j + 1) ωt ) andsin(( j + 1) ωt ), respectively.Once again, note that the terms proportional to sin(( j + 1) ωt ) and cos(( j + 1) ωt ) in (C13)when applying the product-to-sum identities to (C12) only result when considering the highestfrequency terms of each ψ ( k ) n,ss . With this in mind, terms that are proportional to sin(( j + 1) ωt ) andcos(( j + 1) ωt ) can be computed solely with knowledge of each α kn,k ( ω ) and β kn,k ( ω ) for k ≤ j + 1.By assumption, each α kn,k ( ω ) and β kn,k ( ω ) is already known for k ≤ j for all n . Consideringthe relationship (C10), the terms α j +1 n,j +1 ( ω ) and β j +1 n,j +1 ( ω ) are known for all n except for thecontributions from terms of the form I b ...b j n,A .By substituting (C5) into (C12), expanding using product-to-sum identities (B3), and keepingin mind the structure of the unknown terms of α j +1 n,j +1 ( ω ) and β j +1 n,j +1 ( ω ) from (C10), one can collectterms appropriately as in (C13) to find that τ j +1 ( ω ) and σ j +1 ( ω ) can be written as τ j +1 ( ω ) = M (cid:88) n =1 (cid:20) M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) z b ...b j n,I,τ ( ω ) I b ...b j n,A (cid:3)(cid:21) + M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j =1 b j (cid:88) b j +1 =1 (cid:20) z b ...b j +1 g,τ ( ω ) g b ...b j +1 y (cid:21) + r τ,j +1 ( ω ) ,σ j +1 ( ω ) = M (cid:88) n =1 (cid:20) M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j − =1 b j − (cid:88) b j =1 (cid:2) z b ...b j n,I,σ ( ω ) I b ...b j n,A (cid:3)(cid:21) + M (cid:88) b =1 b (cid:88) b =1 · · · b j − (cid:88) b j =1 b j (cid:88) b j +1 =1 (cid:20) z b ...b j +1 g,σ ( ω ) g b ...b j +1 y (cid:21) + r σ,j +1 ( ω ) , (C14)where each z b b ...n,I,τ ( ω ), z b b ...n,I,σ ( ω ), z b b ...n,g,τ ( ω ), z b b ...n,g,σ ( ω ), r τ,j +1 ( ω ) and r σ,j +1 ( ω ) ∈ C and can becomputed from the lower order terms of the expansions (10) and (11) that are known by assumption.Here, the terms r τ,j +1 ( ω ) and r σ,j +1 ( ω ) are neither proportional to the terms g b ...b j +1 y nor g b ...b j +1 y .29y considering the steady state behavior in response to inputs of the form u ( t ) = (cid:15) sin( ωt ) atmultiple frequencies, one can substitute Equation (C14) into Equation (C11) and rearrange to yieldΓ j +1 = π(cid:15) j +1 (cid:0) Ξ j +1 Υ j +1 + R j +1 (cid:1) + O ( (cid:15) j +2 ) , (C15)where Γ j +1 is defined in (C7), Υ j +1 ∈ C η j +1 is a vector containing η j +1 total elements of the un-known terms g b ...b j +1 y and I b ,...,b j n , Ξ j +1 is a known matrix that contains terms that are proportionalto unknown terms from Υ j +1 , and R j +1 is a known vector comprised of all remaining terms. Therelationship (C6) follows immediately. Proof of Statement C2.
Estimation of Υ j +1 according to (C6) provides and estimation of all termsof the form I b ...b j n,A . Consequently, all elements of the right hand side of (C10) are known providingan estimate for the terms α j +1 n,j +1 ( ω ) and β j +1 n,j +1 ( ω ) for all n . Remark.
Theorem C.1 provides a strategy to iteratively estimate the terms of the expansions (10)and (11) that are part of the reduced order model (8) with output (9). For example, equation(17) provides a relatively straightforward expression of this form that can be used to infer the thefirst order accurate terms. Equations for identifying second order and higher accuracy terms aremore complicated and it is advisable to compute the terms of matrices Ξ j and R j using a symboliccomputational package. All equations of the form (C6) require knowledge of all lower order termsof the expansions (10) and (11). References [1] P. Benner, S. Gugercin, and K. Willcox. A survey of projection-based model reduction methodsfor parametric dynamical systems.
SIAM Review , 57(4):483–531, 2015.[2] M. Budiˇsi´c, R. Mohr, and I. Mezi´c. Applied Koopmanism.
Chaos: An Interdisciplinary Journalof Nonlinear Science , 22(4):047510, 2012.[3] I. Mezi´c. Analysis of fluid flows via spectral properties of the Koopman operator.
AnnualReview of Fluid Mechanics , 45:357–378, 2013.[4] A. Mauroy and I. Mezi´c. Global stability analysis using the eigenfunctions of the Koopmanoperator.
IEEE Transactions on Automatic Control , 61(11):3356–3369, 2016.[5] S. Bagheri. Koopman-mode decomposition of the cylinder wake.
Journal of Fluid Mechanics ,726:596–623, 2013.[6] H. Arbabi and I. Mezic. Ergodic theory, dynamic mode decomposition, and computation ofspectral properties of the Koopman operator.
SIAM Journal on Applied Dynamical Systems ,16(4):2096–2126, 2017.[7] B. Lusch, N. J. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings ofnonlinear dynamics.
Nature Communications , 9(1):1–10, 2018.[8] S. L. Brunton, B. W. Brunton, J. L. Proctor, E. Kaiser, and N. J. Kutz. Chaos as an inter-mittently forced linear system.
Nature Communications , 8(1):1–9, 2017.[9] C. W. Rowley, I. Mezic, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis ofnonlinear flows.
Journal of Fluid Mechanics , 641(1):115–127, 2009.3010] N. J. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor.
Dynamic mode decomposition:data-driven modeling of complex systems . Society for Industrial and Applied Mathematics,Philadelphia, PA, 2016.[11] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data–driven approximation of thekoopman operator: Extending dynamic mode decomposition.
Journal of Nonlinear Science ,25(6):1307–1346, 2015.[12] E. Yeung, S. Kundu, and N. Hodas. Learning deep neural network representations for koopmanoperators of nonlinear dynamical systems. In , pages 4832–4839. IEEE, 2019.[13] D. Wilson. A data-driven phase and isostable reduced modeling framework for oscillatorydynamical systems.
Chaos: An Interdisciplinary Journal of Nonlinear Science , 30(1):013121,2020.[14] J. L. Proctor, S. L. Brunton, and N. J. Kutz. Dynamic mode decomposition with control.
SIAM Journal on Applied Dynamical Systems , 15(1):142–161, 2016.[15] J. L. Proctor, S. L. Brunton, and N. J. Kutz. Generalizing Koopman theory to allow for inputsand control.
SIAM Journal on Applied Dynamical Systems , 17(1):909–930, 2018.[16] M. O. Williams, M. S. Hemati, S. T. M. Dawson, I. G. Kevrekidis, and C. W. Rowley. Extend-ing data-driven Koopman analysis to actuated systems.
IFAC-PapersOnLine , 49(18):704–709,2016.[17] M. Korda and I. Mezi´c. Linear predictors for nonlinear dynamical systems: Koopman operatormeets model predictive control.
Automatica , 93:149–160, 2018.[18] S. Peitz and S. Klus. Koopman operator-based model reduction for switched-system controlof PDEs.
Automatica , 106:184–191, 2019.[19] E. Kaiser, N. J. Kutz, and S. L. Brunton. Data-driven discovery of Koopman eigenfunctionsfor control. arXiv preprint arXiv:1707.01146 , 2017.[20] A. Mauroy, I. Mezi´c, and J. Moehlis. Isostables, isochrons, and Koopman spectrum for theaction–angle representation of stable fixed point dynamics.
Physica D: Nonlinear Phenomena ,261:19–30, 2013.[21] D. Wilson and J. Moehlis. Extending phase reduction to excitable media: Theory and appli-cations.
SIAM Review , 57(2):201–222, 2015.[22] D. Wilson and J. Moehlis. Isostable reduction with applications to time-dependent partialdifferential equations.
Physical Review E , 94(1):012211, 2016.[23] A. Sootla and A. Mauroy. Geometric properties of isostables and basins of attraction ofmonotone systems.
IEEE Transactions on Automatic Control , 62(12):6183–6194, 2017.[24] D. Wilson and B. Ermentrout. Greater accuracy and broadened applicability of phase reductionusing isostable coordinates.
Journal of Mathematical Biology , 76(1-2):37–66, 2018.[25] D. Wilson and B. Ermentrout. Phase models beyond weak coupling.
Physical Review Letters ,123(16):164101, 2019. 3126] D. Wilson and S. Djouadi. Isostable reduction and boundary feedback control for nonlinearconvective flow. In
Proceedings of the 58th IEEE Conference on Decision and Control , 2019.[27] D. Wilson, S. Sahyoun, P. Kreth, and S. M. Djouadi. Investigating the underlying dynamicalstructure of supersonic flows using effective model reduction. To Appear in : Proceedings of the2020 American Control Conference .[28] D. Wilson. Phase-amplitude reduction far beyond the weakly perturbed paradigm.
PhysicalReview E , 101(2):022220, 2020.[29] M. D. Kvalheim and S. Revzen. Existence and uniqueness of global Koopman eigenfunctionsfor stable fixed points and periodic orbits. arXiv preprint arXiv:1911.11996 , 2019.[30] A. Mauroy and I. Mezi´c. Global computation of phase-amplitude reduction for limit-cycledynamics.
Chaos: An Interdisciplinary Journal of Nonlinear Science , 28(7):073108, 2018.[31] O. Castej´on and A. Guillamon. Phase-amplitude dynamics in terms of extended responsefunctions: Invariant curves and arnold tongues.
Communications in Nonlinear Science andNumerical Simulation , 81:105008, 2020.[32] S. Shirasaka, W. Kurebayashi, and H. Nakao. Phase-amplitude reduction of transient dynamicsfar from attractors for limit-cycling systems.
Chaos: An Interdisciplinary Journal of NonlinearScience , 27(2):023119, 2017.[33] B. Monga, D. Wilson, T. Matchen, and J. Moehlis. Phase reduction and phase-based optimalcontrol for biological systems: a tutorial.
Biological Cybernetics , 113(1-2):11–46, 2019.[34] D. Wilson. Isostable reduction of oscillators with piecewise smooth dynamics and complexFloquet multipliers.
Physical Review E , 99(2):022210, 2019.[35] D. Wilson and B. Ermentrout. Augmented phase reduction of (not so) weakly perturbedcoupled oscillators.
SIAM Review , 61(2):277–315, 2019.[36] S. L. Brunton, B. W. Brunton, J. L. Proctor, and N. J. Kutz. Koopman invariant subspacesand finite linear representations of nonlinear dynamical systems for control.
PloS One , 11(2),2016.[37] P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley.
Turbulence, coherent structures,dynamical systems and symmetry . Cambridge University Press, New York, 1996.[38] C. W. Rowley and S. T. M. Dawson. Model reduction for flow analysis and control.
AnnualReview of Fluid Mechanics , 49:387–417, 2017.[39] R. C. Camphouse. Boundary feedback control using proper orthogonal decomposition models.
Journal of Guidance, Control, and Dynamics , 28(5):931–938, 2005.[40] R. Honeycutt. Stochastic Runge-Kutta algorithms. I. white noise.
Physical Review A , 45:600–603, 1992.[41] J. Rubin and D. Terman. High frequency stimulation of the subthalamic nucleus eliminatespathological thalamic rhythmicity in a computational model.
Journal of Computational Neu-roscience , 16:211–235, 2004. 3242] J. Page and R. R. Kerswell. Koopman analysis of Burgers equation.
Physical Review Fluids ,3(7):071901, 2018.[43] N. J. Kutz, J. L. Proctor, and S. L. Brunton. Applied Koopman theory for partial differentialequations and data-driven modeling of spatio-temporal systems.
Complexity , 2018.[44] T. Toni, D. Welch, N. Strelkowa, A. Ipsen, and M. P. H. Stumpf. Approximate Bayesian com-putation scheme for parameter inference and model selection in dynamical systems.
Journalof the Royal Society Interface , 6(31):187–202, 2009.[45] M. Sunn˚aker, A. G. Busetto, E. Numminen, J. Corander, M. Foll, and C. Dessimoz. Approx-imate Bayesian computation.
PLoS Computational Biology , 9(1), 2013.[46] T. Netoff, M. A. Schwemmer, and T. J. Lewis. Experimentally estimating phase response curvesof neurons: theoretical and practical issues. In
Phase Response Curves in Neuroscience , pages95–129. Springer, 2012.[47] E. M. Izhikevich.