Evaluating the accuracy of the dynamic mode decomposition
Hao Zhang, Scott T. M. Dawson, Clarence W. Rowley, Eric A. Deem, Louis N. Cattafesta
Mechanical and Aerospace Engineering, Princeton University
Mechanical Engineering, Florida State University

October 3, 2017
Abstract
Dynamic mode decomposition (DMD) gives a practical means of extracting dynamic information from data, in the form of spatial modes and their associated frequencies and growth/decay rates. DMD can be considered as a numerical approximation to the Koopman operator, an infinite-dimensional linear operator defined for (nonlinear) dynamical systems. This work proposes a new criterion to estimate the accuracy of DMD on a mode-by-mode basis, by estimating how closely each individual DMD eigenfunction approximates the corresponding Koopman eigenfunction. This approach does not require any prior knowledge of the system dynamics or the true Koopman spectral decomposition. The method may be applied to extensions of DMD (i.e., extended/kernel DMD), which are applicable to a wider range of problems. The accuracy criterion is first validated against the true error with a synthetic system for which the true Koopman spectral decomposition is known. We next demonstrate how this proposed accuracy criterion can be used to assess the performance of various choices of kernel when using the kernel method for extended DMD. Finally, we show that our proposed method successfully identifies modes of high accuracy when applying DMD to data from experiments in fluids, in particular particle image velocimetry of a cylinder wake and a canonical separated boundary layer.
The decomposition of spatiotemporal data into spatial modes and temporal functions describing their evolution gives a means to isolate coherent features and assemble low-order representations of complex dynamics. Over the past decade, the dynamic mode decomposition (DMD) [22] has become a routinely-used method for such purposes [6, 21, 25, 30]. See, for example, [17] and [20] for reviews of many ensuing uses and applications of DMD. While successfully used on a range of datasets, general questions still exist in terms of how to select a reduced set of modes, and how to ensure results are quantitatively accurate. On the first point, numerous methods have been proposed to select a reduced number of modes that best represent the dynamics of the system [3, 15, 16, 29]. On the second point, the sensitivity of the outputs of DMD to noisy data has also been investigated [1, 7], and a number of modified algorithms have been proposed that give improved accuracy for noisy data [5, 13].

∗ Email: [email protected]

The present work differs from these past studies by giving a means of estimating the accuracy of DMD on a mode-by-mode basis, without any a-priori knowledge of the system dynamics, noise characteristics, or truncation of low-energy modes. It has been shown previously [21, 26] that DMD approximates the Koopman operator, an infinite-dimensional linear operator defined for (nonlinear) dynamical systems. In this work, we will exploit this connection by estimating the accuracy to which we approximate eigenfunctions of the Koopman operator. This approach allows our analysis to naturally extend to extensions of DMD [27] that are designed to improve the approximation to the Koopman operator for nonlinear systems. Extended DMD uses nonlinear observables to expand the space in which the Koopman operator is approximated.
However, EDMD suffers from the curse of dimensionality: that is, the computational cost increases rapidly with the dimension of the state. To circumvent this issue, kernel DMD (KDMD) [28] was proposed as a computationally inexpensive alternative, which makes use of a kernel function to implicitly include a rich (and nonlinear) set of observables, while maintaining the same computational cost as DMD. The optimal choice of kernel function for KDMD is still an open question, and here we demonstrate that the accuracy criterion may be used to evaluate and compare the performance of various kernels.

The structure of this work is as follows. We first review DMD, the Koopman operator, and kernel DMD in section 2, before presenting and validating our proposed accuracy criterion in section 3. Section 4 uses the accuracy criterion to measure the performance of various kernels in KDMD for a simple nonlinear system, while section 5 demonstrates that this criterion is effective in selecting accurate DMD modes from experimental data.

We first give a review of previous results, including the DMD algorithm and its connections to the Koopman operator (section 2.1), as well as extensions of DMD that can better approximate the Koopman operator for nonlinear systems (section 2.2).
Dynamic mode decomposition was introduced in [22], and our presentation here follows that in [20, 26]. Consider a discrete-time dynamical system whose state space is denoted by X ⊂ R^n, and suppose the dynamics are given by

    x(k+1) = F(x(k)),    x(k) ∈ X.    (2.1)

Let ψ_1, . . . , ψ_q be real-valued functions on X, which we call observables, and let ψ : X → R^q denote the vector-valued function whose components are (ψ_1, . . . , ψ_q). We may not be able to measure the state x directly, but instead, we can measure the vector y = ψ(x). As a special case, y could be the state itself, i.e., y = ψ(x) = x. For complex systems, it can be advantageous to define observables that are nonlinear functions of the state, which will be discussed in more detail in section 2.2. For the purposes of describing standard DMD, we assume y = x.

We consider pairs of snapshots (x_k, x_k♯), with x_k ∈ X, k = 1, 2, . . . , m, and where x_k♯ = F(x_k) is the image of x_k upon application of the dynamics (2.1). For sequential data x(1), . . . , x(m+1) satisfying (2.1), one takes x_k = x(k) and x_k♯ = x(k+1), though non-sequential data may also be used, such as from multiple runs of experiments or simulations [26]. In DMD, we seek a matrix A ∈ R^{q×q} such that y_k♯ ≈ A y_k for k = 1, 2, . . . , m. To this end, we collect the data into matrices

    Y = [ y_1  y_2  · · ·  y_m ],    Y♯ = [ y_1♯  y_2♯  · · ·  y_m♯ ],

and define the DMD matrix A by

    A = Y♯ Y^+.    (2.2)

DMD modes and eigenvalues are the eigenvectors and eigenvalues of A. A typical algorithm to compute these modes and eigenvalues is as follows [26]:

Algorithm (DMD)
1. Compute the reduced SVD Y = U Σ V^T.
2. (Optional) Truncate the SVD by retaining only the first r columns of U and V, and the first r rows and columns of Σ, to obtain U_r, Σ_r, V_r.
3. Let Ã = U_r^T A U_r = U_r^T Y♯ V_r Σ_r^{-1}, with Ã ∈ R^{r×r}.
4. Find the eigenvalues µ_i and eigenvectors ṽ_i of Ã, such that Ã ṽ_i = µ_i ṽ_i.
5. The (projected) DMD modes are given by v_i = U_r ṽ_i, with corresponding (discrete-time) DMD eigenvalues µ_i.

The eigenvectors of the matrix A ∈ R^{q×q} can thus be found from the eigenvectors of the smaller matrix Ã ∈ R^{r×r}. We denote the eigenvalues and eigenvectors of A by {µ_i, v_i}. In the case of sequential data (for which y_k♯ = y_{k+1}), suppose that we can express the initial state as

    y_1 = Σ_{i=1}^{q} c_i v_i.

The time evolution of the system (starting at y_1) is then predicted by DMD to be

    y_{k+1} = A^k y_1 = Σ_{i=1}^{q} c_i µ_i^k v_i.    (2.3)

Therefore, each DMD mode v_i is associated with a single frequency and growth/decay rate (DMD eigenvalue µ_i). In reality, (2.3) may not hold exactly, depending on the quantity and quality of data used, whether the system dynamics are nonlinear, and whether the SVD is truncated in step 2 of the DMD algorithm above. For cases where equation (2.3) does not give an exact description of the dynamics, DMD gives a least-squares fit to the data (as pairs of snapshots).

There are connections between DMD and an infinite-dimensional linear operator called the Koopman operator [21, 26], with the high-level idea being that DMD gives a finite-dimensional numerical approximation of the Koopman operator. Our proposed criterion for evaluating the accuracy of DMD exploits this connection. For a given state space X, the Koopman operator acts on scalar-valued functions of X, which we referred to earlier as observables. Here, we consider observables in L²(X), the space of square-integrable functions on X.
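The DMD algorithm above can be sketched in a few lines of numpy. This is a minimal illustrative implementation (the function and variable names are ours, with Y♯ written as `Ysharp`); the usage example checks it on synthetic data from a known linear map, where DMD should recover the eigenvalues of the system matrix.

```python
import numpy as np

def dmd(Y, Ysharp, r=None):
    """Projected DMD of snapshot matrices Y, Ysharp (columns y_k, y_k#).

    r : optional SVD truncation rank (step 2 of the algorithm).
    Returns (DMD eigenvalues mu_i, projected DMD modes as columns).
    """
    # Step 1: reduced SVD of Y.
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    # Step 2 (optional): truncate to rank r (also guards against rank deficiency).
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Step 3: Atilde = U_r^T A U_r = U_r^T Ysharp V_r inv(Sigma_r).
    Atilde = U.T @ Ysharp @ Vh.T @ np.diag(1.0 / s)
    # Step 4: eigendecomposition of the small matrix.
    mu, vtilde = np.linalg.eig(Atilde)
    # Step 5: lift eigenvectors back to the full space: v_i = U_r vtilde_i.
    modes = U @ vtilde
    return mu, modes

# Usage: snapshot pairs from a linear system x(k+1) = F x(k); the DMD
# eigenvalues should match the eigenvalues of F (here 0.9 and 0.8).
rng = np.random.default_rng(0)
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
X = rng.standard_normal((2, 50))
mu, modes = dmd(X, F @ X)
print(np.sort(mu.real))  # approximately [0.8, 0.9]
```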
Given the dynamics in (2.1), one then defines the Koopman operator K : L²(X) → L²(X) by

    (Kφ)(x) = (φ ◦ F)(x) = φ(F(x)).    (2.4)

To be fully rigorous, one typically assumes the dynamics (2.1) are measure-preserving, so that φ ◦ F is in L² whenever φ ∈ L²; in fact, if F is measure-preserving, then K is an isometry. K maps a function φ to another function φ ◦ F, and (Kφ)(x) gives the value of φ at the next time step. Here we emphasize two points: first, that the Koopman operator acts on functions of the state instead of the state itself; and second, that the Koopman operator is linear, even though the dynamics might be nonlinear. On the second point, note that K(c_1 φ_1 + c_2 φ_2) = c_1 Kφ_1 + c_2 Kφ_2 holds for any functions φ_1, φ_2 and any scalars c_1, c_2. Since the Koopman operator is linear, it may have eigenvalues and eigenfunctions, which satisfy

    Kϕ = µϕ,    (2.5)

where ϕ is the eigenfunction with eigenvalue µ.

Now, suppose we have a given set of observables {ψ_1, ψ_2, · · · , ψ_q}, and suppose ϕ is a Koopman eigenfunction (with eigenvalue µ) that lies in the span of {ψ_j}: i.e.,

    ϕ(x) = w̄_1 ψ_1(x) + · · · + w̄_q ψ_q(x) = w^∗ ψ(x),    (2.6)

for some w ∈ C^q. Then one can show (see [26]) that w^∗ is a left eigenvector of the DMD matrix A with eigenvalue µ (i.e., w^∗ A = µ w^∗). This connection implies that we can approximate Koopman eigenfunctions (and eigenvalues) for a given unknown dynamical system directly from data using DMD. In particular, given left eigenvectors of the DMD matrix (w_i^∗ A = µ_i w_i^∗), we consider ϕ_i(x) = w_i^∗ ψ(x) as a DMD-approximated Koopman eigenfunction, with eigenvalue µ_i.

In order to apply the connection between DMD and Koopman mentioned above, the Koopman eigenfunctions must lie within the space spanned by the observables {ψ_j}.
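The left-eigenvector connection can be checked directly in the simplest setting. The sketch below (a hypothetical linear example of our choosing, not from the paper) verifies numerically that ϕ(x) = w^∗x satisfies the eigenfunction relation (2.5) when w is a left eigenvector of a linear system matrix:

```python
import numpy as np

# Linear dynamics x(k+1) = F x(k); with observables psi(x) = x, the DMD
# matrix is (ideally) F itself, and a left eigenvector w of F gives a
# Koopman eigenfunction phi(x) = w^* x.
F = np.array([[0.9, 0.2],
              [0.0, 0.5]])
mu, W = np.linalg.eig(F.T)      # columns of W are left eigenvectors of F
w, lam = W[:, 0], mu[0]

phi = lambda x: np.vdot(w, x)   # phi(x) = w^* x

x = np.array([0.4, -1.3])
# Eigenfunction relation (2.5): phi(F(x)) = lam * phi(x).
print(phi(F @ x), lam * phi(x))  # the two values agree
```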
If one takes ψ(x) = x, as with standard DMD, then the subspace spanned by {ψ_j} consists only of linear functions of x, and this subspace is often not large enough to include eigenfunctions of K (a notable exception being the case in which F is linear). Extended DMD (EDMD) was proposed in [27] in order to enlarge the subspace of observables, and therefore better approximate Koopman eigenfunctions. In particular, extended DMD approximates the Koopman operator by a weighted residual method, with trial functions given by {ψ_j} and a particular choice of test functions specified by the data. Examples of observables ψ_j(x) could include polynomials, Fourier modes, indicator functions, or spectral elements, as suggested in [27]. For instance, if we take x ∈ R² and take observables to be monomials in components of x up to degree d = 2 (including the constant 1), then the vector of observables is

    ψ(x) = [ 1  x_1  x_2  x_1²  x_1 x_2  x_2² ]^T.

We can potentially approximate many more accurate Koopman eigenfunctions with EDMD than we could with DMD. However, EDMD suffers from the curse of dimensionality [2]. If the state dimension is n and we consider (multivariate) polynomials up to degree d, then the number of observables is q = (n + d choose d), which grows like n^d for large n. For large problems (as arise in fluids), the state dimension n is typically very large (e.g., millions of grid points), so even if one considers only quadratic polynomials, the number of observables scales as n², which is far too large for practical computation. It is thus very computationally expensive to consider large subspaces of observables.

Kernel DMD (KDMD) has been proposed to deal with this curse of dimensionality [28]. In KDMD, EDMD is reformulated such that only inner products of observables need to be computed. The inner product can be evaluated by making use of a kernel function, a common technique in the machine learning community. A kernel function k : R^n × R^n → R is defined as

    k(x, x̂) = ⟨ψ(x), ψ(x̂)⟩.    (2.7)
To appreciate how kernel functions work, consider for example the polynomial kernel k(x, x̂) = (1 + x^T x̂)^d. This kernel corresponds to a set of observables ψ(x) consisting of all monomials in components of x up to degree d [2]. Taking n = 2 and d = 2, this kernel function can be expanded as

    (1 + x^T x̂)² = 1 + 2 x_1 x̂_1 + 2 x_2 x̂_2 + x_1² x̂_1² + 2 x_1 x_2 x̂_1 x̂_2 + x_2² x̂_2² = ⟨ψ(x), ψ(x̂)⟩,    (2.8)

where ψ(x) = (1, √2 x_1, √2 x_2, x_1², √2 x_1 x_2, x_2²). In the terminology of machine learning, ψ is called the feature map, and the space in which ψ(x) ∈ R^q lives is called the feature space (which might be infinite-dimensional). In the example above, the dimension of the (implicitly defined) feature space is q = 6, but in order to compute k(x, x̂), we require inner products only in state space, which has dimension n = 2. Kernel functions hence can be used to evaluate the inner product in a high-dimensional (or even infinite-dimensional) feature space in an efficient way. More examples of kernel functions are given in section 4.1.

The connection between DMD and the Koopman operator as discussed in section 2.1 implies that we can use variants of DMD (e.g., DMD, EDMD, or KDMD) to approximate Koopman eigenfunctions and eigenvalues, given access to data. By applying DMD variants to a given dataset, we can potentially identify many Koopman eigenfunctions and eigenvalues (which we refer to as eigenpairs). However, the reliability of these eigenpairs remains unknown. Before using DMD results for any analysis or reduced-order modeling, it is desirable and necessary to assess the quality (i.e., accuracy) of the results. In this section, we will develop a criterion for evaluating the accuracy of DMD-approximated Koopman eigenpairs.
We describe this accuracy criterion in section 3.1, and then validate it in section 3.2 using a simple nonlinear system where the analytical Koopman eigenpairs are known.

The most common way to select which of the computed DMD modes are most relevant is to use the “mode amplitude”: for sequential data, one projects the initial condition onto the DMD modes and views the magnitudes of the projection coefficients as the mode amplitudes. It is common practice [12, 21, 26] to retain the modes of largest amplitude. This approach sounds plausible; however, it was observed in [15] (which used sparsity-promoting techniques to select modes) that mode amplitude is not always a useful criterion for mode selection. Indeed, mode amplitudes can be misleading, as we illustrate below with a simple example.

Suppose we have three DMD modes,

    v_1 = (1, 0, 0),    v_2 = (0, 1, 0),    v_3 = (0, 1, ε),    (3.1)

where ε is small and thus v_2 and v_3 are almost parallel. If we consider an initial condition x_0 = (1, 0, ζ), and project it onto these DMD modes, we obtain

    x_0 = v_1 − (ζ/ε) v_2 + (ζ/ε) v_3.    (3.2)

If ζ ≪ 1 and ε ≪ ζ, then ζ/ε ≫ 1, so the mode amplitude (defined as the magnitude of the projection coefficients) indicates that v_2 and v_3 are much more important than v_1. The mode amplitudes suggest that we might be able to neglect v_1 without significant adverse effects. However, it is clear that v_1 is much more relevant for reconstructing x_0: if we use only v_2 and v_3, we obtain

    −(ζ/ε) v_2 + (ζ/ε) v_3 = (0, 0, ζ),    (3.3)

which does not accurately approximate x_0 = (1, 0, ζ). A better approximation to x_0 is simply v_1 = (1, 0, 0).

Given data from an experiment or simulation, we can split the dataset into training data and testing data. Training data is used to approximate DMD modes (and associated Koopman eigenpairs), while testing data is used to evaluate the quality of these identified modes.
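The misleading-amplitude example in (3.1)–(3.3) is easy to reproduce numerically. The sketch below uses the illustrative values ε = 10⁻⁴ and ζ = 10⁻² (the specific magnitudes are our choice; only ε ≪ ζ ≪ 1 matters):

```python
import numpy as np

eps, zeta = 1e-4, 1e-2          # illustrative values with eps << zeta << 1
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 1.0, eps])  # nearly parallel to v2
x0 = np.array([1.0, 0.0, zeta])

# Expand x0 in the (non-orthogonal) mode basis: x0 = c1 v1 + c2 v2 + c3 v3.
c = np.linalg.solve(np.column_stack([v1, v2, v3]), x0)
print(c)  # [1, -zeta/eps, zeta/eps]: the "amplitudes" of v2, v3 dwarf v1

# Dropping the small-amplitude mode v1 ruins the reconstruction of x0,
# while keeping v1 alone reconstructs x0 to within zeta.
print(np.linalg.norm(c[1] * v2 + c[2] * v3 - x0))  # error of order 1
print(np.linalg.norm(c[0] * v1 - x0))              # error of order zeta
```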
Data-driven algorithms may suffer from the problem of over-fitting [11], so any evaluation criteria should use testing data that differs from the training data.

The idea of our approach is to evaluate the accuracy of a DMD mode (and eigenvalue) by looking at the accuracy of its corresponding Koopman eigenfunction. Suppose we are given an approximate Koopman eigenpair (µ, ϕ), and we wish to evaluate its accuracy. If (µ, ϕ) were a true Koopman eigenpair, then by definition it would satisfy ϕ ◦ F = µϕ, where F defines the dynamics in (2.1). Ideally, we would like to compute

    ‖ϕ ◦ F − µϕ‖ / ‖ϕ‖,    (3.4)

where ‖·‖ is the norm of a function. (We divide by ‖ϕ‖ so that the above quantity is independent of the scaling of the eigenfunction ϕ.) However, in order to compute (3.4), we require explicit knowledge of the dynamics F, which is unknown in most cases of interest. Instead, we can estimate the above quantity using a finite number of data points (i.e., the testing data). The estimate should give some sense of the quantity in (3.4), using only the testing data, which consists of pairs of samples (x_k, x_k♯) with x_k ∈ X and x_k♯ = F(x_k). This observation motivates the following definition of an accuracy criterion:

    α = Σ_k |ϕ(x_k♯) − µϕ(x_k)| / Σ_k |ϕ(x_k)|,    (3.5)

where |·| denotes the absolute value, and the summation is over the entire testing dataset. A diagram summarizing how this accuracy criterion may be applied is shown in Figure 1.

Figure 1: A diagram summarizing the implementation of the accuracy criterion. Training data is used to approximate Koopman eigenpairs with variants of DMD, while testing data is used to evaluate the quality of Koopman eigenpairs.

More specifically, given a DMD-approximated eigenfunction ϕ(x) = w^∗ψ(x) with eigenvalue µ (i.e.,
w^∗ A = µ w^∗, with A as defined in (2.2)), the accuracy criterion, or estimated mode error, can be written as

    α = Σ_k |w^∗ψ(x_k♯) − µ w^∗ψ(x_k)| / Σ_k |w^∗ψ(x_k)|.    (3.6)

The numerator measures to what extent the eigenfunction equation holds, and the denominator gives a measure of the magnitude of the eigenfunction. Here α can be interpreted as the error of a Koopman eigenpair. The error is defined on a mode-by-mode basis, which enables independent evaluation of each individual DMD mode. Therefore it makes sense to call α the mode error. Observe that α is always non-negative, and it is usually less than 1. When we feed the true Koopman eigenfunction and eigenvalue into equation (3.5), then α = 0 (assuming that the testing data is noise-free). If α is close to 1, the Koopman eigenpair is extremely unreliable, because the discrepancy in the eigenfunction equation is of the same order as the magnitude of the eigenfunction. Therefore, usually we only care about the DMD eigenpairs for which 0 ≤ α ≪ 1. One could alternatively define α using the ℓ² norm (or its square), which yields similar results in terms of indicating the relative accuracy of modes.

A meaningful evaluation criterion should be (fairly) independent of the scaling of the eigenfunctions, the scaling of the testing data, and the size of the testing set. The proposed accuracy criterion approximately satisfies all of these. To show this, we consider the simple case where the full system state is used in DMD, i.e., ψ(x) = x, and the DMD-computed eigenfunction is linear, i.e., ϕ(x) = w^∗ψ(x) = w^∗x. The fact that we normalize by the magnitude of the observables means that α is relatively independent of eigenfunction scaling, data scaling, and data quantity, as is desired. In the case where the observable is not the full state (i.e., when using EDMD or KDMD), the scaling of the eigenfunctions and the size of the testing set again do not influence α, for the same reason.
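A minimal implementation of the mode error (3.6) is sketched below; the function name and the linear test system are our illustrative choices, not from the paper. For the sanity check we use a linear map with ψ(x) = x, for which an exact left eigenpair of the system matrix gives an exact Koopman eigenpair, so α should vanish (up to roundoff) on noise-free testing data.

```python
import numpy as np

def mode_error(w, mu, psi, X_test, Xsharp_test):
    """Estimated mode error alpha from (3.6) for one eigenpair.

    w      : left eigenvector of the DMD matrix A (w^* A = mu w^*).
    mu     : the corresponding DMD eigenvalue.
    psi    : observable map x -> R^q (the identity for standard DMD).
    X_test, Xsharp_test : testing pairs with x_k# = F(x_k).
    """
    phi  = np.array([np.vdot(w, psi(x)) for x in X_test])         # phi(x_k)
    phiF = np.array([np.vdot(w, psi(xs)) for xs in Xsharp_test])  # phi(x_k#)
    return np.sum(np.abs(phiF - mu * phi)) / np.sum(np.abs(phi))

# Sanity check: for linear dynamics and psi(x) = x, an exact left
# eigenpair of F gives alpha = 0 on noise-free testing data.
F = np.array([[0.9, 0.2],
              [0.0, 0.5]])
mu_all, W = np.linalg.eig(F.T)       # left eigenvectors of F (F is real)
rng = np.random.default_rng(1)
X  = [rng.standard_normal(2) for _ in range(100)]
Xs = [F @ x for x in X]
alpha = mode_error(W[:, 0], mu_all[0], lambda x: x, X, Xs)
print(alpha)  # effectively zero (machine precision)
```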
However, due to the nonlinear transformation ψ(x), the scaling of the testing data x may play some role in the size of α. Fortunately, it is reasonable to expect that the relative magnitude of α should still indicate the relative accuracy of different DMD-computed Koopman eigenpairs.

We point out that if the testing data is clean, the mode error is determined only by the quality of the DMD-approximated Koopman eigenpairs. If the testing data is noisy, the mode error is also affected by the noise in the testing data. For experimental data, we have access only to the noisy measurements. In these cases, the relative magnitude of α is still expected to indicate the relative accuracy of Koopman eigenpairs. We reiterate that this definition of error does not assume access to the analytical Koopman spectral decomposition, which is unknown in most cases.

3.2 Validating the accuracy criterion

We have proposed an accuracy criterion that exploits the connection between DMD and the Koopman operator. Before applying this criterion to real data, we first seek to validate it as a reliable measure of accuracy. We will first consider a simple 2D nonlinear system for which the analytical Koopman spectral decomposition is known. Given analytical Koopman eigenpairs, we can define the true error to be the distance between the DMD eigenvalue and the true eigenvalue (eigenvalue error), or the difference between the DMD eigenfunction and the true eigenfunction (eigenfunction error). We will validate the accuracy criterion against the true error, and show that the accuracy criterion reliably indicates accuracy.

Here we consider a 2D nonlinear map (also considered in [26]) with dynamics defined by

    (x_1, x_2) ↦ (γ x_1, δ x_2 + (γ² − δ) x_1²),    γ = 0.9,  δ = 0.8.    (3.7)

It is straightforward to verify that γ and δ are Koopman eigenvalues with respective eigenfunctions ϕ_γ(x) = x_1 and ϕ_δ(x) = x_2 − x_1².
Additional Koopman eigenvalues and eigenfunctions are given by

    µ_{k,ℓ} = γ^k δ^ℓ,    ϕ_{k,ℓ} = ϕ_γ^k ϕ_δ^ℓ,    (3.8)

where k, ℓ = 0, 1, 2, · · · are non-negative integers. Notice that the analytical eigenfunctions are multivariate polynomials in the state variables.

To collect training data, m = 100 random initial points are sampled from a uniform distribution on [−1, 1] × [−1, 1], and we similarly generate m_test = 100 snapshot pairs as the testing data. The generated training and testing datasets are used for subsequent analysis in both this and the next section.

Here we apply EDMD with monomials as observables. In particular, the observables are taken to be ψ_{k,ℓ}(x) = x_1^k x_2^ℓ, with k, ℓ = 0, 1, . . . , 5, so that the feature space dimension is q = 6 × 6 = 36. The leading eigenvalues are identified with very small mode error α, and this is consistent with the comparison to the analytical eigenvalues. As mentioned in section 2.1, if the Koopman eigenfunctions lie in the span of the observables, the eigenfunctions can be found exactly by EDMD. In this case, monomials up to degree 5 span the leading Koopman eigenfunctions, and hence these eigenvalues can be identified.

To validate that the proposed accuracy criterion does indeed indicate accuracy, we now compare α with the true error. We can compute the discrepancy between the DMD eigenvalues, denoted µ̂_i, and the true eigenvalues µ_{k,ℓ} = γ^k δ^ℓ given in equation (3.8), by defining the eigenvalue error

    τ_i = |µ̂_i − µ_{k,ℓ}| / |µ_{k,ℓ}|,    (3.9)

where the indices (k, ℓ) are chosen such that µ_{k,ℓ} is the closest eigenvalue to µ̂_i. We then interpret µ̂_i as a DMD approximation to the analytical eigenvalue µ_{k,ℓ}. We can also compute the discrepancy between the DMD eigenfunctions ϕ̂_i and the true eigenfunctions ϕ_{k,ℓ} given in equation (3.8).
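This EDMD computation can be sketched as follows. This is our own minimal reimplementation, using γ = 0.9 and δ = 0.8 (consistent with the product γδ = 0.72 quoted below) and the 36 monomial observables described above; the leading analytical eigenvalues γ^k δ^ℓ should appear among the eigenvalues of the EDMD matrix.

```python
import numpy as np

# The 2D nonlinear map (3.7), with gamma = 0.9, delta = 0.8.
gamma, delta = 0.9, 0.8
def F(x):
    return np.array([gamma * x[0], delta * x[1] + (gamma**2 - delta) * x[0]**2])

# Monomial observables psi_{k,l}(x) = x1^k x2^l, k, l = 0..5 (q = 36).
def psi(x):
    return np.array([x[0]**k * x[1]**l for k in range(6) for l in range(6)])

rng = np.random.default_rng(0)
Xtrain = rng.uniform(-1.0, 1.0, size=(100, 2))      # m = 100 snapshot pairs
Y      = np.column_stack([psi(x) for x in Xtrain])  # (36, 100)
Ysharp = np.column_stack([psi(F(x)) for x in Xtrain])

A  = Ysharp @ np.linalg.pinv(Y)   # EDMD matrix, (36, 36), cf. (2.2)
mu = np.linalg.eigvals(A)

# The leading analytical eigenvalues gamma^k delta^l should appear in mu.
for target in (1.0, gamma, delta, gamma * delta):
    print(target, np.min(np.abs(mu - target)))  # distances are tiny
```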
We normalize the eigenfunctions ϕ̂_i and ϕ_{k,ℓ} so that |ϕ|_max = 1 in the domain Ω = [−1, 1] × [−1, 1], and define the eigenfunction error

    θ_i = ‖ϕ̂_i − ϕ_{k,ℓ}‖ / ‖ϕ_{k,ℓ}‖,    (3.10)

where ‖·‖ denotes the L² norm, given by

    ‖f‖² = ∫_Ω |f(x)|² dx.    (3.11)

Figure 2: (a) EDMD eigenvalues (circles) and analytical eigenvalues (crosses). EDMD eigenvalues are superimposed by the corresponding accuracy criterion (mode error) α, as shown in the colorbar. (b) Comparison between the accuracy criterion α, the eigenvalue error τ, and the eigenfunction error θ. The eigenvalues are indexed by their absolute value, in descending order.

In order to validate the accuracy criterion, we compare α_i with the eigenvalue error τ_i and the eigenfunction error θ_i in Figure 2(b). We observe that α correlates strongly with both τ and θ, even though the proposed accuracy criterion does not assume access to the analytical Koopman eigenpairs. The proposed accuracy criterion hence indicates accuracy very well, by comparison with the true error defined using the true Koopman eigenpairs. Starting from the 13th eigenvalue, µ̂_13 ≈ µ_{6,0} = γ⁶δ⁰ = (0.9)⁶(0.8)⁰ ≈ 0.53, with corresponding eigenfunction ϕ_{6,0}(x) = x_1⁶, the approximation becomes noticeably less accurate. This comparison gives us confidence in the reliability of the accuracy criterion.

We now consider the 6th eigenvalue µ̂_6 ≈ µ_{1,1} = 0.
72, and the 13th eigenvalue µ̂_13 ≈ µ_{6,0} ≈ 0.53. The small errors τ_6 and θ_6 indicate that the 6th eigenpair is approximated very accurately, while the larger errors τ_13 and θ_13 indicate that the 13th eigenpair is approximated with lower accuracy. The EDMD eigenfunctions are compared with the analytical eigenfunctions in Figure 3. It is observed that the 6th eigenfunction is indeed approximated very accurately, as its small value of α suggests. The 13th eigenfunction is approximated less accurately, as is expected given its larger value of α. This comparison shows that the accuracy criterion does indicate the accuracy of DMD-approximated Koopman eigenpairs, without assuming access to the true Koopman eigenpairs.

Figure 3: Eigenfunctions for the system defined in (3.7), restricted to a domain of [−1, 1] × [−1, 1] and normalized so that |ϕ(x)|_max = 1. The analytical eigenfunction ϕ_{1,1} shown in (a) is closely approximated by the eigenfunction ϕ̂_6 computed by EDMD, shown in (b). However, the analytical eigenfunction ϕ_{6,0} (with eigenvalue µ_{6,0} ≈ 0.53) shown in (c) is less closely approximated by the eigenfunction ϕ̂_13 computed by EDMD, whose real part is shown in (d).

This section focuses on using the accuracy criterion defined in section 3.1 to evaluate the performance of KDMD using various kernel functions. We first introduce a few commonly used kernel functions in section 4.1, then we compare the performance of various kernels in section 4.2, using the same test problem considered in section 3.2. Following this, section 4.3 studies the robustness of various kernels for the case where the data are noisy.
In section 2.2 we briefly described KDMD, which makes use of a kernel function to circumvent the curse of dimensionality associated with EDMD. Application of KDMD requires a suitable choice of kernel function. In order to appreciate how a kernel function may implicitly define an observable function, note that Mercer's theorem [18] states that a (quite broad) class of “Mercer kernels” k(x, x̂) may be written as

    k(x, x̂) = Σ_{i=1}^∞ c_i ψ_i(x) ψ_i(x̂),    c_i ≥ c_{i+1} ≥ 0.    (4.1)

Hence there exists an infinite-dimensional implicit observable function (also called a feature map in the machine learning community)

    ψ(x) = [ √c_1 ψ_1(x)  √c_2 ψ_2(x)  · · ·  √c_i ψ_i(x)  · · · ]^T    (4.2)

such that k(x, x̂) = ⟨ψ(x), ψ(x̂)⟩. We now introduce a few commonly used kernel functions, and in section 4.2 we compare their performance on the example from the previous section.

Polynomial kernel

    k(x, x̂) = (1 + x^T x̂)^d    (4.3)

The (implicit) observables associated with the polynomial kernel are all monomials in components of x ∈ R^n up to degree d. The dimension of the observable vector is q = (n + d choose d). The feature map can be derived for arbitrary n ≥ 1 and d ≥ 1; the case n = 2, d = 2 is given by equation (2.8).

Exponential kernel

    k(x, x̂) = exp(x^T x̂)    (4.4)

The (implicit) observables associated with the exponential kernel are all monomials in components of x, up to infinite degree. An explicit feature map can also be found from a Taylor expansion of the exponential kernel [4]. Taking x ∈ R² for example, the kernel can be expanded as

    exp{x^T x̂} = Σ_{ℓ=0}^∞ (x^T x̂)^ℓ / ℓ!
               = Σ_{ℓ=0}^∞ (x_1 x̂_1 + x_2 x̂_2)^ℓ / ℓ!
               = Σ_{ℓ=0}^∞ Σ_{k=0}^ℓ (ℓ choose k) (x_1 x̂_1)^k (x_2 x̂_2)^{ℓ−k} / ℓ!
               = ⟨ψ(x), ψ(x̂)⟩,

where the observables are ψ_{ℓ,k}(x) = ((ℓ choose k) / ℓ!)^{1/2} x_1^k x_2^{ℓ−k}, with ℓ = 0, 1, 2, · · · and k = 0, 1, 2, · · · , ℓ. Notice that the number of observables is infinite, q = ∞.

Gaussian kernel

    k(x, x̂) = exp(−‖x − x̂‖² / σ²),    (4.5)

where ‖·‖ is the ℓ² norm, and σ scales the kernel width [8]. The Gaussian kernel is a Mercer kernel for all dimensions n ≥ 1. Taking x ∈ R as an example, the (implicit) observables as in equation (4.1) are given by [9]

    ψ_k(x) ∝ exp(−(d − a) x²) H_k(x √(2d)),    c_k ∝ b^k,  b < 1,

where a, b, d are functions of σ, and H_k is the k-th order Hermite polynomial. The number of observables is infinite, q = ∞. For arbitrary n, an explicit feature map can in principle also be found from a Taylor expansion of the Gaussian kernel [4].

Laplacian kernel

    k(x, x̂) = exp(−‖x − x̂‖₁ / σ)    (4.6)

Note the similarity between the Laplacian and Gaussian kernels, with the difference being that the Laplacian kernel uses the ℓ¹ norm in the exponent, without squaring [24]. For arbitrary n, the Laplacian kernel is a valid Mercer kernel [23].

Figure 4: KDMD eigenvalues (circles) colored by their estimated mode error α. Analytical eigenvalues (crosses) are shown for comparison. (a) Polynomial kernel of degree d = 5, q = (7 choose 5) = 21. (b) Exponential kernel, q = ∞. (c) Gaussian kernel with σ = 1, q = ∞. (d) Laplacian kernel with σ = 1, q = ∞.

We now compare the above kernel functions using the example considered in section 3.2. Figure 4 shows the performance of the polynomial, exponential, Gaussian, and Laplacian kernels in identifying the Koopman eigenvalues of the system, using the same training and testing data as in section 3.2. We find that a polynomial kernel of degree d = 5 accurately identifies the leading eigenvalues with small mode error α, as was the case with EDMD.
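The four kernels of section 4.1 can be written down directly; the sketch below (σ is the width parameter of (4.5)–(4.6), and the test points are our illustrative choices) also checks numerically that the degree-2 polynomial kernel matches the explicit feature map of (2.8), and that the Gram matrix of a Mercer kernel is positive semidefinite.

```python
import numpy as np

def k_polynomial(x, xhat, d=5):
    return (1.0 + x @ xhat) ** d

def k_exponential(x, xhat):
    return np.exp(x @ xhat)

def k_gaussian(x, xhat, sigma=1.0):
    return np.exp(-np.sum((x - xhat) ** 2) / sigma**2)

def k_laplacian(x, xhat, sigma=1.0):
    # l1 norm in the exponent, without squaring
    return np.exp(-np.sum(np.abs(x - xhat)) / sigma)

# Check: the d = 2 polynomial kernel equals the inner product of the
# explicit feature map from (2.8).
psi = lambda x: np.array([1.0, np.sqrt(2)*x[0], np.sqrt(2)*x[1],
                          x[0]**2, np.sqrt(2)*x[0]*x[1], x[1]**2])
x, xhat = np.array([0.3, -1.2]), np.array([0.7, 0.5])
print(k_polynomial(x, xhat, d=2), psi(x) @ psi(xhat))  # identical values

# Check: a Gram matrix of a Mercer kernel is positive semidefinite.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20, 2))
G = np.array([[k_gaussian(p, q) for q in pts] for p in pts])
print(np.min(np.linalg.eigvalsh(G)))  # nonnegative up to roundoff
```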
This is not surprising, as the polynomial kernel implicitly defines monomials of the state as observables, which span the same space as the explicitly defined monomials used in EDMD. With increasing order of the polynomial kernel, more eigenvalues can be accurately identified. It is found that the exponential kernel can identify more eigenvalues with small mode error, since the implicit observables associated with the exponential kernel are monomials up to infinite degree. The Gaussian kernel is likewise able to find the leading eigenvalues with small mode error, even though the implicit observables of the Gaussian kernel are not monomials. This demonstrates the potential power of kernel functions: they are able to span a useful function space, primarily because the dimension of the space of (implicit) observables can be large, and even infinite. The Laplacian kernel can accurately approximate only a few leading eigenvalues.

We emphasize that, while the exact Koopman eigenvalues are known in this case, it is possible to use the accuracy criterion to compare the performance of different kernels even when the true dynamics are unknown. Indeed, using only the results of the accuracy criterion, we would reason that the polynomial kernel is the best choice for identifying the leading Koopman eigenvalues accurately.

In practice, data is typically corrupted with noise. Here we present a study of the sensitivity of different kernels with respect to the presence of noise. We add zero-mean Gaussian noise with small standard deviation σ_noise to the 100 random uniformly distributed data pairs taken from [−1, 1] × [−1, 1].

Figure 5: KDMD eigenvalues (circles) colored by their estimated mode error α, identified from noisy data. Analytical eigenvalues (crosses) are shown for comparison. (a) Polynomial kernel of degree d = 5, q = (7 choose 5) = 21.
(b) Exponential kernel, q = ∞ . (c) Gaussian kernel with σ = 1, q = ∞ . (d) Laplacian kernel with σ = 1, q = ∞ .criterion only accounts for the accuracy of DMD approximated Koopman eigenpairs.The results are shown in Figure 5. We observe that the polynomial kernel is slightly more robustthan the other kernels ( α ≈ − ) in the presence of noise, and is able to accurately identify thefirst few leading eigenvalues ( µ = 1 , . q = 21) in comparison to thenumber of snapshots ( m = 100), so we avoid problems of overfitting. In KDMD, the Koopmaneigenpairs are found from the eigendecomposition of the matrix A KDMD = Y + Y , where thecolumns of Y and Y are y = ψ ( x ) ∈ R q and y = ψ ( x ) ∈ R q respectively, and Y , Y ∈ R q × m .The matrix A KDMD has the same non-zero eigenvalues as the DMD matrix A = Y Y + . A is theoptimal (least-square or minimum-norm) solution to min A (cid:107) AY − Y (cid:107) F , where Y , Y ∈ R q × m .For the polynomial kernel, A is the solution to an over-constrained problem ( q < m ), and is hencemore robust to noise. In contrast, the exponential kernel, Gaussian kernel, and Laplacian kernelspan an infinite dimensional space of observables ( q = ∞ ). The finite dimensional approximationto the Koopman operator is found by solving an under-constrained problem ( q (cid:29) m ), which makesit more sensitive to noise, as these three kernels tend to over-fit the noise in the trainning dataset.Given noisy data, they are only able to accurately identify the eigenvalue µ = 1, whose eigenfunctionis a constant. Having demonstrated the use of the accuracy criterion with synthetic data, now we turn our atten-tion to data from fluids experiments. In these cases, the analytical Koopman spectral decompositionis unknown. An important advantage of the proposed accuracy criterion is that it does not relyon known Koopman eigenpairs, and can be applied so long as there is data available. 
We will use the proposed accuracy criterion to identify accurate DMD modes for vorticity data from flow past a circular cylinder in section 5.1, and from a separation experiment in section 5.2.
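As a reminder of the computation being assessed in these examples, the least-squares characterization of the DMD operator discussed above (A = Y₂Y₁⁺ minimizing ‖AY₁ − Y₂‖_F) can be sketched with a toy linear system; the matrix names follow the text, while the toy data is our own illustrative choice.

```python
import numpy as np

# Minimal sketch of the least-squares DMD operator: A = Y2 @ pinv(Y1)
# minimizes ||A Y1 - Y2||_F. The toy data below is illustrative only.

def dmd_operator(Y1, Y2):
    # optimal (least-squares / minimum-norm) linear map taking Y1 to Y2
    return Y2 @ np.linalg.pinv(Y1)

# For a linear system x_{k+1} = M x_k, DMD recovers eig(M) exactly.
rng = np.random.default_rng(0)
M = np.diag([0.9, 0.5])                  # true discrete-time eigenvalues
Y1 = rng.standard_normal((2, 100))       # snapshots x_k
Y2 = M @ Y1                              # shifted snapshots x_{k+1}

mu = np.linalg.eigvals(dmd_operator(Y1, Y2))   # discrete-time eigenvalues
```

Because Y1 has full row rank here, the least-squares solution recovers M exactly; with noisy or rank-deficient data this is where the over- versus under-constrained distinction discussed above matters.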
Figure 6: (a)-(b) Continuous-time DMD eigenvalues (circles) colored by (a) the accuracy criterion α and (b) mode amplitude β. Mode amplitudes are normalized by the maximum amplitude. Dominant frequencies (blue cross signs, ×) are shown for comparison. The first 11 eigenvalues that have small α and large β are shown (red plus signs, +). (c)-(e) The three dominant DMD modes (real part only) picked out by the accuracy criterion and mode amplitude: (c) Mode 1, f = 0.90 Hz; (d) Mode 2, f = 1.77 Hz; (e) Mode 3, f = 2.69 Hz.

In this example, we use experimental particle image velocimetry (PIV) data for flow past a circular cylinder at a Reynolds number of 413. The PIV velocity data was sampled at a frequency of 20 Hz with a resolution of 135 × 80 pixels. See [25] for more details about this experiment. This dataset has been used in other studies [14, 28] for testing various proposed DMD algorithms. We will use vorticity data for DMD, which can be computed from the velocity data by finite difference methods. The state dimension is n = 135 × 80 = 10800, and the number of snapshots in the training data is taken to be m = 1000. We use an additional m_test = 1000 snapshot pairs as testing data.

When we apply DMD to sequential data with time step Δt, the continuous-time DMD eigenvalues λ_DMD are related to the discrete-time DMD eigenvalues µ_DMD by

    µ_DMD = e^(λ_DMD Δt).    (5.1)

The discrete-time DMD eigenvalues are computed with DMD and converted to continuous-time DMD eigenvalues by equation (5.1); in this example the time spacing is Δt = (1/20) s. The DMD frequency f_DMD is related to the continuous-time DMD eigenvalues λ_DMD by

    f_DMD = Im(λ_DMD) / (2π),    (5.2)

where Im(λ_DMD) is the imaginary part of λ_DMD.

We first apply the standard DMD method described in section 2.1. We use a truncation level r = 100, which corresponds to preserving 78.16% of the total energy of the snapshots. The continuous-time DMD eigenvalues are shown shaded by the corresponding accuracy criterion values α in Figure 6(a), and by the time-averaged mode amplitudes β in Figure 6(b) (defined as in [16]).

Inspecting Figure 6(a), we observe that eigenvalues near the imaginary axis are more accurate. This observation is consistent with physical intuition: this flow exhibits a von Kármán vortex street, whose dominant dynamics evolve on a limit cycle. For this experiment, the wake shedding frequency is f_wake = 0.889 Hz [25]. In previous work [25], the physically relevant dominant frequencies are reported as f₀ = 0 Hz, f₁ = 0.89 Hz, f₂ = 1.77 Hz, and f₃ = 2.73 Hz. The DMD mode associated with λ₀ is the mean of the flow, and λ₁, λ₂, and λ₃ are the first, second, and third harmonics of the fundamental wake frequency λ_wake. These four frequencies represent the dominant dynamics in this flow. This observation indicates that the proposed accuracy criterion can be used to identify physically relevant DMD modes/eigenvalues, and to distinguish relevant modes from irrelevant ones. By comparing Figure 6(a) and (b), we verify that the accuracy criterion indicates the same dominant frequencies as the mode amplitude. The DMD modes that have higher accuracy as indicated by the accuracy criterion are shown in Figure 6(c)-(e). We verify that they look similar to those identified in previous work [25].

Figure 7: (a)-(b) Continuous-time KDMD eigenvalues (circles) colored by (a) the accuracy criterion α and (b) mode amplitude β. Mode amplitudes are normalized by the maximum amplitude. Dominant frequencies (blue cross signs, ×) are shown for comparison. The first 11 eigenvalues that have small α and large β are shown (red plus signs, +). (c)-(e) The three dominant DMD modes (real part) picked out by the accuracy criterion and mode amplitude: (c) Mode 1, f = 0.90 Hz; (d) Mode 2, f = 1.79 Hz; (e) Mode 3, f = 2.69 Hz.
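The discrete-to-continuous eigenvalue conversion of equations (5.1) and (5.2) amounts to a complex logarithm; a minimal sketch (function names are ours):

```python
import numpy as np

# Equations (5.1)-(5.2): mu = exp(lambda * dt), f = Im(lambda) / (2 pi).
# The principal branch of the logarithm is used, so frequencies above
# the Nyquist frequency 1/(2 dt) are aliased.

def continuous_eigenvalue(mu, dt):
    # invert equation (5.1): lambda = log(mu) / dt
    return np.log(mu) / dt

def dmd_frequency(lam):
    # equation (5.2): frequency in Hz when dt is in seconds
    return lam.imag / (2.0 * np.pi)

dt = 1.0 / 20.0                        # 20 Hz sampling, as in section 5.1
mu = np.exp(2j * np.pi * 0.9 * dt)     # a neutral mode oscillating at 0.9 Hz
lam = continuous_eigenvalue(mu, dt)
```

A neutrally stable mode (|µ| = 1) maps to a purely imaginary λ, which is why accurate eigenvalues for the limit-cycle dynamics cluster near the imaginary axis in Figure 6(a).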
Next, we investigate the performance of KDMD on this dataset. Figure 7 shows results for a polynomial kernel of degree d = 5, again using a truncation level of r = 100. The DMD eigenvalues are shown in Figure 7(a)-(b), colored by both the accuracy criterion and the mode amplitude. The relevant DMD modes picked out by the accuracy criterion and mode amplitude are shown in Figure 7(c)-(e). We verify that the accuracy criterion is able to isolate the dominant modes when using KDMD.

In this example, we use PIV data from a canonical flow separation experiment sketched in Figure 8. Separation is induced on the surface of a flat plate by a suction/blowing boundary condition imposed on the wall of the wind tunnel, near the trailing edge of the plate. The free-stream velocity is U∞ = 3.… m/s, the chord length is c = 402 mm, the span is s = 305 mm, and the height is h = 0.… c. The Reynolds number based on chord length is Re_c = 10^…, small enough that the boundary layer is likely laminar upstream of the separation point. The average separation bubble length is L_sep = 0.… c. More information regarding the separation system and the flat plate model can be found in [6].

Figure 8: Sketch of the canonical separated flow experiment setup (adapted from [10]) and the PIV measurement region. (a) Experiment setup. (b) The PIV measurement region.

PIV velocity data is sampled at f_s = 1600 Hz, with a resolution of 319 × 62 pixels. The PIV vorticity dataset for the separated flow studied here consists of m = 3000 snapshot pairs (the training data), with a state dimension n = 319 × 62 = 19778. We also take another m_test = 3000 snapshot pairs as testing data.

This particular experimental dataset has been used and studied in previous work [12], in which the shear layer frequency was found to be f_SL = 106 Hz. The shear layer frequency corresponds to a periodic roll-up of the shear layer due to the Kelvin-Helmholtz instability. It can be identified by applying total-least-squares DMD (TDMD), a variant of DMD which makes use of total-least-squares regression to improve the accuracy of DMD for noisy data [5, 13]. As in [12], we use a truncation level of r = 25, which corresponds to preserving 74% of the energy of the data. In this example the time spacing is Δt = 1/f_s = (1/1600) s.

For comparison, we also compute the time-averaged mode amplitude β, as in the example in section 5.1 (e.g., Figure 6(b)). The DMD frequencies are plotted against their accuracy criterion values and mode amplitudes in Figure 9(a)-(b). It is observed that f_SL = 106 Hz is accurately identified by TDMD. In addition, it stands out by having a small mode error. The DMD mode associated with the shear layer frequency is plotted in Figure 9(c), and it agrees with the mode identified in previous work [12].

Figure 9: TDMD frequencies (f_TDMD) and corresponding mode errors/amplitudes: (a) TDMD, accuracy criterion; (b) TDMD, mode amplitude; (c) TDMD mode, f = 106 Hz. Mode amplitudes are normalized by the maximum mode amplitude. The truncation level is r = 25. The shear layer frequency f_SL = 106 Hz is denoted with a red square, and corresponds to the most accurate (smallest α) and largest amplitude (largest β) mode.
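Both examples use vorticity fields computed from the PIV velocity data by finite differences, as noted in section 5.1. A minimal sketch follows; the array layout, grid construction, and test field are our illustrative choices, not details taken from the experiments.

```python
import numpy as np

# Vorticity from a planar velocity field by second-order central
# differences: omega_z = dv/dx - du/dy. Array layout (ny x nx) and the
# test field below are illustrative assumptions.

def vorticity(u, v, dx, dy):
    dvdx = np.gradient(v, dx, axis=1)   # derivative along x (columns)
    dudy = np.gradient(u, dy, axis=0)   # derivative along y (rows)
    return dvdx - dudy

# Rigid-body rotation u = -y, v = x has uniform vorticity omega_z = 2,
# which finite differences reproduce exactly for a linear field.
y, x = np.meshgrid(np.linspace(-1, 1, 62), np.linspace(-1, 1, 319),
                   indexing="ij")
omega = vorticity(-y, x, dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0])
```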
We apply KDMD to this dataset, using a polynomial kernel of degree d = 5, again with a truncation level of r = 25. The eigenvalue frequencies and the corresponding accuracy criterion values and mode amplitudes are plotted in Figure 10(a)-(b). We observe that the shear layer frequency has small error and large mode amplitude, and once again verify that the DMD mode associated with the shear layer frequency (Figure 10(c)) agrees closely with that found in previous work [12].

Exploiting the connection between DMD and the Koopman operator, we have presented an accuracy criterion to evaluate the quality (accuracy) of Koopman eigenpairs approximated with DMD variants. The criterion does not assume access to the analytical Koopman spectral decomposition, which is generally unknown in practice. Furthermore, the proposed accuracy criterion naturally applies to other variants of DMD, because it is based on the general notion of Koopman eigenfunctions. The proposed accuracy criterion is validated with a synthetic system where the analytical Koopman eigenpairs are known. Using the accuracy criterion, we present a study of the performance of various kernels, and assess their sensitivity to noisy data. In our examples, the polynomial kernel (with finite-dimensional observables) performs well both in accuracy and in robustness to noise.

Figure 10: KDMD frequencies (f_KDMD) and corresponding mode errors/amplitudes: (a) KDMD, accuracy criterion; (b) KDMD, mode amplitude; (c) KDMD mode, f = 105 Hz. The truncation level is r = 25. The shear layer frequency f_SL = 106 Hz is denoted with a red square.

The exponential, Gaussian, and Laplacian kernels are able to span an infinite-dimensional function space, but the tradeoff is that they are significantly more sensitive to noise in the dataset. We demonstrate that the accuracy criterion can assist in identifying accurate and physically relevant DMD modes/eigenvalues from noisy experimental data. The accuracy criterion is conceptually simple and easy to use. As a data-driven algorithm, DMD sometimes produces relevant results and sometimes outputs numerical artifacts, depending on the nature of the problem; for reduced-order modeling based on DMD/Koopman modes, it is hence important to assess the quality of DMD results.

Note that our proposed accuracy criterion requires that some portion of the data snapshots be kept out of the DMD analysis for the purpose of assessing mode accuracy. However, it would be possible to incorporate this additional data into the DMD analysis after the DMD modes and eigenvalues of interest have been identified.

The demand for accurate reduced-order models (ROMs) has increased rapidly in recent years, but it is still unclear how to select a subset of Koopman eigenpairs such that the original (nonlinear) system is accurately approximated. In order to build any meaningful ROM, we need at least to assess the accuracy and importance of DMD-approximated Koopman eigenpairs. The present work has shed some light on the accuracy side. However, how to select the most dynamically important Koopman eigenpairs remains an open question.
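The held-out evaluation described above can be illustrated generically. The residual below captures the basic idea of testing a candidate eigenpair (µ, φ) on withheld snapshot pairs; it is a loose illustration of our own, not the paper's exact definition of α.

```python
import numpy as np

# Illustration only: a candidate Koopman eigenpair (mu, phi) should
# satisfy phi(y) = mu * phi(x) on held-out snapshot pairs (x, y).
# This generic residual is NOT the paper's exact definition of alpha.

def eigenpair_residual(phi, mu, X_test, Y_test):
    px = np.array([phi(x) for x in X_test.T])
    py = np.array([phi(y) for y in Y_test.T])
    return np.linalg.norm(py - mu * px) / np.linalg.norm(py)

# For x_{k+1} = 0.9 x_k, phi(x) = x is an eigenfunction with mu = 0.9,
# so its residual vanishes, while a wrong eigenvalue is flagged.
rng = np.random.default_rng(1)
X = rng.standard_normal((1, 50))
Y = 0.9 * X
good = eigenpair_residual(lambda x: x[0], 0.9, X, Y)
bad = eigenpair_residual(lambda x: x[0], 0.5, X, Y)
```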
Unlike techniques such as Proper Orthogonal Decomposition, in which the modes are orthogonal by construction, Koopman eigenfunctions are in general not orthogonal (though orthogonal DMD-like modes may be obtained [19]). Mode amplitudes obtained by a projection of the data onto DMD modes are not necessarily a meaningful criterion for evaluating importance, as demonstrated in the example in section 3. It would be desirable to develop an importance criterion that can guide the selection of modes for the purpose of representing the dynamics accurately.

Acknowledgments

The authors gratefully acknowledge Dr. Jessica Shang for the experimental data of the cylinder flow. Hao Zhang thanks Dr. Matthew O. Williams for his generous guidance and help as a labmate. This material is based upon work supported by the Air Force Office of Scientific Research (AFOSR) under award number FA9550-14-1-0289, and by DARPA award HR0011-16-C-0116.
References

[1] S. Bagheri. Effects of weak noise on oscillating flows: Linking quality factor, Floquet modes, and Koopman spectrum. Physics of Fluids, 26:094104, 2014.
[2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.
[3] K. K. Chen, J. H. Tu, and C. W. Rowley. Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses. Journal of Nonlinear Science, 22(6):887–915, 2012.
[4] A. Cotter, J. Keshet, and N. Srebro. Explicit approximations of the Gaussian kernel. arXiv:1109.4603, 2011.
[5] S. T. M. Dawson, M. S. Hemati, M. O. Williams, and C. W. Rowley. Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition. Experiments in Fluids, 3(57):1–19, 2016.
[6] E. Deem, L. Cattafesta, H. Zhang, C. Rowley, M. Hemati, F. Cadieux, and R. Mittal. Identifying dynamic modes of separated flow subject to ZNMF-based control from surface pressure measurements. AIAA Paper 2017-3309, 47th AIAA Fluid Dynamics Conference, 2017.
[7] D. Duke, J. Soria, and D. Honnery. An error analysis of the dynamic mode decomposition. Experiments in Fluids, 52(2):529–542, 2012.
[8] G. E. Fasshauer. Positive definite kernels: past, present and future. Dolomite Research Notes on Approximation, 4:21–63, 2011.
[9] A. Gretton. Introduction to RKHS, and some simple kernel algorithms. Advanced Topics in Machine Learning. Lecture conducted from University College London, 2013.
[10] J. Griffin, M. Oyarzun, L. N. Cattafesta, J. H. Tu, C. W. Rowley, and R. Mittal. Control of a canonical separated flow. AIAA Paper 2013-2968, 43rd AIAA Fluid Dynamics Conference, 2013.
[11] D. M. Hawkins. The problem of overfitting. Journal of Chemical Information and Computer Sciences, 44(1):1–12, 2004.
[12] M. S. Hemati, E. A. Deem, M. O. Williams, C. W. Rowley, and L. N. Cattafesta. Improving separation control with noise-robust variants of dynamic mode decomposition. AIAA Paper 2016-1103, 54th AIAA Aerospace Sciences Meeting, Jan. 2016.
[13] M. S. Hemati, C. W. Rowley, E. A. Deem, and L. N. Cattafesta. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets. Theoretical and Computational Fluid Dynamics, pages 1–20, 2017.
[14] M. S. Hemati, M. O. Williams, and C. W. Rowley. Dynamic mode decomposition for large and streaming datasets. Physics of Fluids, 26(11):111701, 2014.
[15] M. R. Jovanović, P. J. Schmid, and J. W. Nichols. Sparsity-promoting dynamic mode decomposition. Physics of Fluids, 26(2):024103, 2014.
[16] J. Kou and W. Zhang. An improved criterion to select dominant modes from dynamic mode decomposition. European Journal of Mechanics-B/Fluids, 62:109–129, 2017.
[17] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems, volume 149. SIAM, 2016.
[18] J. Mercer. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, Series A, 209:415–446, 1909.
[19] B. R. Noack, W. Stankiewicz, M. Morzyński, and P. J. Schmid. Recursive dynamic mode decomposition of transient and post-transient wake flows. Journal of Fluid Mechanics, 809:843–872, 2016.
[20] C. W. Rowley and S. T. Dawson. Model reduction for flow analysis and control. Annual Review of Fluid Mechanics, 49(1), 2017.
[21] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009.
[22] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
[23] C. Scovel, D. Hush, I. Steinwart, and J. Theiler. Radial kernels and their reproducing kernel Hilbert spaces. Journal of Complexity, 26(6):641–660, 2010.
[24] C. R. Souza. Kernel functions for machine learning applications. Creative Commons Attribution-Noncommercial-Share Alike, 3, 2010.
[25] J. H. Tu, C. W. Rowley, J. N. Kutz, and J. K. Shang. Spectral analysis of fluid flows using sub-Nyquist-rate PIV data. Experiments in Fluids, 55(9):1–13, 2014.
[26] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2):391–421, 2014.
[27] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6):1307–1346, 2015.
[28] M. O. Williams, C. W. Rowley, and I. G. Kevrekidis. A kernel-based method for data-driven Koopman spectral analysis. Journal of Computational Dynamics, 2(2):247–265, 2015.
[29] A. Wynn, D. Pearson, B. Ganapathisubramani, and P. Goulart. Optimal mode decomposition for unsteady flows.