Structure-preserving Model Reduction of Parametric Power Networks
Bita Safaee and Serkan Gugercin

Abstract— We develop a structure-preserving parametric model reduction approach for linearized swing equations where parametrization corresponds to variations in operating conditions. We employ a global basis approach to develop the parametric reduced model in which we concatenate the local bases obtained via H₂-based interpolatory model reduction. The residue of the underlying dynamics corresponding to the simple pole at zero varies with the parameters. Therefore, to have bounded H₂ and H∞ errors, the reduced model residue for the pole at zero should match the original one over the entire parameter domain. Our framework achieves this goal by enriching the global basis based on a residue analysis. The effectiveness of the proposed method is illustrated through two numerical examples.

I. INTRODUCTION

Power networks are naturally modeled as second-order dynamical systems [15], [24], [29], [32]. In the case of large-scale networks, monitoring, analysis, and control of the resulting second-order systems become exceedingly difficult due to unmanageable computational demands. To tackle this predicament, we apply model reduction, in which the goal is to construct a lower-dimensional model that preserves the physically meaningful second-order dynamics and provides a high-fidelity approximation of the input/output behaviour. There is a plethora of model reduction approaches for second-order dynamical systems; see, e.g., [3], [13], [5], [33], [25], [31], [14] for model reduction of general second-order systems, and see, e.g., [22], [15], [16], [26], [37] with a focus on network dynamics.

In this paper, we focus on parametrically varying power networks where the parameter variations correspond to different operating conditions.
This leads to the parametric model reduction (PMOR) framework [8], [11], [19], [30]. The goal of PMOR is to find a parametric reduced model that can approximate the original model with acceptable fidelity over a wide range of parameters. PMOR eliminates the need for performing a separate reduction at each parameter value (operating condition) and therefore plays an important role in control, design, optimization, and uncertainty quantification.

To form our parametric reduced-order structure-preserving (second-order) power network model, we employ a global basis approach where the model reduction basis is constructed by concatenation of local bases for selected parameter samples. We obtain the local bases using second-order interpolatory H₂-optimal methods [34], [35]. Since the full-order dynamics has a pole at zero with a parametrically varying residue, the parametric reduced model needs to retain this residue in order to have bounded H₂ and H∞ error norms over the whole parameter domain. Based on a detailed residue analysis, we establish the subspace conditions on the model reduction basis to guarantee this property and explain the algorithmic implications.

The remainder of this paper is organized as follows: Section II presents the nonlinear model of the swing equations as well as its corresponding non-parametric and parametric second-order linear approximations. In Section III, we describe the parametric reduction method via interpolatory model reduction bases. Section IV presents our main theoretical results on subspace conditions that guarantee parametric residue matching, together with computational details.

*This work was supported in part by the National Science Foundation under Grant No. DMS-1923221.
B. Safaee is with the Department of Mechanical Engineering, Virginia Tech, Blacksburg, VA 24061; [email protected].
S. Gugercin is with the Department of Mathematics, Virginia Tech, Blacksburg, VA 24061; [email protected].
Section V illustrates the feasibility of our approach via numerical examples, followed by conclusions in Section VI.

II. NETWORK SWING MODEL
A power network can be represented by a connected graph G = (V, E) with buses as nodes V = {1, . . . , n} and transmission lines as edges E ⊆ V × V. Generally, a bus can host different combinations of generators and loads, or it may even be a simple junction node. Assume that each bus hosts a generator. We can model the active power P_ij flowing from bus (node) i to bus j along the transmission line (i, j) ∈ E as

  P_ij = (E_i E_j / χ_ij) sin(δ_i − δ_j),   (1)

where δ_i is the phase angle, E_i is the peak voltage magnitude, and χ_ij > 0 is the line reactance. This model ignores the line resistances. The swing equation for a single generator i results from Newton's second law and is given by

  M_i δ̈_i + D_i δ̇_i = P_i^mech − P_i^elec,  i ∈ {1, . . . , n},   (2)

where M_i > 0 is the rotor moment of inertia, D_i > 0 is a damping constant, and P_i^mech and P_i^elec are the input mechanical power and output electrical power of the i-th generator, respectively. Combining (1) and (2) leads to the swing equations of an electric power grid [12], [29], [32]:

  M_i δ̈_i + D_i δ̇_i + Σ_{j ∈ V_i} (E_i E_j / χ_ij) sin(δ_i − δ_j) = P_i^mech − P_i^load = P_i^net,  ∀ i ∈ V,   (3)

where the set V_i ⊆ V refers to those buses connected to bus i in G, P_i^load corresponds to the portion of the electric power consumed at bus i, and P_i^net is the net power input at bus i. Assuming small angle differences (δ_i − δ_j ≈ 0) and unity voltage magnitudes (E_i = 1), we can rewrite (1) as

  P_ij ≈ b_ij (δ_i − δ_j),   (4)

where b_ij = 1/χ_ij is the susceptance between the nodes (i, j) ∈ E. Define δ = [δ_1, δ_2, . . . , δ_n]^T ∈ ℝ^n. Then, the original dynamics in (3) can be linearized as

  Σ :  M δ̈(t) + D δ̇(t) + L δ(t) = B u(t),   y(t) = C δ(t),   (5)

where M = diag(M_1, M_2, . . . , M_n) ∈ ℝ^{n×n} and D = diag(D_1, D_2, . . . , D_n) ∈ ℝ^{n×n} are the diagonal matrices of inertia and damping coefficients, and L ∈ ℝ^{n×n} is the susceptance Laplacian matrix (L = L^T ≥ 0) whose (i, j)-th entry is given by

  [L]_{i,j} :=  −b_ij  if (i, j) ∈ E;   Σ_{(i,k) ∈ E} b_ik  if j = i;   0  otherwise.   (6)

Moreover, u = [P_1^net . . . P_n^net]^T ∈ ℝ^n, B ∈ ℝ^{n×n} is the identity matrix, and C ∈ ℝ^{q×n} yields the output of the system. By defining the new state variable x = [δ^T δ̇^T]^T ∈ ℝ^{2n}, one can equivalently represent the second-order dynamics (5) in first-order form

  ẋ = A x + B u,   y(t) = C x,   (7)

with

  A = [ 0 I ; −M^{-1}L −M^{-1}D ] ∈ ℝ^{2n×2n},   B = [ 0 ; M^{-1}B ] ∈ ℝ^{2n×n},  and  C = [ C 0 ] ∈ ℝ^{q×2n},

where I ∈ ℝ^{n×n} is the identity matrix. Due to the simple zero eigenvalue of L, A has one eigenvalue at zero and 2n − 1 eigenvalues in the left-half plane. Thus (5) is a stable dynamical system, but not asymptotically stable [15].

A. Linearized parametric model
In practice, the matrix L is not constant due to variations, for example, in the peak voltage magnitudes E_i. Therefore, to allow variations, we will view E_i as a parameter that can vary and write it simply as p_i. This leads to the parametric power network model

  M_i δ̈_i(t; p) + D_i δ̇_i(t; p) + Σ_{j ∈ V_i} (p_i p_j / χ_ij) sin(δ_i(t; p) − δ_j(t; p)) = P_i^net,  ∀ i ∈ V,   (8)

with the corresponding linear model

  M δ̈(t; p) + D δ̇(t; p) + L(p) δ(t; p) = B u(t),   y(t; p) = C δ(t; p),   (9)

where p = [p_1 p_2 . . . p_n]^T ∈ Ω ⊆ ℝ^n is the parameter vector, and the matrix L(p), which now varies with p, allows for variation in operating conditions. The parametric matrix L(p) can be written as

  L(p) = P L P,   (10)

where P = diag(p) = diag(p_1, . . . , p_n) ∈ ℝ^{n×n} is diagonal and L is as defined in (6). Note that p_i = 1 for i = 1, . . . , n recovers the non-parametric problem. We will allow the p_i's to vary around this nominal value, i.e., p_i ∈ (1 − α, 1 + α) where 0 < α < 1; thus P stays invertible for every p ∈ Ω. Choosing, e.g., α = 0.15 corresponds to allowing a 15% variation in peak voltage magnitudes.

III. STRUCTURE-PRESERVING PARAMETRIC REDUCED MODELS FOR LINEARIZED SWING EQUATIONS
We seek to develop a reduction framework that not only preserves the structure, but also yields a parametric reduced model that serves with acceptable accuracy as a surrogate over diverse operating conditions. Since it is crucial that the reduced model preserves the physically meaningful second-order structure, instead of transferring the second-order dynamics to first-order form, as in (7), and applying model reduction there, we will directly reduce the second-order dynamics (9). In other words, our goal is to find a reduced parametric system

  M_r δ̈_r(t; p) + D_r δ̇_r(t; p) + L_r(p) δ_r(t; p) = B_r u(t),   y_r(t; p) = C_r δ_r(t; p),   (11)

where M_r, D_r, L_r(p) ∈ ℝ^{r×r}, B_r ∈ ℝ^{r×n}, and C_r ∈ ℝ^{q×r} with r ≪ n, such that y_r(t; p) ≈ y(t; p) for a wide range of inputs u(t) over the parameter range of interest.

Since M and D are symmetric positive definite and L(p) is symmetric positive semi-definite, one should preserve these structures in the reduced model. We achieve this using a Galerkin projection: construct a model reduction basis V ∈ ℝ^{n×r} and obtain the reduced-order matrices in (11) via

  M_r = V^T M V,  D_r = V^T D V,  L_r(p) = V^T L(p) V,  B_r = V^T B,  and  C_r = C V.   (12)
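Because (12) is a congruence transformation with V, symmetry and (semi)definiteness of M, D, and L(p) carry over to the reduced matrices. The following is a minimal NumPy sketch of this projection on a small synthetic network; the graph, sizes, and basis here are illustrative stand-ins, not the model used in the paper.

```python
import numpy as np

def susceptance_laplacian(n, edges):
    """Assemble the Laplacian (6) from edges (i, j, b_ij) with b_ij > 0."""
    L = np.zeros((n, n))
    for i, j, b in edges:
        L[i, i] += b
        L[j, j] += b
        L[i, j] -= b
        L[j, i] -= b
    return L

def galerkin_reduce(M, D, Lp, B, C, V):
    """Structure-preserving Galerkin projection (12)."""
    return V.T @ M @ V, V.T @ D @ V, V.T @ Lp @ V, V.T @ B, C @ V

rng = np.random.default_rng(0)
n, r = 6, 3
edges = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 2.0), (3, 4, 1.5), (4, 5, 0.8), (5, 0, 1.2)]
L0 = susceptance_laplacian(n, edges)        # L = L^T >= 0, L @ ones = 0
M = np.diag(rng.uniform(1.0, 2.0, n))       # inertias M_i > 0
D = np.diag(rng.uniform(0.1, 0.5, n))       # damping D_i > 0
p = rng.uniform(0.85, 1.15, n)              # operating-condition parameters
Lp = np.diag(p) @ L0 @ np.diag(p)           # L(p) = P L P as in (10)
B, C = np.eye(n), np.eye(2, n)              # inputs u = P^net, two outputs

V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # any orthonormal basis
Mr, Dr, Lr, Br, Cr = galerkin_reduce(M, D, Lp, B, C, V)

# symmetry and definiteness are inherited by the reduced matrices
assert np.allclose(Mr, Mr.T) and np.allclose(Lr, Lr.T)
assert np.linalg.eigvalsh(Mr).min() > 0           # M_r is SPD
assert np.linalg.eigvalsh(Lr).min() > -1e-12      # L_r(p) is PSD
```

The same three-line projection applies unchanged to the interpolatory basis V constructed later in Section III-A.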
The accuracy of the structure-preserving reduced model (11) with the form (12) clearly depends on the choice of V. We describe this choice next.

A. Interpolatory model reduction bases
There are numerous ways to choose the model reduction basis V for reducing parametric dynamical systems; see, for example, [1], [8], [11], [19], [30] and the references therein. For the parametric structured second-order dynamical system (9), we will employ the structure-preserving parametric interpolatory model reduction framework from [2], which extended the interpolatory model reduction framework for parametric systems [4] to the structured setting. For recent extensions of structured interpolatory model reduction to special classes of nonlinear systems, see [9], [10].

The transfer functions of the full-order parametric model (9) and the reduced one (11) are, respectively, given by

  H(s, p) = C (s²M + sD + L(p))^{-1} B,  and   (13)
  H_r(s, p) = C_r (s²M_r + sD_r + L_r(p))^{-1} B_r.   (14)

Note that both H(s, p) and H_r(s, p) are q × n matrix-valued rational functions in s. The goal in parametric interpolatory model reduction is to choose V such that H_r(s, p) interpolates H(s, p) at selected points in the frequency s and parameter p. Since H(s, p) is matrix-valued, one enforces interpolation only along selected directions: Let p^(i) be a parameter point of interest, and let {σ_1^(i), . . . , σ_{r_i}^(i)} ⊂ ℂ be the frequency interpolation points with the corresponding tangent directions {b_1^(i), . . . , b_{r_i}^(i)} ⊂ ℂ^n for the parameter sample p^(i). Assume we have m parameter samples {p^(1), . . . , p^(m)}. Then, the goal is to construct V such that

  H(σ_j^(i), p^(i)) b_j^(i) = H_r(σ_j^(i), p^(i)) b_j^(i)   (15)

for j = 1, 2, . . . , r_i and i = 1, 2, . . . , m.

Define K(s, p) = s²M + sD + L(p). For i = 1, 2, . . . , m, construct the local interpolation basis V^(i) ∈ ℂ^{n×r_i} corresponding to the parameter sample p^(i) using

  V^(i) = [ K(σ_1^(i), p^(i))^{-1} B b_1^(i), . . . , K(σ_{r_i}^(i), p^(i))^{-1} B b_{r_i}^(i) ]

and concatenate the local bases to construct the global basis

  V = orth([ V^(1) V^(2) · · · V^(m) ]) ∈ ℝ^{n×r},   (16)

where "orth" refers to an orthonormal basis so that V^T V = I_r. Realness of V is guaranteed by choosing the interpolation points and tangent directions in conjugate pairs. Then, the reduced model (11) obtained as in (12) using V from (16) satisfies the interpolation conditions (15); see [1], [2].

The quality of the reduced model will depend on the choice of interpolation points and tangent directions. In this paper, we choose them, and thus the local bases V^(i), using interpolatory optimal H₂ model reduction. In other words, for every p^(i), we construct the local basis V^(i) to minimize/reduce the H₂-distance

  ‖H(·, p^(i)) − H_r(·, p^(i))‖_{H₂} = ( (1/2π) ∫_{−∞}^{∞} ‖H(ıω, p^(i)) − H_r(ıω, p^(i))‖_F² dω )^{1/2},   (17)

where ı = √−1 and ‖·‖_F denotes the Frobenius norm. Optimal H₂ model reduction is a heavily studied topic. In the case of unstructured linear dynamical systems, i.e., H_r(s) = C_r(sI_r − A_r)^{-1}B_r, the optimal reduced model in the H₂-norm is a bitangential Hermite interpolant to H(s) at the mirror images of the reduced poles [2], [18]. The Iterative Rational Krylov Algorithm (IRKA) [18] and its variants, e.g., [7], [20], [36], have been successfully applied in this setting to construct optimal interpolation points and directions. Since we require the reduced model to have the second-order form, we employ the structured version of IRKA, namely the Second-Order IRKA (SOR-IRKA) [34], [35], to construct the local bases V^(i). SOR-IRKA produces a reduced model that satisfies only a subset of the optimal interpolation conditions at the cost of preserving structure. Since the underlying system has a pole at zero in our case, we will modify SOR-IRKA further; this will be explained in detail in Section IV-B.
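The construction (16) and the tangential interpolation property (15) can be checked numerically. The sketch below picks the interpolation points and directions ad hoc (real points, so V stays real without pairing conjugates) rather than via SOR-IRKA, and uses small random system matrices purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 8, 2
# small second-order test system: diagonal SPD M, D and L(p) = P L0 P
M = np.diag(rng.uniform(1.0, 2.0, n))
D = np.diag(rng.uniform(0.1, 0.5, n))
L0 = np.zeros((n, n))
for i in range(n - 1):                      # path-graph susceptance Laplacian
    b = rng.uniform(0.5, 2.0)
    L0[i, i] += b; L0[i+1, i+1] += b; L0[i, i+1] -= b; L0[i+1, i] -= b
B = np.eye(n)
C = rng.standard_normal((q, n))

K = lambda s, p: s**2 * M + s * D + np.diag(p) @ L0 @ np.diag(p)
H = lambda s, p: C @ np.linalg.solve(K(s, p), B)

# local bases V^(i) = [K(sigma_j, p^(i))^{-1} B b_j] at two parameter samples
samples = [rng.uniform(0.85, 1.15, n) for _ in range(2)]
sigmas = [0.5, 1.0, 2.0]
data = []
for p in samples:
    for s in sigmas:
        b = rng.standard_normal(n)          # tangent direction
        data.append((np.linalg.solve(K(s, p), B @ b), p, s, b))

# global basis (16): concatenate local bases and orthonormalize
V, _ = np.linalg.qr(np.column_stack([d[0] for d in data]))

Hr = lambda s, p: (C @ V) @ np.linalg.solve(V.T @ K(s, p) @ V, V.T @ B)

# tangential interpolation (15) holds at every (sigma_j, p^(i), b_j)
for _, p, s, b in data:
    assert np.allclose(H(s, p) @ b, Hr(s, p) @ b)
```

The check passes because each vector K(σ, p)^{-1}Bb lies in the range of V, which is exactly the subspace condition behind (15).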
For other work on H₂-based model reduction of second-order systems, see, e.g., [6], [26], [37].

Remark 3.1:
As opposed to developing locally optimal H₂ model reduction bases V^(i) and concatenating them to construct the global basis V, following [4] one could introduce a composite error measure (L₂ error in the parameter space and H₂ error in the frequency domain). Then, one can try to construct V directly to minimize this composite measure. We refer the reader to [4] and the more recent works [17], [21] in this direction for the unstructured setting.

IV. MATCHING THE PARAMETRIC RESIDUE CORRESPONDING TO THE POLE AT ZERO
Since L(p) = P L P and L𝟙 = 0, where 𝟙 ∈ ℝ^n is the vector of ones, we obtain L(p) P^{-1}𝟙 = P L 𝟙 = 0. Therefore, for every p ∈ Ω, L(p) has a simple zero eigenvalue with the eigenvector υ = P^{-1}𝟙, and consequently H(s, p) has a simple pole at zero for every p. This means that H(s, p) is not an H₂-function. However, we can still perform H₂-based model reduction on H(s, p) as long as we guarantee that the error system, i.e., H(s, p) − H_r(s, p), stays an H₂-function for every p. This issue has been studied in the non-parametric case: [15] achieves a bounded H₂ error norm in model reduction of second-order networks where the Galerkin projection is obtained via clustering techniques. In a more recent work, [37] splits a non-parametric second-order network with proportional damping into an asymptotically stable system and an average subsystem containing the zero eigenvalue; the asymptotically stable system is reduced via interpolatory techniques, and re-combining it with the average system leads to a reduced model with bounded (and small) H₂ error. We also refer the reader to, e.g., [23], [26]–[28] for the first-order dynamics case.

In reducing the parametric second-order model (9), we need to enforce that H_r(s, p) retains the zero eigenvalue and its parametric residue for every p ∈ Ω so that the error stays bounded over the whole domain. Next, we establish the subspace conditions on the model reduction basis V to achieve this goal.

A. Subspace conditions for matching the parametric residue
For a given parameter, the next result establishes the conditions on V to match the residue at zero.

Theorem 4.1:
Given the parametric full-order model (9), let the parametric reduced model (11) be obtained as in (12). Let p̂ ∈ Ω be a parameter of interest. Define P̂ = diag(p̂) and υ̂ = P̂^{-1}𝟙. Then, for p̂ ∈ Ω, the reduced model H_r(s, p̂) retains the simple pole of H(s, p̂) at zero and its corresponding parameter-dependent residue if υ̂ ∈ span(V).

Proof:
First, we show that L_r(p̂) has a simple zero eigenvalue. Using υ̂ ∈ span(V), write V as V = [ V̄ υ̂ ], where V̄ ∈ ℝ^{n×(r−1)} and υ̂ ∉ span(V̄). Then, using the fact that L(p̂)υ̂ = 0, we obtain

  L_r(p̂) = V^T L(p̂) V = [ V̄^T L(p̂) V̄  0 ; 0  0 ].   (18)

Since υ̂ ∉ span(V̄), L_r(p̂) has only one simple zero eigenvalue. Moreover, since M and D are positive definite and model reduction is performed via a Galerkin projection as in (12), all the other poles of H_r(s, p̂) have negative real parts except for this simple pole at zero.

Now we need to show that the parametrically varying residues of H(s, p̂) and H_r(s, p̂) corresponding to the pole at zero match. To find the residue of H(s, p̂), we follow an analysis inspired by [15]. Transform the second-order dynamics (9) to the equivalent first-order form

  ẋ(t; p) = A(p) x(t; p) + B u(t),   y(t; p) = C x(t; p),

where

  A(p) = [ 0 I ; −M^{-1}L(p) −M^{-1}D ],   B = [ 0 ; M^{-1}B ],  and  C = [ C 0 ].   (19)

Let A(p) have the Jordan decomposition

  A(p) = Q Λ Q^{-1} = [ q Q̄ ] [ 0 0 ; 0 Λ̄ ] [ q̃^T ; Q̃^T ],   (20)

where the Jordan block Λ̄ ∈ ℂ^{(2n−1)×(2n−1)} contains the eigenvalues with negative real parts, and q ∈ ℝ^{2n} and q̃ ∈ ℝ^{2n} are, respectively, the right and left eigenvectors corresponding to the zero eigenvalue such that

  A(p)^T q̃ = 0,   A(p) q = 0,   q̃^T q = 1.   (21)

We note that this decomposition is parameter dependent, but to simplify the notation we write, e.g., Q instead of Q(p). At p = p̂, using L(p̂)υ̂ = 0 together with (20) and (21), we obtain

  q = [ υ̂ ; 0 ]  and  q̃ = (1/α_D) [ D υ̂ ; M υ̂ ],   (22)

where α_D = υ̂^T D υ̂. Using (20), we write

  H(s, p̂) = C (sI − A(p̂))^{-1} B = C Q (sI − Λ)^{-1} Q^{-1} B = (C q)(q̃^T B)/s + C Q̄ (sI − Λ̄)^{-1} Q̃^T B.   (23)

Thus, φ = (C q)(q̃^T B) is the residue of H(s, p̂) for the pole at zero. Then, substituting q and q̃ from (22), and C and B from (19), into φ = (C q)(q̃^T B) yields

  φ = α_D^{-1} (C υ̂)(υ̂^T M M^{-1} B) = α_D^{-1} C υ̂ υ̂^T B.   (24)

Similarly, the residue of the reduced system H_r(s, p̂) corresponding to the pole at zero is obtained as

  φ_r = α_{D_r}^{-1} C V V^T υ̂ υ̂^T V V^T B,   (25)

where α_{D_r} = υ̂^T V D_r V^T υ̂. Since V V^T is an orthogonal projector, if υ̂ ∈ span(V), we have V V^T υ̂ = υ̂ and α_{D_r} = υ̂^T V D_r V^T υ̂ = υ̂^T V V^T D V V^T υ̂ = υ̂^T D υ̂ = α_D, and thus φ_r = α_{D_r}^{-1} C V V^T υ̂ υ̂^T V V^T B = φ. ∎

Theorem 4.1 establishes that if υ̂ = P̂^{-1}𝟙 ∈ span(V), then for that parameter value p̂ the residues of H(s, p̂) and H_r(s, p̂) match for the pole at s = 0. This means that

  H(s, p̂) − H_r(s, p̂) = C (sI − A(p̂))^{-1} B − C_r (sI − A_r(p̂))^{-1} B_r
    = φ/s + H_a(s, p̂) − ( φ_r/s + H_{a_r}(s, p̂) ) = H_a(s, p̂) − H_{a_r}(s, p̂),

where H_a(s, p̂) = C Q̄ (sI − Λ̄)^{-1} Q̃^T B as in (23) and H_{a_r}(s, p̂) = C_r Q̄_r (sI − Λ̄_r)^{-1} Q̃_r^T B_r are asymptotically stable. Therefore, the error system is asymptotically stable at p̂. We write this result as a corollary.

Corollary 4.1:
Assume the set-up of Theorem 4.1. Then the error system H(s, p̂) − H_r(s, p̂) is asymptotically stable, and has bounded H₂ and H∞ norms.

B. Algorithmic Implications
Theorem 4.1 and Corollary 4.1 hint at how to construct V so that the error system is asymptotically stable at a parameter value of interest. As stated in Section III-A, for the parameter samples p^(i), i = 1, . . . , m, we will construct the local bases V^(i) via SOR-IRKA to have local H₂ optimality. However, we will modify SOR-IRKA by taking into consideration that H(s, p) has a pole at zero for every p, i.e., H(s, p) is not an H₂ function. SOR-IRKA is an iterative algorithm that corrects the interpolation points in every step. Due to the pole at zero, SOR-IRKA will drive one of the interpolation points to zero, as it should, so that the pole and residue at zero are matched. This would require computing the vector K(0, p^(i))^{-1} B b_0^(i). However, due to the pole at zero, K(0, p^(i)) is not invertible. Therefore, inspired by Theorem 4.1, in SOR-IRKA we replace this vector with the zero eigenvector of L(p^(i)), so that the span of V^(i) contains this eigenvector. Hence, once the global basis V is constructed as in (16), Theorem 4.1 guarantees that the error system H(s, p) − H_r(s, p) is asymptotically stable for the sampled parameter values p^(i), i = 1, . . . , m.

To use H_r(s, p) for an unsampled parameter value p̂ and still guarantee bounded error, we compute υ̂ = P̂^{-1}𝟙, construct the new basis V̂ = [ V υ̂ ], and obtain H_r(s, p̂) as in (12), now using V̂. Theorem 4.1 then guarantees a bounded error at p̂ as well.

The reduction step (12) does not need to be applied from scratch for every new p̂. For the new basis V̂, consider M̂_r:

  M̂_r = V̂^T M V̂ = [ V^T M V  V^T M υ̂ ; υ̂^T M V  υ̂^T M υ̂ ].

The terms V^T M V, V^T M, and M V are calculated only once in the offline stage using V, and only the vector M υ̂ needs computing for a new parameter p̂.
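The enrichment step and the residue match of Theorem 4.1 are easy to verify numerically. In the sketch below, a stand-in global basis is augmented with υ̂ = P̂^{-1}𝟙 for an unsampled p̂, and the full-order residue φ = α_D^{-1} C υ̂ υ̂^T B from (24) is compared with the reduced residue computed from υ̂_r = V̂^T υ̂. The system matrices and sizes are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, q = 8, 3, 2
# small test system: L(p) = P L0 P with a cycle-graph susceptance Laplacian
L0 = np.zeros((n, n))
for i in range(n):
    j, b = (i + 1) % n, rng.uniform(0.5, 2.0)
    L0[i, i] += b; L0[j, j] += b; L0[i, j] -= b; L0[j, i] -= b
M = np.diag(rng.uniform(1.0, 2.0, n))
D = np.diag(rng.uniform(0.1, 0.5, n))
B, C = np.eye(n), rng.standard_normal((q, n))

p_hat = rng.uniform(0.85, 1.15, n)            # an unsampled parameter
v_hat = 1.0 / p_hat                           # zero eigenvector P_hat^{-1} * ones
assert np.allclose(np.diag(p_hat) @ L0 @ np.diag(p_hat) @ v_hat, 0)

V0, _ = np.linalg.qr(rng.standard_normal((n, r)))   # stand-in global basis
V, _ = np.linalg.qr(np.column_stack([V0, v_hat]))   # enrich with v_hat

# full-order residue at s = 0, eq. (24): phi = C v v^T B / (v^T D v)
phi = C @ np.outer(v_hat, v_hat) @ B / (v_hat @ D @ v_hat)

# reduced residue: v_r = V^T v_hat is the zero eigenvector of L_r(p_hat)
Dr, Br, Cr = V.T @ D @ V, V.T @ B, C @ V
v_r = V.T @ v_hat
phi_r = Cr @ np.outer(v_r, v_r) @ Br / (v_r @ Dr @ v_r)
assert np.allclose(phi, phi_r)                # residues match: v_hat in span(V)
```

Dropping the enrichment (using V0 instead of V) generically breaks the last assertion, which is exactly why the augmentation step is needed for unsampled parameters.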
The situation is similar for the other reduced quantities except for L̂_r(p), due to the nonaffine parametrization L(p) = P L P. An affine parametric approximation of L(p) to allow efficient online computations, via DEIM, for example [11], will be studied in future work.

C. Smaller number of parameters
Now we assume that L(p) is parametrized with a smaller number of parameters. Let p = [p_1 p_2 · · · p_ν]^T ∈ Ω_ν ⊆ ℝ^ν and consider the parametrization L(p) = P L P with

  P = diag(p_1 I_{n_1}, . . . , p_ν I_{n_ν}),   (26)

where n_1 + · · · + n_ν = n and ν < n. This can be viewed as some of the peak voltage magnitudes E_i varying together. This structure will drastically simplify the algorithmic considerations from Section IV-B. In (26) we can also set some p_i's to 1 to allow variations only in a subset of the E_i's.

Proposition 4.1: Consider the parametrization in (26). Let 0_q ∈ ℝ^q denote the zero vector and define

  e_k = [ 0_{n_1+···+n_{k−1}}^T  𝟙_{n_k}^T  0_{n_{k+1}+···+n_ν}^T ]^T ∈ ℝ^n   (27)

for k = 1, 2, . . . , ν. If {e_1, e_2, . . . , e_ν} ⊂ span(V), then H_r(s, p) retains the simple pole at zero and its corresponding parameter-dependent residue of H(s, p) for every p ∈ Ω_ν.

Proof:
For any p̂ ∈ Ω_ν, υ̂ = [ p̂_1^{-1}𝟙_{n_1}^T · · · p̂_ν^{-1}𝟙_{n_ν}^T ]^T is the eigenvector of L(p̂) corresponding to the zero eigenvalue. Note that υ̂ = p̂_1^{-1} e_1 + · · · + p̂_ν^{-1} e_ν. Therefore, if {e_1, e_2, . . . , e_ν} ⊂ span(V), we have υ̂ ∈ span(V) for every p̂ ∈ Ω_ν, and the desired result follows from Theorem 4.1. ∎

Proposition 4.1 reveals that in the case of the parametrization (26), adding ν vectors to the span of V is enough to match the residue at s = 0 for every p ∈ Ω_ν. Therefore, augmenting the global basis by a new vector for a given p̂, as explained in Section IV-B, is no longer necessary. A fixed global basis V satisfying {e_1, e_2, . . . , e_ν} ⊂ span(V) does the job for every p ∈ Ω_ν. Note that one needs ν to be modest so that the reduced dimension stays modest.
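The key identity behind Proposition 4.1 can be sanity-checked on a toy example: under the block parametrization (26), the vector υ̂ = p̂_1^{-1}e_1 + · · · + p̂_ν^{-1}e_ν satisfies P̂υ̂ = 𝟙 and hence L(p̂)υ̂ = 0 for every p̂, while always staying in the fixed ν-dimensional subspace span{e_1, . . . , e_ν}. A small illustrative sketch (hypothetical sizes and graph):

```python
import numpy as np

rng = np.random.default_rng(3)
nu, sizes = 3, [3, 2, 3]                      # nu blocks, n_1 + n_2 + n_3 = n
n = sum(sizes)

# block indicator vectors e_k of (27)
e, start = [], 0
for nk in sizes:
    v = np.zeros(n)
    v[start:start + nk] = 1.0
    e.append(v)
    start += nk
E = np.column_stack(e)

# susceptance Laplacian of a cycle graph
L0 = np.zeros((n, n))
for i in range(n):
    j, b = (i + 1) % n, rng.uniform(0.5, 2.0)
    L0[i, i] += b; L0[j, j] += b; L0[i, j] -= b; L0[j, i] -= b

for _ in range(5):                            # several random points of Omega_nu
    p = rng.uniform(0.85, 1.15, nu)
    P = np.diag(np.repeat(p, sizes))          # P = diag(p_1 I_{n_1}, ..., p_nu I_{n_nu})
    v_hat = E @ (1.0 / p)                     # v_hat = sum_k p_k^{-1} e_k = P^{-1} ones
    assert np.allclose(P @ v_hat, np.ones(n))
    assert np.allclose(P @ L0 @ P @ v_hat, 0) # L(p) v_hat = 0 for every p
```

So any basis V whose span contains the ν columns of E contains υ̂ for every p̂, which is what makes a single fixed global basis sufficient over all of Ω_ν.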
1) Algorithmic details for implementing Proposition 4.1:
The global basis V in Proposition 4.1 can result from any model reduction method of choice; as long as the vectors {e_1, . . . , e_ν} are added to its span, the result will hold. We will form V as in (16), where the local bases result from the modified implementation of SOR-IRKA described in Section IV-B. Given the parameter samples p^(i), i = 1, . . . , m, let υ^(i) denote the eigenvector of L(p^(i)) corresponding to the zero eigenvalue. Our SOR-IRKA implementation provides that {υ^(1), . . . , υ^(m)} ⊂ span(V). As shown in the proof of Proposition 4.1, for any p̂ ∈ Ω_ν, υ̂ = P̂^{-1}𝟙 lies in the ν-dimensional subspace span{e_1, . . . , e_ν}. We will choose m ≥ ν different parameter samples so that {υ^(1), . . . , υ^(m)} spans this subspace. Since these vectors are in the span of V, we automatically satisfy the subspace condition in Proposition 4.1. Therefore, our construction of V via modified SOR-IRKA with m ≥ ν parameter samples guarantees bounded H₂ and H∞ error for every p ∈ Ω_ν without explicitly adding the vectors {e_1, . . . , e_ν} to the model reduction basis V.

V. NUMERICAL RESULTS
We use a linearized model of the 2736-bus Polish network [38] with n = 2736. We focus on a single-input single-output model with B = C^T = [1 0 · · · 0]^T ∈ ℝ^{n×1} and allow variation in the peak voltage magnitudes, i.e., 0.85 ≤ p_i ≤ 1.15 in L(p). Recall that p_i = 1 corresponds to the non-parametric unity voltage magnitude case (E_i = 1).

A. Case 1: two parameters
We consider a parametrization with ν = 2 parameters p_1 and p_2 as P = diag(p_1 I_{n_1}, p_2 I_{n_2}). We pick two random parameter samples p^(1) and p^(2), and apply the modified SOR-IRKA to obtain local bases V^(1) ∈ ℝ^{n×20} and V^(2) ∈ ℝ^{n×20}. An orthogonalization of [V^(1) V^(2)] leads to the global basis V ∈ ℝ^{n×40}, and thus a reduced model H_r(s, p) with r = 40. Due to Proposition 4.1 and the discussion in Section IV-C.1, H_r(s, p) matches the residue at s = 0 and provides bounded H₂ and H∞ error throughout the whole domain [p_1, p_2] ∈ Ω = [0.85, 1.15] × [0.85, 1.15]. To illustrate the accuracy of H_r(s, p), in Figure 1 we show the relative H∞ error over the full parameter space. As the figure illustrates, the structure-preserving reduced model H_r(s, p) is a high-fidelity approximation to H(s, p) over the full parameter space, with a small maximum relative error.

Fig. 1. Example V-A: Relative H∞ error over the parameter domain.

B. Case 2: four parameters
In this example, we consider a parametrization via four parameters p_1, p_2, p_3, and p_4, so that P = diag(p_1 I_{n_1}, p_2 I_{n_2}, p_3 I_{n_3}, p_4 I_{n_4}). We randomly pick four parameter sample sets p^(1), p^(2), p^(3), and p^(4), yielding local bases V^(i) ∈ ℝ^{n×20} for i ∈ {1, 2, 3, 4} and a parametric reduced model of order r = 80 (V ∈ ℝ^{n×80}). As in the previous example, this reduced model guarantees bounded error over the whole parameter space. To show the approximation quality, we pick 200 random samples in the four-dimensional parameter space and depict the resulting relative H∞ error in Figure 2, showing a small maximum relative error over this sample set.

VI. CONCLUSIONS AND FUTURE WORK
We have developed a structure-preserving parametric model reduction approach for linearized swing equations using a global basis approach and H₂-based interpolatory model reduction. We have established the subspace conditions on the model reduction basis so that the error system is an H₂ and H∞ function over the entire parameter space. The efficiency of our proposed approach has been illustrated via two numerical examples.

Parameter sampling for constructing the local bases was not the focus of this work. Any efficient parameter selection methodology can be incorporated into our framework and will be considered in future work, together with the recent composite H₂ × L₂-optimal basis constructions [17], [21]. Extensions to the nonlinear parametric setting are also an important topic to consider.

Fig. 2. Example V-B: Relative H∞ error over 200 samples.

ACKNOWLEDGEMENTS
We thank Dr. Vassilis Kekatos and Dr. Siddharth Bhela for various discussions and for providing the 2736-bus Polish network model.

REFERENCES

[1] A.C. Antoulas, C. Beattie, and S. Gugercin. Interpolatory Methods for Model Reduction. Computational Science and Engineering 21. SIAM, Philadelphia, 2020.
[2] A.C. Antoulas, C.A. Beattie, and S. Gugercin. Interpolatory model reduction of large-scale dynamical systems. In J. Mohammadpour and K. Grigoriadis, editors, Efficient Modeling and Control of Large-Scale Systems, pages 3–58. Springer-Verlag, 2010.
[3] Z. Bai and Y. Su. Dimension reduction of large-scale second-order dynamical systems via a second-order Arnoldi method. SIAM Journal on Scientific Computing, 26(5):1692–1709, 2005.
[4] U. Baur, C. Beattie, P. Benner, and S. Gugercin. Interpolatory projection methods for parameterized model reduction. SIAM Journal on Scientific Computing, 33(5):2489–2518, 2011.
[5] C. Beattie and S. Gugercin. Interpolatory projection methods for structure-preserving model reduction. Systems & Control Letters, 58(3):225–232, 2009.
[6] C.A. Beattie and P. Benner. H₂-optimality conditions for structured dynamical systems. Preprint MPIMD/14-18, Max Planck Institute Magdeburg, Germany, 2014.
[7] C.A. Beattie and S. Gugercin. Realization-independent H₂-approximation. In Proceedings of 51st IEEE Conference on Decision and Control, pages 4953–4958, 2012.
[8] P. Benner, A. Cohen, M. Ohlberger, and K. Willcox. Model Reduction and Approximation: Theory and Algorithms. Computational Science and Engineering, SIAM Publications, Philadelphia, PA, 2017.
[9] P. Benner, S. Gugercin, and S. W. R. Werner. Structure-preserving interpolation for model reduction of parametric bilinear systems. e-print 2007.11269, arXiv, 2020. math.NA.
[10] P. Benner, S. Gugercin, and S. W. R. Werner. Structure-preserving interpolation of bilinear control systems. e-print 2005.00795, arXiv, 2020. math.NA.
[11] P. Benner, S. Gugercin, and K. Willcox. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Review, 57(4):483–531, 2015.
[12] A. R. Bergen and D. J. Hill. A structure preserving model for power system stability analysis. IEEE Transactions on Power Apparatus and Systems, PAS-100(1):25–35, 1981.
[13] T. Bonin, H. Faßbender, A. Soppa, and M. Zaeh. A fully adaptive rational global Arnoldi method for the model-order reduction of second-order MIMO systems with proportional damping. Mathematics and Computers in Simulation, 122:1–19, 2016.
[14] Y. Chahlaoui, K. A. Gallivan, A. Vandendorpe, and P. Van Dooren. Model reduction of second-order systems. In P. Benner, V. Mehrmann, and D.C. Sorensen, editors, Dimension Reduction of Large-Scale Systems, volume 45 of Lecture Notes in Computational Science and Engineering, pages 149–172. Springer-Verlag, Berlin/Heidelberg, Germany, 2005.
[15] X. Cheng, Y. Kawano, and J. M. A. Scherpen. Reduction of second-order network systems with structure preservation. IEEE Transactions on Automatic Control, 62(10):5026–5038, 2017.
[16] X. Cheng, J. M. A. Scherpen, and Y. Kawano. Model reduction of second-order network systems using graph clustering. In Proceedings of 55th IEEE Conference on Decision and Control, pages 7471–7476, 2016.
[17] A. R. Grimm. Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling. PhD thesis, Virginia Tech, 2018.
[18] S. Gugercin, A.C. Antoulas, and C.A. Beattie. H₂ model reduction for large-scale linear dynamical systems. SIAM Journal on Matrix Analysis and Applications, 30(2):609–638, 2008.
[19] J. S. Hesthaven, G. Rozza, and B. Stamm. Certified Reduced Basis Methods for Parametrized Partial Differential Equations. SpringerBriefs in Mathematics. Springer, Switzerland, 2016.
[20] J. Hokanson and C. Magruder. H₂-optimal model reduction using projected nonlinear least squares. e-print 1811.11962, arXiv, 2018.
[21] M. Hund, P. Mlinarić, and J. Saak. An H₂ ⊗ L₂-optimal model order reduction approach for parametric linear time-invariant systems. Proc. Appl. Math. Mech., 18(1):e201800084, 2018.
[22] T. Ishizaki and J. Imura. Clustered model reduction of interconnected second-order systems. Nonlinear Theory and Its Applications, IEICE, 6(1):26–37, 2015.
[23] H.-J. Jongsma, P. Mlinarić, S. Grundel, P. Benner, and H.L. Trentelman. Model reduction of linear multi-agent systems by clustering with H₂ and H∞ error bounds. Mathematics of Control, Signals, and Systems, 30(1):6, 2018.
[24] P. Kundur. Power System Stability and Control. McGraw-Hill, New York, NY, 1994.
[25] D. G. Meyer and S. Srinivasan. Balancing and model reduction for second-order form linear systems. IEEE Transactions on Automatic Control, 41(11):1632–1644, 1996.
[26] P. Mlinarić. Structure-Preserving Model Order Reduction for Network Systems. PhD thesis, Otto-von-Guericke-Universität Magdeburg, 2020.
[27] P. Mlinarić, S. Grundel, and P. Benner. Efficient model order reduction for multi-agent systems using QR decomposition-based clustering. In Proceedings of 54th IEEE Conference on Decision and Control, pages 4794–4799, 2015.
[28] N. Monshizadeh, H.L. Trentelman, and M.K. Camlibel. Projection-based model reduction of multi-agent systems using graph partitions. IEEE Transactions on Control of Network Systems, 1(2):145–154, 2014.
[29] T. Nishikawa and A.E. Motter. Comparative analysis of existing models for power-grid synchronization. New Journal of Physics, 17(1):015012, 2015.
[30] A. Quarteroni, A. Manzoni, and F. Negri. Reduced Basis Methods for Partial Differential Equations: An Introduction. UNITEXT. Springer Cham, 2016.
[31] T. Reis and T. Stykel. Balanced truncation model reduction of second-order systems. Mathematical and Computer Modelling of Dynamical Systems, 14(5):391–406, 2008.
[32] P. W. Sauer and M.A. Pai. Power System Dynamics and Stability, volume 101. Wiley Online Library, 1998.
[33] T. Su and R. R. Craig. Model reduction and control of flexible structures using Krylov vectors. Journal of Guidance, Control, and Dynamics, 14(2):260–267, 1991.
[34] Z. Tomljanović, C. Beattie, and S. Gugercin. Damping optimization of parameter dependent mechanical systems by rational interpolation. Advances in Computational Mathematics, pages 1–24, 2018.
[35] S. A. Wyatt.