Structure-Preserving Interpolation of Bilinear Control Systems
Peter Benner∗   Serkan Gugercin†   Steffen W. R. Werner‡

Abstract: In this paper, we extend the structure-preserving interpolatory model reduction framework, originally developed for linear systems, to structured bilinear control systems. Specifically, we give explicit construction formulae for the model reduction bases to satisfy different types of interpolation conditions. First, we establish the analysis for transfer function interpolation for single-input single-output structured bilinear systems. Then, we extend these results to the case of multi-input multi-output structured bilinear systems by matrix interpolation. The effectiveness of our structure-preserving approach is illustrated by means of various numerical examples.
Keywords: model reduction, bilinear systems, structure-preserving approximation, structured interpolation.
AMS subject classifications:
1 Introduction

The modeling of various real-world applications and processes results in dynamical control systems that usually include nonlinearities. Since linear approximations are very often incapable of capturing all the features of nonlinear systems, they are an insufficient description for use in optimization and controller design. A special class of nonlinear systems are bilinear control systems, which contain multiplications of control and state variables, i.e., they are linear in state and control separately, but not jointly [28]. In the last decades, the class of bilinear systems has become an essential tool in systems theory. Bilinear systems naturally appear in the modeling of many physical phenomena, e.g., in the modeling of population, economic, thermal, and mechanical dynamics [28, 29], of electrical circuits [2], of plasma devices [30, 31], or of medical processes [34]. They can also result from the approximation of general nonlinear systems by the Carleman linearization process [16, 26]. Moreover, bilinear systems are nowadays often used in the parameter control of partial differential equations (PDEs) [24, 25]. Looking back to the linear case, bilinear systems can be used as a generalizing framework in the modeling of linear stochastic [11] and parameter-varying systems [7, 10, 15], allowing the application of established system-theoretic tools, such as model order reduction, to those system classes.

∗ Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany. E-mail: [email protected]
Otto von Guericke University, Faculty of Mathematics, Universitätsplatz 2, 39106 Magdeburg, Germany. E-mail: [email protected]

† Department of Mathematics and Computational Modeling and Data Analytics Division, Academy of Integrated Science, Virginia Tech, Blacksburg, VA 24061, USA. E-mail: [email protected]

‡ Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany. E-mail: [email protected]

In this paper, we focus on structured bilinear systems. Such structures arise from the underlying physical phenomena. For example, in the case of bilinear mechanical systems, one has the bilinear control system defined by

\[ M\ddot{q}(t) + D\dot{q}(t) + Kq(t) = \sum_{j=1}^{m} N_{\mathrm{p},j}\, q(t)u_j(t) + \sum_{j=1}^{m} N_{\mathrm{v},j}\, \dot{q}(t)u_j(t) + B_{\mathrm{u}}u(t), \qquad y(t) = C_{\mathrm{p}}q(t) + C_{\mathrm{v}}\dot{q}(t), \tag{1} \]

with $M, D, K, N_{\mathrm{p},j}, N_{\mathrm{v},j} \in \mathbb{R}^{n \times n}$ for all $j = 1, \ldots, m$, $B_{\mathrm{u}} \in \mathbb{R}^{n \times m}$ and $C_{\mathrm{p}}, C_{\mathrm{v}} \in \mathbb{R}^{p \times n}$. In (1), $q(t) \in \mathbb{R}^{n}$, $u(t) \in \mathbb{R}^{m}$, and $y(t) \in \mathbb{R}^{p}$ denote, respectively, the states (degrees of freedom), inputs (forcing terms), and outputs (quantities of interest) of the underlying dynamical system. Due to the usual demand for increasing accuracy in applications, the number of differential equations $n$ describing the dynamics of systems as in (1) quickly increases, resulting in a high demand on computational resources such as time and memory. One remedy is model order reduction: a new, reduced, system is created, consisting of a significantly smaller number of differential equations than needed to define the original one, while still accurately approximating the input-to-output behavior. One can then use this lower-order approximation as a surrogate model for faster simulations or the design of controllers.
The classical (unstructured) bilinear first-order systems are described by the state-space representation

\[ E\dot{x}(t) = Ax(t) + \sum_{j=1}^{m} N_j x(t)u_j(t) + Bu(t), \qquad y(t) = Cx(t), \tag{2} \]

with $E, A, N_j \in \mathbb{R}^{n \times n}$ for all $j = 1, \ldots, m$, $B \in \mathbb{R}^{n \times m}$ and $C \in \mathbb{R}^{p \times n}$. There are different methodologies for model reduction of (2), e.g., the bilinear balanced truncation method [2, 11, 23], different types of moment matching approaches for the underlying multivariate transfer functions in the frequency domain [3, 5, 14, 17, 18], the interpolation of complete Volterra series [8, 19, 35], or the construction of reduced-order bilinear systems from frequency data with the bilinear Loewner framework [4, 21].

While it is possible to rewrite (1) as a classical bilinear system (2), the original structure is completely lost, which can lead to undesirable results in terms of accuracy, stability, or physical interpretation. Moreover, some other structured bilinear systems, such as those with internal delays (see Section 2.2.2), cannot be represented in the form (2) at all. Therefore, here we develop a structure-preserving model reduction approach for different system structures involving bilinear terms. Following [6], which studied structured linear dynamical systems, our goal is to generalize the structured interpolation approach to a general set of multivariate transfer functions associated with different structured bilinear control systems, so that the system structure is preserved in the reduced-order model. The question we aim to answer is how to construct an interpolatory reduced-order model of, e.g., (1), that has the same structure.
Towards this goal, we develop a structure-preserving interpolation framework for this special class of nonlinear systems, namely structured bilinear control systems, thus extending the theoretical analysis and computational framework developed in [6] for linear systems to bilinear control systems.

In Section 2, we review the theory for classical first-order bilinear systems and motivate the more general structure we will consider via two examples. Section 3 gives subspace construction formulae for interpolatory model reduction bases in the case of single-input single-output (SISO) systems and illustrates the effectiveness of the approach employing two numerical examples. The developed theory is then extended in Section 4 to the multi-input multi-output (MIMO) case by matrix interpolation. Section 5 concludes the paper.

In this section, we present the basic properties of the structured bilinear systems considered in this paper. To clarify the presentation, we first revisit the unstructured (classical) bilinear control systems as given in (2) and then generalize these concepts to the structured case.

2.1 Revisiting the classical first-order bilinear systems
Given the unstructured bilinear system (2), define $N = \begin{bmatrix} N_1 & \ldots & N_m \end{bmatrix}$ and let $I_{m^k}$ be the identity matrix of dimension $m^k$. Assuming for simplicity $E$ to be invertible, the initial condition $x(0) = 0$, and some additional mild conditions, the output of (2) can be expressed in terms of a Volterra series [33], i.e.,

\[ y(t) = \sum_{k=1}^{\infty} \int_0^{t} \int_0^{t_1} \cdots \int_0^{t_{k-1}} g_k(t_1, \ldots, t_k) \left( u\Big(t - \sum_{i=1}^{k} t_i\Big) \otimes \cdots \otimes u(t - t_1) \right) \mathrm{d}t_k \cdots \mathrm{d}t_1, \]

where $g_k$, for $k \geq 1$, is the $k$-th regular Volterra kernel given by

\[ g_k(t_1, \ldots, t_k) = C e^{E^{-1}At_k} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes E^{-1}N \right)\left( I_{m^{j}} \otimes e^{E^{-1}At_{k-j}} \right) \left( I_{m^{k-1}} \otimes E^{-1}B \right). \tag{3} \]

Using the multivariate Laplace transform [33], the regular Volterra kernels (3) yield a representation of (2) in the frequency domain by the so-called multivariate regular transfer functions

\[ G_k(s_1, \ldots, s_k) = C (s_k E - A)^{-1} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes N \right)\left( I_{m^{j}} \otimes (s_{k-j}E - A)^{-1} \right) \left( I_{m^{k-1}} \otimes B \right), \tag{4} \]

with $s_1, \ldots, s_k \in \mathbb{C}$. This compact expression is actually the collection of the different combinations of the bilinear matrices, i.e., we can write (4) as

\[ G_k(s_1, \ldots, s_k) = \big[ C(s_kE - A)^{-1}N_1 \cdots N_1(s_1E - A)^{-1}B, \;\; C(s_kE - A)^{-1}N_1 \cdots N_2(s_1E - A)^{-1}B, \;\; \ldots, \;\; C(s_kE - A)^{-1}N_1 \cdots N_m(s_1E - A)^{-1}B, \;\; \ldots, \;\; C(s_kE - A)^{-1}N_m \cdots N_m(s_1E - A)^{-1}B \big]. \tag{5} \]

For SISO systems, (4) simplifies to

\[ G_k(s_1, \ldots, s_k) = C (s_kE - A)^{-1} \prod_{j=1}^{k-1} N (s_{k-j}E - A)^{-1} B. \]

As stated in Section 1, for the unstructured bilinear system case (2), there are already different model reduction techniques available. For the structured bilinear systems we consider here, we will concentrate on interpolatory methods.

Note that the assumption of $E$ being invertible is only made for ease of presentation. The interpolation theory and interpolatory properties of the reduced-order model developed in the following sections hold in the general situation, yet the final construction of the reduced-order model might need some additional treatment, as in the linear and unstructured bilinear cases; see, e.g., [1, 12, 22].

For the transition from unstructured to structured bilinear systems, we start by recalling the case of linear systems. The classical (unstructured) linear dynamical systems are described, in state-space, by

\[ E\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t), \]

with $E, A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ and $C \in \mathbb{R}^{p \times n}$.
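The SISO simplification of (4) collapses to alternating resolvent solves, which suggests a simple recursive evaluation scheme: solve with the innermost shifted matrix first, multiply by $N$, and repeat. The following numpy sketch is our own illustration (not code from the paper) and assumes the shifted matrices $s_jE - A$ are invertible:

```python
import numpy as np

def regular_tf(E, A, N, b, c, s):
    """Evaluate the k-th regular transfer function of a SISO bilinear system,
    G_k(s_1, ..., s_k) = c (s_k E - A)^{-1} N (s_{k-1} E - A)^{-1} N ... (s_1 E - A)^{-1} b,
    at the frequency tuple s = (s_1, ..., s_k)."""
    # innermost factor: (s_1 E - A)^{-1} b
    x = np.linalg.solve(s[0] * E - A, b)
    # work outward: multiply by N, then solve with the next shifted matrix
    for sj in s[1:]:
        x = np.linalg.solve(sj * E - A, N @ x)
    return (c @ x).item()
```

Each additional subsystem order adds only one multiplication by $N$ and one linear solve, so sampling higher-order transfer functions stays cheap once factorizations of the shifted matrices are available.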
Assuming the initial condition $Ex(0) = 0$, the Laplace transform maps this problem to the frequency domain:

\[ (sE - A)X(s) = BU(s), \qquad Y(s) = CX(s), \tag{6} \]

where $X(s)$, $U(s)$ and $Y(s)$ denote the Laplace transforms of the time-dependent functions $x(t)$, $u(t)$, and $y(t)$. Inspired by much richer structured systems than (6) appearing in the linear case, such as those describing the dynamic response of a viscoelastic body, [6] introduced a more general system of equations in the frequency domain, given by

\[ \mathcal{K}(s)X(s) = \mathcal{B}(s)U(s), \qquad Y(s) = \mathcal{C}(s)X(s), \tag{7} \]

with matrix-valued functions $\mathcal{K}\colon \mathbb{C} \to \mathbb{C}^{n \times n}$, $\mathcal{B}\colon \mathbb{C} \to \mathbb{C}^{n \times m}$ and $\mathcal{C}\colon \mathbb{C} \to \mathbb{C}^{p \times n}$. Note that (7) contains (6) as a special case. Assuming the problem to be regular, i.e., there exists an $s \in \mathbb{C}$ for which the matrix functions are defined and $\mathcal{K}(s)$ is full-rank, the problem (7) leads to the general formulation of structured transfer functions of linear systems,

\[ G_{\mathrm{lin}}(s) = \mathcal{C}(s)\mathcal{K}(s)^{-1}\mathcal{B}(s), \tag{8} \]

describing the input-to-output behavior in the frequency domain.

Inspired by (8) and the structure of the examples in Sections 2.2.1 and 2.2.2, we consider here a more general, structured formulation of the regular subsystem transfer functions corresponding to structured bilinear systems, namely

\[ G_k(s_1, \ldots, s_k) = \mathcal{C}(s_k)\mathcal{K}(s_k)^{-1} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes \mathcal{N}(s_{k-j}) \right)\left( I_{m^{j}} \otimes \mathcal{K}(s_{k-j})^{-1} \right) \left( I_{m^{k-1}} \otimes \mathcal{B}(s_1) \right), \tag{9} \]

for $k \geq 1$, where $\mathcal{N}(s) = \begin{bmatrix} \mathcal{N}_1(s) & \ldots & \mathcal{N}_m(s) \end{bmatrix}$, with the matrix functions $\mathcal{C}\colon \mathbb{C} \to \mathbb{C}^{p \times n}$, $\mathcal{K}\colon \mathbb{C} \to \mathbb{C}^{n \times n}$, $\mathcal{B}\colon \mathbb{C} \to \mathbb{C}^{n \times m}$, and $\mathcal{N}_j\colon \mathbb{C} \to \mathbb{C}^{n \times n}$ for $j = 1, \ldots, m$. This general formulation includes the transfer functions of classical bilinear systems (4), since we can choose

\[ \mathcal{C}(s) = C, \quad \mathcal{K}(s) = sE - A, \quad \mathcal{N}(s) = N, \quad \mathcal{B}(s) = B. \]
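To make this choice concrete, one can encode a classical bilinear system directly as the quadruple of matrix-valued functions from (9); the sketch below (our illustration, with made-up data) does so and evaluates the first subsystem transfer function $G_1(s) = \mathcal{C}(s)\mathcal{K}(s)^{-1}\mathcal{B}(s)$:

```python
import numpy as np

# Made-up classical bilinear system data (SISO, n = 3).
E = np.eye(3)
A = -np.diag([1.0, 2.0, 3.0])
N = 0.1 * np.ones((3, 3))
B = np.array([[1.0], [0.0], [2.0]])
C = np.array([[1.0, 1.0, 0.0]])

# Structured matrix functions for the classical case:
# C(s) = C, K(s) = sE - A, N(s) = N, B(s) = B.
Cfun = lambda s: C
Kfun = lambda s: s * E - A
Nfun = lambda s: N
Bfun = lambda s: B

def G1(s):
    """First (linear) subsystem transfer function G_1(s) = C(s) K(s)^{-1} B(s)."""
    return (Cfun(s) @ np.linalg.solve(Kfun(s), Bfun(s))).item()
```

The delay and second-order examples of Sections 2.2.1 and 2.2.2 fit the same interface; only the four functions change.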
Sections 2.2.1 and 2.2.2 give two examples of structured system classes that can be formulated in this general setting.

For the construction of structured reduced-order bilinear models, we will use the projection approach, i.e., we will construct two model reduction bases $W, V \in \mathbb{C}^{n \times r}$ such that the reduced-order bilinear system quantities are computed by

\[ \widehat{\mathcal{C}}(s) = \mathcal{C}(s)V, \quad \widehat{\mathcal{K}}(s) = W^{\mathsf{H}}\mathcal{K}(s)V, \quad \widehat{\mathcal{B}}(s) = W^{\mathsf{H}}\mathcal{B}(s), \quad \text{and} \quad \widehat{\mathcal{N}}_j(s) = W^{\mathsf{H}}\mathcal{N}_j(s)V, \tag{10} \]

for $j = 1, \ldots, m$. The structured reduced-order bilinear control system $\widehat{G}$ is then given by the underlying reduced-order matrices from (10) and the corresponding structured multivariate subsystem transfer functions

\[ \widehat{G}_k(s_1, \ldots, s_k) = \widehat{\mathcal{C}}(s_k)\widehat{\mathcal{K}}(s_k)^{-1} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes \widehat{\mathcal{N}}(s_{k-j}) \right)\left( I_{m^{j}} \otimes \widehat{\mathcal{K}}(s_{k-j})^{-1} \right) \left( I_{m^{k-1}} \otimes \widehat{\mathcal{B}}(s_1) \right), \]

for $k \geq 1$.

2.2.1 Bilinear second-order systems
We revisit the example of second-order bilinear systems (1) given in Section 1. First, we note that (1) can be rewritten in the first-order (unstructured) form (2) by introducing the new state vector $x(t) = \begin{bmatrix} q(t)^{\mathsf{T}} & \dot{q}(t)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}}$, such that we obtain

\[ \underbrace{\begin{bmatrix} J & 0 \\ 0 & M \end{bmatrix}}_{E} \dot{x}(t) = \underbrace{\begin{bmatrix} 0 & J \\ -K & -D \end{bmatrix}}_{A} x(t) + \sum_{j=1}^{m} \underbrace{\begin{bmatrix} 0 & 0 \\ N_{\mathrm{p},j} & N_{\mathrm{v},j} \end{bmatrix}}_{N_j} x(t)u_j(t) + \underbrace{\begin{bmatrix} 0 \\ B_{\mathrm{u}} \end{bmatrix}}_{B} u(t), \qquad y(t) = \underbrace{\begin{bmatrix} C_{\mathrm{p}} & C_{\mathrm{v}} \end{bmatrix}}_{C} x(t), \tag{11} \]

for any invertible matrix $J \in \mathbb{R}^{n \times n}$. For this first-order companion realization (11), we know the frequency domain representation to be given by the multivariate regular transfer functions (4). If we now plug in the structured matrices from (11), we can make use of those special block structures. In general, we obtain

\[ (sE - A)^{-1} = \begin{bmatrix} sJ & -J \\ K & sM + D \end{bmatrix}^{-1} = \begin{bmatrix} s^{-1}J^{-1} - s^{-1}(s^2M + sD + K)^{-1}KJ^{-1} & (s^2M + sD + K)^{-1} \\ -(s^2M + sD + K)^{-1}KJ^{-1} & s(s^2M + sD + K)^{-1} \end{bmatrix} \]

for the frequency-dependent center terms and, therefore,

\[ N_j(sE - A)^{-1}B = \begin{bmatrix} 0 \\ (N_{\mathrm{p},j} + sN_{\mathrm{v},j})(s^2M + sD + K)^{-1}B_{\mathrm{u}} \end{bmatrix}. \]

Using this, we obtain for the first part of the $k$-th regular transfer function

\[ \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes N \right)\left( I_{m^{j}} \otimes (s_{k-j}E - A)^{-1} \right) \left( I_{m^{k-1}} \otimes B \right) = \left( \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes (N_{\mathrm{p}} + s_{k-j}N_{\mathrm{v}}) \right)\left( I_{m^{j}} \otimes (s_{k-j}^2M + s_{k-j}D + K)^{-1} \right) \right) \left( I_{m^{k-1}} \otimes B_{\mathrm{u}} \right), \]

where we used the notation $N_{\mathrm{p}} = \begin{bmatrix} N_{\mathrm{p},1} & \ldots & N_{\mathrm{p},m} \end{bmatrix}$ and $N_{\mathrm{v}} = \begin{bmatrix} N_{\mathrm{v},1} & \ldots & N_{\mathrm{v},m} \end{bmatrix}$. Multiplication with the remaining terms yields the regular transfer functions of (1), which can be written in the form

\[ G_k(s_1, \ldots, s_k) = (C_{\mathrm{p}} + s_kC_{\mathrm{v}})(s_k^2M + s_kD + K)^{-1} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes (N_{\mathrm{p}} + s_{k-j}N_{\mathrm{v}}) \right) \left( I_{m^{j}} \otimes (s_{k-j}^2M + s_{k-j}D + K)^{-1} \right) \left( I_{m^{k-1}} \otimes B_{\mathrm{u}} \right). \tag{12} \]

Having the general formulation of regular transfer functions (9) in mind, we see that we can rewrite (12) in the structured bilinear form (9) by setting

\[ \mathcal{C}(s) = C_{\mathrm{p}} + sC_{\mathrm{v}}, \quad \mathcal{K}(s) = s^2M + sD + K, \quad \mathcal{N}(s) = N_{\mathrm{p}} + sN_{\mathrm{v}}, \quad \mathcal{B}(s) = B_{\mathrm{u}}. \]

Now assume that we construct model reduction bases $W$ and $V$ and compute the reduced-order model by projection as in (10). This leads to the reduced-order bilinear system

\[ \widehat{\mathcal{C}}(s) = C_{\mathrm{p}}V + s(C_{\mathrm{v}}V), \quad \widehat{\mathcal{K}}(s) = s^2(W^{\mathsf{H}}MV) + s(W^{\mathsf{H}}DV) + (W^{\mathsf{H}}KV), \quad \widehat{\mathcal{N}}(s) = \left(W^{\mathsf{H}}N_{\mathrm{p}}(I_m \otimes V)\right) + s\left(W^{\mathsf{H}}N_{\mathrm{v}}(I_m \otimes V)\right), \quad \widehat{\mathcal{B}}(s) = W^{\mathsf{H}}B_{\mathrm{u}}. \tag{13} \]

Note that the reduced-order bilinear system in (13) has the same structure as the original one and can be viewed as a reduced second-order bilinear system, where the full-order matrices in (1) are simply replaced by their reduced analogues from (13).

2.2.2 Bilinear time-delay systems

Another structured bilinear control system example is the case of bilinear systems with an internal time-delay, i.e.,

\[ E\dot{x}(t) = Ax(t) + A_{\mathrm{d}}x(t - \tau) + \sum_{j=1}^{m} N_jx(t)u_j(t) + Bu(t), \qquad y(t) = Cx(t), \]

for a delay $0 \leq \tau \in \mathbb{R}$, which has regular transfer functions of the form

\[ G_k(s_1, \ldots, s_k) = C\left(s_kE - A - e^{-s_k\tau}A_{\mathrm{d}}\right)^{-1} \prod_{j=1}^{k-1} \left( I_{m^{j-1}} \otimes N \right) \left( I_{m^{j}} \otimes \left(s_{k-j}E - A - e^{-s_{k-j}\tau}A_{\mathrm{d}}\right)^{-1} \right) \left( I_{m^{k-1}} \otimes B \right); \tag{14} \]

see [21]. As in the case of bilinear second-order systems, we see that (14) can be written in the setting of (9) using

\[ \mathcal{C}(s) = C, \quad \mathcal{K}(s) = sE - A - e^{-s\tau}A_{\mathrm{d}}, \quad \mathcal{N}(s) = N, \quad \text{and} \quad \mathcal{B}(s) = B. \]
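As a sanity check of the block elimination above, the following sketch (ours, with invented SISO data) evaluates $G_1(s) = (C_{\mathrm{p}} + sC_{\mathrm{v}})(s^2M + sD + K)^{-1}B_{\mathrm{u}}$ once through the structured formulation and once through the companion realization (11) with $J = I$; the two routes coincide up to rounding:

```python
import numpy as np

def soso_G1(M, D, K, Bu, Cp, Cv, s):
    """Structured evaluation: G_1(s) = (Cp + s Cv) (s^2 M + s D + K)^{-1} Bu."""
    return (Cp + s * Cv) @ np.linalg.solve(s**2 * M + s * D + K, Bu)

def companion_G1(M, D, K, Bu, Cp, Cv, s):
    """Same quantity via the first-order companion realization (11) with J = I."""
    n = M.shape[0]
    Z = np.zeros((n, n))
    E = np.block([[np.eye(n), Z], [Z, M]])
    A = np.block([[Z, np.eye(n)], [-K, -D]])
    B = np.concatenate([np.zeros(n), Bu])
    C = np.concatenate([Cp, Cv])
    return C @ np.linalg.solve(s * E - A, B)
```

The structured route solves an $n \times n$ linear system, the companion route a $2n \times 2n$ one; a structure-preserving reduced-order model keeps this advantage, along with the mechanical interpretation of the matrices, after reduction.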
As in Section 2.2.1, once the model reduction bases $W$ and $V$ are constructed, the resulting reduced-order model retains the delay structure of the original system, as it is given by

\[ \widehat{\mathcal{C}}(s) = CV, \quad \widehat{\mathcal{K}}(s) = s(W^{\mathsf{H}}EV) - (W^{\mathsf{H}}AV) - e^{-s\tau}(W^{\mathsf{H}}A_{\mathrm{d}}V), \quad \widehat{\mathcal{N}}(s) = W^{\mathsf{H}}N(I_m \otimes V), \quad \widehat{\mathcal{B}}(s) = W^{\mathsf{H}}B. \]

In Sections 3 and 4, we will show how to construct the model reduction bases $W$ and $V$ such that the structured reduced-order bilinear control system provides interpolation of the full-order subsystems.

In this section, we assume the SISO system case, i.e., $m = p = 1$. Therefore, the bilinear part consists of, at most, one term $\mathcal{N} = \mathcal{N}_1$, and the matrix functions $\mathcal{C}$ and $\mathcal{B}$ map frequency points onto row and column vectors, respectively. In this setting, the regular transfer functions simplify drastically, since (9) can now be written as

\[ G_k(s_1, \ldots, s_k) = \mathcal{C}(s_k)\mathcal{K}(s_k)^{-1} \prod_{j=1}^{k-1} \mathcal{N}(s_{k-j})\mathcal{K}(s_{k-j})^{-1}\, \mathcal{B}(s_1), \tag{15} \]

for $k \geq 1$. In the remainder of this section, we develop the theory for structure-preserving interpolation (both simple and higher-order (Hermite) interpolation) and then present numerical examples to illustrate the analysis.
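Given any structured SISO system in the form (15), the projection (10) and the evaluation of full- and reduced-order transfer functions can be written generically. The sketch below is our own illustration of this machinery (function names are ours, not from the paper):

```python
import numpy as np

def reduce_structured(Kfun, Nfun, Bfun, Cfun, W, V):
    """Petrov-Galerkin projection (10): returns the reduced matrix functions
    of a structured SISO bilinear system for given bases W and V."""
    WH = W.conj().T
    Kr = lambda s: WH @ Kfun(s) @ V
    Nr = lambda s: WH @ Nfun(s) @ V
    Br = lambda s: WH @ Bfun(s)
    Cr = lambda s: Cfun(s) @ V
    return Kr, Nr, Br, Cr

def siso_tf(Kfun, Nfun, Bfun, Cfun, s):
    """Evaluate the k-th regular transfer function (15) at s = (s_1, ..., s_k)."""
    x = np.linalg.solve(Kfun(s[0]), Bfun(s[0]))
    for j in range(1, len(s)):
        x = np.linalg.solve(Kfun(s[j]), Nfun(s[j - 1]) @ x)
    return (Cfun(s[-1]) @ x).item()
```

Both functions work unchanged for the full-order and the projected system, since only the matrix functions differ between the two.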
We want to construct the model reduction bases $W$ and $V$, and the corresponding reduced structured bilinear system via projection as in (10), such that its leading regular transfer functions interpolate those of the original one, i.e., $G_k(\sigma_1, \ldots, \sigma_k) = \widehat{G}_k(\sigma_1, \ldots, \sigma_k)$, where $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ are some selected interpolation points.

The following two theorems answer the question of how the model reduction bases $V$ and $W$ can be constructed independently of each other. In other words, the interpolation conditions are satisfied via $V$ or $W$ alone, no matter how the respective other matrix is chosen. First, we consider the model reduction basis $V$.

Theorem 1 (Interpolation via $V$). Let $G$ be a bilinear SISO system, described by (15), and $\widehat{G}$ the reduced-order bilinear SISO system constructed by (10). Let $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ be interpolation points for which the matrix functions $\mathcal{C}(s)$, $\mathcal{K}(s)^{-1}$, $\mathcal{N}(s)$ and $\mathcal{B}(s)$ are defined and $\widehat{\mathcal{K}}(s)$ is full-rank. Construct $V$ using

\[ v_1 = \mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1), \qquad v_j = \mathcal{K}(\sigma_j)^{-1}\mathcal{N}(\sigma_{j-1})v_{j-1}, \quad 2 \leq j \leq k, \qquad \operatorname{span}(V) \supseteq \operatorname{span}([v_1, \ldots, v_k]), \]

and let $W$ be an arbitrary full-rank truncation matrix of appropriate dimension. Then the subsystem transfer functions of $\widehat{G}$ interpolate those of $G$ in the following way:

\[ G_1(\sigma_1) = \widehat{G}_1(\sigma_1), \quad G_2(\sigma_1, \sigma_2) = \widehat{G}_2(\sigma_1, \sigma_2), \quad \ldots, \quad G_k(\sigma_1, \ldots, \sigma_k) = \widehat{G}_k(\sigma_1, \ldots, \sigma_k). \]

Proof.
First, we note that the constructed vectors are given by

\[ v_1 = \mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1), \quad v_2 = \mathcal{K}(\sigma_2)^{-1}\mathcal{N}(\sigma_1)\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1), \quad \ldots, \quad v_k = \mathcal{K}(\sigma_k)^{-1}\mathcal{N}(\sigma_{k-1})\mathcal{K}(\sigma_{k-1})^{-1} \cdots \mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1), \]

and that, by construction, all these vectors are contained in $\operatorname{span}(V)$. Therefore, for the first transfer function we obtain

\[ \widehat{G}_1(\sigma_1) = \widehat{\mathcal{C}}(\sigma_1)\widehat{\mathcal{K}}(\sigma_1)^{-1}\widehat{\mathcal{B}}(\sigma_1) = \mathcal{C}(\sigma_1)V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_1)V\big)^{-1}W^{\mathsf{H}}\mathcal{B}(\sigma_1) = \mathcal{C}(\sigma_1)\underbrace{V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_1)V\big)^{-1}W^{\mathsf{H}}\mathcal{K}(\sigma_1)}_{=:\,P_{\mathrm{v}}}\,\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) = \mathcal{C}(\sigma_1)\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) = G_1(\sigma_1), \]

where we used the fact that $P_{\mathrm{v}}$ is an oblique projector onto $\operatorname{span}(V)$, i.e., $z = P_{\mathrm{v}}z$ holds for all $z \in \operatorname{span}(V)$, and that $\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) = v_1 \in \operatorname{span}(V)$. Considering the second transfer function, we get

\[\begin{aligned} \widehat{G}_2(\sigma_1, \sigma_2) &= \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1}\widehat{\mathcal{N}}(\sigma_1)\widehat{\mathcal{K}}(\sigma_1)^{-1}\widehat{\mathcal{B}}(\sigma_1) = \mathcal{C}(\sigma_2)V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_2)V\big)^{-1}W^{\mathsf{H}}\mathcal{N}(\sigma_1)V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_1)V\big)^{-1}W^{\mathsf{H}}\mathcal{B}(\sigma_1) \\ &= \mathcal{C}(\sigma_2)V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_2)V\big)^{-1}W^{\mathsf{H}}\mathcal{N}(\sigma_1)\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) = \mathcal{C}(\sigma_2)\underbrace{V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_2)V\big)^{-1}W^{\mathsf{H}}\mathcal{K}(\sigma_2)}_{=:\,P_{\mathrm{v},2}}\,\mathcal{K}(\sigma_2)^{-1}\mathcal{N}(\sigma_1)\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) \\ &= \mathcal{C}(\sigma_2)\mathcal{K}(\sigma_2)^{-1}\mathcal{N}(\sigma_1)\mathcal{K}(\sigma_1)^{-1}\mathcal{B}(\sigma_1) = G_2(\sigma_1, \sigma_2), \end{aligned}\]

using the same arguments as for the first transfer function and, additionally, the construction of $v_2$ and the oblique projector $P_{\mathrm{v},2}$. Continuing with this argumentation, the desired result follows by induction over the transfer function index $k$.

The proof of Theorem 1 shows that the recursive construction of the truncation matrix is necessary for the interpolation of higher-order transfer functions. Also, it should be noted that $W$ is an arbitrary full-rank truncation matrix of suitable dimensions, with no additional constraints for the interpolation of (15). Theorem 2 is the counterpart to Theorem 1, giving constraints only for the left model reduction basis $W$, while $V$ is now allowed to be arbitrary.

Theorem 2 (Interpolation via $W$).
Let $G$, $\widehat{G}$, and the interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ be as in Theorem 1. Construct $W$ using

\[ w_1 = \mathcal{K}(\sigma_k)^{-\mathsf{H}}\mathcal{C}(\sigma_k)^{\mathsf{H}}, \qquad w_j = \mathcal{K}(\sigma_{k-j+1})^{-\mathsf{H}}\mathcal{N}(\sigma_{k-j+1})^{\mathsf{H}}w_{j-1}, \quad 2 \leq j \leq k, \qquad \operatorname{span}(W) \supseteq \operatorname{span}([w_1, \ldots, w_k]), \]

and let $V$ be an arbitrary full-rank truncation matrix of appropriate dimension. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:

\[ G_1(\sigma_k) = \widehat{G}_1(\sigma_k), \quad G_2(\sigma_{k-1}, \sigma_k) = \widehat{G}_2(\sigma_{k-1}, \sigma_k), \quad \ldots, \quad G_k(\sigma_1, \ldots, \sigma_k) = \widehat{G}_k(\sigma_1, \ldots, \sigma_k). \]

Proof.
The proof of this theorem is analogous to the proof of Theorem 1. We only need to note that the left projection space $\operatorname{span}(W)$ involves the matrix function $\mathcal{C}(s)$, which always takes the last argument of $G_k$ into account. Therefore, the order of the interpolation points is reversed, and the recursion formula follows the transfer function order going from left to right. The rest follows as in the proof of Theorem 1 by taking the Hermitian conjugates of the matrix functions in the construction.

The main difference between Theorem 1 and Theorem 2 is the order in which the interpolation points have to be used. Switching between the two projection schemes leads to a reversed ordering of the interpolation points for the intermediate transfer functions.

The last theorem of this section now states the combination of Theorem 1 and Theorem 2 by two-sided projection.

Theorem 3 (Interpolation by two-sided projection). Let $G$ and $\widehat{G}$ be as in Theorem 1, and let $V$ be constructed as in Theorem 1 for a given set of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ and $W$ as in Theorem 2 for another set of interpolation points $\varsigma_1, \ldots, \varsigma_\theta \in \mathbb{C}$, for which the matrix functions $\mathcal{C}(s)$, $\mathcal{K}(s)^{-1}$, $\mathcal{N}(s)$ and $\mathcal{B}(s)$ are defined and $\widehat{\mathcal{K}}(s)$ is full-rank. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:

\[\begin{aligned} G_1(\sigma_1) &= \widehat{G}_1(\sigma_1), & G_2(\sigma_1, \sigma_2) &= \widehat{G}_2(\sigma_1, \sigma_2), & \ldots, && G_k(\sigma_1, \ldots, \sigma_k) &= \widehat{G}_k(\sigma_1, \ldots, \sigma_k), \\ G_1(\varsigma_\theta) &= \widehat{G}_1(\varsigma_\theta), & G_2(\varsigma_{\theta-1}, \varsigma_\theta) &= \widehat{G}_2(\varsigma_{\theta-1}, \varsigma_\theta), & \ldots, && G_\theta(\varsigma_1, \ldots, \varsigma_\theta) &= \widehat{G}_\theta(\varsigma_1, \ldots, \varsigma_\theta), \end{aligned} \tag{16} \]

and, additionally,

\[ G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta) = \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta), \tag{17} \]

for $1 \leq q \leq k$ and $1 \leq \eta \leq \theta$.

Proof. Since the interpolation conditions in (16) follow directly from Theorem 1 and Theorem 2, we only need to prove (17), the mixed interpolation conditions.
For $q$ and $\eta$ as described in the theorem, we obtain

\[\begin{aligned} \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta) &= \widehat{\mathcal{C}}(\varsigma_\theta)\widehat{\mathcal{K}}(\varsigma_\theta)^{-1} \prod_{j=1}^{\eta-1} \widehat{\mathcal{N}}(\varsigma_{\theta-j})\widehat{\mathcal{K}}(\varsigma_{\theta-j})^{-1} \left( \prod_{i=0}^{q-1} \widehat{\mathcal{N}}(\sigma_{q-i})\widehat{\mathcal{K}}(\sigma_{q-i})^{-1} \right) \widehat{\mathcal{B}}(\sigma_1) \\ &= \widehat{\mathcal{C}}(\varsigma_\theta)\widehat{\mathcal{K}}(\varsigma_\theta)^{-1} \prod_{j=1}^{\eta-1} \widehat{\mathcal{N}}(\varsigma_{\theta-j})\widehat{\mathcal{K}}(\varsigma_{\theta-j})^{-1}\, W^{\mathsf{H}} \underbrace{\left( \prod_{i=0}^{q-1} \mathcal{N}(\sigma_{q-i})\mathcal{K}(\sigma_{q-i})^{-1} \right) \mathcal{B}(\sigma_1)}_{\in\,\operatorname{span}(V)} \\ &= \underbrace{\mathcal{C}(\varsigma_\theta)\mathcal{K}(\varsigma_\theta)^{-1} \prod_{j=1}^{\eta-1} \mathcal{N}(\varsigma_{\theta-j})\mathcal{K}(\varsigma_{\theta-j})^{-1}}_{=:\,h,\ h^{\mathsf{H}} \in\, \operatorname{span}(W)} \left( \prod_{i=0}^{q-1} \mathcal{N}(\sigma_{q-i})\mathcal{K}(\sigma_{q-i})^{-1} \right) \mathcal{B}(\sigma_1) = G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta), \end{aligned}\]

where we used the construction of $\operatorname{span}(V)$ in the second and of $\operatorname{span}(W)$ in the third line, as denoted, following the strategy of the proof of Theorem 1.

It is an important observation that we can interpolate higher-order transfer functions by only evaluating lower-order ones for the construction of the model reduction bases: following Theorem 3, we can in fact interpolate transfer functions up to order $k + \theta$. Also, we note that the two-sided projection-based interpolation matches $k + \theta + k \cdot \theta$ interpolation conditions at the same time. These results are similar to the unstructured systems case [3]. The special case of identical sets of interpolation points is discussed in the following section on Hermite interpolation.

As in the linear case, we can use the projection framework to interpolate not only the transfer functions but also their derivatives. In the setting of the multivariate transfer functions appearing for bilinear systems, this amounts to partial derivatives with respect to the different frequency arguments. For ease of notation, we introduce the abbreviation

\[ \partial_{s_1^{j_1} \cdots s_k^{j_k}} f(z_1, \ldots, z_k) := \frac{\partial^{j_1 + \ldots + j_k} f}{\partial s_1^{j_1} \cdots \partial s_k^{j_k}}(z_1, \ldots, z_k) \]

for partial derivatives, denoting the differentiation of an analytic function $f\colon \mathbb{C}^k \to \mathbb{C}^{\ell}$ with respect to the variables $s_1, \ldots, s_k$, evaluated at $z_1, \ldots, z_k \in \mathbb{C}$. Moreover, the Jacobian of $f$ is denoted by $\nabla f = \begin{bmatrix} \partial_{s_1} f & \ldots & \partial_{s_k} f \end{bmatrix}$, the concatenation of all first-order partial derivatives.

The following theorem states a Hermite interpolation result via $V$ only.

Theorem 4 (Hermite interpolation via $V$). Let $G$ be a bilinear SISO system, described by (15), and $\widehat{G}$ the reduced-order bilinear SISO system constructed by (10). Let $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ be interpolation points for which the matrix functions $\mathcal{C}(s)$, $\mathcal{K}(s)^{-1}$, $\mathcal{N}(s)$ and $\mathcal{B}(s)$ are analytic and $\widehat{\mathcal{K}}(s)$ is full-rank. Construct $V$ using

\[\begin{aligned} v_{1,j_1} &= \partial_{s^{j_1}}\big(\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1), && j_1 = 0, \ldots, \ell_1, \\ v_{2,j_2} &= \partial_{s^{j_2}}\mathcal{K}^{-1}(\sigma_2)\; \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1), && j_2 = 0, \ldots, \ell_2, \\ &\;\;\vdots \\ v_{k,j_k} &= \partial_{s^{j_k}}\mathcal{K}^{-1}(\sigma_k) \prod_{j=1}^{k-2} \partial_{s^{\ell_{k-j}}}\big(\mathcal{N}\mathcal{K}^{-1}\big)(\sigma_{k-j})\; \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1), && j_k = 0, \ldots, \ell_k, \end{aligned}\]

with $\operatorname{span}(V) \supseteq \operatorname{span}([v_{1,0}, \ldots, v_{k,\ell_k}])$, and let $W$ be an arbitrary full-rank truncation matrix of appropriate dimension. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:

\[\begin{aligned} \partial_{s^{j_1}} G_1(\sigma_1) &= \partial_{s^{j_1}} \widehat{G}_1(\sigma_1), && j_1 = 0, \ldots, \ell_1, \\ \partial_{s_1^{\ell_1}s_2^{j_2}} G_2(\sigma_1, \sigma_2) &= \partial_{s_1^{\ell_1}s_2^{j_2}} \widehat{G}_2(\sigma_1, \sigma_2), && j_2 = 0, \ldots, \ell_2, \\ &\;\;\vdots \\ \partial_{s_1^{\ell_1} \cdots s_{k-1}^{\ell_{k-1}}s_k^{j_k}} G_k(\sigma_1, \ldots, \sigma_k) &= \partial_{s_1^{\ell_1} \cdots s_{k-1}^{\ell_{k-1}}s_k^{j_k}} \widehat{G}_k(\sigma_1, \ldots, \sigma_k), && j_k = 0, \ldots, \ell_k. \end{aligned}\]

Proof.
First, we note that the case $k = 1$ was already proven in [6], and that $\ell_1 = \ldots = \ell_k = 0$ corresponds to Theorem 1. For $k = 2$, we start with $j_2 = 0$ to investigate the partial derivative with respect to $s_1$, which involves the bilinear term. Using the Leibniz product rule, this partial derivative can be written as

\[ \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1) = \sum_{i=0}^{\ell_1} \binom{\ell_1}{i}\, \partial_{s^{i}}\mathcal{N}(\sigma_1)\; \partial_{s^{\ell_1-i}}\big(\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1). \]

Now, we can show

\[\begin{aligned} \partial_{s_1^{\ell_1}} \widehat{G}_2(\sigma_1, \sigma_2) &= \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1}\; \partial_{s^{\ell_1}}\big(\widehat{\mathcal{N}}\widehat{\mathcal{K}}^{-1}\widehat{\mathcal{B}}\big)(\sigma_1) = \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1} \sum_{i=0}^{\ell_1} \binom{\ell_1}{i}\, \partial_{s^{i}}\widehat{\mathcal{N}}(\sigma_1)\; \partial_{s^{\ell_1-i}}\big(\widehat{\mathcal{K}}^{-1}\widehat{\mathcal{B}}\big)(\sigma_1) \\ &= \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1}\, W^{\mathsf{H}} \sum_{i=0}^{\ell_1} \binom{\ell_1}{i}\, \partial_{s^{i}}\mathcal{N}(\sigma_1)\, V\; \partial_{s^{\ell_1-i}}\big(\big(W^{\mathsf{H}}\mathcal{K}V\big)^{-1}W^{\mathsf{H}}\mathcal{B}\big)(\sigma_1) \\ &= \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1}\, W^{\mathsf{H}} \sum_{i=0}^{\ell_1} \binom{\ell_1}{i}\, \partial_{s^{i}}\mathcal{N}(\sigma_1)\; \partial_{s^{\ell_1-i}}\big(\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1) = \widehat{\mathcal{C}}(\sigma_2)\widehat{\mathcal{K}}(\sigma_2)^{-1}\, W^{\mathsf{H}}\; \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1) \\ &= \mathcal{C}(\sigma_2)\underbrace{V\big(W^{\mathsf{H}}\mathcal{K}(\sigma_2)V\big)^{-1}W^{\mathsf{H}}\mathcal{K}(\sigma_2)}_{=:\,P_{\mathrm{v}}}\,\mathcal{K}(\sigma_2)^{-1}\; \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1) = \mathcal{C}(\sigma_2)\mathcal{K}(\sigma_2)^{-1}\; \partial_{s^{\ell_1}}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1) = \partial_{s_1^{\ell_1}} G_2(\sigma_1, \sigma_2), \end{aligned}\]

where we first used the construction of the vectors $v_{1,j_1}$ (so that, following [6], $V\, \partial_{s^{\ell_1-i}}\big(\widehat{\mathcal{K}}^{-1}\widehat{\mathcal{B}}\big)(\sigma_1) = \partial_{s^{\ell_1-i}}\big(\mathcal{K}^{-1}\mathcal{B}\big)(\sigma_1)$) and then that of $v_{2,0}$, with the oblique projector $P_{\mathrm{v}}$ onto $\operatorname{span}(V)$.
By induction over $j_2$, the results for the case $k = 2$ follow from [6]; and by induction over $k$ and $j_k$, using the same arguments, the rest of the theorem follows.

We note the difference between Theorem 1 and Theorem 4 in terms of the subspace construction. While for the previous interpolation results we are able to recursively construct the next part of the model reduction subspace from the previous one, this is not possible in Theorem 4, due to the frequency dependence of the bilinear term $\mathcal{N}(s)$. Also, it follows that for the interpolation of the $\ell$-th derivative, $\ell = \ell_1 + \ldots + \ell_k$, of the $k$-th transfer function $G_k$ in the interpolation points $\sigma_1, \ldots, \sigma_k$, the minimal dimension of the projection space $\operatorname{span}(V)$ is given by $\ell + k$.

As before, we can consider the counterpart to Theorem 4. In addition to reversing the order of the interpolation points, the order of the derivatives needs to be reversed as well for Hermite interpolation.

Theorem 5 (Hermite interpolation via $W$). Let $G$, $\widehat{G}$ be the original and reduced-order models, respectively, and the interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ be as in Theorem 4. Construct $W$ using

\[\begin{aligned} w_{1,j_k} &= \partial_{s^{j_k}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{C}^{\mathsf{H}}\big)(\sigma_k), && j_k = 0, \ldots, \ell_k, \\ w_{2,j_{k-1}} &= \partial_{s^{j_{k-1}}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{N}^{\mathsf{H}}\big)(\sigma_{k-1})\; \partial_{s^{\ell_k}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{C}^{\mathsf{H}}\big)(\sigma_k), && j_{k-1} = 0, \ldots, \ell_{k-1}, \\ &\;\;\vdots \\ w_{k,j_1} &= \partial_{s^{j_1}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{N}^{\mathsf{H}}\big)(\sigma_1) \prod_{j=2}^{k-1} \partial_{s^{\ell_j}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{N}^{\mathsf{H}}\big)(\sigma_j)\; \partial_{s^{\ell_k}}\big(\mathcal{K}^{-\mathsf{H}}\mathcal{C}^{\mathsf{H}}\big)(\sigma_k), && j_1 = 0, \ldots, \ell_1, \end{aligned}\]

with $\operatorname{span}(W) \supseteq \operatorname{span}([w_{1,0}, \ldots, w_{k,\ell_1}])$, and let $V$ be an arbitrary full-rank truncation matrix of appropriate dimension. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:

\[\begin{aligned} \partial_{s^{j_k}} G_1(\sigma_k) &= \partial_{s^{j_k}} \widehat{G}_1(\sigma_k), && j_k = 0, \ldots, \ell_k, \\ \partial_{s_1^{j_{k-1}}s_2^{\ell_k}} G_2(\sigma_{k-1}, \sigma_k) &= \partial_{s_1^{j_{k-1}}s_2^{\ell_k}} \widehat{G}_2(\sigma_{k-1}, \sigma_k), && j_{k-1} = 0, \ldots, \ell_{k-1}, \\ &\;\;\vdots \\ \partial_{s_1^{j_1}s_2^{\ell_2} \cdots s_k^{\ell_k}} G_k(\sigma_1, \ldots, \sigma_k) &= \partial_{s_1^{j_1}s_2^{\ell_2} \cdots s_k^{\ell_k}} \widehat{G}_k(\sigma_1, \ldots, \sigma_k), && j_1 = 0, \ldots, \ell_1. \end{aligned}\]

Proof.
Observing that the order of the derivatives is changed in the same way as the interpolation points, the proof works analogously to the proof of Theorem 4, building on the ideas from the proof of Theorem 2.

An interesting fact in the structured linear case, as stated in [6], is the implicit matching of Hermite interpolation conditions without sampling the derivatives of the transfer function. Next, we extend this construction to the structured bilinear case. The result becomes a special case of Theorem 3 by using identical sets of interpolation points for $V$ and $W$.

Theorem 6 (Implicit Hermite interpolation by two-sided projection). Let $G$ and $\widehat{G}$ be as in Theorem 4. Also, let $V$ and $W$ be constructed as in Theorems 1 and 2, respectively, for the same set of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$, for which the matrix functions $\mathcal{C}(s)$, $\mathcal{K}(s)^{-1}$, $\mathcal{N}(s)$ and $\mathcal{B}(s)$ are analytic and $\widehat{\mathcal{K}}(s)$ is full-rank. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:

\[\begin{aligned} G_1(\sigma_1) &= \widehat{G}_1(\sigma_1), & \ldots, && G_{k-1}(\sigma_1, \ldots, \sigma_{k-1}) &= \widehat{G}_{k-1}(\sigma_1, \ldots, \sigma_{k-1}), \\ G_1(\sigma_k) &= \widehat{G}_1(\sigma_k), & \ldots, && G_{k-1}(\sigma_2, \ldots, \sigma_k) &= \widehat{G}_{k-1}(\sigma_2, \ldots, \sigma_k), \end{aligned}\]

and, additionally,

\[ G_k(\sigma_1, \ldots, \sigma_k) = \widehat{G}_k(\sigma_1, \ldots, \sigma_k), \qquad \nabla G_k(\sigma_1, \ldots, \sigma_k) = \nabla \widehat{G}_k(\sigma_1, \ldots, \sigma_k), \]
\[ G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \sigma_{k-\eta+1}, \ldots, \sigma_k) = \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \sigma_{k-\eta+1}, \ldots, \sigma_k), \]

hold for $1 \leq q, \eta \leq k$.

Proof. While most of the results directly follow from Theorem 3 by using identical sets of interpolation points for $V$ and $W$, the Hermite interpolation of the complete Jacobian $\nabla G_k$ of the $k$-th order transfer function is new. Since $k = 1$ (the linear subsystem) is covered by [6], we assume $k > 1$. Therefore, and by the structure of the multivariate transfer functions $G_k$, three different cases can occur depending on the differentiation variable, i.e., we have

\[\begin{aligned} \partial_{s_1}\colon &\quad \partial_{s}\big(\mathcal{N}\mathcal{K}^{-1}\mathcal{B}\big) = \big(\partial_{s}\mathcal{N}\big)\mathcal{K}^{-1}\mathcal{B} + \mathcal{N}\,\partial_{s}\big(\mathcal{K}^{-1}\mathcal{B}\big), \\ \partial_{s_j}\colon &\quad \partial_{s}\big(\mathcal{N}\mathcal{K}^{-1}\big) = \big(\partial_{s}\mathcal{N}\big)\mathcal{K}^{-1} + \mathcal{N}\,\partial_{s}\mathcal{K}^{-1}, \quad \text{for } 1 < j < k, \\ \partial_{s_k}\colon &\quad \partial_{s}\big(\mathcal{C}\mathcal{K}^{-1}\big) = \big(\partial_{s}\mathcal{C}\big)\mathcal{K}^{-1} + \mathcal{C}\,\partial_{s}\mathcal{K}^{-1}, \end{aligned}\]
as possible derivative terms. Since these three cases work analogously to each other, we restrict ourselves, for the sake of compactness, to the first one. First, we expand the expression of the partial derivative further into
$$\partial_s \big(N K^{-1} B\big) = (\partial_s N) K^{-1} B + N \big( -K^{-1} (\partial_s K) K^{-1} B + K^{-1} (\partial_s B) \big).$$
Therefore, for the complete partial derivative, we obtain
$$\begin{aligned}
\partial_{s_1} \widehat{G}_k(\sigma_1, \ldots, \sigma_k)
&= \widehat{C}(\sigma_k)\widehat{K}(\sigma_k)^{-1} \prod_{j=1}^{k-2}\widehat{N}(\sigma_{k-j})\widehat{K}(\sigma_{k-j})^{-1}\, \partial_s \big(\widehat{N}\widehat{K}^{-1}\widehat{B}\big)(\sigma_1) \\
&= \widehat{C}(\sigma_k)\widehat{K}(\sigma_k)^{-1} \prod_{j=1}^{k-2}\widehat{N}(\sigma_{k-j})\widehat{K}(\sigma_{k-j})^{-1} \Big[ \big(\partial_s\widehat{N}\big)\widehat{K}^{-1}\widehat{B} - \widehat{N}\widehat{K}^{-1}\big(\partial_s\widehat{K}\big)\widehat{K}^{-1}\widehat{B} + \widehat{N}\widehat{K}^{-1}\big(\partial_s\widehat{B}\big) \Big](\sigma_1) \\
&= \underbrace{C(\sigma_k)K(\sigma_k)^{-1}\prod_{j=1}^{k-2}N(\sigma_{k-j})K(\sigma_{k-j})^{-1}}_{=:\,h_1,\ h_1^{\mathsf{H}}\in\operatorname{span}(W)} \underbrace{\partial_s N(\sigma_1)\, K(\sigma_1)^{-1}B(\sigma_1)}_{\in\,\operatorname{span}(V)} \\
&\quad - \underbrace{C(\sigma_k)K(\sigma_k)^{-1}\prod_{j=1}^{k-2}N(\sigma_{k-j})K(\sigma_{k-j})^{-1}\, N(\sigma_1)K(\sigma_1)^{-1}}_{=:\,h_2,\ h_2^{\mathsf{H}}\in\operatorname{span}(W)}\, \partial_s K(\sigma_1) \underbrace{K(\sigma_1)^{-1}B(\sigma_1)}_{\in\,\operatorname{span}(V)} \\
&\quad + \underbrace{C(\sigma_k)K(\sigma_k)^{-1}\prod_{j=1}^{k-2}N(\sigma_{k-j})K(\sigma_{k-j})^{-1}\, N(\sigma_1)K(\sigma_1)^{-1}}_{=\,h_2,\ h_2^{\mathsf{H}}\in\operatorname{span}(W)}\, \partial_s B(\sigma_1) \\
&= \partial_{s_1} G_k(\sigma_1, \ldots, \sigma_k),
\end{aligned}$$
where we used, as denoted by the underbraces, the construction of either $\operatorname{span}(W)$ or $\operatorname{span}(V)$, and the fact that the model reduction bases $V$ and $W$ are constant matrices. As stated before, the results for the other partial derivatives follow analogously, which proves the interpolation of the full Jacobian.

As in the previous section, by using two-sided projection we can match interpolation conditions for a larger number of interpolation points and higher-order transfer functions.
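The mechanism of this proof, factors whose Hermitian conjugates lie in span(W) paired with factors lying in span(V), can be checked numerically. The following is a minimal sketch under our own toy setup (an unstructured first-order bilinear realization with $K(s) = sE - A$ and constant $B$, $C$, $N$; all matrix names are illustrative, not the paper's code). It verifies that two-sided projection with identical interpolation points matches $\partial_{s_1} G_2$ without ever sampling a derivative, as Theorem 6 asserts:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
E = np.eye(n)
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # stable tridiagonal
N = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
K = lambda s: s * E - A  # hence dK/ds = E
s1, s2 = 1.0, 2.0

# right chain (Theorem 1) and left chain (Theorem 2) at the SAME points
v1 = np.linalg.solve(K(s1), b)
v2 = np.linalg.solve(K(s2), N @ v1)
w1 = np.linalg.solve(K(s2).T, c)
w2 = np.linalg.solve(K(s1).T, N.T @ w1)
V, _ = np.linalg.qr(np.column_stack([v1, v2]))  # orthonormalize, span unchanged
W, _ = np.linalg.qr(np.column_stack([w1, w2]))

# two-sided (Petrov-Galerkin) reduced-order matrices
Er, Ar, Nr = W.T @ E @ V, W.T @ A @ V, W.T @ N @ V
br, cr = W.T @ b, V.T @ c
Kr = lambda s: s * Er - Ar

# d/ds1 G2(s1, s2) = -C K(s2)^{-1} N K(s1)^{-1} (dK/ds) K(s1)^{-1} B
full = -c @ np.linalg.solve(K(s2), N @ np.linalg.solve(K(s1), E @ np.linalg.solve(K(s1), b)))
red = -cr @ np.linalg.solve(Kr(s2), Nr @ np.linalg.solve(Kr(s1), Er @ np.linalg.solve(Kr(s1), br)))
print(abs(full - red))  # ≈ 0: the derivative is matched implicitly
```

Only the undifferentiated chains enter $V$ and $W$; the derivative match comes for free from the two-sided projection.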
Following the results of Theorem 3, using derivatives for the two-sided projection, we can expect to match at least $(k + \ell) + (\theta + \nu) + (k + \ell)(\theta + \nu)$ transfer function values, where $k, \ell$ relate to $\operatorname{span}(V)$ and $\theta, \nu$ to $\operatorname{span}(W)$, where $\ell = \ell_1 + \ldots + \ell_k$ and $\nu = \nu_1 + \ldots + \nu_\theta$ denote the orders of the partial derivatives, and $k, \theta$ the orders of the transfer functions to interpolate.

Theorem 7 (Hermite interpolation by two-sided projection). Let $G$ and $\widehat{G}$ be as in Theorem 4, let $V$ be constructed as in Theorem 4 for a given set of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ and orders of partial derivatives $\ell_1, \ldots, \ell_k$, and let $W$ be constructed as in Theorem 5 for another set of interpolation points $\varsigma_1, \ldots, \varsigma_\theta \in \mathbb{C}$ and orders of partial derivatives $\nu_1, \ldots, \nu_\theta$, for which the matrix functions $C(s)$, $K(s)^{-1}$, $N(s)$ and $B(s)$ are analytic and $\widehat{K}(s)$ has full rank. Then the transfer functions of $\widehat{G}$ interpolate the transfer functions of $G$ in the following way:
$$\partial_s^{j_1} G_1(\sigma_1) = \partial_s^{j_1} \widehat{G}_1(\sigma_1), \quad j_1 = 0, \ldots, \ell_1,$$
$$\vdots$$
$$\partial_{s_1}^{\ell_1} \cdots \partial_{s_{k-1}}^{\ell_{k-1}} \partial_{s_k}^{j_k} G_k(\sigma_1, \ldots, \sigma_k) = \partial_{s_1}^{\ell_1} \cdots \partial_{s_{k-1}}^{\ell_{k-1}} \partial_{s_k}^{j_k} \widehat{G}_k(\sigma_1, \ldots, \sigma_k), \quad j_k = 0, \ldots, \ell_k,$$
$$\partial_s^{i_\theta} G_1(\varsigma_\theta) = \partial_s^{i_\theta} \widehat{G}_1(\varsigma_\theta), \quad i_\theta = 0, \ldots, \nu_\theta,$$
$$\vdots$$
$$\partial_{s_1}^{i_1} \partial_{s_2}^{\nu_2} \cdots \partial_{s_\theta}^{\nu_\theta} G_\theta(\varsigma_1, \ldots, \varsigma_\theta) = \partial_{s_1}^{i_1} \partial_{s_2}^{\nu_2} \cdots \partial_{s_\theta}^{\nu_\theta} \widehat{G}_\theta(\varsigma_1, \ldots, \varsigma_\theta), \quad i_1 = 0, \ldots, \nu_1,$$
and additionally,
$$\partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
$$= \partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
holds for $j_q = 0, \ldots, \ell_q$; $i_{\theta-\eta+1} = 0, \ldots, \nu_{\theta-\eta+1}$; $1 \leq q \leq k$ and $1 \leq \eta \leq \theta$.

Proof.
As for Theorem 3, the first parts of the result just summarize the theorems stating the one-sided projection approaches (Theorems 4 and 5), i.e., we only need to prove the additional interpolation conditions with the mixed partial derivatives. It holds that
$$\partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
$$= \underbrace{\partial_s^{\nu_\theta}\big(CK^{-1}\big)(\varsigma_\theta) \cdots \partial_s^{\nu_{\theta-\eta+2}}\big(NK^{-1}\big)(\varsigma_{\theta-\eta+2})\, \partial_s^{i_{\theta-\eta+1}}\big(NK^{-1}\big)(\varsigma_{\theta-\eta+1})}_{=:\,h,\ h^{\mathsf{H}}\in\operatorname{span}(W)} \underbrace{\partial_s^{j_q}\big(NK^{-1}\big)(\sigma_q)\, \partial_s^{\ell_{q-1}}\big(NK^{-1}\big)(\sigma_{q-1}) \cdots \partial_s^{\ell_1}\big(NK^{-1}B\big)(\sigma_1)}_{\in\,\operatorname{span}(V)}$$
$$= \partial_s^{\nu_\theta}\big(\widehat{C}\widehat{K}^{-1}\big)(\varsigma_\theta) \cdots \partial_s^{\nu_{\theta-\eta+2}}\big(\widehat{N}\widehat{K}^{-1}\big)(\varsigma_{\theta-\eta+2})\, \partial_s^{i_{\theta-\eta+1}}\big(\widehat{N}\widehat{K}^{-1}\big)(\varsigma_{\theta-\eta+1})\, \partial_s^{j_q}\big(\widehat{N}\widehat{K}^{-1}\big)(\sigma_q)\, \partial_s^{\ell_{q-1}}\big(\widehat{N}\widehat{K}^{-1}\big)(\sigma_{q-1}) \cdots \partial_s^{\ell_1}\big(\widehat{N}\widehat{K}^{-1}\widehat{B}\big)(\sigma_1)$$
$$= \partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
for $j_q = 0, \ldots, \ell_q$; $i_{\theta-\eta+1} = 0, \ldots, \nu_{\theta-\eta+1}$; $1 \leq q \leq k$ and $1 \leq \eta \leq \theta$.

For an easier understanding of Theorem 7, we consider a small theoretical example, where we only interpolate the linear part, choosing $k = \theta = 1$, the interpolation points $\sigma, \varsigma$, and for the partial derivatives $\ell = \ell_1 = 2$ and $\nu = \nu_1 = 1$. Then, using the first part of Theorem 7, we enforce interpolation of the following terms by means of $\operatorname{span}(V)$:
$$G_1(\sigma), \quad \partial_s G_1(\sigma), \quad \partial_s^2 G_1(\sigma),$$
and similarly, via $\operatorname{span}(W)$, we enforce interpolation of
$$G_1(\varsigma), \quad \partial_s G_1(\varsigma).$$
By using two-sided projection, we can now additionally match higher-order transfer functions and their partial derivatives, namely
$$G_2(\sigma, \varsigma), \quad \partial_{s_1} G_2(\sigma, \varsigma), \quad \partial_{s_1}^2 G_2(\sigma, \varsigma), \quad \partial_{s_2} G_2(\sigma, \varsigma), \quad \partial_{s_1}\partial_{s_2} G_2(\sigma, \varsigma), \quad \partial_{s_1}^2\partial_{s_2} G_2(\sigma, \varsigma).$$
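The number of matched values in this small example agrees with the lower bound stated before Theorem 7; written out with $k = \theta = 1$, $\ell = 2$, $\nu = 1$:

```latex
(k+\ell) + (\theta+\nu) + (k+\ell)(\theta+\nu)
  = (1+2) + (1+1) + 3 \cdot 2
  = 3 + 2 + 6 = 11,
```

which matches the three values enforced via $\operatorname{span}(V)$, the two via $\operatorname{span}(W)$, and the six mixed higher-order values listed above.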
As already observed in Theorem 6, two-sided projection with the same sets of interpolation points leads to additional interpolation of derivatives. This also works in combination with Theorem 7. The following corollary states a particular special case.

Corollary 1.
Assume $G$ and $\widehat{G}$ are constructed as in Theorem 7 for identical sets of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ and matching orders of the partial derivatives, i.e., $\ell_1 = \nu_1, \ldots, \ell_k = \nu_k$. Then, in addition to the interpolation results of Theorem 7, it holds that
$$\nabla\Big( \partial_{s_1}^{\ell_1} \cdots \partial_{s_k}^{\ell_k} G_k(\sigma_1, \ldots, \sigma_k) \Big) = \nabla\Big( \partial_{s_1}^{\ell_1} \cdots \partial_{s_k}^{\ell_k} \widehat{G}_k(\sigma_1, \ldots, \sigma_k) \Big).$$

Proof.
The proof follows directly from Theorem 6 by setting the last partial derivative as the finalinterpolation condition of the left and right projection spaces.
We illustrate the SISO analysis using two numerical examples, having the structured bilinearities as in Sections 2.2.1 and 2.2.2. We compare our resulting structure-preserving interpolation framework to other approaches from the literature that have been used to approximate structured bilinear systems without preserving the structure, as in, e.g., [2, 21].

We compare the approximation error in both the time and frequency domains. In the time domain, we display the point-wise relative output error for a given input signal, namely
$$\frac{|y(t) - \widehat{y}(t)|}{|y(t)|}, \quad \text{for } t \in [0, t_f],$$
and in the frequency domain, we display the point-wise relative errors of the first and second subsystem transfer functions, i.e.,
$$\frac{|G_1(\omega\mathrm{i}) - \widehat{G}_1(\omega\mathrm{i})|}{|G_1(\omega\mathrm{i})|} \quad \text{and} \quad \frac{|G_2(\omega_1\mathrm{i}, \omega_2\mathrm{i}) - \widehat{G}_2(\omega_1\mathrm{i}, \omega_2\mathrm{i})|}{|G_2(\omega_1\mathrm{i}, \omega_2\mathrm{i})|},$$
for the frequencies $\omega, \omega_1, \omega_2 \in [\omega_{\min}, \omega_{\max}]$.

The experiments reported here have been executed on a machine with two Intel(R) Xeon(R) Silver 4110 CPU processors running at 2.10 GHz and equipped with 192 GB total main memory. The computer runs CentOS Linux release 7.5.1804 (Core) using MATLAB 9.7.0.1190202 (R2019b).

First, we consider a damped mass-spring system. The linear part of the dynamics is modeled as in [27], describing a chain of masses connected by springs and dampers, where each mass is additionally connected to a separate spring and damper. In order to focus only on the mechanical structure, we removed the holonomic constraint from [27]. For the bilinear part, the springs are modeled to be dependent on the applied external force, such that a displacement to the right increases the stiffness due to compression of the springs, and a displacement to the left decreases it due to the resulting strain. This results in a structured bilinear control system of the form
$$M\ddot{q}(t) + D\dot{q}(t) + Kq(t) = N_p q(t)u(t) + B_u u(t),$$
$$y(t) = C_p q(t), \qquad (18)$$
with $M, D, K, N_p \in \mathbb{R}^{n \times n}$ and $B_u, C_p^{\mathsf{T}} \in \mathbb{R}^{n}$.
The input matrix is chosen to apply the external force only to the first mass, i.e., $B_u = e_1$, and the output gives the displacement of the second mass, i.e., $C_p = e_2^{\mathsf{T}}$, where $e_i$ denotes the $i$-th column of the identity matrix $I_n$. The bilinear term is a scaled version of the stiffness matrix, $N_p = -SKS$, where $S$ is a diagonal matrix containing the entries linspace(0.2, 0, n). For our experiment, we have chosen the original system to consist of $n = 1\,000$ masses.

We construct three reduced-order models: (i) our structure-preserving bilinear interpolation, denoted by StrInt, and (ii) two unstructured classical bilinear approximations obtained by converting (18) to first-order form (11), followed by interpolatory model reduction of this first-order system, denoted by FOInt. Note that FOInt yields a reduced-order model of the form (11), which does not retain
[Figure 1: Time simulation results for the damped mass-spring system. (a) Time response. (b) Relative errors. Legend: Original System, StrInt, FOInt(12), FOInt(24).]

[Figure 2: Frequency domain results of the first transfer functions for the damped mass-spring system. (a) Frequency response. (b) Relative errors.]

the underlying physical structure. Also, it needs to be remarked that the computational effort for the construction of FOInt is higher than for the structure-preserving approach, due to solving underlying linear systems of doubled size, even in a structure-exploiting implementation; see, e.g., [13]. Since the original system is a mechanical model, we use only a one-sided projection to preserve the mechanical properties in the reduced-order model, i.e., we apply Theorem 1 and set $W = V$. For all approximants, we focus on the first and second transfer functions and choose purely imaginary interpolation points. We construct StrInt and FOInt(12) by using the interpolation points ±logspace(-4, 4, 3)i, such that the resulting reduced-order bilinear systems are of order $r = 12$, giving two different interpolants at the same frequency points. Since bilinear second-order systems can be rewritten as first-order systems by doubling the state-space dimension, we additionally construct a second unstructured approximation, FOInt(24), of order $r = 24$ by using ±logspace(-4, 4, 6)i, which has twice the order of StrInt.

Figure 1 shows the time output of the original system, as well as that of the structure-preserving (StrInt) and first-order interpolations (FOInt(12), FOInt(24)), where we applied the input signal
$$u(t) = \sin(200 t) + 200,$$
which can be seen as a step signal with a sinusoidal disturbance. We see that while all three outputs are indistinguishable in the beginning, FOInt(12) becomes unstable after approximately 20 time steps and FOInt(24) after around 50 time steps, while StrInt accurately approximates the original system over the whole time range of interest. Even though the linear dynamics in FOInt(12) and FOInt(24) are asymptotically stable, these reduced-order models completely lack the underlying physical mechanical structure and become unstable for the chosen input signal. On the other hand, by using the one-sided projection, StrInt preserves all the mechanical (and physical) properties of the original system in terms of symmetry and definiteness of the system matrices, which then leads to the stable time simulation behavior in this case.

[Figure 3: Relative errors of the second transfer functions for the damped mass-spring system. (a) StrInt. (b) FOInt(12). (c) FOInt(24).]

Figures 2 and 3 show the approximation results in the frequency domain for the first two transfer functions. Comparing StrInt and FOInt(12), the structure-preserving approximation is orders of magnitude better than the unstructured approximation of the same size. StrInt and FOInt(24) behave mainly the same, while, for higher frequencies, we can observe a numerical drift-off of the unstructured approximation.
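The StrInt-type reduction used in this example can be sketched compactly: the second-order structure is kept in $K(s) = s^2 M + sD + K$, and a one-sided chain as in Theorem 1 is built at imaginary points. This is a minimal sketch with illustrative toy matrices and sizes (not the exact benchmark setup or the code used for the experiments):

```python
import numpy as np

n = 50
# toy mass-spring-damper chain (illustrative, not the exact benchmark)
Ksp = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness
M = np.eye(n)                                           # masses
D = 0.1 * M + 0.05 * Ksp                                # Rayleigh damping
S = np.diag(np.linspace(0.2, 0.0, n))
Np = -S @ Ksp @ S                                       # bilinear spring term
b = np.zeros(n); b[0] = 1.0                             # force on first mass
c = np.zeros(n); c[1] = 1.0                             # observe second mass

Kfun = lambda s: s**2 * M + s * D + Ksp  # second-order structure kept intact

# one-sided chain (Theorem 1) at purely imaginary points
s1, s2 = 1j * 0.1, 1j * 10.0
v1 = np.linalg.solve(Kfun(s1), b.astype(complex))
v2 = np.linalg.solve(Kfun(s2), Np @ v1)
# real basis covering the complex directions (and their conjugates)
V, _ = np.linalg.qr(np.column_stack([v1.real, v1.imag, v2.real, v2.imag]))

# structure-preserving reduced-order model via W = V (Galerkin)
Mr, Dr, Kr, Npr = (V.T @ X @ V for X in (M, D, Ksp, Np))
br, cr = V.T @ b, V.T @ c
Krf = lambda s: s**2 * Mr + s * Dr + Kr

G1 = lambda s: c @ np.linalg.solve(Kfun(s), b.astype(complex))
G1r = lambda s: cr @ np.linalg.solve(Krf(s), br.astype(complex))
G2 = lambda sa, sb: c @ np.linalg.solve(Kfun(sb), Np @ np.linalg.solve(Kfun(sa), b.astype(complex)))
G2r = lambda sa, sb: cr @ np.linalg.solve(Krf(sb), Npr @ np.linalg.solve(Krf(sa), br.astype(complex)))

print(abs(G1(s1) - G1r(s1)))          # ≈ 0 (also at -s1, since the basis is real)
print(abs(G2(s1, s2) - G2r(s1, s2)))  # ≈ 0
```

Splitting the complex basis vectors into real and imaginary parts keeps the reduced matrices real while enforcing interpolation at the points and their conjugates.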
This example, taken from [21], models a semi-discretized heated rod with distributed control and homogeneous Dirichlet boundary conditions, which is cooled by a delayed feedback and is described by the PDE
$$\partial_t v(\zeta, t) = \partial_\zeta^2 v(\zeta, t) - 2\sin(\zeta)\, v(\zeta, t) + 2\sin(\zeta)\, v(\zeta, t - 1) + u(t),$$
with $(\zeta, t) \in (0, \pi) \times (0, t_f)$ and boundary conditions $v(0, t) = v(\pi, t) = 0$ for $t \in [0, t_f]$. After a spatial discretization using central finite differences, we obtain a bilinear time-delay system of the form
$$\dot{x}(t) = Ax(t) + A_d x(t - 1) + Nx(t)u(t) + Bu(t),$$
$$y(t) = Cx(t),$$
with $A, A_d, N \in \mathbb{R}^{n \times n}$, $B, C^{\mathsf{T}} \in \mathbb{R}^{n}$, and where we have chosen $n = 5\,000$ for our experiments.
[Figure 4: Time simulation results for the time-delay system. (a) Time response. (b) Relative errors. Legend: Original System, StrInt, BiLoewner.]

[Figure 5: Frequency domain results of the first transfer functions for the time-delay system. (a) Frequency response. (b) Relative errors.]

To compare with our structure-preserving approximation (StrInt), in this example we use the approach from [21] to construct an unstructured bilinear system (2) without time-delay using the bilinear Loewner framework, denoted by BiLoewner. For the structured interpolation, we have used the interpolation points ±logspace(-4, 4, 2)i for the first transfer function and ±logspace(-2, 2, 2)i for the second transfer function, with the two-sided projection approach from Theorem 3. The resulting reduced-order bilinear time-delay system has order $r = 8$. For the bilinear Loewner method, we have chosen the interpolation points ±logspace(-4, 4, 80)i and used the rank truncation idea to obtain a classical (unstructured) bilinear system, also of order 8. With the input signal
$$u(t) = \frac{\cos(10 t)}{20} + \frac{\cos(5 t)}{20},$$
Figure 4 shows that (a) the output trajectories of the original system, the structure-preserving interpolation, and the bilinear system without time-delay are indistinguishable to the naked eye, but (b) the relative error reveals that StrInt is several orders of magnitude more accurate than BiLoewner while having the same state-space dimension. The same behavior can be observed in the frequency domain for the first and second transfer functions, as shown in Figures 5 and 6; i.e., by preserving the special structure of the original system, we obtain a significantly better approximation of the same size.
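The structured evaluation underlying StrInt for this example keeps the delay in the matrix function, $K(s) = sI_n - A - e^{-\tau s}A_d$, with $N(s) = N$, $B(s) = B$, $C(s) = C$. A minimal sketch with toy matrices of our own choosing (small $n$, illustrative coefficients, not the benchmark data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 40, 1.0
# toy semi-discretized heated rod with delayed feedback (illustrative)
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * (n / np.pi) ** 2
Ad = np.diag(2 * np.sin(np.linspace(0.0, np.pi, n)))
N = 0.05 * rng.standard_normal((n, n))
b = np.ones(n)
c = np.ones(n) / n

# the time-delay structure lives inside the matrix function K(s)
K = lambda s: s * np.eye(n) - A - np.exp(-tau * s) * Ad

s1, s2 = 1j * 0.5, 1j * 2.0
v1 = np.linalg.solve(K(s1), b.astype(complex))
v2 = np.linalg.solve(K(s2), N @ v1)
V, _ = np.linalg.qr(np.column_stack([v1.real, v1.imag, v2.real, v2.imag]))

Ar, Adr, Nr = V.T @ A @ V, V.T @ Ad @ V, V.T @ N @ V
br, cr = V.T @ b, V.T @ c
r = V.shape[1]
Kr = lambda s: s * np.eye(r) - Ar - np.exp(-tau * s) * Adr  # reduced model keeps the delay

G1 = lambda s: c @ np.linalg.solve(K(s), b.astype(complex))
G1r = lambda s: cr @ np.linalg.solve(Kr(s), br.astype(complex))
print(abs(G1(s1) - G1r(s1)))  # ≈ 0 at the interpolation point
```

The reduced-order model is again a bilinear time-delay system, in contrast to the delay-free surrogate produced by the bilinear Loewner approach.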
[Figure 6: Relative errors of the second transfer functions for the time-delay system. (a) StrInt. (b) BiLoewner.]

In this section, we generalize the results from SISO structured bilinear systems to MIMO ones as in (9) and give a numerical example to illustrate the theory.
In principle, all the results from Section 3 can be directly extended to the MIMO system case (9). However, one needs to realize that, in this case, the quantities to be interpolated, i.e., the subsystem transfer functions, are matrix-valued. The main difference from the SISO case lies in the collection of the bilinear matrices into $N(s) = [N_1(s)\ \ldots\ N_m(s)]$ and the corresponding Kronecker products that produce the different combinations of the linear and bilinear parts in the $k$-th order transfer functions, e.g., in (5). Additionally, we will use the following notation:
$$\widetilde{N}(s) := \begin{bmatrix} N_1(s) \\ \vdots \\ N_m(s) \end{bmatrix}$$
as an alternative way of concatenating the bilinear terms. In this paper, we only focus on matrix interpolation, i.e., we interpolate the full matrix-valued structured subsystem transfer functions. There is a concept of tangential interpolation [3, 20] to handle matrix-valued functions, in which interpolation is enforced only in selected directions. We will consider that framework in a separate work, since the definition of tangential interpolation is not yet unified for bilinear systems [9, 32], let alone the structured ones we consider here.

The following theorem extends the results from Theorems 1 to 3 to MIMO structured bilinear systems.

Theorem 8 (Matrix interpolation). Let $G$ be a bilinear system, as described by (9), and $\widehat{G}$ the reduced-order bilinear system, constructed by (10). Given sets of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ and $\varsigma_1, \ldots, \varsigma_\theta \in \mathbb{C}$, for which the matrix functions $C(s)$, $K(s)^{-1}$, $N(s)$, $B(s)$ are defined and $\widehat{K}(s)$ has full rank, the following statements hold:

(a) If $V$ is constructed as
$$V_1 = K(\sigma_1)^{-1} B(\sigma_1),$$
$$V_j = K(\sigma_j)^{-1} N(\sigma_{j-1}) (I_m \otimes V_{j-1}), \quad 2 \leq j \leq k,$$
$$\operatorname{span}(V) \supseteq \operatorname{span}\big([V_1, \ldots, V_k]\big),$$
then the following interpolation conditions hold true:
$$G_1(\sigma_1) = \widehat{G}_1(\sigma_1), \quad G_2(\sigma_1, \sigma_2) = \widehat{G}_2(\sigma_1, \sigma_2), \quad \ldots, \quad G_k(\sigma_1, \ldots, \sigma_k) = \widehat{G}_k(\sigma_1, \ldots, \sigma_k).$$

(b) If $W$ is constructed as
$$W_1 = K(\varsigma_\theta)^{-\mathsf{H}} C(\varsigma_\theta)^{\mathsf{H}},$$
$$W_i = K(\varsigma_{\theta-i+1})^{-\mathsf{H}} \widetilde{N}(\varsigma_{\theta-i+1})^{\mathsf{H}} (I_m \otimes W_{i-1}), \quad 2 \leq i \leq \theta,$$
$$\operatorname{span}(W) \supseteq \operatorname{span}\big([W_1, \ldots, W_\theta]\big),$$
then the following interpolation conditions hold true:
$$G_1(\varsigma_\theta) = \widehat{G}_1(\varsigma_\theta), \quad G_2(\varsigma_{\theta-1}, \varsigma_\theta) = \widehat{G}_2(\varsigma_{\theta-1}, \varsigma_\theta), \quad \ldots, \quad G_\theta(\varsigma_1, \ldots, \varsigma_\theta) = \widehat{G}_\theta(\varsigma_1, \ldots, \varsigma_\theta).$$

(c) Let $V$ be constructed as in part (a) and $W$ as in part (b). Then, additionally to the results in (a) and (b), the interpolation conditions
$$G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta) = \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
hold for $1 \leq q \leq k$ and $1 \leq \eta \leq \theta$.

Proof. Starting with part (a), we recall that the transfer functions can be rewritten, by multiplying out the Kronecker products, as
$$G_k(\sigma_1, \ldots, \sigma_k) = \big[\, C(\sigma_k)K(\sigma_k)^{-1}N_1(\sigma_{k-1}) \cdots N_1(\sigma_1)K(\sigma_1)^{-1}B(\sigma_1), \ \ldots, \ C(\sigma_k)K(\sigma_k)^{-1}N_m(\sigma_{k-1}) \cdots N_m(\sigma_1)K(\sigma_1)^{-1}B(\sigma_1) \,\big],$$
collecting all combinations of the bilinear terms $N_1, \ldots, N_m$ as block columns. From the construction of $V$, it follows that applying Theorem 1 to the transfer functions in each single block entry gives the result. Part (b) follows directly from part (a) by replacing the matrix functions with their Hermitian-conjugated versions, except for $N(s) = [N_1(s)\ \ldots\ N_m(s)]$, where the single entries have to be conjugate-transposed. Therefore, the differently stacked $\widetilde{N}(s)$ is used here to give $\widetilde{N}(s)^{\mathsf{H}} = [N_1(s)^{\mathsf{H}}\ \ldots\ N_m(s)^{\mathsf{H}}]$. Finally, part (c) follows directly from parts (a), (b) and Theorem 3 for the single transfer function entries.

For Hermite interpolation as in Theorems 4, 5 and 7, a similar extension to the MIMO case follows.

Theorem 9 (Hermite matrix interpolation). Let $G$ be a bilinear system, described by (9), and $\widehat{G}$ the reduced-order bilinear system, constructed by (10). Given sets of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$ and $\varsigma_1, \ldots, \varsigma_\theta \in \mathbb{C}$, for which the matrix functions $C(s)$, $K(s)^{-1}$, $N(s)$, $B(s)$ are analytic and $\widehat{K}(s)$ has full rank, the following statements hold:

(a) If $V$ is constructed as
$$V_{1,j_1} = \partial_s^{j_1}\big(K^{-1}B\big)(\sigma_1), \quad j_1 = 0, \ldots, \ell_1,$$
$$V_{2,j_2} = \partial_s^{j_2} K^{-1}(\sigma_2)\, \partial_s^{\ell_1}\big(N (I_m \otimes K^{-1}B)\big)(\sigma_1), \quad j_2 = 0, \ldots, \ell_2,$$
$$\vdots$$
$$V_{k,j_k} = \partial_s^{j_k} K^{-1}(\sigma_k) \left( \prod_{j=1}^{k-2} \partial_s^{\ell_{k-j}}\big( (I_{m^{j-1}} \otimes N)(I_{m^{j}} \otimes K^{-1}) \big)(\sigma_{k-j}) \right) \partial_s^{\ell_1}\big( (I_{m^{k-2}} \otimes N)(I_{m^{k-1}} \otimes K^{-1})(I_{m^{k-1}} \otimes B) \big)(\sigma_1), \quad j_k = 0, \ldots, \ell_k,$$
$$\operatorname{span}(V) \supseteq \operatorname{span}\big([V_{1,0}, \ldots, V_{k,\ell_k}]\big),$$
then the following interpolation conditions hold true:
$$\partial_s^{j_1} G_1(\sigma_1) = \partial_s^{j_1} \widehat{G}_1(\sigma_1), \quad j_1 = 0, \ldots, \ell_1,$$
$$\vdots$$
$$\partial_{s_1}^{\ell_1} \cdots \partial_{s_{k-1}}^{\ell_{k-1}} \partial_{s_k}^{j_k} G_k(\sigma_1, \ldots, \sigma_k) = \partial_{s_1}^{\ell_1} \cdots \partial_{s_{k-1}}^{\ell_{k-1}} \partial_{s_k}^{j_k} \widehat{G}_k(\sigma_1, \ldots, \sigma_k), \quad j_k = 0, \ldots, \ell_k.$$

(b) If $W$ is constructed as
$$W_{1,i_\theta} = \partial_s^{i_\theta}\big(K^{-\mathsf{H}}C^{\mathsf{H}}\big)(\varsigma_\theta), \quad i_\theta = 0, \ldots, \nu_\theta,$$
$$W_{2,i_{\theta-1}} = \partial_s^{i_{\theta-1}}\big(K^{-\mathsf{H}}\widetilde{N}^{\mathsf{H}}\big)(\varsigma_{\theta-1}) \Big( I_m \otimes \partial_s^{\nu_\theta}\big(K^{-\mathsf{H}}C^{\mathsf{H}}\big)(\varsigma_\theta) \Big), \quad i_{\theta-1} = 0, \ldots, \nu_{\theta-1},$$
$$\vdots$$
$$W_{\theta,i_1} = \partial_s^{i_1}\big(K^{-\mathsf{H}}\widetilde{N}^{\mathsf{H}}\big)(\varsigma_1) \left( \prod_{i=2}^{\theta-1} \partial_s^{\nu_i}\big(I_{m^{i-1}} \otimes K^{-\mathsf{H}}\widetilde{N}^{\mathsf{H}}\big)(\varsigma_i) \right) \Big( I_{m^{\theta-1}} \otimes \partial_s^{\nu_\theta}\big(K^{-\mathsf{H}}C^{\mathsf{H}}\big)(\varsigma_\theta) \Big), \quad i_1 = 0, \ldots, \nu_1,$$
$$\operatorname{span}(W) \supseteq \operatorname{span}\big([W_{1,0}, \ldots, W_{\theta,\nu_\theta}]\big),$$
then the following interpolation conditions hold true:
$$\partial_s^{i_\theta} G_1(\varsigma_\theta) = \partial_s^{i_\theta} \widehat{G}_1(\varsigma_\theta), \quad i_\theta = 0, \ldots, \nu_\theta,$$
$$\vdots$$
$$\partial_{s_1}^{i_1} \partial_{s_2}^{\nu_2} \cdots \partial_{s_\theta}^{\nu_\theta} G_\theta(\varsigma_1, \ldots, \varsigma_\theta) = \partial_{s_1}^{i_1} \partial_{s_2}^{\nu_2} \cdots \partial_{s_\theta}^{\nu_\theta} \widehat{G}_\theta(\varsigma_1, \ldots, \varsigma_\theta), \quad i_1 = 0, \ldots, \nu_1.$$

(c) Let $V$ be constructed as in part (a) and $W$ as in part (b). Then, additionally to the results in (a) and (b), the conditions
$$\partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} G_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
$$= \partial_{s_1}^{\ell_1} \cdots \partial_{s_{q-1}}^{\ell_{q-1}} \partial_{s_q}^{j_q} \partial_{s_{q+1}}^{i_{\theta-\eta+1}} \partial_{s_{q+2}}^{\nu_{\theta-\eta+2}} \cdots \partial_{s_{q+\eta}}^{\nu_\theta} \widehat{G}_{q+\eta}(\sigma_1, \ldots, \sigma_q, \varsigma_{\theta-\eta+1}, \ldots, \varsigma_\theta)$$
hold for $j_q = 0, \ldots, \ell_q$; $i_{\theta-\eta+1} = 0, \ldots, \nu_{\theta-\eta+1}$; $1 \leq q \leq k$ and $1 \leq \eta \leq \theta$.

Proof. The results follow directly from Theorems 4, 5 and 7 with the same argumentation as in Theorem 8.

For completeness, the implicit interpolation results are also stated in the following corollary, without additional proofs.
Corollary 2 (Two-sided matrix interpolation with identical point sets). Let $G$ be a bilinear system, described by (9), and $\widehat{G}$ the reduced-order bilinear system, constructed by (10). Given a set of interpolation points $\sigma_1, \ldots, \sigma_k \in \mathbb{C}$, for which the matrix functions $C(s)$, $K(s)^{-1}$, $N(s)$, $B(s)$ are analytic and $\widehat{K}(s)$ has full rank, the following statements hold:
(a) Let $V$ and $W$ be constructed as in Theorem 8 (a) and (b) for the interpolation points $\sigma_1, \ldots, \sigma_k$. Then, additionally, it holds that
$$\nabla G_k(\sigma_1, \ldots, \sigma_k) = \nabla \widehat{G}_k(\sigma_1, \ldots, \sigma_k).$$

(b) Let $V$ and $W$ be constructed as in Theorem 9 (a) and (b) for the interpolation points $\sigma_1, \ldots, \sigma_k$. Then, additionally, it holds that
$$\nabla\Big( \partial_{s_1}^{\ell_1} \cdots \partial_{s_k}^{\ell_k} G_k(\sigma_1, \ldots, \sigma_k) \Big) = \nabla\Big( \partial_{s_1}^{\ell_1} \cdots \partial_{s_k}^{\ell_k} \widehat{G}_k(\sigma_1, \ldots, \sigma_k) \Big).$$

[Figure 7: Time simulation results for the MIMO damped mass-spring system. (a) Time response. (b) Relative errors. Legend: Original System, StrInt, FOInt(36), FOInt(72).]

We illustrate the matrix interpolation results in a numerical example. The experiments reported here have been executed on the same machine and with the same MATLAB version as in Section 3.3. We reconsider the damped mass-spring system example from Section 3.3.1 with the following modifications: the mass, damping and stiffness matrices from (18) stay unchanged; the input forces are now applied to the first and last masses, i.e., the input term becomes $B_u = [e_1, -e_n]$; and we observe the displacements of the second and fifth masses, which gives the output matrix $C_p = [e_2, e_5]^{\mathsf{T}}$. Therefore, we have 2 inputs and 2 outputs. We consider the same idea of bilinear springs as before, but working in different directions, i.e., we have $N_{p,1} = -S_1 K S_1$ and $N_{p,2} = S_2 K S_2$, where $S_1$ is chosen, as before, as a diagonal matrix with the entries linspace(0.2, 0, n), and $S_2$ is chosen to be a diagonal matrix with the entries linspace(0, 0.2, n). Overall, we have a damped mass-spring system of the form
$$M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = N_{p,1}x(t)u_1(t) + N_{p,2}x(t)u_2(t) + B_u u(t),$$
$$y(t) = C_p x(t), \qquad (19)$$
with $n = 1\,000$ masses for our experiments.

As in Section 3.3.1, we compare the structure-preserving interpolation method (StrInt) with the unstructured one, using the first-order realization of (19) (FOInt).
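The matrix interpolation of Theorem 8 (a) can be sketched for a toy MIMO system with $m = 2$ bilinear terms; as before, this is a minimal illustration with our own small matrices (first-order structure $K(s) = sI - A$), not the experiment's code. Instead of forming the Kronecker products explicitly, the blocks $N_i V_{j-1}$ are stacked directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, p = 30, 2, 2
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
N1 = 0.1 * rng.standard_normal((n, n))
N2 = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
K = lambda s: s * np.eye(n) - A
s1, s2 = 1.0, 2.0

# Theorem 8 (a): V1 = K(s1)^{-1} B, V2 = K(s2)^{-1} N (I_m x V1) = K(s2)^{-1} [N1 V1, N2 V1]
V1 = np.linalg.solve(K(s1), B)
V2 = np.linalg.solve(K(s2), np.hstack([N1 @ V1, N2 @ V1]))
V, _ = np.linalg.qr(np.hstack([V1, V2]))

# one-sided (W = V) structure-preserving reduced-order model
Ar, N1r, N2r = V.T @ A @ V, V.T @ N1 @ V, V.T @ N2 @ V
Br, Cr = V.T @ B, C @ V
r = V.shape[1]
Kr = lambda s: s * np.eye(r) - Ar

# matrix-valued subsystem transfer functions, blockwise over the bilinear terms
def G2(i):   # block i of G2(s1, s2): C K(s2)^{-1} N_i K(s1)^{-1} B, a p x m matrix
    Ni = (N1, N2)[i]
    return C @ np.linalg.solve(K(s2), Ni @ np.linalg.solve(K(s1), B))

def G2r(i):
    Nir = (N1r, N2r)[i]
    return Cr @ np.linalg.solve(Kr(s2), Nir @ np.linalg.solve(Kr(s1), Br))

print(np.linalg.norm(C @ np.linalg.solve(K(s1), B) - Cr @ np.linalg.solve(Kr(s1), Br)))  # ≈ 0
print(np.linalg.norm(G2(0) - G2r(0)), np.linalg.norm(G2(1) - G2r(1)))                    # ≈ 0
```

Each block of the matrix-valued second transfer function is matched, which is exactly the matrix interpolation statement of Theorem 8; note that the basis grows like $m^{j-1}$ per level, the growth issue mentioned in the conclusions.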
For the construction of StrInt and FOInt(36), we choose ±logspace(-4, 4, 3)i as interpolation points for the first transfer function and ±logspace(-3, 3, 3)i for the second one. Additionally, we construct another first-order approximation, FOInt(72), twice as large as the structured interpolation, by taking ±logspace(-4, 4, 6)i and ±logspace(-3, 3, 6)i as interpolation points for the first and second transfer functions, respectively. Also, we again restrict ourselves to a one-sided projection as in part (a) of Theorem 8 by setting $W = V$, which yields the reduced order $r = 36$ for StrInt and FOInt(36), and $r = 72$ for FOInt(72).
[Figure 8: Frequency domain results of the first transfer functions for the MIMO damped mass-spring system. (a) Frequency response. (b) Relative errors. Legend: Original System, StrInt, FOInt(36), FOInt(72).]

Figure 7 shows the results in the time domain, where we have chosen the input signal
$$u(t) = \begin{bmatrix} \sin(200 t) + 200 \\ -\cos(200 t) - 200 \end{bmatrix}$$
and measured the point-wise relative errors as
$$\frac{\|y(t) - \widehat{y}(t)\|_2}{\|y(t)\|_2}, \quad \text{for } t \in [0, t_f].$$
In the frequency domain, we consider the point-wise relative errors
$$\frac{\|G_1(\omega\mathrm{i}) - \widehat{G}_1(\omega\mathrm{i})\|_2}{\|G_1(\omega\mathrm{i})\|_2} \quad \text{and} \quad \frac{\|G_2(\omega_1\mathrm{i}, \omega_2\mathrm{i}) - \widehat{G}_2(\omega_1\mathrm{i}, \omega_2\mathrm{i})\|_2}{\|G_2(\omega_1\mathrm{i}, \omega_2\mathrm{i})\|_2},$$
for $\omega_1, \omega_2 \in [10^{-4}, 10^{4}]$. For both transfer function levels, we observe that FOInt(36) is not as accurate as StrInt and FOInt(72), which both approximate the transfer functions well, except for higher frequencies, where the unstructured approximation appears to suffer the same numerical drift-off effect as in the SISO case.

[Figure 9: Relative errors of the second transfer functions for the MIMO damped mass-spring system. (a) StrInt. (b) FOInt(36). (c) FOInt(72).]

We extended the structure-preserving interpolation framework to bilinear control systems. First, we developed the subspace conditions for structured interpolation for single-input single-output systems, both for simple and Hermite interpolation. These results were then extended to structured multi-input multi-output bilinear systems in the setting of full matrix interpolation. The effectiveness of the proposed approach was illustrated for two structured bilinear dynamical systems: a mass-spring-damper system and a model with internal delay. The theory developed here can be applied to a much broader class of structures than these two examples.

In our examples, we made the rather simple choice of logarithmically equidistant interpolation points on the first two transfer function levels; thus, the crucial problem of choosing good or even optimal interpolation points remains open. This question is not fully resolved even for structure-preserving interpolation of linear dynamical systems. Another issue to investigate further is the rapidly growing reduced-order dimension in the case of the matrix interpolation approach for multi-input multi-output systems. While in the linear case tangential interpolation can be used to control the growth of the basis, there is no uniform treatment of tangential interpolation for bilinear systems yet. This issue will be studied in a separate work.
Acknowledgments
Benner and Werner were supported by the German Research Foundation (DFG) Research Training Group 2297 "MathCoRe", Magdeburg, and the German Research Foundation (DFG) Priority Program 1897 "Calm, Smooth and Smart – Novel Approaches for Influencing Vibrations by Means of Deliberately Introduced Dissipation". Gugercin was supported in part by the National Science Foundation under Grants No. DMS-1720257 and DMS-1819110. Part of this material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 and by the Simons Foundation Grant No. 50736 while Gugercin and Benner were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the "Model and dimension reduction in uncertain and dynamic systems" program. We would like to thank Jens Saak for constructive discussions about the notation used in and the writing of this paper, and Igor Pontes Duff Pereira and Ion Victor Gosea for providing the MATLAB codes used for the generation of the bilinear time-delay example.
References [1] M. I. Ahmad, P. Benner, and P. Goyal. Krylov subspace-based model reduction for a classof bilinear descriptor systems.
J. Comput. Appl. Math. , 315:303–318, 2017. doi:10.1016/j.cam.2016.11.009 .[2] S. Al-Baiyat, A. S. Farag, and M. Bettayeb. Transient approximation of a bilinear two-areainterconnected power system.
Electric Power Systems Research , 26(1):11–19, 1993. doi:10.1016/0378-7796(93)90064-L .[3] A. C. Antoulas, C. A. Beattie, and S. Gugercin.
Interpolatory Methods for Model Reduc-tion . Computational Science & Engineering. Society for Industrial and Applied Mathematics,Philadelphia, PA, 2020. doi:10.1137/1.9781611976083 .[4] A. C. Antoulas, I. V. Gosea, and A. C. Ionita. Model reduction of bilinear systems inthe Loewner framework.
SIAM J. Sci. Comput. , 38(5):B889–B916, 2016. doi:10.1137/15M1041432 .[5] Z. Bai and D. Skoogh. A projection method for model reduction of bilinear dynamical systems.
Linear Algebra Appl. , 415(2–3):406–425, 2006. doi:10.1016/j.laa.2005.04.032 .[6] C. A. Beattie and S. Gugercin. Interpolatory projection methods for structure-preservingmodel reduction.
Syst. Control Lett., 58(3):225–232, 2009. doi:10.1016/j.sysconle.2008.10.016.

[7] P. Benner and T. Breiten. On H2-model reduction of linear parameter-varying systems. Proc. Appl. Math. Mech., 11(1):805–806, 2011. doi:10.1002/pamm.201110391.

[8] P. Benner and T. Breiten. Interpolation-based H2-model reduction of bilinear control systems. SIAM J. Matrix Anal. Appl., 33(3):859–885, 2012. doi:10.1137/110836742.

[9] P. Benner, T. Breiten, and T. Damm. Generalized tangential interpolation for model reduction of discrete-time MIMO bilinear systems. Internat. J. Control, 84(8):1398–1407, 2011. doi:10.1080/00207179.2011.601761.

[10] P. Benner, X. Cao, and W. Schilders. A bilinear H2 model order reduction approach to linear parameter-varying systems. Adv. Comput. Math., 45:2241–2271, 2019. doi:10.1007/s10444-019-09695-9.

[11] P. Benner and T. Damm. Lyapunov equations, energy functionals, and model order reduction of bilinear and stochastic systems. SIAM J. Control Optim., 49(2):686–711, 2011. doi:10.1137/09075041X.

[12] P. Benner and P. Goyal. Multipoint interpolation of Volterra series and H2-model reduction for a family of bilinear descriptor systems. Syst. Control Lett., 97:1–11, 2016. doi:10.1016/j.sysconle.2016.08.008.

[13] P. Benner and J. Saak. Efficient balancing-based MOR for large-scale second-order systems.
Math. Comput. Model. Dyn. Syst., 17(2):123–143, 2011. doi:10.1080/13873954.2010.540822.

[14] T. Breiten and T. Damm. Krylov subspace methods for model order reduction of bilinear control systems. Syst. Control Lett., 59(8):443–450, 2010. doi:10.1016/j.sysconle.2010.06.003.

[15] A. Bruns and P. Benner. Parametric model order reduction of thermal models using the bilinear interpolatory rational Krylov algorithm. Math. Comput. Model. Dyn. Syst., 21(2):103–129, 2015. doi:10.1080/13873954.2014.924534.

[16] T. Carleman. Application de la théorie des équations intégrales linéaires aux systèmes d'équations différentielles non linéaires.
Acta Math., 59:63–87, 1932. doi:10.1007/BF02546499.

[17] M. Condon and R. Ivanov. Krylov subspaces from bilinear representations of nonlinear systems. Compel-Int. J. Comp. Math. Electr. Electron. Eng., 26(2):399–406, 2007. doi:10.1108/03321640710727755.

[18] L. Feng and P. Benner. A note on projection techniques for model order reduction of bilinear systems. In AIP Conference Proceedings, volume 936, pages 208–211, 2007. doi:10.1063/1.2790110.

[19] G. M. Flagg and S. Gugercin. Multipoint Volterra series interpolation and H2 optimal model reduction of bilinear systems. SIAM J. Matrix Anal. Appl., 36(2):549–579, 2015. doi:10.1137/130947830.

[20] K. Gallivan, A. Vandendorpe, and P. Van Dooren. Model reduction of MIMO systems via tangential interpolation.
SIAM J. Matrix Anal. Appl., 26(2):328–349, 2004. doi:10.1137/S0895479803423925.

[21] I. V. Gosea, I. Pontes Duff, P. Benner, and A. C. Antoulas. Model order reduction of bilinear time-delay systems. In Proc. of 18th European Control Conference (ECC), pages 2289–2294, 2019. doi:10.23919/ECC.2019.8796085.

[22] S. Gugercin, T. Stykel, and S. Wyatt. Model reduction of descriptor systems by interpolatory projection methods. SIAM J. Sci. Comput., 35(5):B1010–B1033, 2013. doi:10.1137/130906635.

[23] C. S. Hsu, U. B. Desai, and C. A. Crawley. Realization algorithms and approximation methods of bilinear systems. In The 22nd IEEE Conference on Decision and Control, San Antonio, TX, USA, pages 783–788, 1983. doi:10.1109/CDC.1983.269628.

[24] A. Y. Khapalov. Controllability of the semilinear parabolic equation governed by a multiplicative control in the reaction term: a qualitative approach. In Proc. of 42nd IEEE Conference on Decision and Control (CDC), volume 2, pages 1487–1491, 2003. doi:10.1109/CDC.2003.1272822.

[25] S. G. Korpeoglu and I. Kucuk. Optimal control of a bilinear system with a quadratic cost functional. In Proc. of International Conference on Computing Communication Control and Automation (ICCUBEA), pages 1–6, 2018. doi:10.1109/ICCUBEA.2018.8697554.

[26] K. Kowalski and W.-H. Steeb.
Nonlinear Dynamical Systems and Carleman Linearization. World Scientific, Singapore, 1991. doi:10.1142/1347.

[27] V. Mehrmann and T. Stykel. Balanced truncation model reduction for large-scale systems in descriptor form. In P. Benner, V. Mehrmann, and D. C. Sorensen, editors, Dimension Reduction of Large-Scale Systems, volume 45 of Lect. Notes Comput. Sci. Eng., pages 83–115. Springer-Verlag, Berlin/Heidelberg, Germany, 2005. doi:10.1007/3-540-27909-1_3.

[28] R. R. Mohler. Natural bilinear control processes. IEEE Transactions on Systems Science and Cybernetics, 6(3):192–197, 1970. doi:10.1109/TSSC.1970.300341.

[29] R. R. Mohler. Bilinear Control Processes: With Applications to Engineering, Ecology and Medicine, volume 106 of Mathematics in Science and Engineering. Academic Press, New York, London, 1973.

[30] Y. Ou. Optimal Control of a Class of Nonlinear Parabolic PDE Systems Arising in Fusion Plasma Current Profile Dynamics. PhD thesis, Lehigh University, Bethlehem, Pennsylvania, USA, 2010.

[31] K. Qian and Y. Zhang. Bilinear model predictive control of plasma keyhole pipe welding process.
J. Manuf. Sci. Eng., 136(3):031002, 2014. doi:10.1115/1.4025337.

[32] A. C. Rodriguez, S. Gugercin, and J. Borggaard. Interpolatory model reduction of parameterized bilinear dynamical systems. Adv. Comput. Math., 44(6):1887–1916, 2018. doi:10.1007/s10444-018-9611-y.

[33] W. J. Rugh. Nonlinear System Theory: The Volterra/Wiener Approach. The Johns Hopkins University Press, Baltimore, 1981.

[34] J. Saputra, R. Saragih, and D. Handayani. Robust H∞ controller for bilinear system to minimize HIV concentration in blood plasma. J. Phys.: Conf. Ser., 1245:012055, 2019. doi:10.1088/1742-6596/1245/1/012055.

[35] L. Zhang and J. Lam. On H∞ model reduction of bilinear systems. Automatica J. IFAC, 38(2):205–216, 2002. doi:10.1016/S0005-1098(01)00204-7.