Matrix Equations, Sparse Solvers: M-M.E.S.S.-2.0.1 -- Philosophy, Features and Application for (Parametric) Model Order Reduction
Peter Benner∗, Martin Köhler∗, Jens Saak∗

May 12, 2020
Abstract
Matrix equations are omnipresent in (numerical) linear algebra and systems theory. Especially in model order reduction (MOR) they play a key role in many balancing based reduction methods for linear dynamical systems. When these systems arise from spatial discretizations of evolutionary partial differential equations, their coefficient matrices are typically large and sparse. Moreover, the numbers of inputs and outputs of these systems are typically far smaller than the number of spatial degrees of freedom. Then, in many situations the solutions of the corresponding large-scale matrix equations are observed to have low (numerical) rank. This feature is exploited by M-M.E.S.S. to find successively larger low-rank factorizations approximating the solutions. This contribution describes the basic philosophy behind the implementation and the features of the package, as well as its application in the model order reduction of large-scale linear time-invariant (LTI) systems and parametric LTI systems.
The M-M.E.S.S. toolbox [54] for MATLAB® (or package for GNU Octave) in version 2.0.1 focuses on the solution of large-scale symmetric algebraic and differential matrix equations and their application in model order reduction (MOR) and linear-quadratic regulator (LQR) problems. The basis for all considerations and problem formulations are linear dynamical systems of the form

    E ẋ(t) = A x(t) + B u(t),
    y(t) = C x(t) + D u(t),                                        (Σ)

where E, A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, D ∈ R^{p×m}, and x(t) ∈ R^n, for all time instances t ∈ [0, T]. We assume that E is invertible, and often in addition that (Σ) is asymptotically stable.

Some of the supported matrix equations have applications in H∞-control, where the slightly more structured system

    E ẋ(t) = A x(t) + B_1 u(t) + B_2 w(t),
    y(t) = C_1 x(t) + D_{11} u(t) + D_{12} w(t),
    z(t) = C_2 x(t) + D_{21} u(t) + D_{22} w(t),                   (Σ∞)

is considered.

M-M.E.S.S. aims at systems where n ∈ N is too large to store an n × n matrix in the computer's memory. This will usually be accounted for by the facts that p, m ≪ n and E, A are sparse, or have a sparse realization that we can exploit in computations. We present more details about the exploitable structures in Section 2. Similarly, for systems (Σ∞) the matrices B_1, B_2, C_1, C_2 are considered thin and rectangular and the parts D_{ij}, i, j ∈ {1, 2}, correspondingly small.

∗ Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany, {benner,koehlerm,saak}@mpi-magdeburg.mpg.de

The contribution of this document is two-fold. On the one hand, we give the first concise introduction to M-M.E.S.S., its general philosophy and current features. On the other hand, we show how the software, which is at its core intended for the solution of large-scale matrix equations, can be employed in the implementation of basic parametric model order reduction (PMOR) methods for systems of the form (Σ). Before moving on to the historical evolution of the package, we state the equations that can currently be solved by M-M.E.S.S.. The following is a list of all equations for which at least one solver function exists:
Algebraic Lyapunov equations

    0 = A P E^T + E P A^T + B B^T,
    0 = A^T Q E + E^T Q A + C^T C.                                 (CALE)

Algebraic Riccati equations

    0 = A P E^T + E P A^T + B B^T − E P C^T C P E^T,
    0 = A^T Q E + E^T Q A + C^T C − E^T Q B B^T Q E,               (CARE)

    0 = Ã P E^T + E P Ã^T + B̃_1 B̃_1^T − E P (C̃_1^T C̃_1 − C̃_2^T C̃_2) P E^T,
    0 = Ã^T Q E + E^T Q Ã + C̃_1^T C̃_1 − E^T Q (B̃_1 B̃_1^T − B̃_2 B̃_2^T) Q E.   (H∞-ARE)

In the last pair of equations, the matrix Ã is sparse plus low-rank (splr), i.e. Ã = A + UV^T, where U, V are tall and skinny. Moreover, the matrices B̃_1, B̃_2, C̃_1, C̃_2 are derived from the given system data by scaling and potentially rotation of the matrices B_1, B_2, C_1, C_2.

For finite time horizon linear-quadratic control problems, one needs to solve differential Riccati equations. We restrict to providing only the controller equations here, while the dual "filter-type" equations are supported as well.

Autonomous differential Riccati equations

    −E^T Q̇(t) E = A^T Q(t) E + E^T Q(t) A + C^T C − E^T Q(t) B B^T Q(t) E.     (ADRE)

Non-autonomous differential Riccati equations

    −E(t)^T Q̇(t) E(t) = (A(t) + Ė(t))^T Q(t) E(t) + E(t)^T Q(t) (A(t) + Ė(t))
                        + C(t)^T C(t) − E(t)^T Q(t) B(t) B(t)^T Q(t) E(t).     (NDRE)

The last equations are formulated for the time-varying counterpart of (Σ), i.e. the system where all matrices are allowed to depend on time as well. Both DREs contain the case of differential Lyapunov equations. Optimized solvers for those are still work in progress and must at the moment be implemented by setting either B, or C (in the dual equation), to zero, thus eliminating the quadratic term. Available solution methods in M-M.E.S.S. are described in Section 2.

Classic Lyapunov equation based balanced truncation is known to preserve asymptotic stability of the original system in the reduced-order model.
Other balancing-based methods have been developed to preserve other properties like passivity or contractivity. For these special balancing-type MOR methods, other matrix equations need to be solved that do not have a tailored solver in M-M.E.S.S., yet. Still, they can be reformulated into one of the types above. In order to have a more complete picture of which equations can be solved with the current M-M.E.S.S., we list them here, but get back to them in Section 3, where we describe their reformulations into the special cases above and why they can still be solved using M-M.E.S.S..

Positive real balancing

    0 = A P E^T + E P A^T + (E P C^T − B)(D + D^T)^{−1}(E P C^T − B)^T,
    0 = A^T Q E + E^T Q A + (E^T Q B − C^T)(D + D^T)^{−1}(E^T Q B − C^T)^T.    (PRARE)

Bounded real balancing

    0 = A P E^T + E P A^T + B B^T + (E P C^T + B D^T)(I − D D^T)^{−1}(E P C^T + B D^T)^T,
    0 = A^T Q E + E^T Q A + C^T C + (E^T Q B + C^T D)(I − D^T D)^{−1}(E^T Q B + C^T D)^T.   (BRARE)

Linear-quadratic Gaussian balancing

    0 = A P E^T + E P A^T + B B^T − (E P C^T + B D^T)(I + D D^T)^{−1}(E P C^T + B D^T)^T,
    0 = A^T Q E + E^T Q A + C^T C − (E^T Q B + C^T D)(I + D^T D)^{−1}(E^T Q B + C^T D)^T.   (LQGARE)
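All of the symmetric equations above tend to admit accurate low-rank solution approximations when m, p ≪ n, which is the core observation M-M.E.S.S. builds on. The following small dense SciPy illustration for (CALE) with E = I is our own sketch, not M-M.E.S.S. code; it solves the Lyapunov equation for a finite-difference Laplacian with a single input and shows the rapid singular value decay of the solution:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Tridiagonal finite-difference Laplacian: a typical sparse, stable A.
# With E = I, (CALE) reads  A P + P A^T + B B^T = 0.
n = 100
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
B = np.zeros((n, 1))
B[0, 0] = 1.0                      # single input: m = 1 << n

# SciPy solves A X + X A^H = Q, so pass Q = -B B^T.
P = solve_continuous_lyapunov(A, -B @ B.T)

# The singular values of P decay rapidly: P has low numerical rank and is
# well approximated by Z Z^T with only a few columns in Z.
sv = np.linalg.svd(P, compute_uv=False)
print(sv[40] / sv[0])              # many orders of magnitude below one
```

The dense solve is only feasible for such toy sizes; the point of the low-rank methods in M-M.E.S.S. is to compute the factor Z directly without ever forming P.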
The history of M-M.E.S.S.

Early days, the LyaPack years
The package M-M.E.S.S. originates in the work of Penzl [14, 47, 48] around the year 2000. More precisely, we understand M-M.E.S.S. as a continuation and successor of Penzl's LyaPack toolbox [49] for MATLAB. While most of the basic ideas from the original package have been preserved, some features have been abandoned and some have been altered to improve efficiency and reliability. The treatment of generalized state-space systems, i.e. systems (Σ) with nontrivial, i.e. non-identity, E matrices, was added first. These changes still happened under the LyaPack label in versions 1.1–1.8 until about 2007.
Transition to M-M.E.S.S. and present

The transition to the relabeled M-M.E.S.S.-1.0 package included a complete reorganization of the process data. Also, LyaPack used string manipulations and eval calls to mimic function pointers, which we replaced by proper function handles supported in modern MATLAB and GNU Octave. Moreover, the formulation of the low-rank alternating directions implicit (LR-ADI) iteration, which always was the heart and soul of LyaPack, was greatly updated to allow for cheaper evaluation of stopping criteria and an iteration-inherent generation of real solution factors, which could only be achieved through post-processing in LyaPack.

The necessity for an a priori selection of shift parameters for convergence acceleration used to be a major point of criticism regarding the ADI based solvers. The selection of shift generation methods was extended in M-M.E.S.S., and especially a new method that automatically generates the shifts during the iteration [34] was added, which makes the solvers accessible also to non-experts. Other than that, version 1.0 saw general code modernization to support optimized features in MATLAB and to be 100% GNU Octave compatible.

The two major contributions of version 2.0 were the inclusion of the RADI iteration [7] for (CARE) and several solvers for differential Riccati equations in both the autonomous (ADRE) and non-autonomous (NDRE) cases. Moreover, over time more system classes, including specially structured differential algebraic equation (DAE) based systems and second-order systems, have been added.
Future development plans

The most immediate upcoming feature is the inclusion of Krylov subspace projection methods for algebraic Lyapunov [58, 60] and Riccati equations [61, 39, 59]. The infrastructure and solvers are under current development and the feature is going to be part of version 3.0. The plans for the more distant future include the inclusion of low-rank solvers for Sylvester equations [12] and non-symmetric AREs [13], as well as the discrete-time counterparts of the existing equations, i.e. Stein equations [32, 38, 12, 34, 51] and discrete-time Riccati equations. Also, more complex sets of equations like Lur'e equations [50] and Lyapunov-plus-positive equations [19, 8, 57] are currently being investigated and will be added if solvers can be implemented in a robust and efficient way using the M-M.E.S.S. infrastructure.
The following section introduces M-M.E.S.S. and its basic implementation philosophy. It further elaborates on supported system structures beyond the basic form in (Σ) and describes current basic features of the package. Section 3 is dedicated to the description of MOR methods contained or demonstrated in M-M.E.S.S., while Section 4 shows how the existing tools in M-M.E.S.S. can be used to implement basic PMOR methods from the literature. The last section demonstrates how M-M.E.S.S. can be employed in parametric model order reduction, solving a selection of the above equations, for the benchmark example introduced in the separate chapter [?] of this volume. Similarly, this benchmark setting is considered in the other software chapters [?, 40, 31], in order to compare the applicability of the individual packages in a standardized setting.

usfs set       System                              Demos
default        standard / generalized              FDM [49], Rail [15]
               state-space form
so_1 / so_2    second-order, 1st / 2nd             TripleChain [65, 53]
               companion form
dae_1          semi-explicit index-1 DAE           DAE1 (BIPS power-systems model [24])
dae_2          semi-explicit index-2               DAE2 Stokes [56], Kármán vortex
               Stokes-type DAEs                    shedding [68]
dae_1/2/3_so   semi-explicit second-order          constrained TripleChain
               index-1/2/3 DAEs using
               companion form

Table 1: Supported system structures via user-supplied function sets (usfs).

M-M.E.S.S. — Philosophy and Features
The M-M.E.S.S. philosophy relies on three simple principles:

abstract state-space system  All routines assume to work on a system of the form (Σ) or (Σ∞). For a simple spatially discretized parabolic PDE, (Σ) is exactly given by the sparse matrices describing the semi-discretized system. For other systems, (Σ) may be a dense, inaccessible realization, like, e.g., a projection to a hidden manifold for a Stokes-type DAE system.

implicit reformulation  When the system matrices are potentially dense or even inaccessible, or otherwise prohibitive to use, the matrices are never formed explicitly, but only their actions are expressed in terms of the original data. For the aforementioned DAEs this means that only the given semi-explicit system matrices are employed, but the algorithm runs as if it was formulated on the hidden manifold, i.e. for the implicit ordinary differential equation. This technique is often also called implicit index-reduction. For second-order systems, similarly, it is sometimes prohibitive to work with the double-sized phase-space realization in companion form. Again, all operations are executed only using the original second-order matrices, while solutions live in the double-sized space.

operation abstraction
The abstraction of operations is realized via the so-called user-supplied function sets (usfs), which we have inherited from LyaPack. In comparison to LyaPack we have slightly extended this set of functions. At the same time, we have removed the necessity to provide empty functions, which are now automatically replaced by a do_nothing function. While making things far more complicated in, e.g., the default case (see Table 1), where all matrices are expected to be available, this allows hiding the actual matrix realization from the algorithms. This way, in principle, the algorithms can run matrix-free with respect to A and E, as demonstrated in [16].

The basic structure and design of M-M.E.S.S. was decided when object-oriented features in MATLAB were in their early stages and essentially absent in
GNU Octave. Still, some of the design follows object-oriented paradigms. We mimic the object-orientation by passing three central data structures through all relevant functions. These three items of type struct are:

eqn  This structure essentially holds all relevant information about the underlying system (Σ) or (Σ∞) and determines which equation in the dual pair we are aiming to solve, by eqn.type='N' or eqn.type='T', representing the transposition on the left multiplication by A.

oper  The operator structure, generated by the function operatormanager, holds all function handles for the relevant operations with the system matrices A and E. A list of these operations can be found in Table 2. Most functions in the list are accompanied by two functions, with appendices _pre and _post, called at the beginning and the end of a function working with them. They are intended for the generation and clean-up of helper data, like the pre-factorization of matrices, when a sparse direct solver is used, or the generation of a preconditioner for an iterative solver.

Y = oper.mul_A(eqn,opts,opA,B,opB)                  Y = A^opA B^opB
Y = oper.mul_E(eqn,opts,opE,B,opB)                  Y = E^opE B^opB
Y = oper.mul_ApE(eqn,opts,opA,p,opE,B,opB)          Y = (A^opA + p E^opE) B^opB
X = oper.sol_A(eqn,opts,opA,B,opB)                  A^opA X = B^opB
X = oper.sol_E(eqn,opts,opE,B,opB)                  E^opE X = B^opB
X = oper.sol_ApE(eqn,opts,opA,p,opE,B,opB)          (A^opA + p E^opE) X = B^opB
result = oper.init(eqn,opts,oper,f1,f2)             general initialization and sanity checks
[W,res0] = oper.init_res(eqn,opts,oper,V)           compute initial residual factor W from V,
                                                    and res0 = ||W||
[eqn,opts,oper] =
  eval_matrix_functions(eqn,opts,oper,t)            in the time-varying case, fix all the
                                                    above to time instance t
n = oper.size(eqn,opts,oper)                        returns the dimension n in (Σ)

Table 2: User-supplied function names and their actual operation.

opts  The actual options structure is a structure of structures, i.e. it has a substructure for each algorithm/function, but also holds central information on the top level.
For example, opts.norm defines the norm that should consistently be used in all operations and hierarchy levels of the potentially cascaded algorithms, while substructures like opts.adi or opts.shifts provide the specific control parameters for the LR-ADI algorithm and the shift computation.

Note that for all matrix operations in the usfs, we allow for corresponding _pre and _post functions. Other functions like init or size do not support _pre and _post.

While the function handles in oper work on the original A from (Σ), sometimes it is necessary to actually work with low-rank updated versions of A in the form A + UV^T. We have seen an example in (H∞-ARE), where Ã is exactly of this form. Another prominent appearance is the Newton-Kleinman iteration (see [33] for the classic iteration and [14] for the low-rank version) for (CARE), where in iteration j, the step equation (for the second equation in the pair) takes the form

    (A − B K_{j−1})^T X_j E + E^T X_j (A − B K_{j−1}) = −[C; K_{j−1}]^T [C; K_{j−1}].

Therefore, most solvers in M-M.E.S.S. assume that this structure can be given. The flag eqn.haveUV set to a non-zero value indicates that this is the case. Then the fields eqn.U and eqn.V need to hold the corresponding dense rectangular matrices of compatible dimensions. Similarly, the field eqn.haveE indicates that a non-trivial, i.e. non-identity, E matrix is present and needs to be used via the function handles in Table 2.

Note that it is prohibitive to form A + UV^T explicitly, since even for very sparse A it can easily be a dense matrix. Especially, it is prohibitive to use direct solvers based on matrix decompositions on it, since then, even if A + UV^T manages to preserve some sparsity, the fill-in will make the triangular factors dense. Therefore, all linear systems with A + UV^T are solved via the Sherman-Morrison-Woodbury matrix-inversion formula (see, e.g., [27, Section 2.1.4]) in M-M.E.S.S..

algebraic Lyapunov equations
  mess_lradi           The low-rank alternating directions implicit (LR-ADI)
                       iteration in residual-based formulation and with
                       automatic shift selection for (CALE). [34]
algebraic Riccati equations
  mess_lrnm            An inexact Kleinman-Newton iteration with line search
                       for (CARE). [68]
  mess_lrradi          The RADI iteration for (CARE). [7]
  mess_lrri            A low-rank version of the Riccati iteration [36]
                       for (H∞-ARE).
differential Riccati equations
  mess_bdf_dre         Low-rank formulation of backward differentiation
                       formulas for large-scale differential Riccati
                       equations (ADRE), (NDRE). [35]
  mess_rosenbrock_dre  Low-rank formulation of Rosenbrock methods for
                       large-scale differential Riccati equations (ADRE). [35]
  mess_splitting_dre   Splitting schemes for large-scale differential Riccati
                       equations (ADRE), (NDRE). [62, 63]

Table 3: Solver functions with algorithm and feature descriptions and latest and most feature-complete literature references.
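The Sherman-Morrison-Woodbury solve for systems with A + UV^T described above can be sketched in a few lines. The following Python/SciPy snippet is our illustration, not the M-M.E.S.S. code; it keeps A sparse and only factors A itself, plus one small dense solve:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_splr(A, U, V, b):
    """Solve (A + U V^T) x = b without forming the dense sum, via
    Sherman-Morrison-Woodbury:
        x = A^{-1} b - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1} b.
    Only sparse solves with A and one small k x k dense solve are needed."""
    lu = spla.splu(A.tocsc())          # factor the sparse A once
    Ainv_b = lu.solve(b)
    Ainv_U = lu.solve(U)               # n x k block solve, k small
    k = U.shape[1]
    S = np.eye(k) + V.T @ Ainv_U       # small capacitance matrix
    return Ainv_b - Ainv_U @ np.linalg.solve(S, V.T @ Ainv_b)

# Example: sparse stable A, rank-2 update that would destroy sparsity.
n, k = 200, 2
rng = np.random.default_rng(1)
A = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])
U = rng.standard_normal((n, k))
V = 0.01 * rng.standard_normal((n, k))   # mild update, keeps S nonsingular
b = rng.standard_normal(n)
x = solve_splr(A, U, V, b)
```

This mirrors the reasoning in the text: a direct factorization of A + UV^T would suffer fill-in, while the formula above reuses the sparse factorization of A for every shifted or updated system.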
We provide two solvers for the standard cases in (CALE) and (CARE) that are purely matrix-based, intended for large-scale sparse matrix coefficients and classic 2-term low-rank factorizations of the constant terms and the cores of the quadratic terms. The functions are called mess_lyap and mess_care and mimic the calls of lyap and care from MATLAB's control systems toolbox, or the GNU Octave control package, for dense matrices. Other than that, we have the functions in Table 3 that allow for more flexible tuning, solve a larger variety of equations and, especially, benefit from the full potential of the user-supplied functions. In the table we give references to the most state-of-the-art presentations of the algorithms in the literature, on which our implementations are based.
Model Order Reduction with M-M.E.S.S.

The basic model order reduction (MOR) facilities in M-M.E.S.S. are limited. Still, all building blocks for projection-based MOR using balancing methods, where matrix equations are most obviously applied, are available. For the sake of completeness and to fix our notation, we repeat the basics of projection-based MOR. Given a state-space system of the form (Σ), we search for the two rectangular transformation matrices V, W ∈ R^{n×r} that define the actual oblique projection T = V (W^T V)^{−1} W^T, but transform the system into the reduced coordinates directly. The reduced-order model then takes the form

    Ê ẋ̂(t) = Â x̂(t) + B̂ u(t),
    ŷ(t) = Ĉ x̂(t) + D u(t),                                       (ROM)

where Ê = W^T E V, Â = W^T A V ∈ R^{r×r}, B̂ = W^T B ∈ R^{r×m}, and Ĉ = C V ∈ R^{p×r}.

The number of actual MOR routines in M-M.E.S.S. is rather limited. In version 2.0.1, we have mess_balanced_truncation, implementing classic Lyapunov balancing [42, 64, 37] for systems (Σ) realized with sparse E and A ([14, 29, 53]), and mess_tangential_irka, implementing the tangential iterative rational Krylov algorithm (IRKA) [28] for first- and second-order systems. Our Gramian computation methods are integrated in a range of MOR software packages, though. While sssMOR [20] directly calls M-M.E.S.S.-1.0.1, for other packages like MOREMBS [23] and MORPACK [43] we have contributed tailored versions of our algorithms. Also, we provide tools like a square root method (SRM) function to compute the transformation matrices V and W from given Gramian factors. This function currently only uses the classic Lyapunov balancing error bound in the adaptive mode. This is subject to change in future versions.

Consider that all matrices in (Σ) are available. As an example we use the Steel Profile benchmark [15, 45], included in M-M.E.S.S.
, using the version with n = …. Computing a reduced-order model E_r, A_r, B_r, C_r for maximum reduced order 25 using the tangential IRKA [28] is as easy as calling:

    eqn = getrail(1);
    opts.irka.r = 25;
    [Er, Ar, Br, Cr] = mess_tangential_irka(eqn.E_, eqn.A_, eqn.B, eqn.C, opts)

This will use default values for maximum iteration numbers and stopping criteria, which can be refined via the opts.irka structure. For a list of available options see help mess_tangential_irka.

Analogously, to compute a (Lyapunov) balanced truncation approximation of maximum order 50 and with an absolute H∞-error tolerance of 10^{−2} for the same model, the simplest call is:

    eqn = getrail(1);
    [Er, Ar, Br, Cr] = mess_balanced_truncation(eqn.E_, eqn.A_, eqn.B, eqn.C, 50, ...
        1e-2);

Note that Lyapunov balancing leaves the D matrix untouched in general, while it is absent in this example anyway. Note, further, that the interface may change slightly in future releases to make it more consistent with that of the IRKA function and to allow for the addition of the other balancing methods.

The balanced truncation approximation can be achieved in a step-by-step procedure, first computing the two Gramian factors, then applying them in the square root method to determine V and W, and finally compressing the large-scale matrices to the reduced-order system matrices. This can all be executed using the procedural building blocks of mess_balanced_truncation. The example bt_mor_rail_tol in the DEMOS/Rail folder, residing in the main installation folder of
M-M.E.S.S.-2.0.1, demonstrates this procedure. The step-wise approach can also be used for a number of structured systems, like second-order systems or semi-explicit DAE systems, while mess_balanced_truncation only supports generalized systems with invertible E, and all coefficients given explicitly as matrices, at the moment. See Table 4 for an overview of demonstration examples explaining these procedures.

bt_mor_DAE1_tol     Balanced truncation for a semi-explicit power systems
                    model of differential index 1 [24]
bt_mor_DAE2         Balanced truncation for Stokes and Oseen equations of
                    index 2 [30, 17]
BT_TripleChain      First-order and structure-preserving balanced truncation
                    for a model with three coupled mass-spring-damper
                    chains [65, 53, 52]
BT_sym_TripleChain  As above, but exploiting state-space symmetry of the
                    tailored companion-form first-order reformulation
BT_DAE3_SO          First-order and structure-preserving balanced truncation
                    for a variant of the above system that has a constraint
                    turning it into an index-3 DAE [55, 66, 67]

Table 4: Demonstration examples for balanced truncation of structured systems in M-M.E.S.S..

We have shown the Riccati equations defining the Gramians employed in positive-real, bounded-real, and linear-quadratic Gaussian balanced truncation in equations (PRARE), (BRARE), and (LQGARE) in the Introduction. Assuming we have computed the Gramian factors, the reduced-order models can be derived along the lines of the demonstration examples from Table 4. This can be done using the same M-M.E.S.S. function, at least for a fixed desired reduced order. The error-bound-based order decision in the square root method needs adaptation to the specific error bound in some cases, though; see, e.g., [2, Section 7.5] for a comparison of the bounds and procedures. Here, we restrict ourselves to presenting how the specially structured Riccati equations can be solved with the existing functionality in M-M.E.S.S..

For positive-real systems, by definition, D + D^T is positive definite when it is invertible. This is always the case when the Riccati equations exist and do not degenerate to a set of Lur'e equations. Then we can decompose D + D^T into Cholesky factors, i.e. R^T R = D + D^T. Using these Cholesky factors, we define

    Ẽ = E,   Ã = A + U V^T,   B̃ = B R^{−1},   C̃ = R^{−T} C,

with U = B̃ and V^T = −C̃ (so that Ã = A − B̃ C̃), and a straightforward calculation shows that (PRARE) can be rewritten in the form

    0 = Ã P Ẽ^T + Ẽ P Ã^T + B̃ B̃^T + Ẽ P C̃^T C̃ P Ẽ^T,
    0 = Ã^T Q Ẽ + Ẽ^T Q Ã + C̃^T C̃ + Ẽ^T Q B̃ B̃^T Q Ẽ.

This resembles the Riccati case in (H∞-ARE) with a low-rank updated matrix A and only the positive quadratic term present. This case is supported by the mess_lrri routine. Note that D + D^T is of small dimension, such that this reformulation is always feasible.

The bounded-real assumptions guarantee that I − D D^T and I − D^T D are symmetric positive definite. Therefore, we can decompose them into Cholesky factors, i.e. R^T R = I − D D^T and L^T L = I − D^T D. Now, we define

    Ẽ = E,   Ã = A + U V^T,   B̃ = B L^{−1},   C̃ = R^{−T} C,

with U = B D^T and V^T = (I − D D^T)^{−1} C, and another technical, but straightforward, calculation shows that (BRARE) can be rewritten in the form

    0 = Ã P Ẽ^T + Ẽ P Ã^T + B̃ B̃^T + Ẽ P C̃^T C̃ P Ẽ^T,
    0 = Ã^T Q Ẽ + Ẽ^T Q Ã + C̃^T C̃ + Ẽ^T Q B̃ B̃^T Q Ẽ.

This, again, falls into the class of equations in (H∞-ARE) with a low-rank updated matrix A and only the positive quadratic term present. As mentioned above, this case is supported by the mess_lrri routine. For the same reason as above, this reformulation can always be done.

For linear-quadratic Gaussian balanced truncation, an important special case (see, e.g., [11, 44, 41, 3]) is D =
0. In that case, (LQGARE) obviously reduces to the standard Riccati equation (CARE), which can be solved using mess_lrnm or mess_lrradi. The corresponding M-M.E.S.S. workflow is demonstrated in the lqgbt_mor_FDM example for a simple heat equation model semi-discretized by the finite difference method.

On the other hand, when D ≠ 0, it is, by standard assumptions in M-M.E.S.S., real, and all eigenvalues of D D^T and D^T D are non-negative. Therefore, I + D D^T and I + D^T D are symmetric and positive definite and, analogous to the above, we can decompose into Cholesky factorizations R^T R = I + D D^T and L^T L = I + D^T D. We now define

    Ẽ = E,   Ã = A + U V^T,   B̃ = B L^{−1},   C̃ = R^{−T} C,

with U = −B D^T and V^T = (I + D D^T)^{−1} C. An analogous calculation to the bounded-real case shows that (LQGARE) can be rewritten in the form

    0 = Ã P Ẽ^T + Ẽ P Ã^T + B̃ B̃^T − Ẽ P C̃^T C̃ P Ẽ^T,
    0 = Ã^T Q Ẽ + Ẽ^T Q Ã + C̃^T C̃ − Ẽ^T Q B̃ B̃^T Q Ẽ.

Due to the differing signs, here we end up with a standard Riccati equation (CARE), just like in the case D = 0. Again, the transformation is always feasible in the sense of M-M.E.S.S. applicability.
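The reformulations above can be verified numerically on a small dense example: solve the rewritten (CARE) for the transformed LQG data and check the residual of the original (LQGARE). A SciPy sketch of ours with randomly generated, hypothetical system data:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical random data standing in for a system (Sigma) with D != 0.
rng = np.random.default_rng(2)
n, m, p = 7, 2, 3
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.5 * rng.standard_normal((n, m))
C = 0.5 * rng.standard_normal((p, n))
D = 0.2 * rng.standard_normal((p, m))

S1 = np.eye(p) + D @ D.T               # = R^T R
S2 = np.eye(m) + D.T @ D               # = L^T L
R = np.linalg.cholesky(S1).T           # upper triangular, R^T R = S1
L = np.linalg.cholesky(S2).T           # upper triangular, L^T L = S2

# Transformed data: A~ = A + U V^T with U = -B D^T, V^T = S1^{-1} C.
At = A - B @ D.T @ np.linalg.solve(S1, C)
Bt = B @ np.linalg.inv(L)              # B L^{-1}
Ct = np.linalg.solve(R.T, C)           # R^{-T} C

# Solve the rewritten standard (CARE) for the transformed data ...
Q = solve_continuous_are(At, Bt, Ct.T @ Ct, np.eye(m), e=E)

# ... and check that Q satisfies the original (LQGARE).
Wq = E.T @ Q @ B + C.T @ D
res = A.T @ Q @ E + E.T @ Q @ A + C.T @ C - Wq @ np.linalg.solve(S2, Wq.T)
print(np.linalg.norm(res))             # small: the reformulation is exact
```

The same pattern, with the signs and Cholesky factors adjusted as in the text, checks the positive-real and bounded-real reformulations as well.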
Parametric Model Order Reduction with M-M.E.S.S.

Parametric MOR (PMOR) aims to preserve symbolic parameters of the original system description also in the reduced-order model. In the most general case that means the system

    E(µ) ẋ(µ, t) = A(µ) x(µ, t) + B(µ) u(t),
    y(µ, t) = C(µ) x(µ, t) + D(µ) u(t),                            (Σ(µ))

is transformed into

    Ê(µ) ẋ̂(µ, t) = Â(µ) x̂(µ, t) + B̂(µ) u(t),
    ŷ(µ, t) = Ĉ(µ) x̂(µ, t) + D(µ) u(t).                           (ROM(µ))

By default, M-M.E.S.S.-2.0.1 does not support PMOR. It is, however, very easy to implement basic PMOR routines building up on the methods from the previous section. The key ingredient, which at the same time establishes the link to the previous section in many methods, is the necessity to evaluate standard MOR problems at certain training points for given parameter values µ^(i) (i = 1, ..., k), e.g. on a sparse grid in the parameter domain. While piecewise MOR approaches (e.g. [4]) aim to find constant global (with respect to the parameter) transformation matrices V and W to derive (ROM(µ)), other methods aim to establish it by interpolation of some kind. The literature basically provides three approaches interpolating different system features; see, e.g., [10] for a further categorization of PMOR methods:

• matrix interpolation, i.e. function interpolation of the parameter-dependent coefficient matrices, or the transformation matrices (e.g. [46, 25, 26, 1]),
• interpolation of the transfer functions in the parameter variable [5],
• interpolation of system poles (e.g. [9, 69]).

We demonstrate the basic steps for piecewise and interpolatory methods along the lines of [4, 5] in the remainder of this section and give numerical illustrations in Section 5.

We have mentioned above that the aim, here, is to find V and W constant, such that Ê(µ) = W^T E(µ) V, Â(µ) = W^T A(µ) V, B̂(µ) = W^T B(µ), Ĉ(µ) = C(µ) V.
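In a piecewise approach, such constant V and W are typically obtained by concatenating local bases computed at the training parameters and truncating ranks, as detailed in the following paragraph. A minimal NumPy sketch of ours for that compression step, with hypothetical local bases:

```python
import numpy as np

def concat_and_truncate(bases, tol=1e-8):
    """Stack local projection bases column-wise and remove (numerically)
    linearly dependent directions via a rank-revealing SVD."""
    U, s, _ = np.linalg.svd(np.hstack(bases), full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r]

# Two hypothetical local bases that share one direction: the global basis
# then has rank 3, not 4.
rng = np.random.default_rng(0)
n = 50
common = rng.standard_normal((n, 1))
V1 = np.hstack([common, rng.standard_normal((n, 1))])
V2 = np.hstack([common, rng.standard_normal((n, 1))])
V = concat_and_truncate([V1, V2])
print(V.shape[1])  # -> 3
```

The reduced matrices then follow by projection, e.g. Ê(µ) = W^T E(µ) V; for a one-sided variant one simply uses W = V.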
The strong point of this method is that it trivially allows the ROMs at the parameters µ^(i) to vary in their reduced order. This is due to the fact that

    V = [V^(1) ··· V^(k)]   and   W = [W^(1) ··· W^(k)],

with V^(i) and W^(i) the transformation matrices at parameter sample µ^(i). This concatenation should be followed by a rank truncation to eliminate linear dependencies. It essentially does not matter how the single transformation matrices have been generated. We follow the presentation in [4], where IRKA is used. In the numerical experiments we also compare to versions using balanced truncation in the training samples.

Interpolation of transfer functions

The representation of (Σ(µ)) in the frequency domain after Laplace transformation in the time variable t leads to the transfer function

    H(µ, s) = C(µ) (s E(µ) − A(µ))^{−1} B(µ).

For fixed µ, the IRKA method seeks to interpolate this function in the s-direction, while the well-known balanced truncation method computes an approximation to this function, with a computable error bound. Therefore, it is an obvious task to try to extend these features by interpolation in the µ-direction. Baur and Benner meet this goal in [5] for local balanced truncation approximations of (Σ(µ)), achieving both stability preservation and an error bound, i.e. the selling points of balanced truncation. Moreover, their method shares the flexibility with respect to the ROM orders, since the interpolation is done via the transfer function, which has fixed dimensions independent of the realization of the system. On the other hand, interpolation on matrix manifolds and with respect to system invariants needs to fix the dimensions of those objects. For simplicity, we restrict ourselves to the case of scalar parameters.
The approach in [5] defines (ROM(µ)) via its transfer function, which is chosen as an interpolant of the form

    Ĥ(µ, s) = Σ_{i=1}^{k} ℓ_i(µ) Ĥ^(i)(s)
            = Σ_{i=1}^{k} ℓ_i(µ) Ĉ^(i) (s Ê^(i) − Â^(i))^{−1} B̂^(i)
            = Σ_{i=1}^{k} Ĉ^(i)(µ) (s Ê^(i) − Â^(i))^{−1} B̂^(i),

with scalar coefficient functions ℓ_i(µ), with Ĥ^(i)(s) the transfer function of the ROM at parameter sample µ^(i), and with Ĉ^(i)(µ) = ℓ_i(µ) Ĉ^(i). One can use the last identity to define the matrices for the ROM realization

    Ê = diag(Ê^(1), ..., Ê^(k)),   Â = diag(Â^(1), ..., Â^(k)),
    B̂ = [B̂^(1); ...; B̂^(k)],      Ĉ(µ) = [Ĉ^(1)(µ) ··· Ĉ^(k)(µ)],

such that

    Ĥ(µ, s) = Ĉ(µ) (s Ê − Â)^{−1} B̂.

Note that the parameter could as well be put into B̂. The specific choice of Lagrange polynomials is not necessary. We present experiments with both classic polynomial interpolation and spline interpolation in the next section. Since we are dealing with scalar coefficient functions here, it is advisable for a modern MATLAB implementation to exploit the power of Chebfun [22, 21]. We do this for the polynomial interpolation and the generation of the grid of training parameters, while the splines use our own implementation.
It describes a simple heat transfer model on the domain depicted in Figure 1a. Here, we investigate the one-parameter version of the benchmark. That means, the heat transfer coefficients on the four circular sub-domains are given as four fixed decimal multiples of a single scalar parameter µ ∈ M ⊂ R. The full order model has dimension n = … and has one input but 4 outputs.

Figure 1: Computational domain and sigma magnitude plot for the thermal block model. (a) Computational domain (sub-domains Ω, boundaries Γ_in, Γ_N, Γ_D). (b) Sigma magnitude plot.

Method      ROMs (orders at the 10 training samples)   Full        One-sided
Piecewise:
  BT(10^−)  9/12/15/13/12/9/8/9/8/7                    102 (52)    200 (36)
  BT(20)    20/20/20/20/20/20/20/20/20/20              199 (64)    200 (72)
  IRKA      20/20/20/20/20/20/20/20/20/20              200 (132)   200 (132)
Lagrange:
  BT(10^−)  9/9/12/15/12/9/8/8/7/7                     96          –
  IRKA      20/20/20/20/20/20/20/20/20/20              200         –
B-Spline:
  BT(10^−)  9/9/12/15/12/9/8/8/7/7                     96          –
  IRKA      20/20/20/20/20/20/20/20/20/20              200         –

Table 5: Reduced orders of the training-sample ROMs and of the final ROM (numbers in parentheses are after additional truncation with tolerance 10^−).

Figure 2: Relative sigma-magnitude errors of different piecewise parametric reduction approaches for the thermal block model. Panels: (a) piecewise BT(10^−), (b) truncated piecewise BT(10^−), (c) piecewise BT(20), (d) truncated piecewise BT(20), (e) piecewise IRKA, (f) truncated piecewise IRKA.

Figure 3: Relative sigma-magnitude errors of different piecewise parametric one-sided reduction approaches for the thermal block model (same panel layout as Figure 2).

Figure 4: Relative sigma-magnitude errors of different transfer function interpolation methods for parametric reduction for the thermal block model. Panels: (a) Lagrange-BT, (b) Bspline-BT, (c) Lagrange-IRKA, (d) Bspline-IRKA.

In Figure 1b we present the sigma-magnitude plot of the full order model (FOM), i.e., we plot ‖H(µ, s)‖ = σ_max(H(µ, s)) over the full parameter and frequency ranges. The plot is based on 100 logarithmically equi-spaced sample points (logspace-generated) in each direction. We also use this sampling for all relative sigma-magnitude error plots in the other figures. The error plots analogously show ‖H(µ, s) − Ĥ(µ, s)‖ / ‖H(µ, s)‖.

Excluding the 10 000 evaluations for the pre-sampling of the original transfer function, all computations for the generation of the ROMs and the evaluation of the approximation errors can be executed in less than 8 minutes.

We compare both IRKA and classic (Lyapunov) balanced truncation (BT) in the piecewise as well as the transfer function interpolation context. For IRKA we use fixed order r =
20 in all training samples, while for BT we run in two modes. Since the BT error bound allows for adaptive processing, i.e., automatic choice of the reduced order, we do that with absolute error tolerance 10^−. On the other hand, for a fairer comparison to IRKA we also run BT with fixed order r =
20. We refer to these two modes as BT(10^−) and BT(20) in the following.

For the piecewise approaches we use 10 logarithmically equi-spaced (logspace-generated) parameter samples in M as the training positions. For the interpolatory approaches we choose 10 Chebyshev roots generated by Chebfun. We have mentioned the final rank truncation after basis concatenation in Section 4.1. We use a tolerance equal to eps in the standard case. Alternatively, to further compress the final parametric ROM, we truncate with tolerance 10^− and refer to this approach by the name truncated piecewise.

For the training, BT cannot reuse information from previous samples very easily. On the other hand, IRKA can be initialized with the ROM from the previous parameter sample, which in most cases made it converge after less than 5 steps (mostly being stopped by the criterion monitoring the relative change of the model in the H₂-norm). For further implementation details we refer to the scripts in the code package.

Although BT guarantees that the local ROMs in the sample points preserve the asymptotic stability of the original model, and IRKA also preserves stability upon convergence, this feature is in general lost after concatenating the local bases into the global one. Still, for a one-sided projection the stability of the global ROM can be preserved: due to the stability and symmetry of the thermal block model, Bendixson's theorem [6] guarantees this. Therefore, we compare to a one-sided approach that simply combines V and W into one matrix. The comparison can be found in Figures 2 and 3, and the corresponding ROM orders are given in the first block of Table 5.

For the interpolatory approaches, we compare Lagrange polynomials and variation-diminishing B-splines of order 2.
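As a small illustration of the two families of coefficient functions ℓ_i(µ), the sketch below (in Python with SciPy rather than Chebfun; the equidistant node grid is a stand-in for the actual training samples) evaluates a Lagrange basis and a degree-one (order-2) B-spline basis, and checks that both form a partition of unity, the property underlying the variation-diminishing construction:

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical training samples (stand-in for the Chebfun-generated grid).
mus = np.linspace(0.0, 1.0, 5)

def lagrange_basis(i, x, nodes):
    """ell_i(x) for the Lagrange polynomial basis over `nodes`."""
    others = np.delete(nodes, i)
    return np.prod((x - others) / (nodes[i] - others))

# Degree-1 (order-2) B-spline basis with clamped knots over the samples.
deg = 1
t = np.r_[mus[0], mus, mus[-1]]          # clamped knot vector
def bspline_basis(i, x):
    c = np.zeros(len(mus)); c[i] = 1.0   # unit coefficient -> i-th basis function
    return BSpline(t, c, deg)(x)

x = 0.37
lag = [lagrange_basis(i, x, mus) for i in range(len(mus))]
bsp = [bspline_basis(i, x) for i in range(len(mus))]
print(np.isclose(sum(lag), 1.0), np.isclose(sum(bsp), 1.0))  # True True
```

With either basis, the parametric ROM is obtained by weighting the local transfer functions Ĥ^(i)(s) with these coefficients.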
Here, we always use BT(10^−) in the BT case, since the results are already hard to distinguish from the IRKA-based ones and we do not expect much improvement from the higher local orders.

It can be seen from Table 5 that the piecewise BT models are, in parts significantly, smaller than the piecewise IRKA models. This comes at the price that the accuracy is not as good in parts of the domain. Nonetheless, e.g., the truncated one-sided BT(10^−) approximation yields a relative error below 1% on a majority (around 70%) of the investigated frequency-parameter domain with a model size that is 3.…

References

[1] D. Amsallem and C. Farhat. An online method for interpolating linear parametric reduced-order models. SIAM J. Sci. Comput., 33(5):2169–2198, 2011.
[2] A. C. Antoulas. Approximation of Large-Scale Dynamical Systems, volume 6 of Adv. Des. Control. SIAM Publications, Philadelphia, PA, 2005.
[3] B. Batten King, N. Hovakimyan, K. A. Evans, and M. Buhl. Reduced order controllers for distributed parameter systems: LQG balanced truncation and an adaptive approach. Mathematical and Computer Modelling, 43(9):1136–1149, 2006.
[4] U. Baur, C. A. Beattie, P. Benner, and S. Gugercin. Interpolatory projection methods for parameterized model reduction. SIAM J. Sci. Comput., 33(5):2489–2518, 2011.
[5] U. Baur and P. Benner. Modellreduktion für parametrisierte Systeme durch balanciertes Abschneiden und Interpolation (Model Reduction for Parametric Systems Using Balanced Truncation and Interpolation). at-Automatisierungstechnik, 57(8):411–420, 2009.
[6] I. Bendixson. Sur les racines d'une équation fondamentale. Acta Math., 25(1):359–365, 1902.
[7] P. Benner, Z. Bujanović, P. Kürschner, and J. Saak. RADI: A low-rank ADI-type algorithm for large-scale algebraic Riccati equations. Numer. Math., 138(2):301–330, February 2018.
[8] P. Benner and P. Goyal. Balanced truncation model order reduction for quadratic-bilinear systems. e-print 1705.00160, arXiv, 2017. math.OC.
[9] P. Benner, S. Grundel, and N. Hornung. Parametric model order reduction with a small H₂-error using radial basis functions. Adv. Comput. Math., 41(5):1231–1253, 2015.
[10] P. Benner, S. Gugercin, and K. Willcox. A survey of model reduction methods for parametric systems. SIAM Review, 57(4):483–531, 2015.
[11] P. Benner and J. Heiland. LQG-balanced truncation low-order controller for stabilization of laminar flows. In R. King, editor, Active Flow and Combustion Control 2014, volume 127 of Notes on Numerical Fluid Mechanics and Multidisciplinary Design, pages 365–379. Springer International Publishing, 2015.
[12] P. Benner and P. Kürschner. Computing real low-rank solutions of Sylvester equations by the factored ADI method. Comput. Math. Appl., 67(9):1656–1672, 2014.
[13] P. Benner, P. Kürschner, and J. Saak. Low-rank Newton-ADI methods for large nonsymmetric algebraic Riccati equations. J. Frankl. Inst., 353(5):1147–1167, 2016.
[14] P. Benner, J.-R. Li, and T. Penzl. Numerical solution of large-scale Lyapunov equations, Riccati equations, and linear-quadratic optimal control problems. Numer. Lin. Alg. Appl., 15(9):755–777, 2008.
[15] P. Benner and J. Saak. A semi-discretized heat transfer model for optimal cooling of steel profiles. In P. Benner, V. Mehrmann, and D. Sorensen, editors, Dimension Reduction of Large-Scale Systems, volume 45 of Lect. Notes Comput. Sci. Eng., pages 353–356. Springer-Verlag, Berlin/Heidelberg, Germany, 2005.
[16] P. Benner, J. Saak, F. Schieweck, P. Skrzypacz, and H. K. Weichelt. A non-conforming composite quadrilateral finite element pair for feedback stabilization of the Stokes equations. J. Numer. Math., 22(3):191–220, October 2014.
[17] P. Benner, J. Saak, and M. M. Uddin. Balancing based model reduction for structured index-2 unstable descriptor systems with application to flow control. Numer. Algebra Control Optim., 6(1):1–20, March 2016.
[18] P. Benner and S. W. R. Werner. Limited balanced truncation for large-scale sparse second-order systems (version 2.0), 2020.
[19] T. Breiten. Interpolatory Methods for Model Reduction of Large-Scale Dynamical Systems. Dissertation, Department of Mathematics, Otto-von-Guericke University, Magdeburg, Germany, 2013.
[20] A. Castagnotto, M. Cruz Varona, L. Jeschek, and B. Lohmann. sss & sssMOR: Analysis and reduction of large-scale dynamic systems in MATLAB. at-Automatisierungstechnik, 65(2):134–150, 2017.
[21] Chebfun Developers. Chebfun — numerical computing with functions.
[22] T. A. Driscoll, N. Hale, and L. N. Trefethen. Chebfun Guide. Pafnuty Publications, 2014.
[23] J. Fehr, D. Grunert, P. Holzwarth, B. Fröhlich, N. Walker, and P. Eberhard. Morembs — a model order reduction package for elastic multibody systems and beyond. In Reduced-Order Modeling (ROM) for Simulation and Optimization: Powerful Algorithms as Key Enablers for Scientific Computing, pages 141–166. Springer International Publishing, Cham, 2018.
[24] F. Freitas, J. Rommes, and N. Martins. Gramian-based reduction method applied to large sparse power system descriptor models. IEEE Trans. Power Syst., 23(3):1258–1270, August 2008.
[25] M. Geuß, D. Butnaru, B. Peherstorfer, H. Bungartz, and B. Lohmann. Parametric model order reduction by sparse-grid-based interpolation on matrix manifolds for multidimensional parameter spaces. In Proceedings of the European Control Conference, Strasbourg, France, pages 2727–2732, 2014.
[26] M. Geuß, H. Panzer, A. Wirtz, and B. Lohmann. A general framework for parametric model order reduction by matrix interpolation. In Workshop on Model Reduction of Parametrized Systems II (MoRePaS II), 2012.
[27] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, fourth edition, 2013.
[28] S. Gugercin, A. C. Antoulas, and C. Beattie. H₂ model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl., 30(2):609–638, 2008.
[29] S. Gugercin and J.-R. Li. Smith-type methods for balanced truncation of large systems. In P. Benner, V. Mehrmann, and D. Sorensen, editors, Dimension Reduction of Large-Scale Systems, volume 45 of Lect. Notes Comput. Sci. Eng., pages 49–82. Springer-Verlag, Berlin/Heidelberg, Germany, 2005.
[30] M. Heinkenschloss, D. C. Sorensen, and K. Sun. Balanced truncation model reduction for a class of descriptor systems with application to the Oseen equations. SIAM J. Sci. Comput., 30(2):1038–1063, 2008.
[31] C. Himpe. Comparing (empirical-Gramian-based) model order reduction algorithms. e-print 2002.12226, arXiv, 2020. math.OC.
[32] K. Jbilou and A. Messaoudi. A computational method for symmetric Stein matrix equations. In P. Van Dooren, S. P. Bhattacharyya, R. H. Chan, V. Olshevsky, and A. Routray, editors, Numerical Linear Algebra in Signals, Systems and Control, volume 80 of Lect. Notes Electr. Eng. Springer-Verlag, New York, 2011.
[33] D. L. Kleinman. On an iterative technique for Riccati equation computations. IEEE Trans. Autom. Control, 13(1):114–115, 1968.
[34] P. Kürschner. Efficient Low-Rank Solution of Large-Scale Matrix Equations. Dissertation, Otto-von-Guericke-Universität, Magdeburg, Germany, April 2016. Shaker Verlag, ISBN 978-3-8440-4385-3.
[35] N. Lang. Numerical Methods for Large-Scale Linear Time-Varying Control Systems and related Differential Matrix Equations. Dissertation, Technische Universität Chemnitz, Germany, June 2017. Logos-Verlag, Berlin, ISBN 978-3-8325-4700-4.
[36] A. Lanzon, Y. Feng, and B. D. O. Anderson. An iterative algorithm to solve algebraic Riccati equations with an indefinite quadratic term. In , pages 3033–3039, July 2007.
[37] A. J. Laub, M. T. Heath, C. C. Paige, and R. C. Ward. Computation of system balancing transformations and other applications of simultaneous diagonalization algorithms. IEEE Trans. Autom. Control, 32(2):115–122, 1987.
[38] T. Li, P. C.-Y. Weng, E. K.-w. Chu, and W.-W. Lin. Large-scale Stein and Lyapunov equations, Smith method, and applications. Numer. Algorithms, 63(4):727–752, 2013.
[39] Y. Lin and V. Simoncini. A new subspace iteration method for the algebraic Riccati equation. Numer. Lin. Alg. Appl., 22(1):26–47, 2015.
[40] P. Mlinarić, S. Rave, and J. Saak. Parametric model order reduction using pyMOR. e-print 2003.05825, arXiv, 2020. cs.MS.
[41] J. Möckel, T. Reis, and T. Stykel. Linear-quadratic Gaussian balancing for model reduction of differential-algebraic systems. Internat. J. Control, 84(10):1627–1643, 2011.
[42] B. C. Moore. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control, AC-26(1):17–32, 1981.
[43] MORPACK (model order reduction package). https://tu-dresden.de/ing/maschinenwesen/ifkm/dmt/forschung/projekte/morpack.
[44] D. Mustafa and K. Glover. Controller reduction by H∞-balanced truncation. IEEE Trans. Autom. Control, 36(6):668–682, 1991.
[45] Oberwolfach Benchmark Collection. Steel profile. Hosted at MORwiki – Model Order Reduction Wiki, 2005.
[46] H. Panzer, J. Mohring, R. Eid, and B. Lohmann. Parametric model order reduction by matrix interpolation. at-Automatisierungstechnik, 58(8):475–484, 2010.
[47] T. Penzl. A cyclic low rank Smith method for large sparse Lyapunov equations. SIAM J. Sci. Comput., 21(4):1401–1418, 2000.
[48] T. Penzl. Eigenvalue decay bounds for solutions of Lyapunov equations: the symmetric case. Syst. Control Lett., 40:139–144, 2000.
[49] T. Penzl. Lyapack Users Guide. Technical Report SFB393/00-33, Sonderforschungsbereich 393 Numerische Simulation auf massiv parallelen Rechnern, TU Chemnitz, 09107 Chemnitz, Germany, 2000.
[50] F. Poloni and T. Reis. A deflation approach for large-scale Lur'e equations. SIAM J. Matrix Anal. Appl., 33(4):1339–1368, 2012.
[51] I. Pontes Duff and P. Kürschner. Numerical computation and new output bounds for time-limited balanced truncation of discrete-time systems. e-print 1902.01652, arXiv, 2019. math.NA.
[52] T. Reis and T. Stykel. Balanced truncation model reduction of second-order systems. Math. Comput. Model. Dyn. Syst., 14(5):391–406, 2008.
[53] J. Saak. Efficient Numerical Solution of Large Scale Algebraic Matrix Equations in PDE Control and Model Order Reduction. Dissertation, Technische Universität Chemnitz, Chemnitz, Germany, July 2009.
[54] J. Saak, M. Köhler, and P. Benner. M-M.E.S.S. – the matrix equations sparse solvers library.
[55] J. Saak and M. Voigt. Model reduction of constrained mechanical systems in M-M.E.S.S. IFAC-PapersOnLine (9th Vienna International Conference on Mathematical Modelling MATHMOD 2018, Vienna, Austria, 21–23 February 2018), 51(2):661–666, 2018.
[56] M. Schmidt. Systematic discretization of input/output maps and other contributions to the control of distributed parameter systems. Ph.D. Thesis, Technische Universität Berlin, Berlin, 2007.
[57] S. D. Shank, V. Simoncini, and D. B. Szyld. Efficient low-rank solution of generalized Lyapunov equations. Numer. Math., 134:327–342, 2016.
[58] V. Simoncini. A new iterative method for solving large-scale Lyapunov matrix equations. SIAM J. Sci. Comput., 29(3):1268–1288, 2007.
[59] V. Simoncini. Analysis of the rational Krylov subspace projection method for large-scale algebraic Riccati equations. SIAM J. Matrix Anal. Appl., 37(4):1655–1674, 2016.
[60] V. Simoncini and V. Druskin. Convergence analysis of projection methods for the numerical solution of large Lyapunov equations. SIAM J. Numer. Anal., 47(2):828–843, 2009.
[61] V. Simoncini, D. B. Szyld, and M. Monsalve. On two numerical methods for the solution of large-scale algebraic Riccati equations. IMA J. Numer. Anal., 34(3):904–920, 2014.
[62] T. Stillfjord. Low-rank second-order splitting of large-scale differential Riccati equations. IEEE Trans. Autom. Control, 60(10):2791–2796, 2015.
[63] T. Stillfjord. Adaptive high-order splitting schemes for large-scale differential Riccati equations. Numer. Algorithms, 78:1129–1151, July 2018.
[64] M. S. Tombs and I. Postlethwaite. Truncated balanced realization of a stable non-minimal state-space system. Internat. J. Control, 46(4):1319–1330, 1987.
[65] N. Truhar and K. Veselić. An efficient method for estimating the optimal dampers' viscosity for linear vibrating systems using Lyapunov equation. SIAM J. Matrix Anal. Appl., 31(1):18–39, 2009.
[66] M. M. Uddin. Computational Methods for Model Reduction of Large-Scale Sparse Structured Descriptor Systems. Dissertation, Department of Mathematics, Otto-von-Guericke University, Magdeburg, Germany, 2015.
[67] M. M. Uddin. Computational Methods for Approximation of Large-Scale Dynamical Systems. CRC Press, Boca Raton, FL, 2019.
[68] H. K. Weichelt. Numerical Aspects of Flow Stabilization by Riccati Feedback. Dissertation, Otto-von-Guericke-Universität, Magdeburg, Germany, January 2016.
[69] Y. Yue, L. Feng, and P. Benner. An adaptive pole-matching method for interpolating reduced-order models. e-print 1908.00820, arXiv, 2019. math.NA.