Parametric model order reduction using pyMOR
Petar Mlinarić∗   Stephan Rave†   Jens Saak∗

July 3, 2020
Abstract

pyMOR is a free software library for model order reduction that includes both reduced basis and system-theoretic methods. All methods are implemented in terms of abstract vector and operator interfaces, which allows direct integration of pyMOR's algorithms with a wide array of external PDE solvers. In this contribution, we give a brief overview of the available methods and experimentally compare them for the parametric instationary thermal-block benchmark defined in [12].

1 Introduction

pyMOR is a free software library for building model order reduction applications with the Python programming language [9, 11]. While pyMOR originally implemented only reduced basis methods, since version 0.5, released in January 2019, it additionally provides system-theoretic methods such as balanced truncation [10] and IRKA [2]. Here, we focus on version 2019.2, released in December 2019, which added support for parametric system-theoretic methods.

We consider model reduction of the thermal-block model defined in [12], which takes the form

    E \dot{x}(t; µ) = A(µ) x(t; µ) + B u(t),    x(0; µ) = 0,
    y(t; µ) = C x(t; µ),

with system matrices E, A(µ) ∈ R^{n×n}, input matrix B ∈ R^{n×1}, output matrix C ∈ R^{p×n}, state x(t; µ) ∈ R^n, input u(t) ∈ R, and output y(t; µ) ∈ R^p, where µ ∈ P ⊂ R^d is the parameter. The matrix-valued function A additionally has the parameter-affine form A(µ) = A + Σ_{i=1}^d µ_i A_i, where µ = (µ_1, µ_2, ..., µ_d). We also consider a non-parametric version, for which we write A instead of A(µ).

We begin, in Section 2, with a brief discussion of pyMOR's software design.
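As a concrete illustration of the model structure above, the following is a minimal NumPy sketch (with toy matrices, not the actual thermal-block discretization) that evaluates the parameter-affine matrix A(µ) and simulates the system with implicit Euler:

```python
import numpy as np

def assemble_A(A0, A_list, mu):
    """Evaluate the parameter-affine matrix A(mu) = A0 + sum_i mu_i * A_i."""
    A = A0.copy()
    for mu_i, A_i in zip(mu, A_list):
        A += mu_i * A_i
    return A

# Toy data (illustrative only, not the thermal-block matrices)
n = 4
A0 = -2.0 * np.eye(n)
A_list = [np.diag(-np.ones(n)), np.diag(-0.5 * np.ones(n))]
E = np.eye(n)
B = np.ones((n, 1))
C = np.ones((1, n))

def simulate(E, A, B, C, u, T=1.0, steps=100):
    """Implicit Euler for E x' = A x + B u(t), x(0) = 0; returns outputs y(t_k)."""
    dt = T / steps
    x = np.zeros(E.shape[0])
    ys = []
    M = E - dt * A  # solve (E - dt*A) x_new = E x + dt * B u  in each step
    for k in range(1, steps + 1):
        x = np.linalg.solve(M, E @ x + dt * (B @ np.atleast_1d(u(k * dt))))
        ys.append(C @ x)
    return np.array(ys)

A_mu = assemble_A(A0, A_list, mu=[1.0, 0.1])
y = simulate(E, A_mu, B, C, u=lambda t: 1.0)
```

Assembling A(µ) from precomputed components is what makes the affine form convenient for parametric MOR: the projected components can be computed once and recombined cheaply for each new µ.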
In Section 3, we give a brief overview of the methods implemented in pyMOR, before comparing them on the thermal-block benchmark in Section 4.

∗Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany, {mlinaric,saak}@mpi-magdeburg.mpg.de
†University of Münster, Orleans-Ring 10, 48149 Münster, Germany, [email protected]

2 Software design

The central goal of pyMOR's design is to allow easy integration with external PDE solver libraries. To this end, generic interfaces for vectors and operators have been defined that give pyMOR access to the solver's internal data structures representing vectors, matrices, or nonlinear operators, as well as operations on them, e.g., the computation of inner products or the solution of linear systems of equations. pyMOR's generic algorithms, for instance POD computation or Petrov-Galerkin projection, are expressed in terms of these interfaces. Compared to a file-based exchange of matrices or solution snapshots, this approach enables the usage of problem-adapted solvers implemented in the PDE library or the reduction of very large MPI-distributed problems [9].
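To illustrate the idea with a sketch (the interface names here are hypothetical and much simpler than pyMOR's actual VectorArray and Operator interfaces), here is a generic algorithm, POD via the method of snapshots, written purely against an abstract vector interface:

```python
import numpy as np
from abc import ABC, abstractmethod

class VectorArray(ABC):
    """Illustrative stand-in for a vector interface (hypothetical)."""
    @abstractmethod
    def inner(self, other):
        """Matrix of inner products between all vectors of self and other."""
    @abstractmethod
    def lincomb(self, coeffs):
        """New array whose vectors are linear combinations of self's vectors."""

class NumpyVectorArray(VectorArray):
    """NumPy backend; a PDE solver backend would wrap its own vectors instead."""
    def __init__(self, data):  # data: (dim, num_vectors)
        self.data = np.asarray(data, dtype=float)
    def inner(self, other):
        return self.data.T @ other.data
    def lincomb(self, coeffs):
        return NumpyVectorArray(self.data @ np.asarray(coeffs))

def pod(snapshots, modes):
    """Generic POD via the method of snapshots: only inner products and
    linear combinations of the snapshots are used, never raw arrays."""
    G = snapshots.inner(snapshots)       # Gram matrix
    evals, evecs = np.linalg.eigh(G)     # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]
    coeffs = evecs[:, :modes] / np.sqrt(evals[:modes])
    return snapshots.lincomb(coeffs)     # orthonormal POD modes
```

Because `pod` only calls `inner` and `lincomb`, a backend wrapping a PDE solver's (possibly distributed) vectors could implement the same interface without the snapshot data ever leaving the solver.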
3 Model order reduction methods

The majority of MOR methods implemented in pyMOR are projection-based, i.e., they consist of finding basis matrices V and W and defining the reduced-order model as

    \hat{E} \dot{\hat{x}}(t; µ) = \hat{A}(µ) \hat{x}(t; µ) + \hat{B} u(t),    \hat{x}(0; µ) = 0,
    \hat{y}(t; µ) = \hat{C} \hat{x}(t; µ),

where \hat{E} = W^T E V, \hat{A}(µ) = W^T A(µ) V = \hat{A} + Σ_{i=1}^d µ_i \hat{A}_i with \hat{A}_i = W^T A_i V, \hat{B} = W^T B, and \hat{C} = C V. If im(V) = im(W), we call it a Galerkin projection and otherwise a Petrov-Galerkin projection. In the following, we give short descriptions of some projection-based methods with remarks on their implementation in pyMOR.

3.1 POD-Greedy

We consider a weak POD-Greedy algorithm [8] to build a basis matrix V for which the maximum state-space approximation error

    max_{µ ∈ S_train} Σ_{i=1}^N ‖ x(t_i; µ) − V \hat{x}(t_i; µ) ‖_{H^1(Ω)}

for constant input u ≡ 1 over some training set S_train of parameters is minimized in the Sobolev H^1-norm. To this end, in each iteration of the greedy algorithm, the current reduced-order model is solved for all µ ∈ S_train and the parameter µ_max is selected for which an (online-efficient) estimate of the MOR error is maximized [7]. For this parameter, the matrix of full-order model (FOM) solution snapshots

    X = [ x(t_1; µ_max)  x(t_2; µ_max)  ···  x(t_N; µ_max) ]

is computed, and the first left-singular vectors of its H^1-orthogonal projection onto the H^1-orthogonal complement of im(V) are added to V. Note that, in the non-parametric case, POD-Greedy reduces to POD, i.e., using the first few left-singular vectors of the snapshot matrix X as a Galerkin projection basis.

3.2 Balanced truncation

For non-parametric models, balanced truncation (BT) consists of solving the two Lyapunov equations
    A P E^T + E P A^T + B B^T = 0,
    A^T Q E + E^T Q A + C^T C = 0.        (1)

Based on the solutions P and Q, it computes V and W of the Petrov-Galerkin projection. pyMOR provides bindings to dense Lyapunov equation solvers in SciPy [16], Slycot [14] (Python wrappers for SLICOT [13]), and Py-M.E.S.S. [6]. For the reduction of large-scale models, there are bindings for low-rank solvers in Py-M.E.S.S. Since Py-M.E.S.S. does not allow generic vectors, there is also an implementation of the alternating direction implicit (ADI) iteration in pyMOR [3]. It is known that BT preserves asymptotic stability and has a priori bounds for the Hardy H_∞ and H_2 errors depending on the truncated Hankel singular values (the square roots of the eigenvalues of E^T Q E P).

3.2.1 Parametric case

For parametric models, there are several possible extensions of BT [4, 17, 15]. We focus on the simplest global basis approach, which concatenates several local basis matrices. Let µ^{(1)}, µ^{(2)}, ..., µ^{(ℓ)} ∈ P be parameter samples and V^{(1)}, V^{(2)}, ..., V^{(ℓ)} and W^{(1)}, W^{(2)}, ..., W^{(ℓ)} corresponding local basis matrices. To guarantee asymptotic stability, we use a Galerkin projection with

    [ V^{(1)}  V^{(2)}  ···  V^{(ℓ)}  W^{(1)}  W^{(2)}  ···  W^{(ℓ)} ]

after orthogonalization and rank truncation.

3.3 LQG balanced truncation

LQG balanced truncation (LQGBT) is a variant of BT related to the linear quadratic Gaussian (LQG) optimal control problem. Unlike BT, LQGBT consists of solving the Riccati equations

    A P E^T + E P A^T − E P C^T C P E^T + B B^T = 0,
    A^T Q E + E^T Q A − E^T Q B B^T Q E + C^T C = 0.

Similar to BT, it guarantees preservation of asymptotic stability and has an a priori error bound. As for Lyapunov equations, pyMOR provides bindings for external Riccati equation solvers and an implementation of the low-rank RADI method [5].

Additionally, there is bounded-real BT in pyMOR, but it currently relies on a dense solver which does not respect the vector and operator interfaces, so it is not possible to use it with a PDE solver.
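For a small dense system with E = I, the square-root form of BT can be sketched with SciPy's dense Lyapunov solver (an illustrative sketch, not pyMOR's BTReductor implementation, which additionally handles a mass matrix E and large-scale problems via low-rank solvers):

```python
import numpy as np
import scipy.linalg as spla

def balanced_truncation(A, B, C, r):
    """Square-root BT sketch for x' = A x + B u, y = C x (dense, E = I)."""
    # Controllability/observability Gramians from the Lyapunov equations (1)
    P = spla.solve_continuous_lyapunov(A, -B @ B.T)
    Q = spla.solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root factors P = Zp Zp^T, Q = Zq Zq^T and SVD of the cross factor
    Zp = spla.cholesky(P, lower=True)
    Zq = spla.cholesky(Q, lower=True)
    U, hsv, Vt = np.linalg.svd(Zq.T @ Zp)  # hsv: Hankel singular values
    # Petrov-Galerkin bases for the r dominant Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    V = Zp @ Vt[:r].T @ S
    W = Zq @ U[:, :r] @ S  # satisfies W^T V = I
    return W.T @ A @ V, W.T @ B, C @ V, hsv
```

The returned Hankel singular values are exactly the quantities entering the a priori H_∞ and H_2 error bounds mentioned above.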
3.4 IRKA

The iterative rational Krylov algorithm (IRKA) is a locally optimal MOR method in the Hardy H_2 norm. In each step, it computes the (tangential) rational Krylov subspaces

    im(V) = span{ (σ_1 E − A)^{−1} B b_1, (σ_2 E − A)^{−1} B b_2, ..., (σ_r E − A)^{−1} B b_r },
    im(W) = span{ (σ_1 E − A)^{−T} C^T c_1, (σ_2 E − A)^{−T} C^T c_2, ..., (σ_r E − A)^{−T} C^T c_r }.    (2)

The interpolation points σ_1, σ_2, ..., σ_r for the next step are chosen as the reflected poles −λ_1, −λ_2, ..., −λ_r of the projected matrix pencil λ W^T E V − W^T A V (the tangential directions b_1, ..., b_r and c_1, ..., c_r are computed from the corresponding eigenvectors). Even if the original model has real poles, the projected poles can become complex. Since complex number support is limited in PDE solvers, solving complex shifted linear systems (σE − A)x = b needs to be done using an iterative method. Implementing efficient preconditioners for such systems is a future research topic for pyMOR. For this reason, we demonstrate IRKA only on the non-parametric example in Section 4.1. In the parametric case, we only use one-sided IRKA (OS-IRKA), where W in (2) is replaced by V, which guarantees real interpolation points for the heat equation example we consider. To generate the global basis matrix, we concatenate the local basis matrices V^{(i)} and perform a rank truncation.

All system-theoretic methods in pyMOR can be called similarly. For instance, BT can be run with

    bt = BTReductor(fom, mu=mu)
    rom = bt.reduce(10)

Here, fom is the (parametric) full-order model (an instance of LTIModel) and mu is the parameter sample. The reduce method of bt accepts the reduced order as a parameter (among others) and returns the non-parametric reduced-order model rom (again an instance of LTIModel). The basis matrices are then available as VectorArrays in bt.V and bt.W.

4 Numerical results

Here, we present results of applying the MOR methods discussed in Section 3 to parametric models, in particular the thermal-block example. To demonstrate pyMOR's integration with external PDE solvers, we used
FEniCS [1]. We use the H_2 norm to quantify the results, which is defined for non-parametric, asymptotically stable systems

    E \dot{x}(t) = A x(t) + B u(t),    x(0) = 0,
    y(t) = C x(t),        (3)

as the L_2 norm of the impulse response h: [0, ∞) → R^{p×1} defined by h(t) = C exp(t E^{−1} A) E^{−1} B, assuming E is invertible [2]. This can be computed using

    ‖h‖²_{L_2([0,∞); R^{p×1})} = tr(C P C^T) = tr(B^T Q B),        (4)

where P and Q are as in (1). Note that, for a reduced-order model

    \hat{E} \dot{\hat{x}}(t) = \hat{A} \hat{x}(t) + \hat{B} u(t),    \hat{x}(0) = 0,
    \hat{y}(t) = \hat{C} \hat{x}(t),

the error system

    \begin{bmatrix} E & 0 \\ 0 & \hat{E} \end{bmatrix} \begin{bmatrix} \dot{x}(t) \\ \dot{\hat{x}}(t) \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & \hat{A} \end{bmatrix} \begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix} + \begin{bmatrix} B \\ \hat{B} \end{bmatrix} u(t),
    y(t) − \hat{y}(t) = \begin{bmatrix} C & −\hat{C} \end{bmatrix} \begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix},

is of the same form as the FOM (3), which allows us to compute H_2 errors, i.e., the H_2 norm of the error system, using (4). We chose the H_2 norm because it is independent of the input u. Additionally, it can be computed efficiently using the low-rank Lyapunov equation solver available in pyMOR.

We begin with the non-parametric version in Section 4.1, comparing system-theoretic methods with POD. Then, in Sections 4.2 and 4.3, we compare methods for the parametric versions. The source code of the implementations used to compute the presented results can be obtained from https://doi.org/10.5281/zenodo.3928528 and is authored by Petar Mlinarić and Stephan Rave.

4.1 Non-parametric model

Figure 1 compares BT, LQGBT, IRKA, OS-IRKA, and POD in terms of the H_2 error. The POD model was trained using the step response (u(t) = 1 for t > 0). We see that BT, LQGBT, and IRKA give similar results, while OS-IRKA and POD give larger errors. Interestingly, POD is mostly better than OS-IRKA in this example.
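For small dense systems with E = I, the H_2 computation via (4) and the error-system construction can be sketched as follows (illustrative; pyMOR instead uses low-rank Lyapunov solvers for large-scale models):

```python
import numpy as np
import scipy.linalg as spla

def h2_norm(A, B, C):
    """H2 norm via (4): ||h||^2 = tr(C P C^T) = tr(B^T Q B), for E = I."""
    P = spla.solve_continuous_lyapunov(A, -B @ B.T)
    Q = spla.solve_continuous_lyapunov(A.T, -C.T @ C)
    # clip tiny negative traces caused by round-off before taking the root
    h2_P = np.sqrt(max(float(np.trace(C @ P @ C.T)), 0.0))
    h2_Q = np.sqrt(max(float(np.trace(B.T @ Q @ B)), 0.0))
    return h2_P, h2_Q

def h2_error(A, B, C, Ar, Br, Cr):
    """H2 norm of the block-diagonal error system described above."""
    Ae = spla.block_diag(A, Ar)
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    return h2_norm(Ae, Be, Ce)[0]

# Scalar sanity check: x' = -x + u, y = x has h(t) = e^{-t}, ||h||_{L2} = 1/sqrt(2)
a = np.array([[-1.0]])
b = np.array([[1.0]])
c = np.array([[1.0]])
```

The two traces in (4) must agree; computing both is a cheap consistency check for a dense implementation.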
Figure 1: Comparison of the methods from Section 3 for the non-parametric model (Section 4.1)

4.2 One-parameter model

Figure 2: The H_2 norms of the one-parameter model for different parameter values

In this setting, we chose logarithmically equispaced parameter values as the training set and added additional in-between points for testing. We used BT and OS-IRKA to get reduced models of order 10 for each training parameter value and concatenated their local bases as explained in Section 3.2.1. After truncation, OS-IRKA's global basis was of order 67, while BT's was larger; to have a fairer comparison, we further truncated BT's global basis to the same order as OS-IRKA's.

Figure 2 shows the H_2 norm of the full-order model for different parameter values, from which we see that it only changes by about an order of magnitude over the parameter range. Therefore, we restrict ourselves to showing only the absolute H_2 errors in the following plots. In particular, Figure 3 shows the absolute H_2 error for BT and OS-IRKA. Possibly related to BT being a Petrov-Galerkin projection method, its global basis produces worse results than the local bases. On the other hand, OS-IRKA improves when using the global basis.

Finally, Figure 4 compares BT and OS-IRKA with RB. For RB, we used the same training set to generate a model of order 67. In this example, OS-IRKA performed best near the boundaries of the parameter set and comparably to the other methods in the middle. On the other hand, BT gave the worst results near the boundaries.
RB produced an almost flat absolute H_2 error curve, which is not surprising since it tries to minimize the worst-case error.

Figure 3: Comparison of using local and global bases (see Section 3.2.1) for balanced truncation (BT) and one-sided iterative rational Krylov algorithm (OS-IRKA) for the one-parameter model
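The local-to-global basis construction compared above can be sketched as follows (an SVD-based sketch with illustrative tolerances; in pyMOR one would use, e.g., its gram_schmidt or pod algorithms for the orthogonalization and rank truncation):

```python
import numpy as np

def global_basis(local_bases, tol=1e-8):
    """Concatenate local basis matrices, orthogonalize, and truncate the rank,
    as in the global basis approach of Section 3.2.1."""
    stacked = np.hstack(local_bases)
    U, s, _ = np.linalg.svd(stacked, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))  # drop numerically redundant directions
    return U[:, :r]

# Two overlapping local bases in R^6 whose union spans a 3-dimensional space
e = np.eye(6)
V1 = e[:, :2]                            # span{e1, e2}
V2 = np.hstack([e[:, 1:2], e[:, 2:3]])   # span{e2, e3}
V = global_basis([V1, V2])
```

Because the shared direction e2 appears in both local bases, the concatenation has four columns but only rank three; the truncation removes the redundancy while keeping the global basis orthonormal.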
Figure 4: Comparison of methods for the one-parameter model for fixed reduced order (67)

4.3 Four-parameter model

Here, we randomly sampled points e_i from a uniform distribution to generate the training set µ^{(i)} = 10^{e_i}, and sampled additional such points for testing. As before, we used BT and OS-IRKA to find reduced models of order 10 at each training parameter point. Here, after truncation, the global bases of BT and OS-IRKA used in the comparison were of order 128. Figure 5 compares them, with the training parameters shown first, followed by the test parameters. As in the previous example, OS-IRKA gives better results with the global basis.

Figure 6 compares the two methods with RB. We see that they give comparable results, although they are rather different methods. On closer inspection, we note that, in this example, BT most often gives the smallest error, while RB shows the smallest maximum error and the least variation in error.

5 Conclusion

We briefly presented pyMOR, a freely available Python package for MOR, built on generic interfaces for easy integration with external PDE solvers. We then described some of the MOR methods implemented in pyMOR, which include both system-theoretic and reduced basis methods. Lastly, we compared the methods on a thermal-block benchmark discretized with
FEniCS.

Figure 5: Comparison of using local and global bases (see Section 3.2.1) for balanced truncation (BT) and one-sided iterative rational Krylov algorithm (OS-IRKA) for the four-parameter model. The first parameters are used to construct local bases and global bases are tested on further parameters (cf. Figure 3)

Figure 6: Comparison of methods for the four-parameter model for fixed reduced order (128)

References

[1] M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells. The FEniCS project version 1.5. Archive of Numerical Software, 3(100):9–23, 2015.
[2] A. C. Antoulas, C. A. Beattie, and S. Gugercin. Interpolatory model reduction of large-scale dynamical systems. In J. Mohammadpour and K. M. Grigoriadis, editors, Efficient Modeling and Control of Large-Scale Systems, pages 3–58. Springer US, 2010.
[3] L. Balicki. Low-rank alternating direction implicit iteration in pyMOR. GAMM Archive for Students, 2(1):1–13, February 2020.
[4] U. Baur and P. Benner. Modellreduktion für parametrisierte Systeme durch balanciertes Abschneiden und Interpolation (Model reduction for parametric systems using balanced truncation and interpolation). at-Automatisierungstechnik, 57(8):411–420, 2009.
[5] P. Benner, Z. Bujanović, P. Kürschner, and J. Saak. RADI: A low-rank ADI-type algorithm for large scale algebraic Riccati equations. Numer. Math., 138(2):301–330, February 2018.
[6] P. Benner, M. Köhler, and J. Saak. M.E.S.S. – the matrix equations sparse solvers library.
[7] M. A. Grepl and A. T. Patera. A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. ESAIM: M2AN, 39(1):157–181, 2005.
[8] B. Haasdonk. Convergence rates of the POD-Greedy method. ESAIM: Math. Model. Numer. Anal., 47(3):859–873, 2013.
[9] R. Milk, S. Rave, and F. Schindler. pyMOR – generic algorithms and interfaces for model order reduction. SIAM J. Sci. Comput., 38(5):S194–S216, 2016.
[10] B. C. Moore. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control, AC-26(1):17–32, 1981.
[11] pyMOR developers and contributors. pyMOR – model order reduction with Python. https://pymor.org.
[12] S. Rave and J. Saak. A non-stationary thermal-block benchmark model for parametric model order reduction. e-print 2003.00846, arXiv, math.NA, 2020.
[13] SLICOT.
[14] Slycot developers and contributors. Slycot. https://github.com/python-control/Slycot.
[15] N. T. Son and T. Stykel. Solving parameter-dependent Lyapunov equations using the reduced basis method with application to parametric model order reduction. SIAM J. Matrix Anal. Appl., 38(2):478–504, 2017.
[16] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, and P. van Mulbregt. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods, February 2020.
[17] P. Wittmuess, C. Tarin, A. Keck, E. Arnold, and O. Sawodny. Parametric model order reduction via balanced truncation with Taylor series representation.