Optimization of a partial differential equation on a complex network
MARTIN STOLL∗ AND MAX WINKLER†

Abstract.
Differential equations on metric graphs can describe many phenomena in the physical world but also the spread of information on social media. To efficiently compute the solution is a hard task in numerical analysis. Solving a design problem, where the optimal setup for a desired state is given, is even more challenging. In this work, we focus on the task of solving an optimization problem subject to a differential equation on a metric graph with the control defined on a small set of Dirichlet nodes. We discuss the discretization by finite elements and provide rigorous error bounds as well as an efficient preconditioning strategy to deal with the large-scale case. We show in various examples that the method performs very robustly.
Key words.
Complex Networks, Optimal Control, Preconditioning, Saddle Point Systems, Error Estimation
AMS subject classifications.
1. Introduction.
Graphs and networks are ubiquitous in the modeling of physical phenomena, the representation of data, and other applications [4]. While the graph Laplacian is a crucial tool in the analysis of such networks [6, 29, 33, 36], there are many examples where it is not sufficient to only reflect the binary relationship of being connected or not. In order to address further challenges one can extend the concept of a graph to a so-called metric graph. This is a graph where an edge length is assigned to each edge and we also define a differential operator, often called the
Hamiltonian, on the graph domain. Metric graphs with PDEs defined on them have become an important modeling tool, with the development of numerical schemes currently lagging behind the groundbreaking analytical and modeling work done in (applied) mathematics and other disciplines. A recent paper by Benzi and Arioli [2] seems to be one of the first general works devoted to numerical schemes on metric graphs, with a discussion ranging from discretization to efficient solution. There are some recent works dealing with PDEs on metric graphs, typically tailored to a particular application; see for example the following results related to gas networks [10, 11, 13, 20, 38, 39].

In this paper, we build upon the work in [2] and consider an optimal control problem on a metric graph with the differential equation as the constraint. Such problems have received an enormous amount of attention over the last years when defined on bounded domains in R^d (cf. [14, 15, 35] and the references mentioned therein). For optimal control problems on networks the amount of material is very limited and we refer to [16] for a challenging gas network application. In the present article we do not want to focus on a particular application but rather discuss the spatial discretization, its errors, the resulting linear systems and the choice of a suitable preconditioned iterative solver.

In such optimal control problems the control can appear as a distributed or boundary control. We here focus on the more challenging case when the control is placed on network nodes with a Dirichlet condition. For related results on discretization

∗ Technische Universität Chemnitz, Department of Mathematics, Chair of Scientific Computing, 09107 Chemnitz, Germany, [email protected]
† Technische Universität Chemnitz, Department of Mathematics, Chair of Numerical Methods for PDEs, 09107 Chemnitz, Germany, [email protected]
strategies for elliptic optimal Dirichlet control problems in bounded domains we refer to [1, 5, 18, 21, 26, 37]. It is also the purpose of the present paper to extend the ideas developed in these articles to the situation that the state equation is formulated on a metric graph. To this end, we study the necessary optimality conditions based on an adjoint approach and corresponding finite element discretizations. As a first main result, discretization error estimates for the numerical approximation of the Dirichlet control as well as for the corresponding states are derived. Additionally, we introduce a preconditioning strategy that is built upon a Schur complement approach, a technique that has been applied very successfully for optimal control problems on bounded domains both in the elliptic and parabolic cases [12, 17, 24, 23, 27, 34].

The paper is structured as follows. First, we formulate the model state equation and summarize some preliminary results in Section 2. Moreover, we introduce a related optimal control problem and derive the optimality system. Our discretization strategy is studied in Section 3. There, we derive rigorous discretization error estimates for the finite element approximation of the state equation and the approximation of a discrete Kirchhoff operator which appears in the discrete optimality system of the studied Dirichlet control problem. These results are then used for error estimates of the approximate solutions of the optimal control problem. An efficient solver for the optimality system and corresponding preconditioners are investigated in Section 4. Finally, in Section 5, we test the theoretically predicted behavior in several numerical experiments. To be more precise, we confirm that the discretization error estimates are sharp and that the preconditioned method is robust.
2. Optimal Dirichlet control problems on metric graphs.

2.1. Differential equations on metric graphs.
We consider an undirected graph G = (V, E) consisting of a vertex set V = {v_i}_{i=1}^n and the edge set E [6]. Each edge e ∈ E connects a pair of nodes (v_a^e, v_b^e) with v_a^e, v_b^e ∈ V. Furthermore, we define a weight function w: V × V → R satisfying w(v, v') = w(v', v) for all v, v' ∈ V for an undirected graph. We further assume that this function is positive if there is an edge e ∈ E with endpoints v and v' and zero otherwise. The degree of the vertex v ∈ V is defined as d(v) = Σ_{v'∈V} w(v, v'). The diagonal degree matrix D ∈ R^{n×n} is defined as D_{v,v} = d(v). When using a node v ∈ V as an index for a matrix or vector, we actually mean the corresponding index within V. This notation is more elegant for our purposes. We can then obtain the graph Laplacian as L = D − W with the entries of the weight matrix W_{v,v'} given by w(v, v') for v, v' ∈ V. The Laplacian in this form is rarely used as typically its normalized form [36] is employed for segmentation purposes. The normalized Laplacian is defined by L_s = D^{−1/2} L D^{−1/2} = I − D^{−1/2} W D^{−1/2}, which is a symmetric matrix that is often used in machine learning and imaging applications [36]. We will rely on the incidence matrix E ∈ R^{n×m}, where m = |E| is the number of edges and n = |V| the number of vertices. The graph Laplacian can then be represented by L = E E^⊤ ∈ R^{n×n}.

The graph Laplacian is in certain applications not sufficient to describe the intricate relationships, as it only reflects information about the nodes being connected, and the edge information is stored in a weight. A more sophisticated representation follows from the now introduced concept of metric graphs. We want to extend the concept of the combinatorial graph and identify each edge
e ∈ E with an interval [0, L_e], L_e := |v_a^e − v_b^e|, on the real line. Such a graph is called a metric graph. A metric graph can be equipped with a differential operator, such as, e.g., the one-dimensional Schrödinger operator

(2.1)  (H y)(x) := (−d²/dx² + c(x)) y(x)

with a potential function c, and on each edge e ∈ E one can formulate a differential equation of the form

  (H y|_e)(x_e) = f|_e(x_e)  for all x_e ∈ (0, L_e),

where the functions f|_e: (0, L_e) → R, e ∈ E, are given source terms. Here, x_e are local coordinates associated to the edge e. With a slight abuse of notation we will sometimes evaluate y|_e in one of the vertices v ∈ V of e. Depending on the orientation of the local coordinate x_e we then mean either y|_e(0) or y|_e(L_e).

Additionally, we can impose boundary or vertex conditions to couple these equations. There are of course several different vertex conditions and we will distinguish among two different types. First, in vertices v ∈ V_K ⊂ V, we have homogeneous Neumann–Kirchhoff conditions, i.e., there holds

(2.2)  (K y)(v) := Σ_{e ∈ E_v} d/dx y|_e(v) = 0

with E_v the edge set incident to the vertex v. In vertices v ∈ V_D := V \ V_K the solution y fulfills Dirichlet conditions

(2.3)  y|_e(v) = u_v  for all e ∈ E_v,

where u ∈ R^{n_D}, n_D := |V_D|, is a vector containing the Dirichlet data. Note that if all nodes are Dirichlet nodes then the problem trivially decouples into a set of one-dimensional problems. Other boundary conditions are not discussed here but will be subject of future research.
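The combinatorial quantities introduced above (weight matrix, degree matrix, Laplacian, normalized Laplacian, incidence matrix) can be assembled in a few lines. A minimal sketch for a toy path graph of our own choosing, not an example from the paper:

```python
import numpy as np

# Toy example: unweighted path graph on 4 vertices, edges (1,2), (2,3), (3,4).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

D = np.diag(W.sum(axis=1))                # degree matrix, D_vv = d(v)
L = D - W                                 # combinatorial graph Laplacian

# Normalized Laplacian L_s = I - D^{-1/2} W D^{-1/2}
d_isqrt = 1.0 / np.sqrt(np.diag(D))
L_s = np.eye(4) - (d_isqrt[:, None] * W) * d_isqrt[None, :]

# Incidence matrix E: one column per edge, +1 at one endpoint and -1 at
# the other; for unit weights the Laplacian factors as L = E E^T.
E = np.array([[ 1.,  0.,  0.],
              [-1.,  1.,  0.],
              [ 0., -1.,  1.],
              [ 0.,  0., -1.]])

assert np.allclose(L, E @ E.T)            # L = E E^T
assert np.allclose(L.sum(axis=1), 0.0)    # Laplacian rows sum to zero
assert np.allclose(L_s, L_s.T)            # L_s is symmetric
```

The same factorization L = E E^⊤ reappears in Section 4 for the extended graph obtained by subdividing each metric edge.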
For more information on the vertex conditions on metric graphs we refer to [4, 9, 32].

The Kirchhoff–Neumann conditions are the natural boundary conditions for the differential operator (2.1), as for each v ∈ V and test functions φ ∈ ⊗_{e∈E} C^∞(e) that are continuous in v and vanish in v' ∈ V \ {v}, the formula

(2.4)  (K y)(v) φ(v) = Σ_{e ∈ E_v} y'|_e(v) φ|_e(v) = Σ_{e ∈ E_v} ∫_e [y'(x) φ'(x) + y''(x) φ(x)] dx

is fulfilled. Here and in the following, we use the notation d/dx y|_e(x) = y'|_e(x) and omit the subscript e unless the context otherwise requires. The state equation considered throughout this article finally reads

(2.5)  −y'' + c y = f on all e ∈ E,  (K y)(v) = 0 for v ∈ V_K,  y(v) = u_v for v ∈ V_D,

where f ∈ ⊗_{e∈E} L²(e) and u ∈ R^{n_D} are given data.

For an appropriate treatment of this boundary value problem we require some further notation. We introduce the function spaces

  L²(Γ) = ⊗_{e∈E} L²(e)  and  H¹(Γ) = (⊗_{e∈E} H¹(e)) ∩ C(Γ)

equipped with the norms

  ‖y‖²_{L²(Γ)} := Σ_{e∈E} ‖y‖²_{L²(e)},  ‖y‖²_{H¹(Γ)} := ‖y‖²_{L²(Γ)} + Σ_{e∈E} ‖y'‖²_{L²(e)}.

The L²(Γ)-inner product is denoted by (f, g)_{L²(Γ)} := Σ_{e∈E} ∫_e f(x) g(x) dx. To establish the essential boundary conditions we moreover define

  H¹_D(Γ) := {y ∈ H¹(Γ): y(v) = 0 ∀ v ∈ V_D}.

With the integration-by-parts formula (2.4) one can derive the weak formulation of (2.1)–(2.3):
Find y ∈ H¹(Γ) with y(v) = u_v, v ∈ V_D, such that

(2.6)  a(y, w) = (f, w)_{L²(Γ)} ∀ w ∈ H¹_D(Γ),

where the bilinear form a: H¹(Γ) × H¹(Γ) → R is defined by

  a(y, w) := Σ_{e∈E} ∫_e (y'(x) w'(x) + c(x) y(x) w(x)) dx.

Throughout this article, the potential function c belongs to L^∞(Γ) := ⊗_{e∈E} L^∞(e) and fulfills c ≥ 0, and f belongs to L²(Γ). With the Lax–Milgram lemma we then directly conclude that (2.6) possesses a unique solution.

As an extension of the above we now consider an optimal control problem of the form

(2.7)  Minimize (1/2) ‖y − ȳ‖²_{L²(Γ)} + (β/2) |u|²  over y ∈ H¹(Γ), u ∈ R^{n_D}

subject to the constraint

(2.8)  −y'' + c y = f on all e ∈ E,  (K y)(v) = 0, v ∈ V_K,  y(v) = u_v, v ∈ V_D.

Here, |·| is the Euclidean norm in R^{n_D}. The state equation is understood in the weak sense (2.6). We can decompose the state into a part depending linearly on u and a constant contribution, i.e., y = y_u + y_f, with y_u ∈ H¹(Γ), y_u(v) = u_v for v ∈ V_D, and y_f ∈ H¹_D(Γ) satisfying

  a(y_u, w) = 0  and  a(y_f, w) = (f, w)_{L²(Γ)}  for all w ∈ H¹_D(Γ).

This decomposition allows us to introduce a linear control-to-state operator S: R^{n_D} → L²(Γ) defined by u ↦ S(u) := y_u, and one can eliminate the state to obtain the reduced problem

(2.9)  Minimize j(u) := (1/2) ‖S u + y_f − ȳ‖²_{L²(Γ)} + (β/2) |u|²  s.t. u ∈ R^{n_D}.

As S is linear, the functional j is quadratic and hence Fréchet differentiable and convex. As a consequence, by differentiation using the chain rule, we can derive the optimality condition, which is the variational problem

(2.10)  (S u + y_f − ȳ, S w)_{L²(Γ)} + β u^⊤ w = 0 ∀ w ∈ R^{n_D}.

Due to the convexity of j this condition is also sufficient for u being the unique global solution of (2.9). In order to derive a more handy form of the optimality condition we investigate the adjoint operator of S.

Lemma 2.1.
The adjoint operator S*: L²(Γ) → R^{n_D} possesses the representation S* = −K ∘ P, where K is defined as in (2.2) and P: L²(Γ) → H¹_D(Γ) is defined by y ↦ P(y) := p with p ∈ H¹_D(Γ) fulfilling

  a(v, p) = (y, v)_{L²(Γ)} ∀ v ∈ H¹_D(Γ).

Proof.
Standard regularity results for elliptic problems imply that P even maps onto H²(Γ) ∩ H¹_D(Γ). Consequently, p solves −p'' + c p = y almost everywhere in Γ. Together with the integration-by-parts formula (2.4) we obtain for all z ∈ R^{n_D}

  (y, S z)_{L²(Γ)} = (−p'' + c p, S z)_{L²(Γ)} = a(S z, p) − (K p)^⊤ z.

Due to p ∈ H¹_D(Γ) there holds a(S z, p) = 0. This implies S* y = −K p and proves the assertion.

As a consequence, we can rewrite the optimality condition (2.10) in the form

  (S u + y_f − ȳ, S w)_{L²(Γ)} = (S*(S u + y_f − ȳ), w)_{R^{n_D}} = −(K p, w)_{R^{n_D}},

where the so-called adjoint state p ∈ H¹_D(Γ) is the weak solution of

(2.11)  (−d²/dx² + c) p = y − ȳ on all e ∈ E,  (K p)(v) = 0, v ∈ V_K,  p(v) = 0, v ∈ V_D,

with y = S u + y_f. This allows for a reformulation of the optimality condition (2.10) by means of

(2.12)  β u − K p = 0.

To summarize the previous investigations we state the following theorem:
Theorem 2.2. The pair (u, y) ∈ R^{n_D} × H¹(Γ) is the unique global solution of (2.7)–(2.8) if and only if some adjoint state p ∈ H¹_D(Γ) exists, such that the system

(2.13)  y(v) = u_v, v ∈ V_D,  a(y, w) = (f, w)_{L²(Γ)} ∀ w ∈ H¹_D(Γ),
        a(w, p) = (y − ȳ, w)_{L²(Γ)} ∀ w ∈ H¹_D(Γ),
        β u − K p = 0

is fulfilled.
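For orientation, the abstract system (2.13) can be written out in the simplest conceivable configuration: a single edge identified with (0, 1), potential c ≡ 0, and both endpoints Dirichlet nodes. This hypothetical special case decouples, as remarked above, and we additionally assume the convention that vertex derivatives are taken in the direction pointing into the edge:

```latex
% Optimality system (2.13) on a single edge e \cong (0,1), c \equiv 0,
% V_D = \{0,1\}, V_K = \emptyset; vertex derivatives point into the edge.
\begin{align*}
  -y''(x) &= f(x), \quad x \in (0,1), & y(0) &= u_0, & y(1) &= u_1, \\
  -p''(x) &= (y - \bar y)(x), \quad x \in (0,1), & p(0) &= 0, & p(1) &= 0, \\
  \beta u_0 &= (\mathcal{K} p)(0) = p'(0), &
  \beta u_1 &= (\mathcal{K} p)(1) = -p'(1). &&
\end{align*}
```

The last line is exactly (2.12): the control at each Dirichlet node is proportional to the Kirchhoff flux of the adjoint state at that node.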
3. Discretization and error analysis.
In [2] the authors study a finite element discretization of the variational problem (2.6) and their approach will also form the basis of our investigations. For the convenience of the reader we briefly repeat this discretization approach. On each edge e ∈ E we introduce an equidistant grid with vertices {v_a^e = v_0^e, v_1^e, ..., v_{n_e}^e = v_b^e}. This induces a decomposition of the one-dimensional edge e into intervals I_k^e := (v_k^e, v_{k+1}^e), k = 0, ..., n_e − 1. The global finite element space is defined by

  V_h := {y_h ∈ C(Γ): y_h|_{I_k^e} ∈ P_1, k = 0, ..., n_e − 1, e ∈ E}.

Here, the discretization parameter h > 0 denotes the maximal length of the intervals I_k^e, k = 0, ..., n_e − 1, e ∈ E. By ψ_j^e, j = 1, ..., n_e − 1, and φ_v, v ∈ V, we denote the nodal basis functions of V_h fulfilling ψ_j^e(v_j^e) = 1 for j = 1, ..., n_e − 1 and φ_v(v) = 1 for v ∈ V. Hence, each function y_h ∈ V_h can be represented by

(3.1)  y_h(x) = Σ_{e∈E} Σ_{j=1}^{n_e−1} y_j^e ψ_j^e(x) + Σ_{v∈V} y_v φ_v(x).

The Galerkin approximation of (2.6) then reads:
Find y_h ∈ V_h with y_h(v) = u_v for v ∈ V_D such that

(3.2)  a(y_h, w_h) = (f, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D},

with V_{h,D} := V_h ∩ H¹_D(Γ). Analogous to the continuous case, we split the discrete state y_h into y_{u,h} := S_h u, where S_h: R^{n_D} → V_h denotes the discrete harmonic extension (harmonic w.r.t. the Hamiltonian H) of the Dirichlet data u ∈ R^{n_D}, and a function y_{f,h} ∈ V_{h,D} fulfilling homogeneous Dirichlet conditions in the nodes V_D. To be more precise, we have

(3.3)  y_{u,h}(v) = u_v, v ∈ V_D,  a(y_{u,h}, w_h) = 0 ∀ w_h ∈ V_{h,D},
(3.4)  a(y_{f,h}, w_h) = (f, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D},

and a simple computation shows y_h = y_{u,h} + y_{f,h}. Finally, we can formulate the discretized optimal control problem

(3.5)  Minimize J_h(y_h, u_h) := (1/2) ‖y_h − ȳ‖²_{L²(Γ)} + (β/2) |u_h|²  over u_h ∈ R^{n_D}, y_h ∈ V_h,

subject to

(3.6)  y_h(v) = u_{h,v}, v ∈ V_D,  a(y_h, w_h) = (f, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D}.

As in the continuous case discussed in Section 2 we can derive a necessary and sufficient optimality condition. First, we define the solution operator P_h: L²(Γ) → V_{h,D} of the discretized adjoint equation by P_h(y) = p_h with

(3.7)  a(w_h, p_h) = (y, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D}.

The discretized Kirchhoff–Neumann operator K_h: V_h → R^{n_D} is defined in a variational sense by

(3.8)  (K_h p_h)(v) = a(φ_v, p_h) − (y, φ_v)_{L²(Γ)} ∀ v ∈ V_D,

with y ∈ L²(Γ) chosen such that p_h = P_h(y).

Lemma 3.1.
The adjoint of the operator S_h: R^{n_D} → V_h is S*_h = −K_h ∘ P_h: L²(Γ) → R^{n_D}.

Proof.
Let y ∈ L²(Γ), z ∈ R^{n_D} and p_h = P_h(y) be arbitrary. We confirm, using the definitions of P_h, K_h and S_h as well as the properties w_h := S_h(z) − Σ_{v∈V_D} z_v φ_v ∈ V_{h,D} and p_h ∈ V_{h,D},

  (y, S_h(z))_{L²(Γ)} = (y, S_h(z) − Σ_{v∈V_D} z_v φ_v)_{L²(Γ)} + (y, Σ_{v∈V_D} z_v φ_v)_{L²(Γ)}
                      = a(S_h(z) − Σ_{v∈V_D} z_v φ_v, p_h) + (y, Σ_{v∈V_D} z_v φ_v)_{L²(Γ)}
                      = Σ_{v∈V_D} z_v ( −a(φ_v, p_h) + (y, φ_v)_{L²(Γ)} )
                      = −Σ_{v∈V_D} z_v K_h(p_h)(v) = −z^⊤ K_h(p_h).

This implies the desired result as K_h(p_h) = (K_h ∘ P_h) y.

Analogous to the continuous case investigated in Section 2 we can derive an optimality system for (3.5)–(3.6) which reads: Find u_h ∈ R^{n_D}, y_h ∈ V_h with y_h(v) = u_{h,v} ∀ v ∈ V_D, and p_h ∈ V_{h,D}, such that

(3.9a)  a(y_h, w_h) = (f, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D},
(3.9b)  a(w_h, p_h) = (y_h − ȳ, w_h)_{L²(Γ)} ∀ w_h ∈ V_{h,D},
(3.9c)  β u_h − K_h p_h = 0.

This is a discrete version of the optimality system (2.13). Note that in general K_h(p_h) ≠ K(p_h). Using the exact Kirchhoff–Neumann operator K in the discrete optimality system is basically possible, but as we have seen, only the variational Kirchhoff–Neumann map K_h yields the favorable property that the approaches "first optimize then discretize" and "first discretize then optimize" lead to the same discrete optimality system.

In the remainder of this section we study discretization error estimates. Throughout this article, c stands for a generic positive constant which may have a different value at each occurrence. Moreover, c is always independent of h and the functions appearing in the estimates.

The most technical part of the error analysis is the proof of an estimate for the approximate Kirchhoff–Neumann operator K_h. The proof is based on a duality argument and requires some properties of the discrete harmonic extension S_h.

Lemma 3.2.
The discrete harmonic extension S_h fulfills the stability estimate

(3.10)  ‖S_h u‖_{H¹(Γ)} ≤ c |u|

for arbitrary u ∈ R^{n_D}.

Proof. For technical reasons we define a further discrete extension S̃_h: R^{n_D} → V_h defined by

  [S̃_h u]|_e ∈ P_1 ∀ e ∈ E,  [S̃_h u](v) = u_v ∀ v ∈ V_D,  [S̃_h u](v) = 0 ∀ v ∈ V_K.
Obviously, S̃_h u is a first-order polynomial on each edge e ∈ E and thus, S̃_h u ∈ V_h. As a consequence, we deduce together with the V_h-ellipticity of a(·,·), the definition of S_h and the fact that (S_h − S̃_h) u ∈ V_{h,D}

(3.11)  ‖S_h u‖²_{H¹(Γ)} ≤ c a(S_h u, S_h u) = c a(S_h u, S̃_h u) ≤ c ‖S_h u‖_{H¹(Γ)} ‖S̃_h u‖_{H¹(Γ)}.

For an edge e ∈ E with vertices v and v' we easily show the following properties. If v, v' ∈ V_K there holds [S̃_h u]|_e ≡ 0, and otherwise the values of the H¹(e)-seminorms are

  ‖(S̃_h u)'‖_{L²(e)} = L_e^{−1/2} |u_v|         if v ∈ V_D, v' ∈ V_K,
  ‖(S̃_h u)'‖_{L²(e)} = L_e^{−1/2} |u_v − u_{v'}| if v, v' ∈ V_D.

Analogously, we can compute the L²(e)-parts in the H¹(Γ)-norm and get

  ‖S̃_h u‖_{L²(e)} ≤ c L_e^{1/2} |u_v|              if v ∈ V_D, v' ∈ V_K,
  ‖S̃_h u‖_{L²(e)} ≤ c L_e^{1/2} (|u_v| + |u_{v'}|) if v, v' ∈ V_D.

After summation over all edges and taking into account (3.11) we conclude the property (3.10).

As an important consequence, we also obtain a stability estimate for the adjoint S*_h and thus, for the discrete Kirchhoff–Neumann operator K_h.

Lemma 3.3.
For arbitrary y ∈ L²(Γ) there holds the stability estimate

  |S*_h y| = |K_h(P_h y)| ≤ c ‖y‖_{L²(Γ)}.

Proof.
First, we apply the definitions of K_h from (3.8) and P_h from (3.7), taking into account that Σ_{v∈V_D} u_v φ_v − S_h u ∈ V_{h,D}, and obtain

  |K_h(P_h y)| = sup_{u ∈ R^{n_D}, |u|=1} |u^⊤ K_h(P_h y)|
              = sup_{u ∈ R^{n_D}, |u|=1} | Σ_{v∈V_D} u_v [a(φ_v, P_h y) − (y, φ_v)_{L²(Γ)}] |
              = sup_{u ∈ R^{n_D}, |u|=1} | a(S_h u, P_h y) − (y, S_h u)_{L²(Γ)} |.

The first term on the right-hand side vanishes due to the definition of S_h, see (3.3), and P_h y ∈ V_{h,D}. The last term can be bounded by means of the Cauchy–Schwarz inequality and the stability estimate for S_h proved in Lemma 3.2.

Now, we are in the position to derive the following general error estimate:

Lemma 3.4.
Let (u, y) ∈ R^{n_D} × H¹(Γ) and (u_h, y_h) ∈ R^{n_D} × V_h be the solutions of (2.7)–(2.8) and (3.5)–(3.6), respectively. Then, the error estimate

  β |u − u_h| ≤ c ( ‖y − ỹ_h‖_{L²(Γ)} + |K p − K_h p̃_h| )

is fulfilled, where p is the adjoint state corresponding to u, and ỹ_h = S_h(u) + y_{f,h} and p̃_h = P_h(y − ȳ) are the Ritz projections of y and p.

Proof.
Subtraction of the optimality conditions (2.12) for the continuous problem and (3.9c) for the discrete problem and using y_h = S_h u_h and p_h = P_h(S_h u_h + y_{f,h} − ȳ) leads to

(3.12)  β (u − u_h) = K p − K_h p_h
                   = (K p − K_h(P_h(y − ȳ))) + K_h(P_h(y − (S_h u + y_{f,h}))) + K_h(P_h(S_h(u − u_h)))
                   = (K p − K_h p̃_h) − S*_h(y − ỹ_h) − S*_h(S_h(u − u_h)).

Note that in the first step, we simply introduced the intermediate functions K_h(P_h y) and K_h(P_h(S_h u)). In the last step we inserted the Ritz projections ỹ_h = S_h u + y_{f,h} and p̃_h = P_h(y − ȳ) of y and p, respectively, as well as S*_h = −K_h ∘ P_h.

Next, we multiply the equation (3.12) by u − u_h and get for the first two terms on the right-hand side

  (K p − K_h p̃_h)^⊤ (u − u_h) ≤ |K p − K_h p̃_h| |u − u_h|,
  −S*_h(y − ỹ_h)^⊤ (u − u_h) ≤ c ‖y − ỹ_h‖_{L²(Γ)} |u − u_h|.

In the latter estimate we used the stability properties of S*_h proved in Lemma 3.3. The last term on the right-hand side of (3.12), tested with u − u_h, is non-positive and can be neglected.

It remains to derive error estimates for the two terms on the right-hand side of the general estimate derived in Lemma 3.4. The first term is a simple L²(Γ)-error and an estimate can be simply concluded from the H¹(Γ)-error estimate

(3.13)  |y − ỹ_h|_{H¹(Γ)} ≤ c h |y|_{H²(Γ)}

proved in [2, Theorem 3.2] and an application of the Aubin–Nitsche method. These arguments imply

(3.14)  ‖y − ỹ_h‖_{L²(Γ)} ≤ c h² |y|_{H²(Γ)}.

The second term, namely the error estimate for the discrete Kirchhoff–Neumann operator, is more challenging.
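Estimates such as (3.13) and (3.14) are checked in practice by computing experimental orders of convergence from errors on a sequence of meshes. A small generic helper (our own utility, not part of the paper):

```python
import math

def observed_orders(hs, errors):
    """Experimental orders of convergence (EOC) between consecutive
    mesh sizes: EOC_k = log(e_{k-1}/e_k) / log(h_{k-1}/h_k)."""
    return [math.log(errors[k - 1] / errors[k]) / math.log(hs[k - 1] / hs[k])
            for k in range(1, len(hs))]

# An error behaving like e(h) = C * h^2 yields EOC close to 2.
hs = [1/4, 1/8, 1/16, 1/32]
errs = [3.0 * h**2 for h in hs]
rates = observed_orders(hs, errs)
assert all(abs(r - 2.0) < 1e-12 for r in rates)
```

A sequence of EOC values approaching 1 then indicates that a bound like (3.13), or the overall estimate of Theorem 3.6, is sharp.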
Theorem 3.5. Let p ∈ H¹_D(Γ) be the solution of

  a(w, p) = (y, w)_{L²(Γ)} ∀ w ∈ H¹_D(Γ),

and denote by p̃_h ∈ V_{h,D} its Ritz projection, i.e., a(w_h, p − p̃_h) = 0 for all w_h ∈ V_{h,D}. Then, the error estimate

(3.15)  |K p − K_h p̃_h| ≤ c h |p|_{H²(Γ)}

is fulfilled provided that p ∈ H²(Γ).

Proof. Let u ∈ R^{n_D} be an arbitrary vector. Applying the integration-by-parts formula (2.4), which implies

  (K p)^⊤ u = Σ_{v∈V_D} u_v [a(φ_v, p) − (y, φ_v)_{L²(Γ)}],

the definition of K_h from (3.8) and the Galerkin orthogonality, using the fact that Σ_{v∈V_D} u_v φ_v − S_h u ∈ V_{h,D}, yields

  (K p − K_h p̃_h)^⊤ u = Σ_{v∈V_D} u_v a(φ_v, p − p̃_h) = a(S_h u, p − p̃_h) ≤ c ‖p − p̃_h‖_{H¹(Γ)} ‖S_h u‖_{H¹(Γ)}.

Together with the H¹(Γ)-estimate (3.13) and Lemma 3.2 one arrives at

  |K p − K_h p̃_h| = sup_{u ∈ R^{n_D}, |u|=1} |(K p − K_h p̃_h)^⊤ u| ≤ c h |p|_{H²(Γ)}.

Now we are in the position to state the main result of this section.
Theorem 3.6. Let f, ȳ ∈ L²(Γ) and β > 0 be arbitrary. The solutions (u, y) ∈ R^{n_D} × H¹(Γ) and (u_h, y_h) ∈ R^{n_D} × V_h of (2.7)–(2.8) and (3.5)–(3.6), respectively, fulfill the error estimate

  |u − u_h| + ‖y − y_h‖_{H¹(Γ)} ≤ c h ( |y|_{H²(Γ)} + |p|_{H²(Γ)} )

with a constant c > 0 independent of h.

Proof. The estimate for the control follows after insertion of the estimate (3.14) and the result of Theorem 3.5 into Lemma 3.4. To obtain an estimate for the state we apply the triangle inequality

  ‖y − y_h‖_{H¹(Γ)} ≤ ‖y − (S_h u + y_{f,h})‖_{H¹(Γ)} + ‖S_h(u − u_h)‖_{H¹(Γ)},

where we used the decomposition y_h = S_h u_h + y_{f,h}, compare (3.3)–(3.4). Note that S_h u + y_{f,h} is the Ritz projection of y and thus, the first term on the right-hand side can be bounded by means of (3.13). An estimate for the second term follows from the stability of S_h shown in Lemma 3.2 and the estimate derived for the controls.

In the numerical experiments in Section 5 we will confirm that this convergence rate is sharp. Although the estimate (3.14) used for the state promises quadratic convergence, the rate is limited due to the approximation of the discrete Kirchhoff–Neumann operator. However, there exist ideas based on local mesh refinement which would allow for quadratic convergence in (3.15) as well. For further details we refer to [26], where Dirichlet control problems for elliptic equations in planar and bounded domains are studied. An extension of these ideas to the context of graph networks is subject of further research.
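On a single edge, the discretization above reduces to classical P1 finite elements on an interval. The following sketch is our own illustration, not code from the paper: it takes c ≡ 0, f ≡ 1 and homogeneous Dirichlet data on the unit interval, assembles the tridiagonal stiffness matrix, and exploits the well-known fact that P1 elements are nodally exact for −y'' = f in one dimension:

```python
import numpy as np

def solve_poisson_edge(n_e):
    """P1 FEM for -y'' = 1 on (0,1), y(0) = y(1) = 0, on a uniform grid
    with n_e intervals (hence n_e - 1 interior nodes)."""
    h = 1.0 / n_e
    m = n_e - 1
    # Stiffness matrix (1/h) * tridiag(-1, 2, -1) over the interior nodes.
    K = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h
    F = h * np.ones(m)              # load: int f * psi_j dx = h for f = 1
    y = np.linalg.solve(K, F)
    x = np.linspace(h, 1.0 - h, m)  # interior node coordinates
    return x, y

x, y = solve_poisson_edge(16)
y_exact = x * (1.0 - x) / 2.0       # exact solution of -y'' = 1
assert np.max(np.abs(y - y_exact)) < 1e-12   # nodally exact in 1D
```

On a general metric graph the same per-edge blocks are coupled through the vertex conditions, which is exactly the structure assembled in the next section.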
4. Structure of the system matrices and preconditioning.
We now discuss the structure of the discretized equations based on the considerations from the previous section as well as the results given in [2].
We use a numbering of the nodes such that the interior nodes come first, followed by the original graph nodes, cf. (4.1). We note that the incidence matrix for the interior nodes of an edge e ∈ E is structured and can be written as

  E_e = [ −1   1            ]
        [     −1   1        ]  ∈ R^{(n_e−1) × n_e},
        [          ⋱   ⋱   ]
        [             −1  1 ]

i.e., (E_e)_{k,k} = −1 and (E_e)_{k,k+1} = 1. These matrices are collected in

  E_i = blkdiag({E_e}_{e∈E}) ∈ R^{m(n_e−1) × m n_e},

where n_e is the same for all edges for simplicity. To also incorporate the original graph nodes we now introduce the matrices

  E^+ = (1/2)(E + |E|),  E^− = (1/2)(E − |E|),

with |E| the component-wise absolute value of the incidence matrix E of the original graph. We now want to establish the incidence matrix for the extended graph based on the original graph nodes. For this consider

  Ê_j^+ = E_j^+ ⊗ [1, 0, ..., 0] = [E_j^+, 0, ..., 0] ∈ R^{n × n_{e_j}}

to incorporate outgoing edges, where E_j^+ indicates the j-th column of the matrix E^+ and n_{e_j} = n_e is the number of sub-intervals on the edge e_j. Here, the index j runs from 1 to m = |E|. For the incoming edges we use

  Ê_j^− = E_j^− ⊗ [0, 0, ..., 1] = [0, ..., 0, E_j^−] ∈ R^{n × n_e}.

We see that due to the incorporation of the interior nodes this somewhat stretches the incidence matrix of the original graph. As a result, the part of the incidence matrix representing the original nodes becomes

  E_v = [Ê_1^+ + Ê_1^−, Ê_2^+ + Ê_2^−, ..., Ê_m^+ + Ê_m^−] ∈ R^{n × m n_e}.

We then obtain the incidence matrix of the extended graph as

  Ẽ = [ E_i ]
      [ E_v ]  ∈ R^{(m(n_e−1)+n) × m n_e}.

In order to build the finite element matrices we incorporate the mesh parameter h_e via

  W_E = blkdiag({h_e^{−1} I_{n_e}}_{e∈E}) ∈ R^{(n_i+m) × (n_i+m)},

where n_i = Σ_{e∈E} (n_e −
1) and in the case that all edges have the same number of interior nodes there holds n_i = m(n_e − 1). The stiffness matrix A has the following structure

  A = [ E_i ] W_E [ E_i^⊤  E_v^⊤ ] = [ E_i W_E E_i^⊤   E_i W_E E_v^⊤ ]
      [ E_v ]                        [ E_v W_E E_i^⊤   E_v W_E E_v^⊤ ].

According to [2] the mass matrix can be written as

  M = (1/6) ( |Ẽ| Ŵ_E |Ẽ|^⊤ + diag{ (|Ẽ| Ŵ_E |Ẽ|^⊤)_{i,i} }_{i=1}^{n_i+n} )

with Ŵ_E = blkdiag({h_e I_{n_e}}_{e∈E}) ∈ R^{(n_i+m) × (n_i+m)}.
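The Kronecker-product construction of Ẽ is easy to prototype. The sketch below is our own toy instance, not from the paper: a triangle graph with n_e = 3 sub-intervals per edge and unit weights, using the orientation convention that each sub-edge runs from its +1 endpoint to its −1 endpoint (the sign convention inside E_e is our assumption; it does not affect Ẽ Ẽ^⊤):

```python
import numpy as np

n, m, ne = 3, 3, 3          # triangle: 3 nodes, 3 edges, 3 sub-intervals/edge
# Incidence matrix of the original graph (+1 source, -1 target per column).
E = np.array([[ 1.,  0., -1.],
              [-1.,  1.,  0.],
              [ 0., -1.,  1.]])
Ep, Em = 0.5 * (E + np.abs(E)), 0.5 * (E - np.abs(E))

# Interior-node incidence of one subdivided edge: (ne-1) x ne bidiagonal.
Ee = np.zeros((ne - 1, ne))
for k in range(ne - 1):
    Ee[k, k], Ee[k, k + 1] = -1.0, 1.0
Ei = np.kron(np.eye(m), Ee)             # blkdiag of the E_e

# Original nodes couple to the first/last sub-interval of each edge:
# Ehat_j^+ = E_j^+ (x) e_1^T,  Ehat_j^- = E_j^- (x) e_ne^T.
e_first, e_last = np.eye(ne)[0], np.eye(ne)[-1]
Ev = np.hstack([np.outer(Ep[:, j], e_first) + np.outer(Em[:, j], e_last)
                for j in range(m)])

Et = np.vstack([Ei, Ev])                # extended incidence matrix
L_ext = Et @ Et.T                       # Laplacian of the subdivided graph
assert Et.shape == (m * (ne - 1) + n, m * ne)
assert np.allclose(L_ext.sum(axis=1), 0.0)
assert np.allclose(np.diag(L_ext), 2.0)  # subdivided triangle is a 9-cycle
```

Scaling each column block by h_e^{−1} (resp. h_e) via W_E (resp. Ŵ_E) then yields the stiffness factor A = Ẽ W_E Ẽ^⊤ and the mass matrix formula above.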
With the discretization of both the mass and the stiffness term we are now able to obtain the matrix representation of the optimization problem, where the mass and stiffness matrices are split according to free and Dirichlet-control variables. The mass matrix incorporating the term c can be assembled in a similar fashion (cf. [2]) and we refer to it as M_c. Finally, the system matrix representing the discrete bilinear form on the left-hand side of (3.2) is denoted by K = A + M_c.

We start with the ansatz (3.1), which allows a representation of a function y_h ∈ V_h by a coefficient vector y ∈ R^N, N = m(n_e − 1) + n_K + n_D with n = n_K + n_D, sorted in the form

(4.1)  y = [ y_I ]    y_I: coefficients related to ψ_j^e, j = 1, ..., n_e − 1, e ∈ E,
           [ y_K ],   y_K: coefficients related to φ_v, v ∈ V_K,
           [ y_D ]    y_D: coefficients related to φ_v, v ∈ V_D.

In the same way, we may split the matrices K and M into

  K = [ K_II  K_IK  K_ID ]        M = [ M_II  M_IK  M_ID ]
      [ K_KI  K_KK  K_KD ]  and       [ M_KI  M_KK  M_KD ],
      [ K_DI  K_DK  K_DD ]            [ M_DI  M_DK  M_DD ]

respectively. In order to get a description of the discrete optimality system (3.9) in a matrix-vector notation, we distinguish only among Dirichlet nodes and free nodes and thus define

  y_F = [ y_I ],  K_FF = [ K_II  K_IK ],  K_FD = [ K_ID ],  K_DF = K_FD^⊤.
        [ y_K ]          [ K_KI  K_KK ]          [ K_KD ]

Analogous definitions are used for the mass matrix M. Furthermore, we define the load vector f by f_v = (f, φ_v)_{L²(Γ)} and the vector ȳ by ȳ_v = (ȳ, φ_v)_{L²(Γ)}. These vectors are decomposed as well in the form

  f = [ f_F ],  ȳ = [ ȳ_F ].
      [ f_D ]       [ ȳ_D ]

With the previously introduced matrices and vectors we can write the optimality system (3.9) in the form

(4.2)  [ M_FF   M_FD        K_FF^⊤ ] [ y_F ]   [ ȳ_F ]
       [ M_DF   M_DD + βI   K_FD^⊤ ] [ u   ] = [ ȳ_D ].
       [ K_FF   K_FD        0      ] [ p_F ]   [ f_F ]

Note that the sign of the adjoint state in this formulation is different than in (3.9). This results in a symmetric system matrix.

In fact, the system (4.2) is a saddle point or KKT system [3, 8]. For complex networks with many connections it is infeasible to work with direct solvers [7] due to the higher complexity and fill-in issues. While one could employ non-standard conjugate gradient methods [12, 28, 31], we here employ
Minres [22] as a tailored scheme for symmetric and indefinite matrices or
Gmres [30] for the case of a non-symmetric preconditioned system. Of course, the performance of this scheme will rely on the distribution of the eigenvalues and we need to improve the performance by introducing a suitably chosen preconditioner P. Its design is discussed in the next part.

It is well known [19] that an ideal preconditioner for saddle point systems is given by

  P_ideal = [ M_FF   M_FD        0 ]
            [ M_DF   M_DD + βI   0 ]
            [ 0      0           S ]

with the Schur complement defined as

  S = [ K_FF  K_FD ] [ M_FF   M_FD      ]^{−1} [ K_FF^⊤ ]
                     [ M_DF   M_DD + βI ]      [ K_FD^⊤ ].

While this preconditioner is attractive in producing an optimally clustered spectrum of the preconditioned matrix, it is in general very expensive to apply as the matrix S is typically dense. A more practical choice is given when we consider a block-diagonal approximation

  P_ideal ≈ [ M̂  0 ],  where [ M_FF   M_FD      ] ≈ M̂  and  S ≈ Ŝ.
            [ 0  Ŝ ]         [ M_DF   M_DD + βI ]

To make these approximations more precise, we first focus on approximating the mass matrix block and note that again an ideal preconditioner according to [19] is

  [ M_FF   M_FD      ] ≈ [ M_FF   0                                ]
  [ M_DF   M_DD + βI ]   [ 0      M_DD + βI − M_DF M_FF^{−1} M_FD ].

Now, we use this approximation and the notation S_M = M_DD + βI − M_DF M_FF^{−1} M_FD to obtain the following approximation of the Schur complement of the overall KKT system

  S ≈ [ K_FF  K_FD ] [ M_FF  0   ]^{−1} [ K_FF^⊤ ] = K_FF M_FF^{−1} K_FF^⊤ + K_FD S_M^{−1} K_FD^⊤.
                     [ 0     S_M ]      [ K_FD^⊤ ]

Still, the approximation K_FF M_FF^{−1} K_FF^⊤ + K_FD S_M^{−1} K_FD^⊤ is not very practical to work with since we would need to invert a sum of matrix products. We want to derive an efficient approximation to this sum. First, we look at the approximation M_FF ≈ D_M := diag(M_FF). The usefulness of this approximation for practical purposes is illustrated in Figure 1a, where we show that using the approximation D_M causes almost no change in the eigenvalues of the preconditioned mass matrix

  [ M_FF  0   ]^{−1} [ M_FF   M_FD      ]
  [ 0     S_M ]      [ M_DF   M_DD + βI ].

Fig. 1: Eigenvalue plots of the preconditioned matrices. In particular, preconditioned mass matrix (top left), preconditioned saddle point system in symmetric form for two values of β (top right, bottom left), and preconditioned saddle point system with non-symmetric preconditioner (bottom right).

We also use S_M ≈ D_{S_M} := M_DD + βI − M_DF D_M^{−1} M_FD and then get

(4.3)  K_FF M_FF^{−1} K_FF^⊤ + K_FD S_M^{−1} K_FD^⊤ ≈ K_FF D_M^{−1} K_FF^⊤ + K_FD D_{S_M}^{−1} K_FD^⊤.

The approximation K_FF D_M^{−1} K_FF^⊤ + K_FD D_{S_M}^{−1} K_FD^⊤ is still difficult to work with directly as this involves a sum of terms. We now want to further rework this to get a usable preconditioning scheme

  K_FF D_M^{−1} K_FF^⊤ + K_FD D_{S_M}^{−1} K_FD^⊤ ≈ (K_FF + N) D_M^{−1} (K_FF + N)^⊤.

A matching argument [25, 23] requires the following

  N D_M^{−1} N^⊤ = K_FD D_{S_M}^{−1} K_FD^⊤,

which holds true for

  N = (K_FD D_{S_M}^{−1} K_FD^⊤)^{1/2} D_M^{1/2}.

Finally, since we do not want to take the square root of a possibly very large matrix, we obtain the final Schur complement approximation via

  N = D_{KDK}^{1/2} D_M^{1/2},

where D_{KDK} is the lumped version of K_FD D_{S_M}^{−1} K_FD^⊤. While this approach fits well within the preconditioning framework for a symmetric solver like
Minres [22], we can alternatively try to avoid the square root and the lumping of a matrix. For this, let us assume we are willing to employ a non-symmetric solver such as Gmres [30]. We then again use a matching argument
$$
K_{FF} M_{FF}^{-1} K_{FF}^{\top} + K_{FD} D_{S_M}^{-1} K_{FD}^{\top}
\approx (K_{FF} + N_1) M_{FF}^{-1} \bigl(K_{FF}^{\top} + N_2\bigr)
$$
with the non-symmetric matching condition
$$
N_1 M_{FF}^{-1} N_2 = K_{FD} D_{S_M}^{-1} K_{FD}^{\top},
$$
which we can achieve by using
$$
N_1 = M_{FF}, \qquad N_2 = K_{FD} D_{S_M}^{-1} K_{FD}^{\top}.
$$
Note that since we do not need to take square roots here, we can work with the matrix $M_{FF}$ itself and not just its diagonal or lumped version. Now, we can write down the preconditioner
$$
\mathcal{P} =
\begin{bmatrix}
D_M & 0 & 0 \\
0 & S_M & 0 \\
0 & 0 & (K_{FF} + N) D_M^{-1} (K_{FF} + N)^{\top}
\end{bmatrix}
$$
for the symmetric setup and
$$
\mathcal{P} =
\begin{bmatrix}
D_M & 0 & 0 \\
0 & S_M & 0 \\
0 & 0 & (K_{FF} + N_1) M_{FF}^{-1} \bigl(K_{FF}^{\top} + N_2\bigr)
\end{bmatrix}
$$
in the unsymmetric case. For a small example implemented in Matlab we see in Figures 1b and 1c, where the value of the regularization parameter is varied, that the eigenvalues of the preconditioned matrix are nicely clustered for the ideal preconditioner $\mathcal{P}_{\mathrm{ideal}}$, the approximation with the Schur complement from (4.3), called $\mathcal{P}_{\mathrm{diag/Schur}}$, and the matching-based approximation $\mathcal{P}$. Note that since we will only work with $\mathcal{P}$, the approximation with an expensive Schur complement, $\mathcal{P}_{\mathrm{diag/Schur}}$, is never used in practice. We also observe that there is almost no change in the eigenvalues even when the parameter $\beta$ changes. Similar robustness is observed when we use the non-symmetric Schur complement approximation, as illustrated in Figure 1d. The efficiency of our preconditioners is discussed in the numerical experiments in the next section.
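To make the construction concrete, the following sketch (Python with NumPy/SciPy rather than the MATLAB implementation used in the paper; all blocks are random stand-ins for the actual FEM matrices, and the variable names are our own) assembles $D_M$ and $D_{S_M}$, forms the non-symmetric matched factors $N_1 = M_{FF}$ and $N_2 = K_{FD} D_{S_M}^{-1} K_{FD}^{\top}$, and applies the resulting block-diagonal preconditioner inside GMRES; for cheapness the sketch uses $D_{S_M}$ instead of $S_M$ in the middle block:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
nF, nD, beta = 40, 6, 1e-4  # free nodes, Dirichlet nodes, regularization

# Random stand-ins for the FEM blocks (SPD mass-type and stiffness-type matrices).
G = rng.standard_normal((nF + nD, nF + nD))
Mass = G @ G.T + (nF + nD) * np.eye(nF + nD)
M_FF, M_FD = Mass[:nF, :nF], Mass[:nF, nF:]
M_DF, M_DD = Mass[nF:, :nF], Mass[nF:, nF:]
H = rng.standard_normal((nF, nF))
K_FF = H @ H.T + nF * np.eye(nF)
K_FD = rng.standard_normal((nF, nD))

# KKT system [[M-block, K-block^T], [K-block, 0]].
Mblk = np.block([[M_FF, M_FD], [M_DF, M_DD + beta * np.eye(nD)]])
Kblk = np.hstack([K_FF, K_FD])
A = np.block([[Mblk, Kblk.T], [Kblk, np.zeros((nF, nF))]])

# Diagonal approximations D_M = diag(M_FF) and D_{S_M}.
dM = np.diag(M_FF).copy()
D_SM = M_DD + beta * np.eye(nD) - M_DF @ (M_FD / dM[:, None])

# Non-symmetric matching: N1 = M_FF, N2 = K_FD D_{S_M}^{-1} K_FD^T, so that
# N1 M_FF^{-1} N2 reproduces the second Schur-complement term exactly.
N2 = K_FD @ np.linalg.solve(D_SM, K_FD.T)
Lf = K_FF + M_FF      # left factor  K_FF + N1
Rf = K_FF.T + N2      # right factor K_FF^T + N2

def apply_P_inv(v):
    """Apply the inverse of the block-diagonal preconditioner P."""
    z = np.empty_like(v)
    z[:nF] = v[:nF] / dM                                  # diagonal scaling
    z[nF:nF + nD] = np.linalg.solve(D_SM, v[nF:nF + nD])  # small nD x nD solve
    # Schur block: ((K_FF+N1) M_FF^{-1} (K_FF^T+N2))^{-1} = Rf^{-1} M_FF Lf^{-1}
    z[nF + nD:] = np.linalg.solve(Rf, M_FF @ np.linalg.solve(Lf, v[nF + nD:]))
    return z

P = LinearOperator(A.shape, matvec=apply_P_inv, dtype=float)
b = rng.standard_normal(A.shape[0])
x, info = gmres(A, b, M=P, restart=A.shape[0], atol=1e-12)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

On this toy system of dimension 86 any GMRES variant would converge quickly; the point of the sketch is the structure of `apply_P_inv`, whose three block solves (a diagonal scaling, a small dense system of size $n_D$, and two factorizable systems of size $n_F$) stay cheap when the blocks stem from a large network.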
5. Numerical experiments.
In this section we illustrate how the methodology developed in this paper performs when applied to several challenging datasets. Our implementation is based on MATLAB®. For the iterative solvers we rely on the standard implementations of Minres and Gmres within MATLAB®. We run the algorithm until a prescribed relative residual tolerance is reached. The goal of the preconditioners we designed is to provide fast and robust methods that exploit as much as possible the structure of the saddle point problems presented previously. While for certain smaller and non-complex graphs the use of direct solvers is very efficient, the motivation for constructing preconditioners is the applicability to complex networks and also more
challenging differential equations that contain time derivatives and possibly parameter dependencies. The first example we consider is a simple star graph, where we have one internal node and all remaining nodes are leaf nodes. The solution of a Dirichlet boundary control problem on such a graph is illustrated in Figure 2. The required iteration numbers of the iterative Gmres method using the non-symmetric preconditioner constructed in Section 4.3 are reported in Table 1a and the computational times in Table 1b. Obviously, our iterative solver is robust, both with respect to the regularization parameter $\beta$ and the discretization parameter $n_e$. It can also be observed that our method outperforms an unpreconditioned iterative solver. As a second example, we consider a finite difference graph of an L-shaped domain, also illustrated in Figure 3. To generate this graph we used the Matlab function "numgrid('L',N)", where N is the number of vertices along the long edges of an L-shaped domain. In the experiment we used N = 10, which leads to a graph with 75 nodes, and we randomly selected 12 controllable nodes. The numbers of Gmres iterations are presented in Table 1c. As in the previous example, we observe robustness of the iteration numbers with respect to changes in $\beta$ and the number of finite element nodes. We see a slight mesh dependence for an increasing number of degrees of freedom, but the iteration numbers stay rather low.

Fig. 2: Optimal solution of the Dirichlet control problem in a star-shaped network. The red lines represent the edges of the graph, the blue lines represent the optimal state and the green stars are the control nodes.

In this experiment we want to check whether the convergence behavior predicted by Theorem 3.6 is also observed in practice. To this end, we use again the FDM graph of an L-shaped domain, as considered already in the previous section. The input data for the optimal control problem are $\beta = 0.1$, constant functions $\bar y$, $f$ and $c$, and $n_e = 2^k$, $k = 1, \dots, 13$. The discretization error for the control and the state in the $L^2(\Gamma)$- and $H^1(\Gamma)$-norm is obtained by comparison with the solution on the finest grid with $k = 15$. In Table 2 the absolute values of the discretization errors are displayed.
N_DOF        86       182      374      758       1526      3062       6134
β = 10^{−}   11 (11)  25 (35)  29 (87)  27 (244)  36 (727)  31 (2473)  31 (5636)
β = 10^{−}   16 (16)  25 (35)  29 (89)  34 (243)  31 (740)  31 (2474)  31 (5636)
β = 10^{−}   16 (16)  22 (36)  24 (90)  34 (243)  34 (727)  31 (2472)  31 (5548)
β = 10^{−}   16 (16)  22 (36)  24 (90)  34 (243)  33 (729)  31 (2472)  31 (5633)
(a) Star graph – Iterations
N_DOF: 758, 1526, 3062, 6134
(b) Star graph – computational time in seconds

N_DOF        398       918       1958        4038       8198
β = 10^{−}   44 (235)  90 (563)  103 (1309)  94 (3233)  92 (–)
β = 10^{−}   47 (238)  89 (565)  106 (1313)  87 (3241)  86 (–)
β = 10^{−}   46 (246)  86 (570)  106 (1309)  88 (3233)  84 (–)
β = 10^{−}   46 (236)  92 (566)  100 (1309)  94 (3248)  85 (–)
(c) FDM graph – Iterations
Table 1: Dependence of the iteration numbers of Gmres on the regularization parameter $\beta$ and the discretization parameter $N_{\mathrm{DOF}} = n + m(n_e - 1)$; the numbers in parentheses are the iteration counts of the unpreconditioned solver.

We now illustrate with two more examples that the presented technique also applies to more general complex networks. The first example is the road network of Minnesota illustrated in Figure 4a. Also for this problem, the preconditioner we studied is robust with respect to $\beta$ and $h$, see Table 3. The second complex network we use is the Facebook Ego Network Dataset (see https://blogs.mathworks.com/loren/2016/02/03/visualizing-facebook-networks-with-matlab/) shown in Figure 4b. Here we have roughly five million degrees of freedom, and the solver is able to obtain the solution after 38 and 37 iterations, respectively, for the two tested values of $\beta$.
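For illustration, the star-graph discretization behind the first experiment can be sketched in a few lines. The following fragment (Python/NumPy rather than the MATLAB used above; the function name and node numbering are our own choices) assembles the stiffness matrix of the extended star graph: each of the $m$ unit-length edges is split into $n_e$ linear finite elements, giving $N_{\mathrm{DOF}} = (m+1) + m(n_e - 1)$ nodes.

```python
import numpy as np

def star_graph_stiffness(m, ne):
    """Stiffness matrix of -u'' on a star graph with m unit-length edges,
    each split into ne linear finite elements (h = 1/ne).

    Node numbering: 0 = center, 1..m = leaves, then the interior edge nodes."""
    h = 1.0 / ne
    n_nodes = (m + 1) + m * (ne - 1)
    A = np.zeros((n_nodes, n_nodes))
    for e in range(m):
        # global indices of the ne - 1 interior nodes of edge e
        interior = [1 + m + e * (ne - 1) + j for j in range(ne - 1)]
        path = [0] + interior + [1 + e]          # nodes along edge e
        for j in range(ne):                      # local stiffness [[1,-1],[-1,1]]/h
            i0, i1 = path[j], path[j + 1]
            A[i0, i0] += 1.0 / h
            A[i1, i1] += 1.0 / h
            A[i0, i1] -= 1.0 / h
            A[i1, i0] -= 1.0 / h
    return A

A = star_graph_stiffness(m=5, ne=4)
print(A.shape)  # (21, 21): N_DOF = (m + 1) + m*(ne - 1)
```

A Dirichlet control configuration then corresponds to selecting a subset of the leaf nodes, which induces the partitioning into the F- and D-blocks used by the preconditioner of the previous section.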
6. Conclusion.
In this paper we have discussed a PDE-constrained optimization problem on a complex network. The steady PDE operator and the objective function were discretized using finite elements. A rigorous error analysis showed first-order convergence, which we then studied in the numerical experiments section of the paper. We further discussed the matrix structure following [2] and proposed a preconditioning strategy for the saddle point system representing the first-order optimality conditions. Numerical experiments illustrated that we do indeed observe the proven discretization error and that the developed preconditioner performed robustly with respect to changes of all the relevant parameters.

Fig. 3: Solution of the optimal Dirichlet control problem on an L-shaped finite difference graph.

Table 2: Discretization errors of the controls measured in $|\cdot|$ and of the states in $\|\cdot\|_{L^2(\Gamma)}$ and $|\cdot|_{H^1(\Gamma)}$; the numbers in parentheses are experimental convergence rates. Here, $N_{\mathrm{DOF}}$ is the number of degrees of freedom for the state variable, i.e., the number of nodes in the extended graph. (Columns: $N_{\mathrm{DOF}}$, $h$, $|u - u_h|$, $\|y - y_h\|_{L^2(\Gamma)}$, $\|y - y_h\|_{H^1(\Gamma)}$.)
REFERENCES

[1]
T. Apel, M. Mateos, J. Pfefferer, and A. Rösch , Error estimates for Dirichlet controlproblems in polygonal domains: quasi-uniform meshes , Mathematical Control and RelatedFields, 8 (2018), pp. 217–245.[2]
M. Arioli and M. Benzi , A finite element method for quantum graphs , IMA Journal of Nu-merical Analysis, 38 (2018), pp. 1119–1163.[3]
M. Benzi, G. H. Golub, and J. Liesen , Numerical solution of saddle point problems , ActaNumerica, 14 (2005), pp. 1–137.[4]
G. Berkolaiko and P. Kuchment, Introduction to quantum graphs, vol. 186 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 2013.

             N_DOF
β = 10^{−}   45   45   61
β = 10^{−}   39   31   42
β = 10^{−}   39   29   41
β = 10^{−}   39   29   41

Table 3: Iteration numbers of the Gmres method for the solution of an optimal Dirichlet control problem on the Minnesota graph.

(a) Road network of Minnesota   (b) Ego network of Facebook
Fig. 4: Example networks investigated in Section 5.3.

[5]
E. Casas and J.-P. Raymond , Error estimates for the numerical approximation of Dirichletboundary control for semilinear elliptic equations , SIAM Journal on Control and Opti-mization, 45 (2006), pp. 1586–1611.[6]
F. R. K. Chung , Spectral graph theory , vol. 92 of CBMS Regional Conference Series in Math-ematics, Amer. Math. Soc., Providence, RI, 1997.[7]
I. S. Duff, A. M. Erisman, and J. K. Reid, Direct methods for sparse matrices, Monographs on Numerical Analysis, The Clarendon Press, Oxford University Press, New York, 1989. Corrected paperback edition, Oxford Science Publications.[8]
H. C. Elman, D. J. Silvester, and A. J. Wathen , Finite elements and fast iterative solvers:with applications in incompressible fluid dynamics , Numerical Mathematics and ScientificComputation, Oxford University Press, New York, 2005.[9]
L. J. Grady and J. R. Polimeni , Discrete calculus , Springer-Verlag London, Ltd., London,2010. Applied analysis on graphs for computational science.[10]
S. Grundel, N. Hornung, and S. Roggendorf , Numerical aspects of model order reduc-tion for gas transportation networks , in Simulation-Driven Modeling and Optimization,Springer, 2016, pp. 1–28.[11]
M. Herty, J. Mohring, and V. Sachers , A new model for gas flow in pipe networks , Math-ematical Methods in the Applied Sciences, 33 (2010), pp. 845–855.[12]
R. Herzog and E. Sachs , Preconditioned conjugate gradient method for optimal control prob-lems with control and state constraints , SIAM Journal on Matrix Analysis and Applica-tions, 31 (2010), pp. 2291–2317.[13]
J. Hild and G. Leugering , Real-time control of urban drainage systems , in Mathematical op-timization of water networks, vol. 162 of Internat. Ser. Numer. Math., Birkhäuser/SpringerBasel AG, Basel, 2012, pp. 129–150.[14]
M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich , Optimization with PDE Constraints ,Mathematical Modelling: Theory and Applications, Springer-Verlag, New York, 2009.[15]
K. Ito and K. Kunisch , Lagrange multiplier approach to variational problems and applications ,vol. 15 of Advances in Design and Control, Society for Industrial and Applied Mathematics(SIAM), Philadelphia, PA, 2008. MARTIN STOLL AND MAX WINKLER[16]
G. Leugering, A. Martin, M. Schmidt, and M. Sirvent , Nonoverlapping domain decompo-sition for optimal control problems governed by semilinear models for gas flow in networks ,Control and Cybernetics, 46 (2017), pp. 191–225.[17]
K.-A. Mardal and R. Winther , Preconditioning discretizations of systems of partial differ-ential equations , Numerical Linear Algebra with Applications, 18 (2011), pp. 1–40.[18]
S. May, R. Rannacher, and B. Vexler , Error analysis for a finite element approximation ofelliptic Dirichlet boundary control problems , SIAM Journal on Control and Optimization,51 (2013), pp. 2585–2611.[19]
M. F. Murphy, G. H. Golub, and A. J. Wathen , A note on preconditioning for indefinitelinear systems , SIAM Journal on Scientific Computing, 21 (2000), pp. 1969–1972.[20]
A. Nordenfelt , Spectral analysis of discrete approximations of quantum graphs , tech. rep.,Lund University, Sweden, 2007.[21]
G. Of, T. X. Phan, and O. Steinbach , An energy space finite element approach for ellipticDirichlet boundary control problems , Numerische Mathematik, 129 (2015), pp. 723–748.[22]
C. C. Paige and M. A. Saunders , Solutions of sparse indefinite systems of linear equations ,SIAM Journal on Numerical Analysis, 12 (1975), pp. 617–629.[23]
J. W. Pearson, M. Stoll, and A. J. Wathen , Regularization-robust preconditioners for time-dependent PDE-constrained optimization problems , SIAM Journal on Matrix Analysis andApplications, 33 (2012), pp. 1126–1152.[24]
J. W. Pearson and A. J. Wathen, A new approximation of the Schur complement in preconditioners for PDE-constrained optimization, Numerical Linear Algebra with Applications, 19 (2012), pp. 816–829.[25] J. W. Pearson and A. J. Wathen,
Fast iterative solvers for convection-diffusion control problems , Electronic Transactionson Numerical Analysis, 40 (2013), pp. 294–310.[26]
J. Pfefferer and M. Winkler , Finite element error estimates for normal derivatives onboundary concentrated meshes , SIAM Journal on Numerical Analysis, to appear (2019).[27]
T. Rees, H. S. Dollar, and A. J. Wathen , Optimal solvers for PDE-constrained optimization ,SIAM Journal on Scientific Computing, 32 (2010), pp. 271–298.[28]
T. Rees and M. Stoll , Block-triangular preconditioners for PDE-constrained optimization ,Numerical Linear Algebra with Applications, 17 (2010), pp. 977–996.[29]
Y. Romano, M. Elad, and P. Milanfar , The little engine that could: regularization bydenoising (RED) , SIAM Journal on Imaging Sciences, 10 (2017), pp. 1804–1844.[30]
Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 7 (1986), pp. 856–869.[31]
J. Schöberl and W. Zulehner , Symmetric indefinite preconditioners for saddle point prob-lems with applications to PDE-constrained optimization problems , SIAM Journal on MatrixAnalysis and Applications, 29 (2007), pp. 752–773.[32]
D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst , Theemerging field of signal processing on graphs: Extending high-dimensional data analysisto networks and other irregular domains , arXiv preprint arXiv:1211.0053, (2012).[33]
D. Spielman , Spectral graph theory , in Combinatorial scientific computing, Chapman &Hall/CRC Comput. Sci. Ser., CRC Press, Boca Raton, FL, 2012, pp. 495–524.[34]
M. Stoll , One-shot solution of a time-dependent time-periodic PDE-constrained optimizationproblem , IMA Journal of Numerical Analysis, 34 (2014), pp. 1554–1577.[35]
F. Tröltzsch , Optimal control of partial differential equations: theory, methods, and applica-tions , vol. 112, American Mathematical Soc., 2010.[36]
U. von Luxburg , A tutorial on spectral clustering , Statistics and Computing, 17 (2007),pp. 395–416.[37]
M. Winkler , Error estimates for variational normal derivatives and Dirichlet control problemswith energy regularization , 2018. arXiv-preprint 1808.01171.[38]
W. A. M. Wybo, D. Boccalini, B. Torben-Nielsen, and M.-O. Gewaltig , A sparse refor-mulation of the Green’s function formalism allows efficient simulations of morphologicalneuron models , Neural Computation, 27 (2015), pp. 2587–2622.[39]