Separating Controller Design from Closed-Loop Design: A New Perspective on System-Level Controller Synthesis
Jing Shuang (Lisa) Li and Dimitar Ho

Authors are with the Department of Computing and Mathematical Sciences, California Institute of Technology. [email protected], [email protected]
Abstract — We show that given a desired closed-loop response for a system, there exists an affine subspace of controllers that achieve this response. By leveraging the existence of this subspace, we are able to separate controller design from closed-loop design by first synthesizing the desired closed-loop response and then synthesizing a controller that achieves the desired response. This is a useful extension to the recently introduced System Level Synthesis framework, in which the controller and closed-loop response are jointly synthesized and we cannot enforce controller-specific constraints without subjecting the closed-loop map to the same constraints. We demonstrate the importance of separating controller design from closed-loop design with an example in which communication delay and locality constraints cause standard SLS to be infeasible. Using our new two-step procedure, we are able to synthesize a controller that obeys the constraints while only incurring a 3% increase in LQR cost compared to the optimal LQR controller.
I. INTRODUCTION

Large-scale distributed cyberphysical systems (e.g. power grids, intelligent transportation systems) are composed of numerous local controllers that exchange local information via some communication network. The information that each local controller is able to obtain is limited by properties of the communication network, e.g. delay. It is a challenge to scalably synthesize optimal local controllers subject to the limitations of the communication network [1]–[6].

The recently developed System Level Synthesis (SLS) framework addresses this challenge by shifting the optimization from the space of available controllers to the space of achievable system closed-loop maps [7]. In doing so, it allows the problem to be decomposed into sub-problems to be solved in parallel, resulting in a synthesis procedure with $O(1)$ complexity [8].

In the original SLS framework, the closed-loop maps themselves are used to implement the controller, and thus any constraints applied to the controller are directly enforced on the closed-loop response as well. However, the above-mentioned communication limitations motivate constraints on controllers, not closed-loop maps; by applying these constraints on the closed-loop response, we unnecessarily limit the space over which we can search for solutions. Standard SLS is infeasible under excessive communication constraints. [9] addresses this by searching over approximate closed-loop maps instead of exact closed-loop maps; constraints are imposed on the approximate closed-loop maps. We propose an alternative two-step procedure, as follows:
1) Synthesize the desired closed-loop response, subject to closed-loop constraints. This can be done using SLS or any other linear synthesis method (Proposition 1).
2) Synthesize the controller, subject to controller constraints.

To fully separate closed-loop map constraints from controller constraints, we require a controller that is implemented using transfer matrices other than the closed-loop maps. We define the space of such matrices in Theorem 2 and give conditions for their existence in Lemma 2.1. The main contribution of this paper is to introduce the controller synthesis step of the design procedure and demonstrate its importance. We show that our proposed two-step synthesis allows us to design low-cost, distributed controllers that were unavailable to us in the previous framework. Additionally, the controller synthesis problem can be decomposed into parallelizable sub-problems, much like the original SLS problem.

II. PRELIMINARIES
A. Notation
We use italicized lower-case letters (e.g. $x_t$) to denote vectors in the time domain. We use italicized upper-case letters (e.g. $A$) to denote constant matrices. We use superscripts to denote individual matrix elements (e.g. $A^{i,j}$). We use boldface lower and upper case letters (e.g. $\mathbf{x}$, $\mathbf{\Phi}_x$, $\mathbf{R}_c$) to denote signals and transfer matrices in the frequency domain. We use $R_c(k)$ to denote the $k$th spectral component of $\mathbf{R}_c$, i.e. $\mathbf{R}_c(z) = \sum_{k=0}^{\infty} R_c(k) z^{-k}$. In this paper, we will restrict ourselves to strictly proper finite-impulse-response (FIR) transfer matrices, i.e. $\mathbf{R}_c(z) = \sum_{k=1}^{T} R_c(k) z^{-k}$, $T \in \mathbb{Z}_+$.
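To make these conventions concrete, the short sketches interspersed below represent an FIR transfer matrix by the list of its spectral components. This representation, and the use of Python/NumPy rather than the MATLAB/cvx toolchain used for the paper's experiments, are our choices for illustration only.

```python
import numpy as np

# Convention assumed by the sketches below (our choice, not from the paper):
# an FIR transfer matrix R_c(z) = sum_{k=1}^{Tc} R_c(k) z^{-k} is stored as a
# list of spectral components, with Rc[k-1] = R_c(k).
Tc, n = 3, 2
Rc = [np.eye(n)] + [np.zeros((n, n)) for _ in range(Tc - 1)]  # R_c(1) = I
```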
B. System setup

We use the same setup as in (2.1) of [7]:

$$x_{t+1} = A x_t + B u_t + w_t \quad (1)$$

where $x, w \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$. In this paper we focus on the time-invariant case (i.e. $A$, $B$ have no time-dependence) with state feedback. $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$ are the closed-loop maps from $\mathbf{w}$ to $\mathbf{x}$ and $\mathbf{u}$, with FIR time horizon $T$:

$$\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} \mathbf{w} \quad (2)$$

Fig. 1. Implementation of state feedback controller.
C. Controller implementation
Fig. 1 shows the controller implementation. $\mathbf{R}_c$ and $\mathbf{M}_c$ are the implementation matrices, with order (i.e. FIR time horizon) $T_c$. The controller includes two internal signals, $\hat{x}$ and $\hat{\delta}$. The equations describing the controller are

$$\hat{\delta}_t = x_t - \sum_{k=2}^{T_c} R_c(k) \hat{\delta}_{t-k+1} \quad (3a)$$
$$u_t = \sum_{k=1}^{T_c} M_c(k) \hat{\delta}_{t-k+1} \quad (3b)$$

where (3a) assumes that $R_c(1)$ is the identity. For a more detailed derivation, refer to [10]. The corresponding frequency-domain equations are

$$\hat{\boldsymbol{\delta}} = \mathbf{x} + (I - z\mathbf{R}_c)\hat{\boldsymbol{\delta}} \quad (4a)$$
$$\mathbf{x} = z\mathbf{R}_c \hat{\boldsymbol{\delta}} \quad (4b)$$
$$\mathbf{u} = z\mathbf{M}_c \hat{\boldsymbol{\delta}} \quad (4c)$$
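The recursion (3) can be run directly against the plant (1). The following is a minimal NumPy sketch, assuming the spectral-component list convention above; the helper name `simulate_controller` is ours.

```python
import numpy as np

def simulate_controller(A, B, Rc, Mc, w):
    """Run plant (1) in feedback with controller (3).
    Rc, Mc: spectral components, Rc[0] = R_c(1) = I. w: (Tsim, n) disturbances."""
    n = A.shape[0]
    Tc = len(Rc)
    x = np.zeros(n)
    hist = [np.zeros(n) for _ in range(Tc)]   # [delta_hat_{t-1}, ..., delta_hat_{t-Tc}]
    xs, us = [], []
    for wt in w:
        # (3a): delta_hat_t = x_t - sum_{k=2}^{Tc} R_c(k) delta_hat_{t-k+1}
        delta = x - sum(Rc[k] @ hist[k - 1] for k in range(1, Tc))
        hist = [delta] + hist[:-1]            # now [delta_hat_t, ..., delta_hat_{t-Tc+1}]
        # (3b): u_t = sum_{k=1}^{Tc} M_c(k) delta_hat_{t-k+1}
        u = sum(Mc[k] @ hist[k] for k in range(Tc))
        xs.append(x.copy()); us.append(u)
        x = A @ x + B @ u + wt                # plant (1)
    return np.array(xs), np.array(us)
```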
Proposition 1. Any linear controller (i.e. $\mathbf{u} = \mathbf{K}\mathbf{x}$) can be implemented using the controller structure defined in Fig. 1.
Proof. We can construct closed-loop maps $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$ directly from $\mathbf{K}$, as shown in (4.4) of [7]:

$$\mathbf{\Phi}_x = (zI - A - B\mathbf{K})^{-1} \quad (5a)$$
$$\mathbf{\Phi}_u = \mathbf{K}(zI - A - B\mathbf{K})^{-1} \quad (5b)$$

We can then set $\mathbf{R}_c = \mathbf{\Phi}_x$ and $\mathbf{M}_c = \mathbf{\Phi}_u$ in (4), which gives back the original controller $\mathbf{u} = \mathbf{K}\mathbf{x}$.

III. IMPLEMENTATION MATRICES

A. Controllers and closed-loop maps
Theorem 1.
Let $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ be stable closed-loop maps. The only linear controller $\mathbf{K}$ (i.e. $\mathbf{u} = \mathbf{K}\mathbf{x}$) that achieves these closed-loop maps is $\mathbf{K} = \mathbf{\Phi}_u \mathbf{\Phi}_x^{-1}$.
Proof. By Theorem 4.1 in [7], $\mathbf{K} = \mathbf{\Phi}_u \mathbf{\Phi}_x^{-1}$ achieves the closed-loop maps. We show uniqueness by contradiction. Assume there is another linear controller $\mathbf{K}_2$, $\mathbf{K}_2 \neq \mathbf{K}$, that also achieves the desired closed-loop maps. Since both $\mathbf{K}$ and $\mathbf{K}_2$ achieve $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$,

$$\mathbf{\Phi}_x = (zI - A - B\mathbf{K})^{-1} = (zI - A - B\mathbf{K}_2)^{-1} \quad (6a)$$
$$\mathbf{\Phi}_u = \mathbf{K}(zI - A - B\mathbf{K})^{-1} = \mathbf{K}_2(zI - A - B\mathbf{K}_2)^{-1} \quad (6b)$$

Substituting (6a) into (6b) gives

$$\mathbf{K}_2 \mathbf{\Phi}_x = \mathbf{K} \mathbf{\Phi}_x \quad (7)$$

Since $\mathbf{\Phi}_x$ is invertible, this implies that $\mathbf{K}_2 = \mathbf{K}$. Contradiction!

Theorem 1, along with the definitions from (5), shows a one-to-one mapping between $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ and $\mathbf{K}$. However, the linear controller $\mathbf{K}$ can be implemented in a variety of ways. For example, we could directly implement $\mathbf{u} = \mathbf{K}\mathbf{x}$; we could also implement a linear controller using the structure shown in Fig. 1. In the original SLS framework, the latter is used to avoid direct matrix inversion of $\mathbf{\Phi}_x$.
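For a static gain $K$, the maps (5) can be evaluated pointwise in $z$ and the identity of Theorem 1 checked numerically. A small sketch; the dimensions and test points are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
K = 0.1 * rng.standard_normal((m, n))

for z in (1.5, 2.0 + 0.5j):                        # arbitrary points outside the unit circle
    Phi_x = np.linalg.inv(z * np.eye(n) - A - B @ K)    # (5a)
    Phi_u = K @ Phi_x                                   # (5b)
    assert np.allclose(Phi_u @ np.linalg.inv(Phi_x), K)  # Theorem 1: K = Phi_u Phi_x^{-1}
```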
B. Implementing closed-loop maps

For the controller structure defined in Fig. 1, let the controller implemented by $(\mathbf{R}_c, \mathbf{M}_c)$ achieve closed-loop maps $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u)$. We define the following terminology:

Definition 1. $(\mathbf{R}_c, \mathbf{M}_c)$ are the implementation transfer matrices for the closed-loop maps $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u)$. We will refer to them as implementation matrices.
Definition 2. We call $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u)$ the implemented closed-loop maps of the controller $(\mathbf{R}_c, \mathbf{M}_c)$.

The implemented closed-loop maps are found by combining (3) and (1) as done in [10]:

$$\begin{bmatrix} \tilde{\mathbf{\Phi}}_x \\ \tilde{\mathbf{\Phi}}_u \end{bmatrix} = \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \boldsymbol{\Delta}_c^{-1} \quad (8)$$

where $\boldsymbol{\Delta}_c$ is a helper variable defined as

$$\boldsymbol{\Delta}_c = \begin{bmatrix} zI - A & -B \end{bmatrix} \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \quad (9)$$

Note that $\boldsymbol{\Delta}_c$ can also be written as $I + \boldsymbol{\Delta}$. This is the same formulation used by (4.22) in [7], modulo notational differences (we use $\mathbf{R}_c$ and $\mathbf{M}_c$ instead of $\hat{\mathbf{\Phi}}_x$, $\hat{\mathbf{\Phi}}_u$). $\boldsymbol{\Delta}_c$ is invertible since its leading spectral element, $I$, is invertible.

Our analysis largely focuses on closed-loop maps $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ instead of the controller $\mathbf{K}$. However, due to the one-to-one mapping between controller and closed-loop maps, we can also view $(\mathbf{R}_c, \mathbf{M}_c)$ as implementation matrices for the controller $\mathbf{K} = \mathbf{\Phi}_u \mathbf{\Phi}_x^{-1}$.
Theorem 2. For $\mathbf{R}_c(1) = I$, $(\mathbf{R}_c, \mathbf{M}_c)$ are implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ if and only if they satisfy

$$\begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} = \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} \begin{bmatrix} zI - A & -B \end{bmatrix} \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \quad (10)$$

Proof. Necessity. If $(\mathbf{R}_c, \mathbf{M}_c)$ are implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, then we require

$$\begin{bmatrix} \tilde{\mathbf{\Phi}}_x \\ \tilde{\mathbf{\Phi}}_u \end{bmatrix} = \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} \quad (11)$$

Substituting (8) into (11) and multiplying by $\boldsymbol{\Delta}_c$, then writing out $\boldsymbol{\Delta}_c$ in terms of $(A, B, \mathbf{R}_c, \mathbf{M}_c)$, gives (10).

Sufficiency. If $(\mathbf{R}_c, \mathbf{M}_c)$ satisfy (10), we can substitute (10) into (8) to conclude that $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u) = (\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, i.e. $(\mathbf{R}_c, \mathbf{M}_c)$ are implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$.

This constraint describes an affine subspace of implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$.
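In spectral components, membership in this affine subspace can be checked by computing the equation error of (10) via a finite convolution. A sketch under the list-of-components convention from Section II; the helper names are ours:

```python
import numpy as np

def delta_components(A, B, Rc, Mc):
    """Spectral components Delta_c(0..Tc) of (9), with Rc[0] = R_c(1) = I."""
    n = A.shape[0]
    Tc = len(Rc)
    Dc = [Rc[0]]                                  # Delta_c(0) = R_c(1)
    for k in range(1, Tc + 1):
        nxt = Rc[k] if k < Tc else np.zeros((n, n))
        Dc.append(nxt - A @ Rc[k - 1] - B @ Mc[k - 1])
    return Dc

def eq10_residual(A, B, Phix, Phiu, Rc, Mc):
    """Blockwise equation error of (10); zero iff (Rc, Mc) implement (Phix, Phiu)."""
    n, m = B.shape
    T, Tc = len(Phix), len(Rc)
    Dc = delta_components(A, B, Rc, Mc)
    blocks = []
    for k in range(1, Tc + T + 1):                # spectral convolution [Phi] * Delta_c
        cx = sum(Phix[j - 1] @ Dc[k - j] for j in range(max(1, k - Tc), min(T, k) + 1))
        cu = sum(Phiu[j - 1] @ Dc[k - j] for j in range(max(1, k - Tc), min(T, k) + 1))
        lx = Rc[k - 1] if k <= Tc else np.zeros((n, n))
        lu = Mc[k - 1] if k <= Tc else np.zeros((m, n))
        blocks.append(np.vstack([lx - cx, lu - cu]))
    return np.vstack(blocks)
```

Corollary 2.2 below can be sanity-checked with this helper: passing valid FIR closed-loop maps themselves as (Rc, Mc) should make the residual vanish.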
Corollary 2.1. If $(\mathbf{R}_c, \mathbf{M}_c)$ are implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, then the first spectral components of $\mathbf{\Phi}_u$ and $\mathbf{M}_c$ are equal, i.e. $M_c(1) = \Phi_u(1)$.

This equivalence arises directly from writing (10) in terms of its spectral elements.
Corollary 2.2. For $T_c \geq T$, $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ are implementation matrices for themselves.

$(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ are used as implementation matrices in [7].
Corollary 2.3. If $(\mathbf{R}_c, \mathbf{M}_c)$ are implementation matrices for $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, then $\mathbf{K} = \mathbf{\Phi}_u \mathbf{\Phi}_x^{-1} = \mathbf{M}_c \mathbf{R}_c^{-1}$.
C. Existence of solutions

To better understand the dimension of the space of implementation matrices, we rearrange the constraint (10) so that the variables $(\mathbf{R}_c, \mathbf{M}_c)$ appear on only one side of the constraint. Rewrite $\boldsymbol{\Delta}_c$ in block-matrix form:

$$\begin{bmatrix} \Delta_c(0) \\ \Delta_c(1) \\ \vdots \\ \Delta_c(T_c) \end{bmatrix} = \begin{bmatrix} I & & & & & \\ -A & I & & -B & & \\ & \ddots & \ddots & & \ddots & \\ & & -A & & & -B \end{bmatrix} \begin{bmatrix} R_c(1) \\ \vdots \\ R_c(T_c) \\ M_c(1) \\ \vdots \\ M_c(T_c) \end{bmatrix} \quad (12)$$

Rewrite the right-hand side of (10) in block-matrix form:

$$\begin{bmatrix} R_c(1) \\ \vdots \\ R_c(T_c) \\ 0 \\ \vdots \end{bmatrix} = \begin{bmatrix} \Phi_x(1) & & \\ \Phi_x(2) & \ddots & \\ \vdots & \ddots & \\ \Phi_x(T) & & \Phi_x(1) \\ & \ddots & \vdots \\ & & \Phi_x(T) \end{bmatrix} \begin{bmatrix} \Delta_c(0) \\ \Delta_c(1) \\ \vdots \\ \Delta_c(T_c) \end{bmatrix} \quad (13)$$

We show only the formulation for $\mathbf{R}_c$; the formulation for $\mathbf{M}_c$ is identical but with $\mathbf{\Phi}_u$ and $\mathbf{M}_c$ instead of $\mathbf{\Phi}_x$ and $\mathbf{R}_c$. Using the block-matrix formulations, we can rearrange (10) into a constraint of the form

$$F v = G \quad (14a)$$
$$v = \begin{bmatrix} R_c(2) \\ \vdots \\ R_c(T_c) \\ M_c(1) \\ \vdots \\ M_c(T_c) \end{bmatrix} \quad (14b)$$

where $F$ and $G$ are matrices that do not depend on $\mathbf{R}_c$ and $\mathbf{M}_c$. The total number of constraints is $(T_c + T)(m + n)$.
Lemma 2.1. The implementation constraints (as defined in (10)) are feasible if and only if $\operatorname{rank}(F) = \operatorname{rank}(F \,|\, G)$. If feasible, the solution space has dimension $\dim(\operatorname{null}(F)) \times n$, where $n$ is the number of states in the system.
Proof. This result is a direct application of the Rouché–Capelli theorem to the linear system defined in (14).

Corollary 2.2 states that (10) has at least one solution for $T_c \geq T$. When $T_c < T$, we can check the rank of $F$ and $[F \,|\, G]$ and calculate the dimension of the solution space if it exists.
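Since the unknowns in (10) enter only through left multiplication, the constraint decouples across state columns, and $F$ and $G$ can be assembled numerically by probing the residual map with unit vectors. A sketch of the Lemma 2.1 rank test, reusing `eq10_residual` from the earlier sketch:

```python
import numpy as np

def implementation_feasibility(A, B, Phix, Phiu, Tc):
    """Build F and G of (14) by probing the affine map (10) per state column,
    then apply the Rouche-Capelli rank test of Lemma 2.1."""
    n, m = B.shape
    nv = (Tc - 1) * n + Tc * m               # unknowns per column of v, eq. (14b)

    def pack(col):
        # Place one column of the stacked unknowns into (Rc, Mc); R_c(1) = I fixed.
        Rc = [np.eye(n)] + [np.zeros((n, n)) for _ in range(Tc - 1)]
        Mc = [np.zeros((m, n)) for _ in range(Tc)]
        i = 0
        for k in range(1, Tc):
            Rc[k][:, 0] = col[i:i + n]; i += n
        for k in range(Tc):
            Mc[k][:, 0] = col[i:i + m]; i += m
        return Rc, Mc

    base = eq10_residual(A, B, Phix, Phiu, *pack(np.zeros(nv)))
    F = np.column_stack([eq10_residual(A, B, Phix, Phiu, *pack(e))[:, 0] - base[:, 0]
                         for e in np.eye(nv)])
    G = -base                                 # one right-hand side per state column
    rF = np.linalg.matrix_rank(F)
    feasible = rF == np.linalg.matrix_rank(np.hstack([F, G]))
    return feasible, (nv - rF) * n            # dim(null(F)) x n, as in Lemma 2.1
```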
IV. STABILITY

A. Internal dynamics

The system is internally stable if the dynamics of $\hat{\delta}$, the internal signal, are stable. By substituting (3) into (1) and rearranging, we can obtain internal dynamics of the form

$$z_t = \begin{bmatrix} \hat{\delta}_{t-T_c+1} \\ \vdots \\ \hat{\delta}_{t-1} \\ \hat{\delta}_t \end{bmatrix}, \quad z_{t+1} = A_z z_t \quad (15a)$$

$$A_z = \begin{bmatrix} 0 & I & & \\ & \ddots & \ddots & \\ & & & I \\ -\Delta_c(T_c) & \cdots & & -\Delta_c(1) \end{bmatrix} \quad (15b)$$
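Given the spectral components of $\boldsymbol{\Delta}_c$ (computable as in the Section III sketch), the companion form (15b) assembles directly; a minimal sketch:

```python
import numpy as np

def build_Az(Dc):
    """Companion matrix (15b) from Delta_c components Dc[0..Tc] (Dc[0] = I)."""
    n = Dc[0].shape[0]
    Tc = len(Dc) - 1
    Az = np.zeros((Tc * n, Tc * n))
    Az[:-n, n:] = np.eye((Tc - 1) * n)                 # identity blocks above the diagonal
    for k in range(1, Tc + 1):
        Az[-n:, (Tc - k) * n:(Tc - k + 1) * n] = -Dc[k]  # last block row: -Delta_c(k)
    return Az
```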
B. Stability check

We can verify internal stability a posteriori by checking that $A_z$ is stable. Alternatively, a sufficient condition for internal stability is $\|\boldsymbol{\Delta}\| < 1$ [7]. The stability of $A_z$ can be checked in a distributed manner. First, a helpful proposition:
Proposition 2. Let $\|\cdot\|$ be an induced matrix norm. For $A \in \mathbb{R}^{n \times n}$, if $\exists\, m > 0$ s.t. $\|A^m\| < 1$, then $A$ is stable.
Proof. Let $\rho = \|A^m\|^{1/m}$, $\rho \in [0, 1)$. Using norm submultiplicativity and some algebra, we can show that $\forall t > m$, $\|A^t\| \leq C\rho^t$ where $C$ is some constant. Using this upper bound and induced norm properties, we can show that $\forall x_0 \in \mathbb{R}^n$, $\lim_{t \to \infty} \|A^t x_0\| = 0$. This is the definition of stability in the discrete time setting.

Let each processor store $A_z$ and some columns of $A_z^k$, denoted $A_z^k(i{:}j)$. Overall, every column of $A_z^k$ is stored on some processor. The stability check procedure is as follows, starting with $k = 1$:

1) Calculate $A_z^k(i{:}j)$ by multiplying $A_z$ and $A_z^{k-1}(i{:}j)$.
2) Check the induced 1-to-1 norm of $A_z^k(i{:}j)$.
3) Consensus on whether a termination condition has been met. If no termination condition is met, increment $k$ and return to Step 1.

The clear termination condition is $\|A_z^k\| < 1$; then, $A_z$ is certified to be stable by Proposition 2. We suggest two additional termination conditions:

• $\|A_z^k\| > M$, where $M$ is some predetermined threshold. Since $\|A_z^k\|$ corresponds to the amplitude of the transient response, this termination condition corresponds to finding an unacceptably large transient response.
• $k > k_{\max}$, where $k_{\max}$ is some predetermined maximum number of iterations.

Both conditions would indicate that the stability check failed to certify stability. Since we select a column-wise separable norm, the entire procedure can be distributed. The complexity per iteration scales quadratically with $n$, under the conservative assumption that each node has at least one processor. For the system in Section VII, this procedure certifies stability in 7 iterations for the low-order controller and 32 iterations for the full-order controller.
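A centralized stand-in for this procedure (each processor would run the same update on its own block of columns) might look like the following sketch; the threshold names mirror the termination conditions above:

```python
import numpy as np

def certify_stability(Az, k_max=100, M=1e6):
    """Iterated norm test from Proposition 2 using the induced 1-to-1 norm,
    which is column-wise separable. Returns (certified, iterations)."""
    P = np.eye(Az.shape[0])
    for k in range(1, k_max + 1):
        P = Az @ P                               # P = Az^k
        norm1 = np.abs(P).sum(axis=0).max()      # induced 1-norm: max column sum
        if norm1 < 1:
            return True, k                       # stable by Proposition 2
        if norm1 > M:
            return False, k                      # unacceptably large transient; give up
    return False, k_max                          # iteration budget exhausted
```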
V. APPROXIMATE IMPLEMENTATIONS

The solution space defined by (10), although it exists for $T_c \geq T$, often yields solutions that are unstable. Further, Corollary 2.1 gives a fundamental limit on the sparsity of $\mathbf{M}_c$. If $\Phi_u(1)$ is dense, we cannot find implementation matrices that support any type of sparsity (e.g. communication delay, locality). These necessitate relaxations of (10). For a relaxed implementation, we want the implemented closed-loop maps $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u)$ to be as close to the optimal closed-loop maps $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ as possible while maintaining internal stability, i.e.

$$\min_{\mathbf{R}_c, \mathbf{M}_c} \left\| \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} (I + \boldsymbol{\Delta})^{-1} - \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} \right\| \;\; \text{s.t.} \;\; (I + \boldsymbol{\Delta})^{-1} \text{ stable}, \;\; \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \in \mathcal{S} \quad (16)$$

where $\mathcal{S}$ includes sparsity and FIR constraints, and $I + \boldsymbol{\Delta} = \boldsymbol{\Delta}_c$. This optimization problem is clearly nonconvex. Factoring the objective function as

$$\left\| \left( \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} - \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} (I + \boldsymbol{\Delta}) \right) (I + \boldsymbol{\Delta})^{-1} \right\| \quad (17)$$

and using similar submultiplicativity, small-gain, and power series arguments as Section 4.5.1 of [7], we can upper bound the optimization problem (16) with this quasi-convex problem:

$$\min_{\gamma \in [0,1)} \frac{1}{1-\gamma} \min_{\mathbf{R}_c, \mathbf{M}_c, \boldsymbol{\Delta}} \left\| \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} - \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} (I + \boldsymbol{\Delta}) \right\| \;\; \text{s.t.} \;\; \begin{bmatrix} zI - A & -B \end{bmatrix} \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} = I + \boldsymbol{\Delta}, \;\; \|\boldsymbol{\Delta}\| \leq \gamma, \;\; \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \in \mathcal{S} \quad (18)$$

This is similar to the virtualized SLS method [9], [7], with one key difference. For an objective $g(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, the virtualized SLS method uses $g(\mathbf{R}_c, \mathbf{M}_c)$ as the objective, while our two-step method uses

$$\left\| \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} - \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} (I + \boldsymbol{\Delta}) \right\| \quad (19)$$

as the objective. This is the equation error for (10), and is a heuristic for the closed-loop difference.

The nested optimization problem defined by (18) is time-consuming to solve; it can also be mathematically infeasible if the sparsity constraints $\mathcal{S}$ are too strict. We instead solve (20), which is much quicker and uses a regularizer on $\boldsymbol{\Delta}$ to promote stability. We suggest starting with a small $\lambda$, solving (20), checking for stability using the distributed method presented in Section IV-B, and increasing $\lambda$ if the stability check is failed. Alternatively, we can enforce $\|\boldsymbol{\Delta}\| < 1$.

$$\min_{\mathbf{R}_c, \mathbf{M}_c, \boldsymbol{\Delta}} \left\| \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} - \begin{bmatrix} \mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix} (I + \boldsymbol{\Delta}) \right\| + \lambda \|\boldsymbol{\Delta}\| \;\; \text{s.t.} \;\; \begin{bmatrix} zI - A & -B \end{bmatrix} \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} = I + \boldsymbol{\Delta}, \;\; \begin{bmatrix} \mathbf{R}_c \\ \mathbf{M}_c \end{bmatrix} \in \mathcal{S} \quad (20)$$

We can also include additional objectives in (20), e.g. $L_1$ regularization on $(\mathbf{R}_c, \mathbf{M}_c)$ to promote sparsity. The optimization problem (20) is column-wise separable if we choose a column-wise separable norm for the objective (e.g. $\mathcal{H}_2$ norm). Like the original SLS problem, it can be decomposed into subproblems to be solved in parallel.
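A convex sketch of (20) in CVXPY, using squared $\mathcal{H}_2$ norms for both the equation error and the regularizer, and encoding $\mathcal{S}$ as boolean supports on $R_c(k)$ and $(B M_c)(k)$ as in (21)-(22) below; the helper name, the squared norms, and the support encoding are our assumptions, not the paper's MATLAB implementation:

```python
import cvxpy as cp
import numpy as np

def relaxed_implementation(A, B, Phix, Phiu, Tc, supp_R, supp_BM, lam=1.0):
    """Sketch of (20). supp_R[k-1], supp_BM[k-1]: boolean supports for
    R_c(k) and (B M_c)(k). Phix, Phiu: spectral components of the targets."""
    n, m = B.shape
    T = len(Phix)
    Rc = [np.eye(n)] + [cp.Variable((n, n)) for _ in range(Tc - 1)]  # R_c(1) = I
    Mc = [cp.Variable((m, n)) for _ in range(Tc)]
    # Equality constraint of (20): I + Delta = [zI - A, -B][Rc; Mc], cf. (9)
    Delta = []
    for k in range(1, Tc + 1):
        nxt = Rc[k] if k < Tc else np.zeros((n, n))
        Delta.append(nxt - A @ Rc[k - 1] - B @ Mc[k - 1])
    Dc = [np.eye(n)] + Delta                      # spectral components of I + Delta
    # Objective: equation error of (10) plus the stability regularizer on Delta
    obj = lam * sum(cp.sum_squares(D) for D in Delta)
    for k in range(1, Tc + T + 1):
        cx = sum(Phix[j - 1] @ Dc[k - j] for j in range(max(1, k - Tc), min(T, k) + 1))
        cu = sum(Phiu[j - 1] @ Dc[k - j] for j in range(max(1, k - Tc), min(T, k) + 1))
        lx = Rc[k - 1] if k <= Tc else np.zeros((n, n))
        lu = Mc[k - 1] if k <= Tc else np.zeros((m, n))
        obj += cp.sum_squares(lx - cx) + cp.sum_squares(lu - cu)
    cons = []
    for k in range(Tc):                           # sparsity constraints in S
        if k > 0:                                 # R_c(1) = I is not a variable
            cons.append(cp.multiply(1.0 - supp_R[k], Rc[k]) == 0)
        cons.append(cp.multiply(1.0 - supp_BM[k], B @ Mc[k]) == 0)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return [Rc[0]] + [R.value for R in Rc[1:]], [M.value for M in Mc]
```

With the squared $\mathcal{H}_2$ objective, the problem separates over state columns, matching the separability remark above.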
VI. CLOSED-LOOP CONSTRAINTS VS. CONTROLLER CONSTRAINTS

In this section, we discuss the physical interpretation of separately applying locality and delay constraints to the closed-loop and to the controller, and when such constraints are appropriate. This separation is not possible in standard SLS, since the closed-loop maps themselves are used as implementation matrices for the controller. First, a result on how applying controller constraints on the closed-loop maps can be overly restrictive:

Lemma 2.2. Let $\mathbf{K}$ be the controller corresponding to the closed-loop maps $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$. Then, the operator $\mathbf{\Phi}_u$ lies in the range of the operator $\mathbf{K}$.
Proof. By Theorem 1, we have that $\mathbf{K}\mathbf{\Phi}_x = \mathbf{\Phi}_u$.

Lemma 2.2 shows that sparsity constraints (e.g. locality, delay) on $\mathbf{K}$ will translate to sparsity constraints on $\mathbf{\Phi}_u$, but not $\mathbf{\Phi}_x$; directly applying these constraints on $\mathbf{\Phi}_x$ may be too restrictive. Note that although it is also true that $\mathbf{K}\mathbf{R}_c = \mathbf{M}_c$, both $\mathbf{M}_c$ and $\mathbf{R}_c$ must obey sparsity constraints as they are directly used in the implementation.
A. Locality

Let $L(i)$ denote the locality of node $i$. Generally, $L(i)$ consists of the $l$ closest neighbours of node $i$ in the network. Locality constraints restrict spectral components of $\mathbf{R}_c$ and $\mathbf{M}_c$ (or $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$) to have nonzero support only over the allowed localities, i.e.

$$R_c(k)^{i,j} = 0 \;\; \forall j \notin L(i), \qquad (B M_c(k))^{i,j} = 0 \;\; \forall j \notin L(i) \quad (21)$$

where $B$ is the actuation matrix of the system. For a system with nodes arranged in a chain configuration and $L(i)$ equal to the $l$ closest neighbours of node $i$, these constraints result in banded diagonal $R_c(k)$ and $M_c(k)$ with a band width of $l + 1$ $\forall k$.

When we apply locality constraints on the implementation matrices as per (21), we enforce that node $i$ will only communicate with nodes in $L(i)$ for all time. When we apply locality constraints on the closed-loop maps (i.e. replace $\mathbf{R}_c$ and $\mathbf{M}_c$ in (21) with $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$), we limit how far a disturbance at a node spreads before it is contained. While both are useful, controller locality tends to be a hard constraint that arises from physical limitations in the communication network, while closed-loop locality is a soft constraint that can be relaxed.
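For the chain example, the supports in (21) reduce to a band. A small sketch, parametrized by a radius $d$ so that $L(i) = \{j : |i - j| \leq d\}$ (our simplification of the neighbour count $l$):

```python
import numpy as np

def locality_mask(n, d):
    """mask[i, j] = True iff node j is within distance d of node i on a chain.
    Zeroing entries of R_c(k) and (B M_c)(k) outside the mask enforces (21)."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= d
```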
B. Delay

Let $d(i,j)$ denote the delay from node $j$ to node $i$. In general, $d(i,j)$ is proportional to the distance between nodes $i$ and $j$. Delay constraints are like time-varying locality constraints with an expanding locality, where $L(i)$ at time $k$ contains all nodes $j$ for which $k \geq d(i,j)$. Delay constraints are enforced as follows:

$$R_c(k)^{i,j} = 0 \;\; \forall k < d(i,j), \qquad (B M_c(k))^{i,j} = 0 \;\; \forall k < d(i,j) \quad (22)$$

where $B$ is the actuation matrix of the system. For a system in a chain configuration and $d(i,j)$ proportional to inter-nodal distance, these constraints result in banded diagonal $R_c(k)$ and $M_c(k)$, with wider bands for higher values of $k$.

When we apply delay constraints on the implementation matrices as per (22), we are ensuring that controllers do not require information that cannot be communicated to them in time. For example, node $i$ cannot use any information about node $j$ that is more recent than $t - d(i,j)$. When we apply delay constraints on the closed-loop maps (i.e. replace $\mathbf{R}_c$ and $\mathbf{M}_c$ in (22) with $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$), we limit how fast a disturbance at node $j$ propagates to the state and input at node $i$. As with locality, the controller delay constraint tends to be a hard constraint arising from physical communication limitations. Unlike in the locality case, the closed-loop delay constraint serves no clear purpose; by separating the controller design from the closed-loop design, we avoid imposing this unnecessary constraint on the closed-loop map.
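The delay supports in (22) expand with the spectral index $k$. A sketch for the chain, assuming $d(i,j) = |i - j| / \text{speed}$ (proportionality is stated above; the exact constant is our assumption):

```python
import numpy as np

def delay_masks(n, Tc, speed=1.0):
    """masks[k-1][i, j] = True iff entry (i, j) of R_c(k) (and of (B M_c)(k))
    may be nonzero under (22), i.e. k >= d(i, j) with d(i, j) = |i - j| / speed."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :]) / speed
    return [k >= d for k in range(1, Tc + 1)]
```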
C. Delay and locality as optimization objectives

We can augment the objective in (20) with the following terms to encourage tolerance for communication delay:

$$\sum_{k=1}^{T_c} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{\operatorname{dist}(i,j) - k} \left( \|R_c(k)^{i,j}\| + \|(B M_c(k))^{i,j}\| \right) \quad (23)$$

where $\operatorname{dist}(i,j)$ is the distance between nodes $i$ and $j$ in the network. We can encourage tolerance for communication locality by using similar terms (note the removal of $k$ from the exponential weight):

$$\sum_{k=1}^{T_c} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{\operatorname{dist}(i,j)} \left( \|R_c(k)^{i,j}\| + \|(B M_c(k))^{i,j}\| \right) \quad (24)$$

Again taking the chain configuration as an example, these terms encourage banded-diagonal $R_c(k)$ and $M_c(k)$ with higher penalties on elements farther away from the diagonal. Elements that survive despite heavy penalty represent edges in the network that require fast communication in order to best preserve the desired closed-loop map. A sketch of the delay penalty (23) follows.
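Evaluated on fixed spectral components, (23) is a weighted elementwise sum; a NumPy sketch, using absolute values for the elementwise norms (our choice), with the indicated one-line change giving the locality variant (24):

```python
import numpy as np

def delay_penalty(Rc, Mc, B, dist):
    """(23): sum_k sum_ij e^{dist(i,j)-k} (|R_c(k)^{ij}| + |(B M_c(k))^{ij}|).
    dist: (n, n) matrix of inter-nodal distances."""
    total = 0.0
    for k, (R, M) in enumerate(zip(Rc, Mc), start=1):
        W = np.exp(dist - k)                    # use np.exp(dist) for (24) instead
        total += float(np.sum(W * (np.abs(R) + np.abs(B @ M))))
    return total
```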
VII. EXAMPLES

All subsequent analysis was done on MATLAB using the cvx toolbox with SDPT3 on the low precision setting. The optimization was done on a laptop with an Intel i7 processor and 8GB of RAM. The system we work with is a 10-node chain with a tridiagonal $A$ matrix (25). The system has three actuators, located at nodes 3, 6, and 10. The system is marginally stable, with a spectral radius of 1. General observations below extend to larger chains with similarly sparse actuation.

A. Low-norm centralized controllers

We first synthesize a desired closed-loop map via SLS, with no communication or locality constraints. We use an FIR horizon of $T = 20$ and an LQR objective. We then synthesize unconstrained controllers using (20) with an additional $L_1$ regularization term on $(\mathbf{R}_c, \mathbf{M}_c)$. We synthesize controllers with order ranging from $T_c = 2$ to $T_c = 25$.
Fig. 2. Closed-loop differences, spectral radii of internal dynamics, and $L_1$ norms for controllers with varying $T_c$.

Fig. 2 shows the differences between the desired closed-loop maps $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ and the implemented closed-loop maps $(\tilde{\mathbf{\Phi}}_x, \tilde{\mathbf{\Phi}}_u)$, normalized by $\|\mathbf{\Phi}_x\|$ and $\|\mathbf{\Phi}_u\|$, respectively. As expected, the closed-loop differences decrease with increasing $T_c$. Interestingly, we are able to approximate the system relatively well even for $T_c \ll T$; at $T_c = 2$, we are less than 10% away from the optimal closed-loop map.

Fig. 2 also shows the spectral radii of $A_z$. The spectral radius of the original controller is far lower than that of the new controllers, suggesting a possible tradeoff between controller norm and internal stability margins. All implementations are internally stable, and spectral radius remains relatively constant over $T_c$.

Lastly, Fig. 2 shows the $L_1$ norms of the implementation matrices. All new controllers have significantly lower norm than the original controller, and $L_1$ norm remains almost constant over $T_c$.
B. Localized LQR controller

In this example, separating closed-loop synthesis from controller synthesis yields much better results than the original synthesis procedure, in which controller and closed-loop synthesis are coupled. The objective of this example is to synthesize a controller with an LQR objective and FIR horizon of $T = 20$. An SLS formulation of LQR can be found in [11]. The following constraints must be obeyed: the controller at each node is only allowed to use information from its two neighbouring nodes, and communication speed is restricted to be the same speed as propagation speed.

Directly applying the constraints to the closed-loop map renders the standard SLS problem infeasible ("Constrained CL map" in Table I); the algorithm cannot find a controller that meets the constraints. We use the virtual localization technique introduced in [9] to synthesize a controller that meets these constraints ("Virtually local" in Table I), while relaxing the constraints on the closed-loop map.

We then apply our proposed two-step procedure. First, we synthesize the desired closed-loop maps $(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$ via SLS without communication and locality constraints. We use these closed-loop maps to implement a centralized controller for comparison purposes ("FIR centralized" in Table I). We then synthesize a controller subject to the communication and locality constraints ("Two-step" in Table I), using (20) with $L_1$ regularization. We synthesize one low-order controller with order $T_c = 2$, and one full-order controller with $T_c = T$. For all controllers, we evaluate the LQR cost, spectral radius of the internal dynamics, and $L_1$ norm of the implementation matrices. The LQR cost is normalized by the optimal infinite-horizon LQR cost. Results are shown in Table I.

TABLE I
COMPARISON OF LQR COSTS

Controller            LQR cost     Spectral radius   L1 norm
FIR centralized       1.001        0.214             9.688
Constrained CL map    Infeasible   –                 –
Two-step, Tc = T      ≈ 1.03       –                 –
Two-step, Tc = 2      ≈ 1.03       –                 –
Virtually local       1.294        0.847             9.704

In this example, both the full-order and low-order controller ("Two-step") give an LQR cost increase of about 3% over the optimal infinite-horizon controller. In contrast, the virtually local controller incurs a cost increase of nearly 30%. All synthesized controllers are internally stable, with spectral radius less than one. The centralized controller has a lower spectral radius than the constrained controllers, which have comparable spectral radii. Additionally, both of our controllers are able to attain an $L_1$ norm that is very close to the $L_1$ norm achieved in the previous example, despite much more severe constraints. Overall, our proposed two-step synthesis procedure generates a controller that performs better than the controller generated by existing techniques, without sacrificing internal stability margins.

Interestingly, the low-order controller performs almost as well as the full-order controller, with only 0.1% performance degradation. This suggests that in this case, highly delayed information (which corresponds to higher-order terms of the implementation matrices) is not very useful to the controller.

VIII. CONCLUSIONS AND FUTURE WORK

By separating controller synthesis from closed-loop synthesis, we are able to apply constraints to the controller without unnecessarily limiting the closed-loop map. As demonstrated above, our proposed two-step procedure offers benefits over the original single-step procedure. This procedure offers a new perspective on system-level controller design, and an alternative approach for regimes in which standard SLS is infeasible. In future work, we would like to better understand how our method relates to the existing work on virtually localized SLS, and which types of problems each method is better suited to. Additionally, we would like to extend this work to the output feedback case. Synthesis methods mentioned in this paper can be found in the SLS-MATLAB toolbox at https://github.com/sls-caltech/sls-code.

REFERENCES

[1] Y. C. Ho and K. C. Chu, "Team Decision Theory and Information Structures in Optimal Control Problems - Part I," IEEE Transactions on Automatic Control, vol. 17, no. 1, pp. 15–22, 1971.
[2] A. Mahajan, N. C. Martins, M. C. Rotkowitz, and S. Yuksel, "Information structures in optimal decentralized control," in Proceedings of the IEEE Conference on Decision and Control, 2012, pp. 1291–1306.
[3] M. Rotkowitz and S. Lall, "A characterization of convex problems in decentralized control," IEEE Transactions on Automatic Control, vol. 50, no. 12, pp. 1984–1996, 2005.
[4] B. Bamieh, F. Paganini, and M. A. Dahleh, "Distributed control of spatially invariant systems," IEEE Transactions on Automatic Control, vol. 47, no. 7, pp. 1091–1107, 2002.
[5] B. Bamieh and P. G. Voulgaris, "A convex characterization of distributed control problems in spatially invariant systems with communication constraints," Systems and Control Letters, vol. 54, no. 6, pp. 575–583, 2005.
[6] A. Nayyar, A. Mahajan, and D. Teneketzis, "Decentralized stochastic control with partial history sharing: A common information approach," IEEE Transactions on Automatic Control, vol. 58, no. 7, pp. 1644–1658, 2013.
[7] J. Anderson, J. C. Doyle, S. H. Low, and N. Matni, "System level synthesis," Annual Reviews in Control, vol. 47, pp. 364–393, 2019.
[8] Y. S. Wang, N. Matni, and J. C. Doyle, "Separable and Localized System-Level Synthesis for Large-Scale Systems," IEEE Transactions on Automatic Control, vol. 63, no. 12, pp. 4234–4249, 2018.
[9] N. Matni, Y. S. Wang, and J. Anderson, "Scalable system level synthesis for virtually localizable systems," in Proceedings of the IEEE Conference on Decision and Control, 2018, pp. 3473–3480.
[10] D. Ho and J. C. Doyle, "Scalable Robust Adaptive Control from the System Level Perspective," 2019. [Online]. Available: http://arxiv.org/abs/1904.00077
[11] Y. S. Wang, N. Matni, and J. C. Doyle, "Localized LQR optimal control," in Proceedings of the IEEE Conference on Decision and Control, 2014.