Minimum Cost Feedback Selection in Structured Systems: Hardness and Approximation Algorithm
Aishwary Joshi, Shana Moothedath and Prasanna Chaporkar
Abstract—In this paper, we study output feedback selection in linear time invariant structured systems. We assume that the inputs and the outputs are dedicated, i.e., each input directly actuates a single state and each output directly senses a single state. Given a structured system with dedicated inputs and outputs and a cost matrix that denotes the cost of each feedback connection, our aim is to select an optimal set of feedback connections such that the closed-loop system satisfies arbitrary pole-placement. This problem is referred to as the optimal feedback selection problem for dedicated i/o. We first prove the NP-hardness of the problem using a reduction from a well known NP-hard problem, the weighted set cover problem. In addition, we prove that the optimal feedback selection problem for dedicated i/o is inapproximable to a multiplicative factor of log n, where n denotes the system dimension. To this end, we propose an algorithm to find an approximate solution to the optimal feedback selection problem for dedicated i/o. The proposed algorithm consists of a potential function incorporated with a greedy scheme and attains a solution with a guaranteed approximation ratio. We then consider two special network topologies of practical importance, referred to as back-edge feedback structures and hierarchical networks. For the first case, which is NP-hard and inapproximable to a multiplicative factor of log n, we provide a (log n)-approximate solution, where n denotes the system dimension. For hierarchical networks, we give a dynamic programming based algorithm to obtain an optimal solution in polynomial time.

Index Terms—Linear dynamical systems, arbitrary pole-placement, network analysis and control, minimum cost feedback selection, dynamic programming, hierarchical networks.
I. INTRODUCTION
The emergence of large-scale networks as physical models capturing the structural properties of real networks presents new challenges in design, control and optimization. Large-scale dynamical systems have applications in diverse areas, including biological networks, transportation networks, water distribution networks, multi-agent systems and the internet. Most real-world networks are so complex and of such large dimension that employing conventional control-theoretic tools to analyse various properties of these systems is computationally infeasible. Recently, there has been immense research advance in the area of large-scale dynamical systems, collectively using concepts from various interdisciplinary fields including control theory, network science and statistical physics. These studies emphasise the relationship between the topology and the dynamics of complex networks.

This paper deals with feedback selection in linear time invariant (LTI) systems. The feedback selection problem is a classical problem in control theory which has resisted much advance due to its inherent hardness. We address the feedback selection problem for a complex system whose graph pattern is known and whose parameter values are unknown.
The authors Aishwary Joshi and Prasanna Chaporkar are with the Department of Electrical Engineering, Indian Institute of Technology Bombay, India, and Shana Moothedath is with the University of Washington, Seattle. Email: {aishwary, chaporkar}@ee.iitb.ac.in, {sm15}@uw.edu.

More specifically, this paper discusses optimal feedback selection for structured LTI systems.
Given a structured system with specified state, input and output structures and a cost matrix that denotes the cost of each feedback connection, our objective is to design an optimal feedback matrix that achieves arbitrary pole-placement of the closed-loop system.
The cost associated with the feedback connections comes from the installation and monitoring cost associated with the network. The motivation for this problem comes from the recent interest and developments in the control of large-scale systems modeled with a very large number of variables, where implementing control strategies that affect all or many of the variables in the system is not economical, or rather not feasible.

Structural analysis of dynamical systems has been a well studied area since the introduction of structural controllability (see [1], [2], [3], [4], [5] as representatives). The strength of this analysis lies in the fact that many structural properties are 'generic' in nature, i.e., these properties hold for almost all systems with the same structure [2], [6]. Over the last few decades, various design and optimization problems in complex networks have been addressed using structural analysis in many papers. These papers mainly use concepts of bipartite matching and graph connectivity. For a detailed reading on various problems in this area, see [7] and references therein.

Optimal feedback selection for structured systems has been addressed previously in many papers [8]. Given a structured state matrix, an optimal input-output and feedback co-design problem is addressed in [5]. As the structures of the input, output and feedback matrices are unconstrained, the problem considered in [5] is solvable in polynomial time. Paper [9] considered the input-output and feedback co-design problem for constrained input, output and feedback structures. This problem turns out to be NP-hard, as a subproblem, namely the constrained minimum input selection problem, is NP-hard [9]. Due to the NP-hardness of the problem, the class of irreducible systems is considered in [9]. (A structured system is said to be irreducible if there exists a directed path between any two arbitrary nodes in the state digraph D(Ā); see Section III.) Later in [10], an order-optimal approximation algorithm is given for the input-output and feedback selection co-design problem.

This paper deals with optimal feedback selection for structured systems with specified state, input and output matrices. Note that here the input and output matrices are specified and there is no selection of inputs and outputs. The structure of the feedback pattern is constrained and each feedback edge is associated with a cost. Our aim is to design a feedback matrix that satisfies the prescribed structure and is of minimum cost. Depending on the nature of the inputs and outputs, dedicated and non-dedicated (an input (output, resp.) is said to be dedicated if it can actuate (sense, resp.) a single state only), and on the nature of the costs of the feedback connections, uniform and non-uniform, different formulations of this problem have been addressed before. Table I summarizes the associated results.
TABLE I
ALGORITHMIC COMPLEXITY RESULTS OF THE OPTIMAL FEEDBACK SELECTION PROBLEM

  Input and Output | Uniform feedback costs | Non-uniform feedback costs
  Dedicated        | P [11]                 | NP-hard (this paper)
  Non-dedicated    | NP-hard [12]           | NP-hard [12]

Constrained feedback selection with non-dedicated inputs and outputs is considered in [12]. In [12], the authors show the NP-hardness of the problem for the non-dedicated i/o case and later propose a polynomial time algorithm for a special graph topology, the so-called line graphs. The optimal feedback selection problem with dedicated inputs and outputs and uniform cost feedback edges is considered in [13], [11], and a polynomial time algorithm is given in [11].
In this paper, we consider optimal feedback selection for dedicated inputs and outputs and non-uniform cost feedback edges.

Remark 1.
The NP-hardness result for the non-dedicated i/o case relies heavily on the non-dedicated nature of the i/o's, and hence the NP-hardness proved in [12] does not automatically imply NP-hardness of the special case with dedicated i/o's. Note that, for non-dedicated i/o's, Problem 1 is NP-hard even when the feedback costs are uniform. However, in the uniform cost setting, the dedicated i/o case is solvable in polynomial time [11].
In this scenario, we make the following contributions:
• We prove that the optimal feedback selection problem with a dedicated input-output set and non-uniform cost feedback edges is NP-hard (Theorem 1).
• We prove that the optimal feedback selection problem with dedicated inputs and outputs, and feedback edges with non-uniform cost, is inapproximable to a multiplicative factor of log n, where n denotes the system dimension (Theorem 2).
• We propose an approximation algorithm with a guaranteed approximation ratio for solving the problem (Algorithm 4 and Theorem 4).
• We show that the proposed algorithm has computational complexity polynomial in the number of cycles in the system digraph and the system dimension (Theorem 5).
• We consider a special class of systems with a constraint on the structure of the feedback matrix, referred to as back-edge feedback selection, and propose an approximation algorithm to solve the problem with a guaranteed approximation ratio of log n, where n denotes the system dimension (Algorithm 5 and Theorem 7).
• We consider another special class of systems, referred to as hierarchical networks, and propose a polynomial time algorithm based on dynamic programming to obtain an optimal solution to the problem (Algorithm 6 and Theorem 8).

The organization of the rest of the paper is as follows: Section II gives the formulation of the optimization problem addressed in this paper. Section III describes preliminaries and a few existing results used in the sequel. Section IV analyzes the complexity of the problem and proves the NP-hardness and the inapproximability of the problem. Section V reformulates the problem into a graph theoretic equivalent. Section VI gives an approximation algorithm to solve the problem. Section VII explores two special topologies of structured systems and gives an approximation algorithm and an optimal algorithm, respectively, to solve the two cases. Finally, Section VIII gives the concluding remarks and future directions.

II. PROBLEM FORMULATION
Consider an LTI system ẋ = Ax + Bu, y = Cx, where A ∈ R^{n×n}, B ∈ R^{n×m} and C ∈ R^{p×n}. Here the matrices A, B and C denote the state, input and output matrices, respectively, and R denotes the set of real numbers. The structured matrices Ā, B̄ and C̄ corresponding to this system are such that

  Ā_ij = 0 whenever A_ij = 0, B̄_ij = 0 whenever B_ij = 0, C̄_ij = 0 whenever C_ij = 0.    (1)

For (A, B, C) that satisfy equation (1), (Ā, B̄, C̄) is referred to as the structured system of (A, B, C), and the system (A, B, C) is called a numerical realization of the structured system (Ā, B̄, C̄). Here Ā ∈ {0, ⋆}^{n×n}, B̄ ∈ {0, ⋆}^{n×m} and C̄ ∈ {0, ⋆}^{p×n}. The 0 entries in the structured system correspond to fixed zeros and the ⋆ entries correspond to unrelated indeterminates. Let P ∈ R^{m×p} be a cost matrix, where P_ij denotes the cost of feeding the j-th output to the i-th input. Our objective here is output feedback selection. A feedback edge is said to be infeasible if the corresponding output cannot be fed to the corresponding input. All infeasible feedback connections are assigned infinite cost. In other words, P_ij = ∞ implies that the j-th output cannot be fed to the i-th input, i.e., the feedback edge (y_j, u_i) is infeasible. We define the feedback matrix K̄ ∈ {0, ⋆}^{m×p}, where K̄_ij = ⋆ only if P_ij ≠ ∞. Our aim is to design an optimal output feedback matrix such that the closed-loop system guarantees arbitrary pole-placement. A graph theoretic necessary and sufficient condition for checking whether arbitrary pole-placement is feasible in a structured system is given in [14]. This condition depends on the existence of structurally fixed modes (SFMs) in the closed-loop structured system. Hence, to address the pole-placement problem in structured systems, the concept of SFMs is used in this paper. Let [K̄] := {K : K_ij = 0, if K̄_ij = 0}. Structured systems with no SFMs are defined as follows:

Definition 1.
The structured system (Ā, B̄, C̄) with feedback matrix K̄ is said to have no structurally fixed modes if there exists a numerical realization (A, B, C) of (Ā, B̄, C̄) such that ∩_{K ∈ [K̄]} σ(A + BKC) = ∅, where σ(T) denotes the set of eigenvalues of a square matrix T.

Given a structured system (Ā, B̄, C̄) and a cost matrix P, our aim is to find a minimum cost set of feedback edges such that the closed-loop system, denoted by (Ā, B̄, C̄, K̄), has no SFMs. The set of all feedback matrices K̄ that satisfy the no-SFMs criterion is denoted by K. In other words, K := {K̄ ∈ {0, ⋆}^{m×p} : (Ā, B̄, C̄, K̄) has no SFMs} is the set of all feasible solutions to the optimization problem discussed in this paper. The cost associated with the feedback matrix K̄ is denoted by P(K̄), where P(K̄) = Σ_{(i,j): K̄_ij = ⋆} P_ij. The optimization problem addressed in this paper is given below.

Problem 1. Given a structured system (Ā, B̄ = I_m, C̄ = I_p), find

  K̄⋆ ∈ arg min_{K̄ ∈ K} P(K̄).

Here I_m and I_p denote m dedicated inputs and p dedicated outputs, respectively. A dedicated input is an input which actuates a single state directly, and a dedicated output is an output that senses a single state directly. Thus there is exactly one ⋆ entry in each column of I_m and exactly one ⋆ entry in each row of I_p. Problem 1 is referred to as the optimal feedback selection for dedicated i/o problem. If P(K̄⋆) = ∞, then we say that arbitrary pole-placement is not possible for (Ā, I_m, I_p) and cost matrix P. In the section below, we give a few notations and preliminaries used in the sequel.

III. NOTATIONS, PRELIMINARIES AND EXISTING RESULTS
For describing various graph theoretic conditions used in the analysis of structured systems, we first elaborate on a few notations and constructions. A digraph D(Ā) := (V_X, E_X), where V_X = {x_1, . . . , x_n} and an edge (x_j, x_i) ∈ E_X if Ā_ij = ⋆. The edge (x_j, x_i), directed from x_j towards x_i, implies that state x_j can influence state x_i. Hence the influence of states on other states is captured in the digraph D(Ā). Similarly, we define D(Ā, B̄, C̄) := (V_X ∪ V_Y ∪ V_U, E_X ∪ E_Y ∪ E_U), where V_U = {u_1, . . . , u_m} and V_Y = {y_1, . . . , y_p}. An edge (u_j, x_i) ∈ E_U if B̄_ij = ⋆ and an edge (x_j, y_i) ∈ E_Y if C̄_ij = ⋆. Next, we define D(Ā, B̄, C̄, K̄) := (V_X ∪ V_Y ∪ V_U, E_X ∪ E_Y ∪ E_U ∪ E_K), where a feedback edge (y_j, u_i) ∈ E_K if K̄_ij = ⋆. Thus D(Ā, B̄, C̄, K̄) captures the influence of states, inputs, outputs and feedback connections. The digraphs D(Ā) and D(Ā, B̄, C̄, K̄) are referred to as the state digraph and the closed-loop system digraph, respectively. A digraph is said to be strongly connected if there exists a path from v_i to v_k for each ordered pair of vertices (v_i, v_k) in the digraph. A strongly connected component (SCC) is a subgraph that consists of a maximal set of strongly connected vertices. A necessary and sufficient condition for the no-SFMs criterion is described below.

Proposition 1. [14, Theorem 4]: A structured system (Ā, B̄, C̄) has no SFMs with respect to an information pattern K̄ if and only if the following conditions hold:
(a) in the digraph D(Ā, B̄, C̄, K̄), each state node x_i is contained in an SCC which includes an edge from E_K,
(b) there exists a finite node disjoint union of cycles C_g = (V_g, E_g) in D(Ā, B̄, C̄, K̄), where g belongs to the set of natural numbers, such that V_X ⊂ ∪_g V_g.
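Condition (a) of Proposition 1 can be checked directly: compute the SCCs of the closed-loop digraph and test whether the component of every state node contains a feedback edge with both endpoints in that component. The following is a minimal sketch, not the paper's code; the digraph is assumed to be given as an edge list, and the names `scc_ids` and `condition_a_holds` are our own.

```python
from collections import defaultdict

def scc_ids(nodes, edges):
    """Kosaraju's algorithm: map each node to the id of its SCC."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for s in nodes:                      # pass 1: record DFS finish order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(u)
                stack.pop()
            elif w not in seen:
                seen.add(w)
                stack.append((w, iter(adj[w])))
    comp, cid = {}, 0
    for s in reversed(order):            # pass 2: DFS on the reversed graph
        if s in comp:
            continue
        comp[s] = cid
        stack = [s]
        while stack:
            u = stack.pop()
            for w in radj[u]:
                if w not in comp:
                    comp[w] = cid
                    stack.append(w)
        cid += 1
    return comp

def condition_a_holds(state_nodes, edges, feedback_edges):
    """Condition (a) of Proposition 1: every state node lies in an SCC of the
    closed-loop digraph containing at least one feedback edge.  The feedback
    edges are assumed to be included in `edges` as well."""
    nodes = {v for e in edges for v in e}
    comp = scc_ids(nodes, edges)
    covered = {comp[u] for (u, v) in feedback_edges if comp[u] == comp[v]}
    return all(comp[x] in covered for x in state_nodes)
```

For instance, with the cycle u1 → x1 → x2 → y1 → u1 closed by the feedback edge (y1, u1), both state nodes lie in an SCC containing a feedback edge; removing the feedback edge breaks condition (a).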
The conditions given in Proposition 1 thus serve as conditions for checking the existence of SFMs in the closed-loop system. For verifying condition (a), one has to find all the SCCs in the digraph D(Ā, B̄, C̄, K̄). If each SCC has at least one feedback edge present in it, then condition (a) is satisfied. Concerning condition (b), an equivalent matching condition using the bipartite graph B(Ā, B̄, C̄, K̄) exists [10]. (A bipartite graph G_B = ((V_B, V'_B), E_B) is a graph satisfying V_B ∩ V'_B = ∅ and E_B ⊆ V_B × V'_B. A matching is a set of edges such that no two edges share the same endpoint. For a bipartite graph G_B = ((V_B, V'_B), E_B), a perfect matching is a matching whose cardinality equals min(|V_B|, |V'_B|).) The construction of the bipartite graph B(Ā, B̄, C̄, K̄) is as follows. We first define the state bipartite graph B(Ā) := ((V_{X'}, V_X), ℰ_X), where V_{X'} = {x'_1, . . . , x'_n}, V_X = {x_1, . . . , x_n} and (x'_j, x_i) ∈ ℰ_X ⇔ (x_i, x_j) ∈ E_X. Now, we define B(Ā, B̄, C̄, K̄) := ((V_{X'} ∪ V_{U'} ∪ V_{Y'}, V_X ∪ V_U ∪ V_Y), E'), where V_{U'} = {u'_1, . . . , u'_m}, V_{Y'} = {y'_1, . . . , y'_p}, V_U = {u_1, . . . , u_m}, V_Y = {y_1, . . . , y_p} and E' = (ℰ_X ∪ ℰ_U ∪ ℰ_Y ∪ ℰ_K ∪ Ê_U ∪ Ê_Y). Also, (x'_i, u_j) ∈ ℰ_U ⇔ (u_j, x_i) ∈ E_U, (y'_j, x_i) ∈ ℰ_Y ⇔ (x_i, y_j) ∈ E_Y and (u'_i, y_j) ∈ ℰ_K ⇔ (y_j, u_i) ∈ E_K. Moreover, Ê_U consists of the edges (u'_i, u_i), for i = 1, . . . , m, and Ê_Y consists of the edges (y'_i, y_i), for i = 1, . . . , p.

Proposition 2. [10, Theorem 3] Consider a closed-loop structured system (Ā, B̄, C̄, K̄). The bipartite graph B(Ā, B̄, C̄, K̄) has a perfect matching if and only if all state nodes are spanned by a disjoint union of cycles in D(Ā, B̄, C̄, K̄).

If B(Ā) has a perfect matching, then B(Ā, B̄, C̄, K̄) has a perfect matching without using any feedback edge.
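The perfect matching test used here can be sketched with simple DFS augmenting paths. This is an illustration, not the paper's implementation: `A_bar` is assumed to be a list-of-lists pattern with '*' marking ⋆ entries, and the helper names are invented for this sketch.

```python
def has_perfect_matching(left, right, edges):
    """Check whether a bipartite graph admits a matching of size
    min(|left|, |right|), via DFS augmenting paths (O(V*E))."""
    adj = {u: [] for u in left}
    for u, v in edges:
        adj[u].append(v)
    match = {}                           # right node -> matched left node

    def augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match or augment(match[v], visited):
                match[v] = u
                return True
        return False

    size = sum(augment(u, set()) for u in left)
    return size == min(len(left), len(right))

def state_bipartite_edges(A_bar):
    """Edges of B(A): (x'_j, x_i) whenever A_ij = '*', i.e. x_j -> x_i in D(A)."""
    n = len(A_bar)
    return [(("x'", j), ("x", i)) for i in range(n) for j in range(n)
            if A_bar[i][j] == '*']
```

In this notation, a diagonal of ⋆ entries in Ā (a self-damped system) immediately yields a perfect matching in B(Ā), certifying condition (b) without any feedback edge.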
A perfect matching in B(Ā) implies that condition (b) is satisfied without using any feedback edge. This is because in B(Ā, B̄, C̄, K̄), (u'_i, u_i) ∈ Ê_U for all i ∈ {1, . . . , m}, and (y'_i, y_i) ∈ Ê_Y for all i ∈ {1, . . . , p}. Thus a perfect matching in B(Ā) is a sufficient condition for satisfying condition (b).

Since m = O(n) and p = O(n), finding the SCCs in D(Ā, B̄, C̄, K̄) has O(n^2) complexity [15]. Verifying condition (b) has complexity O(n^{2.5}) using the matching condition given in Proposition 2 [16]. Hence, given (Ā, B̄, C̄) and a feedback matrix K̄, verifying the conditions in Proposition 1 has complexity O(n^{2.5}). Our objective in this paper is to obtain an optimal (in the sense of cost) set of feedback connections that guarantees arbitrary pole-placement. In other words, we need to obtain a set of feedback edges of minimum cost that satisfies the no-SFMs criterion. Even though verifying the existence of SFMs is of polynomial complexity, identifying an optimal feedback matrix may not be computationally easy. Specifically, in large-scale systems of huge dimension, an exhaustive search based technique to obtain an optimal solution to Problem 1 is not computationally feasible. Before proposing a framework to solve Problem 1, we first analyze the tractability of Problem 1 in the section below.

IV. COMPLEXITY OF OPTIMAL FEEDBACK SELECTION PROBLEM WITH DEDICATED INPUTS AND OUTPUTS
In this section, we prove the NP-hardness of Problem 1. The hardness result is obtained using a reduction of a known NP-hard problem, the weighted set cover problem, to an instance of Problem 1. The weighted set cover problem is a standard NP-hard problem with numerous applications [17]. It is described here for the sake of completeness. Given a universe U of N elements, U = {1, . . . , N}, a collection of sets P = {S_1, S_2, . . . , S_r}, where S_i ⊆ U and ∪_{S_i ∈ P} S_i = U, and a weight function w : P → R, the objective is to find a set S⋆ ⊆ P such that ∪_{S_i ∈ S⋆} S_i = U and Σ_{S_i ∈ S⋆} w(S_i) ≤ Σ_{S_i ∈ S̃} w(S_i) for every S̃ ⊆ P with ∪_{S_i ∈ S̃} S_i = U.

The pseudo-code showing a polynomial time reduction of the weighted set cover problem to an instance of Problem 1 is presented in Algorithm 1. From a general instance of the weighted set cover problem, we construct an instance of Problem 1. The structured system (Ā, B̄, C̄) has states x_1, . . . , x_{N+r+1}, inputs u_1, . . . , u_{r+1} and outputs y_1, . . . , y_r.

Algorithm 1 Pseudo-code for reducing the weighted set cover problem to an instance of Problem 1
Input: A weighted set cover problem with universe U = {1, 2, . . . , N}, sets P = {S_1, . . . , S_r} and a weight function w associated with each set in P
Output: A structured system (Ā, B̄ = I_m, C̄ = I_p) and a feedback cost matrix P
1: Define a structured system (Ā, B̄, C̄) as follows:
2: Ā_ij ← ⋆, for i = j; ⋆, for i ∈ {1, . . . , N} and j = N+r+1; ⋆, for i ∈ {N+1, . . . , N+r} and j ∈ S_{i−N}; 0, otherwise.
3: B̄_ij ← ⋆, for i ∈ {N+1, . . . , N+r+1} and j = i−N; 0, otherwise.
4: C̄_ij ← ⋆, for j ∈ {N+1, . . . , N+r} and i = j−N; 0, otherwise.
5: P_ij ← w(S_j), for j ∈ {1, . . . , r} and i = r+1; 0, for i, j ∈ {1, . . . , r} and i = j; ∞, otherwise.
6: Let K̄ be a solution to Problem 1 for (Ā, B̄, C̄) and the cost matrix P constructed above
7: Sets selected under K̄: S(K̄) ← {S_j : K̄_ij = ⋆ and i ≠ j}
8: Weight of the selection: w(S(K̄)) ← Σ_{S_i ∈ S(K̄)} w(S_i)

The structured state matrix Ā ∈ {0, ⋆}^{(N+r+1)×(N+r+1)} is constructed as follows. For ease of understanding, we refer to x_1, . . . , x_N as the element nodes and to x_{N+1}, . . . , x_{N+r} as the set nodes. The element nodes correspond to the elements of the universe U, and every node in {x_1, . . . , x_N} has an edge from the node x_{N+r+1}. The set nodes correspond to the sets of the weighted set cover problem. A set node x_{N+k} has an edge from element node x_j if element j ∈ U belongs to set S_k ∈ P. This completes the construction of Ā (Step 2). The structured matrix B̄ ∈ {0, ⋆}^{(N+r+1)×(r+1)} corresponds to the (r+1) dedicated input nodes, which are fed to the set nodes x_{N+1}, . . . , x_{N+r} and to x_{N+r+1} (Step 3). The structured matrix C̄ ∈ {0, ⋆}^{r×(N+r+1)} corresponds to the r dedicated output nodes, which come out from the r set nodes x_{N+1}, . . . , x_{N+r}, respectively (Step 4). Thus, for the constructed structured system, n = N + r + 1, m = r + 1 and p = r.
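The construction in Algorithm 1 is mechanical, and Steps 2–5 can be transcribed in a few lines. The snippet below is an illustrative sketch (function name and data layout are our own choices): sets are 1-indexed subsets of {1, . . . , N}, '*' marks ⋆ entries, and rows and columns are 0-indexed, so state x_k corresponds to row k−1.

```python
import math

def reduce_set_cover(N, sets, w):
    """Build the structured system (A, B, C) and cost matrix P of Algorithm 1
    from a weighted set cover instance.  `sets` is a list of r subsets of
    {1,...,N}; `w` is the list of their weights."""
    r = len(sets)
    n, m, p = N + r + 1, r + 1, r
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = '*'                    # Step 2: self-loop on every state
    for i in range(N):
        A[i][n - 1] = '*'                # x_{N+r+1} feeds every element node
    for k, S in enumerate(sets):         # element j -> set node x_{N+k+1}
        for j in S:
            A[N + k][j - 1] = '*'
    B = [[0] * m for _ in range(n)]
    for i in range(N, n):                # Step 3: dedicated inputs at set nodes
        B[i][i - N] = '*'                # and at x_{N+r+1}
    C = [[0] * n for _ in range(p)]
    for k in range(r):                   # Step 4: y_{k+1} senses x_{N+k+1}
        C[k][N + k] = '*'
    P = [[math.inf] * p for _ in range(m)]
    for j in range(r):                   # Step 5: costs of feasible edges
        P[r][j] = w[j]                   # (y_j, u_{r+1}) costs w(S_j)
        P[j][j] = 0                      # zero-cost diagonal edges (y_j, u_j)
    return A, B, C, P
```

With this encoding, selecting the feedback edge in row r (input u_{r+1}) and column j corresponds exactly to picking set S_j in the cover.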
Corresponding to the (r+1) inputs and r outputs, the feedback cost matrix P ∈ R_+^{(r+1)×r} is defined as follows. We assign P_ij = 0 for i, j ∈ {1, . . . , r} and i = j. For i = r+1 and j ∈ {1, . . . , r}, P_ij is assigned the weight of the set S_j (Step 5). The motive for defining such a feedback cost structure is the following. In a solution to Problem 1, if we select a feedback edge connecting the output of the set node x_{N+k} to u_{r+1}, it is analogous to selecting the set S_k in the weighted set cover problem. The zero cost feedback edges take into account the set nodes x_{N+j} for which the feedback edge going from the output of x_{N+j} to u_{r+1} is not selected. Given a solution K̄ to Problem 1, the collection of sets selected under K̄ is defined as S(K̄). Here S(K̄) consists of all those sets whose corresponding set node has its dedicated output connected to the input u_{r+1} in K̄ (Step 7). Further, the weight w(S(K̄)) is defined as shown in Step 8. An illustrative example demonstrating the construction given in Algorithm 1 is given in Figure 1.

[Fig. 1. Digraph D(Ā, B̄, C̄) constructed using Algorithm 1 for a weighted set cover problem with universe U and sets P = {S_1, S_2, S_3}.]

Lemma 1.
Consider the weighted set cover problem with U = {1, . . . , N}, sets P = {S_1, . . . , S_r} and weight function w. Let (Ā, B̄ = I_m, C̄ = I_p) and P be the structured system and feedback cost matrix constructed using Algorithm 1. If K̄ is a solution to Problem 1, then S(K̄) covers U.

Proof. We assume that K̄ is a solution to Problem 1 and then show that S(K̄) is a solution to the weighted set cover problem. Consider an arbitrary element j ∈ U. We show that S(K̄) covers the element j. Consider node x_j. Since K̄ is a solution to Problem 1, x_j must lie in an SCC with at least one feedback edge in it. Notice that node x_j does not have an input or output connected directly to it. Thus the only way for node x_j to satisfy condition (a) in Proposition 1 is via a feedback edge connecting the output of some set node x_k, where k ∈ {N+1, . . . , N+r}, to the input node u_{r+1}, such that (x_j, x_k) ∈ E_X, i.e., j ∈ S_{k−N}. Using Step 7 of Algorithm 1, this implies that the set S_{k−N} ∈ S(K̄). Thus element j is covered by S(K̄). Since element j is arbitrary, the proof follows.
Consider a structured system (Ā, B̄ = I_m, C̄ = I_p) and a feedback cost matrix P. Then, Problem 1 is NP-hard.

Proof. The reduction of the weighted set cover problem given in Algorithm 1 is used for proving the NP-hardness. We show that S(K̄⋆) is an optimal solution to the weighted set cover problem, where K̄⋆ is an optimal solution to Problem 1. By Lemma 1, S(K̄⋆) is a solution to the weighted set cover problem. Hence feasibility holds. The proof follows if S(K̄⋆) is also of minimum weight. We prove this using a contradiction argument. Let the set S' be a cover for the weighted set cover problem, i.e., ∪_{S_i ∈ S'} S_i = U, such that w(S') < w(S(K̄⋆)). Corresponding to the set S', we construct K̄' ∈ {0, ⋆}^{(r+1)×r} as follows:

  K̄'_ij = ⋆, for i = r+1 and j such that S_j ∈ S'; ⋆, for i = j; 0, otherwise.

Notice that the cost P(K̄') = w(S'), because the feedback edges selected in K̄' of the form (y_k, u_{r+1}) have cost w(S_k) and the other feedback edges, of the form (y_k, u_k), have zero cost. To show that K̄' ∈ K, for an arbitrary node x_j consider the following three cases: 1) j ∈ {1, . . . , N}, 2) j ∈ {N+1, . . . , N+r}, and 3) j = N+r+1. For case 1), consider node x_j. Since S' is a solution to the weighted set cover problem, there exists a set S_k ∈ S' such that j ∈ S_k. Corresponding to the set S_k, K̄'_{(r+1)k} = ⋆. Hence, x_j lies in an SCC with the feedback edge (y_k, u_{r+1}). For case 2), notice that K̄'_ii = ⋆ for all i. Hence x_{N+k}, for k = 1, . . . , r, lies in an SCC with the zero cost feedback edge (y_k, u_k). For case 3), since the element nodes are part of an SCC with feedback edges connected to u_{r+1}, which is connected to node x_{N+r+1}, node x_{N+r+1} also belongs to an SCC with a feedback edge.
Thus all nodes lie in an SCC with a feedback edge, and condition (a) in Proposition 1 is satisfied. Since B(Ā) has a perfect matching, condition (b) in Proposition 1 is satisfied. Hence K̄' ∈ K. By Steps 7 and 8 of Algorithm 1, P(K̄⋆) = w(S(K̄⋆)). Further, we know that P(K̄') = w(S') and, by assumption, w(S') < w(S(K̄⋆)). Thus P(K̄') < P(K̄⋆), which contradicts the optimality of K̄⋆. As a result, given an optimal solution K̄⋆, an optimal solution S(K̄⋆) to the weighted set cover problem can be obtained. Hence, Problem 1 is NP-hard.

Remark 2.
Problem 1 is NP-hard even when the costs of the feedback edges are restricted to 1, 0, and ∞. For this case, one can reduce the minimum set cover problem to an instance of Problem 1 in polynomial time using Algorithm 1. In this reduction, all the feedback edges from {y_1, . . . , y_r} to u_{r+1} are of uniform cost.

Notice that in the reduction given in Algorithm 1, Ā has all diagonal entries as ⋆'s. Hence B(Ā) has a perfect matching. Thus, even without using any feedback edges, condition (b) is satisfied, and hence the optimization in Problem 1 amounts to satisfying condition (a) optimally. The following result holds.

Corollary 1.
Consider the structured system (Ā, B̄ = I_m, C̄ = I_p) and feedback cost matrix P. Then, finding a minimum cost feedback matrix that satisfies condition (a) in Proposition 1 is NP-hard.

By Theorem 1, Problem 1 is at least as hard as the weighted set cover problem. Hence there does not exist a polynomial time algorithm to solve Problem 1, unless P = NP. However, approximation algorithms may exist. Before investigating this, the inapproximability of Problem 1 is analyzed in Theorem 2.
Theorem 2.
Consider a general instance of the weighted set cover problem and a structured system (Ā, B̄, C̄) and feedback cost matrix P constructed using Algorithm 1. Let S⋆ and K̄⋆ be optimal solutions to the weighted set cover problem and Problem 1, respectively. For ε > 1, if K̄' is an ε-optimal solution to Problem 1, then S(K̄') is an ε-optimal solution to the weighted set cover problem, i.e., P(K̄') ≤ ε P(K̄⋆) implies w(S(K̄')) ≤ ε w(S⋆). Moreover, Problem 1 is inapproximable to a multiplicative factor of log n, where n denotes the number of state nodes.

Proof. Suppose K̄' is an ε-optimal solution to Problem 1, i.e., P(K̄') ≤ ε P(K̄⋆). From Steps 7 and 8 of Algorithm 1, we have w(S(K̄')) = P(K̄') and w(S(K̄⋆)) = P(K̄⋆). Also, by Theorem 1, S(K̄⋆) is an optimal solution to the weighted set cover problem. Therefore, w(S⋆) = w(S(K̄⋆)) = P(K̄⋆). Hence w(S(K̄')) ≤ ε w(S⋆). Thus an ε-optimal solution to Problem 1 gives an ε-optimal solution to the weighted set cover problem. The weighted set cover problem is inapproximable to a factor of (1 − o(1)) log N [18], where N denotes the cardinality of the universe. Thus Problem 1 is inapproximable to a multiplicative factor of log n.

In the following sections (Sections V and VI), we explore an approximation algorithm to solve Problem 1. Later, in Section VII, we consider Problem 1 on two special graph topologies, which are of practical importance, and propose polynomial time algorithms to obtain a solution.

V. REFORMULATING OPTIMAL FEEDBACK SELECTION PROBLEM TO OPTIMAL CYCLE SELECTION PROBLEM
In this section, we reformulate Problem 1 into a graph theoretic equivalent. The following assumption holds.
Assumption 1.
The structured system (Ā, B̄ = I_m, C̄ = I_p) satisfies the following condition: B(Ā) has a perfect matching.

The motivation for this assumption comes from the fact that there exists a wide class of systems, called self-damped systems, that have a perfect matching in B(Ā); examples include consensus dynamics in multi-agent systems and epidemic equations [19]. Self-damped systems are systems in which all diagonal entries of Ā are nonzero. All systems with a non-singular state matrix also satisfy Assumption 1. Consider a structured system (Ā, B̄ = I_m, C̄ = I_p) that satisfies Assumption 1 and a cost matrix P. Recall that under Assumption 1, condition (b) in Proposition 1 is satisfied without using any feedback edge. Hence, for solving Problem 1, we need to satisfy only condition (a) in Proposition 1. The approximation algorithm given in this paper is based on a cycle formulation of Problem 1. Given (Ā, B̄ = I_m, C̄ = I_p) and cost matrix P, the pseudo-code showing the reformulation of Problem 1 to a cycle based problem is presented in Algorithm 2. Algorithm 2 constructs the digraph D_F and the reduced digraph D_R, as defined below, and gives as output the cycles in the digraph D_R. The cycles in a directed graph can be found using the algorithm in [20].

Algorithm 2 Pseudo-code reducing Problem 1 to a cycle formulation

Input: Structured system (Ā, B̄ = I_m, C̄ = I_p) and feedback cost matrix P
Output: Cycles C = {C_1, . . . , C_t} of digraph D_R
1: Construct D(Ā) and find the SCCs in D(Ā), say N = {N_1, . . . , N_ℓ}
2: Condense each SCC into a single node, with node set N = {N_1, . . . , N_ℓ}
3: Define E_N := {(N_a, N_b) : x_i ∈ N_a, x_j ∈ N_b and (x_i, x_j) ∈ E_X}
4: Define E'_U := {(u_j, N_k) : x_i ∈ N_k and (u_j, x_i) ∈ E_U}
5: Define E'_Y := {(N_k, y_j) : x_i ∈ N_k and (x_i, y_j) ∈ E_Y}
6: Construct D_F ← (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_K)
7: E_ab ← {(y_i, u_j) : (u_j, N_a) ∈ E'_U and (N_b, y_i) ∈ E'_Y}
8: e_ab ← {(y_{i'}, u_{j'}) : (i', j') ∈ arg min_{(y_i, u_j) ∈ E_ab} P_ji}
9: E_min ← {e_ab : a, b ∈ {1, . . . , ℓ}}
10: Construct D_R ← (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_min)
11: Find all the cycles in D_R, C = {C_1, . . . , C_t}
12: Each cycle C_i ∈ C has the structure C_i ← ({N_i ⊆ N} : [E_i ⊆ E_min])

Consider the directed graph D(Ā). We first find the set of all SCCs, N = {N_1, . . . , N_ℓ}, in D(Ā) (Step 1). Each SCC is then condensed to a node. With a slight abuse of notation, N = {N_1, . . . , N_ℓ} is used to denote the set of condensed nodes (Step 2). The construction of the digraph D_F = (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_K) is as follows. In D_F, an edge (N_a, N_b) ∈ E_N if there exist x_i ∈ N_a and x_j ∈ N_b with Ā_ji = ⋆ (Step 3).
Given the input edge set E_U, the edge set E'_U is constructed in such a way that (u_i, N_a) ∈ E'_U ⇔ x_j ∈ N_a and (u_i, x_j) ∈ E_U (Step 4). Similarly, the edge set E'_Y is constructed such that (N_a, y_i) ∈ E'_Y ⇔ x_j ∈ N_a and (x_j, y_i) ∈ E_Y (Step 5). Thus E'_U consists of edges from an input to an SCC in N, and E'_Y consists of edges from an SCC in N to an output. Recall that E_K is the set of all feedback edges for which P_ij is finite. Thus, E_K consists of all feasible feedback edges.

Next we construct the reduced edge set E_min and the directed graph D_R = (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_min) from D_F. Corresponding to each SCC node in N, there are possibly multiple input and output nodes. Thus for an arbitrary node pair N_a, N_b ∈ N there are numerous possible feedback edges between them. In such a situation, we consider only a least cost feedback edge between these nodes and ignore the others. Corresponding to an arbitrary node pair N_a, N_b ∈ N, we define the set E_ab as the set of all feasible feedback edges from N_b to N_a (Step 7). For all N_a, N_b ∈ N, if a feedback edge exists between (N_b, N_a), a minimum cost edge from the edge set E_ab is selected and included in the edge set E_min (Steps 8 and 9). This simplification results in the digraph D_R := (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_min) (Step 10). Next, the directed cycle set C = {C_1, . . . , C_t} in D_R is obtained. A cycle consists of two sets: a node set N_i ⊆ N and a feedback edge set E_i ⊆ E_min. Also, the cost of an edge set Ê ⊆ E_min, denoted by c(Ê), is the sum of the costs of the individual edges present in it, i.e., c(Ê) = Σ_{e_i ∈ Ê} c(e_i), where c(e_i) denotes the cost of the feedback edge e_i as defined by the feedback cost matrix P. Below we define Problem 2, which is an optimization problem on D_R, and later show that this formulation indeed solves Problem 1.

Problem 2 (Optimal cycle selection problem).
Consider a structured system (¯A, ¯B = I_m, ¯C = I_p) and feedback cost matrix P. Let E_min denote the set of feedback edges constructed using Algorithm 2. Then, find E_opt ∈ arg min_{Ê ⊆ E_min} c(Ê) such that each node N_i ∈ N lies in at least one cycle in the digraph D_opt = (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ E_opt).

We show that Problem 2 is equivalent to the optimal feedback selection problem for dedicated i/o.
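To make the reduction concrete, here is a minimal Python sketch of the condensation performed by Algorithm 2: it computes the SCCs of a toy state digraph and, for each ordered pair of SCCs, keeps only a cheapest feasible feedback edge (the set E_min). The dictionary encoding and all identifiers are hypothetical, chosen only for illustration.

```python
from itertools import product

def sccs(adj):
    """Kosaraju's algorithm: SCCs of a digraph given as {node: [successors]}.
    Assumes every node appears as a key of adj."""
    order, seen = [], set()

    def dfs(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs(w)
        order.append(v)  # post-order: v finishes after its successors

    for v in adj:
        if v not in seen:
            dfs(v)
    radj = {v: [] for v in adj}  # reversed digraph
    for v, ws in adj.items():
        for w in ws:
            radj[w].append(v)
    comps, assigned = [], set()
    for v in reversed(order):  # process in decreasing finish time
        if v in assigned:
            continue
        stack, group = [v], set()
        while stack:
            u = stack.pop()
            if u not in assigned:
                assigned.add(u)
                group.add(u)
                stack.extend(radj[u])
        comps.append(frozenset(group))
    return comps

def min_cost_feedback_edges(adj, in_state, out_state, cost):
    """For each ordered SCC pair (N_a, N_b), keep one cheapest feedback edge
    (y_j, u_i) with u_i actuating a state in N_a and y_j sensing one in N_b.
    in_state/out_state map each dedicated input/output to its state; cost maps
    (input, output) pairs to P_ij (missing entries are infeasible)."""
    scc_of = {x: c for c in sccs(adj) for x in c}
    e_min = {}
    for (i, xa), (j, xb) in product(in_state.items(), out_state.items()):
        key = (scc_of[xa], scc_of[xb])
        c = cost.get((i, j), float('inf'))
        if c < e_min.get(key, (float('inf'), None))[0]:
            e_min[key] = (c, (j, i))  # feedback edge (y_j, u_i)
    return {k: v for k, v in e_min.items() if v[0] < float('inf')}
```

On a three-state toy system with one nontrivial SCC, the sketch keeps exactly one cheapest feasible feedback connection per SCC pair, mirroring Steps 7-9 of Algorithm 2.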
Theorem 3.
Consider a structured system (¯A, ¯B, ¯C) and feedback cost matrix P. Let D_R be the digraph constructed using Algorithm 2. Then, E′ is a solution to Problem 2 if and only if ¯K′ := {¯K′_ij = ⋆ : (y_j, u_i) ∈ E′} is a solution to Problem 1. Moreover, for ε > 0, if E′ is an ε-optimal solution to Problem 2, then ¯K′ is an ε-optimal solution to Problem 1, i.e., c(E′) ≤ ε c(E_opt) implies P(¯K′) ≤ ε P(¯K⋆).

Proof. Only-if part: We assume that E′ is a solution to Problem 2 and then show that ¯K′ is a solution to Problem 1. Since E′ is a solution to Problem 2, each N_i ∈ N lies in a cycle with some feedback edge, say (y_b, u_a) ∈ E′. Consider an arbitrary node x_j ∈ N_i. Since x_j lies in the SCC N_i and N_i lies in a cycle with some feedback edge (y_b, u_a), x_j lies in an SCC in D(¯A, ¯B, ¯C, ¯K′) with feedback edge (y_b, u_a). Since x_j is arbitrary, all nodes lie in an SCC with a feedback edge. Hence ¯K′ is a solution to Problem 1.

If-part: We assume that ¯K′ is a solution to Problem 1 and show that E′ := {(y_j, u_i) : ¯K′_ij = ⋆} is a solution to Problem 2. Let x_j ∈ N_i be an arbitrary state in SCC N_i of D(¯A). Since ¯K′ is a solution to Problem 1, x_j lies in an SCC in D(¯A, ¯B, ¯C, ¯K′) with some feedback edge, say (y_b, u_a). Hence there exists a directed path L in D(¯A, ¯B, ¯C, ¯K′) from x_j to itself, with node repetitions allowed, which includes the feedback edge (y_b, u_a). Let the set of state nodes that lie in this path be denoted by N_L. Consider the digraph D′ = (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y). If N_L ⊆ N_i, then N_i lies in a cycle with feedback edge (y_b, u_a). If N_L ⊄ N_i, then since all the state nodes in L lie in some SCC in D(¯A, ¯B, ¯C, ¯K′), there exists a path that originates at SCC N_i and returns to N_i including the feedback edge (y_b, u_a). For this path, SCC node repetitions are not allowed because D′ is a DAG.
Thus the directed path L along with the feedback edge (y_b, u_a) forms a cycle in D′, and hence N_i lies in the cycle formed by path L which includes the feedback edge (y_b, u_a). This concludes the if-part of the proof.

Next, we show the ε-optimality. Suppose E′ is a solution to Problem 2 and c(E′) ≤ ε c(E_opt). By the only-if part of Theorem 3, ¯K′ := {¯K′_ij = ⋆ : (y_j, u_i) ∈ E′} is a feasible solution to Problem 1. Also, by the definition of ¯K′, c(E′) = P(¯K′). Similarly, ¯K_opt := {¯K_opt ij = ⋆ : (y_j, u_i) ∈ E_opt} is a feasible solution to Problem 1 and P(¯K_opt) = c(E_opt). Thus, P(¯K′) ≤ ε P(¯K_opt). Now we show that ¯K_opt is an optimal solution to Problem 1. Suppose not, i.e., P(¯K⋆) < P(¯K_opt). Then, by the if-part of Theorem 3, E⋆ := {(y_j, u_i) : ¯K⋆_ij = ⋆} is a feasible solution to Problem 2. Also, c(E⋆) = P(¯K⋆). Thus c(E⋆) < c(E_opt). This contradicts the optimality of E_opt. Hence P(¯K_opt) = P(¯K⋆). Now, since P(¯K′) ≤ ε P(¯K_opt) and P(¯K_opt) = P(¯K⋆), we have P(¯K′) ≤ ε P(¯K⋆). This completes the proof.

Theorem 3 thus concludes that an ε-optimal solution to Problem 2 gives an ε-optimal solution to Problem 1. We elaborate our approach to solve Problem 2 below.

VI. APPROXIMATION ALGORITHM FOR THE OPTIMAL FEEDBACK SELECTION PROBLEM
This section first discusses a greedy algorithm and then an approximation algorithm to find an approximate solution to Problem 2, which in turn gives an approximate solution to Problem 1 (Theorem 3). Recall that C is the set of cycles in D_R.

Definition 2.
Consider the set of cycles in D_R, C = {C_1, . . . , C_t}. Given a set of cycles C′ ⊆ C in D_R, the node set N′ covered by C′ is defined as N′ := ∪_{C_i ∈ C′} N_i, where C_i = ({N_i} : [E_i]). Here N′ ⊆ N, where N is the set of SCCs in D(¯A). In other words, we say C′ covers N′. Further, the cost of the cover of cycle set C′ is defined as c(∪_{C_i ∈ C′} E_i). Also, C′ is said to be an optimal cycle cover if N′ = N and the cost of the cover C′ is equal to c(E_opt), where E_opt is an optimal solution to Problem 2.

Our approach to solve Problem 2 incorporates the greedy algorithm presented in Algorithm 3 with a potential function presented in Algorithm 4. Algorithm 3 is described below. The pseudo-code to find a greedy solution to Problem 2
Algorithm 3
Pseudo-code for subroutine GREEDY(·, ·)
Input: Cycle set C_inp ⊆ C in D_R, where C_i ∈ C_inp := ({N_i} : [E_i]), and an edge set E_inp ⊆ E_K
Output: Set of feedback edges H
1: GREEDY(∪_{C_i ∈ C_inp} N_i, E_inp):
2: Initialize the set of covered nodes, I ← ∅
3: Initialize the set of selected edges, H ← ∅
4: N_inp ← ∪_{C_i ∈ C_inp} N_i
5: E_i ← E_i \ E_inp, for all C_i ∈ C_inp
6: while I ≠ N_inp do
7:   Calculate ρ(C_k) ← c(E_k)/|N_k|, for all C_k ∈ C_inp
8:   Select C_j ∈ arg min_{C_i ∈ C_inp} ρ(C_i)
9:   Update I ← I ∪ N_j, H ← H ∪ E_j
10:  N_k ← N_k \ I, E_k ← E_k \ H, for all C_k ∈ C_inp
11:  for C_k ∈ C_inp do
12:    if N_k = ∅ then
13:      C_inp ← C_inp \ C_k
14:    end if
15:  end for
16: end while
17: return H

is presented in Algorithm 3. Consider a structured system (¯A, ¯B, ¯C) and feedback cost matrix P. Let D_R denote the digraph corresponding to the structured system constructed using Algorithm 2. Given a set of cycles C_inp and an edge set E_inp as input, Algorithm 3 outputs a set of feedback edges H such that H ⊆ E_min, H ∩ E_inp = ∅, and all nodes N_i ∈ N_inp, where N_inp = ∪_{C_i ∈ C_inp} N_i (Step 4), lie in at least one cycle in the digraph (N ∪ V_U ∪ V_Y, E_N ∪ E'_U ∪ E'_Y ∪ H). At each iteration of the while loop (Step 6), the sets I and H denote the set of nodes covered and the set of feedback edges selected so far, respectively (Steps 2 and 3). Our purpose is to make I = N_inp. In other words, given a set of cycles C_inp in D_R, our aim is to choose a set of cycles C_sol ⊆ C_inp such that C_sol is a cover (Definition 2) of N_inp. For each cycle C_i ∈ C_inp, we define the price of a cycle as the average cost per node, i.e., ρ(C_i) = c(E_i)/|N_i| (Step 7). A cycle with minimum price, say C_j, is selected (Step 8). We call this the greedy selection of the cycle C_j. If there are multiple cycles with minimum price, any one of them is selected. Based on this selection, the sets I and H are updated by including the nodes and the edges of C_j, respectively (Step 9).
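The price-driven loop of Algorithm 3 (Steps 6-9) can be sketched as follows; the dictionary encoding of cycles, the toy labels, and the function name are hypothetical, and the input cycles are assumed to jointly cover all nodes.

```python
def greedy_cycle_cover(cycles, cost):
    """cycles: {name: (node_set, edge_set)}, cost: {edge: cost}.
    Repeatedly pick the cycle of minimum price c(E_i)/|N_i|, then strip the
    covered nodes and selected edges from the remaining cycles.
    Returns the set of feedback edges selected (the set H)."""
    cycles = {k: (set(n), set(e)) for k, (n, e) in cycles.items()}
    target = set().union(*(n for n, _ in cycles.values()))
    covered, chosen = set(), set()
    while covered != target:
        # price of a cycle = cost of its remaining edges per node it still covers
        name = min(cycles, key=lambda k: sum(cost[e] for e in cycles[k][1])
                   / len(cycles[k][0]))
        nodes, edges = cycles.pop(name)
        covered |= nodes
        chosen |= edges
        for k in list(cycles):
            n, e = cycles[k]
            n -= covered          # N_k <- N_k \ I
            e -= chosen           # E_k <- E_k \ H
            if not n:
                del cycles[k]     # drop cycles with empty node set
    return chosen
```

On a toy instance where one cheap edge closes a two-node cycle, the greedy pick selects it first because its price per covered node is lowest, then continues until all nodes are covered.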
Further, all the covered nodes (I) and all the selected edges (H) are removed from the node set and the edge set of each cycle, respectively (Step 10). The set of cycles C_inp is then updated by removing all the cycles with an empty node set (Step 13). These operations are performed until we cover all the nodes in N_inp, i.e., I = N_inp. The cost of this greedy approach is denoted by c(H), where H is the set of feedback edges selected by the greedy algorithm satisfying H ∩ E_inp = ∅.

Let C_arb be an arbitrary set of cycles. Then, for each edge e_i ∈ E_min, we define the multiplicity m_i(C_arb) as m_i(C_arb) = |{C_j : C_j ∈ C_arb and e_i ∈ E_j}|. In other words, m_i(C_arb) is the number of cycles in C_arb in which the feedback edge e_i is present. Now we define k_1(C_arb) := max_{e_i ∈ E_min} m_i(C_arb), referred to as the first highest multiplicity of an edge in cycle set C_arb. Also, for every cycle C_j ∈ C_arb, k_j(C_arb) := max_{e_i ∈ E_min \ E_j} m_i(C_arb). Then, k_2(C_arb) := min_{C_j ∈ C_arb} k_j(C_arb), referred to as the second highest multiplicity of an edge in cycle set C_arb. Next, let C_set denote the set that consists of all possible optimal solutions to Problem 2. Note that C(j) ∈ C_set is a set of cycles in D_R. Then, we define ˜k_1 = min_{C(j) ∈ C_set} k_1(C(j)) and a corresponding cycle set C_opt1 ∈ arg min_{C(j) ∈ C_set} k_1(C(j)). Similarly, ˜k_2 = min_{C(j) ∈ C_set} k_2(C(j)) and a corresponding cycle set C_opt2 ∈ arg min_{C(j) ∈ C_set} k_2(C(j)). Further, E_opt1 and E_opt2 denote the sets of feedback edges present in the cycle sets C_opt1 and C_opt2, respectively. Note that ˜k_1 and ˜k_2 may not necessarily come from the same cycle set in C_set. Also, since C_opt1 ∈ C_set and C_opt2 ∈ C_set, c(E_opt1) = c(E_opt2) = c(E_opt). We describe an example using Figure 2 to demonstrate the values of the variables k_1, k_2 for a cycle set and ˜k_1, ˜k_2 for the structured system illustrated.
Consider the following cycles:
C_1 : ({N, N, N} : [(y, u), (y, u)])
C_2 : ({N, N, N} : [(y, u), (y, u)])
C_3 : ({N, N, N} : [(y, u), (y, u)])
C_4 : ({N, N, N} : [(y, u), (y, u)])
C_5 : ({N, N, N} : [(y, u), (y, u)])
C_6 : ({N} : [(y, u)])
C_7 : ({N} : [(y, u)])
C_8 : ({N} : [(y, u)])
C_9 : ({N} : [(y, u)]).   (2)

Let the feedback cost matrix P associated with the structured system given in Figure 2 be

P = [ 10  1 10 10 10 10 10 10
      10 10  1  1 10 10 10 10
      10 10  1 10 10 10 10 10
      10 10 10 10 10 10 10 10
       1 10 10 10 10 10  1 10
      10 10 10 10  1  1 10 10
      10 10 10 10 10 10  1 10
      10 10 10 10 10  1 10  1 ].
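The first and second highest multiplicities k_1 and k_2 defined above can be computed mechanically from the edge sets of a cycle collection. The sketch below does this for hypothetical edge labels; it is an illustration of the definitions, not the paper's code.

```python
from collections import Counter

def multiplicities(cycle_edges):
    """cycle_edges: list of edge sets, one per cycle in C_arb.
    k1 = max multiplicity m_i over all edges; k2 = min over cycles C_j of
    the max multiplicity among edges outside E_j."""
    counts = Counter(e for edges in cycle_edges for e in edges)
    k1 = max(counts.values())
    k2 = min(max((counts[e] for e in counts if e not in edges), default=0)
             for edges in cycle_edges)
    return k1, k2
```

For instance, a collection where one edge appears in three cycles and every other edge in exactly one yields (3, 1), matching the k_1 = 3, k_2 = 1 pattern of the cycle set C(1) discussed below.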
For the structured system given in Figure 2, the set of all possible optimal solutions to Problem 2 is C_set = {C(1), C(2), C(3), C(4)}. Here, C(1) = {C, C, C, C, C, C}, C(2) = {C, C, C, C, C}, C(3) = {C, C, C, C, C} and C(4) = {C, C, C, C, C, C}. In the cycle set C(1), one feedback edge (y, u) is present in 3 cycles, which is the first highest multiplicity of an edge in cycle set C(1). The second highest multiplicity of an edge in C(1) is 1 because all the other feedback edges are present in only one cycle. Hence k_1(C(1)) = 3 and k_2(C(1)) = 1. In C(2), two feedback edges (y, u) and (y, u) are each present in 2 cycles. Therefore, the first highest multiplicity of an edge in C(2) is 2, and the second highest multiplicity of an edge in C(2) is also 2. In C(3), one edge (y, u) is present in 3 cycles, which is the first highest multiplicity of an edge in cycle set C(3), and another edge (y, u) is present in 2 cycles. Therefore, the first highest multiplicity of an edge in C(3) is 3 and the second highest multiplicity of an edge in C(3) is 2. In C(4), one feedback edge (y, u) is present in 2 cycles and all the other feedback edges are present in one cycle. Thus, the first highest multiplicity of an edge in C(4) is 2 and the second highest multiplicity of an edge in C(4) is 1.

Fig. 2. Illustrative figure demonstrating the variables ˜k_1 and ˜k_2. C_set = {C(1), C(2), C(3), C(4)}. Then, k_1(C(1)) = 3, k_1(C(2)) = 2, k_1(C(3)) = 3, k_1(C(4)) = 2. Similarly, k_2(C(1)) = 1, k_2(C(2)) = 2, k_2(C(3)) = 2, k_2(C(4)) = 1. Thus, ˜k_1 = 2 and ˜k_2 = 1.

Therefore, ˜k_1 = min{k_1(C(1)), k_1(C(2)), k_1(C(3)), k_1(C(4))} = min{3, 2, 3, 2} = 2 and ˜k_2 = min{k_2(C(1)), k_2(C(2)), k_2(C(3)), k_2(C(4))} = min{1, 2, 2, 1} = 1. Note that only the feedback edges which lie in the cycles present in the sets C(1), C(2), C(3) and C(4) are shown in Figure 2.

Lemma 2.
Consider a structured system (¯A, ¯B, ¯C) and cost matrix P. Let C_opt1 be an optimal cycle cover and H ⊆ E_K be the output of Algorithm 3, which takes as input a set of cycles and a set of feedback edges. Then, c(H) ≤ ˜k_1 (1 + log|N|) c(E_opt), where ˜k_1 is the highest multiplicity of an edge in the cycle set C_opt1 and E_opt is an optimal solution to Problem 2.

Proof. Given that C_opt1 is an optimal solution to Problem 2, we define the total cost of cycles c_tot as

c_tot = Σ_{C_i ∈ C_opt1} c(E_i).   (3)

Since ˜k_1 is the highest multiplicity of an edge in the edge set E_opt1 := ∪_{C_i ∈ C_opt1} E_i corresponding to C_opt1, from (3),

c_tot ≤ Σ_{ẽ_i ∈ E_opt1} ˜k_1 c(ẽ_i) = ˜k_1 (Σ_{ẽ_i ∈ E_opt1} c(ẽ_i)) = ˜k_1 c(E_opt).   (4)

Let, in the v-th iteration of the while loop (Steps 7-13), C̃_ns(v) = {C̃¹_ns(v), . . . , C̃^z_ns(v)} ⊆ C_opt1, where C̃^i_ns(v) = ({Ñ^i_ns(v)} : [Ẽ^i_ns(v)]), be the set of cycles not yet selected by the greedy scheme described in Algorithm 3. Since C̃_ns(v) ⊆ C_opt1,

c_tot ≥ Σ_{C̃^i_ns(v) ∈ C̃_ns(v)} c(Ẽ^i_ns(v)).   (5)

From (4) and (5), we get

˜k_1 c(E_opt) ≥ Σ_{C̃^i_ns(v) ∈ C̃_ns(v)} c(Ẽ^i_ns(v))
  = c(Ẽ¹_ns(v)) + · · · + c(Ẽ^z_ns(v))
  = |Ñ¹_ns(v)| (c(Ẽ¹_ns(v)) / |Ñ¹_ns(v)|) + · · · + |Ñ^z_ns(v)| (c(Ẽ^z_ns(v)) / |Ñ^z_ns(v)|).

The ratio of the cost of each cycle C_i to the number of nodes it will cover is denoted by ρ(C_i) (Step 7 of Algorithm 3), i.e., c(E_i)/|N_i| = ρ(C_i). Let the cycle C_j with minimum price be the one selected greedily in the current iteration. Then, ρ(C_j) ≤ ρ(C̃^i_ns(v)), for i = 1, . . . , z. So,

˜k_1 c(E_opt) ≥ Σ_{C̃^i_ns(v) ∈ C̃_ns(v)} ρ(C_j) |Ñ^i_ns(v)|
  = ρ(C_j) (Σ_{C̃^i_ns(v) ∈ C̃_ns(v)} |Ñ^i_ns(v)|)
  ≥ ρ(C_j) |∪_{C̃^i_ns(v) ∈ C̃_ns(v)} Ñ^i_ns(v)|.
Notice that C̃_ns(v) covers the nodes N \ I, where I is the set of nodes in N covered up to the v-th iteration of the while loop. Let N \ I = N_ns(v). Thus |N_ns(v)| = |∪_{C̃^i_ns(v) ∈ C̃_ns(v)} Ñ^i_ns(v)|. Hence,

˜k_1 c(E_opt) ≥ ρ(C_j) |N_ns(v)|, i.e., ρ(C_j) ≤ ˜k_1 c(E_opt) / |N_ns(v)|.   (6)

Let the sequence of cycles selected by Algorithm 3 be Ĉ = {Ĉ_1, . . . , Ĉ_d}. In the v-th iteration, let the number of nodes covered by cycle Ĉ_v be ˆn_v. Here |N_ns(v)| is the number of nodes yet to be covered after (v − 1) iterations. Thus N_ns(1) = N. Also, by (6), ρ(Ĉ_v) ≤ ˜k_1 c(E_opt) / |N_ns(v)|. The cost incurred when selecting cycle Ĉ_v is ρ(Ĉ_v) ˆn_v. So, the total cost incurred is

c(H) = Σ_{Ĉ_v ∈ Ĉ} ρ(Ĉ_v) ˆn_v
  ≤ ˜k_1 c(E_opt) (ˆn_1/|N_ns(1)| + · · · + ˆn_d/|N_ns(d)|)
  = ˜k_1 c(E_opt) (ˆn_1/|N| + · · · + ˆn_d/|N_ns(d)|)
  = ˜k_1 c(E_opt) (1/|N| + · · · + 1/|N| [ˆn_1 times] + 1/(|N| − ˆn_1) + · · · + 1/(|N| − ˆn_1) [ˆn_2 times] + · · · + 1/(|N| − Σ_{i=1}^{d−1} ˆn_i) + · · · + 1/(|N| − Σ_{i=1}^{d−1} ˆn_i) [ˆn_d times])
  ≤ ˜k_1 c(E_opt) (1 + log|N|).

Thus c(H) ≤ ˜k_1 c(E_opt)(1 + log|N|).

Remark 3.
Let C_opt1 be an optimal cycle set that solves Problem 2 and let the highest multiplicity of a feedback edge in C_opt1 be ˜k_1. Notice that |C_opt1| ≤ |N| because in an optimal solution each cycle covers at least one distinct node. Hence, ˜k_1 ≤ |N|.

The pseudo-code for finding an approximate solution to Problem 2 is presented in Algorithm 4. This algorithm incorporates the greedy algorithm given in Algorithm 3 with a potential function. Here, I_A and H_A are defined as the set of nodes covered and the set of feedback edges selected, respectively. Our purpose is to make I_A = N. Consider a cycle C_i ∈ C. The potential of a cycle is defined in the following way. We apply the greedy scheme discussed in Algorithm 3 with input (∪_{j=1}^t N_j \ N_i, E_i) and let the solution obtained be the edge set H_A(C_i) (Step 3). Notice that H_A(C_i) ∩ E_i = ∅ because we removed the edge set E_i from all E_j's before applying the greedy scheme (see Algorithm 3).

Algorithm 4 Pseudo-code to find an approximate solution to Problem 2
Input: Cycle set C = {C_1, . . . , C_t}, where C_i := ({N_i} : [E_i])
Output: Set of feedback edges H_A
1: Initialize the set of covered nodes as I_A ← ∅
2: Initialize the set of selected edges as H_A ← ∅
3: Define H_A(C_i) ← GREEDY(∪_{j=1}^t N_j \ N_i, E_i)
4: Define POT(C_i) ← c(E_i) + c(H_A(C_i))
5: while I_A ≠ N do
6:   Calculate POT(C_k), for k = 1, . . . , |C|
7:   Select C_j ∈ arg min_{C_i ∈ C} POT(C_i)
8:   I_A ← I_A ∪ N_j, H_A ← H_A ∪ E_j
9:   N_k ← N_k \ I_A, E_k ← E_k \ H_A, for k = 1, . . . , |C|
10: end while
11: Return H_A

The potential of cycle C_i is then defined as the sum of c(E_i) and c(H_A(C_i)) (Step 4). Also, the edge set E_i ∪ H_A(C_i) is a feasible solution to Problem 2, as E_i covers N_i and H_A(C_i) covers ∪_{j=1}^t N_j \ N_i. After calculating the potential of each C_i ∈ C, we select a cycle with minimum potential value, say C_j (Step 7). The node set covered and the edge set selected up to the current iteration are updated as in Step 8. Also, the edge set E_j is removed from the remaining edge sets for all C_k ∈ C \ C_j (Step 9). In Theorem 4, we prove that Algorithm 4 gives an approximate solution to Problem 2 with approximation ratio ˜k_2 (1 + log|N|).

Theorem 4.
Algorithm 4, which takes as input a cycle set C = {C_1, . . . , C_t}, outputs a solution H_A to Problem 2 such that c(H_A) ≤ ˜k_2 (1 + log|N|) c(E_opt), where E_opt is an optimal solution to Problem 2. In other words, the output of Algorithm 4 is a ˜k_2 (1 + log|N|)-optimal solution to Problem 2.

Proof. Let C_opt2 be an optimal solution of Problem 2. Recall the definition of ˜k_2. Let the highest multiplicity of a feedback edge in C_opt2 be k′ and the corresponding edge be e′. Consider a cycle C̃ ∈ C_opt2, where C̃ = ({Ñ} : [Ẽ]), such that e′ ∈ Ẽ. Let H_A(C̃) := GREEDY(∪_{j=1}^t N_j \ Ñ, Ẽ). The potential of cycle C̃ is given by POT(C̃) = c(Ẽ) + c(H_A(C̃)). Let E_opt2 be the set of feedback edges corresponding to C_opt2. Note that an optimal edge set to cover the nodes N \ Ñ is E_opt2 \ Ẽ and the optimal cost is c(E_opt2) − c(Ẽ). Also, since C̃ ∈ C_opt2, where C_opt2 is an optimal cycle cover, the highest multiplicity of an edge in C_opt2 \ C̃ is ˜k_2. Hence, by Lemma 2, we have

c(H_A(C̃)) ≤ ˜k_2 (1 + log|N \ Ñ|) (c(E_opt2) − c(Ẽ)) ≤ ˜k_2 (1 + log|N|) (c(E_opt2) − c(Ẽ)).

Algorithm 4 greedily selects a cycle, say C_k, with minimum potential. Then, POT(C_k) ≤ POT(C̃). Hence,

POT(C_k) ≤ c(Ẽ) + ˜k_2 (1 + log|N|) (c(E_opt2) − c(Ẽ)) ≤ ˜k_2 (1 + log|N|) c(E_opt2).   (7)

Equation (7) holds since ˜k_2 (1 + log|N|) > 1. Notice that POT(C_k) is the cost of the edge set obtained by selecting cycle C_k and then applying the greedy scheme on the remaining N \ N_k nodes. Hence, the edge set E_k ∪ H_A(C_k) is a solution to Problem 2. Therefore, after the first iteration of the while loop of Algorithm 4, we obtain a solution to Problem 2 whose cost is bounded by ˜k_2 (1 + log|N|) c(E_opt2) = ˜k_2 (1 + log|N|) c(E_opt). Thus Algorithm 4 gives an approximate solution to Problem 2 with approximation ratio ˜k_2 (1 + log|N|). This completes the proof.

Fig. 3. Illustrative figure demonstrating the merging operation. Each state vertex x_k has an input u_k and an output y_k connected, which are omitted for many x_k's for the sake of clarity, i.e., feedback edges (y_k, u_k) for all k are present in the system.

The result below gives the computational complexity of Algorithm 4.

Theorem 5.
Consider a structured system (¯A, ¯B = I_m, ¯C = I_p) and feedback cost matrix P. Algorithm 4, which takes as input a set of cycles C and gives as output the feedback edge set H_A, has complexity O(n|C|), where n denotes the system dimension and |C| is the number of cycles in D_R.

Proof. Finding all the cycles in the digraph D_R has complexity O(n|C|) [20], as the number of SCCs in D(¯A) is O(n), where n is the number of state nodes in the structured system (¯A, ¯B, ¯C). Algorithm 3 finds the price of |C| cycles in each iteration, and the number of iterations is O(n). Hence, Algorithm 3 has complexity O(n|C|). In Algorithm 4, Algorithm 3 is called as a subroutine O(n|C|) times. All the other steps in Algorithm 4 are of linear complexity. Hence, the complexity of Algorithm 4 is O(n|C|).

Remark 4.
Cycle merging: A cycle merging operation can be performed on the cycle set C in D_R before applying Algorithm 4. For all C_a, C_b ∈ C, if E_a ⊂ E_b, then we merge the cycle C_a with the cycle C_b, i.e., C_b = ({N_a ∪ N_b} : [E_b]). Notice that after the merging operation, the cost c(E_b) of selecting the cycle C_b does not change, but the number of nodes covered can increase, resulting in a better ratio of cost to nodes covered, ρ(C_b). The bound achieved in Algorithm 4 has a factor of ˜k_2. As a result of this merging operation, the optimal edge set does not change, but the multiplicity ˜k_2 can decrease, resulting in a better approximation and lower complexity of Algorithm 4. An illustrative example showing the merging operation is given in Figure 3. Assume that an optimal solution to the given system is the set of edges (y, u) and (y, u). Then both ˜k_1 and ˜k_2 can possibly be very high as the number of nodes increases. If we perform the merging operation as mentioned above, ˜k_2 decreases. Broadly, the merging operation simplifies the proposed algorithm and requires a more detailed analysis.

Remark 5. Notice that in Algorithm 4, only the first iteration of the while loop is used to prove the approximation ratio of ˜k_2 (1 + log|N|). The cost of the final edge set obtained when Algorithm 4 terminates will be at most ˜k_2 (1 + log|N|) c(E_opt), i.e., of no greater cost than that bound.

The following section considers two special cases of Problem 1 of practical importance, and we propose polynomial time algorithms to obtain approximate and optimal solutions to the two cases, respectively.

VII. SPECIAL CASES
In this section, we consider two special graph topologies: (i) structured systems with back-edge feedback structure and (ii) hierarchical networks.
A. Structured systems with back-edge feedback structure
In this subsection, we consider a special class of structured systems with a constraint on the structure of the feedback matrix. We assume that the only feasible feedback edges (y_j, u_i) are those for which there exists a directed path from input u_i to output y_j in D(¯A, ¯B, ¯C). In other words, the assumption states that an output of a state is fed back only to an input that can directly or indirectly influence the state associated with that output. A feedback structure that satisfies this constraint is referred to as a back-edge feedback structure. Note that the inputs and outputs are dedicated here. For this class of systems we propose a polynomial time algorithm to find an approximate solution to Problem 1 with an optimal approximation ratio. We describe below the graph topology considered in this subsection.

Definition 3.
Consider a digraph D_G := (V_G, E_G). Let the nodes v_i, v_j ∈ V_G be such that there exists a directed path from v_i to v_j. Then, v_i is referred to as an ancestor of v_j. Also, the node v_j is referred to as a descendant of the node v_i.

Assumption 2.
Consider a structured system (¯A, ¯B = I_m, ¯C = I_p) and a feedback cost matrix P ∈ R^{m×p}, where P_ij denotes the cost of feeding the j-th output to the i-th input. Then, P_ij = ∞ if the input node u_i is not an ancestor of the output node y_j in D(¯A, ¯B, ¯C).

Recall that if P_ij = ∞, then the feedback edge ¯K_ij is infeasible. Thus Assumption 2 states that an output y_j can be fed to an input u_i only if u_i is an ancestor of y_j in D(¯A, ¯B, ¯C). If u_i is not an ancestor of y_j, then (y_j, u_i) is an infeasible feedback link. An illustrative example showing feasible and infeasible feedback connections in a structured system is presented in Figure 4.

Corollary 2.
Consider a structured system (¯A, ¯B = I_m, ¯C = I_p) and a feedback cost matrix P that satisfies Assumption 2. For this structured system the following hold:
(i) Problem 1 is NP-hard, and
(ii) Problem 1 is inapproximable to a multiplicative factor of log n, where n is the number of states in the system.

Fig. 4. Illustrative figure demonstrating feasible feedback connections. Under Assumption 2, the feedback edge (y, u) is feasible while (y, u) is infeasible.

The above corollary is a consequence of the fact that the structured system and the feedback cost matrix obtained in the reduction given in Algorithm 2 and the NP-hardness proof given in Theorem 1 satisfy Assumption 2.

In this subsection, we present a polynomial time approximation algorithm that finds a (log n)-approximate solution to Problem 1. This algorithm is based on a reduction of Problem 1 to an instance of the weighted set cover problem. We reduce a general instance of Problem 1 satisfying Assumption 2 to an instance of the weighted set cover problem in such a way that an approximation algorithm for the weighted set cover problem serves as an approximation algorithm for Problem 1. To achieve this, we reduce Problem 1 to the weighted set cover problem and prove in Theorem 6 that any ε-optimal solution of the weighted set cover problem is an ε-optimal solution to Problem 1.

Algorithm 5
Pseudo-code for reducing a general instance of Problem 1 satisfying Assumption 2 to an instance of the weighted set cover problem denoted by (U_s, P_s, w_s)
Input: Structured system (¯A, ¯B = I_m, ¯C = I_p) and feedback cost matrix P
Output: Weighted set cover problem (U_s, P_s, w_s)
1: Define ¯K_P := {¯K_P ij = ⋆ if P_ij ≠ ∞}
2: Define an instance of the weighted set cover problem as:
3: Universe U_s ← {x_1, . . . , x_n}
4: Set P_s = {S_1, . . . , S_|E_{¯K_P}|}
5: for e_d = (y_j, u_i) ∈ E_{¯K_P} do
6:   S_d := {x_a : x_a lies in an SCC in the digraph formed by adding the feedback edge e_d = (y_j, u_i) to D(¯A, ¯B, ¯C)}
7:   Weight w_s(S_d) = P_ij
8: end for
9: Let S′ be a solution to the weighted set cover problem (U_s, P_s, w_s)
10: Feedback matrix selected under S′: ¯K(S′) ← {¯K(S′)_ij = ⋆ : S_d ∈ S′ and e_d = (y_j, u_i)}
11: Cost of the edge set ¯K(S′): P(¯K(S′)) = Σ_{(i,j) : ¯K(S′)_ij = ⋆} P_ij

Algorithm 5 gives the pseudo-code for reducing a general instance of Problem 1 to an instance of the weighted set cover problem denoted by (U_s, P_s, w_s). We define a feedback matrix ¯K_P such that ¯K_P consists of all feasible feedback edges (Step 1). The universe U_s of the weighted set cover problem consists of all states {x_1, . . . , x_n} of the system (Step 3). The set P_s is defined in such a way that a set S_d ∈ P_s corresponds to a feedback edge (y_j, u_i) = e_d (Step 4). Thus |P_s| = |E_{¯K_P}|, and each set S_d consists of the state nodes in D(¯A) that lie in an SCC in the digraph formed by adding the feedback edge (y_j, u_i) to D(¯A, ¯B, ¯C) (Step 6). The weight of the set S_d is assigned the cost of the feedback edge (y_j, u_i) (Step 7). We denote a solution to the weighted set cover problem (U_s, P_s, w_s) by S′ (Step 9). With respect to S′, the feedback matrix selected is denoted by ¯K(S′) (Step 10) and its cost is denoted by P(¯K(S′)) (Step 11). The result below proves that ¯K(S′) is a solution to Problem 1.

Theorem 6.
Consider a structured system (¯A, ¯B = I_m, ¯C = I_p) and cost matrix P such that Assumption 2 holds. Also, let B(¯A) have a perfect matching. Then:
(i) S′ is a solution to the weighted set cover problem (U_s, P_s, w_s) constructed using Algorithm 5 if and only if ¯K(S′) is a solution to Problem 1.
(ii) If S⋆ is an optimal solution to the weighted set cover problem (U_s, P_s, w_s), then ¯K(S⋆) is an optimal solution to Problem 1, i.e., P(¯K(S⋆)) = P(¯K⋆).
(iii) For ε > 0, if S′ is an ε-optimal solution to the weighted set cover problem, then ¯K(S′) is an ε-optimal solution to Problem 1, i.e., w_s(S′) ≤ ε w_s(S⋆) implies P(¯K(S′)) ≤ ε P(¯K⋆).

Proof. (i) Only-if part: Here we assume that S′ is a solution to the weighted set cover problem and then show that ¯K(S′) is a solution to Problem 1. Note that B(¯A) has a perfect matching, and hence condition (b) in Proposition 1 is satisfied without using any feedback edge. As a result, only condition (a) has to be satisfied. Since S′ is a solution to the weighted set cover problem, ∪_{S_d ∈ S′} S_d = U_s = {x_1, . . . , x_n}. Consider an arbitrary state x_i such that x_i ∈ S_j for some S_j ∈ S′. We now show that x_i lies in an SCC in D(¯A, ¯B, ¯C, ¯K(S′)). Note that x_i ∈ S_j implies that x_i lies in an SCC in the digraph obtained by adding the feedback edge e_j = (y_b, u_a) to D(¯A, ¯B, ¯C) (see Step 6). By the construction of ¯K(S′) (see Step 10), ¯K(S′)_ab = ⋆. This concludes that x_i lies in an SCC with a feedback edge in ¯K(S′). As x_i is arbitrary, the only-if part follows.

(i) If part: Here we assume that K̃ is a solution to Problem 1 and then show that S̃, where S̃ := {S_j ∈ P_s : e_j = (y_b, u_a) and K̃_ab = ⋆}, is a solution to the weighted set cover problem. Consider an arbitrary element x_i ∈ U_s.
Since K̃ is a solution to Problem 1, there exists some e_j = (y_b, u_a) such that K̃_ab = ⋆ and x_i lies in an SCC in D(¯A, ¯B, ¯C, K̃) with feedback edge e_j. By Step 6 of Algorithm 5, this implies that x_i ∈ S_j. Since K̃_ab = ⋆ and e_j = (y_b, u_a), by the definition of S̃, S_j ∈ S̃. Hence S̃ covers the element x_i ∈ U_s. Since x_i is arbitrary, the if-part follows. This completes the proof of (i).

(ii): Given that S⋆ is an optimal solution to (U_s, P_s, w_s), by Theorem 6 (i), ¯K(S⋆) is a solution to Problem 1. We need to show that ¯K(S⋆) is an optimal solution to Problem 1. Suppose not. Then there exists ¯K′ ∈ K, i.e., a solution to Problem 1, with P(¯K′) < P(¯K(S⋆)). From the if-part of Theorem 6 (i), corresponding to ¯K′ there exists S̃ := {S_j : e_j = (y_b, u_a) and ¯K′_ab = ⋆}, which is a solution to (U_s, P_s, w_s). Using Steps 7, 10 and 11, w_s(S⋆) = P(¯K(S⋆)) and w_s(S̃) = P(¯K′). As P(¯K′) < P(¯K(S⋆)), this implies w_s(S̃) < w_s(S⋆). This contradicts the fact that S⋆ is an optimal solution to (U_s, P_s, w_s). Thus ¯K(S⋆) is an optimal solution to Problem 1.

(iii): Let S⋆ and ¯K⋆ be optimal solutions of the weighted set cover problem (U_s, P_s, w_s) and Problem 1, respectively. Given w_s(S′) ≤ ε w_s(S⋆), we need to show that P(¯K(S′)) ≤ ε P(¯K⋆). Since S′ and S⋆ are feasible solutions to the weighted set cover problem, by Theorem 6 (i), ¯K(S′) and ¯K(S⋆) are feasible solutions to Problem 1. By Steps 7, 10 and 11 of Algorithm 5, w_s(S′) = P(¯K(S′)) and w_s(S⋆) = P(¯K(S⋆)). Hence P(¯K(S′)) ≤ ε P(¯K(S⋆)). From Theorem 6 (ii), P(¯K(S⋆)) = P(¯K⋆). Thus P(¯K(S′)) ≤ ε P(¯K⋆). This completes the proof.

Fig. 5. Illustrative figure of a structured system with dedicated inputs and outputs to demonstrate Algorithm 5.

Theorem 7.
Consider a structured system (Ā, B̄ = I_m, C̄ = I_p) and feedback cost matrix P such that Assumption 2 holds. Then,
(i) There exists an algorithm that approximates Problem 1 to factor log n, where n is the system dimension.
(ii) Further, the log n approximation ratio is optimal.

Proof. (i): Using Algorithm 5, any general instance of Problem 1 satisfying Assumption 2 can be reduced to an instance of the weighted set cover problem. Notice that Algorithm 5 iterates over all the feasible feedback edges, and each iteration amounts to an SCC computation on the system digraph, which takes time polynomial in n. Since m = O(n) and p = O(n), the number of feedback edges in the system is O(n²). The remaining steps of Algorithm 5 are of linear complexity. This concludes that the reduction given in Algorithm 5 is a polynomial-time reduction. From Theorem 6 (iii), an ε-optimal solution to the weighted set cover problem gives an ε-optimal solution to Problem 1. For the weighted set cover problem there exists a polynomial-time greedy algorithm which gives a (log N)-optimal solution, where N denotes the cardinality of the universe [17]. Thus Problem 1 is approximable to factor log n in polynomial time, using Algorithm 5 followed by the greedy algorithm of [17].

(ii): For a structured system satisfying Assumption 2, Problem 1 is inapproximable to a multiplicative factor of log n (Theorem 2). Theorem 7 (i) shows that one can find a (log n)-optimal solution to Problem 1. Thus, the above approximation bound is optimal for Problem 1.

We explain Algorithm 5 using an illustrative example below.

Illustrative example for structured systems with back-edge feedback structure:
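Before walking through the example, the greedy heuristic of [17] invoked in Theorem 7 can be sketched as follows. This is a minimal Python sketch, not code from the paper; the small instance at the bottom is hypothetical and is not the (U_s, P_s, w_s) instance constructed from Figure 5.

```python
def greedy_weighted_set_cover(universe, sets, weights):
    """Greedy heuristic of Chvatal [17]: repeatedly pick the set with the
    smallest cost per newly covered element.  Assumes the instance is
    feasible (the union of all sets equals the universe).  Returns the
    indices of the chosen sets; achieves a (log N)-factor guarantee,
    where N is the size of the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # cost-effectiveness of set j = weight_j / number of new elements covered
        best = min(
            (j for j in range(len(sets)) if sets[j] & uncovered),
            key=lambda j: weights[j] / len(sets[j] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Hypothetical instance for illustration only.
universe = {1, 2, 3, 4, 5}
sets = [{1, 2}, {2, 3, 4}, {4, 5}, {5}]
weights = [2.0, 3.0, 2.0, 1.0]
print(greedy_weighted_set_cover(universe, sets, weights))  # prints [0, 2, 1]
```

At every step the routine picks the set with the smallest cost per newly covered element; this greedy choice is what yields the (log N)-factor used in the proof of Theorem 7.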
In this section, we describe Algorithm 5 using the example given in Figure 5. Let P be the feedback cost matrix associated with the structured system in Figure 5, where an entry P_ij is finite if the feedback edge (y_j, u_i) is feasible and ∞ otherwise. Notice that an output y_j can be given as feedback to an input u_i only if there exists a directed path from u_i to y_j in D(Ā, B̄, C̄). We reduce this instance of Problem 1 to an instance of the weighted set cover problem (U_s, P_s, w_s) as follows. Here, the universe is U_s = {x_1, ..., x_n}. As per P, there are 12 feasible feedback edges. Corresponding to these edges, the sets of the weighted set cover problem, P_s = {S_1, ..., S_12}, are constructed as in Step 6 of Algorithm 5: each S_j is the set of states that lie in an SCC when the feedback edge e_j is added to the digraph. The respective weights w_s of the sets are the corresponding finite entries of the matrix P. Solving the weighted set cover problem (U_s, P_s, w_s) using the approximation algorithm given in [17] then gives a (log n)-optimal solution to Problem 1 (Theorem 7). The next section discusses the second graph topology.

B. Hierarchical Network
In this subsection, we consider a special graph topology referred to as layered graphs in the literature [21]. Many real-world systems, such as power grids, drinking water networks, biological cell regulation networks, online social networks, and road traffic control, can be described and modeled using a layered network structure, where the states in the system interact with each other in a layered fashion [22]. Each layer in the layered structure is influenced by the nodes in the previous layer, and hence the network follows a directed tree structure called an arborescence. A directed graph following a tree structure such that every node except the root node has exactly one incoming edge is referred to as a hierarchical network. Here, we aim to solve the minimum cost feedback selection problem for dedicated i/o satisfying Assumption 2 for structured systems whose DAG of SCCs is a hierarchical network.

The hierarchical network structure is common in real-life networks [21]. A power distribution system follows a hierarchical network structure, and finding an optimal control strategy aims at designing a least-cost feedback pattern to maintain system parameters, such as voltages and frequency, at specified levels in the different layers of the network [22], [23]. In a water distribution network, optimization techniques in controlling the network contribute towards developing a smart management strategy for operating drinking water networks [24]. In the case of road traffic control, a hierarchical network is a natural choice to structure the control problems [25]. Next, we discuss a few notations and constructions required to describe a hierarchical network.

Definition 4.
Consider a directed graph D_G := (V_G, E_G). Let nodes v_i, v_j ∈ V_G be such that there exists an edge (v_i, v_j) ∈ E_G from v_i to v_j. Then, v_i is referred to as a parent of v_j.

Let the DAG of SCCs in D(Ā) be denoted by D_A := (V_A, E_A). Here the node set V_A = {N_1, ..., N_ℓ} is the set of all SCCs in D(Ā), and (N_i, N_j) ∈ E_A if there exists a directed edge in D(Ā) from a state in N_i to a state in N_j. Then we have the following assumption on the digraph D_A.

Assumption 3.
Consider the DAG D_A = (V_A, E_A) which consists of the SCCs in D(Ā). Then, each node N_i ∈ V_A except the root node has a unique parent, where the root node is a vertex with no incoming edge.

Under Assumption 3, the DAG D_A is a hierarchical network. For a hierarchical network, we define the notion of a layer, which corresponds to the position of a set of nodes in the network arrangement. In a directed graph, a node v_i is said to be influenced by a node v_j if there exists a directed path from v_j to v_i.

Fig. 6. A structured system whose DAG of SCCs forms a hierarchical network. Each vertex N_ij in the figure corresponds to an SCC of D(Ā). The subgraph enclosed in the dashed box illustrates a subtree rooted at one of the nodes, denoted by Tree(·).

Definition 5.
Consider N_i, N_j ∈ V_A such that there exists a directed path from N_i to N_j in D_A. The distance from N_i to N_j in D_A is the number of edges in the shortest directed path from N_i to N_j. Then, a layer L_i is defined as the set of all nodes which are at distance i − 1 from the root node in D_A. Note that L_i ⊆ V_A. The node set L_i is represented as L_i = {N_i1, ..., N_ih_i}, where a node N_ij ∈ V_A denotes the jth node in L_i and h_i denotes the number of nodes in L_i.

An illustrative example of a hierarchical network is presented in Figure 6. Under Definition 5, the root node of the hierarchical network is denoted by N_11, and it is the only node present in the top layer. Next, we define a subtree of a hierarchical network, which is a subgraph of the system digraph. For D_A = (V_A, E_A), D_S := (V_S, E_S) denotes a subgraph of D_A whose vertex set V_S ⊆ V_A and edge set E_S ⊆ E_A are such that the endpoints of the edges in E_S are nodes in V_S.

Definition 6.
Consider a node N_fk ∈ V_A in the layer L_f. Then, the subtree rooted at node N_fk, denoted by Tree(N_fk), is defined as the subgraph of the hierarchical network which consists of the node N_fk and all of its descendants.

Note that Tree(N_11) denotes the entire hierarchical network, where N_11 is the top node in the network. An illustrative example of a subtree is shown enclosed in the dashed box in the hierarchical network in Figure 6. In this paper, we propose a dynamic programming based algorithm to solve the minimum cost feedback selection problem for dedicated i/o when the DAG of SCCs of the structured system is a hierarchical network. The approach is based on dividing the network into smaller subtrees (Definition 6) and finding an optimal solution for the subtrees in a bottom-up fashion. Eventually, we merge the solutions obtained for the smaller subtrees to find an optimal solution to the bigger network.

Consider a hierarchical network D_A. Our aim is to find a set of minimum cost feedback edges such that the hierarchical network along with these feedback edges satisfies condition (a) in Proposition 1. Consider a node N_fk, where N_fk denotes the kth node in layer L_f. Recall that N_fk lies in D_A, which is a DAG. For N_fk, let A_fk denote the set of all feedback edges such that each edge in A_fk makes N_fk lie in a cycle. For a feedback edge (y_b, u_a) to be in A_fk, (y_b, u_a) has to be directed from an output y_b connected to a descendant of N_fk to an input u_a connected to an ancestor of N_fk. To characterize all the edges in A_fk, we give the following definition.

Definition 7.
Consider D_A and N_fk ∈ V_A. Let anc(N_fk) denote the set of all state nodes that lie in SCCs of D(Ā) which are ancestors of N_fk. Similarly, let des(N_fk) denote the set of all state nodes which lie in some SCC of D(Ā) which is a descendant of N_fk. We denote by U_fk the set of input nodes u_i which are connected to the state nodes in anc(N_fk). Similarly, Y_fk denotes the set of output nodes y_j which are connected from the state nodes in des(N_fk). Then, with respect to N_fk, a feedback edge (y_j, u_i) belongs to the edge set A_fk if y_j ∈ Y_fk and u_i ∈ U_fk. A feedback edge (y_b, u_a) is said to cover N_fk if (y_b, u_a) ∈ A_fk.

We need to find an optimal solution to the minimum cost feedback selection problem for hierarchical networks, i.e., we need to find a set of feedback edges which cover the entire network represented by Tree(N_11). The proposed algorithm is based on dynamic programming, where we find solutions to subproblems and merge them to obtain a solution for the original problem. The subproblem is to find an optimal feedback edge set to cover a general subtree Tree(N_fk) in the network. Next, we describe the procedure to cover a subtree Tree(N_fk) optimally. Consider Tree(N_fk) and (y_b, u_a) ∈ A_fk. Since Tree(N_fk) includes N_fk, an edge in A_fk is essential to cover the nodes in Tree(N_fk). Suppose we select (y_b, u_a), which covers N_fk. Note that there might be a set of nodes in Tree(N_fk) other than N_fk which are also covered by the edge (y_b, u_a). We need to cover the rest of the nodes in Tree(N_fk) which are not covered by the edge (y_b, u_a). These nodes lie in a subgraph of Tree(N_fk) and form a set of disjoint subtrees denoted by Forest(N_fk, (y_b, u_a)).

Definition 8.
Consider a node N_fk and a feedback edge (y_b, u_a) ∈ A_fk. Then, Forest(N_fk, (y_b, u_a)) is defined as the subgraph of Tree(N_fk) which consists of the nodes in Tree(N_fk) which are not covered by the feedback edge (y_b, u_a). Forest(N_fk, (y_b, u_a)) is composed of disjoint subtrees of Tree(N_fk).

Consider the example of a forest presented in Figure 7. With respect to the marked node and the feedback edge (y_b, u_a) covering it, the forest consists of the nodes of the subtree left uncovered by (y_b, u_a) (highlighted in green); here the forest comprises four nodes forming two subtrees. Consider Forest(N_fk, (y_b, u_a)), where (y_b, u_a) ∈ A_fk. The cost to cover the disjoint subtrees in Forest(N_fk, (y_b, u_a)) is the sum of the costs to cover the subtrees individually (Corollary 4) and is denoted by c(F(N_fk, (y_b, u_a))), where F(N_fk, (y_b, u_a)) is an optimal set of feedback edges to cover all the individual subtrees in Forest(N_fk, (y_b, u_a)). Next, we give a dynamic programming algorithm to find an optimal solution to Problem 1 under Assumptions 2 and 3. The pseudo-code is presented in Algorithm 6. Here L_f (Step 2) denotes the fth layer in the network, and N_fk (Step 3) denotes the kth node in layer L_f. We denote by U_fk (Step 4) the set of input nodes from which there exists a directed path to the states in the SCC N_fk, and we denote by Y_fk (Step 5) the set of output nodes which have a directed path from the states in the SCC N_fk.

Fig. 7. Illustrative figure demonstrating the forest corresponding to a node and a feedback edge in the hierarchical network given in Figure 6; the figure shows Forest(·, (y_b, u_a)), consisting of the subtrees left uncovered by the edge (y_b, u_a).

Algorithm 6
Pseudo-code to solve Problem 1 for structured systems satisfying Assumptions 2 and 3

Input: Structured system (Ā, B̄ = I_m, C̄ = I_p) and cost matrix P satisfying Assumptions 2 and 3
Output: Set of optimal feedback edges H_opt
1: Find the SCCs in D(Ā), N = {N_1, ..., N_ℓ}
2: Define L_f ← nodes in D_A which are at distance f − 1 from the root node
3: Define N_fk ← kth node in layer L_f
4: Define U_fk ← {u_i : B̄_ri = ⋆ and x_r lies in an SCC which is an ancestor of N_fk}
5: Define Y_fk ← {y_j : C̄_jr = ⋆ and x_r lies in an SCC which is a descendant of N_fk}
6: for f = Δ, ..., 1 do
7:   for k = 1, ..., |L_f| do
8:     F(N_fk, (y_j, u_i)) ← minimum cost edge set to keep the nodes in Forest(N_fk, (y_j, u_i)) in cycles
9:     c(F(N_fk, (y_j, u_i))) ← cost of the edge set F(N_fk, (y_j, u_i))
10:    A_fk ← {(y_j, u_i) : y_j ∈ Y_fk and u_i ∈ U_fk}
11:    c(Z(N_fk)) ← min_{(y_j, u_i) ∈ A_fk} {P_ij + c(F(N_fk, (y_j, u_i)))}
12:    If c(Z(N_fk)) = P_ab + c(F(N_fk, (y_b, u_a))), then Z(N_fk) ← {(y_b, u_a)} ∪ F(N_fk, (y_b, u_a)), where a ∈ {1, ..., m}, b ∈ {1, ..., p}
13:  end for
14: end for
15: H_opt = Z(N_11)
16: return H_opt and c(Z(N_11))

The algorithm iterates over two nested for-loops, where the first loop (Step 6) iterates over the layers in the network and the second loop (Step 7) iterates over the nodes in a particular layer. We start with the bottom-most layer L_Δ and find the optimal cost to cover each node in layer L_Δ. At layer L_f, consider a particular node N_fk. For an edge (y_j, u_i) ∈ A_fk (Step 10), the algorithm finds the cost to cover Tree(N_fk) using (y_j, u_i) (Step 11). The cost is computed as the sum of the cost of the feedback edge (y_j, u_i) ∈ A_fk and the cost of the edge set that covers Forest(N_fk, (y_j, u_i)). The feedback edge set F(N_fk, (y_j, u_i)) denotes an optimal feedback edge set to cover Forest(N_fk, (y_j, u_i)) (Step 8), and c(F(N_fk, (y_j, u_i))) denotes the corresponding cost of the edge set F(N_fk, (y_j, u_i)) (Step 9).
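The bottom-up recursion of Steps 6–14 can be sketched in Python as follows. This is a minimal sketch under simplifying assumptions, not the paper's implementation: the hierarchical network is passed as a `children` map over SCC nodes, and each candidate edge set A_v is assumed to be precomputed as a list of (cost, covered) pairs, where `covered` is the set of nodes of Tree(v) that the edge keeps in a cycle (in a hierarchical network, the path from v down to the sensed SCC). The function name and the toy instance are hypothetical.

```python
def min_cover_cost(root, children, edges):
    """Bottom-up dynamic program in the spirit of Algorithm 6.

    children : dict mapping each SCC node to the list of its children
               in the hierarchical network (the DAG of SCCs).
    edges    : dict mapping each node v to its candidate set A_v, each
               feedback edge encoded as (cost, covered).
    Returns {v: c(Z(v))}, the optimal cost to cover each Tree(v).
    """
    # reverse preorder = children are processed before their parents
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    order.reverse()

    cost = {}
    for v in order:
        best = float("inf")
        for price, covered in edges[v]:
            # Forest(v, e): uncovered children of covered nodes in Tree(v)
            forest_roots = [c for u in covered
                            for c in children.get(u, []) if c not in covered]
            # subtree costs of the forest roots are already available
            best = min(best, price + sum(cost[r] for r in forest_roots))
        cost[v] = best
    return cost

# Hypothetical 4-node instance: root 'r' with children 'a', 'b'; 'a' has child 'c'.
children = {"r": ["a", "b"], "a": ["c"]}
edges = {
    "c": [(1, {"c"})],
    "b": [(1, {"b"})],
    "a": [(2, {"a"}), (4, {"a", "c"})],
    "r": [(3, {"r"}), (5, {"r", "a", "c"})],
}
print(min_cover_cost("r", children, edges)["r"])  # prints 6
```

Because nodes are visited in reverse preorder, every forest root's optimal cost is available when its ancestor is processed, mirroring the layer-by-layer order of Steps 6 and 7.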
The cost to cover Forest(N_fk, (y_j, u_i)) is already available, since the subtrees in Forest(N_fk, (y_j, u_i)) are rooted at descendants of the node N_fk, and the costs to cover these subtrees individually have already been computed. Next, we perform a minimization over all the feedback edges present in A_fk and select the feedback edge (y_b, u_a) which results in the minimum cost to cover Tree(N_fk). The set of feedback edges to cover Tree(N_fk) is then obtained by taking the union of the optimal edge (y_b, u_a) and an optimal edge set to cover Forest(N_fk, (y_b, u_a)) (Step 12). Eventually, the algorithm reaches the top-most layer, where we find the optimal cost to cover Tree(N_11) (Step 15), which is in fact the cost to cover the entire hierarchical network. Next, we give the main result regarding the optimality of Algorithm 6.

Theorem 8.
Consider a structured system (Ā, B̄ = I_m, C̄ = I_p) and a feedback cost matrix P satisfying Assumptions 2 and 3. Let B(Ā) have a perfect matching. Then, the output of Algorithm 6 is an optimal solution to Problem 1.

To prove Theorem 8, we first state and prove the following lemma. Further, we state two corollaries extending the result of Lemma 3. Finally, we give a proof of Theorem 8.
Lemma 3.
Consider nodes N_fi, N_gj ∈ V_A such that there does not exist a directed path between N_fi and N_gj in either direction. Let the sets of feedback edges which cover the nodes N_fi and N_gj be A_fi and A_gj, respectively. Then A_fi ∩ A_gj = ∅.

Proof. We prove by contradiction. Let (y_b, u_a) be a feedback edge such that (y_b, u_a) ∈ A_fi ∩ A_gj. The feedback edge (y_b, u_a) is directed from the output node y_b to the input node u_a and covers the nodes N_fi and N_gj. Note that, since u_a and y_b are dedicated input and output nodes and they belong to a hierarchical network, there exists at most one directed path from u_a to y_b. Since (y_b, u_a) covers N_fi, there exists a directed path from node u_a to node y_b through the node N_fi. Similarly, there exists a directed path from node u_a to node y_b through the node N_gj. Since there exists exactly one directed path from node u_a to node y_b, the nodes N_fi and N_gj must lie on a single directed path from node u_a to node y_b. This contradicts our assumption that the nodes N_fi and N_gj do not lie on a common directed path. Hence A_fi ∩ A_gj = ∅.

Corollary 3.
Consider a hierarchical network corresponding to the structured system (Ā, B̄ = I_m, C̄ = I_p) and the feedback cost matrix P. Consider nodes N_fi and N_fk in a layer L_f. Let A_fi and A_fk be the sets of all feedback edges which cover the nodes N_fi and N_fk, respectively. Then, A_fi ∩ A_fk = ∅.

Proof. Since the nodes N_fi and N_fk belong to the same layer, there does not exist a directed path between them. Hence the proof follows from Lemma 3.

The following corollary states that an optimal feedback edge set to cover a forest composed of disjoint subtrees is the union of the optimal edge sets to cover the subtrees individually. Moreover, these edge sets are disjoint, and hence their cost is equal to the sum of the costs of the edge sets that cover the subtrees individually.

Corollary 4.
Consider nodes N_fi, N_gj ∈ V_A such that there does not exist a directed path between node N_fi and node N_gj. Let Z′(N_fi) and Z′(N_gj) be arbitrary feedback edge sets which cover Tree(N_fi) and Tree(N_gj), respectively. Then, Z′(N_fi) ∩ Z′(N_gj) = ∅. Also, the optimal cost to cover Tree(N_fi) and Tree(N_gj) together is equal to the sum of the costs of covering Tree(N_fi) and Tree(N_gj) optimally, i.e., c(Z(N_fi)) + c(Z(N_gj)).

Proof. Given that there exists no directed path between nodes N_fi and N_gj, there exists no directed path between any node in Tree(N_fi) and any node in Tree(N_gj). Since the edge set Z′(N_fi) covers the nodes in Tree(N_fi) and Z′(N_gj) covers the nodes in Tree(N_gj), it follows from Lemma 3 that Z′(N_fi) ∩ Z′(N_gj) = ∅. Therefore, the cost to cover Tree(N_fi) and Tree(N_gj) together is equal to the sum of the costs of the feedback edge sets Z′(N_fi) and Z′(N_gj) separately. Let Z(N_fi) and Z(N_gj) be optimal edge sets to cover Tree(N_fi) and Tree(N_gj), respectively. Since Z(N_fi) ∩ Z(N_gj) = ∅, the optimal cost to cover Tree(N_fi) and Tree(N_gj) is c(Z(N_fi)) + c(Z(N_gj)). This completes the proof.

Next, we prove Theorem 8 to show the optimality of Algorithm 6, and we give the complexity of Algorithm 6 in Theorem 9.

Proof of Theorem 8:
We prove Theorem 8 using an induction argument. The induction hypothesis states that Z(N_fi) is an optimal set of feedback edges such that the nodes in Tree(N_fi) lie in cycles with feedback edges in Z(N_fi).

Base Step:
We consider the bottom layer, f = Δ, as the base case. Consider a node N_Δj in layer L_Δ and the feedback edge set A_Δj. Note that A_Δj consists of all feedback edges that can make the node N_Δj lie in a cycle with a feedback edge. For f = Δ, we find the minimum cost to cover the subtree rooted at N_Δj. Since L_Δ is the lowest layer in the hierarchical network, N_Δj is a leaf node in D_A. Thus, for any feedback edge (y_b, u_a) ∈ A_Δj, Forest(N_Δj, (y_b, u_a)) = ∅. Hence the edge set F(N_Δj, (y_b, u_a)) = ∅ and c(F(N_Δj, (y_b, u_a))) = 0. Thus we need to find the minimum cost to cover the node N_Δj only. Therefore, the minimum cost edge set Z(N_Δj) to cover the node N_Δj is given by Z(N_Δj) = arg min_{(y_b, u_a) ∈ A_Δj} P_ab. Thus, for each node N_Δj in the lowest layer L_Δ, Algorithm 6 selects a minimum cost feedback edge in A_Δj. As a consequence of Corollary 3, the algorithm finds a minimum cost feedback edge to cover each node in L_Δ independently. This completes the base step.

Induction Step:
For the induction step, we assume that the algorithm gives an optimal feedback edge set to cover each subtree rooted at a node in layers L_{k+1}, ..., L_Δ. Then the collection {Z(N_sj) : s ∈ {k+1, ..., Δ} and j ∈ {1, ..., |L_s|}} is the collection of optimal edge sets to cover all the subtrees whose root nodes lie in layer L_{k+1} or below it. Now we prove that Z(N_kj) is an optimal set of feedback edges to cover the subtree Tree(N_kj) for each node N_kj in layer L_k, i.e., that the algorithm gives the optimal cost to cover the subtrees rooted at the nodes in layer L_k. Note that A_kj consists of all the feedback edges which can cover N_kj. Since N_kj lies in Tree(N_kj), an edge (y_b, u_a) ∈ A_kj is essential to cover Tree(N_kj). Then, the cost to cover Tree(N_kj) using some feedback edge (y_b, u_a) ∈ A_kj is given by c(F(N_kj, (y_b, u_a))) + P_ab, where c(F(N_kj, (y_b, u_a))) is the optimal cost to cover Forest(N_kj, (y_b, u_a)). As a consequence of Corollary 4, the optimal cost of covering Forest(N_kj, (y_b, u_a)) is the sum of the optimal costs of covering the subtrees present in the forest independently, and since the optimal costs to cover these subtrees are already found (induction hypothesis), we have the optimal cost to cover Forest(N_kj, (y_b, u_a)). Therefore, the optimal cost to cover Tree(N_kj) using a particular feedback edge (y_b, u_a) ∈ A_kj is given by c(F(N_kj, (y_b, u_a))) + P_ab. Since we perform the minimization of this cost over all the feedback edges in A_kj, we obtain the optimal cost to cover Tree(N_kj). Further, Z(N_kj) is the union of the feedback edge (y_b, u_a) selected in the minimization step and the edge set F(N_kj, (y_b, u_a)). Thus Z(N_kj) is an optimal feedback edge set to cover Tree(N_kj). After the final iteration, for the top layer L_1, we obtain an optimal edge set Z(N_11) to cover Tree(N_11), which is in fact the hierarchical network. This completes the proof of Theorem 8.

Fig. 8. Illustrative example of a structured system with a hierarchical network topology to demonstrate Algorithm 6. The set of optimal edges obtained by Algorithm 6 is shown in red.

Theorem 9.
Consider a structured system (Ā, B̄ = I_m, C̄ = I_p) and the feedback cost matrix P. Then, Algorithm 6, which takes as input the hierarchical network corresponding to (Ā, B̄ = I_m, C̄ = I_p) and the feedback cost matrix P and outputs an optimal cost feedback edge set solving Problem 1, has complexity O(n³), where n denotes the system dimension.

Proof. The number of subtrees possible in the hierarchical network is equal to the number of SCCs in D(Ā), which is of the order of n. The minimization step in Algorithm 6 is performed over all the feedback edges which cover a node N_fi ∈ V_A, which is of the order of |E_K|. Therefore, the complexity of Algorithm 6 is O(n|E_K|). Since m = O(n) and p = O(n), the number of feedback edges in the system is |E_K| = O(n²). Thus the complexity of Algorithm 6 is O(n³).

Illustrative example for hierarchical network:
In thissection, we describe Algorithm 6 using the exampleillustrated in Figure 8. In the hierarchical network,there are three layers, { L , L , L } , and six SCCs, { N , N , N , N , N , N } . Corresponding to the sixinput and output nodes, let the feedback cost matrix be P = ∞ ∞ ∞ ∞ ∞ ∞ ∞∞ ∞ ∞ ∞ ∞∞ ∞ ∞ ∞ ∞∞ ∞ ∞ ∞ ∞ We need to select an optimal set of feedback edges suchthat the six SCCs in this network satisfies condition (a) inProposition 1 optimally. For each SCC N fk , the corresponding set of feedback edges A fk covering N fk are as follows: A = { ( y , u ) , ( y , u ) , ( y , u ) , ( y , u ) , ( y , u ) , ( y , u ) } A = { ( y , u ) , ( y , u ) , ( y , u ) , ( y , u ) } A = { ( y , u ) , ( y , u ) , ( y , u ) , ( y , u ) } A = { ( y , u ) , ( y , u ) , ( y , u ) } A = { ( y , u ) , ( y , u ) , ( y , u ) } A = { ( y , u ) , ( y , u ) , ( y , u ) } In the first iteration ( f = 3 ) we select the layer L . Our aimis to cover each subtree rooted at some node in layer L , i.e.,subtrees rooted at each SCC N k ∈ L .For T ree ( N ) , c ( Z ( N )) = min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min + = 1 and Z ( N ) = ( y , u ) . For T ree ( N ) , c ( Z ( N )) = min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min
10 + 010 + 0 + = 1 and Z ( N ) = ( y , u ) . For T ree ( N ) , c ( Z ( N )) = min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min
10 + 02 + 0 + = 1 and Z ( N ) = ( y , u ) . In the next iteration ( f = 2 ), ouraim is to cover each subtree rooted at some node in layer L .For T ree ( N ) , c ( Z ( N ))= min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min + =3 and Z ( N ) = { ( y , u ) } ∪ F ( N , ( y , u )) = { ( y , u ) , ( y , u ) } . For T ree ( N ) , c ( Z ( N )) = min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min
10 + 0 +
10 + 13 + 1 = 2 and Z ( N ) = { ( y , u ) } ∪ F ( N , ( y , u )) = { ( y , u ) } . Inthe final iteration ( f = 1 ), our aim is to cover each subtreerooted at some node in layer L , i.e., T ree ( N ) which is theentire hierarchical network. For T ree ( N ) , c ( Z ( N ) = min P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) P + c ( F ( N , ( y , u ))) = min + + =5 and Z ( N ) = { ( y , u ) } ∪ F ( N , ( y , u )) = { ( y , u ) , ( y , u ) , ( y , u ) } . Thus Z ( N ) is an optimalfeedback edge set to cover all the nodes in the digraph usinga feedback edge and the optimal solution to Problem 1 isgiven by ¯ K ⋆ = ⋆ ⋆ ⋆
00 0 0 0 0 0 . emark 6. Consider a structured system ( ¯ A, ¯ B, ¯ C ) andfeedback cost matrix P such that the DAG of SCCs of thesystem consists of multiple hierarchical networks with distinctroot nodes and disjoint node sets. Then all the analysis andresults discussed in Subsection VII-B still hold. In such a case,Algorithm 6 is implemented separately on each of the hierar-chical networks and by combining the solutions obtained givesan optimal solution to Problem 1. This gives a generalizationof the structured systems considered in Subsection VII-B. VIII. C
ONCLUSION
This paper addressed the following optimization problem: given a structured system with dedicated inputs and outputs and a feedback cost matrix, where each entry denotes the cost of an individual feedback connection, the objective is to obtain an optimal set of feedback edges that guarantees arbitrary pole-placement of the closed-loop structured system. This problem is referred to as the optimal feedback selection problem with dedicated inputs and outputs. We proved the NP-hardness of this problem using a reduction from a known NP-hard problem, the weighted set cover problem (Theorem 1). We also showed that the problem is inapproximable to a multiplicative factor of log n, where n denotes the number of states in the system (Theorem 2). We then proposed an algorithm that incorporates a greedy scheme with a potential function to solve this problem (Algorithm 4). This algorithm is shown to attain a solution with a guaranteed approximation ratio in pseudo-polynomial time (Theorem 4). The proposed algorithm has limitations owing to its pseudo-polynomial time complexity. We then considered two special cases, namely structured systems with a back-edge feedback structure and structured systems satisfying a hierarchical network topology. These topologies find application in many real-world networks, such as power networks, water distribution networks and social organization networks. For the first class of systems, we showed that Problem 1 is NP-hard and also inapproximable to a multiplicative factor of log n (Corollary 2). We then provided a (log n)-optimal approximation algorithm for this class of systems (Algorithm 5 and Theorem 7). For hierarchical networks, a polynomial time algorithm based on dynamic programming is proposed (Algorithm 6) and the optimality of the solution is proved (Theorem 8). Investigating other network topologies of practical importance and developing computationally efficient algorithms is part of future work.

REFERENCES

[1] C. Commault and J.-M. Dion, "The single-input minimal controllability problem for structured systems," Systems & Control Letters, vol. 80, pp. 50–55, 2015.
[2] C. Commault, J.-M. Dion, and J. W. van der Woude, "Characterization of generic properties of linear structured systems for efficient computations," Kybernetika, vol. 38, no. 5, pp. 503–520, 2002.
[3] R. K. Kalaimani, M. N. Belur, and S. Sivasubramanian, "Generic pole assignability, structurally constrained controllers and unimodular completion," Linear Algebra and its Applications, vol. 439, no. 12, pp. 4003–4022, 2013.
[4] A. Olshevsky, "Minimal controllability problems," IEEE Transactions on Control of Network Systems, vol. 1, no. 3, pp. 249–258, 2014.
[5] S. Pequito, S. Kar, and A. P. Aguiar, "A framework for structural input/output and control configuration selection in large-scale systems," IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 303–318, 2016.
[6] K. J. Reinschke, Multivariable Control: A Graph Theoretic Approach. Springer-Verlag, 1988.
[7] Y.-Y. Liu and A.-L. Barabási, "Control principles of complex systems," Reviews of Modern Physics, vol. 88, no. 3, pp. 035006:1–58, 2016.
[8] K. Ünyelioğlu and M. E. Sezer, "Optimum feedback patterns in multivariable control systems," International Journal of Control, vol. 49, no. 3, pp. 791–808, 1989.
[9] S. Pequito, S. Kar, and A. P. Aguiar, "Minimum cost input/output design for large-scale linear structural systems," Automatica, vol. 68, pp. 384–391, 2016.
[10] S. Moothedath, P. Chaporkar, and M. N. Belur, "Approximating constrained minimum cost input-output selection for generic arbitrary pole placement in structured systems," ArXiv e-prints, May 2017. [Online]. Available: http://adsabs.harvard.edu/abs/2017arXiv170509600M
[11] S. Moothedath, P. Chaporkar, and M. N. Belur, "Optimal feedback selection for structurally cyclic systems with dedicated actuators and sensors," conditionally accepted in IEEE Transactions on Automatic Control, 2018. [Online]. Available: https://arxiv.org/abs/1706.07928
[12] S. Moothedath, P. Chaporkar, and M. N. Belur, "Minimum cost feedback selection for arbitrary pole placement in structured systems," IEEE Transactions on Automatic Control, 2018.
[13] J. F. Carvalho, S. Pequito, A. P. Aguiar, S. Kar, and G. J. Pappas, "Static output feedback: on essential feasible information patterns," in Proceedings of the IEEE Conference on Decision and Control (CDC), Osaka, Japan, 2015, pp. 3989–3994.
[14] V. Pichai, M. Sezer, and D. Šiljak, "A graph-theoretic characterization of structurally fixed modes," Automatica, vol. 20, no. 2, pp. 247–250, 1984.
[15] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms. MIT Press: Cambridge, 2001.
[16] R. Diestel, Graph Theory. Springer: New York, 2000.
[17] V. Chvatal, "A greedy heuristic for the set-covering problem," Mathematics of Operations Research, vol. 4, no. 3, pp. 233–235, 1979.
[18] U. Feige, "A threshold of ln n for approximating set cover," Journal of the ACM, vol. 45, no. 4, pp. 634–652, 1998.
[19] A. Chapman and M. Mesbahi, "On strong structural controllability of networked systems: A constrained matching approach," in Proceedings of the IEEE American Control Conference (ACC), Washington DC, USA, 2013, pp. 6126–6131.
[20] D. B. Johnson, "Finding all the elementary circuits of a directed graph," SIAM Journal on Computing, vol. 4, no. 1, pp. 77–84, 1975.
[21] Y.-Y. Liu, J.-J. Slotine, and A.-L. Barabási, "Control centrality and hierarchical structure in complex networks," PLoS ONE, vol. 7, no. 9, pp. e44459:1–7, 2012.
[22] P. F. Kenneth A. Fegley, "Hierarchical control of a multiarea power grid," IEEE Transactions on Systems, Man, and Cybernetics, vol. 7, no. 7, pp. 545–551, July 1977.
[23] M. D. Ilic, "From hierarchical to open access electric power systems," Proceedings of the IEEE, vol. 95, pp. 1060–1084, 2007.
[24] C. Ocampo-Martinez, D. Barcelli, V. Puig, and A. Bemporad, "Hierarchical and decentralised model predictive control of drinking water networks: Application to Barcelona case study," IET Control Theory & Applications, vol. 6, no. 1, pp. 62–71, January 2012.
[25] J. Vrancken, J. H. van Schuppen, M. dos Santos Soares, and F. Ottenhof, "A hierarchical network model for road traffic control," in Proceedings of the IEEE International Conference on Networking, Sensing and Control, Okayama, Japan, 2009, pp. 340–344.
Aishwary Joshi is pursuing his Dual Degree (B.Tech. + M.Tech.) in Electrical Engineering, with specialisation in Communication and Signal Processing, at the Indian Institute of Technology Bombay, India. His research interests include graph theory, optimization, algorithms, and computational complexity.
Shana Moothedath obtained her B.Tech. and M.Tech. in Electrical and Electronics Engineering from Kerala University, India, in 2011 and 2014, respectively. She is currently pursuing a Ph.D. in the Department of Electrical Engineering, Indian Institute of Technology Bombay. Her research interests include matching and allocation problems, structural analysis of control systems, combinatorial optimization, and applications of graph theory.