Actuator Security Indices Based on Perfect Undetectability: Computation, Robustness, and Sensor Placement
Jezdimir Milosevic, Andre Teixeira, Henrik Sandberg, Karl Henrik Johansson
Abstract—This paper proposes an actuator security index based on the definition of perfect undetectability. This index can help a control system operator to localize the most vulnerable actuators in a networked control system, which can then be secured. Particularly, the security index of an actuator equals the minimum number of sensors and actuators that needs to be compromised such that a perfectly undetectable attack against that actuator can be conducted. A method for computing the index for small scale networked control systems is derived, and it is shown that the index can potentially be increased by placing additional sensors. The difficulties that appear once the system is of a large scale are then outlined: the problem of calculating the index is NP-hard, the index is vulnerable to system variations, and it is based on the assumption that the attacker knows the entire model of the system. To overcome these difficulties, a robust security index is introduced. The robust index can be calculated in polynomial time, it is unaffected by the system variations, and it can be related to both limited and full model knowledge attackers. Additionally, we analyze two sensor placement problems with the objective to increase the robust indices. We show that both of these problems have submodular structures, so their suboptimal solutions with performance guarantees can be obtained in polynomial time. Finally, the theoretical developments are illustrated through numerical examples.
I. INTRODUCTION
Actuators are one of the most vital components of networked control systems. Through them, we ensure that important physical processes such as power production or water distribution behave in a desired way. Actuators can also be expensive, so it is important to carefully choose where to place them. To solve this important problem of cost-efficient allocation of actuators, a number of approaches have been developed [1]–[4]. However, an issue with these approaches is that they do not take security aspects into consideration. This could be dangerous, since control systems can easily become a target of malicious adversaries [5]–[7]. For this reason, it is essential to be able to check if these efficient actuator placements are at the same time secure.

Motivated by this issue, we introduce novel actuator security indices δ and δ_r. As we shall see, these indices can be used both for localization of vulnerable actuators and for development of defense strategies. The security index δ(u_i) is defined for every actuator u_i, and it is equal to the minimum number of sensors and actuators that needs to be compromised by an attacker to conduct a perfectly undetectable attack against u_i. Perfectly undetectable attacks are dangerous, since they do not leave any trace in the sensor measurements [8], [9]. Therefore, an actuator with a small value of δ is potentially very vulnerable. Since δ cannot be straightforwardly used in large scale networked systems, as explained in this paper, we introduce the robust security index δ_r as its replacement in these systems. We then outline favorable properties of δ_r, and propose possible strategies for increasing δ_r.

Affiliations: Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden. Emails: {jezdimir, hsan, kallej}@kth.se. Signals and Systems, Department of Engineering Sciences, Uppsala University, Uppsala, Sweden. Email: andre.teixeira@angstrom.uu.se.
Finally, we remark that due to the properties of perfectly undetectable attacks, sensor security indices can in general be derived from actuator security indices. Hence, the focus of the paper is exclusively on actuator indices.

Literature Review.
It has been recognized within the control community that cyber-attacks require new techniques to be handled [10]. For instance, cyber-attacks impose fundamental limitations for state estimation [11], [12], detection [13], and consensus computation [14], [15]. The most troublesome attacks are those that can inflict considerable damage while remaining unnoticed by the system operator. Examples include stealthy false-data injection attacks [16], undetectable (zero-dynamics) attacks [13], [17], perfectly undetectable attacks [8], [9], covert attacks [18], optimal linear attacks [19], and replay attacks [20]. To characterize the vulnerability of the system and protect it against these attacks, many different approaches have been proposed [21]–[23].

In this work, we focus on so-called security indices. The first security index was introduced in [24]. In that work, a static linear system was used as a network model, and the static security index α was defined for each sensor. The main purpose of this index is to help the operator localize the most vulnerable sensors in a power network, which are those with low values of α. Once these sensors are localized, the operator can allocate additional security measures to protect them. Furthermore, to choose the most beneficial combination of security measures, he/she can again use security indices [25]. A major challenge is to compute α once the power network is large. In fact, it was shown that the problem of calculating α is NP-hard in general [26]. However, α can be efficiently computed in some cases [26]–[30]. For instance, [27] proposes an upper bound on α. This bound can be obtained in polynomial time by solving the minimum s–t cut problem, and proves to be tight in several cases of interest.

Although α proved to be a useful tool for both vulnerability analysis and development of defense strategies, security indices that can be used for more general dynamical systems have been considered by only a few works [31], [32].
The index in [31] considerably differs from α, since it characterizes the vulnerability of the entire system. On the other hand, in [32], the definition of undetectability [13] was used to define a security index similar to α that characterizes the vulnerability of sensors and actuators within the system. However, this work neither addresses the problems that appear in large scale systems, nor explains how this index can be used for defense purposes. In this paper, we introduce novel actuator security indices suitable for dynamical systems, tackle the challenges that appear once the system is of a large scale, and propose defense strategies based on these indices.

Contributions.
The contributions of this manuscript are as follows. Firstly, we propose a novel type of actuator security index δ. In contrast to the dynamical index proposed in [32], our index is based on the definition of perfect undetectability [8], [9]. To calculate δ when the number of sensors and actuators is small, we derive a sufficient and necessary condition that a set of compromised components needs to satisfy in order for a solution of the security index problem to exist (Proposition 1). To prove Proposition 1, we use an algebraic condition for the existence of perfectly undetectable attacks derived in [8], [9]. We also show that δ can potentially be increased by placing additional sensors, and that placement of additional actuators may decrease δ (Proposition 2). We then identify three issues that appear once the system is of a large scale: (1) the problem of computing δ is NP-hard (Theorem 1); (2) δ is fragile to system variations, which are expected in large systems; (3) δ is based on the assumption that the attacker knows the entire model of the system, which can be a conservative assumption in this case.

To overcome these deficiencies, we introduce the robust security index δ_r, which is our second contribution. To define δ_r, we use a structural model of the system [33], and the notion of vertex separators that was used to characterize the existence of perfectly undetectable attacks in [9]. Particularly, we first show how vertex separators can be used to upper bound the index δ (Theorem 2), and then define δ_r to be the best upper bound based on vertex separators.

Thirdly, we show that δ_r does not suffer from the aforementioned deficiencies of δ. Namely, δ_r can be calculated efficiently by solving the minimum s–t cut problem in a graph (Proposition 3). We remark that Proposition 3 extends the previous work on the static index α [26]–[29], where the minimum s–t cut problem was also used for calculating/approximating α.
Additionally, δ_r is unaffected by the system variations, since it is based on the structural model of the system [33]. Moreover, δ_r can be related to both full and limited model knowledge attackers. In the context of the full model knowledge attacker, δ_r(u_i) characterizes the minimum amount of resources for conducting a perfectly undetectable attack against u_i in any possible realization of the system (Proposition 4). We then introduce an attacker with resources limited to a local model and measurements, and prove that he/she can also conduct a perfectly undetectable attack against u_i by compromising a right combination of δ_r(u_i) components (Proposition 5). We also analyze an attacker that knows only the structural model of the system. In this case, δ_r(u_i) lower bounds the number of components this attacker needs to compromise to ensure that the attack against u_i remains perfectly undetectable (Proposition 6).

Since the previous results imply that actuators with a small value of δ_r are potentially very vulnerable, we propose sensor placement strategies to increase δ_r, which we outline as our fourth contribution. We first show that δ_r is guaranteed to increase if sensors are placed at suitable locations in the system (Theorem 3). Based on this result, we formulate two sensor placement problems with the objective to increase δ_r, and show that these problems have suitable submodular structures (Propositions 7–8). This enables us to find suboptimal solutions of these problems with guaranteed performance efficiently, even in large scale networked control systems. Finally, we illustrate the theoretical results through numerical examples.

The preliminary version of the paper appeared in [34].
This work differs from [34] in the following aspects: (1) we prove that δ is NP-hard to calculate (Theorem 1); (2) the connection of δ_r with the full/limited model knowledge attacker is derived (Propositions 4–6); (3) we prove that both δ and δ_r can be increased by placing additional sensors (Proposition 2, Theorem 3); (4) a new section on increasing δ_r is added (Section VI); (5) more detailed proofs of the results that appeared in [34] are included (proofs of Propositions 1 and 3, and Theorem 2); (6) the section with examples is extended.

Organization.
The remainder of the paper is organized as follows. In Section II, we introduce the system model, the attacker model, and the security index δ. In Section III, we investigate properties of δ. In Section IV, we derive an upper bound on δ and, based on it, define the robust index δ_r. In Section V, we outline properties of δ_r. In Section VI, we discuss strategies for increasing δ_r. In Section VII, we illustrate the theoretical findings through examples. In Section VIII, we conclude. The Appendix contains the proofs of the results.

II. SECURITY INDEX δ

In this section, we introduce the model setup and formulate the problem of calculating the actuator security index δ. We remark that although we consider discrete time systems, the analysis presented in the paper can also be extended to continuous time systems.

A. Model Setup
The plant of a networked control system is modeled by

x(k+1) = A x(k) + B u(k) + B_a a(k),
y(k) = C x(k) + D_a a(k),    (1)

where x(k) ∈ R^{n_x} is the system state at time step k ∈ Z_{≥0}, u(k) ∈ R^{n_u} is the control input, y(k) ∈ R^{n_y + n_e} is the measurement vector, and a(k) ∈ R^{n_u + n_y} is the attack vector. For the analysis that follows, it is convenient to assume that the system is in a steady state x(0) = 0 and u = 0.¹ Due to linearity, this assumption is without loss of generality for most results in the paper. The exceptions are clearly outlined. We also allow the last n_e ≥ 0 elements of y to be protected, so the attacker cannot manipulate them. The protection can be achieved by implementing encryption/authentication schemes, and/or improving physical protection [25].

We now introduce the attacker model. The first n_u elements of a model attacks against the actuators, while the last n_y model attacks against the unprotected sensors. The matrices B_a and D_a are therefore given by

B_a = [ B   0_{n_x × n_y} ],    D_a = [ 0_{n_y × n_u}   I_{n_y} ;  0_{n_e × n_u}   0_{n_e × n_y} ],

and B is assumed to have full column rank. This is needed to exclude degenerate cases in which the attacks trivially cancel each other, or cases where an actuator does not affect the system. We denote by I = {1, ..., n_u + n_y} the indices of the elements of a, and by a^{(I_a)}(k) the vector consisting of the elements of a(k) with indices from I_a ⊆ I. The set I is also used to denote the joint set of actuators and unprotected sensors in the first part of the paper. We adopt the following common assumption about the attacker.

¹For a signal s : Z_{≥0} → R^{n_s}, s = 0 means that s(k) = 0 for all k ∈ Z_{≥0}, while s ≠ 0 means s(k) ≠ 0 for at least one k ∈ Z_{≥0}.

Assumption 1:
The attacker: (1) can read and change the values of the attacked control signals and measurements I_a ⊆ I arbitrarily; (2) knows A, B, C.

Further, we assume that the attacker's goal is to conduct an attack while ensuring the attack remains undetected by the system operator. To model this goal, we need a suitable definition of undetectability. In this paper, we use the definition of perfect undetectability [8], [9].
Definition 1:
Let x(0) = 0 and u = 0. The attack signal a ≠ 0 is perfectly undetectable if y = 0.

In other words, the attack is perfectly undetectable if it does not leave any trace in the sensor measurements. For this reason, these attacks are potentially very dangerous.

B. Security Index δ: Problem Formulation

We now introduce an actuator security index δ. The security index δ(i) is defined for every actuator i ∈ I. The index is equal to the minimum number of sensors and actuators that need to be compromised by the attacker so as to conduct a perfectly undetectable attack. Additionally, i has to be actively used in the attack, which models a goal or intent by the attacker. Naturally, actuators with small values of δ are more vulnerable than those with large values. In the worst case, δ(i) = 1. This implies that an attacker can attack i and stay perfectly undetectable without compromising any other component. Let ||a|| = |∪_{k ∈ Z_{≥0}} supp(a(k))|, where supp(a(k)) = {i ∈ I : a^{(i)}(k) ≠ 0}. Based on the previous discussion, δ(i) can be formally defined as follows.

Problem 1: Calculating δ

minimize_a  δ(i) = ||a||
subject to  x(k+1) = A x(k) + B_a a(k),    (C1)
            0 = C x(k) + D_a a(k),         (C2)
            x(0) = 0,                      (C3)
            a^{(i)} ≠ 0.                   (C4)

The objective function reflects our desire to find the minimum number of sensors and actuators to conduct a perfectly undetectable attack (the sparsest signal a : Z_{≥0} → R^{n_u + n_y}). The constraints: (C1) and (C2) ensure that the attack signal satisfies the physical dynamics of the system; (C2) and (C3) constrain the attack to be perfectly undetectable; (C4) ensures that actuator i is actively used in the attack.

Before we start analyzing δ, we point out several properties of Problem 1. Firstly, this problem is not necessarily feasible for every actuator i. Absence of a solution implies that the attacker cannot attack i while staying perfectly undetectable. Thus, we adopt δ(i) = +∞ in this case.
Secondly, if we remove (C3) and include x(0) as an optimization variable, we recover the security index problem based on undetectable attacks [32]. Thirdly, Problem 1 can also be used for finding security indices of unprotected sensors. However, to conduct a perfectly undetectable attack, at least one actuator must be attacked to make the attack signal against a sensor active. Thus, the problem of finding δ(i) of sensor i ∈ I can in general be reduced to the problem of finding an actuator with the minimum δ that excites sensor i. Finally, the problem can also be extended to capture the case where sensors and actuators are not equally hard to attack.

III. PROPERTIES OF δ

We now analyze properties of δ. We show how δ can be computed once I has small cardinality, and that δ can be increased by placing additional sensors. We then outline difficulties that appear in large scale networked control systems: Problem 1 is NP-hard, δ can be quite vulnerable to system variations, and Assumption 1.(2) may be conservative in this case. Overall, δ is more appropriate for small scale systems, while a replacement is required for large scale systems. Proofs of the results from this section are available in Appendix A.

A. Calculating δ Using Brute Force Search
We first derive a sufficient and necessary condition that the set of attacked components I_a needs to satisfy, so that we can construct an attack signal a feasible for Problem 1. We then explain how this condition can be used for finding δ. Prior to that, we introduce some terminology and notation. The transfer function from a to y is denoted by G, and the normal rank of G is defined as normrank G = max{rank G(z) | z ∈ C}. With G(I_a), we denote the transfer function matrix that contains the columns of G with indices from I_a ⊆ I.

Proposition 1:
A perfectly undetectable attack conducted with components I_a ⊆ I, in which component i ∈ I_a is actively used, exists if and only if

normrank G(I_a) = normrank G(I_a \ i).    (2)

There are two important consequences of this result. Firstly, we can use (2) to calculate δ(i) of actuator i in small scale systems in the following way. We form all the combinations of sensors and actuators I_a ⊆ I, i ∈ I_a, of cardinality p. The initial value of p is set to 1. For each combination, we check if (2) is satisfied, which can be done efficiently (e.g., by using the Matlab function tzero). If we find a combination that satisfies (2), we stop the search. The value of δ(i) is then p. If (2) is not satisfied for any of the combinations of cardinality p, we increase p by 1 and repeat the process.

Secondly, as shown in the proof, the attacker can perfectly cover an arbitrarily large attack signal injected in i once (2) holds. Additionally, he/she can construct this attack off-line using only the model knowledge, which makes the attack decoupled from x(0) and u. Thus, the attack remains perfectly undetectable for any choice of x(0) and u, and the assumption x(0) = 0 and u = 0 is without loss of generality in this case. However, the attack is implemented in a feedforward manner, which makes it fragile with respect to modeling errors [35]. We further discuss these properties in Section VII.

B. Increasing δ

We now investigate how the deployment of new sensors and actuators affects δ.

Proposition 2:
Assume that a new component j (sensor or actuator) is deployed. Let δ(i) and δ'(i) be, respectively, the security indices of an arbitrary actuator i before and after the deployment. Then: (1) δ(i) ≤ δ'(i) ≤ δ(i) + 1 if j is an unprotected sensor; (2) δ(i) ≤ δ'(i) if j is a protected sensor; (3) δ(i) ≥ δ'(i) if j is an actuator.

Proposition 2 has two interesting consequences. Firstly, it implies that we can increase δ by placing additional sensors to monitor the system. Furthermore, δ can be used to determine which sensor placement is the most beneficial. For example, one optimality criterion can be to select the placement such that the minimum value of δ is as large as possible. If the system is of a small scale, and if a small number of sensors is being placed, we can simply go through all the combinations of sensors and pick the best. Secondly, Proposition 2 illustrates an interesting trade-off between security and safety. On the one hand, to make the system easier to control and more resilient to actuator faults, more actuators should be placed in the system. On the other, this may also decrease the security indices, so the actuators become easier to attack.

C. δ and Large Scale Networked Control Systems

We now outline difficulties that appear once a networked control system is of a large scale.
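Before detailing these difficulties, we note that the exhaustive placement selection sketched in Section III-B fits in a few lines. The sketch below is ours, not the paper's algorithm: the function name and the toy index are illustrative assumptions, and a real application would plug in an actual computation of δ.

```python
import itertools

def best_sensor_placement(candidates, k, actuators, index_of):
    # Try every k-subset of candidate sensor locations and keep the one
    # maximizing the minimum security index over all actuators.
    best_set, best_val = None, -float("inf")
    for placement in itertools.combinations(candidates, k):
        worst = min(index_of(i, placement) for i in actuators)
        if worst > best_val:
            best_set, best_val = placement, worst
    return best_set, best_val

# Toy stand-in for delta(i): the index of actuator i grows by one if its own
# state is monitored (purely illustrative, not a real index computation).
toy_index = lambda i, placement: 1 + (1 if i in placement else 0)

print(best_sensor_placement([0, 1, 2], 2, [0, 1], toy_index))  # ((0, 1), 2)
```

Since the number of k-subsets grows combinatorially, this is only viable for small systems, which is exactly the limitation motivating the robust index later in the paper.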
1) NP-Hardness of Problem 1: We showed earlier that δ can in general be obtained using a brute force search. However, this method is computationally intensive, and it is inapplicable to large scale networked systems. In fact, Theorem 1, which we introduce next, establishes that Problem 1 is NP-hard. Thus, there are no known polynomial time algorithms that can be used to solve this problem.

Theorem 1:
Problem 1 is NP-hard.
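Despite this hardness result, the brute-force search of Section III-A remains workable when the number of components is small. The sketch below is ours (all names hypothetical); instead of Matlab's tzero, it estimates the normal rank by evaluating G(z) at a few random points, which equals the true normal rank with probability one.

```python
import itertools
import numpy as np

def G_at(A, Ba, C, Da, z):
    # Transfer matrix G(z) = C (zI - A)^{-1} B_a + D_a evaluated at z.
    return C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, Ba) + Da

def normrank(A, Ba, C, Da, cols, trials=3, seed=0):
    # Normal rank of the columns `cols` of G, estimated at random points.
    if not cols:
        return 0
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(trials):
        z = complex(rng.normal(), rng.normal())
        ranks.append(np.linalg.matrix_rank(G_at(A, Ba, C, Da, z)[:, sorted(cols)]))
    return max(ranks)

def security_index(A, Ba, C, Da, i):
    # Smallest |I_a| with i in I_a satisfying condition (2); +inf if infeasible.
    m = Ba.shape[1]  # attackable components: actuators followed by sensors
    others = [j for j in range(m) if j != i]
    for p in range(1, m + 1):
        for extra in itertools.combinations(others, p - 1):
            Ia = set(extra) | {i}
            if normrank(A, Ba, C, Da, Ia) == normrank(A, Ba, C, Da, Ia - {i}):
                return p
    return float("inf")

# Toy plant: two redundant actuators drive one measured state; the third
# attack channel is the sensor. Choosing a_2 = -a_1 keeps y = 0, so delta = 2.
A = np.array([[0.5]])
Ba = np.array([[1.0, 1.0, 0.0]])
C = np.array([[1.0]])
Da = np.array([[0.0, 0.0, 1.0]])
print(security_index(A, Ba, C, Da, 0))  # 2
```

The outer loop over subsets is what makes the method exponential in the number of components, consistent with Theorem 1.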
2) Fragility of δ: Large scale networked control systems are complex systems that can change configuration over time. For example, in power grids, micro-grids can detach from the grid [36], some of the power lines may be turned off [37], or some measurements may become unavailable due to unreliable communication [38]. Unfortunately, the security index can be quite fragile with respect to changes in the realization of the system matrices A, B, C, as shown in the following example.
Example 1:
Let the realization of the system be

A = [ 0.1  0 ; 0.01  0.1 ],   B = [ 1 ; 0 ],   C = [ 0  1 ],

and assume that the sensor is protected. Then any input influences the output, which is protected, so δ(1) = +∞. However, if A(2,1) = 0, the transfer function from the actuator to the sensor is 0, which implies δ(1) = 1.

Lack of robustness of δ has two consequences. Namely, an actuator that appears to be secure in one realization of the system may be vulnerable in another. Thus, to find actuators that are vulnerable, one should calculate δ for different realizations of A, B, C. Due to NP-hardness, this cannot be done for large scale networked control systems. Additionally, even if we are able to go through all the realizations of matrices
A, B, C and calculate indices, ensuring that δ of every actuator is large enough for every realization may require a significant security budget. Naturally, we may first focus on defending those actuators that are vulnerable in any realization of the system. However, the question to answer is whether we can find these actuators efficiently.

Remark 1:
We assume that system variations occur infrequently compared to the time scale of the perfectly undetectable attacks. Hence, to the attacker, the system is linear and time-invariant.
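To make the fragility in Example 1 concrete, the two realizations can be compared numerically. The matrix values below restate those of Example 1 and should be treated as illustrative; the helper function is our own.

```python
import numpy as np

def transfer(A, B, C, z):
    # Scalar transfer function C (zI - A)^{-1} B evaluated at the point z.
    return (C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, B)).item()

B = np.array([[1.0], [0.0]])   # actuator drives the first state
C = np.array([[0.0, 1.0]])     # protected sensor measures the second state

A_coupled   = np.array([[0.1, 0.0], [0.01, 0.1]])  # A(2,1) != 0
A_decoupled = np.array([[0.1, 0.0], [0.00, 0.1]])  # A(2,1) = 0

z = 2.0 + 1.0j  # any evaluation point away from the poles
print(abs(transfer(A_coupled, B, C, z)))    # nonzero: delta(1) = +inf
print(abs(transfer(A_decoupled, B, C, z)))  # (numerically) zero: delta(1) = 1
```

An actuator that looks unattackable under one realization thus becomes trivially attackable under another, which is precisely the robustness issue the index δ_r introduced below is designed to avoid.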
3) Full Model Knowledge Attacker:
The third issue arises due to Assumption 1.(2). If the system is of a large scale, the assumption that the attacker possesses the exact knowledge of the entire realization A, B, C may be unrealistic. Lack of the full model knowledge represents a serious disadvantage for the attacker. Even if the attacker's knowledge is slightly inaccurate, he/she can get detected [35]. For this reason, Assumption 1.(2) can result in the index being too conservative, and lead to unnecessary spending of the security budget.
4) Replacement of δ: Due to the aforementioned three deficiencies, δ is not practical to use in large scale networked control systems. Therefore, in the next section, we introduce a robust security index δ_r that is based on a structural model of the system. We then argue in Section V that δ_r represents a good candidate for replacing δ in large scale systems. Particularly, δ_r can be calculated efficiently, and it is robust with respect to system variations. Furthermore, a small value of δ_r indicates that an actuator is vulnerable in any realization of the system, with respect to both the attacker with full model knowledge and the one with limited knowledge.

IV. ROBUST SECURITY INDEX δ_r

In this section, we introduce an upper bound on the security index δ. Based on this bound, we define the robust security index δ_r. Prior to that, we introduce some graph theory preliminaries and a structural model of the system. Proofs of the results from this section are available in Appendix B.

A. Graph Theory
Let G = (V, E) be a directed graph, with the set of nodes V = {v_1, ..., v_n} and the set of directed edges E ⊆ V × V. We denote by N_in(v_i) = {v_j ∈ V : (v_j, v_i) ∈ E} the in-neighborhood of v_i. We say that two nodes v_j and v_k are non-adjacent if there exists no edge between them. Otherwise, we say they are adjacent. A directed path from v_{j_1} to v_{j_l} is a sequence of nodes v_{j_1}, v_{j_2}, ..., v_{j_l}, where (v_{j_k}, v_{j_{k+1}}) ∈ E for 1 ≤ k < l. A directed path that does not contain repeated nodes is called a simple directed path. A vertex separator (resp. an edge separator) of non-adjacent nodes v_a and v_b is a subset of nodes V' ⊆ V \ {v_a, v_b} (resp. edges E' ⊆ E) whose removal deletes all the directed paths from v_a to v_b. If each edge (v_i, v_j) is assigned a weight w_{v_i v_j}, the cost of the edge separator E' is defined as Σ_{(v_i, v_j) ∈ E'} w_{v_i v_j}.

B. Structural Model
The upper bound and the robust index we introduce in this section are based on a structural model [A], [B], [C] of the system [33]. The structural matrix [A] ∈ {0, 1}^{n_x × n_x} has only binary elements. If [A](i,j) = 0, then A(i,j) = 0 for every realization A. If [A](i,j) = 1, then A(i,j) can take any value from R. The same holds for the matrices [B] ∈ {0, 1}^{n_x × n_u} and [C] ∈ {0, 1}^{(n_y + n_e) × n_x}. On the one hand, this model is less informative, since it does not use the exact values of the coefficients. On the other hand, this also makes it more robust to system variations, which are to be expected in large scale networked systems.

We restrict our attention to a special case of matrices [B] and [C]. We assume that each actuator directly influences only one state, and each sensor measures only one state. These are commonly adopted simplifying assumptions in sensor and actuator placement problems for large scale networked control systems [2], [3], [39]. Additionally, to ensure that every B has full column rank, we assume that [B] has full column rank and exclude realizations of [B] where an actuator is idle (it does not influence any state).

Assumption 2:
Let e_i be the i-th vector of the canonical basis of appropriate size. We assume: (1) [B] = [e_{i_1} ... e_{i_{n_u}}]; (2) [B] has full column rank; (3) if [B](i,j) = 1, then B(i,j) ≠ 0 for every realization B; (4) [C] = [e_{j_1} ... e_{j_{n_y + n_e}}]^T.

Assumptions 2.(1)–2.(3) are necessary for the derivation of the results that follow. Assumption 2.(4) is introduced to simplify the presentation, and the results can be generalized to the case when this assumption does not hold.

We now introduce a graph G = {V, E} of the structural model [A], [B], [C]. The set of nodes is V = X ∪ U ∪ Y, where X = {x_1, ..., x_{n_x}} is the set of states, U = {u_1, ..., u_{n_u}} is the set of actuators, and Y = {y_1, ..., y_{n_y + n_e}} is the set of sensors. The set of edges is E = E_ux ∪ E_xx ∪ E_xy, where E_ux = {(u_j, x_i) : [B](i,j) ≠ 0} are the edges from the actuators to the states, E_xx = {(x_j, x_i) : [A](i,j) ≠ 0} are the edges between the states, and E_xy = {(x_j, y_i) : [C](i,j) ≠ 0} are the edges from the states to the sensors. The extended graph is given by G_t = {V ∪ t, E_t}, where E_t = E ∪ {(y_i, t) : y_i ∈ Y}. In what follows, we use G_t to derive an upper bound on δ. We first clarify how this graph is constructed through an example.

Example 2:
Consider a structural model [A], [B], [C] with three states, two actuators, and two sensors. The corresponding extended graph G_t is shown in Fig. 1.²

²In the remainder of the paper, we substitute the joint set of components I with the sets of actuators U and sensors Y.

Fig. 1. The extended graph G_t (Example 2).

C. Upper Bound on δ

We now introduce Theorem 2, where we derive an upper bound on δ using G_t and vertex separators. Theorem 2 is inspired by [9], where the connection between the existence of perfectly undetectable attacks and the size of the minimum vertex separator was introduced.

Theorem 2:
Let G_t be the extended graph, let U_a and Y_a be the attacked actuators and sensors, respectively, with u_i ∈ U_a, and let

X_a = {x_j ∈ X : (u_k, x_j) ∈ E_ux, u_k ∈ U_a \ u_i}.    (3)

If X_a ∪ Y_a is a vertex separator of u_i and t in the graph G_t, then δ(u_i) ≤ |U_a| + |Y_a| for any realization of the matrices A, B, C.

The intuition behind Theorem 2 is the following. An attack against u_i can be thought of as the attacker injecting a flow into the network through u_i. To stay perfectly undetectable, he/she wants to prevent the flow from reaching the operator, modeled by t. The attacker uses a simple strategy where he/she injects negative flows into the states X_a using the actuators U_a \ u_i, and cancels out the flows going through these states. The same strategy is applied in the case of Y_a. If X_a ∪ Y_a is a vertex separator of u_i and t, then the flow is successfully canceled out, so the attack remains perfectly undetectable. Furthermore, this strategy can be applied for any realization A, B, C.

Example 3:
Let G_t be as shown in Fig. 1, let u_1 be the actuator for which we are calculating the upper bound, and assume U_a = {u_1, u_2} and Y_a = {y_1}. Then X_a = {x_2}. One can notice that by removing X_a ∪ Y_a = {x_2, y_1}, we delete all the directed paths from u_1 to t. Thus, X_a ∪ Y_a is a vertex separator of u_1 and t, so δ(u_1) ≤ |U_a| + |Y_a| = 3 in any realization of the system.

D. Robust Security Index δ_r: Problem Formulation

We now use Theorem 2 to introduce the robust security index δ_r(u_i) for every u_i ∈ U. Essentially, δ_r(u_i) is the best possible upper bound from Theorem 2.

Problem 2: Calculating δ_r

minimize_{U_a, Y_a}  δ_r(u_i) = |U_a| + |Y_a|
subject to  X_a is given by (3),    (C1)
            Y_a are not protected,  (C2)
            X_a ∪ Y_a is a vertex separator of u_i and t,    (C3)
            u_i ∈ U_a.    (C4)

The objective reflects our goal to find an upper bound with the smallest possible value. The constraints: (C1) and (C2) ensure that the separator consists only of the states X_a for which there exists an actuator from U_a \ u_i adjacent to them, and of unprotected sensors Y_a; (C3) ensures that X_a ∪ Y_a is a vertex separator of u_i and t; (C4) ensures that u_i is included in the attacked components.

Remark 2:
Just as Problem 1, Problem 2 does not have to be solvable. This occurs when there exists a directed path between u_i and a protected measurement that cannot be intersected by a vertex separator. In that case, the attacker cannot in general use the previously introduced strategy, so we adopt δ_r(u_i) = +∞. Additional interpretations of δ_r being equal to +∞ are provided in Section V.

Remark 3:
In the structural systems theory, it is common to use the structural model to derive results that hold for almost any realization of the system [33]. We depart from this type of analysis; that is, the robust security index δ_r is in general not equal to δ in almost any realization (see Section VII).

In the next section, we argue that δ_r is a good candidate to replace δ in large scale systems. Particularly, we show that δ_r can be efficiently calculated by solving the minimum s–t cut problem. Additionally, the fact that δ_r is derived based on the structural model of the system makes it robust to system variations. Finally, δ_r can also be related to different types of limited model knowledge attackers.

V. PROPERTIES OF δ_r

We now outline properties of δ_r. Before we move to the analysis, we revisit the minimum s–t cut problem. Proofs of the results from this section are available in Appendix C.

A. Minimum s–t Cut Problem
Let G(V, E) be a directed graph, let the source s and the sink t be elements of V, and assume that a weight w_{v_i v_j} is associated with each edge (v_i, v_j) ∈ E. A partition of V into V_s and V_t = V \ V_s, such that s ∈ V_s and t ∈ V_t, is called an s–t cut. We define the cut capacity as

C(V_s) = Σ_{(v_i, v_j) ∈ E : v_i ∈ V_s, v_j ∈ V_t} w_{v_i v_j}.

The minimum cut problem can then be formulated as

minimize_{V_s}  C(V_s)  subject to  V_s, V_t is an s–t cut.   (4)

The minimum s–t cut problem can also be interpreted as the problem of finding a minimum cost edge separator of s and t. Once (4) is solved, this separator can be recovered from V_s as E_c = {(v_i, v_j) ∈ E : v_i ∈ V_s, v_j ∈ V_t}, and its cost is C(V_s).

B. Efficient Computation
In contrast to δ, which is NP-hard to calculate, the exact value of δ_r can be obtained efficiently. Particularly, the optimal value of Problem 2 can be calculated by solving the minimum s–t cut problem (Proposition 3), which can be done in polynomial time using well established algorithms such as [40]. We remark that Proposition 3 extends previous findings on the static security index [27], where an upper bound was also obtained by solving the minimum s–t cut problem.

The first step towards proving Proposition 3 is to transform G_t into a convenient graph G_{u_i} = (V_{u_i}, E_{u_i}) with an additional set of edge weights W_{u_i}.

Fig. 2. The graph G_u (Example 4).

This graph depends on the actuator u_i for which we are calculating δ_r(u_i). In what follows, we explain how G_{u_i} is constructed. We use the following terminology: x_j ∈ X is said to be of Type 1 if it is adjacent to some u_k ∈ U \ {u_i}; otherwise, x_j is of Type 2.

Remark 4:
In [9], it was explained how to construct a graph for finding a minimum vertex separator. However, in our case, not all the states can be removed, and protected sensors are possible, so the graph needs to be adjusted accordingly.

The set V_{u_i} contains the following nodes: (1) u_i and t (the source and the sink node); (2) x_j^in and x_j^out for every x_j of Type 1; (3) every x_j of Type 2. The sets E_{u_i} and W_{u_i} are constructed according to the following rules.

(1) If (u_i, x_j) ∈ E_ux, then (u_i, x_j) ∈ E_{u_i} and w_{u_i x_j} = +∞.
(2) For every (x_j, x_k) ∈ E_xx, x_j ≠ x_k, we add an edge of weight +∞ to E_{u_i} subject to the following rules:
    - If x_j is Type 1 and x_k is Type 1, (x_j^out, x_k^in) ∈ E_{u_i};
    - If x_j is Type 1 and x_k is Type 2, (x_j^out, x_k) ∈ E_{u_i};
    - If x_j is Type 2 and x_k is Type 1, (x_j, x_k^in) ∈ E_{u_i};
    - If x_j is Type 2 and x_k is Type 2, (x_j, x_k) ∈ E_{u_i}.
(3) For every x_j^in and x_j^out that correspond to a state x_j of Type 1, (x_j^in, x_j^out) ∈ E_{u_i} and w_{x_j^in x_j^out} = 1.
(4) For every x_j of Type 1 (resp. Type 2) that is measured, we add (x_j^out, t) (resp. (x_j, t)) to E_{u_i}. If any of the sensors measuring x_j is protected, we set the edge weight to +∞. Otherwise, the edge weight equals the number of unprotected sensors measuring x_j.

Example 4:
Assume the same structural matrices as in Example 2. Let the first sensor be unprotected and the second one protected. The graph G_u constructed for the purpose of solving Problem 2 for actuator u is shown in Fig. 2.

We now introduce Proposition 3, which tells us that we can calculate the optimal value of Problem 2 by solving the minimum u_i–t cut problem in G_{u_i}.

Proposition 3:
Let δ_r(u_i) be the optimal value of Problem 2, and let δ* be the optimal value of the minimum u_i–t cut problem in G_{u_i}. If Problem 2 is solvable, then δ_r(u_i) = δ* + 1 (Statement 1). Otherwise, δ_r(u_i) = δ* = +∞ (Statement 2).

C. Robustness
The second important property of δ_r is its robustness to system variations. Namely, δ_r is derived from the structural model [A], [B], [C], which does not use the exact values of the system parameters. Hence, δ_r has the same value for any realization A, B, C, which is not the case with δ.

D. Relation of δ_r to Different Types of Attackers

We now explain how δ_r is related to the full model knowledge attacker. We also introduce two new attacker types without full model knowledge, and discuss their relation to δ_r. To distinguish between the different attacker types, in the remainder of the paper we refer to the full model knowledge attacker as the Type 1 attacker, and to the new attackers as the Type 2 and Type 3 attackers.
1) Type 1 Attacker:
Recall that δ_r(u_i) ≥ δ(u_i) holds for any u_i ∈ U and any realization A, B, C. Thus, a small δ_r(u_i) indicates a serious vulnerability with respect to the Type 1 attacker. Not only can this attacker conduct a perfectly undetectable attack against u_i using a small number of components, but he/she can do so in any realization.

Unfortunately, as shown in Section VII, δ_r(u_i) is not a tight upper bound on δ(u_i). Thus, a large δ_r(u_i) does not mean that u_i is secured against the Type 1 attacker. For instance, although a solution of Problem 2 may be U_a ∪ Y_a = {u_i, y_j, y_k}, there may exist a realization in which u_i and y_j alone suffice to conduct a perfectly undetectable attack against u_i. However, the Type 1 attacker then needs to be sure that this realization is present. If the realization occurs rarely, the attacker may need to wait for a long time, which increases his/her chances of being discovered in the meantime. To avoid this, the Type 1 attacker may want to compromise sensors and actuators that allow him/her to conduct a perfectly undetectable attack against u_i in any realization A, B, C. Interestingly, the minimum number of sensors and actuators that enables this is δ_r(u_i).

Proposition 4:
Let U_a and Y_a be the attacked actuators and sensors, respectively. If the Type 1 attacker can conduct a perfectly undetectable attack that actively uses u_i ∈ U_a for any realization of [A], [B], [C], then |U_a| + |Y_a| ≥ δ_r(u_i) must hold.

Proposition 4 tells us that a large δ_r(u_i) prevents the Type 1 attacker from easily gathering resources that allow him/her to attack u_i in any system realization. The following corollary follows directly from the proof of Proposition 4.

Corollary 1: If δ_r(u_i) = +∞, then there exist realizations of [A], [B], [C] in which δ(u_i) = +∞.
2) Type 2 Attacker:
We now show that a small δ_r(u_i) implies that u_i is vulnerable even if the attacker does not know the entire realization A, B, C. Particularly, we introduce the Type 2 attacker, whose resources are limited to local model knowledge and measurements. We then prove that if this attacker compromises the right combination of δ_r(u_i) components, he/she can attack u_i and remain perfectly undetectable.

Assumption 3:
The Type 2 attacker: (1) Can read and change the values of the attacked control signals U_a and measurements Y_a arbitrarily; (2) Possesses knowledge of [A], [B], [C] and of the rows A(j,:), B(j,:) that correspond to every state x_j adjacent to an actuator from U_a; (3) Knows, for every k, x_j(k) for any x_j adjacent to an actuator from U_a, and x_l(k) for any x_l ∈ N^in_{x_j}; (4) Wants to ensure that an attack remains perfectly undetectable.

The Type 2 attacker's knowledge is thus limited to the structural model and the rows of A and B that correspond to the actuators U_a; this attacker does not know the entire realization A, B, C. The attacker is also assumed to know the values of the states adjacent to U_a and of their in-neighbors. The attacker can obtain these values by deploying additional sensors, but may also get this information for free. Namely, control algorithms sometimes base decisions on the neighboring and local states to achieve better performance [41]. Hence, if the attacker remains undetected, nodes may continue sending state information to the compromised actuators, not knowing that these actuators are controlled by the attacker.

Proposition 5, which we introduce next, relates the Type 2 attacker to δ_r. Before we proceed to the proposition, we point out that the assumption x(0) = 0, u = 0 is not without loss of generality for this result to hold, as explained later.

Proposition 5:
Let U_a and Y_a be the attacked actuators and sensors, respectively, let u_i ∈ U_a, and let X_a be defined as in (3). The Type 2 attacker can conduct a perfectly undetectable attack in which u_i is actively used, in any realization of [A], [B], [C], if and only if X_a ∪ Y_a is a vertex separator of u_i and t in G_t.

The result has two consequences. Firstly, recall that δ_r(u_i) equals the minimum number of components that ensures X_a ∪ Y_a is a vertex separator of u_i and t, with u_i ∈ U_a. This implies that the Type 2 attacker with the right combination of δ_r(u_i) components can conduct a perfectly undetectable attack against u_i in any system realization. Particularly, it follows from the proof that the Type 2 attacker can then use a strategy similar to the one introduced to prove Theorem 2. Yet, the strategy is implemented on-line and in a feedback fashion, based on the knowledge of local states and measurements. This is the reason why a steady state assumption is required. For instance, if u starts changing during the attack, the Type 2 attacker can be revealed (see Section VII). Secondly, as for the Type 1 attacker, δ_r(u_i) is the minimum number of components that allows the Type 2 attacker to conduct a perfectly undetectable attack against u_i in any system realization. Overall, a small δ_r(u_i) implies that u_i is vulnerable even though the attacker does not possess full model knowledge.
3) Type 3 Attacker:
While the previous two propositions show that a small value of δ_r(u_i) implies that u_i is vulnerable, a perhaps more interesting question is whether a large δ_r(u_i) implies that u_i is secured. Unfortunately, we cannot make such a claim. Namely, both the Type 1 and the Type 2 attackers may be able to conduct a perfectly undetectable attack against u_i with fewer than δ_r(u_i) components in some realizations. However, we do argue that a large value of δ_r(u_i) provides a reasonable level of security. Intuitively, a large δ_r(u_i) implies that an attack against u_i can trigger a large number of sensors. To avoid being detected by these sensors, the attacker has to mount a synchronized attack using other components that cancels out the effect of the attack. Thus, the attacker either has to possess a very precise model and use other actuators to cancel the effect of the attack, or he/she needs to compromise a large number of sensors. To illustrate this point, we introduce the Type 3 attacker.

Assumption 4:
The Type 3 attacker: (1) Can read and change the values of the attacked control signals U_a and measurements Y_a arbitrarily; (2) Knows [A], [B], [C]; (3) Wants to ensure that an attack remains perfectly undetectable.

The Type 3 attacker knows only [A], [B], [C]. Hence, this attacker cannot constructively use other actuators to cover an attack against u_i, since he/she does not know which attack signals to inject through these actuators. However, if the system is in a steady state, the Type 3 attacker can use the replay attack strategy [20] to conduct a perfectly undetectable attack against u_i. In this strategy, the attacker covers an attack against u_i by compromising sufficiently many sensors and replaying the steady state values from these sensors. Proposition 6 establishes the connection between the number of sensors the Type 3 attacker needs to compromise and δ_r(u_i).

Proposition 6:
Let u_i and Y_a be the attacked actuator and sensors, respectively. If the Type 3 attacker can attack u_i and ensure that the attack remains perfectly undetectable, then |Y_a| ≥ δ_r(u_i) − 1. If δ_r(u_i) = +∞, the Type 3 attacker cannot attack u_i and ensure that the attack remains perfectly undetectable.

In other words, if the Type 3 attacker wants to ensure that the attack against u_i remains perfectly undetectable, then he/she needs to compromise at least δ_r(u_i) − 1 sensors. Thus, a large value of δ_r(u_i) makes an attack against u_i more difficult, and the Type 3 attacker is expected to avoid such actuators. We clarify the result further in the following example.

Example 5:
Consider a system with two states whose structural matrices [A], [B], [C] specify a single actuator u and a single sensor, and assume the Type 3 attacker controls only actuator u. It can be verified that the robust security index of this actuator is δ_r(u) = 2. Thus, according to Proposition 6, the attacker needs to compromise at least δ_r(u) − 1 = 1 sensor to ensure that an attack against u remains perfectly undetectable. Indeed, consider a realization A, B, C in which the state actuated by u couples into the measured state with gain a. If a ≠ 0, any attack against u is visible in the sensor measurement. Since the Type 3 attacker knows only the structural model of the system, he/she does not know the exact value of a. Thus, he/she needs to compromise the sensor to ensure that an attack against u remains perfectly undetectable.
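The role of the unknown parameter a in Example 5 can be checked numerically. The short sketch below simulates an assumed realization consistent with the example's structure (the attacked actuator drives one state, which couples into the single measured state with gain a); all numerical values are illustrative and not taken from the paper.

```python
def attack_visible(a, steps=5, tol=1e-12):
    """Does an attack on u reach the sensor in this assumed realization?
    u drives x1; x1 couples into the measured state x2 with gain a."""
    A = [[0.5, 0.0],
         [a,   0.5]]          # assumed stable realization; a is the coupling
    B = [1.0, 0.0]            # u enters x1 only
    C = [0.0, 1.0]            # the single sensor measures x2
    dx = [0.0, 0.0]           # deviation of the state caused by the attack
    for _ in range(steps):
        au = 1.0              # some nonzero attack signal injected into u
        dx = [A[0][0] * dx[0] + A[0][1] * dx[1] + B[0] * au,
              A[1][0] * dx[0] + A[1][1] * dx[1] + B[1] * au]
        if abs(C[0] * dx[0] + C[1] * dx[1]) > tol:
            return True       # the deviation shows up in the measurement
    return False
```

Consistent with the example, the attack is visible precisely when a ≠ 0, which is why the attacker who knows only the structure must compromise the sensor.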
4) Summary:
The main conclusions of this subsection are as follows. Firstly, a small δ_r(u_i) indicates that u_i is vulnerable with respect to the Type 1 and Type 2 attackers in any realization of the system. Secondly, a large δ_r(u_i) does not indicate security with respect to these attackers, but it does prevent them from easily gathering resources for attacking u_i in any realization of the system. Finally, a large δ_r(u_i) indicates security with respect to the Type 3 attacker. For these reasons, it is useful to derive strategies for increasing δ_r. We consider this problem in the next section.

VI. SENSOR PLACEMENT FOR INCREASING δ_r

In this section, we discuss how δ_r can be increased by placing additional sensors. We derive sets of suitable positions for placing sensors, and then introduce two sensor placement problems with the objective of increasing the robust indices of actuators. We show that these problems have convenient submodular structures, which allow us to efficiently obtain suboptimal solutions with guarantees on performance. Before we move to the analysis, we introduce the necessary background on submodular optimization. Proofs of the results from this section are available in Appendix D.

A. Submodular Optimization
We begin by introducing the definitions of submodular and nondecreasing set functions, and by recalling some well-known properties of these functions [42].
Definition 2:
Let X = {x_1, . . . , x_n} be a finite non-empty set and let F : 2^X → R be a set function. We say that F is submodular if F(X_a ∪ {x}) − F(X_a) ≥ F(X_b ∪ {x}) − F(X_b) holds for all X_a ⊆ X_b ⊆ X and x ∈ X \ X_b. We say that F is nondecreasing if F(X_a) ≤ F(X_b) holds for all X_a ⊆ X_b ⊆ X.

Lemma 1:
Let F_1, . . . , F_n be submodular and nondecreasing set functions and let c be an arbitrary constant. Then g_1(X_a) = Σ_{i=1}^n F_i(X_a) and g_2(X_a) = min{F_1(X_a), c} are submodular and nondecreasing set functions.

Submodularity has an important role in combinatorial optimization. Particularly, many interesting problems with submodular structure can be approximately solved in polynomial time with guarantees on performance [43]. In this work, we are interested in the following two problems:

minimize_{X_p}  |X_p|  subject to  F(X_p) ≥ F_max,   (5)

maximize_{X_p}  F'(X_p)  subject to  |X_p| ≤ k_max,   (6)

where F and F' are nondecreasing and submodular set functions that satisfy F(∅) = F'(∅) = 0, F_max ∈ Z_{≥0}, and k_max ∈ Z_{≥0}. Additionally, F is assumed to be integer valued. Suboptimal solutions of both (5) and (6) can be obtained in polynomial time with relatively good performance guarantees.

Lemma 2: [44, Theorem 1] Let |X*| be the optimal value of (5), and let H(d) = Σ_{i=1}^d 1/i. A suboptimal solution X_g of (5) that satisfies |X_g| ≤ H(max_{x ∈ X} F({x})) |X*| can be obtained in polynomial time using the algorithm given in [44, Section 2].

Lemma 3: [45, Proposition 4.3] Let F* be the optimal value of (6). A suboptimal solution X_g of (6) that satisfies F'(X_g) ≥ (1 − 1/e) F* can be obtained in polynomial time using the algorithm given in [45, Section 4].

We remark that the bounds introduced in Lemmas 2 and 3 represent worst case performance guarantees. The algorithms mentioned in the lemmas can perform better in practice.

B. Suitable Locations to Place Sensors
We now introduce a suitable set of states X_{u_i} connected to each actuator u_i. We show that if we place a new sensor measuring any of the states from X_{u_i}, then δ_r(u_i) is guaranteed to increase. Moreover, if every state adjacent to an actuator is also adjacent to a sensor, then placing a new sensor to measure a state from X_{u_i} is the only way to increase δ_r(u_i).

Theorem 3:
Let G_t be the extended graph, let u_i be an actuator with δ_r(u_i) ≠ +∞, and let x_k ∈ X be such that there exists a directed path u_i, x_j, . . . , x_k in which none of the states is adjacent to an actuator from U \ {u_i}. Let the set of all such states be denoted by X_{u_i}. Assume that a new sensor y_l is placed to measure an arbitrary state from X_{u_i}, and let δ'_r(u_i) be the robust index of u_i after the placement. Then:
(1) δ'_r(u_i) = +∞ if y_l is protected;
(2) δ'_r(u_i) = δ_r(u_i) + 1 if y_l is unprotected.
Furthermore, assume that for every x_j ∈ X for which there exists (u_k, x_j) ∈ E_ux, there also exists (x_j, y_p) ∈ E_xy. Then δ_r(u_i) is increased if and only if a new sensor is placed to measure a state from X_{u_i}.

The sets X_{u_1}, . . . , X_{u_{n_u}} introduced in the previous theorem have two important properties. Firstly, for every u_i ∈ U, X_{u_i} can easily be found as follows. We first remove from the graph G_t all the states adjacent to an actuator from U \ {u_i}. The set X_{u_i} is then the set of all states to which u_i is connected by a directed path, and the depth first search algorithm [46] can be applied to find these states. Secondly, these sets are not affected by the placement of new sensors. Thus, if we place n sensors to monitor the states from X_{u_i}, δ_r(u_i) is guaranteed to increase by n.

In what follows, we use Theorem 3 to formulate two sensor placement problems. As we shall see, suboptimal solutions with performance guarantees can be obtained efficiently for both of these problems, even in large scale networked systems.

Remark 5:
The sensor placement problems we introduce next are developed for increasing δ_r, which does not in general imply that we increase δ at the same time. However, the placement of new sensors cannot decrease δ (Proposition 2), so we certainly do not degrade this index. In fact, we illustrate in Section VII that by increasing δ_r, we often indirectly increase δ. Future work will investigate how to preselect some of the states from the previously introduced sets, such that δ is known to increase for at least some classes of realizations.

C. Sensor Placement Problems

1) Placement of Unprotected Sensors:
We first discuss the problem of placing unprotected sensors. The goal is to place these sensors so as to increase δ_r of every actuator u_i by at least k_{u_i} ∈ Z_{≥0}. We assume unprotected sensors to be inexpensive, so we do not have a sharp constraint on the number of sensors we may place. Yet, we still want to place the minimum number of them that achieves the desired benefit.

Let the set of candidate sensors be Y_s = {y_1, . . . , y_{n_s}}, and let x_{y_i} be the state measured by y_i. For every actuator u_i, we define g_{u_i}(Y_p) = min{Σ_{y_j ∈ Y_p} |{x_{y_j}} ∩ X_{u_i}|, k_{u_i}}, where Y_p ⊆ Y_s is the set of newly placed sensors. This function equals k_{u_i} if at least k_{u_i} sensors from Y_p measure states from X_{u_i}. It then follows from Theorem 3 that δ_r(u_i) is increased by at least k_{u_i}. Additionally, if every state adjacent to an actuator is also adjacent to a sensor, then δ_r(u_i) is increased by exactly k_{u_i}.

Let G(Y_p) = Σ_{u_i ∈ U} g_{u_i}(Y_p) be the total gain achieved by the placement Y_p. If G(Y_p) ≥ Σ_{u_i ∈ U} k_{u_i}, then the robust indices of all the actuators are increased by the desired values. The problem of placing unprotected sensors is then

minimize_{Y_p ⊆ Y_s}  |Y_p|  subject to  G(Y_p) ≥ Σ_{u_i ∈ U} k_{u_i}.   (7)

The objective function we are minimizing is the number of deployed sensors. The constraint implies that we continue placing sensors until the robust indices of all the actuators are guaranteed to increase by the desired values. The following proposition shows that this problem is an instance of Problem (5), so a suboptimal solution can be found in polynomial time with the guarantees stated in Lemma 2.

Proposition 7:
Problem (7) is an instance of Problem (5).
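To make the two computational ingredients concrete, the sketch below implements the depth first search that recovers X_{u_i} (Theorem 3) and a greedy cover routine of the kind behind Lemma 2 and Proposition 7. The data conventions (edge sets as Python sets of pairs, the gain function G passed as a callable) are our illustrative choices, not the paper's.

```python
def reachable_states(ui, E_ux, E_xx):
    """X_{u_i} (Theorem 3): states reachable from u_i along directed paths
    that avoid every state adjacent to an actuator other than u_i."""
    blocked = {x for (u, x) in E_ux if u != ui}   # states adjacent to U \ u_i
    succ = {}
    for (xj, xk) in E_xx:
        succ.setdefault(xj, []).append(xk)
    stack = [x for (u, x) in E_ux if u == ui and x not in blocked]
    seen = set()
    while stack:                                  # iterative depth first search
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(xn for xn in succ.get(x, [])
                         if xn not in blocked and xn not in seen)
    return seen


def greedy_cover(candidates, G, target):
    """Greedy submodular cover: repeatedly add the candidate sensor with the
    largest marginal gain in G until G reaches the target (cf. Lemma 2)."""
    chosen = set()
    while G(chosen) < target:
        best, gain = None, 0
        for y in candidates - chosen:
            g = G(chosen | {y}) - G(chosen)
            if g > gain:
                best, gain = y, g
        if best is None:
            return None       # target unreachable: Problem (7) has no solution
        chosen.add(best)
    return chosen
```

With X_{u_i} computed this way, instantiating G as the gain function of Problem (7) and calling greedy_cover with target Σ_{u_i} k_{u_i} yields a suboptimal placement with the guarantee of Lemma 2.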
2) Placement of Protected Sensors:
One can also consider the problem of deploying protected sensors. One objective could be to increase δ_r to +∞ for as many actuators as possible, which would prevent the Type 3 attacker from attacking these actuators. Since protected sensors may be expensive, we assume that the operator is limited to k_max sensors.

The problem can be formulated as follows. Let X_p ⊆ X be the subset of states that we want to measure using the protected sensors. Similar to the previous placement problem, we first define the function g'_{u_i}(X_p) = min{|X_p ∩ X_{u_i}|, 1} for each u_i. If g'_{u_i}(X_p) = 1, then there exists a protected sensor measuring a state from X_{u_i}, and we know from Theorem 3 that δ_r(u_i) = +∞. Otherwise, g'_{u_i}(X_p) = 0.

Let U_p ⊆ U be the subset of actuators whose robust indices we want to increase to +∞. We can then define the objective function as G'(X_p) = Σ_{u_i ∈ U_p} g'_{u_i}(X_p). The value of this function equals the number of actuators whose robust indices equal +∞ after placing protected sensors at the locations X_p. Naturally, we want to maximize this gain function with no more than k_max deployed sensors. The problem we want to solve can then be formulated as

maximize_{X_p ⊆ X}  G'(X_p)  subject to  |X_p| ≤ k_max.   (8)

We now show that (8) is an instance of (6). It then follows from Lemma 3 that a suboptimal solution of (8) with a (1 − 1/e) approximation ratio can be obtained in polynomial time.

Proposition 8:
Problem (8) is an instance of Problem (6).

VII. ILLUSTRATIVE EXAMPLES
We now discuss the theoretical developments through illustrative numerical examples.
A. Comparison of δ and δ r
1) Model:
We consider the IEEE 14 bus system shown in Fig. 3. The system is controlled using 5 generators located at buses 1, 2, 3, 6, and 8. We modeled the system using linearized swing equations, where each generator is represented by two states (rotor angle φ_i and frequency ω_i = φ̇_i) and each load bus by one state (voltage angle θ_i) [47]. The parameters given in [48] were used. The operator has access to phasor measurement units providing measurements of the voltage angles at seven of the buses. We considered the following system realizations:
• Normal operation, as shown in Fig. 3 (Realization 1);
• Power line (Bus 4, Bus 7) switched off (Realization 2);
• Micro-grid consisting of Bus 3 and Generator 3 detached from the grid (Realization 3);
• One of the voltage angle measurements stops being available (Realization 4).
We assumed that every generator and every measurement can be compromised by the attacker. Furthermore, the attacker is assumed to be able to attack the network by changing loads at some buses [49]. Particularly, the loads at four of the buses were assumed to have a considerable effect on the network, and were modeled as additional actuators.

Fig. 3. Schematic of IEEE 14 bus system [13].
Fig. 4. The value of the security index δ and the robust security index δ_r of Generators 1–5 for different realizations of the system.
2) Robustness:
We first compare δ and δ_r in terms of robustness. For this purpose, we calculated the values of δ and δ_r of all the generators in the aforementioned four realizations of the system. The results are shown in Fig. 4.

Firstly, the results confirm that δ depends on the realization of the system. Thus, if the operator decides to use δ as a security index, it is not sufficient to consider only one realization. For example, Generator 3, which appears to be the second most secured in Realization 1, becomes one of the two most vulnerable in Realization 3. A less evident observation is that the use of δ can lead to a considerable security allocation cost. Particularly, we see that the minimum value of δ over the realizations is quite similar for all the generators (except perhaps Generator 4). Therefore, ensuring that each generator has a sufficiently large security index δ for every realization of the system may be very hard, and would require a large security investment.

Evidently, the values of δ_r do not depend on the realization. Therefore, a small value of δ_r(u_i) implies that actuator u_i is vulnerable in any system realization. For example, since δ_r(G2) = 2, Generator 2 can be attacked by the Type 1 and Type 2 attackers by compromising only two components in any realization of the system. However, as can be seen, δ_r is not a tight upper bound on δ. Thus, a large δ_r does not necessarily imply security, which is the main drawback of δ_r. For instance, note that δ(G3) = 2 in the third realization. Hence, the Type 1 attacker can conduct a perfectly undetectable attack against Generator 3 in this realization by compromising two components, although δ_r(G3) = 6.
3) Computing δ and δ_r: We now compare the computational effort needed to calculate δ and δ_r. To calculate δ, we used the brute force search method explained in Section III. To calculate δ_r, we used the maxflow function included in Matlab R2017. We kept the realization of the system fixed to Realization 1, and increased the number of sensors by placing new sensors at random locations. We then measured the time needed to calculate δ and δ_r for Generator 4.

The results are shown in Fig. 5. As expected, the effort for calculating δ grows exponentially with the number of newly added sensors. Furthermore, note that this effort scales with the number of realizations for which we want to calculate δ. The time needed for calculating δ_r was almost unaffected by placing this relatively small number of sensors, and remained below 0.01 s in all the cases. Additionally, δ_r is calculated only once, since it has the same value in any realization.

Fig. 5. Computational time required for finding the exact value of δ and δ_r of Generator 4 as the number of sensors varies.

Fig. 6. Increase of the security index δ for Generator 1 and Generator 2.
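For reference, the maxflow computation used above for δ_r can be sketched with the standard Edmonds–Karp augmenting path algorithm; the sketch below is a generic illustration (node indices, edge lists, and the function name are our conventions), not the code used in the experiments.

```python
from collections import deque

def min_st_cut(n, edges, s, t):
    """Minimum s-t cut value via Edmonds-Karp max-flow.
    edges: list of (u, v, capacity) triples over nodes 0..n-1."""
    INF = float("inf")
    cap = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        cap[u][v] += w
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            # no augmenting path left: by max-flow/min-cut duality,
            # the accumulated flow equals the minimum cut capacity
            return flow
        # bottleneck capacity along the path found
        b, v = INF, t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        if b == INF:
            return INF        # an all-infinite path: the cut is infinite
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b
```

Run on the graph G_{u_i} of Section V-B (with u_i as source and t as sink), the returned cut value plus one gives δ_r(u_i) per Proposition 3.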
4) Increasing δ and δ_r: We now investigate whether by increasing δ_r we also increase δ. We focus on Generators 1 and 2, since these generators have the lowest values of δ_r. Using Theorem 3, we obtained that suitable locations for placing additional sensors are X_{G1} = {φ, ω, θ} for Generator 1 and X_{G2} = {φ, ω} for Generator 2.

We first investigated how the placement of one protected sensor influences δ. We placed a sensor at each of the locations from X_{G1} in turn, and measured the increase of δ(G1). While placing the protected sensor at these locations increases δ_r(G1) to +∞, it can be seen from Fig. 6 that δ(G1) did not increase to +∞ in any of the four realizations we considered. Yet, an increase of δ(G1) by more than one was achieved in the majority of the cases, which is not possible to achieve by placing an unprotected sensor (Proposition 2). The experiment was also conducted for Generator 2. Similarly, δ(G2) did not increase to +∞ in any of the four realizations. However, the placement of one protected sensor led to an increase of δ(G2) by at least three for all the locations from X_{G2} and all the realizations.

We also considered placing one unprotected sensor at the locations from X_{G1}, which increases δ_r(G1) by one. Interestingly, from Fig. 6, the placement of one unprotected sensor at any of the locations from X_{G1} led to an increase of δ(G1) in all the realizations. The same holds for X_{G2} and δ(G2).

Overall, the experiment illustrates that by increasing δ_r we can also indirectly increase δ. However, from the placement of protected sensors, we see that we do not achieve the same level of improvement.

Fig. 7. The platoon consisting of two autonomous vehicles. Each vehicle can be controlled by the operator through the signals u_1 and u_2. The operator also knows the position of the second vehicle, y_1.
This again illustrates that protecting the system against the advanced Type 1 attacker may require much more resources than protecting it against less advanced attackers such as the Type 3 attacker.

B. Properties of Full and Limited Model Knowledge Attackers

1) Model:
We now illustrate the limitations of the full and limited model knowledge attackers considered in the paper. For this purpose, we consider the system of two autonomous vehicles shown in Fig. 7. Each vehicle is modeled by a single state representing its position relative to a moving reference frame. The operator can control both vehicles through the signals u_1 and u_2, and he/she also knows the position of the second vehicle, y_1 = x_2. The operator's goal is to keep the distance between the vehicles constant. To study this formation control problem, we use the model from [8],

x(k+1) = [1−α_1  α_1; α_2  1−α_2] x(k) + u(k),
y(k) = [0  1] x(k),

where α_1 and α_2 are positive coupling gains. We assume that prior to the attack, x(0) = [0 10]^T and u(0) is the constant input that keeps x(0) fixed, so that the desired behavior of the platoon is achieved.

We consider the Type 1 attacker and the Type 2 attacker. Both attackers control u_1 and y_1, and have the goal of disrupting the platoon formation without the operator noticing. In the following, we discuss in which situations the attackers can achieve this goal. By ∆y_F (resp. ∆y_L), we denote the difference between the measurement expected in normal operation and the received measurement in the case of the first (resp. second) attacker. If the attackers are able to conduct a perfectly undetectable attack, then ∆y_F = ∆y_L = 0 must hold. We also remark that the properties of the Type 2 attacker we outline next are the same as for the Type 3 attacker, so we do not explicitly consider the Type 3 attacker.
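As a sanity check on the model above, the sketch below simulates the platoon with assumed parameter values (α_1 = α_2 = 0.1 and a nominal input chosen to hold the formation; the paper's exact numbers are not reproduced here). Starting from x(0) = [0, 10]^T, the state remains fixed, matching the steady state assumed prior to the attack.

```python
def simulate_platoon(steps=20, alpha=0.1):
    """Simulate the two-vehicle model with assumed parameters.
    With x(0) = [0, 10] and a constant input that balances the coupling,
    x(k+1) = x(k) for all k, i.e. the formation is held."""
    x = [0.0, 10.0]
    u = [-10 * alpha, 10 * alpha]    # constant input holding this equilibrium
    for _ in range(steps):
        x = [(1 - alpha) * x[0] + alpha * x[1] + u[0],
             alpha * x[0] + (1 - alpha) * x[1] + u[1]]
    return x
```

The same loop, with attack signals added to u_1 and the measurement, is the basis for reproducing the three cases discussed next.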
2) Case 1:
The first case illustrates that both attackers can conduct a perfectly undetectable attack once the system is in a steady state and u(k) = u(0) during the attack. The Type 1 attacker applies the actuator attack a_F^(u)(k) = −k together with a sensor attack a_F^(y) generated by a second order linear recursion in a_F^(y) and a_F^(u), whose coefficients are determined by the realization A, B, C; we refer to this pair of signals as (9). This is according to the strategy introduced in the proof of Proposition 1. The Type 2 attacker applies the signals

a_L^(u)(k) = −k,   a_L^(y)(k) = −x_2(k) + y_1(0),   (10)

which is according to the strategy introduced in the proof of Proposition 5. As we can see from Fig. 8 (Case 1), ∆y_F = ∆y_L = 0. Hence, both attackers remain perfectly undetectable. Additionally, note that in this case the strategy (10) reduces to the replay attack strategy, which does not require any realization knowledge. Thus, the Type 3 attacker who controls u_1 and y_1 can also use strategy (10), and would likewise remain perfectly undetectable in this case.

Fig. 8. Differences ∆y_F and ∆y_L between the expected and the attacked sensor measurements in the different cases.
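The replay form of strategy (10) in Case 1 can be checked in simulation. The sketch below uses assumed parameter values (α_1 = α_2 = 0.1 and a nominal input holding the formation; not the paper's exact numbers): the attacker injects a_L^(u)(k) = −k into u_1 and overwrites the sensor with its steady state reading, so the operator sees a zero residual while the true position drifts.

```python
def replay_attack_residual(steps=20, alpha=0.1):
    """Replay-style attack on the platoon model under assumed parameters.
    Returns (max |Delta y| seen by the operator, true drift of x2)."""
    x = [0.0, 10.0]                    # steady state before the attack
    u = [-10 * alpha, 10 * alpha]      # constant nominal input
    y0 = x[1]                          # y_1 = x_2; recorded steady state value
    worst = 0.0
    for k in range(steps):
        a_u = -float(k)                # attack signal on u_1: a_L^(u)(k) = -k
        x = [(1 - alpha) * x[0] + alpha * x[1] + u[0] + a_u,
             alpha * x[0] + (1 - alpha) * x[1] + u[1]]
        y_received = y0                # attacker replays the steady state value
        worst = max(worst, abs(y_received - y0))
    return worst, abs(x[1] - 10.0)
```

Under the steady state assumption the residual stays exactly zero even though the formation is disrupted, which is precisely why Cases 2 and 3 probe what happens when the model or the input changes.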
3) Case 2:
The second case illustrates the fragility of the Type 1 attacker with respect to modeling errors. Assume u(k) = u(0) during the attack, and that the Type 1 attacker believes in an incorrect value α' ≠ α_1 of the coupling parameter. He/she then applies the actuator attack a_F^(u)(k) = −k together with the sensor attack recursion (9) computed using α' in place of α_1. The Type 2 attacker applies the same signals as in the previous case. From Fig. 8 (Case 2), we can see that ∆y_F ≠ 0, so the Type 1 attacker is revealed. Since ∆y_L = 0, the Type 2 attacker remains undetected. In general, the Type 2 attacker can also be vulnerable to modeling errors, since he/she may require precise local model knowledge to construct the strategy in some cases. However, the fact that this attacker uses only a fraction of the model (in this case none) lowers his/her chances of being detected because of modeling errors. Evidently, the Type 3 attacker is not affected by this type of error, since he/she knows only the system structure.
4) Case 3:
Finally, assume the scenario where at k = 2 the operator increases u. The attackers apply the signals (9) and (10). From Fig. 8, Case 3, we can see Δy_L ≠ 0. This illustrates that the steady-state assumption is in general required for the Type 2 attacker to remain perfectly undetectable. The reason is that this attacker knows neither u nor the equation for x. Thus, once y starts changing, the attacker cannot distinguish whether this is because of the attack or a change in u. The same reasoning applies to the Type 3 attacker. We also see that Δy_F = 0. The reason is that the attack policy (9) can be calculated prior to the attack and implemented in a feedforward manner. This makes the strategy completely decoupled from x(0) and u.

TABLE I
THE MAIN PROPERTIES OF THE ATTACKER TYPES WE CONSIDERED AND THEIR RELATION TO δ_r

Attacker | Knowledge of A, B, C / [A], [B], [C] | Steady-state assumption | Relation to δ_r
Type 1   | Full / Full                          | Not required            | Upper bounds resources
Type 2   | Limited / Full                       | Required                | Upper bounds resources
Type 3   | None / Full                          | Required                | Lower bounds resources

VIII. CONCLUSION AND FUTURE WORK
In this paper, we introduced the security indices δ and δ_r. These indices can be used for localizing vulnerable actuators within the system and for developing defense strategies. We first analyzed δ, which is more suitable for small-scale systems. A method for computing δ was derived, and it was shown that δ can potentially be increased by placing additional sensors. We then showed that δ may not be an appropriate index for large-scale networked systems, since it is: (1) NP-hard to calculate; (2) vulnerable to system variations; (3) based on the assumption that the attacker knows the entire system model. The robust security index δ_r was then introduced as a replacement for δ. The robust index: (1) can be calculated efficiently; (2) is robust to system variations; (3) can be related to both full and limited model knowledge attackers, as summarized in Table I. Additionally, two sensor placement problems for increasing δ_r were proposed, and it was shown that suboptimal solutions of these problems with performance guarantees can be obtained efficiently. Finally, the properties of δ and δ_r were illustrated through numerical examples.

Future work will proceed in the following directions. First, besides perfectly undetectable attacks, there exist many other dangerous types of attacks. We therefore plan to investigate whether novel types of security indices can be formulated based on these attack models. Second, the sensor placement problems considered in this paper were formulated without taking the security index δ into consideration. Future work will investigate whether it is possible to derive sensor placement strategies that improve δ and δ_r simultaneously.

APPENDIX
A. Proofs of Section III
Proof of Proposition 1.
Before we move to the proof, we introduce a necessary and sufficient condition for the existence of perfectly undetectable attacks.
Lemma 4 ([8, Theorem 1], [9, Theorem 7]): A perfectly undetectable attack conducted using components I_a ⊆ I exists if and only if normrank G(I_a) < |I_a|.

Proof of Proposition 1: (⇒) Let A be the Z-transform of a. Assume there exists a perfectly undetectable attack A with A^(i) ≠ 0. We split the proof into two cases.

Case 1. Assume first normrank G(I_a \ i) = |I_a| − 1. Since undetectable attacks are possible, it follows from Lemma 4 that normrank G(I_a) < |I_a|. On the other hand,

normrank G(I_a) ≥ normrank G(I_a \ i) = |I_a| − 1,

which implies normrank G(I_a) = |I_a| − 1. Thus, (2) holds.

Case 2. Assume now normrank G(I_a \ i) < |I_a| − 1. Let I_b ⊆ I_a \ i be such that:
(i) the columns of G(I_b) span the columns of G(I_a \ i);
(ii) normrank G(I_b) = normrank G(I_a \ i) = |I_b|.
Since (i) holds, we can find A′ that satisfies G(I_a \ i) A^(I_a \ i) = G(I_b) A′. From the latter relation and G A = 0, it follows that

G A = G(I_a \ i) A^(I_a \ i) + G^(i) A^(i) = G(I_b) A′ + G^(i) A^(i) = 0.

This implies that [(A′)^T A^(i)]^T is a perfectly undetectable attack against [G(I_b) G^(i)] with A^(i) ≠ 0. We then have

normrank [G(I_b) G^(i)] =(*) normrank G(I_b) =(**) normrank G(I_a \ i),   (11)

where (*) follows from (ii) and Case 1, and (**) from (ii). Since G(I_b) spans the columns of G(I_a \ i), we have

normrank G(I_a) = normrank [G(I_a \ i) G^(i)] = normrank [G(I_b) G^(i)].   (12)

From (11) and (12), we conclude that (2) holds.

(⇐) If (2) holds, then there exist real rational functions P and Q ≠ 0 such that G(I_a \ i) P + G^(i) Q = 0. Thus, any attack signal A^(i) can be masked by applying A^(I_a \ i) = P A^(i)/Q on the remaining attacked components. ∎

Proof of Proposition 2.
By adding a new sensor to the system, we introduce additional constraints to Problem 1. Thus, δ′(i) < δ(i) cannot hold. If a new sensor is not protected, the attacker can gain control over it. This can be interpreted as removing the aforementioned constraints from the problem. Hence, δ′(i) is at most one larger than δ(i) in this case. By adding a new actuator, the number of decision variables of Problem 1 increases, while the number of constraints remains the same. Therefore, δ′(i) ≤ δ(i) holds. ∎

Proof of Theorem 1.
To prove NP-hardness of Problem 1, it suffices to show that every instance of an NP-hard problem can be mapped into Problem 1. For this purpose, we use the sparse recovery problem

minimize_d ||d||_0   subject to F d = ȳ,   (13)

where F ∈ R^{p×m} and ȳ ∈ R^p are given. This problem is known to be NP-hard [50]. Let F and ȳ be arbitrarily selected. Set A = 0_{(m+1)×(m+1)}, B = I_{m+1}, C = [−ȳ F], D = 0_{p×(m+1)}, and i = 1. Then a = a_u and x(k+1) = a_u(k). Hence, Problem 1 becomes

minimize_{a_u} ||a_u||_0   subject to C a_u(k) = 0, a_u^(1) ≠ 0.   (14)

It can be seen that to solve (14) for all k, it suffices to solve it for a single k. Thus, (14) reduces to

minimize_{a_u(0)} ||a_u(0)||_0   subject to C a_u(0) = 0, a_u^(1)(0) = 1,

where the substitution of a_u^(1)(0) ≠ 0 with a_u^(1)(0) = 1 is without loss of generality. Let a_u(0) = [1 d^T]^T. Then minimizing ||a_u(0)||_0 is equivalent to minimizing ||d||_0, which is the objective function of (13). Moreover, we also have that C a_u(0) = [−ȳ F] a_u(0) = −ȳ + F d. Thus, C a_u(0) = 0 implies F d = ȳ, which is the constraint of (13). Therefore, every instance of the NP-hard problem (13) can be mapped into Problem 1, which concludes the proof. ∎

B. Proofs of Section IV
Proof of Theorem 2.
Let X_a ∪ Y_a be a vertex separator of u_i and t in the graph G_t. To prove the claim, we introduce an attack strategy that uses the components U_a and Y_a. We then prove that this strategy actively uses u_i, and that it is perfectly undetectable in any realization A, B, C.

For actuator u_i, the attacker injects an arbitrary signal a^(u_i) ≠ 0. This ensures that u_i is actively used in the attack. For the other actuators u_j ∈ U_a \ u_i, the attack is

a^(u_j)(k) = −A(p, :) x(k)/B(p, j),   (15)

where A(p, :) is the row of A corresponding to the attacked actuator u_j, and B(p, j) is the nonzero element of B multiplying u_j (such an element exists for any realization due to Assumption 2.(3)). For y_l ∈ Y_a, the attack is

a^(y_l)(k) = −C(l, :) x(k),   (16)

where C(l, :) is the row of C corresponding to y_l. For the attacker with full model knowledge, this strategy can be constructed for any realization. Namely, he/she knows the values of A(p, :), B(p, j), and C(l, :), and can predict the value of x(k) for any k ∈ Z_{≥0} based on the model and the attack signals. We now prove that this strategy is perfectly undetectable, that is, y = 0.

We first consider the attacked sensors. For any y_l ∈ Y_a and k ∈ Z_{≥0}, we have y_l(k) = C(l, :) x(k) + a^(y_l)(k) = 0 by (16). Thus, the attacked measurements are equal to 0. It remains to be shown that the non-attacked measurements are also 0.

Consider first the non-attacked sensors measuring the states from X_a. Let x_p ∈ X_a, and let u_j ∈ U_a \ u_i be adjacent to x_p. Then x_p(k+1) = A(p, :) x(k) + B(p, j) a^(u_j)(k) = 0 by (15). Thus, the non-attacked measurements of the states from X_a are 0. Let now X_b be the set of all the states for which there exists a directed path from u_i that does not contain the states from X_a. These states cannot be measured by the non-attacked sensors.
Otherwise, there would exist a directed path between u_i and t not intersected by X_a ∪ Y_a, which contradicts the assumption that X_a ∪ Y_a is a vertex separator of u_i and t. Finally, let X_c = X \ (X_b ∪ X_a) be the set of all the remaining states. Note that directed edges (x_b, x_c), with x_b ∈ X_b and x_c ∈ X_c, cannot exist; such an edge would imply the existence of a directed path from u_i to x_c that does not contain the states from X_a, so x_c would belong to X_b. Thus, the states from X_c cannot be directly influenced by the states from X_b. Since x(0) = 0, u = 0, and the states X_a are equal to 0, we conclude that the states X_c also remain equal to 0 during the attack. Thus, the non-attacked measurements of these states remain 0. With this, we have shown that all of the non-attacked measurements are equal to 0, so the attack strategy is perfectly undetectable. ∎

C. Proofs of Section V
Proof of Proposition 3.
Statement 1. Let U_a ∪ Y_a be a solution of Problem 2, and let X_a ∪ Y_a be a corresponding vertex separator. Let E_c ⊆ E_{u_i} be constructed as follows. For each x_k ∈ X_a, we add (x_k^in, x_k^out) to E_c. For each y_j ∈ Y_a with (x_k, y_j) ∈ E_xy, we add (x_k^out, t) (resp. (x_k, t)) to E_c if x_k is Type 1 (resp. Type 2). If there exists more than one measurement of x_k, then all of them must belong to Y_a; otherwise, there would exist a path from u_i to t not intersected by X_a ∪ Y_a, or y_j would not be a part of an optimal solution. It follows from the construction of G_{u_i} that the edges added to E_c have the cost δ_c = |U_a \ i| + |Y_a| = δ_r(u_i) − 1. We now show that E_c is an edge separator of u_i and t in G_{u_i} (Claim 1) of the minimum cost (Claim 2). This implies δ_r(u_i) = δ_c + 1 = δ* + 1, and proves Statement 1.

Claim 1. Assume E_c is not an edge separator. Then there exists a simple directed path u_i, x_{j_1}, ..., x_{j_n}, t (Path 1) in G_{u_i} which is not intersected by E_c. By the construction of G_{u_i}, this implies that there exists a simple directed path u_i, x_{k_1}, ..., x_{k_m}, y_l, t (Path 2) in G_t, obtained from Path 1 by replacing every pair x_p^in, x_p^out that corresponds to a Type 1 state x_p by x_p, and by inserting a measurement y_l of x_{k_m}. Path 2 has to be intersected by X_a ∪ Y_a. Thus, either there exists x_p ∈ X_a that belongs to Path 2, or y_l ∈ Y_a. However, then either (x_p^in, x_p^out) or (x_{j_n}, t) belongs to E_c. This contradicts the existence of Path 1, so Claim 1 holds.

Claim 2. Assume there exists an edge separator E′_c with cost δ′ < δ_c. Let U′_a ∪ Y′_a be constructed as follows. For each (x_k^in, x_k^out) from E′_c, we add u_j to U′_a, where u_j is adjacent to x_k. For each edge (x_p^out, t) or (x_p, t) from E′_c, we add all the measurements of x_p to Y′_a. All of these measurements must be unprotected (otherwise δ′ = +∞ > δ_c).
We add u_i to U′_a. Note that E′_c cannot contain edges of other types, because their weight is +∞, which would imply δ′ > δ_c.

Firstly, we prove that U′_a ∪ Y′_a must be a feasible point of Problem 2. Assume that this is not the case. Since u_i ∈ U′_a and all the measurements from Y′_a are unprotected, it follows that there exists a simple directed path u_i, x_{k_1}, ..., x_{k_m}, y_l, t (Path 1′) in G_t in which none of the states are adjacent to U′_a \ u_i, and y_l ∉ Y′_a. This implies that there exists a simple directed path in G_{u_i} obtained from Path 1′ by replacing each Type 1 node x_p from this path by x_p^in, x_p^out, and removing y_l. By the construction of U′_a ∪ Y′_a and G_{u_i}, this path cannot be intersected by E′_c. This would contradict the assumption that E′_c is an edge separator, so U′_a ∪ Y′_a has to be a feasible point of Problem 2. However, then U_a ∪ Y_a is not a solution of Problem 2, because |U′_a ∪ Y′_a| = δ′ + 1 < |U_a ∪ Y_a| = δ_c + 1. Thus, E′_c cannot exist, and Claim 2 holds.

Statement 2. In this case, there has to exist a simple directed path u_i, x_{j_1}, ..., x_{j_n}, y_l, t in G_t that contains only Type 2 states and a protected measurement y_l. Then the path u_i, x_{j_1}, ..., x_{j_n}, t exists in G_{u_i}, and the weights of all the edges from this path are +∞. Any edge separator needs to cut this path, which implies δ* = +∞. ∎

Proof of Proposition 4.
Let X_a be defined as in (3). We prove the claim by showing that X_a ∪ Y_a has to be a vertex separator of u_i and t in G_t. Assume this is not the case. Then there exists at least one simple directed path u_i, x_{i_0}, ..., x_{i_n}, y_l, t (Path 1) not intersected by X_a ∪ Y_a. We now show that this implies the existence of at least one realization of the structural model [A], [B], [C] in which a perfectly undetectable attack against u_i cannot be conducted.

Assume the following realization of the matrices A and C. For x_{i_0} from Path 1, A(i_0, :) = 0. This ensures that x_{i_0} cannot be influenced by any state. For any other x_{i_k} from Path 1, A(i_k, j) ≠ 0 (resp. A(i_k, j) = 0) if j = i_{k−1} (resp. j ≠ i_{k−1}). This guarantees that the only state that influences x_{i_k} is x_{i_{k−1}}. For the edge (x_{i_n}, y_l) ∈ E_xy from Path 1, C(l, i_n) ≠ 0. This ensures that y_l(k) ≠ 0 once x_{i_n}(k) ≠ 0. We now show that if this realization is present, a perfectly undetectable attack in which u_i is actively used does not exist.

Let a^(u_i) ≠ 0 be an arbitrary attack signal against u_i, and let k_0 be the first time instant for which a^(u_i)(k_0) ≠ 0. Since u = 0 and a^(u_i) is the only attack signal that can directly influence x_{i_0} (due to Assumptions 2.(1) and 2.(2)), we have x_{i_0}(k_0 + 1) = A(i_0, :) x(k_0) + B(i_0, i) a^(u_i)(k_0). Given that A(i_0, :) = 0 and B(i_0, i) ≠ 0 (Assumption 2.(3)), it follows that x_{i_0}(k_0 + 1) ≠ 0. We now show x_{i_1}(k_0 + 2) ≠ 0. Note that the only state that influences x_{i_1} is x_{i_0}. Moreover, since x_{i_1} cannot be influenced by attacked actuators (x_{i_1} ∉ X_a) and u = 0, it follows that x_{i_1}(k_0 + 2) = A(i_1, i_0) x_{i_0}(k_0 + 1) ≠ 0. By applying similar reasoning to all the other states from Path 1, it can be shown that x_{i_n}(k_0 + n + 1) ≠ 0. Thus, y_l(k_0 + n + 1) ≠ 0, which implies that the attack is revealed.
Since a^(u_i) was arbitrarily selected, no perfectly undetectable attack in which u_i is actively used exists in this realization. This contradicts the assumption that the attacker can conduct a perfectly undetectable attack against u_i in any realization of [A], [B], [C] by using U_a and Y_a. Thus, X_a ∪ Y_a has to be a vertex separator of u_i and t in G_t. Since δ_r(u_i) is the minimum number of attacked sensors and actuators that ensures that X_a ∪ Y_a is a vertex separator of u_i and t with u_i ∈ U_a, the claim of the proposition holds. ∎

Proof of Proposition 5: (⇒) The proof is by contradiction. If X_a ∪ Y_a is not a vertex separator of u_i and t in G_t, we know from the proof of Proposition 4 that we can find at least one realization in which it is not possible to conduct a perfectly undetectable attack against u_i. Thus, X_a ∪ Y_a has to be a vertex separator of u_i and t.

(⇐) If X_a ∪ Y_a is a vertex separator of u_i and t, the attacker can conduct a perfectly undetectable attack against u_i using a strategy similar to the one in the proof of Theorem 2. For actuator u_i, the attacker injects an arbitrary signal a^(u_i) ≠ 0. For the other actuators u_j ∈ U_a \ u_i with (u_j, x_p) ∈ E_ux, the attack is given by a^(u_j)(k) = −A(p, :) x(k)/B(p, j). For y_l ∈ Y_a, the attacker selects a^(y_l)(k) to maintain y_l(k) = 0.

The Type 2 attacker can construct this attack. Firstly, the attacker knows the values of A(p, :) and B(p, :) that correspond to the actuators u_j ∈ U_a \ u_i. Secondly, the attacker can construct A(p, :) x(k), since he/she knows the values of the in-neighbors of x_p, while the elements of A(p, :) that correspond to other states are equal to 0. Thirdly, the Type 2 attacker can also set the signals of attacked sensors and actuators to arbitrary values, so he/she can maintain y_l(k) = 0. The proof that y = 0 is then analogous to the proof of Theorem 2. ∎

Proof of Proposition 6.
We prove the claims by showing that Y_a has to be a vertex separator of u_i and t in G_t. Namely, the existence of a path from u_i to t in G_t implies that there exists at least one sensor y_j that is not compromised by the attacker. From the proof of Proposition 4, we know that there exists at least one realization of the system in which the attack against u_i triggers y_j. Since the Type 3 attacker has knowledge of only [A], [B], [C], he/she does not know whether the attack against u_i would be visible in y_j or not. Thus, the Type 3 attacker needs to attack y_j to ensure being perfectly undetectable. Therefore, Y_a has to form a vertex separator of u_i and t. By definition, δ_r(u_i) − 1 is the size of the minimum vertex separator of u_i and t in G_t (we subtract 1 from δ_r(u_i) to exclude u_i). Hence, |Y_a| ≥ δ_r(u_i) − 1. Finally, if δ_r(u_i) = +∞, then there exists a path between u_i and a protected sensor. This implies that Y_a cannot be a vertex separator. Hence, the Type 3 attacker cannot ensure that an attack against u_i remains perfectly undetectable, because he/she does not know whether the aforementioned protected sensor would be triggered. ∎

D. Proofs of Section VI
Proof of Theorem 3.
Assume we place y_l to monitor any of the states from X_{u_i}. We then introduce at least one additional directed path u_i, x_j, ..., y_l, t from u_i to t that does not contain states adjacent to U \ u_i. Thus, the only way to remove this path is by adding y_l to a new vertex separator. If y_l is protected, this is not possible, so δ′_r(u_i) = +∞. Otherwise, the attacker must attack y_l, and thus δ′_r(u_i) = δ_r(u_i) + 1.

We now show that if for every x_j ∈ X for which there exists (u_k, x_j) ∈ E_ux there also exists (x_j, y_p) ∈ E_xy, then the only way to improve δ_r(u_i) is by placing sensors within X_{u_i}. Let U_a ∪ Y_a be a solution of Problem 2 for u_i. We first form another optimal solution U′_a ∪ Y′_a from U_a ∪ Y_a. The set Y′_a is formed by removing from Y_a any y_j that measures a state x_k ∈ X adjacent to some u_l ∈ U \ u_i. As a substitute for y_j, we add u_l to U′_a. We then add all the actuators U_a to U′_a. This ensures that for all the states that are both directly influenced by an actuator and measured by a sensor, we always select an actuator to belong to a solution of Problem 2 rather than a sensor. Finally, let X′_a be defined as in (3) based on U′_a.

Let a sensor be placed to measure x_l ∉ X_{u_i}. If there are no directed paths from u_i to x_l, or if all the paths from u_i to x_l are intersected by X′_a ∪ Y′_a, then U′_a ∪ Y′_a is still a solution of Problem 2 and δ_r(u_i) is not increased. Thus, assume there exists a simple directed path u_i, ..., x_l (Path 1) not intersected by X′_a ∪ Y′_a. Since x_l ∉ X_{u_i}, there has to exist at least one state x_p from Path 1 adjacent to an actuator. Then x_p also has to be adjacent to a sensor, which implies the existence of a directed path between u_i and t passing through x_p that is not intersected by X′_a ∪ Y′_a.
This is not possible, since U′_a ∪ Y′_a is a solution of Problem 2. Hence, Path 1 cannot exist. Therefore, we cannot increase δ_r(u_i) by placing sensors outside X_{u_i}. ∎

Proof of Proposition 7.
We first show that g_{u_i} is submodular, nondecreasing, and integer-valued. Firstly, w_{y_j} = |x_{y_j} ∩ X_{u_i}| is a binary integer constant. Thus, g_l(Y_p) = Σ_{y_j ∈ Y_p} w_{y_j} is a linear function, so it is both submodular [43, Section 2] and nondecreasing (a sum of nonnegative numbers). Since we have g_{u_i}(Y_p) = min{g_l(Y_p), k_{u_i}}, it follows from Lemma 1 that g_{u_i} is submodular and nondecreasing. The function g_{u_i} is also integer-valued, since g_l and k_{u_i} are integer-valued. Thus, it follows from Lemma 1 that G is submodular, nondecreasing, and integer-valued. We also have G(∅) = 0, which implies that G has the same properties as the set function from (5). Thus, the claim of the proposition holds. ∎

Proof of Proposition 8.
The function g′_{u_i} is known to be submodular [43, Section 2]. Additionally, g′_{u_i} is a nondecreasing function, since |X_p ∩ X_{u_i}| is nondecreasing in X_p. We then have from Lemma 1 that G′ is submodular and nondecreasing. In addition, G′(∅) = 0. Hence, G′ has the same properties as the function from (6), which concludes the proof. ∎

REFERENCES

[1] F. L. Cortesi, T. H. Summers, and J. Lygeros, "Submodularity of energy related controllability metrics," in Proceedings of the 53rd Conference on Decision and Control, 2014.
[2] F. Pasqualetti, S. Zampieri, and F. Bullo, "Controllability metrics, limitations and algorithms for complex networks," IEEE Transactions on Control of Network Systems, vol. 1, no. 1, pp. 40–52, 2014.
[3] V. Tzoumas, M. A. Rahimian, G. J. Pappas, and A. Jadbabaie, "Minimal actuator placement with bounds on control effort," IEEE Transactions on Control of Network Systems, vol. 3, no. 1, pp. 67–78, 2016.
[4] A. Clark, L. Bushnell, and R. Poovendran, "On leader selection for performance and controllability in multi-agent systems," in Proceedings of the 51st Conference on Decision and Control, 2012.
[5] J. Slay and M. Miller, "Lessons learned from the Maroochy water breach," in Proceedings of the International Conference on Critical Infrastructure Protection, 2007.
[6] D. Kushner, "The real story of STUXNET," IEEE Spectrum, vol. 50, no. 3, pp. 48–53, 2013.
[7] "Analysis of the cyber attack on the Ukrainian power grid," Electricity Information Sharing and Analysis Center, 2016.
[8] H. Cam, P. Mouallem, Y. Mo, B. Sinopoli, and B. Nkrumah, "Modeling impact of attacks, recovery, and attackability conditions for situational awareness," in Proceedings of the IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, 2014.
[9] S. Weerakkody, X. Liu, S. H. Son, and B. Sinopoli, "A graph-theoretic characterization of perfect attackability for secure design of distributed control systems," IEEE Transactions on Control of Network Systems, vol. 4, no. 1, pp. 60–70, 2017.
[10] A. A. Cardenas, S. Amin, and S. Sastry, "Secure control: Towards survivable cyber-physical systems," in Proceedings of the 28th International Conference on Distributed Computing Systems Workshops, 2008.
[11] H. Fawzi, P. Tabuada, and S. Diggavi, "Secure estimation and control for cyber-physical systems under adversarial attacks," IEEE Transactions on Automatic Control, vol. 59, no. 6, pp. 1454–1467, 2014.
[12] Y. Mo and B. Sinopoli, "On the performance degradation of cyber-physical systems under stealthy integrity attacks," IEEE Transactions on Automatic Control, vol. 61, no. 9, pp. 2618–2624, 2016.
[13] F. Pasqualetti, F. Dorfler, and F. Bullo, "Attack detection and identification in cyber-physical systems," IEEE Transactions on Automatic Control, vol. 58, no. 11, pp. 2715–2729, 2013.
[14] S. Sundaram and C. N. Hadjicostis, "Distributed function calculation via linear iterations in the presence of malicious agents, Part I: Attacking the network," in Proceedings of the American Control Conference, 2008.
[15] F. Pasqualetti, A. Bicchi, and F. Bullo, "Consensus computation in unreliable networks: A system theoretic approach," IEEE Transactions on Automatic Control, vol. 57, no. 1, pp. 90–104, 2012.
[16] Y. Liu, P. Ning, and M. Reiter, "False data injection attacks against state estimation in electric power grids," ACM Transactions on Information and Systems Security, vol. 14, no. 1, pp. 13:1–13:33, 2011.
[17] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson, "A secure control framework for resource-limited adversaries," Automatica, vol. 51, pp. 135–148, 2015.
[18] R. S. Smith, "Covert misappropriation of networked control systems: Presenting a feedback structure," IEEE Control Systems, vol. 35, no. 1, pp. 82–92, 2015.
[19] Z. Guo, D. Shi, K. H. Johansson, and L. Shi, "Optimal linear cyber-attack on remote state estimation," IEEE Transactions on Control of Network Systems, vol. 4, no. 1, pp. 4–13, 2017.
[20] Y. Mo, S. Weerakkody, and B. Sinopoli, "Physical authentication of control systems: Designing watermarked control inputs to detect counterfeit sensor outputs," IEEE Control Systems, vol. 35, no. 1, pp. 93–109, 2015.
[21] Y. Z. Lun, A. D'Innocenzo, F. Smarra, I. Malavolta, and M. D. Di Benedetto, "State of the art of cyber-physical systems security: An automatic control perspective," Journal of Systems and Software, vol. 149, pp. 174–216, 2019.
[22] J. Giraldo, E. Sarkar, A. Cardenas, M. Maniatakos, and M. Kantarcioglu, "Security and privacy in cyber-physical systems: A survey of surveys," IEEE Design & Test, vol. 34, no. 4, pp. 7–17, 2017.
[23] H. Sandberg, S. Amin, and K. H. Johansson, "Cyberphysical security in networked control systems: An introduction to the issue," IEEE Control Systems, vol. 35, no. 1, pp. 20–23, 2015.
[24] H. Sandberg, A. Teixeira, and K. Johansson, "On security indices for state estimators in power networks," in Proceedings of the First Workshop on Secure Control Systems, 2010.
[25] O. Vuković, K. Sou, G. Dan, and H. Sandberg, "Network-aware mitigation of data integrity attacks on power system state estimation," IEEE Journal on Selected Areas in Communications, vol. 30, no. 6, pp. 1108–1118, 2012.
[26] J. M. Hendrickx, K. H. Johansson, R. M. Jungers, H. Sandberg, and K. C. Sou, "Efficient computations of a security index for false data attacks in power networks," IEEE Transactions on Automatic Control, vol. 59, no. 12, pp. 3194–3208, 2014.
[27] K. C. Sou, H. Sandberg, and K. H. Johansson, "Electric power network security analysis via minimum cut relaxation," in Proceedings of the 50th Conference on Decision and Control and European Control Conference, 2011.
[28] K. C. Sou, H. Sandberg, and K. H. Johansson, "Computing critical k-tuples in power networks," IEEE Transactions on Power Systems, vol. 27, no. 3, pp. 1511–1520, 2012.
[29] O. Kosut, "Max-flow min-cut for power system security index computation," in Proceedings of the 8th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2014.
[30] Y. Yamaguchi, A. Ogawa, A. Takeda, and S. Iwata, "Cyber security analysis of power networks by hypergraph cut algorithms," IEEE Transactions on Smart Grid, vol. 6, no. 5, pp. 2189–2199, 2015.
[31] M. S. Chong and M. Kuijper, "Characterising the vulnerability of linear control systems under sensor attacks using a system's security index," in Proceedings of the 55th Conference on Decision and Control, 2016.
[32] H. Sandberg and A. M. H. Teixeira, "From control system security indices to attack identifiability," in Proceedings of the Science of Security for Cyber-Physical Systems Workshop, 2016.
[33] J.-M. Dion, C. Commault, and J. van der Woude, "Generic properties and control of linear structured systems: A survey," Automatica, vol. 39, no. 7, pp. 1125–1144, 2003.
[34] J. Milošević, H. Sandberg, and K. H. Johansson, "A security index for actuators based on perfect undetectability: Properties and approximation," in Proceedings of the 56th Annual Allerton Conference on Communication, Control, and Computing, 2018.
[35] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson, "Revealing stealthy attacks in control systems," in Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012.
[36] J. W. Simpson-Porco, F. Dörfler, and F. Bullo, "Synchronization and power sharing for droop-controlled inverters in islanded microgrids," Automatica, vol. 49, no. 9, pp. 2603–2611, 2013.
[37] M. Amin and P. F. Schewe, "Preventing blackouts," Scientific American, vol. 296, no. 5, pp. 60–67, 2007.
[38] O. C. Imer, S. Yuksel, and T. Başar, "Optimal control of LTI systems over unreliable communication links," Automatica, vol. 42, no. 9, pp. 1429–1439, 2006.
[39] V. Tzoumas, A. Jadbabaie, and G. J. Pappas, "Sensor placement for optimal Kalman filtering: Fundamental limits, submodularity, and algorithms," in Proceedings of the American Control Conference, 2016.
[40] M. Stoer and F. Wagner, "A simple min-cut algorithm," Journal of the ACM, vol. 44, no. 4, pp. 585–591, 1997.
[41] E. Tegling and H. Sandberg, "On the coherence of large-scale networks with distributed PI and PD control," IEEE Control Systems Letters, vol. 1, no. 1, pp. 170–175, 2017.
[42] A. Krause and D. Golovin, "Submodular function maximization," 2014.
[43] F. Bach et al., "Learning with submodular functions: A convex optimization perspective," Foundations and Trends in Machine Learning, vol. 6, no. 2-3, pp. 145–373, 2013.
[44] L. Wolsey, "An analysis of the greedy algorithm for the submodular set covering problem," Combinatorica, vol. 2, no. 4, pp. 385–393, 1982.
[45] G. Nemhauser, L. Wolsey, and M. Fisher, "An analysis of approximations for maximizing submodular set functions–I," Mathematical Programming, vol. 14, no. 1, pp. 265–294, 1978.
[46] T. Cormen, Introduction to Algorithms. MIT Press, 2009.
[47] A. R. Bergen and D. J. Hill, "A structure preserving model for power system stability analysis," IEEE Transactions on Power Apparatus and Systems, vol. PAS-100, no. 1, pp. 25–35, 1981.
[48] S. K. M. Kodsi and C. A. Canizares, "Modeling and simulation of IEEE 14-bus system with FACTS controllers," University of Waterloo, Canada, Tech. Rep., 2003.
[49] A. Mohsenian-Rad and A. Leon-Garcia, "Distributed internet-based load altering attacks against smart power grids," IEEE Transactions on Smart Grid, vol. 2, no. 4, pp. 667–674, 2011.
[50] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
Jezdimir Milošević received his M.Sc. degree in Electrical Engineering and Computer Science in 2015 from the School of Electrical Engineering, University of Belgrade, Serbia. He is currently pursuing the Ph.D. degree at the Department of Automatic Control, KTH Royal Institute of Technology, Sweden. He was a visiting researcher at the University of Hawaii at Manoa in 2014, and at the Massachusetts Institute of Technology in 2018. His research interests are within cyber-security of industrial control systems.
André Teixeira is an Associate Senior Lecturer at the Division of Signals and Systems, Department of Engineering Sciences, Uppsala University, Sweden. He received the M.Sc. degree in electrical and computer engineering from the Faculdade de Engenharia da Universidade do Porto, Porto, Portugal, in 2009, and the Ph.D. degree in automatic control from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2014. From 2014 to 2015, he was a Postdoctoral Researcher at the Department of Automatic Control, KTH Royal Institute of Technology, Stockholm, Sweden. From October 2015 to August 2017, he was an Assistant Professor at the Faculty of Technology, Policy and Management, Delft University of Technology.
Karl Henrik Johansson is Director of the Stockholm Strategic Research Area ICT The Next Generation and Professor at the School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology. He received M.Sc. and Ph.D. degrees from Lund University. He has held visiting positions at UC Berkeley, Caltech, NTU, the HKUST Institute of Advanced Studies, and NTNU. His research interests are in networked control systems, cyber-physical systems, and applications in transportation, energy, and automation. He is a member of the IEEE Control Systems Society Board of Governors, the IFAC Executive Board, and the European Control Association Council. He has received several best paper awards and other distinctions. He has been awarded Distinguished Professor by the Swedish Research Council and Wallenberg Scholar. He has received the Future Research Leader Award from the Swedish Foundation for Strategic Research and the triennial Young Author Prize from IFAC. He is a Fellow of the IEEE and the Royal Swedish Academy of Engineering Sciences, and an IEEE Distinguished Lecturer.