Ontology-Based Reasoning about the Trustworthiness of Cyber-Physical Systems
Marcello Balduccini, Edward Griffor, Michael Huth, Claire Vishik, Martin Burns, David Wollman
Marcello Balduccini, Saint Joseph's University, Philadelphia, PA, USA, [email protected]
Edward Griffor, NIST, Gaithersburg, MD, USA, [email protected]
Michael Huth, Imperial College London, London, UK, [email protected]
Claire Vishik, Intel Corporation, Austin, TX, USA, [email protected]
Martin Burns, NIST, Gaithersburg, MD, USA, [email protected]
David Wollman, NIST, Gaithersburg, MD, USA, [email protected]
Keywords:
CPS Framework, Cross-Cutting Concerns, System Validation, Semantic Models & Analyses, Ontology.
Abstract
It has been challenging for the technical and regulatory communities to formulate requirements for the trustworthiness of cyber-physical systems (CPS) due to the complexity of the issues associated with their design, deployment, and operations. The US National Institute of Standards and Technology (NIST), through a public working group, has released a CPS Framework that adopts a broad and integrated view of CPS and positions trustworthiness among other aspects of CPS. This paper takes the model created by the CPS Framework and its further developments one step further, by applying ontological approaches and reasoning techniques in order to achieve greater understanding of CPS. The example analyzed in the paper demonstrates the enrichment of the original CPS model obtained through ontology and reasoning, and its ability to deliver additional insights to the developers and operators of CPS.

Introduction
Cyber-physical systems (CPS) have brought additional complexity to the computing environment. In addition to other requirements, technologists now have to contend with the behavior and influence of the physical subsystem, creating an even greater need for an integrated context and the ability to reason about the application of the requirements.

The use of ontologically inspired modeling in computer science is not new. In fact, as Smith and Welty [17] point out, this approach has been used extensively in information systems science. Examples include conceptual modeling in the database development area or domain modeling in software engineering. Although these uses are separate from applying ontologies to knowledge engineering, there is a direct connection.

The creation of an extensive ontology is frequently a lengthy process. In this case, however, the authors had the advantage of being able to rely on an extensive model already in existence. NIST hosted a Public Working Group on Cyber-Physical Systems (CPS) with the aim of capturing input from those involved in CPS in order to define a reference framework supporting common definitions and facilitating interoperability between such systems. A key outcome of that work is the CPS Framework (Release 1.0) [9]. The framework proposes a means of supporting three Facets of a CPS life cycle: conceptualization, realization, and assurance of CPS through analytical lenses, called Aspects. In the framework, the Aspect named Trustworthiness describes a number of related Concerns that deal specifically with the avoidance of flaws in Privacy, Security, Safety, Resilience, and Reliability. The framework is extensible and supported with executable models, e.g. a UML model of Concerns and Aspects, all three Facets, and the interdependencies across the CPS life cycle.

The CPS Framework helps articulate the motivation for important requirements to be considered in building, composing, and assuring CPS.
However, the CPS Framework currently does not offer a comprehensive model for reasoning over CPS artifacts and their dependencies.

In this paper, we develop a Conceptual Ontology for the Trustworthiness aspect that can be extended to other Aspects of the CPS Framework. We illustrate this approach with a case study, where the Conceptual Ontology is used to model the CPS from scenarios associated with a camera placed onto an autonomous car in order to support multiple aspects of decision making. The model contains sufficient complexity to demonstrate the capabilities of the approach and how it can be scaled to the full CPS Framework. The case study includes, e.g., considerations such as Transduction (in which a CPS produces a physical signal that interacts with the Environment) and Influence (in which a CPS produces or receives a physical signal that brings about a state change of another CPS). The objective is to demonstrate that an ontology-based approach can aid engineers in identifying and
Official contribution of the National Institute of Standards and Technology; not subject to copyright in the United States. Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
Intended Audience:
This paper is meant for both academic researchers and engineering professionals. For the former, it can stimulate more research in an area that urgently needs firm foundations for modeling and reasoning about the trustworthiness of CPSs. For the latter, it conveys the main ideas behind our approach and demonstrates that it can, in principle, be used in standard engineering and production practice.
Ontology-Based Data Access (OBDA) systems (see e.g. [13, 2]), such as Ontop, allow semantic queries about an ontology to be interpreted over concrete data, using engines such as NoSQL, Hadoop, MapReduce, and so forth. This is achieved through mappings that mediate between the semantic layer of ontologies and the concrete data. Use of these maps can virtualize the concrete data graphs to those portions that are needed for evaluating the queries, improving scalability and semantically guiding data analytics; see e.g. [13]. Our work is consistent with the use of OBDA to link to, and support, data analytics.

The Object Management Group has an Insurance Working Group that builds data models for that sector, informed by ontologies. Since ontologies can be composed, we may integrate such Insurance Ontologies as another important concern in the operation of CPS, particularly those related to infrastructure.

For the Cybersecurity concern, there is a rich literature on graph-based attack models. Closest to our work are perhaps the Attack-Countermeasure Trees (ACT) by Roy, Kim, and Trivedi [16]. An ACT specifies how (or how likely) an attacker can logically realize a specific goal in an IT system, even when faced with specific mitigation or detection measures. Leaves on trees are basic attack, detection, or mitigation actions, and the model assumes that basic attack actions are statistically independent. Our approach is much wider in scope: it applies to CPS, is applicable to all concerns of the CPS Framework and their dependencies, not just cybersecurity, and it can formulate and invoke inference rules of interest rather than relying on a static inference structure determined by a graph.

Our approach can be extended to quantitative reasoning by interpreting queries and inferences as developed in this paper over the reals, rankings, or other domains that allow a quantitative comparison. One may then generate answers to queries that are optimal with respect to some metrics.
The combination of physical (non-linear) interaction and logical (discrete or Boolean) interaction of CPS makes this a mixed-integer, non-linear optimization problem (MINLP) extended with logical inference. MINLP approaches can support a limited form of logic, e.g. through disjunctive programming [1]. But these methods seem to struggle with supporting richer logics and inferences such as "what-if" explorations. We therefore seek support for both MINLP methods and logic reasoners. This need has already been recognized in the optimization community; we refer to [15] for an overview, a discussion, and first results in addressing this need for Process Systems Engineering. The tool ManyOpt [4] already provides such abilities but can only express polynomials as non-linear behavior. The notion of δ-satisfiability [5] relaxes inequalities by up to some δ > 0 in order to satisfy all constraints. This renders decidability for a rich theory including transcendental functions, with tool support [6]. It would be of great interest to leverage this for optimization plus logical inference, e.g., within the tool ManyOpt.

We now introduce the NIST Framework for Cyber-Physical Systems, referred to as "CPS Framework" or simply "Framework" below. The Framework comprises a set of concerns and facets related to the system under design or study. This section will clarify the intent and purpose of the framework, as well as its extensible and modifiable nature. The reader interested in documentation of the CPS Framework is directed to the three-volume NIST
Framework for Cyber-Physical Systems:

• SP 1500-201 [9]
• SP 1500-202 [10]
• SP 1500-203 [18]

The CPS Framework provides the taxonomy and methodology for designing, building, and assuring cyber-physical systems that meet the expectations and concerns of system stakeholders, including engineers, users, and the community that benefits from the system's functions. The Framework comprises a set of concerns about systems, three development facets, and a notion of functional decomposition suited to CPS. A CPS often delivers complex functions that are ultimately implemented in a multitude of collaborating systems and devices. This collaboration or interaction can occur through the exchange of information or the exchange of energy. We refer to the former as logical interaction and to the latter as physical interaction.

The functional decomposition of the Framework breaks a CPS down into functions or sets of functions, as follows:

• the Business Case, a name and brief description of what the system is or does
• the Use Case, a set of scenarios or step-by-step descriptions of ways of using the system and the functions that realize those steps
• the Allocation of Function to subsystems or actors, expressed in the terminology of Use Cases
• the Physical-Logical Allocation: allocation of given subsystem functions to physical or logical implementation.

As an example, consider a simplified version of an automated vehicle CPS for automated emergency braking. The business case is a "vehicle system that detects objects and brings the vehicle safely to a stop without colliding with the obstacle."
A corresponding use-case scenario consists of a sensor array detecting an object and sending a braking torque request to the braking system, where the amount of torque requested is based on a calculation of the distance to the object. The underlying subsystems or actors are the sensor array and the braking system, which carries out the calculation and converts the request to an amount of electric power applied to components that produce the appropriate amount of hydraulic pressure on the braking calipers. The sensors are physical, the communication of the request is logical, and the braking system is capable of both logical and physical function: it does calculation and creates hydraulic pressure.

Next, we describe how the set of concerns of the CPS Framework is organized and applied to a function in the functional decomposition of a CPS. The concerns of the Framework are represented in a multi-rooted, tree-like structure (a "forest" in graph theory), where branching corresponds to the decomposition of concerns. We refer to this structure as the concern tree of the CPS Framework. The concerns at the roots of this structure, the highest-level concerns, are called aspects, and there are nine of them, one of which being
Trustworthiness.

A concern about a given system reflects consensus thinking about a method or practice involved in addressing the concern, and in some cases consensus-based standards describing that method or practice. This method or practice is applied to each function in the functional decomposition of the system, and application of a concern to a function results in one or more properties to be required of that function in order to address the concern in question. A concern may be seen as a branch in the concern tree, consisting of the root name followed by a (possibly empty) sequence of concern element names in the branch, separated by periods or dots. In the Trustworthiness aspect, e.g., we have the concern
Trustworthiness.Security.Cybersecurity.Confidentiality that may be abbreviated as, e.g.,
Conf'd. A sample property, meant to address this concern about data exchanged between components of a system, is use of encryption of some kind (e.g. AES or DES). A property is appended to the concern tree branch in block parentheses. Here, Conf'd[AES-encr] states that concern Conf'd is intended to be addressed by the use of AES encryption.

The facets of the CPS Framework are sets of activities, characteristic of a mode of thinking about the development of a system. These facets are conceptualization, realization, and assurance. We refer to the CPS Framework documentation noted above for their complete explanation. The output of the conceptualization facet is a Model of the CPS, consisting of properties of the CPS with an indication of the concerns that gave rise to the properties. The output of the realization facet is the CPS itself. And, finally, the output of the assurance facet is an
Assurance Case for each concern applied to the CPS. The assurance case is sorted by the concerns applied to the CPS and consists of assurance judgment(s), comprised of:

• Properties of the CPS and the concerns that resulted in the addition of those properties to the Model of the CPS.
• Argumentation: consensus or authority-based description of criteria for concluding that a property, intended to address a concern, has been established of the CPS.
• Evidence: information, accessible to stakeholders, that the criteria used in this argumentation are indeed met.
• Uncertainty: qualitative or quantitative representation of the uncertainty associated with the evidence that the criteria are met.

Figure 1: Decomposition tree for Trustworthiness Concerns
There are many critical concerns about the CPSs that surround us or that we depend upon, including the sub-concerns of
Trustworthiness: Safety, Security, Privacy, Resilience, and
Reliability. The urgency of addressing such concerns has only increased with the rapid deployment of CPS in domains such as transportation, medical care, and energy. There are clear needs to design for trustworthiness and to monitor the trustworthiness status of these CPS, since components can fail and new threats can emerge over time.

The CPS Framework provides a
CPS Normal Form: any CPS can be analyzed through the same analytical lenses of the CPS Framework (see, e.g., Figure 1), resulting in the functional decomposition of the CPS annotated with its concerns and the properties introduced in its Model in order to address those concerns. Given two CPS and their respective analyses, we may thus compare these CPS directly, one concern at a time.

Subsequent to the initial release of the CPS Framework, referenced above, NIST modeled the CPS Framework using the Unified Modeling Language (UML) and generated an XML schema, or type structure, of the CPS Framework. This effort was labeled the
CPS Framework Open Source Project and, as a follow-up, NIST held a CPS Framework Open Source Workshop on September 19, 2017. The intent of this modeling effort, in the format of XML, was to:

• Represent CPS in a common data exchange format (to facilitate concern-focused design collaboration)
• Provide an IT-based mechanism for comparing the concern-integrity of CPS (to enable a concern-centric assessment of CPS composition)
• Facilitate a concern-focused interface to CPS (to assess and monitor the status of a CPS relative to measurable properties and their associated concerns).

This CPS Framework and the Open Source technology, depicted in Figure 2, are essential to understanding critical performances of CPSs incrementally, from the perspective of CPS development, deployment, and adoption.
The work presented in this paper is an extension of the open source project reported above. The UML/XML modeling provides a concern-focused portal to CPS. It demonstrates a methodology to reason about mutual dependencies and conflicts in requirements that need to be taken into consideration during the design, deployment, and operational stages. Information needed for such reasoning can be manually entered or obtained from a continuous feed from a sensor array designed to measure base requirement satisfaction. It can also be generated in other ways, depending on the nature of the system. The reasoning engine described in this paper is realized by modeling a CPS through ontologies based on the CPS Framework.

The semantic relationship of the CPS Framework to the work reported here is as follows. The above synopsis of pertinent CPS Framework concepts and approaches featured the forest of concerns, where each tree represents an aspect. There are two types of nodes, concern elements and property nodes, as well as two types of edges: those that represent decomposition of concerns and those that connect concern elements to properties. Both types of edges should be thought of as
AND edges, meaning that the satisfaction of the parent concern requires that all of the children nodes be satisfied. In our approach, we address a concern by satisfying its node in the concern tree. This means that a concern element satisfies all its children, which are refined concerns, and that a property node satisfies all of its properties. Logical conjunction is therefore the basis of this satisfaction relation.
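The conjunctive satisfaction semantics of AND edges can be sketched in a few lines of code. This is a hedged illustration, not the paper's ASP encoding: the concern and property names follow the running example, while the dictionary layout and the function name `satisfied` are hypothetical names we introduce here.

```python
# Sketch of the AND-edge semantics: a concern is satisfied iff all of its
# sub-concerns and all of its attached properties are satisfied.

# "has-subconcern" edges: concern -> list of direct sub-concerns
SUB_CONCERNS = {
    "Trustworthiness": ["Security"],
    "Security": ["Cybersecurity"],
    "Cybersecurity": ["Confidentiality"],
    "Confidentiality": [],
}

# properties attached to a concern (edges to property nodes)
PROPERTIES = {
    "Confidentiality": ["cam_mem[encr]", "SAM_mem[encr]"],
}

def satisfied(concern, holding):
    """True iff every attached property holds and every sub-concern is satisfied."""
    props_ok = all(p in holding for p in PROPERTIES.get(concern, []))
    subs_ok = all(satisfied(c, holding) for c in SUB_CONCERNS.get(concern, []))
    return props_ok and subs_ok

print(satisfied("Trustworthiness", {"cam_mem[encr]", "SAM_mem[encr]"}))  # True
print(satisfied("Trustworthiness", {"cam_mem[encr]"}))                   # False
```

Because both edge types are conjunctive, a single unsatisfied property at a leaf invalidates every concern on the path back to the aspect root.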
At the core of this approach is an ontology of the CPS Framework and of a CPS of interest. An ontology is a formal, logic-based representation that supports reasoning by means of logical inference. In this paper, we adopt a rather broad view of this term: by ontology, we mean a collection of statements in a logical language that represent a given domain in terms of classes (i.e., sets) of objects, individuals (i.e., specific objects), relationships between objects and/or classes, and logical statements over these relationships.

Figure 2: Open-source tools supporting the CPS Framework

In the context of the trustworthiness of CPS, for instance, an ontology might define the high-level concept of "Concern" with its refinement of "Aspect." All of these will be formalized as classes and, for Aspect, subclasses. Specific concerns will be represented as individuals:
Trustworthiness as an individual of class Aspect,
Security and
Cybersecurity of class Concern. Additionally, a relation "has-subconcern" might be used to associate a concern with its sub-concerns. Thus, Aspect "has-subconcern"
Security, which in turn "has-subconcern"
Cybersecurity. By introducing a property "satisfied," one could also indicate which concerns are satisfied. Inference can then be applied to propagate "satisfied" and other relevant properties and relations throughout the ontology. For example, given a concern that is not "satisfied," one can leverage relation "has-subconcern" to identify the concerns that are not satisfied, either directly or indirectly, because of it.

In practice, it is often convenient to distinguish between the factual part, Ω, of the ontology (later, simply called "ontology"), which encodes the factual information (e.g., Trustworthiness "has-subconcern"
Security), and the axioms, Λ, expressing deeper, often causal, links between relations (e.g., a concern is not satisfied if any of its sub-concerns is not satisfied). Further, when discussing reasoning tasks, we will also indicate, separately, the set Q of axioms encoding a specific reasoning task or query.

By leveraging a logic-based representation of a domain of interest, one can apply inference and draw new and useful conclusions in a principled, rigorous way. In essence, our approach is agnostic to any specific choice of logical language and inference mechanisms. Axioms expressed in the logical language used formalize the queries one is interested in answering, the type of reasoning that can be carried out, and any additional contextual information. Thus, given an ontology Ω, a set of axioms Λ, and an inference relation ⊨, we say that ∆ is an answer to the (implicit) query iff

Ω ∪ Λ ⊨ ∆,

where ∪ denotes the union of two sets. For instance, in the language of propositional logic, given knowledge that some proposition p is true and that p implies some other proposition q, one can infer that q is also true, i.e.:

{p, p ⊃ q} ⊨ {q}.

In the context of cybersecurity, p might be true when a cyberattack has occurred, and p ⊃ q might formalize an expert's knowledge that, whenever that cyberattack occurs, a certain system becomes inoperative (proposition q). The logical inference represented by symbol ⊨ allows us to draw the conclusion that, as a result of the cyberattack, the system is now inoperative. For increased flexibility of representation, we use here a non-monotonic extension of propositional logic, called Answer Set Programming (ASP) [7, 14, 3]. ASP is a rule-based language, where a rule is a statement of the form

h1 ∨ h2 ∨ ... ∨ hk ← l1, ..., lm, not lm+1, ..., not ln.   (1)

Every hi and li is a literal, i.e.
an atomic proposition analogous to p and q above, optionally prefixed by the negation symbol ¬ to express its negation. Intuitively, Equation (1), hereafter referred to as (1), states that, if l1, ..., lm hold and there is no reason to believe (the not keyword in (1)) that lm+1, ..., ln hold, then one of h1, ..., hk must hold. Thus, the ASP counterpart of the propositional logic implication p ⊃ q is

q ← p.

Suppose proposition r represents the fact that the system is patched against the cyberattack. To make conservative predictions about the system state after a cyberattack, we might want to conclude that the system should be expected to be inoperative unless there is positive evidence that it was patched. This can be represented in ASP by:

q ← p, not r.

Note the difference between ¬r and not r. The former is true if we have explicit evidence that the system has not been patched. The latter holds whenever we have that explicit evidence, but also whenever we simply do not know if it was patched or not. Depending on specific needs, Answer Set Programming allows either type of expression. (This type of default reasoning is an example of the greater flexibility of representation that motivates our use of ASP in this paper.)

Although ASP is propositional in nature, we follow common representational practice and allow for a literal to include a list of arguments, possibly comprising logical variables. For example, we may write q(s) to indicate that it is system s that is inoperative. Similarly, given a variable X, we may use

q(X) ← p, not r(X).

to say that any system X that is not known to be patched should be assumed to have been made inoperative by the cyberattack.

Naming Conventions

The decomposition of a CPS identifies resources that may satisfy properties. Suppose that cam is a camera, a subsystem of an autonomous car, and that mem is a memory sub-system of cam; we will examine this system in more detail later. Then
Then cam mem [ encr ] , e.g., is a Boolean predicate that is true if the memory mem ofcamera cam uses encryption. Properties thus have form SystemP ath [ prop ] where SystemP ath identifies a system component or part, with subcomponents indicatedby the underscore symbol, and prop a property that this part may enjoy. We interprettwo such properties to be equal only if their actual names are equal: cam mem [ encr ] and cam mem (cid:48) [ encr ] , e.g., are different properties as the same encryption is appliedto different memories of the same camera cam . Properties SystemP ath [ prop ] alsohave a semantic context ConcernP ath that articulates which (sub)concern of an as-pect this property is trying to address. Property cam mem [ encr ] , e.g., may havecontext Trustworthiness.Security.Cybersecurity.Confidentiality , where we usethe dot operator “.” in
ConcernPath to distinguish this easily from navigations in
SystemPath. In our semantics below, a property may be either true or false (i.e., satisfied or non-satisfied). These truth values in turn influence the satisfaction of concerns and aspects. Below, we elide details of such context or of system paths; e.g., Conf'd may abbreviate Trustworthiness.Security.Cybersecurity.Confidentiality.

For the sake of illustration, we consider a lane keeping/assist (LKAS) use case centered around an advanced car that uses a camera and a situational awareness module (SAM) for lane keeping/assist. The SAM processes the video stream from the camera and controls, through a physical output, the automated navigation system. The camera and the SAM may use encrypted memory and secure boot. Safety mechanisms in the navigation system cause it to shut down if issues are detected in the input received from the SAM. This use case is chosen because it encompasses major component types of a CPS, and lends itself to various non-trivial investigations. Through this use case, we will highlight the interplay among trustworthiness concerns, as well as their ramifications on other CPS aspects, such as the functional aspect.

For the sake of presentational simplicity, we will assume that the camera is capable of two recording modes, one at 25 fps (frames per second) and the other at 50 fps. The selection of the recording mode is made by the SAM, by acting on a flag of the camera's configuration. It is assumed that two camera models exist, a basic one and an advanced one. Either type of camera can be used when realizing the CPS. Due to assumed technical limitations, the basic camera is likely to drop frames if it attempts to record at 50 fps while using encrypted memory.

In our approach, the formalization of a CPS is organized along multiple levels: (L1) aspects and concerns; (L2) properties; (L3) CPS configuration; (L4) actions; (L5) constraints, dependencies and trade-offs; and (L6) satisfaction axioms. Levels L1 and L6 form the
CPS-independent specification, since aspects and concerns are independent of the specific CPS being modeled. Levels L2-L5 comprise the
CPS-dependent specification, as the information included in them depends on the CPS being modeled.

Figure 3: LKAS use case: pertinent part of the concern forest (aspects Timing, Functional, and Trustworthiness, with concerns Time-interval and latency control, Functionality, Safety, Physical security, Cybersecurity, Confidentiality, Integrity, Availability, Privacy, Resilience, and Reliability; properties include "camera stores all frames," "SAM uses secure boot," "camera uses secure boot," "camera should be capable of recording at 25 fps or at 50 fps," "SAM uses encrypted memory," and "camera uses encrypted memory")

Furthermore, levels L1 and L2 formalize the concepts from the definition of the CPS Framework. Levels L3-L5 extend the CPS Framework in order to provide details needed for reasoning about the behavior of a CPS of interest. Level L6 provides the semantics of the formalization. Next, we describe our approach through its application to the LKAS use case.
Formalization of aspects and concerns.
The formalization of aspects and concerns is shared by all CPSs. The nodes of a concern tree are represented by individuals of class Concern. The root nodes of the concern trees are a particular kind of concern, and so they are placed in a class (Aspect) that is a subclass of Concern. Following the definition of the CPS Framework, class Aspect includes individuals Trustworthiness, Timing and Functional for the corresponding aspects, while class Concern includes individuals Security, Cybersecurity, Functionality, etc.

Edges linking aspects and concerns are represented by the relation subConc, which is a representation of "sub-concern." Thus, an edge from a concern x to a concern y is formalized by a statement subConc(x, y). Statement subConc(Trustworthiness, Security), e.g., formalizes that the Security concern is a direct sub-concern of the Trustworthiness aspect in our LKAS use case. Concerns Cybersecurity and Conf'd are linked similarly.

Formalization of properties.

Properties of a CPS are represented by individuals of class Property. An edge that links a property with an aspect or concern is represented by relation addrBy, which stands for "addressed by." Let us suppose that, in the LKAS use case, both SAM and camera must use encrypted memory for the confidentiality concern to be satisfied (see Figure 3). We may express this by two statements addrBy(Conf'd, SAM_mem[encr]) and addrBy(Conf'd, cam_mem[encr]). Similarly, the fact that SAM and camera must use secure boot for the integrity concern to be satisfied is expressed by the statements addrBy(Integrity, SAM_boot[sec]) and addrBy(Integrity, cam_boot[sec]).

Another property, referred to below, is cam[storeAll], stating that camera cam stores all frames, i.e. does not drop any frames. Note that, in the LKAS use case, the car heavily depends on the camera for proper lane keeping/assist: not dropping any frames is essential for satisfaction of the functionality concern.

Formalization of configurations.
Properties do not necessarily capture all possible configurable features of a CPS, but only those on which concerns are defined. For instance, in the LKAS use case, there is a choice between using the basic camera or the advanced camera. We describe the choice between the two as part of the configuration of the CPS. Thus, the formalization includes a class
Configuration. Each individual of this class represents a different configuration feature, e.g. cam[basicOne] is used for the selection of a type of camera cam. Similarly to properties, configurations can be true or false in a given state of the CPS. In fact, their truth value is essential in defining the configuration of the CPS for a scenario of interest. Truth values of properties and configurations are specified by relation obs, where a statement obs(x, true) declares that property or configuration x is (observed to be) true. Observability of falsity is represented in a similar way.

Formalization of actions.
We use the term "action" to denote both those actions that are within the control of an agent (e.g., actions a driver may take), and those actions that occur spontaneously, e.g. triggered by a particular state of the CPS, such as the automatic disabling of the LKAS capability if the camera malfunctions. The formalization includes a suitable class
Action and individuals for the actions of interest. In the LKAS use case, we consider the occurrence of a cyberattack, and formalize it by means of the individual/action labeled
Attack. The case in which the automated navigation system shuts down is modeled by an individual
NavShutdown. When the configuration of a CPS can be modified at run-time, suitable actions
MakeTrue(c) and MakeFalse(c) may also be introduced, where c is the configuration the action affects. For example, in the LKAS use case, we consider actions MakeTrue(cam[basicOne]) and MakeFalse(cam[basicOne]), which, respectively, switch on or switch off the basic camera.

Formalization of constraints, dependencies, trade-offs.
An additional feature of our model is the ability to establish causal links between concerns, properties, configurations, and actions. This is accomplished by reasoning over statements. Table 1 lists types of statements, their syntactic expressions as judgments, and their corresponding encodings for the ASP reasoner. The logical encodings of the statements are used to implement reasoning capabilities discussed later in the paper. For an example of a property dependency statement, recall that the use of encrypted memory causes the basic camera to drop frames if it attempts to record at 50 fps. We formalize this by:

cam_mem[encr] ∧ ¬cam[rate25fps] ∧ cam[basicOne] impactsneg cam[storeAll]   (2)

Statement type         | Syntax                              | Encoding for reasoner
Property dependency    | Γ impactspos π / Γ impactsneg π     | impacted(pos/neg, π, S) ← holds(Γ, S)
Default property value | σ defaults true / σ defaults false  | defaults(σ, true/false)
Effects of actions     | a causes π if Γ                     | holds(π, S+1) ← holds(Γ, S), occurs(a, S)
Triggered actions      | Γ triggers a                        | occurs(a, S) ← holds(Γ, S)

Table 1: Constraints, dependencies, and trade-offs, where Γ, π range over (sets of) propositions and a over actions

The statement states that, under the conditions specified, the storeAll property is impacted negatively, that is, is made false. If a property is impacted positively, impactspos is used instead. As shown in this example, properties and configurations can be negated by prefixing them by ¬. Let us list relevant aspects or concerns from the contexts of these properties: Conf'd for encr, Timing for rate25fps, Configuration for basicOne, and
Functionality for storeAll . In the case of storeAll , one mayalso want to specify that the property should be assumed to hold true in the absence ofcontrary evidence. This can be achieved by a statement: storeAll defaults true
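To make the semantics of a property-dependency statement concrete, the following Python sketch (the editor's illustration, not part of the CPS Framework tooling) evaluates statement (2) against a CPS state. Proposition names such as cam_mem_encr are flattened, hypothetical renderings of the paper's cam_mem[encr]-style notation, and the absence of a proposition from the state is read as falsity, a closed-world simplification of the encoding's classical negation.

```python
def true_in(literal, state):
    """A literal is a proposition name, optionally prefixed with '-' for negation.
    Absence from the state is read as falsity (a simplification of the
    paper's classical negation)."""
    return literal[1:] not in state if literal.startswith("-") else literal in state

# Statement (2): cam_mem[encr] AND not cam[rate_25fps] AND cam[basicOne]
#                impacts_neg cam[storeAll]
DEPENDENCY = (("cam_mem_encr", "-cam_rate_25fps", "cam_basicOne"),
              "neg", "cam_storeAll")

def impacted(statement, state):
    """Return (polarity, property) when the premise Gamma holds, else None."""
    gamma, polarity, prop = statement
    return (polarity, prop) if all(true_in(l, state) for l in gamma) else None

# Basic camera, encrypted memory, recording at 50 fps (so rate_25fps is false):
print(impacted(DEPENDENCY, {"cam_mem_encr", "cam_basicOne"}))
# -> ('neg', 'cam_storeAll'): storeAll is made false

# At 25 fps the premise fails, and storeAll is unaffected:
print(impacted(DEPENDENCY, {"cam_mem_encr", "cam_basicOne", "cam_rate_25fps"}))
# -> None
```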
The effects of actions on properties are given by statements borrowed from the action language AL [8], which was designed specifically for compact specification of causal dependencies in complex domains. Suppose, for instance, that in the LKAS use case a cyberattack may force the camera to record at 50 fps. Using action Attack, introduced earlier, this may be formalized by the law:

Attack causes ¬rate_25fps.

The last type of statement from Table 1 describes the spontaneous triggering of actions when suitable conditions are satisfied. To illustrate this, recall that, in the LKAS use case, safety mechanisms cause the navigation system to shut down if issues are detected in the input received from the SAM. One obvious circumstance in which this will happen is when the system is not fully functional. This link can be formalized by the trigger:

¬Functional triggers NavShutdown.   (3)
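The effect law and trigger above can be simulated directly. The sketch below is the editor's simplification (not the paper's tooling): causes laws are restricted to unconditional, single-literal effects, trigger conditions to single literals, and absence from the state is read as falsity.

```python
CAUSES = {"Attack": "-rate_25fps"}           # Attack causes not rate_25fps
TRIGGERS = [("-Functional", "NavShutdown")]  # statement (3)

def apply_effect(state, literal):
    """Add or remove the affected proposition."""
    return state - {literal[1:]} if literal.startswith("-") else state | {literal}

def result(action, state):
    """The state produced by an action's effect law."""
    return apply_effect(state, CAUSES[action])

def triggered(state):
    """Actions whose trigger condition holds (absence read as falsity)."""
    return [a for cond, a in TRIGGERS
            if (cond[1:] not in state if cond.startswith("-") else cond in state)]

s0 = {"rate_25fps", "Functional"}    # camera recording at 25 fps, aspect satisfied
s1 = result("Attack", s0)
print("rate_25fps" in s1)            # -> False: the camera is forced to 50 fps
# Once the satisfaction axioms withdraw Functional (see Figure 4), (3) fires:
print(triggered(s1 - {"Functional"}))   # -> ['NavShutdown']
```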
Axioms.
Recall that our approach reduces the task of answering a query of interest to that of finding one or more answers ∆ such that Ω ∪ Λ ⊨ ∆ holds, where the ontology Ω and any supporting axioms Λ are expressed in a logical language for the reasoner of choice – ASP in this paper. (While we find AL convenient, our approach does not depend on a particular choice of language; other languages, e.g. PDDL, can easily be incorporated into our approach.) The statements presented so far are easily translated into logic statements, as shown in the last column of Table 1; e.g., (2) translates to:

impacted(neg, cam[storeAll], S) ← holds(cam_mem[encr], S), ¬holds(cam[rate_25fps], S), holds(cam[basicOne], S).   (4)

Here holds is an auxiliary relation stating that its argument holds at a discrete step S in the evolution of the CPS. As we will demonstrate later, the inclusion of a step argument makes it possible to analyze the evolution of the CPS over time in response to possible events.

It remains to formalize the meaning of the relation impacted in terms of its effect on the truth value of cam[storeAll]. In our approach, this is accomplished by a set of axioms that complete the translation of the statements from Table 1 and, additionally, enable reasoning about the satisfaction of properties, concerns, and aspects. Due to space considerations, we focus the presentation on the latter, shown in Figure 4.

¬holds(sat(C), S) ← addrBy(C, π), not holds(π, S).   (5)
¬holds(sat(C1), S) ← subConc(C1, C2), ¬holds(sat(C2), S).   (6)
holds(X, S) ← defaults(X, true), not ¬holds(X, S).   (7)
holds(π, 0) ← obs(π, true).   (8)
¬holds(π, 0) ← obs(π, false).   (9)

Figure 4: Satisfaction-related axioms for the LKAS use case

Axiom (5) intuitively states that a concern is not satisfied if any of the properties that address it does not hold.
This ensures that the lack of satisfaction of a property π is propagated to the concern(s) addressed by π, according to the addrBy statements provided by the formalization of properties. The lack of satisfaction is then propagated up the relevant concern tree by axiom (6), according to the concern-concern dependencies specified by the subConc statements in our ontology.

One may note that axioms (5)-(6) only address the lack of satisfaction of properties and concerns. The specification of the notion of satisfaction is completed by defaults statements saying that all properties and concerns are satisfied by default, by axiom (7), which embodies the semantics of the defaults statements, and by axioms (8)-(9), which link the observations about the initial state to the auxiliary relation holds.

Thus, if the basic camera is used with encrypted memory while recording at 50 fps, (4) makes it possible to conclude that property storeAll is not satisfied. In turn, (5) yields that Functionality is not satisfied. Finally, (6) concludes that the functional aspect is not satisfied.

4.3 Reasoning
The formalization presented above makes it possible to reason about the aspects and concerns of a CPS, their interdependencies, and their implications for the other systems the CPS may interact with. We now illustrate these reasoning capabilities, focusing mostly on the trustworthiness concerns; the reasoning mechanisms we established, however, can be applied to arbitrary parts of the aspect hierarchy.
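The propagation encoded by axioms (5)-(7) can be sketched as a recursive check over a concern tree. The fragment below is the editor's simplification: it uses a two-branch excerpt of the concern forest with flattened names, treats satisfaction as true by default (axiom 7), and withdraws it when an addressing property fails (axiom 5) or a sub-concern is unsatisfied (axiom 6).

```python
ADDR_BY = {"Functionality": ["storeAll"],      # concern -> addressing properties
           "Confidentiality": ["encr"]}
SUB_CONC = {"Functional": ["Functionality"],   # parent -> sub-concerns (excerpt)
            "Trustworthiness": ["Confidentiality"]}

def satisfied(concern, holding):
    """Axiom (7): satisfied by default; (5) and (6) withdraw satisfaction."""
    if any(p not in holding for p in ADDR_BY.get(concern, [])):    # axiom (5)
        return False
    return all(satisfied(c, holding)                               # axiom (6)
               for c in SUB_CONC.get(concern, []))

holding = {"encr"}                             # encryption on; storeAll defeated
print(satisfied("Trustworthiness", holding))   # -> True
print(satisfied("Functional", holding))        # -> False: storeAll does not hold
```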
Concern tree.
For the LKAS CPS, let the basic camera be used, let SAM and camera use encrypted memory and secure boot, and let the recording rate be set to 50 fps. Once aspects, concerns, properties, and configurations are formalized as described earlier, this system state is captured by the statements:

obs(basicOne, true), obs(cam_mem[encr], true),
obs(cam_boot[sec], true), obs(cam[rate_25fps], false),
obs(SAM_mem[encr], true), obs(SAM_boot[sec], true)

By inspecting Figure 3, one can see that the confidentiality concern is satisfied. From a technical perspective, a query "is χ satisfied by the design of the CPS?", where χ is a property (e.g., storeAll) or a concern, is answered by checking whether Ω ∪ Λ ⊨ holds(χ, 0) (for a concern χ, holds(sat(χ), 0)). By specifying a different time step, one can also check whether the query is satisfied at run time. In our running example, starting from the observation that encrypted memory is used, axiom (7) allows one to conclude that Ω ∪ Λ ⊨ holds(sat(Confidentiality), 0). Similarly, one can formally conclude holds(sat(Integrity), 0). From (6) and (7), it also follows that Cybersecurity is satisfied and, in turn, all concerns up to Trustworthiness. Thus the LKAS CPS is deemed to be trustworthy.

On the other hand, Ω ∪ Λ entails that both ¬holds(storeAll, 0) and ¬holds(sat(Functionality), 0) are true; recursively, the Functionality concern and the Functional aspect are thus not satisfied.
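The concern-tree walkthrough above can be sketched end to end: observations seed the initial state via axioms (8)-(9), the default for storeAll is blocked by statement (2), and satisfaction is then queried on the tree. This is the editor's closed-world simulation, not the ASP encoding itself; names are flattened (cam_mem[encr] becomes cam_mem_encr) and the concern tree is a simplified fragment of Figure 3.

```python
OBS = {"cam_basicOne": True, "cam_mem_encr": True, "cam_boot_sec": True,
       "cam_rate_25fps": False, "SAM_mem_encr": True, "SAM_boot_sec": True}
DEFAULT_TRUE = {"storeAll"}                    # storeAll defaults true

ADDR_BY = {"Confidentiality": ["cam_mem_encr", "SAM_mem_encr"],
           "Functionality": ["storeAll"]}
SUB_CONC = {"Trustworthiness": ["Cybersecurity"],
            "Cybersecurity": ["Confidentiality"],
            "Functional": ["Functionality"]}

def initial_holds():
    holding = {p for p, v in OBS.items() if v}            # axioms (8)-(9)
    # Statement (2): encrypted memory + basic camera + 50 fps defeat storeAll,
    # blocking the default of axiom (7); otherwise the default applies.
    defeated = ({"cam_mem_encr", "cam_basicOne"} <= holding
                and "cam_rate_25fps" not in holding)
    return holding if defeated else holding | DEFAULT_TRUE

def satisfied(concern, holding):
    if any(p not in holding for p in ADDR_BY.get(concern, [])):
        return False
    return all(satisfied(c, holding) for c in SUB_CONC.get(concern, []))

h0 = initial_holds()
print(satisfied("Trustworthiness", h0))   # -> True: CPS deemed trustworthy
print(satisfied("Functional", h0))        # -> False: storeAll is defeated
```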
All-sat.
One may also want to check whether all aspects are satisfied. This query is encoded by the set Q of axioms:

sat(all) defaults true.
¬holds(sat(all), S) ← aspect(A), ¬holds(sat(A), S).   (10)

These axioms introduce a "meta-aspect" all, representing the satisfaction of the entire concern forest, and state that it is enough for one aspect not to be satisfied to cause the concern forest not to be satisfied as a whole. In our example, one can check that Ω ∪ Λ ∪ Q ⊨ ¬holds(sat(all), 0). In fact, as we saw in the previous paragraph, ¬holds(sat(Functional), 0) is entailed. This is sufficient to trigger (10) and derive ¬holds(sat(all), 0). That is, the CPS is deemed to be trustworthy, but does not satisfy the functional aspect; therefore, the concern forest as a whole is not satisfied.

Partial synthesis/Design completion.
Our approach also allows for the completion of a partially specified CPS design so that desired constraints are satisfied. Let γ be the requirement that must be satisfied, e.g. sat(Confidentiality) or sat(all). The corresponding query is encoded by the set Q of axioms:

holds(π, 0) ∨ ¬holds(π, 0).
⊥ ← not holds(γ, 0).

The first rule states that each property π can be true or false, and the second says that holds(γ, 0) must be true in every solution/answer returned. For illustration, let us complete the partial design:

obs(basicOne, true), obs(cam_boot[sec], true),
obs(cam[rate_25fps], false), obs(SAM_mem[encr], true),
obs(SAM_boot[sec], true).

Note that the design does not specify whether the camera uses encrypted memory or not. Suppose that we are interested in finding a completion of the design in which the LKAS CPS is trustworthy. To do that, we set γ to sat(Trustworthiness). One can now check that Ω ∪ Λ ∪ Q entails holds(cam_mem[encr], 0). In fact, the completion of the design in which the camera uses encrypted memory makes the CPS trustworthy for the purposes of the design analysis.

What-if. A what-if reasoning task studies how the CPS is affected by the occurrence of actions, in terms of which properties hold, which concerns are satisfied, and which other actions may be triggered. Let the expression occurs(a, s) denote the occurrence of action a at step s, and let a history H be a set of such expressions. A query "is χ satisfied at step s′?", where χ is a property (e.g., storeAll) or a concern and s′ is a step during or after history H, is answered by checking whether Ω ∪ Λ ∪ H ⊨ holds(χ, s′). A query "does action a occur at step s′?" is answered by checking whether Ω ∪ Λ ∪ H ⊨ occurs(a, s′). Obviously, the same mechanism allows for answering more general questions, such as "is χ satisfied (or not satisfied) at some point during H?" and "which actions are triggered during H?".
With reference to the LKAS use case, let us consider a scenario in which, initially, the basic camera is used, SAM and camera use encrypted memory and secure boot, and the recording rate is set to 25 fps. Clearly, the functional aspect is satisfied by the CPS. We want to study whether the functional aspect remains satisfied after occurs(Attack, 0). That is, we need to check whether Ω ∪ Λ ∪ H ⊨ holds(sat(Functional), 1). Note the use of step 1 in the query, which corresponds to the step that follows the hypothesized occurrence of Attack. One can check that the answer to the query is negative. In fact, as we discussed earlier, the attack forces the camera to record at 50 fps. From (4), it follows that the camera will begin to drop frames, which in turn affects the functional aspect negatively. One may wonder whether there are any further side effects – for instance, whether any follow-up actions are triggered. This can be determined by checking whether any other action a occurs at step 1. Given that the functional aspect is no longer satisfied, (3) causes Ω ∪ Λ ∪ H to entail occurs(NavShutdown, 1), indicating that the navigation system will shut down. (Recall that occurs(·, ·) is derived from the triggers statement, as seen in Table 1.)
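This what-if scenario can be sketched as a two-step simulation (the editor's simplification, with the same flattened, hypothetical names as before): apply the effect law of Attack, recompute the Functional aspect via statement (2), and check which actions statement (3) triggers.

```python
def attack(state):
    """Effect law: Attack causes not rate_25fps."""
    return state - {"rate_25fps"}

def functional(state):
    """Functional holds unless statement (2) defeats storeAll:
    encrypted memory + basic camera + 50 fps -> frames are dropped."""
    defeated = ({"mem_encr", "basicOne"} <= state
                and "rate_25fps" not in state)
    return not defeated

def triggered(state):
    """Statement (3): not Functional triggers NavShutdown."""
    return ["NavShutdown"] if not functional(state) else []

s0 = {"basicOne", "mem_encr", "boot_sec", "rate_25fps"}   # initial state, 25 fps
print(functional(s0))     # -> True: functional aspect satisfied initially
s1 = attack(s0)           # hypothesized occurs(Attack, 0)
print(functional(s1))     # -> False: holds(sat(Functional), 1) fails
print(triggered(s1))      # -> ['NavShutdown']: the navigation system shuts down
```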
The last reasoning task we illustrate is aimed at determining how the effects of a history can be mitigated. As before, let H be a set of occurrences of actions. (Note that the axioms of Λ prevent the selection of truth values that conflict with the obs(·, ·) statements provided; to be precise, credulous entailment is used in this example.)
We are interested in answering the query "which mitigation measures can restore γ?", where γ is a concern or the meta-aspect all. To simplify the presentation, let us focus on the case in which all mitigation actions are executed concurrently after the last action of H. Let s denote the corresponding step. The set Q of axioms that encode the query includes a rule of the form

occurs(a, s) ∨ ¬occurs(a, s)

for every action a that one is interested in allowing, as well as a rule

⊥ ← not holds(sat(γ), s+1).

stating that it is impossible for γ not to be satisfied. The question is answered by finding the sets of actions a such that Ω ∪ Λ ∪ H ∪ Q ⊨ occurs(a, s). In the LKAS use case, it is not difficult to check that the mitigation action returned by this process is MakeFalse(cam[basicOne]), indicating that the basic camera should be replaced by the advanced camera in order to compensate for the fact that the cyberattack is forcing the CPS to record at 50 fps.

If the underlying inference mechanism allows for finding multiple solutions, one can also use our approach to find optimal solutions. For instance, one might ask "which mitigation measures can restore γ and involve the smallest number of actions?". If ASP is the underlying logical formalism, the query can be easily encoded by extending Q with a rule:

:∼ occurs(A, s).   (11)

where ":∼" is the weak-constraint connective, requesting the minimization of the occurrences of its right-hand side in any solution found. To illustrate the task, consider a variation of the LKAS use case in which a SAM that is affected by the cyberattack can be patched (action
Patch) to force it to request 25 fps recording at all times. Let Ω and Λ be modified accordingly and Q be expanded as described above. One can check that Ω ∪ Λ ∪ H ∪ Q now entails two alternative solutions: occurs(Patch, s) and occurs(MakeFalse(cam[basicOne]), s). While, in principle, another possible mitigation consists in both replacing the basic camera and patching the SAM, it is ruled out by (11) because it is not minimal. (For illustration purposes, we focus on after-the-fact mitigation; it is not difficult to extend the technique to cover preventive measures. It is also possible to use other types of minimization.)

Kolbe et al. [12] stress the importance of situational awareness in complex systems and the benefits of ontologies in enabling a rich context that permits developers and operators to model a large number of situations. Others, e.g. Gyrard et al. [11], stress the advantages of using ontologies and logical reasoning for cross-domain application development. Our experimental use cases illustrate that the richer context brought forward by the proposed approach supports more holistic insights into complex systems, their development, and their operations, and allows developers to model rich contexts and anticipate issues, constraints, and conflicts that are not self-evident and are multi-domain in nature.
In this paper, we presented a methodology for developing a Conceptual Ontology ofthe CPS Framework and its Aspects. We then tested parts of such a Conceptual Ontol-ogy to illustrate the approach with a use case for CPS, the lane keeping/assist scenarioof an advanced car. We demonstrated that the model supports multiple aspects of de-cision making based on the formulation and automatic answering of semantic queries.Although we focused this work on Trustworthiness, the model contains sufficient com-plexity to demonstrate the capabilities of the approach and its scalability to the full CPSFramework. Our experiment already includes complex considerations such as Trans-duction and Influence. Our work demonstrates that an ontology-based methodologycan aid engineers in identifying and resolving important issues for design, implemen-tation, and validation of CPS.
Acknowledgements.
M. Balduccini was partly supported by NIST grant 70NANB17H260. M. Huth acknowledges the UK EPSRC funded projects EP/N020030/1 and EP/N023242/1.
References

[1] E. Balas. Disjunctive programming: Cutting planes from logical conditions. In Nonlinear Programming 2, pages 279–312. Elsevier, 1975.
[2] Marcello Balduccini, Sarah Kushner, and Jacquelin Speck. Ontology-Driven Data Semantics Discovery for Cyber-Security. In Enrico Pontelli and Tran Cao Son, editors, PADL'15: Practical Aspects of Declarative Languages, Jun 2015.
[3] Chitta Baral. Knowledge Representation, Reasoning, and Declarative Problem Solving. Cambridge University Press, Jan 2003.
[4] Andrea Callia D'Iddio and Michael Huth. ManyOpt: An Extensible Tool for Mixed, Non-Linear Optimization Through SMT Solving. CoRR, abs/1702.01332, 2017.
[5] Sicun Gao, Jeremy Avigad, and Edmund M. Clarke. Delta-decidability over the reals. In Proc. of the 27th Annual IEEE Symp. on Logic in Computer Science, pages 305–314, 2012.
[6] Sicun Gao, Soonho Kong, and Edmund M. Clarke. dReal: An SMT solver for nonlinear theories over the reals. In Proc. of the 24th International Conf. on Automated Deduction, pages 208–214, 2013.
[7] Michael Gelfond and Vladimir Lifschitz. Classical Negation in Logic Programs and Disjunctive Databases. New Generation Computing, 9:365–385, 1991.
[8] Michael Gelfond and Vladimir Lifschitz. Action Languages. Electronic Transactions on AI, 3(16), 1998.
[9] Edward Griffor, Christopher Greer, David Wollman, and Martin Burns. Framework for Cyber-Physical Systems: Volume 1, Overview. Technical Report NIST-SP-1500-201, National Institute of Standards and Technology, Jun 2017.
[10] Edward Griffor, Christopher Greer, David Wollman, and Martin Burns. Framework for Cyber-Physical Systems: Volume 2, Working Group Reports. Technical Report NIST-SP-1500-202, National Institute of Standards and Technology, Jun 2017.
[11] Amelie Gyrard, Soumya Kanti Datta, Christian Bonnet, and Karima Boudaoud. Cross-Domain Internet of Things Application Development: M3 Framework and Evaluation. In , pages 9–16, Aug 2015.
[12] Niklas Kolbe, Arkady Zaslavsky, Sylvain Kubler, Jeremy Robert, and Yves Le Traon. Enriching a Situation Awareness Framework for IoT with Knowledge Base and Reasoning Components, 2017.
[13] Dimitrios A. Koutsomitropoulos and Aikaterini K. Kalou. A standards-based ontology and support for Big Data Analytics in the insurance industry. ICT Express, 3(2):57–61, 2017.
[14] Victor W. Marek and Miroslaw Truszczynski. The Logic Programming Paradigm: a 25-Year Perspective, chapter Stable Models and an Alternative Logic Programming Paradigm, pages 375–398. Springer Verlag, Berlin, 1999.
[15] Miten Mistr, Andrea Callia D'Iddio, Michael Huth, and Ruth Misener. Satisfiability modulo theories for process systems engineering. Eprints for the optimization community, 19 June 2017.
[16] Arpan Roy, Dong Seong Kim, and Kishor S. Trivedi. Attack countermeasure trees (ACT): towards unifying the constructs of attack and defense trees. Security and Communication Networks, 5(8):929–943, 2012.
[17] Barry Smith and Chris Welty. Ontology: Towards a New Synthesis.