A Knowledge-based Approach for the Automatic Construction of Skill Graphs for Online Monitoring
Inga Jatzkowski, Till Menzel, and Markus Maurer

Abstract — Automated vehicles need to be aware of the capabilities they currently possess. Skill graphs are directed acyclic graphs in which a vehicle's capabilities and the dependencies between these capabilities are modeled. The skills a vehicle requires depend on the behaviors the vehicle has to perform and the operational design domain (ODD) of the vehicle. Skill graphs were originally proposed for online monitoring of the current capabilities of an automated vehicle. They have also been shown to be useful during other parts of the development process, e.g., system design and system verification. Skill graph construction is an iterative, expert-based, manual process with little to no guidelines. This process is thus prone to errors and inconsistencies, especially regarding the propagation of changes in the vehicle's intended ODD into the skill graphs. To circumvent this problem, we propose to formalize expert knowledge regarding skill graph construction into a knowledge base and to automate the construction process. Thus, all changes in the vehicle's ODD are reflected in the skill graphs automatically, leading to a reduction in inconsistencies and errors in the constructed skill graphs.
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
I. INTRODUCTION

In order for an automated vehicle to operate safely in its environment, it must have knowledge of its current capabilities and whether they suffice for safe operation [2]. Skill and ability graphs have been proposed as a framework for modeling and monitoring the (current) capabilities of automated vehicles [3]. The construction of these graphs is done manually by experts who possess a thorough understanding of the system and the intended operational design domain (ODD) [4]. This construction is an ad-hoc process following no clear directions or guidelines, leaving the experts without a clear starting point or an idea of when a graph is complete.

Skill graphs as proposed in [3] are constructed as a directed acyclic graph of the skills necessary to perform an abstract behavior, e.g., a driving maneuver, and the dependencies between these skills. As several behaviors can require the same skills, these graphs may partially overlap. Manual construction of the skill graphs for an automated vehicle, as any manual modeling process, is error prone. Practical experience has shown that the experts constructing the graphs may forget crucial skills or dependencies during the modeling process. Skill graphs are designed iteratively and adjusted during the development process; thus, changes in the graphs have to be tracked, especially for overlapping parts of the graphs, to prevent inconsistencies. Even when the initial graphs were consistent and constructed correctly, integration and tracking of changes in the graphs proves to be a challenge for human modelers.

*This research is accomplished within the project "UNICARagil" [1] (FKZ 16EMO0285). We acknowledge the financial support for the project by the German Federal Ministry of Education and Research (BMBF). Inga Jatzkowski, Till Menzel, and Markus Maurer are with the Institute of Control Engineering at TU Braunschweig, 38106 Braunschweig, Germany, {lastname}@ifr.ing.tu-bs.de

Experts are usually an expensive resource.
Rather than having experts perform the entire modeling task, including checking for inconsistencies between the graphs for the individual behaviors, it would be more efficient to automate as much of the skill graph construction process as possible. Formalizing the experts' knowledge as well as the construction process itself can thus reduce expert involvement in the modeling process. Experts can be more effectively utilized to produce reusable artifacts for the modeling process and to evaluate the result of the automated modeling process.

In previous works, the modeled capabilities and the intended ODD were either small [5–7] or the construction of skill graphs was only demonstrated for one or a few selected behaviors [3, 8, 9]. The construction of a full set of skill graphs for a fully automated vehicle capable of performing a range of behaviors in a complex ODD has not been presented so far. Thus, the challenges accompanying the construction of skill graphs for multiple behaviors in a complex ODD have not arisen before, and a structured and formalized construction process was not necessary due to the reduced complexity of the task.

To handle the complexity of the construction process, we propose to design the construction process to require only minimal expert involvement. To this end, expert knowledge is compiled into a knowledge base. Every vehicle behavior requires a foundation of skills for its execution. Modeling these foundation skills still involves experts with knowledge of the respective behaviors. Additional necessary skills depend on the scene elements present in the vehicle's ODD. These additional skills are inferred from the ODD and automatically added to the base graph of foundation skills using the information stored in the knowledge base. Experts should be involved again in validating the generated graphs. This process relieves experts from the tedious parts of skill graph construction while keeping them involved for the aspects where their expertise is indispensable.
Generating the ODD-dependent part of the graphs automatically has the additional advantage that changes in the ODD are directly reflected in the skill graphs. At this point, only skill requirements derived from the ODD are reflected in this knowledge-based construction process. It is conceivable that skills may also depend on other aspects such as traffic rules or Object and Event Detection and Response (OEDR) strategies. However, additional dependencies can easily be added to the knowledge base. Another possible advantage of a knowledge-based automatic generation of skill graphs is that it is sufficient to verify the correctness of the knowledge base instead of the correctness of every graph. Correctness of the graphs is guaranteed by the reasoning of the ontology as long as the information inside the knowledge base is correct and complete.

The remainder of this paper is structured as follows: Section II and Section III give a brief overview of the concept of skill graphs and the concept of ontologies for knowledge representation. In Section IV, we provide an overview of relevant related publications before we present our approach for automatic skill graph construction and illustrate it with an example in Section V. We discuss preliminary results and limitations of this approach in Section VI and conclude the paper in Section VII.

II. SKILL GRAPHS

Skill graphs were introduced by Reschka et al. [3] and are based on the concept of a skill network presented in [5–7, 10]. Skill graphs are directed acyclic graphs. The nodes of the graph represent skills and the directed edges between the nodes represent "depends on" relations between the skills. The level of abstraction within the skill graph is highest at the root of the graph and becomes less abstract towards the leaves.
Each skill in the skill graph belongs to one of the following seven categories: system skills, behavioral skills, planning skills, perception skills, data acquisition skills, action skills, and actuation skills. In earlier publications [3, 8, 9, 11], data acquisition skills and actuation skills were titled data sources and data sinks, respectively. However, data sinks and data sources are objectively not skills, the same way that eyes or legs are not skills [12]. The underlying skills are the acquisition of sensory data (from sensor hardware or the optic nerve) and the control of the actuators (controlling actuator hardware or the capability to move the legs). We therefore adjust the terminology accordingly. The aforementioned skill categories form a hierarchy based on their level of abstraction, meaning a skill of one category can only have child nodes of specific other categories, cf. [11]. Data acquisition skills and actuation skills form the leaves of the graph and have no child nodes. An example graph showing the general graph structure is depicted in Fig. 1.

Fig. 1. Example of the structure of a skill graph. Boxes represent skill nodes; colors denote the skill categories: behavioral (yellow), action (orange), actuation (red), planning (light blue), perception (green), data acquisition (dark blue). Solid arrows represent "depends on" relations, dashed arrows represent "may depend on" relations.

III. ONTOLOGIES

According to Guarino et al., "an ontology is a formal, explicit specification of a shared conceptualization" [13, p. 2]. A conceptualization formally represents the entities that are of interest and the relationships that hold among these entities for a domain of interest. 'Formal' refers to the fact that the representation must be machine-readable. Ontologies should also be human-readable as they facilitate the communication between human and machine. A human- and machine-readable formal representation is achieved by using a subset of first-order predicate logic reduced to unary and binary predicates as a language for representing knowledge. Concepts, also called classes, are described by unary predicates; roles, also called relations or properties, are described by binary predicates; and individuals are instances of a concept or class [13].

Ontologies are structured into terminological boxes (T-box), describing the concepts of a domain, i.e., hierarchical classes, axioms, and properties, and assertional boxes (A-box), representing individuals of classes and knowledge from data. Reasoners can infer additional knowledge from terminological and assertional boxes, identify conflicts in concept and axiom definitions, and check for consistency [14].

IV. RELATED WORK

Skill graphs were proposed for online capability monitoring in [3] and further substantiated in [8, 9]. Skill graphs model the skills necessary for a vehicle to perform an abstract behavior as nodes in a directed acyclic graph and the dependencies between these skills as directed edges between nodes. Reschka [8] also proposes the use of skill graphs during the development process to aid in the construction of a functional system architecture. Nolte et al. [9] extend the use of skill graphs in early stages of the development process by demonstrating their usefulness for the derivation and refinement of functional requirements from safety requirements along the skills in a skill graph. Bagschik et al. [15] propose to regard skill graphs as one view in an architecture framework that is connected to other architecture views such as the software, hardware, or functional system architecture.
Skill graphs provide a functional viewpoint independent from the implementation realized in software or hardware and independent from the representation of functional components and interfaces. Through interconnections with other architecture views, skills can be related to their implementation or to functional system components. Knüppel et al. [11] utilize skill graphs for the verification of cyber-physical systems. The authors combine skill graphs as a formal system model with a formal theorem prover. They connect the individual skills of a skill graph with models for the realization of the skills and show a verification of a skill graph regarding safety requirements for the skills. Knüppel et al. also provide a formalization of skill graphs. While several possible applications for skill graphs have been proposed, none of the publications provide a structured process for the construction of skill graphs.

Colwell et al. [16] note that a change in the capabilities of the vehicle results in a restriction of the ODD the vehicle is able to operate in safely. They define one or more so-called degraded operation modes caused by system impairments for each subsystem of the automated vehicle and relate these modes to restrictions of the ODD. While [16] do not make use of the skill graph concept to manage ODD restrictions, they note that skill graphs could provide a useful abstraction between degraded operation modes and ODD restrictions. This connection of ODD and required vehicle skills is also stated in [2]. Nolte et al. [2] provide a taxonomy of self-monitoring concepts for automated vehicles and relate skill graphs as a capability representation to other aspects of self-representation and to the ODD. They state that the ODD determines the necessary capabilities of an automated vehicle as well as the functional requirements these capabilities have to fulfill.

Several recent publications have focused on a formal description of the ODD [17–21].
While they differ in the details, they all include a representation of scene elements. For a scene representation, Bagschik et al. [22] propose a five-layer model to structure scene elements such as traffic participants and their interactions, and environmental influences. They demonstrate the usefulness of this model in a knowledge-based scene generation for German highways. While not initially intended for ODD description, the five-layer model can be utilized to structure scene elements in an ODD description. Similar approaches for a representation of (parts of) the ODD in a knowledge base were presented in [23] and [14].

V. APPROACH

To enable automatic skill graph construction, several steps of manual information processing are required beforehand. An overview of the process is shown in Fig. 2. In a first step, base skill graphs must be constructed by experts for skills and behaviors. The base skill graphs, along with expert knowledge about skills and regulatory information concerning scene elements, are represented in a scene and skill ontology. This ontology and a user's selection of scene elements form the input for a Python-based implementation realizing the automatic skill graph construction according to stored rules.
Fig. 2. Overview of the process for automatic skill graph construction. Gray boxes represent process steps, white boxes represent inputs/outputs.
Requirements for the necessary skills of an automated vehicle stem from at least two sources: the behaviors the vehicle shall be able to perform and the ODD the vehicle shall perform these behaviors in. Every behavior requires a set of foundation skills to perform it, regardless of the intended domain. Thus, a base skill graph can be constructed from these foundation skills for every behavior. The construction of these base graphs is a task for experts as it requires deeper knowledge about what each behavior entails. However, the construction of the base skill graphs only has to be done once. The base skill graphs for the behaviors are ODD-independent and can be reused for different domains as long as the required behaviors do not change. Once the base skill graphs have been constructed, requirements for additional skills can be derived from the ODD. The ODD plays a central part in the development of an automated vehicle and several approaches for its description have been proposed [17–19]. What they all have in common is that they describe the scene elements [24] that can occur in the ODD. To structure these scene elements, the five-layer model for the representation of driving scenes by Bagschik et al. [22] can be used. It structures the scene elements in the following five layers:

L1: road-level elements
L2: traffic infrastructure
L3: temporary manipulation of L1 and L2
L4: objects
L5: environmental conditions

Knowledge regarding the scene elements of these layers can be modeled in an ontology as demonstrated in [22] for German highways.

Fig. 3. Class diagram of the connections between skills and scene elements.

By extending the ontology in [22] with
the base skill graphs for the behaviors, the skill(s) necessary for handling a scene element, and the dependencies between skills, this extended scene and skill ontology can be used to automatically generate a skill graph for a certain behavior and a specific ODD. The structure of the scene and skill ontology is depicted in Fig. 3.

In order to use this scene and skill ontology for automated skill graph construction, it is necessary to access the information stored within the ontology and infer additional information from the stored properties. A Python-based implementation with a Qt-based graphical user interface was programmed to access the information stored in the ontology and utilize it for automated skill graph construction. The implementation utilizes the Python library Owlready2 [25] to access the information in the ontology. Via the GUI, a behavior and a general domain, e.g., highway, can be selected. The domain can be further specified by manipulating the occurring scene elements. This input is used by the underlying implementation to access the ontology and infer the skill graph for the specified behavior and the selected ODD.

In the following, we describe the individual steps of the approach in detail, i.e., the construction of the base skill graphs, the building of the scene and skill ontology, and the automatic skill graph construction. Each step is illustrated using the behavior "lane keeping" as an example.
A. Construction of base skill graph
The base skill graph is constructed by experts with knowledge of the behavior the vehicle shall perform. Every vehicle behavior is connected to some basic infrastructure that needs to be present for the vehicle to be able to perform the behavior. Every behavior requires a driving surface and individual infrastructure elements. This selection of minimum necessary scene elements can aid experts in the derivation of the foundation set of skills a vehicle requires to perform the behavior. These skills and the dependencies between these skills form the base skill graph for the behavior and are always required regardless of the ODD. This approach was inspired by the utilization of a base case in a maneuver description in [26].
1) Example: Lane keeping:
We will illustrate the base skill graph construction using the example of the behavior "lane keeping". This behavior comprises the lateral aspects of following a lane but not the longitudinal aspects. The behavior lane keeping requires the existence of at least one lane on a drivable area with some form of lane boundaries. The lane boundaries are intentionally kept vague as it is only relevant that there is some way of discerning where the lane ends, but not how. At this point, the generalized, unspecific concept of lane boundaries includes all possible variations in the field: lane markings, implicit boundaries of the drivable area, curbs, and virtual boundaries stored in a digital map. A visual representation of the selection of minimum necessary scene elements is depicted in Fig. 4.
Fig. 4. Visual representation of the selection of scene elements which must be present for a vehicle to perform the behavior "lane keeping". The gray area depicts the road surface or surface of the drivable area. The areas with red crosshatch depict unspecified lane boundaries.
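The selection of minimum necessary scene elements per behavior can be sketched as a simple lookup. The element names and the helper below are hypothetical illustrations, not the authors' implementation; only the "lane keeping" entry is transcribed from the text (a drivable area, at least one lane, some form of lane boundary).

```python
# Hypothetical table of minimum necessary scene elements per behavior.
# Only "lane keeping" is transcribed from the description above.
MINIMUM_ELEMENTS = {
    "lane keeping": {"drivable area", "lane", "lane boundary"},
}

def missing_elements(behavior, odd_elements):
    """Return the minimum necessary scene elements absent from an ODD selection."""
    return MINIMUM_ELEMENTS.get(behavior, set()) - set(odd_elements)
```

Such a check could warn a user early that a selected ODD cannot support a selected behavior at all.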
The resulting base skill graph is depicted in Fig. 5. To follow a lane, the vehicle needs to be able to plan its trajectory to stay within the lane boundaries. Thus, it needs to be able to perceive the course of the lane, estimate its position and orientation relative to the lane, and estimate its own vehicle motion. The perception of the lane course requires either the evaluation of acquired digital map data and a pose estimate, or the evaluation of acquired imaging sensor data and a pose estimate. Vehicle motion may be estimated by the evaluation of acquired motion sensor data or the evaluation of acquired imaging sensor data.

Fig. 5. Base skill graph for "lane keeping". Notation see Fig. 1.

In addition to planning and perception skills, lane keeping also requires action skills. To stay within the lane boundaries, the vehicle must be able to control its lateral motion. Thus, it needs to be able to control the course angle of the vehicle, and it needs an estimate of the vehicle's motion. This requires the skill of controlling the steering system. It may also be realized by controlling the powertrain or the brake system. Skills closer to the root are more abstract and are necessary for most behaviors. Skills closer to the leaves are more specific and depend more on the ODD. The actuation skills at the leaves are fixed due to the general actuator design of a vehicle. The data acquisition skills are intentionally kept vague and are only separated into the evaluation of digital map data, of imaging sensor data, and of motion sensor data. In this way, skill graphs can assist in deriving a sensor concept based on the required skills.
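As a plain-data illustration, the base skill graph described above can be written down as a "depends on" adjacency table. The edges are our reading of the description of Fig. 5 (may-depend edges are folded in as ordinary entries), not an authoritative transcription.

```python
# Base skill graph for "lane keeping" as a "depends on" adjacency table
# (edges transcribed from the description of Fig. 5; approximate).
base_graph = {
    "lane keeping": ["plan trajectory", "control lateral dynamics"],
    "plan trajectory": ["estimate lane-relative position and orientation"],
    "estimate lane-relative position and orientation":
        ["perceive lane course", "estimate motion"],
    "perceive lane course":
        ["acquire imaging sensor data", "acquire digital map data"],
    "estimate motion":
        ["acquire motion sensor data", "acquire imaging sensor data"],
    "control lateral dynamics": ["control course angle", "estimate motion"],
    "control course angle": ["actuate steering system",
                             "actuate brake system", "actuate powertrain"],
}

def leaves(graph):
    """Skills that depend on nothing: the data acquisition and actuation skills."""
    targets = {child for children in graph.values() for child in children}
    return sorted(t for t in targets if t not in graph)
```

Here `leaves(base_graph)` yields exactly the three data acquisition and three actuation skills, matching the statement that these skill categories form the leaves of the graph.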
B. Scene and skill ontology
The scene and skill ontology contains the connections between scene elements and required skills, the connections between individual skills, and the connections between behaviors and the base skill graphs. An ontology provides a format to organize information, i.e., data and their semantic connections, in a human- and machine-readable manner, cf. Section III. The scene and skill ontology is a simplification and an extension of the scene ontology in [22].

The scene elements in the scene and skill ontology are structured using the five-layer model for the representation of driving scenes [22]. Scene elements for the domains (German) highways and urban areas were included in the ontology. The scene elements were derived from guideline documents for the construction of highways [27] and of urban roads [28], and from German traffic regulations [29]. Each scene element belongs to none or multiple domains but belongs to exactly one layer.

Skills are structured using the seven skill categories introduced in Section II. Each skill belongs to exactly one skill category. The skills were derived using expert knowledge, as no skill catalog exists for all the skills an automated vehicle requires. Dependency relations between the individual skills were also derived from expert knowledge and added to the skills as properties. A skill can depend on none or multiple other skills. Only actuation and data acquisition skills depend on no other skills. Skills are connected via a dependency relation representing the edges between skills in the skill graph. For each behavioral skill, all skills forming the base skill graph were added using a separate necessity relation. The same necessity relation is used to model relationships between skills that exclusively occur together in a skill graph. Finally, the scene element part and the skill part of the ontology were connected via relations between scene elements and skills. A scene element can determine the necessity for none or multiple skills.
A skill can also be determined by multiple scene elements. This relation is modeled as a property of the individual scene elements but could just as well be modeled as a skill property.

For skill graph construction, it is only relevant whether a scene element exists within the ODD. The placement of elements is (mostly) irrelevant. Thus, connections between scene elements necessary for automatic scene creation as in [22] were not included in the ontology. For simplicity, temporary manipulations of road-level elements and traffic infrastructure (L3), such as road works, were omitted from the ontology as well. Layer 4, objects, includes the interactions between objects and the maneuvers dynamic objects perform. Maneuvers are a representation of behaviors for which the skill graphs are constructed and are, thus, connected to behavioral skills as the root nodes of the individual skill graphs. The environmental conditions on layer 5 may influence the quality of a skill but do not evoke requirements for a skill's existence and were, thus, not used for skill graph construction. In essence, only scene elements of layers 1, 2, and 4 were used for the skill graph generation.

The resulting ontology contains the experts' knowledge about the scene elements that can be present in a domain, the skills determined by the presence of these scene elements, and the dependencies between all skills.
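Two of the modeling constraints above lend themselves to automated checks. The sketch below is a plain-Python stand-in for what an OWL reasoner would verify over the real ontology (all names are illustrative): each scene element belongs to exactly one layer, and a skill has no dependencies if and only if it is an actuation or data acquisition skill.

```python
# Consistency checks implied by the ontology design (illustrative sketch,
# not the authors' implementation).

def check_scene_elements(layers_of):
    """Each scene element must belong to exactly one layer."""
    return all(len(layers) == 1 for layers in layers_of.values())

def check_leaf_skills(category, depends_on):
    """Only actuation and data acquisition skills depend on no other skills,
    and, per Section II, those skills have no dependencies at all."""
    leaf_cats = {"actuation", "data acquisition"}
    return all(
        (category[s] in leaf_cats) == (len(depends_on.get(s, [])) == 0)
        for s in category
    )
```

Running such checks after every knowledge-base edit would catch the kind of modeling slips the paper attributes to manual construction.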
C. Automated skill graph construction
In order to use the scene and skill ontology for automated skill graph construction, we need to be able to access the information stored in it and infer additional information from the stored properties. The information stored in the ontology in the form of classes and properties represents the T-box. As stated above, a Python-based implementation with a graphical user interface (GUI) was programmed to access the information stored in the ontology and utilize it for automated skill graph construction. Via the GUI, a behavior and a general domain, e.g., the highway or urban domain, can be selected. For the selected domain, the occurring scene elements can be further specified. The selection of behavior and scene elements generates instances of the respective classes stored in the T-box. These instances are added to the A-box and are used as input for the underlying implementation to access the information in the T-box and infer the skill graph for the selected behavior and the specified ODD.

During the inference process, the A-box is populated with instances of skills inferred from the existence of scene elements and from necessity relations between skills. The behavioral skill determined by the behavior scene element is extracted from the ontology and an instance of the class is added to the A-box. At the start of the inference process, the base graph for the behavior is extracted from the ontology, as the behavioral skill has direct necessity relations to the skills of the base skill graph. Instances of the skills of the base skill graph and the dependencies between these instances are added to the A-box. As the base skills have dependency relations between each other, the base skill graph can be constructed according to these relations. Then, the selected scene elements are used to infer additional skills using the association of scene elements to required skills. Instances of these skills and their dependencies to other skills are added to the A-box.
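The inference step can be sketched as a small worklist closure. The relation tables below are hypothetical stand-ins for the ontology's "determines" and "necessitates" properties; the actual system reads these from the ontology via Owlready2.

```python
# Hypothetical relation tables (in the real system these are ontology
# properties, not Python dicts).
determines = {          # scene element -> skills it determines
    "solid lane marking": ["perceive solid lane markings"],
    "dashed lane marking": ["perceive dashed lane markings"],
}
necessitates = {        # skill -> skills it necessitates
    "perceive solid lane markings": ["perceive lane markings"],
    "perceive dashed lane markings": ["perceive lane markings"],
}

def infer_skills(scene_elements, base_skills):
    """Worklist closure: start from the base graph skills, add the skills
    determined by the selected scene elements, then recursively add every
    skill a newly added skill necessitates."""
    skills = set(base_skills)
    worklist = [s for e in scene_elements for s in determines.get(e, [])]
    while worklist:
        skill = worklist.pop()
        if skill in skills:
            continue  # each skill instance is added to the A-box only once
        skills.add(skill)
        worklist.extend(necessitates.get(skill, []))
    return skills
```

For example, selecting only solid lane markings adds "perceive solid lane markings" and, via its necessity relation, "perceive lane markings", while "perceive dashed lane markings" stays out of the graph.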
If an added skill necessitates a skill that is not yet part of the A-box, an instance of the missing skill is added as well. This process is repeated recursively until all missing skills are added. Once all scene elements have been considered and all resulting instances of skills and their dependencies have been added to the A-box, the inference process is complete. The resulting skill graph is extracted from the A-box and transformed into a suitable output format. The implementation outputs the final graph and, additionally, a document detailing the insertion process for traceability of the modeling steps.

It may be worth noting at this point that it is not strictly necessary for an automated vehicle to possess all skills included in the generated skill graph. Some requirements from the existence of certain scene elements may have different redundant solutions requiring different underlying skills. All redundant solutions are modeled in this initial generation. This allows the approach presented here to be used from the start of the development process. Reschka [8] proposes skill graphs as a tool to aid in the creation of the functional system architecture. We would also suggest the use of skill graphs for modeling possible redundant solutions to a problem and to help guide system implementation. A highly detailed graph is likely not useful for capability monitoring, considering the necessity of very detailed monitoring metrics that may be difficult to provide. Therefore, we suggest generating a very detailed initial skill graph for each behavior and pruning and condensing it as needed later in the development process to make it suitable for other applications. This pruning process
may be automated as well.

Fig. 6. Excerpt of T-box and A-box for the "lane keeping" example with the scene elements solid marking and dashed marking, and related skills.
1) Example: Lane keeping:
We will illustrate the process of skill graph generation using the example of the "lane keeping" behavior discussed above. Fig. 6 shows part of the T-box for this example to illustrate the inference process. Properties of more abstract classes are inherited by their child classes. Via the GUI, we select the behavior "lane keeping", a domain, e.g., "urban", and solid lane markings and dashed lane markings as explicit delineation between lanes on layer 1 as scene elements present in our ODD. If the ODD also contains roads with multiple lanes not delineated by lane markings, this must be modeled as well, as the absence of lane markings may require additional skills. This also applies to other infrastructure elements, e.g., stop lines at intersections. Instances of the scene elements (behavior and lane markings) are added to the A-box. The scene element "lane keeping" is connected to the behavioral skill "lane keeping" via a relation stored in the T-box; therefore, an instance of the skill "lane keeping" is added to the A-box. This behavioral skill necessitates the skills of the base skill graph, which are extracted from the ontology; instances of these skills and their dependency relations are added to the A-box. The existence of any type of (lane) marking within the ODD determines the skill "perceive lane markings". This property is inherited by the scene elements "solid lane marking" and "dashed lane marking" from the super-class "marking". An instance of a determined skill is only added to the A-box once, even if it is determined by multiple scene elements. The two different types of lane markings also determine the skills "perceive solid lane markings" and "perceive dashed lane markings". Instances of these skills are added to the A-box according to the dependency relations stored in the skills' properties. If, in this example, the skill "perceive lane markings" were not determined by the existence of a scene element, it would have been added to the A-box based on its necessity relations: both the skills "perceive solid lane markings" and "perceive dashed lane markings" necessitate the skill "perceive lane markings". Fig. 6 shows part of the A-box for this example. The resulting skill graph extracted from the A-box is depicted in Fig. 7.

Fig. 7. Skill graph for "lane keeping" for an ODD with the scene elements "solid lane marking" and "dashed lane marking" on L1. Notation see Fig. 1.

As stated above, it is not strictly necessary for an automated vehicle to perceive lane markings and infer the course of the lane from the perceived markings. Extracting the course of the lane from the evaluation of digital map data and using a map-relative pose estimate of the vehicle is also a possible way to determine the lane-relative position and orientation of the vehicle. Both solutions are modeled in the graph and can be pruned later if desired.

VI. PRELIMINARY RESULTS AND DISCUSSION

Skill graphs for several different vehicle behaviors and a variety of scene element combinations were automatically generated using the approach presented above. The base skill graphs for each of these behaviors were constructed manually by experts. The generated skill graphs were analyzed by experts and found to be sound in their general construction. No skills or dependencies were missing from the graphs and all dependency connections were correctly drawn. Automatically generated skill graphs still require expert assessment after generation to account for possible gaps in the knowledge base.
However, the automatic generation of skill graphs reduced errors in the construction process compared to manual construction. A number of automatically generated graphs were compared with manually constructed graphs. This comparison highlighted inconsistencies between the manually constructed graphs, missing dependency relations, and, in rare cases, missing skills. Errors in the automatically generated graphs can be traced back to errors or gaps in the knowledge base using the automatically generated documentation of the skill graph construction steps. Once the knowledge base is corrected, the errors in all affected graphs are corrected. The construction process is thus mostly reduced to a review process.

This initial implementation serves mostly as a proof of concept and has several limitations. The automatically generated skill graphs have a very fine skill granularity. This means, for example, that every individual traffic sign type present in the ODD will require a skill to perceive this particular type of sign. In order to derive requirements for system implementation, skill graphs with such a fine granularity can be helpful. For purposes of capability monitoring during operation, such a fine granularity is most likely not useful: for capability monitoring, it is more relevant that traffic signs in general can still be perceived than that every individual sign type can. Additionally, monitoring metrics for the perception quality of traffic sign detection in general can be provided more easily than for each individual traffic sign type. Thus, different levels of abstraction in skill granularity are necessary for different applications. One solution is to define superordinate skills and group related skills under these super-skills. Depending on the selected level of granularity, either only the super-skills are included in the final graph or the super-skills together with all their sub-skills.
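Such a grouping could be realized, for instance, by mapping each fine-grained skill to an optional superordinate skill and collapsing the graph when a coarse granularity is selected. The following sketch is not part of the presented implementation; the skill names and the edge-set graph representation are hypothetical.

```python
# Hypothetical mapping from fine-grained skills to superordinate skills.
SUPER_SKILL = {
    "perceive stop signs": "perceive traffic signs",
    "perceive speed limit signs": "perceive traffic signs",
}

def coarsen(nodes, edges):
    """Replace every sub-skill by its super-skill; skills without a
    super-skill are kept as-is. Self-loops created by merging are dropped."""
    mapped = {n: SUPER_SKILL.get(n, n) for n in nodes}
    coarse_nodes = set(mapped.values())
    coarse_edges = {(mapped[a], mapped[b])
                    for a, b in edges if mapped[a] != mapped[b]}
    return coarse_nodes, coarse_edges

# Fine-grained graph: one perception skill per sign type in the ODD.
fine_nodes = {"perceive stop signs", "perceive speed limit signs",
              "acquire imaging sensor data"}
fine_edges = {("perceive stop signs", "acquire imaging sensor data"),
              ("perceive speed limit signs", "acquire imaging sensor data")}

coarse_nodes, coarse_edges = coarsen(fine_nodes, fine_edges)
```

At the coarse level, the two sign-specific skills merge into a single 'perceive traffic signs' node with one dependency edge, which is the level of abstraction argued above to be more suitable for capability monitoring.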
Adding granularity levels can increase the usefulness of the approach for different applications.

Bagschik et al. use the highway scene ontology presented in [22] to generate traffic scenes from the ontology. During this process, they automatically exclude impossible combinations and relations of scene elements in the generated scenes. While automated skill graph construction does not require the same semantic information as scene generation, semantic information could improve the selection of scene elements for the definition of the ODD. ODD specification could be improved by including semantic information, e.g., about scene elements that are interdependent, meaning one element will never occur without the other and therefore cannot be separated. These connections can be unidirectional or bidirectional. Including these semantic relations in the scene ontology can limit the mistakes made during ODD specification.

VII. CONCLUSION & FUTURE WORK

In this paper, we proposed a knowledge-based approach for the automatic construction of skill graphs. Automating the construction of skill graphs relieves experts of a tedious and error-prone modeling task and allows changes to be integrated into the graphs automatically. Automating this construction process also means that non-experts can generate skill graphs for use in other parts of the development process. Experts will still be necessary to review the generated graphs.

We stated the influence of the ODD on the required capabilities of an automated vehicle in a previous contribution [2]. In this contribution, we detailed how this influence manifests itself in the relation between scene elements and required skills in the skill graphs. It would be interesting to evaluate the influence of other aspects of the ODD on the required vehicle skills.
The organization of this relational knowledge into an ontology also provides the opportunity of adding further information, such as monitoring metrics or monitoring requirements, to this ontology, as indicated in [9]. The ontology used in the presented approach for the automatic generation of skill graphs was adapted from a scene ontology for automatic scene generation. Thus, at least two possible applications for an ontological representation of domain knowledge have been presented. Additional applications in environment perception or scene understanding are evident possibilities. As domain knowledge is required at several points during automated vehicle development, a single domain knowledge representation for all possible applications could be useful to limit inconsistencies during development.

VIII. ACKNOWLEDGMENT

We would like to thank Ansgar Bock for assisting in the implementation of this approach and Marcus Nolte for the valuable discussions during conceptualization.

REFERENCES

[1] T. Woopen et al., "UNICARagil - Disruptive modular architectures for agile automated vehicle concepts," in Aachen Colloquium Automobile and Engine Technology, Aachen, 2018.
[2] M. Nolte, I. Jatzkowski, S. Ernst, and M. Maurer, "Supporting Safe Decision Making Through Holistic System-Level Representations & Monitoring - A Summary and Taxonomy of Self-Representation Concepts for Automated Vehicles," arXiv:2007.13807 [cs, eess], 2020.
[3] A. Reschka, G. Bagschik, S. Ulbrich, M. Nolte, and M. Maurer, "Ability and skill graphs for system modeling, online monitoring, and decision support for vehicle guidance systems," in IEEE Intelligent Vehicles Symposium (IV), 2015.
[4] SAE, "J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems," Standard, 2018.
[5] M. Maurer, "Flexible Automatisierung von Straßenfahrzeugen mit Rechnersehen," Dissertation, Universität der Bundeswehr München, München, 2000.
[6] K.-H. Siedersberger, "Komponenten zur automatischen Fahrzeugführung in sehenden (semi-)autonomen Fahrzeugen," Dissertation, Universität der Bundeswehr München, München, 2003.
[7] M. Pellkofer, "Verhaltensentscheidung für autonome Fahrzeuge mit Blickrichtungssteuerung," Dissertation, Universität der Bundeswehr München, München, 2003.
[8] A. Reschka, "Fertigkeiten- und Fähigkeitengraphen als Grundlage des sicheren Betriebs von automatisierten Fahrzeugen im öffentlichen Straßenverkehr in städtischer Umgebung," Dissertation, Technische Universität Braunschweig, Braunschweig, 2017.
[9] M. Nolte, G. Bagschik, I. Jatzkowski, T. Stolte, A. Reschka, and M. Maurer, "Towards a skill- and ability-based development process for self-aware automated road vehicles," in IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
[10] P. J. Bergmiller, "Towards Functional Safety in Drive-by-Wire Vehicles," Ph.D. dissertation, Springer International Publishing, Cham, 2015.
[11] A. Knüppel, I. Jatzkowski, M. Nolte, T. Thüm, T. Runge, and I. Schaefer, "Skill-Based Verification of Cyber-Physical Systems," in Fundamental Approaches to Software Engineering, H. Wehrheim and J. Cabot, Eds., vol. 12076, Cham: Springer International Publishing, 2020, pp. 203-223.
[12] M. Nolte, Personal communication, 2020.
[13] N. Guarino, D. Oberle, and S. Staab, "What Is an Ontology?" in Handbook on Ontologies, S. Staab and R. Studer, Eds., Berlin, Heidelberg: Springer, 2009.
[14] M. Hülsen, J. M. Zöllner, and C. Weiss, "Traffic intersection situation description ontology for advanced driver assistance," in IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 993-999.
[15] G. Bagschik, M. Nolte, S. Ernst, and M. Maurer, "A System's Perspective Towards an Architecture Framework for Safe Automated Vehicles," in IEEE 21st International Conference on Intelligent Transportation Systems (ITSC), 2018.
[16] I. Colwell, B. Phan, S. Saleem, R. Salay, and K. Czarnecki, "An Automated Vehicle Safety Concept Based on Runtime Restriction of the Operational Design Domain," in IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1910-1917.
[17] British Standards Institution, "PAS 1883: Operational Design Domain (ODD) taxonomy for an automated driving system (ADS) - Specification," Tech. Rep., 2020.
[18] P. Koopman and F. Fratrik, "How Many Operational Design Domains, Objects, and Events?" in SafeAI@AAAI, 2019.
[19] M. Gyllenhammar et al., "Towards an Operational Design Domain That Supports the Safety Argumentation of an Automated Driving System," in European Congress on Embedded Real Time Software and Systems (ERTS), Toulouse, France, 2020.
[20] K. Czarnecki, "Operational World Model Ontology for Automated Driving Systems - Part 1: Road Structure," Waterloo Intelligent Systems Engineering (WISE) Lab, 2018.
[21] ——, "Operational World Model Ontology for Automated Driving Systems - Part 2: Road Users, Animals, Other Obstacles, and Environmental Conditions," Waterloo Intelligent Systems Engineering (WISE) Lab, 2018.
[22] G. Bagschik, T. Menzel, and M. Maurer, "Ontology based Scene Creation for the Development of Automated Vehicles," in IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1813-1820.
[23] B. Hummel, W. Thiemann, and I. Lulcheva, "Scene understanding of urban road intersections with description logic," in Dagstuhl Seminar Proceedings, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2008.
[24] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer, "Defining and Substantiating the Terms Scene, Situation, and Scenario for Automated Driving," in IEEE 18th International Conference on Intelligent Transportation Systems (ITSC), 2015, pp. 982-988.
[25] J.-B. Lamy, "Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies," Artificial Intelligence in Medicine, vol. 80, pp. 11-28, 2017.
[26] K. Czarnecki, "Automated Driving System (ADS) Task Analysis - Part 2: Structured Road Maneuvers," Waterloo Intelligent Systems Engineering (WISE) Lab, 2018.