Can determinism and compositionality coexist in RML? (extended version)
D. Ancona, A. Ferrando, and V. Mascardi. This work is licensed under the Creative Commons Attribution License.
Davide Ancona    Viviana Mascardi
DIBRIS, University of Genova, Italy
{Davide.Ancona,Viviana.Mascardi}@unige.it

Angelo Ferrando
University of Manchester, UK
[email protected]
Runtime verification (RV) consists in dynamically verifying that the event traces generated by single runs of a system under scrutiny (SUS) are compliant with the formal specification of its expected properties. RML (Runtime Monitoring Language) is a simple but expressive Domain Specific Language for RV; its semantics is based on a trace calculus formalized by a deterministic rewriting system which drives the implementation of the interpreter of the monitors generated by the RML compiler from the specifications. While determinism of the trace calculus ensures better performance of the generated monitors, it makes the semantics of its operators less intuitive. In this paper we move a first step towards a compositional semantics of the RML trace calculus, by interpreting its basic operators as operations on sets of instantiated event traces and by proving that such an interpretation is equivalent to the operational semantics of the calculus.
1 Introduction

RV [34, 26, 12] consists in dynamically verifying that the event traces generated by single runs of a SUS are compliant with the formal specification of its expected properties.

The RV process needs as inputs the SUS and the specification of the properties to be verified, usually defined with either a domain specific language (DSL) or a programming language, to denote the set of valid event traces; RV is performed by monitors, automatically generated from the specification, which consume the observed events of the SUS and emit verdicts and, in case they work online while the SUS is executing, feedback useful for error recovery.

RV is complementary to other verification methods: analogously to formal verification, it uses a specification formalism but, as opposed to it, scales well to real systems and complex properties; and, as in software testing, it is not exhaustive. It also exhibits several distinguishing features: it is quite useful to check control-oriented properties [2], and offers opportunities for fault protection when the monitor runs online. Many RV approaches adopt a DSL to specify properties, to favor portability and reuse of specifications and interoperability of the generated monitors, and to provide stronger correctness guarantees: monitors automatically generated from a higher level DSL are more reliable than ad hoc code implemented in an ordinary programming language to perform RV.

RML [27] is a simple but expressive DSL for RV which can be used in practice for RV of complex non context-free properties, such as FIFO properties, which can be verified by the generated monitors in time linear in the size of the inspected trace; the language design and implementation are based on previous work on trace expressions and global types [7, 16, 4, 9], which have been adopted for RV in several contexts.
Its semantics is based on a trace calculus formalized by a rewriting system which drives the implementation of the interpreter of the monitors generated by the RML (https://rmlatdibris.github.io) compiler from the specifications; to achieve better performance, the rewriting system is fully deterministic [10], thanks to a left-preferential evaluation strategy for binary operators; thus, no monitor backtracking is needed and exponential explosion of the space allocated for the states of the monitor is avoided. A similar strategy is followed by mainstream programming languages in predefined libraries for regular expressions for efficient incremental matching of input sequences, to avoid the issue of Regular expression Denial of Service (ReDoS) [23]: for instance, given the regular expression a?(ab)? (optionally a, concatenated with optionally ab) and the input sequence ab, the Java method lookingAt() of class java.util.regex.Matcher matches a instead of the entire input sequence ab, because the evaluation of concatenation is deterministically left-preferential.

As explained in more detail in Section 5, with respect to other existing RV formalisms, RML has been designed as an extension of regular expressions and deterministic context-free grammars, which are widely used in RV because they are well understood among software developers, as opposed to other more sophisticated approaches, such as temporal logics. As shown in previous papers [7, 16, 4, 9], the calculus at the basis of RML allows users to define and efficiently check complex parameterized properties, and it has been proved to be more expressive than LTL [8].

Unfortunately, while determinism ensures better performance, it makes the compositional semantics of its operators less intuitive; for instance, the example above concerning the regular expression a?(ab)? with deterministic left-preferential concatenation applies also to RML, which is more expressive than regular expressions: the compositional semantics of concatenation does not correspond to standard language concatenation, because a? and (ab)? denote the formal languages {λ, a} and {λ, ab}, respectively, where λ denotes the empty string, while, if concatenation is deterministically left-preferential, then the semantics of a?(ab)? is {λ, a, aab}, which does not coincide with the language {λ, a, ab, aab} obtained by concatenating {λ, a} with {λ, ab}. In Section 4 we show that the semantics of left-preferential concatenation can still be given compositionally, although the corresponding operator is more complicated than standard language concatenation. Similar results follow for the other binary operators of RML (union, intersection and shuffle); in particular, the compositional semantics of left-preferential shuffle is more challenging. Furthermore, the fact that RML supports parametricity makes the compositional semantics more complex, since traces must be coupled with the corresponding substitutions generated by event matching. To this aim, as a first step towards a compositional semantics of the RML trace calculus, we provide an interpretation of the basic operators of the RML trace calculus as operations on sets of instantiated event traces, that is, pairs of event traces and substitutions computed to bind the variables occurring in the event type patterns used in the specifications and to associate them with the data values carried by the matched events.
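The computation above can be replayed concretely; the following Python sketch (function names are ours, for illustration only) shows both the lookingAt()-style behavior, via Python's re module, and a naive left-preferential concatenation of finite string languages:

```python
import re

# Deterministic left-preferential matching, as with Java's lookingAt():
# re.match stops at the first overall success, so a? greedily takes 'a',
# (ab)? matches the empty string, and only 'a' is consumed ...
assert re.match(r'a?(ab)?', 'ab').group(0) == 'a'
# ... while forcing a complete match makes the engine backtrack to 'ab':
assert re.fullmatch(r'a?(ab)?', 'ab').group(0) == 'ab'

def lp_concat_lang(T1, T2):
    """Left-preferential concatenation of two finite string languages:
    w1 + w2 is kept only if w2 is empty or w1 extended with the first
    character of w2 is not a prefix of any word of T1."""
    return {w1 + w2
            for w1 in T1 for w2 in T2
            if w2 == '' or not any(w.startswith(w1 + w2[0]) for w in T1)}

# a? denotes {λ, a} and (ab)? denotes {λ, ab} ('' plays the role of λ)
assert lp_concat_lang({'', 'a'}, {'', 'ab'}) == {'', 'a', 'aab'}
```

The word ab is lost precisely because the left operand is always allowed to go first: after a? consumes a, the residual input b cannot start a match of ab.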
Furthermore, we prove that such an interpretation is equivalent to the original operational semantics of the calculus based on the deterministic rewriting system.

The paper is structured as follows: Section 2 introduces the basic definitions which are used in the subsequent technical sections, Section 3 formalizes the RML trace calculus and its operational semantics, while Section 4 introduces the semantics based on sets of instantiated event traces and formally proves its equivalence with the operational semantics; finally, Section 5 is devoted to related work and Section 6 draws conclusions and directions for further work. For space limitations, some proof details can be found in the Appendix.

2 Basic definitions

This section introduces some basic definitions and propositions used in the next technical sections.
Partial functions:
Let f : D → C be a partial function; then dom(f) ⊆ D denotes the set of elements d ∈ D s.t. f(d) is defined (hence, f(d) ∈ C). A partial function over natural numbers f : ℕ → N, with N ⊆ ℕ, is strictly increasing iff for all n1, n2 ∈ dom(f), n1 < n2 implies f(n1) < f(n2). From this definition one can easily deduce that a strictly increasing partial function over natural numbers is always injective and, hence, it is bijective iff it is surjective.

Proposition 2.1 Let f : ℕ → N, with N ⊆ ℕ, be a strictly increasing partial function. Then for all n1, n2 ∈ dom(f), if f(n1) < f(n2), then n1 < n2.

Proposition 2.2 Let f : ℕ → N, with N ⊆ ℕ, be a strictly increasing partial function satisfying the following conditions:
1. f is surjective (hence, bijective);
2. for all n ∈ ℕ, if n + 1 ∈ dom(f), then n ∈ dom(f);
3. for all n ∈ ℕ, if n + 1 ∈ N, then n ∈ N.
Then, for all n ∈ ℕ, if n ∈ dom(f), then f(n) = n; hence f is the identity over dom(f), and dom(f) = N.

Event traces:
Let E denote a possibly infinite set of events, called the event universe. An event trace over the event universe E is a partial function ē : ℕ → E s.t. for all n ∈ ℕ, if n + 1 ∈ dom(ē), then n ∈ dom(ē). We call ē finite/infinite iff dom(ē) is finite/infinite, respectively; when ē is finite, its length |ē| coincides with the cardinality of dom(ē), while |ē| is undefined for infinite traces ē. From the definitions above one can easily deduce that if ē is finite, then dom(ē) = {n ∈ ℕ | n < |ē|}. We denote with λ the unique trace over E s.t. |λ| = 0; when not ambiguous, we denote with e the trace ē s.t. |ē| = 1 and ē(0) = e.

For simplicity, in the rest of the paper we implicitly assume that all considered event traces are defined over the same event universe.

Concatenation:
The concatenation ē1 · ē2 of event traces ē1 and ē2 is the trace ē s.t.
• if ē1 is infinite, then ē = ē1;
• if ē1 is finite, then ē(n) = ē1(n) for all n ∈ dom(ē1), ē(n + |ē1|) = ē2(n) for all n ∈ dom(ē2), and, if ē2 is finite, then dom(ē) = {n | n < |ē1| + |ē2|}.
From the definition above one can easily deduce that λ is the identity of ·, and that ē1 · ē2 is infinite iff ē1 or ē2 is infinite. The trace ē1 is a prefix of ē2, denoted with ē1 ⊳ ē2, iff there exists ē s.t. ē1 · ē = ē2. If T1 and T2 are two sets of event traces over E, then T1 · T2 is the set {ē1 · ē2 | ē1 ∈ T1, ē2 ∈ T2}. We write ē ⊳ T to mean that there exists ē′ ∈ T s.t. ē ⊳ ē′.

Shuffle:
The shuffle ē1 | ē2 of event traces ē1 and ē2 is the set of traces T s.t. ē ∈ T iff dom(ē) can be partitioned into N1 and N2 in such a way that there exist two strictly increasing and bijective partial functions f1 : dom(ē1) → N1 and f2 : dom(ē2) → N2 s.t. ē1(n1) = ē(f1(n1)) and ē2(n2) = ē(f2(n2)), for all n1 ∈ dom(ē1), n2 ∈ dom(ē2). (Actually, the sufficient condition is surjectivity, but bijectivity can be derived from the fact that the functions are strictly increasing over natural numbers.)
From the definition above, the definition of λ and Proposition 2.2 one can deduce that λ | ē = ē | λ = {ē}; it is easy to show that for all ē ∈ ē1 | ē2, ē is infinite iff ē1 or ē2 is infinite, and |ē| = n iff |ē1| = n1, |ē2| = n2 and n = n1 + n2. If T1 and T2 are two sets of event traces over E, then T1 | T2 is the set ⋃_{ē1 ∈ T1, ē2 ∈ T2} (ē1 | ē2).
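For finite traces, the shuffle (and the left-preferential refinements introduced next) can be computed recursively: at each step the next event of the result is drawn from either operand, so each operand's events keep their relative order, mirroring the strictly increasing index functions. A minimal Python sketch (tuples model finite traces; names and encoding are ours):

```python
def shuffle(t1, t2, blockers=None):
    """With blockers=None: all interleavings of the finite traces t1, t2.
    With a set of blocking traces: the (generalized) left-preferential
    shuffle, where an event of t2 can be placed only if no blocking trace
    could supply the same event at the next pending position of t1;
    blockers={t1} yields the plain left-preferential shuffle."""
    bs = () if blockers is None else tuple(blockers)

    def go(a, b, i1):  # i1 = number of events already consumed from t1
        if not a and not b:
            return {()}
        out = set()
        if a:
            out |= {(a[0],) + rest for rest in go(a[1:], b, i1 + 1)}
        if b and (not a or all(len(u) <= i1 or u[i1] != b[0] for u in bs)):
            out |= {(b[0],) + rest for rest in go(a, b[1:], i1)}
        return out

    return go(t1, t2, 0)

assert shuffle((), ('a', 'b')) == {('a', 'b')}      # λ | ē = {ē}
assert len(shuffle(('a', 'b'), ('c', 'd'))) == 6    # all interleavings

# Plain left-preferential shuffle: e1·e2·e3·e2 is excluded, since its
# first occurrence of e2 could only come from the right operand.
t1, t2 = ('e1', 'e2'), ('e2', 'e3')
assert shuffle(t1, t2, blockers={t1}) == {
    ('e1', 'e2', 'e2', 'e3'), ('e2', 'e1', 'e2', 'e3'),
    ('e2', 'e1', 'e3', 'e2'), ('e2', 'e3', 'e1', 'e2')}

# Generalized variant: blocking w.r.t. a whole set of traces prunes
# interleavings that would be legal for each pair taken in isolation.
T = {('e1', 'e2'), ('e2', 'e3')}
u2 = ('e1', 'e4')
naive = set().union(*(shuffle(u, u2, blockers={u}) for u in T))
wrt_T = set().union(*(shuffle(u, u2, blockers=T) for u in T))
assert len(naive) == 9 and len(wrt_T) == 6
```

The blockers parameter anticipates the generalized operator defined below: with an empty set of blockers the function computes the plain shuffle, matching ē1 ←|_∅ ē2 = ē1 | ē2.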
Left-preferential shuffle:
The left-preferential shuffle ē1 ←| ē2 of event traces ē1 and ē2 is the set of traces T ⊆ ē1 | ē2 s.t. ē ∈ T iff dom(ē) can be partitioned into N1 and N2 in such a way that there exist two strictly increasing and bijective partial functions f1 : dom(ē1) → N1 and f2 : dom(ē2) → N2 s.t.
• ē1(n1) = ē(f1(n1)) and ē2(n2) = ē(f2(n2)), for all n1 ∈ dom(ē1), n2 ∈ dom(ē2);
• for all n2 ∈ dom(ē2), if m = min{n1 ∈ dom(ē1) | f2(n2) < f1(n1)}, then ē1(m) ≠ ē2(n2).
In the definition above, if {n1 ∈ dom(ē1) | f2(n2) < f1(n1)} = ∅, then the second condition trivially holds.
As an example, if we have two traces of events ē1 = e1 · e2 and ē2 = e2 · e3, by applying the left-preferential shuffle we obtain the set of traces ē1 ←| ē2 = {e1 · e2 · e2 · e3, e2 · e1 · e2 · e3, e2 · e1 · e3 · e2, e2 · e3 · e1 · e2}. With respect to ē1 | ē2, the trace e1 · e2 · e3 · e2 has been excluded, since it can be obtained only when the first occurrence of e2 belongs to ē2; formally, this corresponds to the functions f1 : {0, 1} → {0, 3} and f2 : {0, 1} → {1, 2} s.t. f1(0) = 0, f1(1) = 3, f2(0) = 1, f2(1) = 2, which satisfy the first item of the definition, but not the second, because min{n1 ∈ {0, 1} | f2(0) = 1 < f1(n1)} = 1 and ē1(1) = e2 = ē2(0); the functions f1′ and f2′ s.t. f1′(0) = 0, f1′(1) = 1, f2′(0) = 3, f2′(1) = 2 do not provide an alternative derivation, because f2′ is not strictly increasing.

Generalized left-preferential shuffle:
Given a set of event traces T, the generalized left-preferential shuffle ē1 ←|_T ē2 of event traces ē1 and ē2 w.r.t. T is the set of traces T′ ⊆ ē1 ←| ē2 s.t. ē ∈ T′ iff dom(ē) can be partitioned into N1 and N2 in such a way that there exist two strictly increasing and bijective partial functions f1 : dom(ē1) → N1 and f2 : dom(ē2) → N2 s.t.
• ē1(n1) = ē(f1(n1)) and ē2(n2) = ē(f2(n2)), for all n1 ∈ dom(ē1), n2 ∈ dom(ē2);
• for all n2 ∈ dom(ē2), if m = min{n1 ∈ dom(ē1) | f2(n2) < f1(n1)}, then ē′(m) ≠ ē2(n2) for all ē′ ∈ T s.t. m ∈ dom(ē′).
(If the set {n1 ∈ dom(ē1) | f2(n2) < f1(n1)} is empty, the second condition trivially holds; this happens iff in ē all the events of ē1 precede event ē2(n2).)
From the definitions of the shuffle operators above one can easily deduce that ē1 ←|_∅ ē2 = ē1 | ē2 and ē1 ←|_{ē1} ē2 = ē1 ←| ē2, for all event traces ē1, ē2. This generalization of the left-preferential shuffle is needed to define the compositional semantics of the shuffle in Section 4. Let us consider T1 = {e1 · e2, e2 · e3} and T2 = {e1 · e4}; one might be tempted to define T1 ←| T2 as the set {ē | ē1 ∈ T1, ē2 ∈ T2, ē ∈ ē1 ←| ē2}, which corresponds to {e1 · e2 · e1 · e4, e1 · e1 · e2 · e4, e1 · e1 · e4 · e2, e2 · e3 · e1 · e4, e2 · e1 · e3 · e4, e2 · e1 · e4 · e3, e1 · e2 · e3 · e4, e1 · e2 · e4 · e3, e1 · e4 · e2 · e3}. But the last three traces, where e1 is consumed from T2 as the first event, are not correct, because the event e1 in T1 must take the precedence. Thus, the correct definition is given by {ē | ē1 ∈ T1, ē2 ∈ T2, ē ∈ ē1 ←|_{T1} ē2}, which does not contain the three traces mentioned above.

3 The RML trace calculus

In this section we define the operational semantics of the trace calculus on which RML is based. An RML specification is compiled into a term of the trace calculus, which is used as an intermediate representation, and then a SWI-Prolog monitor is generated; its execution employs the interpreter of the trace calculus, whose SWI-Prolog implementation is directly driven by the reduction rules defining the labeled transition system of the calculus.

Syntax. The syntax of the calculus is defined in Figure 1.

v ::= l | {k1: v1, ..., kn: vn} | [v1, ..., vn]     (data value)
b ::= x | l | {k1: b1, ..., kn: bn} | [b1, ..., bn] (basic data expression)
θ ::= τ(b1, ..., bn)                                (event type pattern)
t ::= ε            (empty trace)
   |  θ            (single event)
   |  t1 · t2      (concatenation)
   |  t1 ∧ t2      (intersection)
   |  t1 ∨ t2      (union)
   |  t1 | t2      (shuffle)
   |  {let x; t}   (parametric expression)

Figure 1: Syntax of the RML trace calculus: θ is defined inductively, t is defined coinductively on the set of cyclic terms.

The main basic building block of the calculus is provided by the notion of event type pattern, an expression consisting of a name τ of an event type, applied to arguments which are basic data expressions denoting either variables or the data values (of primitive, array, or object type) associated with the events perceived by the monitor. An event type is a predicate which defines a possibly infinite set of events; an event type pattern specifies the set of events that are expected to occur at a certain point in the event trace; since event type patterns can contain variables, upon a successful match a substitution is computed to bind the variables of the pattern with the data values carried by the matched event.

RML is based on a general object model where events are represented as JavaScript object literals; for instance, the event type open(fd) of arity 1 may represent all events stating 'function call fs.open has returned file descriptor fd' and having shape {event:'func_post', name:'fs.open', res: fd}. The argument fd consists of the file descriptor (an integer value) returned by a call to fs.open.
The definition is parametric in the variable fd, which can be bound only when the corresponding event is matched with the information of the file descriptor associated with the property res; for instance, open(42) matches all events of shape {event:'func_post', name:'fs.open', res:42}, that is, all returns from calls to fs.open with value 42.

Although RML offers users the possibility to define the event types that are used in the specification, for simplicity the calculus is independent of the language used to define event types; correspondingly, the definition of the rewriting system of the calculus is parametric in the relation match assigning a semantics to event types (see below).

A specification is represented by a trace expression t built on top of the constant ε (denoting the singleton set with the empty trace), event type patterns θ (denoting the sets of all traces of length 1 with events matching θ), the binary operators (able to combine together sets of traces) of concatenation (juxtaposition), intersection (∧), union (∨) and shuffle (|), and a let-construct to define the scope of variables used in event type patterns.

Unlike event type patterns, which are inductively defined terms, trace expressions are assumed to be cyclic (a.k.a. regular or rational) [22, 28, 5, 6] to provide an abstract support to recursion, since no explicit constructor is needed for it: the depth of a tree corresponding to a trace expression is allowed to be infinite, but the number of its different subtrees must be finite. This condition is proved to be equivalent [22] to requiring that a trace expression can always be defined by a finite set of possibly recursive syntactic equations; the internal representation of cyclic terms in SWI-Prolog is indeed based on such an approach.
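The calculus leaves match unspecified; purely as an illustration, here is one way the match relation for the open(fd) event type of the example could be realized in Python (the function name and the encoding of arguments are our own assumptions, not part of RML):

```python
def match_open(event, arg):
    """A possible realization of match(e, open(arg)): arg is either a
    variable name (a string such as 'fd') or a literal file descriptor.
    Returns the computed substitution, or None when match is undefined."""
    if event.get('event') == 'func_post' and event.get('name') == 'fs.open':
        res = event.get('res')
        if isinstance(arg, str):           # variable: bind it to the value
            return {arg: res}
        return {} if res == arg else None  # literal: values must coincide
    return None

e = {'event': 'func_post', 'name': 'fs.open', 'res': 42}
assert match_open(e, 'fd') == {'fd': 42}   # open(fd): binds fd to 42
assert match_open(e, 42) == {}             # open(42): matches, no bindings
assert match_open(e, 23) is None           # open(23): match undefined
```

A full implementation would be generated from the user's event type definitions; the sketch only fixes the shape of the result, a most general substitution or failure.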
⊢ E(t) ("t accepts the empty trace"):
(e-ε)      ⊢ E(ε)
(e-all)    ⊢ E(t1), ⊢ E(t2)  ⟹  ⊢ E(t1 op t2)   [op ∈ {|, ·, ∧}]
(e-or-l)   ⊢ E(t1)  ⟹  ⊢ E(t1 ∨ t2)
(e-or-r)   ⊢ E(t2)  ⟹  ⊢ E(t1 ∨ t2)
(e-par)    ⊢ E(t)  ⟹  ⊢ E({let x; t})

t −e→ t′; σ (reduction steps):
(single)     θ −e→ ε; σ   [σ = match(e, θ)]
(or-l)       t1 −e→ t1′; σ  ⟹  t1 ∨ t2 −e→ t1′; σ
(or-r)       t1 −e↛, t2 −e→ t2′; σ  ⟹  t1 ∨ t2 −e→ t2′; σ
(and)        t1 −e→ t1′; σ1, t2 −e→ t2′; σ2  ⟹  t1 ∧ t2 −e→ t1′ ∧ t2′; σ   [σ = σ1 ∪ σ2]
(shuffle-l)  t1 −e→ t1′; σ  ⟹  t1 | t2 −e→ t1′ | t2; σ
(shuffle-r)  t1 −e↛, t2 −e→ t2′; σ  ⟹  t1 | t2 −e→ t1 | t2′; σ
(cat-l)      t1 −e→ t1′; σ  ⟹  t1 · t2 −e→ t1′ · t2; σ
(cat-r)      t1 −e↛, t2 −e→ t2′; σ  ⟹  t1 · t2 −e→ t2′; σ   [⊢ E(t1)]
(par-t)      t −e→ t′; σ  ⟹  {let x; t} −e→ σ|x t′; σ\x   [x ∈ dom(σ)]
(par-f)      t −e→ t′; σ  ⟹  {let x; t} −e→ {let x; t′}; σ   [x ∉ dom(σ)]

t −e↛ (no reduction step):
(n-ε)        ε −e↛
(n-single)   θ −e↛   [match(e, θ) undefined]
(n-or)       t1 −e↛, t2 −e↛  ⟹  t1 ∨ t2 −e↛
(n-and-l)    t1 −e↛  ⟹  t1 ∧ t2 −e↛
(n-and-r)    t2 −e↛  ⟹  t1 ∧ t2 −e↛
(n-and)      t1 −e→ t1′; σ1, t2 −e→ t2′; σ2  ⟹  t1 ∧ t2 −e↛   [σ1 ∪ σ2 undefined]
(n-shuffle)  t1 −e↛, t2 −e↛  ⟹  t1 | t2 −e↛
(n-cat-l)    t1 −e↛  ⟹  t1 · t2 −e↛   [⊬ E(t1)]
(n-cat-r)    t1 −e↛, t2 −e↛  ⟹  t1 · t2 −e↛
(n-par)      t −e↛  ⟹  {let x; t} −e↛

Figure 2: Transition system for the trace calculus (each rule is written premises ⟹ conclusion, with side conditions in brackets; t −e↛ means that t cannot consume event e).

Since event type patterns are inductive terms, the definition of free variables for them is standard.
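To make the left-preferential rules concrete, here is a toy interpreter (ours, in Python; not the SWI-Prolog implementation) for a variable-free fragment with ε, single events, ∨, ·, ∧ and |; step returns the unique residual term, or None when no rule applies (the judgment t −e↛):

```python
def accepts_empty(t):
    """The judgment ⊢ E(t): t accepts the empty trace."""
    k = t[0]
    if k == 'eps':
        return True
    if k == 'evt':
        return False
    if k == 'or':
        return accepts_empty(t[1]) or accepts_empty(t[2])
    return accepts_empty(t[1]) and accepts_empty(t[2])  # cat, and, shuffle

def step(t, e):
    """One deterministic reduction step: the right operand of ∨, · and |
    is tried only when the left one cannot consume e."""
    k = t[0]
    if k == 'eps':
        return None
    if k == 'evt':
        return ('eps',) if t[1] == e else None
    l = step(t[1], e)
    if k == 'or':                            # (or-l), (or-r)
        return l if l is not None else step(t[2], e)
    if k == 'cat':                           # (cat-l), (cat-r)
        if l is not None:
            return ('cat', l, t[2])
        return step(t[2], e) if accepts_empty(t[1]) else None
    if k == 'shuffle':                       # (shuffle-l), (shuffle-r)
        if l is not None:
            return ('shuffle', l, t[2])
        r = step(t[2], e)
        return ('shuffle', t[1], r) if r is not None else None
    if k == 'and':                           # (and): both operands reduce
        r = step(t[2], e)
        return ('and', l, r) if l is not None and r is not None else None

def accepts(t, events):
    """Run the monitor over a finite trace of events."""
    for e in events:
        t = step(t, e)
        if t is None:
            return False
    return accepts_empty(t)

# a?(ab)? with left-preferential operators accepts {λ, a, aab} but not ab
opt_a = ('or', ('evt', 'a'), ('eps',))
opt_ab = ('or', ('cat', ('evt', 'a'), ('evt', 'b')), ('eps',))
t = ('cat', opt_a, opt_ab)
assert accepts(t, '') and accepts(t, 'a') and accepts(t, 'aab')
assert not accepts(t, 'ab')
```

Since the fragment has no variables, the substitution component and the (n-and) merge failure never arise here; they would appear as soon as event type patterns with variables were added.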
Definition 3.1
The set of free variables pfv(θ) occurring in an event type pattern θ is inductively defined as follows:

pfv(x) = {x}    pfv(l) = ∅
pfv(τ(b1, ..., bn)) = pfv({k1: b1, ..., kn: bn}) = pfv([b1, ..., bn]) = ⋃_{i=1..n} pfv(bi)

Given their cyclic nature, a similar inductive definition of free variables for trace expressions does not work; for instance, if t = open(fd) · t, a definition of fv given by induction on trace expressions would work only for non-cyclic terms and would be undefined for fv(t). Unfortunately, neither would a coinductive definition work correctly, since the set S returned by fv(t) has to satisfy the equation S = {fd} ∪ S, which has infinitely many solutions; hence, while an inductive definition of fv leads to a partial function which is undefined for all cyclic terms, a coinductive definition results in a non-functional relation fv; luckily, such a relation always admits the "least solution", which corresponds to the intended semantics.

Fact 3.1
Let p be the predicate on trace expressions and sets of variables, coinductively defined as follows:

p(ε, ∅)    p(θ, S) [pfv(θ) = S]    p(t, S) ⟹ p({let x; t}, S \ {x})
p(t1, S1), p(t2, S2) ⟹ p(t1 op t2, S1 ∪ S2)   [op ∈ {|, ·, ∧, ∨}]

Then, for any trace expression t, if L = ⋂{S | p(t, S) holds}, then p(t, L) holds.

Proof:
By case analysis on t and coinduction on the definition of p(t, S). □

Definition 3.2
The set of free variables fv(t) occurring in a trace expression t is defined by fv(t) = ⋂{S | p(t, S) holds}.

Semantics.
The semantics of the calculus depends on three judgments, inductively defined by the inference rules in Figure 2. Events e range over a fixed universe of events E. The judgment ⊢ E(t) is derivable iff t accepts the empty trace λ and is auxiliary to the definition of the other two judgments t −e→ t′; σ and t −e↛; the rules defining it are straightforward and are independent from the remaining judgments, hence a stratified approach is followed and ⊢ E(t) and its negation ⊬ E(t) are safely used in the side conditions of the rules for t −e→ t′; σ and t −e↛ (see below).

The judgment t −e→ t′; σ defines the single reduction steps of the labeled transition system on which the semantics of the calculus is based; t −e→ t′; σ is derivable iff the event e can be consumed, with the generated substitution σ, by the expression t, which then reduces to t′. The judgment t −e↛ is derivable iff there are no reduction steps for event e starting from expression t, and is needed to enforce a deterministic semantics and to guarantee that the rules are monotonic and, hence, the existence of the least fixed-point; the definitions of the two judgments are mutually recursive.

Substitutions are finite partial maps from variables to data values which are produced by successful matches of event type patterns; the domain of σ and the empty substitution are denoted by dom(σ) and ∅, respectively, while σ|x and σ\x denote the substitutions obtained from σ by restricting its domain to {x} and removing x from its domain, respectively. We simply write t −e→ t′ to mean t −e→ t′; ∅. Application of a substitution σ to an event type pattern θ is denoted by σθ, and defined by induction on θ:

σx = σ(x) if x ∈ dom(σ), σx = x otherwise    σl = l
σ{k1: b1, ..., kn: bn} = {k1: σb1, ..., kn: σbn}    σ[b1, ..., bn] = [σb1, ..., σbn]
στ(b1, ..., bn) = τ(σb1, ..., σbn)

Application of a substitution σ to a trace expression t is denoted by σt, and defined by coinduction on t:

σε = ε    σθ = τ(σb1, ..., σbn) if θ = τ(b1, ..., bn)
σ(t1 op t2) = σt1 op σt2 for op ∈ {·, ∧, ∨, |}    σ{let x; t} = {let x; σ\x t}

Since the calculus does not cover event type definitions, the semantics of event types is parametric in the auxiliary partial function match, used in the side conditions of rules (single) and (n-single): match(e, θ) returns the substitution σ iff event e matches event type σθ, and fails (that is, is undefined) iff there is no substitution σ for which e matches σθ. The substitution is expected to be the most general one and, hence, its domain to be included in the set of free variables of θ (see Def. 3.1).

As an example of how match could be derived from the definitions of event types in RML, if we consider again the event type open(fd) and e = {event:'func_post', name:'fs.open', res:42}, then match(e, open(fd)) = {fd ↦ 42}, while match(e, open(23)) is undefined.

Except for intersection, which is intrinsically deterministic since both operands need to be reduced, the rules defining the semantics of the other binary operators depend on the judgment t −e↛ to force determinism; in particular, the judgment is used to ensure a left-to-right evaluation strategy: reduction of the right operand is possible only if the left hand side cannot be reduced.

The side condition of rule (and) uses the partial binary operator ∪ to merge substitutions: σ1 ∪ σ2 returns the union of σ1 and σ2, if they coincide on the intersection of their domains, and is undefined otherwise.

Rule (cat-r) uses the judgment ⊢ E(t1) in its side condition: event e consumed by t2 can also be consumed by t1 · t2 only if e is not consumed by t1 (premise t1 −e↛, forcing left-to-right deterministic reduction), and the empty trace is accepted by t1 (side condition ⊢ E(t1)).

Rule (par-t) can be applied when variable x is in the domain of the
substitution σ generated by the reduction step from t to t′: the substitution σ|x restricted to x is applied to t′, and x is removed from the domain of σ, together with its corresponding declaration. If x is not in the domain of σ (rule (par-f)), no substitution and no declaration removal is performed.

The rules defining t −e↛ are complementary to those for t −e→ t′, and the definition of t −e↛ depends on the judgment t −e→ t′ because of rule (n-and): there are no reduction steps for event e starting from expression t1 ∧ t2, even when t1 −e→ t1′; σ1 and t2 −e→ t2′; σ2 are derivable, if the two generated substitutions σ1 and σ2 cannot be successfully merged together; this happens when there are two event type patterns that match event e for two incompatible values of the same variable.

Let us consider an example of a cyclic term with the let-construct: t = {let fd; open(fd) · close(fd) · t}. The trace expression declares a local variable fd (the file descriptor), and requires that two immediately subsequent open and close events share the same file descriptor. Since the recursive occurrence of t contains a nested let-construct, the subsequent open and close events can involve a different file descriptor, and this can happen an infinite number of times. In terms of derivation, starting from t, if the event {event:'func_post', name:'fs.open', res:42} is observed, which matches open(42), then the substitution {fd ↦ 42} is computed. As a consequence, the residual term close(42) · t is obtained, by substituting fd with 42 and removing the let-block. After that, the only valid event which can be observed is {event:'func_pre', name:'close', args:[42]}, matching close(fd).
Thus, after this rewriting step we get t again; the behavior continues as before, but a different file descriptor can be matched because of the let-block, which hides the outermost declaration of fd; indeed, the substitution is not propagated inside the nested let-block. Differently from t, the term {let fd; t′} with t′ = open(fd) · close(fd) · t′ would require all open and close events to match a unique global file descriptor. As further explained in Section 5, this example shows how the let-construct is a more flexible solution than the mechanism of trace slicing used in other RV tools to achieve parametricity.

The following lemma can be proved by induction on the rules defining t −e→ t′; σ.

Lemma 3.1
If t −e→ t′; σ is derivable, then dom(σ) ∪ fv(t′) ⊆ fv(t).

Since trace expressions are cyclic, they can only contain a finite set of free variables; therefore the domains of all substitutions generated by a possibly infinite sequence of consecutive reduction steps starting from t are all contained in fv(t).

The reduction rules defined above provide the basis for the semantics of the calculus; because of computed substitutions and free variables, the semantics of a trace expression is not just a set of event traces: every accepted trace must be equipped with a substitution specifying how variables have been instantiated during the reduction steps. We call it an instantiated event trace; this can be obtained from the pairs of event and substitution traces yielded by the possibly infinite reduction steps, by considering the disjoint union of all returned substitutions. Such a notion is needed to allow a compositional semantics (see the example in Section 4). The notion of substitution trace can be given in an analogous way as done for event traces in Section 2. By the considerations related to Lemma 3.1, the substitution associated with an instantiated event trace has always a finite domain, even when the trace is infinite; this means that the substitution is always fully defined after a finite number of reduction steps.

Definition 3.3 A concrete instantiated event trace over the event universe E is a pair (ē, σ̄) of an event trace over E and a substitution trace s.t. either ē and σ̄ are both infinite, or they are both finite and have the same length, all the substitutions in σ̄ have mutually disjoint domains and ⋃{dom(σ′) | σ′ ∈ σ̄} is finite.
We say that ( ¯ e , σ ) is derived from the concreteinstantiated event trace ( ¯ e , ¯ σ ) , written ( ¯ e , ¯ σ ) ( ¯ e , σ ) , iff σ = S { σ ′ | σ ′ ∈ ¯ σ } . In the rest of the paper we use the meta-variable I to denote sets of instantiated event traces. Weuse the notations I ↓ and I ↓ to denote the two projections { ¯ e | ( ¯ e , σ ) ∈ I } and { σ | ( ¯ e , σ ) ∈ I } ,respectively; we write ¯ e ⊳ I to mean ¯ e ⊳ I ↓ . The notation I ↓ ω denotes the set { ( ¯ e , σ ) | ( ¯ e , σ ) ∈ I , ¯ e infinite } restricted to infinite traces.We can now define the semantics of trace expressions. Definition 3.4
The concrete semantics ⟦t⟧c of a trace expression t is the set of concrete instantiated event traces coinductively defined as follows:
• (λ, λ) ∈ ⟦t⟧c iff ⊢ E(t) is derivable;
• (e · ē, σ · σ̄) ∈ ⟦t⟧c iff t −e→ t′; σ is derivable and (ē, σ̄) ∈ ⟦σ t′⟧c.
The (abstract) semantics ⟦t⟧ of a trace expression t is the set of instantiated event traces {(ē, σ) | (ē, σ̄) ∈ ⟦t⟧c, (ē, σ̄) ⇝ (ē, σ)}.

The following propositions show that the concrete semantics of a trace expression t as given in Definition 3.4 is always well-defined.

Proposition 3.1 If (ē, σ̄) ∈ ⟦t⟧c and ē is finite, then |ē| = |σ̄|.

Proposition 3.2 If (ē, σ̄) ∈ ⟦t⟧c and ē is infinite, then σ̄ is infinite as well.

Proposition 3.3 If (ē, σ̄) ∈ ⟦t⟧c, then for all n, m ∈ ℕ, n ≠ m implies dom(σ̄(n)) ∩ dom(σ̄(m)) = ∅.

Proposition 3.4 If (ē, σ̄) ∈ ⟦t⟧c, then for all n ∈ ℕ, dom(σ̄(n)) ⊆ fv(t).

4 Compositional semantics

In this section we show how each basic trace expression operator can be interpreted as an operation over sets of instantiated event traces, and we formally prove that such an interpretation is equivalent to the semantics derived from the transition system of the calculus in Definition 3.4, if one considers only contractive terms.
Left-preferential union:
The left-preferential union I1 ←∨ I2 of sets of instantiated event traces I1 and I2 is defined as follows:

I1 ←∨ I2 = I1 ∪ {(ē, σ) ∈ I2 | ē = λ or (ē = e · ē′ and ¬(e ⊳ I1))}.

In the deterministic left-preferential version of union, instantiated event traces in I2 are kept only if they start with an event which is not the first element of any of the traces in I1 (the condition vacuously holds for the empty trace); since reduction steps can involve only one of the two operands at a time, no restriction on the substitutions of the instantiated event traces is required.
The left-preferential concatenation I₁ ·⃖ I₂ of sets of instantiated event traces I₁ and I₂ is defined as follows:

I₁ ·⃖ I₂ = I₁↓ω ∪ {(ē₁·ē₂, σ) | (ē₁, σ₁) ∈ I₁, (ē₂, σ₂) ∈ I₂, σ = σ₁ ∪ σ₂, (ē₂ = λ or (ē₂ = e·ē₂′, (ē₁·e) ⋫ I₁))}.

The left operand I₁↓ω of the union corresponds to the fact that in the deterministic left-preferential version of concatenation, all infinite instantiated event traces in I₁ belong to the semantics of the concatenation. The right operand of the union specifies the behavior for all finite instantiated event traces ē₁ in I₁; in such cases, the trace in I₁ ·⃖ I₂ can continue with ē₂ in I₂ only if ē₁ is not allowed to continue in I₁ with the first event e of ē₂ ((ē₁·e) ⋫ I₁; the condition vacuously holds if ē₂ is the empty trace). Since the reduction steps corresponding to ē₂ follow those for ē₁, the overall substitution σ must meet the constraint σ = σ₁ ∪ σ₂, ensuring that σ₁ and σ₂ match on the shared variables of the two operands.

Intersection:
The intersection I₁ ∧ I₂ of sets of instantiated event traces I₁ and I₂ is defined as follows:

I₁ ∧ I₂ = {(ē, σ) | (ē, σ₁) ∈ I₁, (ē, σ₂) ∈ I₂, σ = σ₁ ∪ σ₂}.

Since intersection is intrinsically deterministic, its semantics holds no surprises.
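As a concrete (if simplified) reading of the operators above, the following Python sketch interprets them on finite sets of finite traces. The (trace, substitution) encoding, the function names, and the omission of the infinite-trace part I₁↓ω of concatenation are all our own assumptions made for illustration; they are not part of RML or its implementation.

```python
# An instantiated event trace is encoded (by assumption) as a pair
# (trace, subst): a tuple of events plus a dict binding variables.

def merge(s1, s2):
    """Union of two substitutions, or None when they disagree on a shared
    variable (i.e. when their union is undefined)."""
    if any(k in s2 and s2[k] != v for k, v in s1.items()):
        return None
    return {**s1, **s2}

def left_pref_union(I1, I2):
    """Keep all of I1; keep a trace of I2 only if it is empty or its first
    event cannot start any trace of I1."""
    starts1 = {t[0] for t, _ in I1 if t}
    return I1 + [(t, s) for t, s in I2 if not t or t[0] not in starts1]

def left_pref_concat(I1, I2):
    """Finite-trace part of left-preferential concatenation: a trace t1 of
    I1 continues with t2 only when t1 extended with t2's first event is not
    a prefix of some trace of I1; substitutions must merge."""
    def prefix_of_I1(t):
        return any(tr[:len(t)] == t for tr, _ in I1)
    out = []
    for t1, s1 in I1:
        for t2, s2 in I2:
            s = merge(s1, s2)
            if s is not None and not (t2 and prefix_of_I1(t1 + t2[:1])):
                out.append((t1 + t2, s))
    return out

def intersection(I1, I2):
    """Traces shared by both operands, with merged substitutions."""
    return [(t1, s) for t1, s1 in I1 for t2, s2 in I2
            if t1 == t2 for s in [merge(s1, s2)] if s is not None]

# Demo: concatenation forces the shared variable fd to agree.
I1 = [((("open", v),), {"fd": v}) for v in (3, 4)]
I2 = [((("close", v),), {"fd": v}) for v in (3, 4)]
print(len(left_pref_concat(I1, I2)))  # → 2: only same-fd pairs survive
```

The demo anticipates the open/close example discussed later: of the four candidate concatenations, only the two whose substitutions agree on fd are kept.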
Left-preferential shuffle:
The left-preferential shuffle I₁ |⃖ I₂ of sets of instantiated event traces I₁ and I₂ is defined as follows:

I₁ |⃖ I₂ = {(ē, σ) | (ē₁, σ₁) ∈ I₁, (ē₂, σ₂) ∈ I₂, σ = σ₁ ∪ σ₂, ē ∈ ē₁ |⃖_{I₁↓₁} ē₂}.

The definition is based on the generalized left-preferential shuffle defined in Section 2; an event in ē₂ at a certain position n can contribute to the shuffle only if no trace in I₁↓₁ could contribute the same event at the same position n. Since the reduction steps corresponding to ē₁ and ē₂ are interleaved, the overall substitution σ must meet the constraint σ = σ₁ ∪ σ₂, ensuring that σ₁ and σ₂ match on the shared variables of the two operands.

Variable deletion:
The deletion I \ x of x from the set of instantiated event traces I is defined as follows:

I \ x = {(ē, σ\x) | (ē, σ) ∈ I}.

As expected, variable deletion only affects the domain of the computed substitution.

The definitions above show that instantiated event traces are needed to allow a compositional semantics; let us consider the following simplified variation of the example given in Section 3: t′ = {let fd; open(fd)·close(fd)}. If we did not keep track of substitutions, then the compositional semantics of open(fd) and close(fd) would contain all traces of length 1 matching open(fd) and close(fd), respectively, for any value of fd, and, hence, the semantics of open(fd)·close(fd) could not constrain open and close events to be on the same file descriptor. Indeed, such a constraint is obtained by checking that the substitution of the event trace matching open(fd) can be successfully merged with the substitution of the event trace matching close(fd), so that the two substitutions agree on fd.

Contractivity is a condition on trace expressions which is statically enforced by the RML compiler; such a requirement avoids infinite loops when an event does not match the specification and the generated monitor would try to build an infinite derivation. Although the generated monitors could check potential loops dynamically, a syntactic condition enforced statically by the compiler relieves monitors of such a check and, thus, makes them more efficient.

Contractivity can be seen as a generalization of the absence of left recursion in grammars [36]; loops in cyclic terms are allowed only if they are all guarded by a concatenation where the left operand t₁ cannot contain the empty trace (that is, E(t₁) does not hold), and the loop continues in the right operand of the concatenation.
If such a condition holds, then it is not possible to build infinite derivations for t −e→ t′. Interestingly enough, such a condition is also needed to prove that the interpretation of the operators as given in Section 4.1 is equivalent to the semantics given in Definition 3.4. Indeed, the equivalence result proved in Theorem 4.1 is based on Lemma 4.1, stating that for every contractive term t and event e, there exist t′ and σ s.t. t −e→ t′; σ is derivable if and only if t −e↛ is not derivable; such a claim does not hold for a non-contractive term such as t = t ∨ t, because for all e, t′ and σ, neither t −e→ t′; σ nor t −e↛ is derivable. This is due to the fact that both judgments are defined by an inductive inference system. Intuitively, from a contractive term we cannot derive a new term without passing through at least one concatenation. For instance, considering the term t = e·t, we have contractivity because we have to consume e before going inside the loop. But, if we swap the operands, we obtain instead t = t·e, where contractivity does not hold; in fact, when deriving the concatenation we go first inside the head, but it is cyclic. Since the −→ and ↛ judgements are defined inductively, neither is derivable, because a finite derivation tree cannot be built for either of them.

Definition 4.1
Syntactic contexts C are inductively defined as follows:

C ::= □ | C op t | t op C | {let x; C}    with op ∈ {∧, ∨, |, ·}

Definition 4.2
A syntactic context C is contractive if one of the following conditions holds:
• C = {let x; C′} and C′ is contractive;
• C = C′ op t, C′ is contractive and op ∈ {·, ∧, ∨, |};
• C = t op C′, C′ is contractive and op ∈ {∧, ∨, |};
• C = t · C′, ⊢ E(t) is derivable and C′ is contractive;
• C = t · C′ and ⊢ E(t) is not derivable.

Definition 4.3
A term is part of t iff it belongs to the least set partof(t) matching the following definition:

partof(ε) = partof(θ) = ∅
partof({let x; t}) = {t} ∪ partof(t)
partof(t₁ op t₂) = {t₁, t₂} ∪ partof(t₁) ∪ partof(t₂)    for op ∈ {|, ·, ∧, ∨}

Because trace expressions can be cyclic, the definition of partof follows the same pattern adopted for fv. One can prove that a term t is cyclic iff there exists t′ ∈ partof(t) s.t. t′ ∈ partof(t′).

Definition 4.4
A term t is contractive iff the following conditions hold:
• for any syntactic context C, if t = C[t] then C is contractive;
• for any term t′, if t′ ∈ partof(t), then t′ is contractive.

We first list all the auxiliary lemmas used to prove Theorem 4.1.
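Before the lemmas, a brief aside: since terms may be cyclic, a direct implementation of partof (Definition 4.3) must track visited terms, exactly like fv. The following Python sketch mirrors the definition and the cyclicity criterion; the list-based AST is our own assumption (Python tuples cannot be cyclic), not RML's actual representation.

```python
# Assumed AST: a term is ['eps'], ['evtype', name], ['let', x, t] or
# [op, t1, t2] with op in {'cat', 'and', 'or', 'shuf'}; lists allow
# building cyclic terms by mutation.
def partof(t, seen=None):
    """Strict subterms of t; the visited set (by object identity) makes
    the traversal terminate on cyclic terms."""
    seen = set() if seen is None else seen
    if id(t) in seen:
        return []
    seen.add(id(t))
    if t[0] == "let":
        return [t[2]] + partof(t[2], seen)
    if t[0] in ("cat", "and", "or", "shuf"):
        return [t[1], t[2]] + partof(t[1], seen) + partof(t[2], seen)
    return []

# The contractive loop t = e · t of the running example:
t = ["cat", ["evtype", "e"], None]
t[2] = t
print(any(s is t for s in partof(t)))  # → True: t is cyclic
```

The final check is exactly the criterion stated above: t is cyclic because t itself occurs in partof(t).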
Lemma 4.1
For every contractive term t and event e, there exist t′ and σ s.t. t −e→ t′; σ is derivable if and only if t −e↛ is not derivable.

Lemma 4.2 If (ē, σ̄) ⇝ (ē, σ), then (ē, σ̄\x) ⇝ (ē, σ\x).

Here σ̄\x denotes the substitution sequence where x is removed from the domain of each substitution in σ̄.

Lemma 4.3
Given a substitution function σ and a term t, we have that σt = (σ\x)(σ|x)t = (σ|x)(σ\x)t, for every x ∈ dom(σ).

Lemma 4.4
Let t be a term, σ₂ be a substitution function s.t. dom(σ₂) = {x}; we have that: ∀(ē, σ₁) ∈ ⟦t⟧. ((σ₁ ∪ σ₂ is defined) ⟹ (ē, σ₁\x) ∈ ⟦σ₂t⟧).

Lemma 4.5
Let t be a term, σ₂ be a substitution function s.t. dom(σ₂) = {x}; we have that: ∀(ē, σ₁) ∈ ⟦σ₂t⟧. ((σ₁ ∪ σ₂ is defined) ⟹ (ē, σ₁ ∪ σ₂) ∈ ⟦t⟧).

Lemma 4.6 t −e→ ⟺ e ⊳ ⟦t⟧.

Lemma 4.7 If (ē, σ) ∈ ⟦t⟧, then (ē, ∅) ∈ ⟦σt⟧.

Lemma 4.8 If (ē, σ̄) ∈ ⟦t⟧c and ē is infinite, then (ē, σ̄) ∈ ⟦t·t′⟧c for every t′.

Lemma 4.9
If e·ē ∈ ē₁ |⃖_T ē₂, then ē₁ = e·ē₁′, or ē₂ = e·ē₂′ and e ⋫ T.

Lemma 4.10 If (ē, σ̄) ∈ ⟦t⟧c and ⊢ E(t′), then (ē, σ̄) ∈ ⟦t·t′⟧c.

Lemma 4.11
Given (e·ē₂, σ̄₂) ∈ ⟦t₂⟧c, t₂ −e→ t₂′; σ and (ē₂, σ̄₂′) ∈ ⟦σt₂′⟧c with σ̄₂ = σ·σ̄₂′. If ē₁ = e₁·…·eₙ is finite, t₁ −e₁→ u₁; σ₁, u₁ −e₂→ u₂; σ₂, …, uₙ₋₁ −eₙ→ uₙ; σₙ, with σ̄₁ = σ₁·…·σₙ and uₙ −e↛, then (ē₁·e·ē₂, σ̄₁·σ̄₂) ∈ ⟦t₁·t₂⟧c.

In Theorem 4.1 we claim that, for every operator of the trace calculus, the compositional semantics is equivalent to the abstract semantics. To prove such a claim, we need to show that, for each operator, every trace belonging to the compositional semantics belongs to the abstract semantics, which means we only consider correct traces (soundness), and that every trace belonging to the abstract semantics belongs to the compositional semantics, which means we consider all the correct traces (completeness).

Each operator requires a customised proof but, in principle, all the proofs follow the same reasoning. Both the soundness and the completeness proofs start by expanding the definition of the compositional semantics in terms of the concrete semantics, which in turn is rewritten in terms of the operational semantics. At this point, the operands of the compositional operator can be analysed separately in order to be recombined with the corresponding trace calculus operator. Finally, the proofs are concluded by going backwards from the operational semantics to the abstract one, through the concrete semantics. For all the operators except ∨ and ∧, the proofs are given by coinduction over the term structure. In every proof step which is not analysed separately (the ⟺ cases), we implicitly apply Lemma 4.1.

Theorem 4.1
The following claims hold for all contractive terms t₁ and t₂:
• ⟦t₁ ∨ t₂⟧ = ⟦t₁⟧ ∨⃖ ⟦t₂⟧
• ⟦t₁ · t₂⟧ = ⟦t₁⟧ ·⃖ ⟦t₂⟧
• ⟦t₁ ∧ t₂⟧ = ⟦t₁⟧ ∧ ⟦t₂⟧
• ⟦t₁ | t₂⟧ = ⟦t₁⟧ |⃖ ⟦t₂⟧
• ⟦{let x; t}⟧ = ⟦t⟧ \ x.

Due to space constraints, the proofs for the union, intersection, shuffle and let cases are omitted here and can be found in the appendix. In the proofs that follow, we prove composed implications such as A₁ ∨ … ∨ Aₙ ⟹ B by splitting them into n separate implications A₁ ⟹₁ B, …, Aₙ ⟹ₙ B.

The first operator we analyse is concatenation, for which we show that (ē, σ) ∈ ⟦t₁ · t₂⟧ ⟺ (ē, σ) ∈ ⟦t₁⟧ ·⃖ ⟦t₂⟧. The proof for the empty trace is trivial, and is built on top of the definition of the E predicate.

(λ, ∅) ∈ ⟦t₁ · t₂⟧
⟺ (λ, λ) ∈ ⟦t₁ · t₂⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧)
⟺ ⊢ E(t₁ · t₂) is derivable (by definition of ⟦t⟧c)
⟺ ⊢ E(t₁) is derivable ∧ ⊢ E(t₂) is derivable (by definition of E(t))
⟺ (λ, λ) ∈ ⟦t₁⟧c ∧ (λ, λ) ∈ ⟦t₂⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧c)
⟺ (λ, ∅) ∈ ⟦t₁⟧ ∧ (λ, ∅) ∈ ⟦t₂⟧ (by definition of ⟦t⟧)
⟺ (λ, ∅) ∈ (⟦t₁⟧ ·⃖ ⟦t₂⟧) (by definition of ·⃖)

When the trace is not empty, we present the proofs of completeness (⟹) and soundness (⟸) separately.

Let us start with completeness. To prove it, we have to show that the abstract semantics ⟦t₁ · t₂⟧ (based on the original operational semantics) is included in the composition of the abstract semantics ⟦t₁⟧ and ⟦t₂⟧ through the ·⃖ operator. More specifically, in the first part of the proof (⟹₁), the first event of the trace belongs to the head of the concatenation.
Thus, the head is expanded through the operational semantics, causing the term to be rewritten into a concatenation where the head is replaced by a new term. Since the concrete semantics has been defined coinductively, we can conclude by coinduction that the claim holds for the derived concatenation. Finally, the proof is concluded by recombining the new concatenation in terms of ·⃖. The second part of the proof (⟹₂) does not require coinduction, since the trace belongs to the tail of the concatenation. Through the operational semantics, the concatenation is rewritten into the new tail, and the proof is straightforwardly concluded following the abstract semantics.

(e·ē, σ) ∈ ⟦t₁·t₂⟧
⟹ (e·ē, σ̄) ∈ ⟦t₁·t₂⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) (by definition of ⟦t⟧)
⟹ t₁·t₂ −e→ t′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by definition of ⟦t⟧c)
⟹ (t₁ −e→ t₁′; σ′ is derivable ∧ t₁·t₂ −e→ t₁′·t₂; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′(t₁′·t₂)⟧c) ∨ (t₁ −e↛ ∧ E(t₁) ∧ t₂ −e→ t₂′; σ′ is derivable ∧ t₁·t₂ −e→ t₂′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t₂′⟧c) (by the operational semantics)
⟹₁ t₁ −e→ t₁′; σ′ is derivable ∧ t₁·t₂ −e→ t₁′·t₂; σ′ is derivable ∧ (ē, σ″) ∈ ⟦σ′(t₁′·t₂)⟧ ∧ (ē, σ̄′) ⇝ (ē, σ″) ∧ σ = σ″ ∪ σ′ (by definition of ⟦t⟧)
⟹₁ t₁ −e→ t₁′; σ′ is derivable ∧ t₁·t₂ −e→ t₁′·t₂; σ′ is derivable ∧ (ē, σ″) ∈ ⟦σ′t₁′⟧ ·⃖ ⟦σ′t₂⟧ ∧ (ē, σ̄′) ⇝ (ē, σ″) ∧ σ = σ″ ∪ σ′ (by coinduction over ⟦t⟧)
⟹₁ t₁ −e→ t₁′; σ′ is derivable ∧ (ē₁, σ₁″) ∈ ⟦σ′t₁′⟧ ∧ (ē₂, σ₂″) ∈ ⟦σ′t₂⟧ ∧ ē = ē₁·ē₂ ∧ (ē₂ = λ ∨ (ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦σ′t₁′⟧)) (by definition of ·⃖)
⟹₁ t₁ −e→ t₁′; σ′ is derivable ∧ (ē₁, σ̄₁) ∈ ⟦σ′t₁′⟧c ∧ (ē₁, σ̄₁) ⇝ (ē₁, σ₁″) ∧ (ē₂, σ₂″) ∈ ⟦σ′t₂⟧ ∧ ē = ē₁·ē₂ ∧ (ē₂ = λ ∨ (ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦σ′t₁′⟧)) (by definition of ⟦t⟧)
⟹₁ (e·ē₁, σ′·σ̄₁) ∈ ⟦t₁⟧c ∧ (ē₁, σ̄₁) ⇝ (ē₁, σ₁″) ∧ (ē₂, σ₂″) ∈ ⟦σ′t₂⟧ ∧ ē = ē₁·ē₂ ∧ (ē₂ = λ ∨ (ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦σ′t₁′⟧)) (by definition of ⟦t⟧c)
⟹₁ (e·ē₁, σ₁) ∈ ⟦t₁⟧ ∧ (ē₂, σ₂″ ∪ σ′) ∈ ⟦t₂⟧ ∧ ē = ē₁·ē₂ ∧ (ē₂ = λ ∨ (ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦σ′t₁′⟧)) (by definition of ⟦t⟧ and Lemma 4.7)
⟹₁ (e·ē, σ) ∈ ⟦t₁⟧ ·⃖ ⟦t₂⟧ (by definition of ·⃖)
⟹₂ (e·ē, σ̄) ∈ ⟦t₂⟧c ∧ (λ, λ) ∈ ⟦t₁⟧c ∧ t₁ −e↛ (by definition of ⟦t⟧c)
⟹₂ (e·ē, σ) ∈ ⟦t₂⟧ ∧ (λ, ∅) ∈ ⟦t₁⟧ ∧ t₁ −e↛ (by definition of ⟦t⟧)
⟹₂ (e·ē, σ) ∈ ⟦t₂⟧ ∧ (λ, ∅) ∈ ⟦t₁⟧ ∧ (λ·e) ⋫ ⟦t₁⟧ (by Lemma 4.6)
⟹₂ (e·ē, σ) ∈ ⟦t₁⟧ ·⃖ ⟦t₂⟧ (by definition of ·⃖)

We now prove soundness. To prove it, we show that the composition of the abstract semantics ⟦t₁⟧ and ⟦t₂⟧ through the ·⃖ operator is included in the abstract semantics of the corresponding concatenation, ⟦t₁·t₂⟧. The resulting proof is split into four separate cases. In the first one (⟹₁), the trace belonging to ⟦t₁⟧ is infinite; the proof is based on the fact that an infinite trace concatenated with another trace is always equal to itself. In all the other cases, the proof can be fully derived by a direct application of the operational semantics.
(e·ē, σ) ∈ ⟦t₁⟧ ·⃖ ⟦t₂⟧
⟹ (e·ē) ∈ ⟦t₁⟧↓ω ∨ (e·ē = ē₁·ē₂ ∧ (ē₁, σ₁) ∈ ⟦t₁⟧ ∧ (ē₂, σ₂) ∈ ⟦t₂⟧ ∧ σ = σ₁ ∪ σ₂ ∧ (ē₂ = λ ∨ (ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦t₁⟧))) (by definition of ·⃖)
⟹ (e·ē) ∈ ⟦t₁⟧↓ω ∨ (ē₁ = λ ∧ (λ, ∅) ∈ ⟦t₁⟧ ∧ (e·ē, σ) ∈ ⟦t₂⟧ ∧ e ⋫ ⟦t₁⟧) ∨ (ē₂ = λ ∧ (e·ē) ∈ ⟦t₁⟧ ∧ (λ, ∅) ∈ ⟦t₂⟧) ∨ (ē₁ = e·ē₁′ ∧ ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦t₁⟧ ∧ (e·ē₁′, σ₁) ∈ ⟦t₁⟧ ∧ (e′·ē₂′, σ₂) ∈ ⟦t₂⟧ ∧ σ = σ₁ ∪ σ₂)

(e·ē, σ) ∈ ⟦t₁⟧↓ω
⟹ (e·ē, σ) ∈ ⟦t₁⟧ ∧ ē infinite (by definition of ↓ω)
⟹ (e·ē, σ̄) ∈ ⟦t₁⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) ∧ ē infinite (by definition of ⟦t⟧)
⟹ t₁ −e→ t₁′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t₁′⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) ∧ ē infinite (by definition of ⟦t⟧c)
⟹ t₁ −e→ t₁′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′(t₁′·t₂)⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) ∧ ē infinite (by Lemma 4.8)
⟹ t₁·t₂ −e→ t₁′·t₂; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′(t₁′·t₂)⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) ∧ ē infinite (by the operational semantics)
⟹ (e·ē, σ̄) ∈ ⟦t₁·t₂⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) (by definition of ⟦t⟧c)
⟹ (e·ē, σ) ∈ ⟦t₁·t₂⟧ (by definition of ⟦t⟧)

(ē₁ = λ ∧ (λ, ∅) ∈ ⟦t₁⟧ ∧ (e·ē, σ) ∈ ⟦t₂⟧ ∧ e ⋫ ⟦t₁⟧)
⟹ ⊢ E(t₁) is derivable ∧ (e·ē, σ) ∈ ⟦t₂⟧ ∧ e ⋫ ⟦t₁⟧ (by definition of ⟦t⟧)
⟹ ⊢ E(t₁) is derivable ∧ t₂ −e→ t₂′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t₂′⟧c ∧ e ⋫ ⟦t₁⟧ ∧ (e·ē, σ̄) ⇝ (e·ē, σ) (by definition of ⟦t⟧c)
⟹ t₁·t₂ −e→ t₂′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t₂′⟧c (by the operational semantics)
⟹ (e·ē, σ̄) ∈ ⟦t₁·t₂⟧c ∧ (e·ē, σ̄) ⇝ (e·ē, σ) (by definition of ⟦t⟧c)
⟹ (e·ē, σ) ∈ ⟦t₁·t₂⟧ (by definition of ⟦t⟧)

(ē₂ = λ ∧ (e·ē) ∈ ⟦t₁⟧ ∧ (λ, ∅) ∈ ⟦t₂⟧)
⟹ (e·ē, σ) ∈ ⟦t₁·t₂⟧ (by Lemma 4.10)

(ē₁ = e·ē₁′ ∧ ē₂ = e′·ē₂′ ∧ ē₁·e′ ⋫ ⟦t₁⟧ ∧ (e·ē₁′, σ₁) ∈ ⟦t₁⟧ ∧ (e′·ē₂′, σ₂) ∈ ⟦t₂⟧ ∧ σ = σ₁ ∪ σ₂)
⟹ t₁ −e→ t₁′; σ₁′ is derivable ∧ (ē₁, σ̄₁) ∈ ⟦t₁⟧c ∧ (ē₁, σ̄₁) ⇝ (ē₁, σ₁) ∧ t₂ −e′→ t₂′; σ₂′ is derivable ∧ (ē₂, σ̄₂) ∈ ⟦t₂⟧c ∧ (ē₂, σ̄₂) ⇝ (ē₂, σ₂) ∧ the term reached from t₁ after consuming ē₁ cannot consume e′ ∧ σ = σ₁ ∪ σ₂ (by the operational semantics)
⟹ (ē₁·ē₂, σ̄₁·σ̄₂) ∈ ⟦t₁·t₂⟧c (by Lemma 4.11)

Compositionality, determinism and event-based semantics are central topics in concurrent systems. Winskel introduced the notion of event structure [43] to model computational processes as sets of event occurrences together with relations representing their causal dependencies. Vaandrager [42] proved that for concurrent deterministic systems it is sufficient to observe the beginning and end of events to derive their causal structure. Lynch and Tuttle introduced input/output automata [35] to model concurrent and distributed discrete event systems with a trace semantics consisting of both finite and infinite sequences of actions.

The rest of this section describes some of the main RV techniques and state-of-the-art tools and compares them with RML; more comprehensive surveys on RV can be found in the literature [24, 29, 34, 40, 25, 12, 30], which also mention formalisms for parameterised runtime verification that have deliberately not been presented here for space limitations.
Monitor-oriented programming:
Similarly to RML, which does not depend on the monitored system and its instrumentation, other proposals introduce different levels of separation of concerns.
Monitor-oriented programming (MOP [18]) is an infrastructure for RV that is tied neither to a particular programming language nor to a single specification language. In order to add support for new logics, one has to develop an appropriate plug-in converting specifications to one of the formats supported by the MOP instance of the language of choice; the main formalisms implemented in existing MOP instances include finite state machines, extended regular expressions, context-free grammars and temporal logics. Finite state machines (or, equivalently, regular expressions) can be easily translated to RML; they have limited expressiveness, but are widely used in RV because they are well understood among software developers, as opposed to other more sophisticated approaches, such as temporal logics. Extended regular expressions include intersection and complement; although such operators allow users to write more compact specifications, they do not increase the formal expressive power, since regular languages are closed under both. Deterministic Context-Free grammars (that is, deterministic pushdown automata) can be translated into RML using recursion, concatenation, union, and the empty trace, while the relationship with Context-Free grammars (that is, pushdown automata) has not been fully investigated yet; as stated in the introduction, RML can express several non Context-Free properties, hence RML cannot be less expressive than Context-Free grammars, but we do not know whether Context-Free grammars are less expressive than RML.
Temporal logics:
Since RV has its roots in model checking, it is not surprising that logic-based formalisms previously introduced in the context of the latter have been applied to the former.
Linear Temporal Logic (LTL) [37] is one of the most used formalisms in verification. Since the standard semantics of LTL is defined on infinite traces only, and RV monitors can only check finite trace prefixes (as opposed to static formal verification), a three-valued semantics for LTL, named
LTL₃, has been proposed [14]. Beyond the basic "true" and "false" truth values, a third "inconclusive" one is considered (the LTL specification syntax is unchanged; only the semantics is modified to take the new value into account). This allows one to distinguish the satisfaction ("true") or violation ("false") of the desired property from the lack of sufficient evidence among the events observed so far ("inconclusive"), making this semantics more suited to RV. Differently from LTL, the semantics of LTL₃ is defined on finite prefixes, making it more suitable for comparison with other RV formalisms. Further development of LTL₃ led to RV-LTL [13], a 4-valued semantics on which RML monitor verdicts are based.

The expressive power of LTL is the same as that of star-free ω-regular languages [38]. When restricted to finite traces, RML is much more expressive than LTL, as any regular expression can be trivially translated to it; however, on infinite traces, the comparison is more intricate, since RML and LTL have incomparable expressiveness [8]. There exist many extensions of LTL that deal with time in a more quantitative way (as opposed to the strictly qualitative approach of standard LTL) without increasing the expressive power, like interval temporal logic [17], metric temporal logic [41] and timed LTL [14]. Other proposals go beyond regularity [3] and even context-free languages [15].

Several temporal logics are embeddable in recHML [33], a variant of the modal µ-calculus [32]; this allows the formal study of monitorability [1] in a general framework, deriving results for free about any formalism that can be expressed in such calculi. It would be interesting to study whether the RML trace calculus could be embedded in recHML, to obtain theoretical results that are missing from this presentation. Unfortunately, it is not clear whether our calculus and recHML are comparable at all.
For instance, recHML is a fixed-point logic including both least and greatest fixpoint operators, while our calculus implicitly uses a greatest fixpoint semantics for recursion. Moreover, recHML does not include a shuffle operator, and we are not aware of a way to derive it from the other operators.

Regardless of the formal expressiveness, RML and temporal logics are essentially different: RML is closer to formalisms with which software developers are more familiar, such as regular expressions and Context-Free languages, but does not offer direct support for time; however, if the instrumentation provides timestamps, then some time-related properties can still be expressed by exploiting parametricity.

State machines:
As opposed to language-based approaches such as RML, specifications can be defined using state machines (a.k.a. automata or finite-state machines). Though the core concept of a finite set of states and a (possibly input-driven) transition function between them is always there, in the field of automata theory different formalizations and extensions bring the expressiveness anywhere from simple deterministic finite automata to Turing machines. An example of such formalisms is
DATE (Dynamic Automata with Timers and Events [20]), an extension of the finite-state automata computational model based on communicating automata with timers and transitions triggered by observed events. This is the basis of
LARVA [21], a Java RV tool focused on control-flow and real-time properties, exploiting the expressiveness of the underlying formalism (DATE). The main feature of LARVA that is missing in RML is the support for temporized properties, as observed events can trigger timers for other expected events. On the other hand, the parametric verification support of RML is more general. LARVA's scope mechanism works at the object level, thus parametricity is based on trace slicing [30] and implemented by spawning new monitors and associating them with different objects. The RML approach is different, as specifications can be parametric with respect to any observed data, thanks to event type patterns and the let-construct to control the scope of the variables occurring in them. Limitations of the parametric trace slicing approach described above, as well as possible generalizations to overcome them, have been explored in [19, 11, 39]. Finally, the goals of the two tools are different: while RML strives to be system-independent, LARVA is devoted to Java verification, and its implementation relies on AspectJ [31] as an "instrumentation" layer allowing one to inject code (the monitor) to be executed at specific locations in the program.
We have taken a first step towards a compositional semantics of the RML trace calculus, by introducing the notion of instantiated event trace, defining the semantics of trace expressions in terms of sets of instantiated event traces, and showing how each basic trace expression operator can be interpreted as an operation over sets of instantiated event traces; we have formally proved that such an interpretation is equivalent to the semantics derived from the transition system of the calculus if one considers only contractive terms.

For simplicity, here we have considered only the core of the calculus, but we plan to extend our result to the full calculus, which also includes the prefix closure operator and a top-level layer with constructs to support generic specifications [27]. Another interesting direction for further investigation consists in studying how the notion of contractivity influences the expressive power of the calculus and, hence, of RML; although we have so far failed to find a non-contractive term whose semantics is not equivalent to that of a corresponding contractive trace expression, we have not formally proved that contractivity does not limit the expressive power of the calculus.
References [1] L. Aceto, A. Achilleos, A. Francalanza, A. Ing´olfsd´ottir & K. Lehtinen (2019):
Adventures in Monitorability:From Branching to Linear Time and Back Again . Proc.ACMProgram.Lang. 3(POPL), pp. 52:1–52:29.[2] Wolfgang Ahrendt, Jes´us Mauricio Chimento, Gordon J. Pace & Gerardo Schneider (2017):
Verifying data-and control-oriented properties combining static and runtime verification: theory and tools . FormalMethodsinSystemDesign 51(1), pp. 200–265.[3] Rajeev Alur, Kousha Etessami & P. Madhusudan (2004):
A Temporal Logic of Nested Calls and Returns . In:ToolsandAlgorithmsfortheConstructionandAnalysisofSystems,10thInternationalConference,TACAS2004, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2004,Barcelona,Spain,March29-April2,2004,Proceedings, pp. 467–481.[4] Davide Ancona, Viviana Bono, Mario Bravetti, Joana Campos, Giuseppe Castagna, Pierre-Malo Deni´elou,Simon J. Gay, Nils Gesbert, Elena Giachino, Raymond Hu, Einar Broch Johnsen, Francisco Martins, Vi-viana Mascardi, Fabrizio Montesi, Rumyana Neykova, Nicholas Ng, Luca Padovani, Vasco T. Vasconcelos& Nobuko Yoshida (2016):
Behavioral Types in Programming Languages . Foundationsand Trendsin Pro-grammingLanguages3(2-3), pp. 95–230.[5] Davide Ancona & Andrea Corradi (2014):
Sound and Complete Subtyping between Coinductive Types forObject-Oriented Languages . In: ECOOP2014, pp. 282–307.[6] Davide Ancona & Andrea Corradi (2016):
Semantic subtyping for imperative object-oriented languages . In:OOPSLA2016, pp. 568–587.[7] Davide Ancona, Sophia Drossopoulou & Viviana Mascardi (2012):
Automatic Generation of Self-monitoringMASs from Multiparty Global Session Types in Jason . In: Declarative Agent Languages and TechnologiesX - 10th InternationalWorkshop, DALT 2012, Valencia, Spain, June 4, 2012, Revised Selected Papers, pp.76–95.[8] Davide Ancona, Angelo Ferrando & Viviana Mascardi (2016):
Comparing Trace Expressions and LinearTemporal Logic for Runtime Verification . In: TheoryandPracticeofFormalMethods-EssaysDedicatedtoFrankdeBoerontheOccasionofHis60thBirthday, pp. 47–64.[9] Davide Ancona, Angelo Ferrando & Viviana Mascardi (2017):
Parametric Runtime Verification of MultiagentSystems . In: Proceedingsofthe16thConferenceonAutonomousAgentsandMultiAgentSystems,AAMAS2017,S˜aoPaulo,Brazil,May8-12,2017, pp. 1457–1459.[10] Davide Ancona, Luca Franceschini, Angelo Ferrando & Viviana Mascardi (2019):
A Deterministic EventCalculus for Effective Runtime Verification . In Alessandra Cherubini, Nicoletta Sabadini & Simone Tini,editors: Proceedingsof the 20thItalian Conferenceon TheoreticalComputerScience, ICTCS 2019, Como,Italy,September9-11,2019, CEURWorkshopProceedings2504, CEUR-WS.org, pp. 248–260.[11] Howard Barringer, Yli`es Falcone, Klaus Havelund, Giles Reger & David E. Rydeheard (2012):
QuantifiedEvent Automata: Towards Expressive and Efficient Runtime Monitors . In: FM2012: FormalMethods-18thInternationalSymposium,Paris,France,August27-31,2012.Proceedings, pp. 68–84.[12] Ezio Bartocci, Yli`es Falcone, Adrian Francalanza & Giles Reger (2018):
Introduction to Runtime Verification .In: LecturesonRuntimeVerification-IntroductoryandAdvancedTopics, pp. 1–33.[13] Andreas Bauer, Martin Leucker & Christian Schallhart (2007):
The Good, the Bad, and the Ugly, But HowUgly Is Ugly?
In Oleg Sokolsky & Serdar Tas¸ıran, editors: RuntimeVerification, Springer Berlin Heidelberg,Berlin, Heidelberg, pp. 126–138.[14] Andreas Bauer, Martin Leucker & Christian Schallhart (2011):
Runtime verification for LTL and TLTL . ACMTransactionsonSoftwareEngineeringandMethodology(TOSEM) 20(4), pp. 14:1–14:64.[15] Benedikt Bollig, Normann Decker & Martin Leucker (2012):
Frequency Linear-time Temporal Logic . In:SixthInternationalSymposiumonTheoreticalAspectsofSoftwareEngineering,TASE2012,4-6July2012,Beijing,China, pp. 85–92. .Ancona, A.Ferrando, and V.Mascardi 19 [16] G. Castagna, M. Dezani-Ciancaglini & L. Padovani (2012):
On Global Types and Multi-Party Session . Log-icalMethodsinComputerScience 8(1).[17] Antonio Cau & Hussein Zedan (1997):
Refining Interval Temporal Logic Specifications. In: Transformation-Based Reactive Systems Development, 4th International AMAST Workshop on Real-Time Systems and Concurrent and Distributed Software, ARTS'97, Palma, Mallorca, Spain, May 21-23, 1997, Proceedings, pp. 79–94.
[18] Feng Chen & Grigore Rosu (2007): Mop: an efficient and generic runtime verification framework. In: Proceedings of the 22nd Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2007, October 21-25, 2007, Montreal, Quebec, Canada, pp. 569–588.
[19] Feng Chen & Grigore Rosu (2009): Parametric Trace Slicing and Monitoring. In: Tools and Algorithms for the Construction and Analysis of Systems, 15th International Conference, TACAS 2009, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009, York, UK, March 22-29, 2009, Proceedings, pp. 246–261.
[20] Christian Colombo, Gordon J. Pace & Gerardo Schneider (2008): Dynamic Event-Based Runtime Monitoring of Real-Time and Contextual Properties. In: Formal Methods for Industrial Critical Systems, 13th International Workshop, FMICS 2008, L'Aquila, Italy, September 15-16, 2008, Revised Selected Papers, pp. 135–149.
[21] Christian Colombo, Gordon J. Pace & Gerardo Schneider (2009): LARVA – Safer Monitoring of Real-Time Java Programs. In: SEFM 2009, pp. 33–37.
[22] Bruno Courcelle (1983): Fundamental Properties of Infinite Trees. Theor. Comput. Sci. 25, pp. 95–169.
[23] James C. Davis, Christy A. Coghlan, Francisco Servant & Dongyoon Lee (2018): The impact of regular expression denial of service (ReDoS) in practice: an empirical study at the ecosystem scale. In: Proceedings of the 2018 ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04-09, 2018, pp. 246–256.
[24] Nelly Delgado, Ann Q. Gates & Steve Roach (2004): A Taxonomy and Catalog of Runtime Software-Fault Monitoring Tools. IEEE Trans. Software Eng. 30(12), pp. 859–872.
[25] Yliès Falcone, Klaus Havelund & Giles Reger (2013): A Tutorial on Runtime Verification. In: Engineering Dependable Software Systems, pp. 141–175.
[26] Yliès Falcone, Srdan Krstic, Giles Reger & Dmitriy Traytel (2018): A Taxonomy for Classifying Runtime Verification Tools. In: Runtime Verification - 18th International Conference, RV 2018, Limassol, Cyprus, November 10-13, 2018, Proceedings, pp. 241–262.
[27] Luca Franceschini (March 2020):
RML: Runtime Monitoring Language. Ph.D. thesis, DIBRIS, University of Genova. Available at http://hdl.handle.net/11567/1001856.
[28] A. Frisch, G. Castagna & V. Benzaken (2008): Semantic subtyping: Dealing set-theoretically with function, union, intersection, and negation types. J. ACM 55(4).
[29] Klaus Havelund & Allen Goldberg (2005): Verify Your Runs. In: Verified Software: Theories, Tools, Experiments, First IFIP TC 2/WG 2.3 Conference, VSTTE 2005, Zurich, Switzerland, October 10-13, 2005, Revised Selected Papers and Discussions, pp. 374–383.
[30] Klaus Havelund, Giles Reger, Daniel Thoma & Eugen Zalinescu (2018): Monitoring Events that Carry Data. In: Lectures on Runtime Verification - Introductory and Advanced Topics, pp. 61–102, doi:10.1007/978-3-319-75632-5_3.
[31] Gregor Kiczales, Erik Hilsdale, Jim Hugunin, Mik Kersten, Jeffrey Palm & William G. Griswold (2001): An Overview of AspectJ. In: ECOOP 2001 - Object-Oriented Programming, 15th European Conference, Budapest, Hungary, June 18-22, 2001, Proceedings, pp. 327–353.
[32] Dexter Kozen (1983): Results on the Propositional mu-Calculus. Theor. Comput. Sci. 27, pp. 333–354.
[33] Kim Guldstrand Larsen (1990): Proof Systems for Satisfiability in Hennessy-Milner Logic with Recursion. Theor. Comput. Sci. 72(2&3), pp. 265–288.
[34] Martin Leucker & Christian Schallhart (2009): A brief account of runtime verification. The Journal of Logic and Algebraic Programming 78(5), pp. 293–303.
[35] Nancy A. Lynch & Mark R. Tuttle (1987): Hierarchical Correctness Proofs for Distributed Algorithms. In Fred B. Schneider, editor: Proceedings of the Sixth Annual ACM Symposium on Principles of Distributed Computing, Vancouver, British Columbia, Canada, August 10-12, 1987, ACM, pp. 137–151, doi:10.1145/41840.41852.
[36] R. C. Moore (2000):
Removing left recursion from context-free grammars. In: NAACL 2000, Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference.
[37] Amir Pnueli (1977): The temporal logic of programs. In: 18th Annual Symposium on Foundations of Computer Science, 1977, IEEE, pp. 46–57.
[38] Amir Pnueli & Lenore D. Zuck (1993): In and Out of Temporal Logic. In: Proceedings of the Eighth Annual Symposium on Logic in Computer Science (LICS '93), Montreal, Canada, June 19-23, 1993, pp. 124–135.
[39] Giles Reger, Helena Cuenca Cruz & David E. Rydeheard (2015): MarQ: Monitoring at Runtime with QEA. In: Tools and Algorithms for the Construction and Analysis of Systems - 21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings, pp. 596–610.
[40] Oleg Sokolsky, Klaus Havelund & Insup Lee (2012): Introduction to the special section on runtime verification. STTT 14(3), pp. 243–247.
[41] Prasanna Thati & Grigore Rosu (2005): Monitoring Algorithms for Metric Temporal Logic Specifications. Electr. Notes Theor. Comput. Sci. 113, pp. 145–162.
[42] Frits W. Vaandrager (1991): Determinism → (Event Structure Isomorphism = Step Sequence Equivalence). Theor. Comput. Sci. 79(2), pp. 275–294, doi:10.1016/0304-3975(91)90333-W.
[43] Glynn Winskel (1986): Event Structures. In Wilfried Brauer, Wolfgang Reisig & Grzegorz Rozenberg, editors: Petri Nets: Central Models and Their Properties, Advances in Petri Nets 1986, Part II, Proceedings of an Advanced Course, Bad Honnef, Germany, 8-19 September 1986, Lecture Notes in Computer Science 255, Springer, pp. 325–392, doi:10.1007/3-540-17906-2_31.
A Appendix
Proof of Proposition 2.1
Since f(n1) < f(n2) implies f(n1) ≠ f(n2), and f is a function, we can deduce that n1 ≠ n2, therefore n1 < n2 or n1 > n2. By contradiction, let us assume that n1 > n2; since f is strictly increasing, we have f(n1) > f(n2), which is not possible because f(n1) < f(n2) by hypothesis.

Proof of Proposition 2.2
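As a sanity check on the statement proved below (illustrative only, not part of the proof), the following sketch assumes our reconstruction of the conditions: the domain of f is a downward-closed initial segment of ℕ (condition 2), every element of the image set is reached (condition 1), and condition 3 makes the image predecessor-closed. A brute-force search over small strictly increasing maps then confirms that every map satisfying these conditions is the identity on its domain.

```python
from itertools import combinations

def predecessor_closed(image):
    # reconstructed condition 3: if m + 1 is in the set, so is m
    return all(m - 1 in image for m in image if m >= 1)

def strictly_increasing_maps(k, bound):
    # all strictly increasing f : {0, ..., k-1} -> {0, ..., bound-1};
    # the domain is downward closed by construction (condition 2)
    for values in combinations(range(bound), k):
        yield dict(enumerate(values))

for k in range(1, 6):
    for f in strictly_increasing_maps(k, 8):
        if predecessor_closed(set(f.values())):
            # under the reconstructed conditions, f must be the identity
            assert all(f[n] == n for n in f)
print("every surviving map is the identity")
```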
We prove the claim by induction over N.
• basis: let us assume that 0 ∈ dom(f) and, by contradiction, f(0) = m + 1 with m ∈ ℕ; hence, m + 1 ∈ N and, by condition 3, m ∈ N and, by condition 1, there exists n′ ∈ dom(f) s.t. f(n′) = m; since f(n′) = m < m + 1 = f(0), by Proposition 2.1 we deduce the contradiction n′ < 0.
• induction step: let us assume that n + 1 ∈ dom(f); then, by condition 2, n ∈ dom(f) and, by induction hypothesis, f(n) = n. Since f is strictly increasing, n = f(n) < f(n + 1); by contradiction, let us assume that f(n + 1) = m + 1 with m + 1 > n + 1. Hence, m + 1 ∈ N and, by condition 3, m ∈ N and, by condition 1, there exists n′ ∈ dom(f) s.t. f(n′) = m. Since f is strictly increasing, f(n) = n, f(n′) = m, f(n + 1) = m + 1, and n < m < m + 1, by Proposition 2.1 we can deduce the contradiction n < n′ < n + 1.

Proof of Proposition 3.1
By induction on |ē|.

Proof of Proposition 3.2
By induction on n one can prove that for all n ∈ ℕ, σ̄ has a prefix of length n.

Proof of Proposition 3.3
By induction on n + m, using Lemma 3.1 and the fact that fv(σt) ∩ dom(σ) = ∅.

Proof of Proposition 3.4
By induction on n, using Lemma 3.1 and the fact that fv(σt) ⊆ fv(t).

Proof of Lemma 4.10
Let ē = e1 · … · en be a finite sequence of events; we can expand (ē, σ̄) ∈ ⟦t⟧c into t −e1→ t1; σ1, t1 −e2→ t2; σ2, …, tn−1 −en→ tn; σn, with σ̄ = σ1 · … · σn and E(tn) derivable. Since E(t′) is derivable, by definition of E(−), also E(tn · t′) is derivable. By the operational semantics, if ti−1 −ei→ ti; σi, then ti−1 · t′ −ei→ ti · t′; σi, so we can rewrite the previous sequence as t · t′ −e1→ t1 · t′; σ1, t1 · t′ −e2→ t2 · t′; σ2, …, tn−1 · t′ −en→ tn · t′; σn, concluding that (ē, σ̄) ∈ ⟦t · t′⟧c.

Proof of Lemma 4.11
By the operational semantics, we can rewrite each transition ti−1 −ei→ ti; σi as ti−1 · t′ −ei→ ti · t′; σi. Since (ē, σ) ∈ ⟦t⟧, we know E(tn) (by definition of ⟦t⟧). Since we assume that tn −e→ is not derivable and that t′ −e→ t″; σ″, we infer tn · t′ −e→ t″; σ″ (by operational semantics). Consequently, t · t′ −e1→ t1 · t′; σ1, …, tn−1 · t′ −en→ tn · t′; σn, tn · t′ −e→ t″; σ″, with σ = σ1 ∪ … ∪ σn and (ē′, σ′) ∈ ⟦t′⟧. Following the definition of ⟦t⟧, since σ ∪ σ′ exists, we can infer (e1 · … · en · e · ē′, σ ∪ σ′) ∈ ⟦t · t′⟧, concluding that (ē, σ) ∈ ⟦t · t′⟧ for the combined trace and substitution.

Proof of Union
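Before the derivation, the left-biased union manipulated below can be phrased directly on sets of instantiated event traces. The following is an illustrative finite model (the name `lunion` and the pair encoding are ours, not the paper's): a pair with a non-empty trace e · ē is taken from the right operand only when the left operand has no trace starting with e, mirroring the side condition e ⊳ ⟦t1⟧.

```python
def lunion(s1, s2):
    """Left-biased union of two sets of (trace, substitution) pairs.
    trace: tuple of events; substitution: frozenset of (var, value) pairs.
    A pair from s2 survives only if s1 has no trace starting with the same
    first event (the empty trace is taken unconditionally)."""
    firsts = {trace[0] for trace, _ in s1 if trace}
    return s1 | {(trace, sub) for trace, sub in s2
                 if not trace or trace[0] not in firsts}

s1 = {(("a", "b"), frozenset()), ((), frozenset())}
s2 = {(("a", "c"), frozenset()), (("d",), frozenset())}
# ("a", "c") is shadowed: s1 already has a trace starting with "a".
print(lunion(s1, s2))
```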
To prove that ⟦t1 ∨ t2⟧ = ⟦t1⟧ ←∨ ⟦t2⟧, we have to show that ⟦t1 ∨ t2⟧ ⊆ ⟦t1⟧ ←∨ ⟦t2⟧ and ⟦t1⟧ ←∨ ⟦t2⟧ ⊆ ⟦t1 ∨ t2⟧. Let us start with the first inclusion, which can be reformulated as: for all (ē, σ) ∈ ⟦t1 ∨ t2⟧, (ē, σ) ∈ ⟦t1⟧ ←∨ ⟦t2⟧. We split the proof over the structure of the trace.

When the trace is empty, we derive:
(λ, ∅) ∈ ⟦t1 ∨ t2⟧
⟹ (λ, λ) ∈ ⟦t1 ∨ t2⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧)
⟹ E(t1 ∨ t2) (by definition of ⟦t⟧c)
⟹ E(t1) ∨ E(t2) (by definition of E(t))
⟹ (λ, λ) ∈ ⟦t1⟧c ∨ (λ, λ) ∈ ⟦t2⟧c (by definition of ⟦t⟧c)
⟹ ((λ, λ) ∈ ⟦t1⟧c ∧ (λ, λ) ⇝ (λ, ∅)) ∨ ((λ, λ) ∈ ⟦t2⟧c ∧ (λ, λ) ⇝ (λ, ∅)) (by definition of ⟦t⟧c)
⟹ (λ, ∅) ∈ ⟦t1⟧ ∨ (λ, ∅) ∈ ⟦t2⟧ (by definition of ⟦t⟧)
⟹ (λ, ∅) ∈ ⟦t1⟧ ←∨ ⟦t2⟧ (by definition of ←∨)

When the trace is not empty, we derive:
(e · ē, σ) ∈ ⟦t1 ∨ t2⟧
⟹ (e · ē, σ̄) ∈ ⟦t1 ∨ t2⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ) (by definition of ⟦t⟧)
⟹ t1 ∨ t2 −e→ t′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by definition of ⟦t⟧c)
⟹ (t1 −e→ t′; σ′ is derivable ∨ (t1 −e→ is not derivable ∧ t2 −e→ t′; σ′ is derivable)) ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by operational semantics)
⟹ ((e · ē, σ̄) ∈ ⟦t1⟧c) ∨ (e ⊳ ⟦t1⟧ ∧ (e · ē, σ̄) ∈ ⟦t2⟧c) (by definition of ⟦t⟧c)
⟹ (e · ē, σ) ∈ ⟦t1⟧ ∨ (e · ē, σ) ∈ ⟦t2⟧ (by definition of ⟦t⟧)
⟹ (e · ē, σ) ∈ ⟦t1⟧ ←∨ ⟦t2⟧ (by definition of ←∨)

Now we show the other inclusion, i.e., for all (ē, σ) ∈ ⟦t1⟧ ←∨ ⟦t2⟧, (ē, σ) ∈ ⟦t1 ∨ t2⟧.
We split the proof over the structure of the trace again.

When the trace is empty, we derive:
(λ, ∅) ∈ ⟦t1⟧ ←∨ ⟦t2⟧
⟹ (λ, ∅) ∈ ⟦t1⟧ ∨ (λ, ∅) ∈ ⟦t2⟧ (by definition of ←∨)
⟹ E(t1) is derivable ∨ E(t2) is derivable (by definition of ⟦t⟧)
⟹ E(t1 ∨ t2) is derivable (by definition of E(t))
⟹ (λ, ∅) ∈ ⟦t1 ∨ t2⟧ (by definition of ⟦t⟧)

When the trace is not empty, we derive:
(e · ē, σ) ∈ ⟦t1⟧ ←∨ ⟦t2⟧
⟹ (e · ē, σ) ∈ ⟦t1⟧ ∨ ((e · ē, σ) ∈ ⟦t2⟧ ∧ e ⊳ ⟦t1⟧) (by definition of ←∨)
⟹ ((e · ē, σ̄) ∈ ⟦t1⟧c ∨ ((e · ē, σ̄) ∈ ⟦t2⟧c ∧ e ⊳ ⟦t1⟧c)) ∧ (e · ē, σ̄) ⇝ (e · ē, σ) (by definition of ⟦t⟧)
⟹ (t1 −e→ t′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c) ∨ (t2 −e→ t′; σ′ is derivable ∧ t1 −e→ is not derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c) (by definition of ⟦t⟧c)
⟹ t1 ∨ t2 −e→ t′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by operational semantics)
⟹ (e · ē, σ̄) ∈ ⟦t1 ∨ t2⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ) (by definition of ⟦t⟧c)
⟹ (e · ē, σ) ∈ ⟦t1 ∨ t2⟧ (by definition of ⟦t⟧)

Proof of Intersection
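Unlike union, the derivation below shows that intersection needs no left bias: at the set level it is plain intersection of the two denotations. A minimal sketch in the same finite encoding as before (names ours, not the paper's):

```python
def inter(a, b):
    """Set-level counterpart of t1 ∧ t2: a pair (trace, substitution) is
    accepted iff both operands accept the same trace with the same
    substitution."""
    return a & b

a = {(("a",), frozenset({("x", 1)})), (("b",), frozenset())}
b = {(("a",), frozenset({("x", 1)})), (("a",), frozenset({("x", 2)}))}
# only the pair agreeing on both trace and substitution survives
print(inter(a, b))
```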
To prove that ⟦t1 ∧ t2⟧ = ⟦t1⟧ ∧ ⟦t2⟧, we have to show (ē, σ) ∈ ⟦t1 ∧ t2⟧ ⟺ (ē, σ) ∈ ⟦t1⟧ ∧ ⟦t2⟧.

(λ, ∅) ∈ ⟦t1 ∧ t2⟧
⟺ (λ, λ) ∈ ⟦t1 ∧ t2⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧)
⟺ E(t1 ∧ t2) is derivable (by definition of ⟦t⟧c)
⟺ E(t1) is derivable ∧ E(t2) is derivable (by definition of E(t))
⟺ (λ, λ) ∈ ⟦t1⟧c ∧ (λ, λ) ∈ ⟦t2⟧c (by definition of ⟦t⟧c)
⟺ (λ, ∅) ∈ ⟦t1⟧ ∧ (λ, ∅) ∈ ⟦t2⟧ (by definition of ⟦t⟧)
⟺ (λ, ∅) ∈ ⟦t1⟧ ∧ ⟦t2⟧ (by definition of ∧)

(e · ē, σ) ∈ ⟦t1 ∧ t2⟧
⟺ (e · ē, σ̄) ∈ ⟦t1 ∧ t2⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ) (by definition of ⟦t⟧)
⟺ t1 −e→ t1′; σ′ is derivable ∧ t2 −e→ t2′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by definition of ⟦t⟧c and operational semantics)
⟺ (e · ē, σ̄) ∈ ⟦t1⟧c ∧ (e · ē, σ̄) ∈ ⟦t2⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ)
⟺ (e · ē, σ) ∈ ⟦t1⟧ ∧ (e · ē, σ) ∈ ⟦t2⟧ (by definition of ⟦t⟧)
⟺ (e · ē, σ) ∈ ⟦t1⟧ ∧ ⟦t2⟧ (by definition of ∧)

Proof of Variable deletion
To prove that ⟦{let x; t}⟧ = ⟦t⟧\x, we have to show (ē, σ) ∈ ⟦{let x; t}⟧ ⟺ (ē, σ) ∈ ⟦t⟧\x.

(λ, ∅) ∈ ⟦{let x; t}⟧
⟺ (λ, λ) ∈ ⟦{let x; t}⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧)
⟺ E({let x; t}) is derivable (by definition of ⟦t⟧c)
⟺ E(t) is derivable (by definition of E(t))
⟺ (λ, λ) ∈ ⟦t⟧c ∧ (λ, λ) ⇝ (λ, ∅) (by definition of ⟦t⟧c)
⟺ (λ, ∅) ∈ ⟦t⟧ (by definition of ⟦t⟧)
⟺ (λ, ∅) ∈ ⟦t⟧\x (by definition of \x)

(e · ē, σ) ∈ ⟦{let x; t}⟧
⟹ (e · ē, σ̄) ∈ ⟦{let x; t}⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ) (by definition of ⟦t⟧)
⟹ {let x; t} −e→ t′; σ′ is derivable ∧ (ē, σ̄′) ∈ ⟦σ′t′⟧c (by definition of ⟦t⟧c)
⟹ t −e→ t″; σ″ is derivable ∧ ((x ∈ dom(σ″) ∧ {let x; t} −e→ σ″|x t″; σ″\x is derivable ∧ (ē, σ̄′) ∈ ⟦σ″\x σ″|x t″⟧c) ∨ (x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t″}; σ″ is derivable ∧ (ē, σ̄′) ∈ ⟦σ″{let x; t″}⟧c)) (by operational semantics)

In the first case (x ∈ dom(σ″)):
⟹ t −e→ t″; σ″ is derivable ∧ {let x; t} −e→ σ″|x t″; σ″\x is derivable ∧ (ē, σ̄′) ∈ ⟦σ″t″⟧c (by Lemma 4.3)
⟹ (e · ē, σ″ · σ̄′) ∈ ⟦t⟧c ∧ σ′ = σ″\x ∧ (e · ē, σ″ · σ̄′) ⇝ (e · ē, σ‴) ∧ (e · ē, σ′ · σ̄′) ⇝ (e · ē, σ) (by definition of ⟦t⟧c)
⟹ (e · ē, σ‴) ∈ ⟦t⟧ ∧ σ = σ‴\x (by Lemma 4.2)
⟹ (e · ē, σ) ∈ ⟦t⟧\x (by definition of \x)

In the second case (x ∉ dom(σ″)):
⟹ t −e→ t″; σ″ is derivable ∧ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t″}; σ″ is derivable ∧ (ē, σ̄′) ∈ ⟦{let x; σ″\x t″}⟧c (by definition of σ)
⟹ t −e→ t″; σ″ is derivable ∧ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t″}; σ″ is derivable ∧ (ē, σ̄″) ∈ ⟦σ″t″⟧c ∧ σ̄′ = σ̄″\x (by coinduction over ⟦t⟧c)
⟹ (e · ē, σ″ · σ̄″) ∈ ⟦t⟧c ∧ σ̄′ = σ̄″\x (by definition of ⟦t⟧c)
⟹ (e · ē, σ″ · σ̄″) ⇝ (e · ē, σ‴) ∧ σ = σ‴\x (by Lemma 4.2)
⟹ (e · ē, σ‴) ∈ ⟦t⟧ ∧ σ = σ‴\x (by definition of ⟦t⟧)
⟹ (e · ē, σ) ∈ ⟦t⟧\x (by definition of \x)

For the other direction:
(e · ē, σ) ∈ ⟦t⟧\x
⟹ (e · ē, σ′) ∈ ⟦t⟧ ∧ σ = σ′\x (by definition of \x)
⟹ (e · ē, σ̄) ∈ ⟦t⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ′) (by definition of ⟦t⟧)
⟹ t −e→ t′; σ″ is derivable ∧ (ē, σ̄′) ∈ ⟦σ″t′⟧c (by definition of ⟦t⟧c)
⟹ (x ∈ dom(σ″) ∧ {let x; t} −e→ σ″|x t′; σ″\x is derivable ∧ (ē, σ̄′) ∈ ⟦σ″t′⟧c) ∨ (x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ̄′) ∈ ⟦σ″t′⟧c) (by operational semantics)

In the first case (x ∈ dom(σ″)):
⟹ (e · ē, σ″\x · σ̄′) ∈ ⟦{let x; t}⟧c ∧ (e · ē, σ̄) ⇝ (e · ē, σ′) (by definition of ⟦t⟧c)
⟹ (e · ē, σ″\x · σ̄′) ∈ ⟦{let x; t}⟧c ∧ (e · ē, σ″\x · σ̄′\x) ⇝ (e · ē, σ′\x) (by Lemma 4.3)
⟹ (e · ē, σ) ∈ ⟦{let x; t}⟧ (by definition of ⟦t⟧)

In the second case (x ∉ dom(σ″)):
⟹ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ‴) ∈ ⟦σ″t′⟧ ∧ (ē, σ̄′) ⇝ (ē, σ‴) ∧ σ′ = σ″ ∪ σ‴ (by definition of ⟦t⟧)
⟹ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ‴\x) ∈ ⟦σ″t′⟧\x ∧ (ē, σ̄′) ⇝ (ē, σ‴) ∧ σ′ = σ″ ∪ σ‴ (by definition of \x)
⟹ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ‴\x) ∈ ⟦{let x; σ″t′}⟧ ∧ (ē, σ̄′) ⇝ (ē, σ‴) ∧ σ′ = σ″ ∪ σ‴ (by coinduction over ⟦t′⟧)
⟹ x ∉ dom(σ″) ∧ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ‴\x) ∈ ⟦σ″{let x; t′}⟧ ∧ (ē, σ̄′) ⇝ (ē, σ‴) ∧ σ′ = σ″ ∪ σ‴ (by definition of σ)
⟹ {let x; t} −e→ {let x; t′}; σ″ is derivable ∧ (ē, σ̄′\x) ∈ ⟦σ″{let x; t′}⟧c (by Lemma 4.2)
⟹ (e · ē, σ″ · σ̄′\x) ∈ ⟦{let x; t}⟧c (by definition of ⟦t⟧c)
⟹ (e · ē, σ̄\x) ∈ ⟦{let x; t}⟧c ∧ (e · ē, σ̄\x) ⇝ (e · ē, σ′\x) (by definition of σ̄ and Lemma 4.2)
⟹ (e · ē, σ′\x) ∈ ⟦{let x; t}⟧ (by definition of ⟦t⟧)
⟹ (e · ē, σ) ∈ ⟦{let x; t}⟧

Proof of Shuffle