Conditional Transition Systems with Upgrades
Harsh Beohar (Universität Duisburg-Essen), Barbara König (Universität Duisburg-Essen), Sebastian Küpper (Universität Duisburg-Essen), Alexandra Silva (University College London)
Abstract—We introduce a variant of transition systems, where activation of transitions depends on conditions of the environment and upgrades during runtime potentially create additional transitions. Using a cornerstone result in lattice theory, we show that such transition systems can be modelled in two ways: as conditional transition systems (CTS) with a partial order on conditions, or as lattice transition systems (LaTS), where transitions are labelled with the elements from a distributive lattice. We define equivalent notions of bisimilarity for both variants and characterise them via a bisimulation game. We explain how conditional transition systems are related to featured transition systems for the modelling of software product lines. Furthermore, we show how to compute bisimilarity symbolically via BDDs by defining an operation on BDDs that approximates an element of a Boolean algebra into a lattice. We have implemented our procedure and provide runtime results.
I. INTRODUCTION
Conditional transition systems (CTS) have been introduced in [1] as a model for systems whose behaviour is guarded by different conditions. Before an execution, a condition is chosen by the environment from a pre-defined set of conditions and, accordingly, the CTS is instantiated to a classical labelled transition system (LTS). In this work, we consider ordered sets of conditions which allow for a change of conditions during runtime. It is allowed to replace a condition by a smaller condition, called an upgrade. An upgrade activates additional transitions compared to the previous instantiation of the system.

(Research partially supported by DFG project BEMEGA and ERC Starting Grant ProFoundNet, grant agreement 679127.)

Our focus lies on formulating a notion of behavioural equivalence, called conditional bisimilarity, that is insensitive to changes in behaviour that may occur due to upgrades. Given two states, we want to determine under which conditions they are behaviourally equivalent. To compute this, we adopt a dual, but equivalent, view from lattice theory due to Birkhoff to represent a CTS by a lattice transition system (LaTS). In general, LaTSs are more compact than their CTS counterparts. Moreover, we also develop an efficient procedure based on matrix multiplication to compute conditional bisimilarity.

Such questions are relevant when we compare a system with its specification or want to modify a system in such a way that its observable behaviour is invariant. Furthermore, one requires minimisation procedures for transition systems that are potentially very large and need to be made more compact to be effectively used in analysis.

An application of CTSs with upgrades is to model systems that deteriorate over time. Consider a system that is dependent on components that break over time or require calibration, in particular sensor components. In such systems, due to inconsistent sensory data from a sensor losing its calibration, additional behaviour in a system may be enabled (which can be modelled as an upgrade) and chosen nondeterministically.

Another field of interest, which will be explored in more detail, is software product lines (SPLs). SPLs refer to a software engineering method for managing and developing a collection of similar software systems with common features. To ensure correctness of such systems in an efficient way, it is common to specify the behaviour of many products in a single transition system and provide suitable analysis methods based on model-checking or behavioural equivalences (see [3], [6]–[9], [12], [15], [17], [24]).

Featured transition systems (FTS) – a recent extension of conventional transition systems proposed by Classen et al. [7] – have become the standard formalism to model an SPL. An important issue usually missing in the theory of FTSs is the notion of self-adaptivity [11], i.e., the view that features or products are not fixed a priori, but may change during runtime. We will show that FTSs can be considered as CTSs without upgrades where the conditions are the powerset of the features. Additionally, we propose to incorporate a notion of upgrades into software product lines that cannot be captured by FTSs. Furthermore, we also consider deactivation of transitions in Appendix C, to which our techniques can easily be adapted, though some mathematical elegance is lost in the process.

Our contributions are as follows. First, we make the different levels of granularity – features, products and sets of products – in the specification of SPLs explicit and give a theoretical foundation in terms of Boolean algebras and lattices.
Second, we present a theory of behavioural equivalences with corresponding games and algorithms, and applications to conventional and adaptive SPLs. Third, we present our implementation based on binary decision diagrams (BDDs), which provide a compact encoding of propositional formulae, and also show how they can be employed in a lattice-based setting. Lastly, we show how a BDD-based matrix multiplication algorithm provides us with an efficient way to check bisimilarity relative to the naive approach of checking all products separately.

This paper is organised as follows. Section II recalls the fundamentals of lattice theory relevant to this paper. Then, in Section III we formally introduce CTSs and conditional bisimilarity. In Section IV, using the Birkhoff duality, it is shown that CTSs can be represented as lattice transition systems (LaTSs) whose transitions are labelled with the elements from a distributive lattice. Moreover, the bisimilarity introduced on LaTSs is shown to coincide with conditional bisimilarity on the corresponding CTSs. In Section V, we show how bisimilarity can be computed using a form of matrix multiplication. Section VI focusses on the translation between an FTS and a CTS, and moreover, a BDD-based implementation of checking bisimilarity is laid out. Lastly, we conclude with a discussion on related work and future work in Section VII. All the proofs can be found in Appendix A.

II. PRELIMINARIES
We now recall some basic definitions concerning lattices, including Birkhoff's well-known duality result from [13].
Definition 1 (Lattice, Heyting Algebra, Boolean Algebra). Let (L, ⊑) be a partially ordered set. If for each pair of elements ℓ, m ∈ L there exists a supremum ℓ ⊔ m and an infimum ℓ ⊓ m, we call (L, ⊔, ⊓) a lattice. A bounded lattice has a top element 1 and a bottom element 0. A lattice is complete if every subset of L has an infimum and a supremum. It is distributive if (ℓ ⊔ m) ⊓ n = (ℓ ⊓ n) ⊔ (m ⊓ n) holds for all ℓ, m, n ∈ L.

A bounded lattice L is a Heyting algebra if for any ℓ, m ∈ L there is a greatest element ℓ′ such that ℓ ⊓ ℓ′ ⊑ m. The residuum and negation are defined as ℓ → m = ⊔{ℓ′ | ℓ ⊓ ℓ′ ⊑ m} and ¬ℓ = ℓ → 0. A Boolean algebra L is a Heyting algebra satisfying ¬¬ℓ = ℓ for all ℓ ∈ L.

Example 1.
Given a set of atomic propositions N, consider B(N), the set of all Boolean expressions over N, i.e., the set of all formulae of propositional logic. We equate every subset C ⊆ N with the evaluation that assigns true to all f ∈ C and false to all f ∈ N \ C. For b ∈ B(N), we write C ⊨ b whenever C satisfies b. Furthermore, we define ⟦b⟧ = {C ⊆ N | C ⊨ b} ∈ P(P(N)). Two Boolean expressions b₁, b₂ are called equivalent whenever ⟦b₁⟧ = ⟦b₂⟧. Furthermore, b₁ implies b₂ (b₁ ⊨ b₂) whenever ⟦b₁⟧ ⊆ ⟦b₂⟧.

The set B(N), quotiented by equivalence, is a Boolean algebra, isomorphic to P(P(N)), where ⟦b₁⟧ ⊔ ⟦b₂⟧ = ⟦b₁⟧ ∪ ⟦b₂⟧ = ⟦b₁ ∨ b₂⟧ (analogously for ⊓, ∩, ∧), ¬⟦b⟧ = P(N) \ ⟦b⟧ = ⟦¬b⟧, and ⟦b₁⟧ → ⟦b₂⟧ = (P(N) \ ⟦b₁⟧) ∪ ⟦b₂⟧ = ⟦¬b₁ ∨ b₂⟧.

Distributive lattices and Boolean algebras give rise to an interesting duality result, which was first stated for finite lattices by Birkhoff and extended to the infinite case by Priestley [13]. In the sequel we will focus on finite distributive lattices (which are Heyting algebras). We first need the following concepts.
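A minimal sketch of this extensional view (the feature names are invented): Boolean expressions are identified with their sets of satisfying configurations, so the Boolean-algebra operations become plain set operations.

```python
from itertools import combinations

# Configurations are subsets of N; a Boolean expression b is represented
# extensionally by its semantics [[b]], the set of configurations satisfying it.
N = ["f", "g"]
configs = [frozenset(c) for r in range(len(N) + 1)
           for c in combinations(N, r)]

sem_f = {C for C in configs if "f" in C}               # [[f]]
sem_fg = {C for C in configs if "f" in C or "g" in C}  # [[f or g]]

join = sem_f | sem_fg          # [[b1]] ⊔ [[b2]] = [[b1 ∨ b2]]
meet = sem_f & sem_fg          # [[b1]] ⊓ [[b2]] = [[b1 ∧ b2]]
neg_f = set(configs) - sem_f   # ¬[[b]] = P(N) \ [[b]]

print(sem_f <= sem_fg)  # True: f implies f ∨ g, i.e. [[f]] ⊆ [[f ∨ g]]
```

Implication of expressions is thus just set inclusion of their semantics, which is the view used throughout the paper.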
Definition 2.
Let L be a lattice. An element n ∈ L \ {0} is said to be (join-)irreducible if whenever n = ℓ ⊔ m for elements ℓ, m ∈ L, it always holds that n = ℓ or n = m. We write J(L) for the set of all irreducible elements of L.

Let (S, ≤) be a partially ordered set. A subset S′ ⊆ S is downward-closed whenever s′ ∈ S′ and s ≤ s′ imply s ∈ S′. We write O(S) for the set of all downward-closed subsets of S and ↓s = {s′ | s′ ≤ s} for the downward-closure of s ∈ S.

[Fig. 1. An example motivating Birkhoff's representation theorem: a lattice with elements 0, a, b, c, d, e, f, 1 and, on the right, its dual representation by the downward-closed sets ∅, {a}, {b}, {a, b}, {b, e}, {a, b, e}, {a, b, f}, {a, b, e, f}.]
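The notions of Definition 2 can be checked mechanically on the poset of irreducibles from Figure 1; the order used below (a, b minimal, b < e, and a, b < f) is reconstructed from the downward-closed sets shown in the figure.

```python
from itertools import combinations

S = ["a", "b", "e", "f"]
below = {"a": set(), "b": set(), "e": {"b"}, "f": {"a", "b"}}  # strict predecessors

def O():
    """All downward-closed subsets of (S, <=)."""
    subs = (set(c) for r in range(len(S) + 1) for c in combinations(S, r))
    return {frozenset(D) for D in subs if all(below[x] <= D for x in D)}

def down(s):
    """The downward-closure ↓s of an element s."""
    return frozenset({s} | below[s])

print(len(O()))           # 8, matching the eight sets on the right of Figure 1
print(sorted(down("e")))  # ['b', 'e']
```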
Example 2.
For our example of a Boolean algebra B(N), quotiented by equivalence, the irreducibles are the complete conjunctions of literals or, alternatively, all sets C ⊆ N.

We can now state Birkhoff's representation theorem for finite distributive lattices [13].
Theorem 1. If L is a finite distributive lattice, then (L, ⊔, ⊓) ≅ (O(J(L)), ∪, ∩) via the isomorphism η : L → O(J(L)) defined as η(ℓ) = {ℓ′ ∈ J(L) | ℓ′ ⊑ ℓ}. Furthermore, given a finite partially ordered set (S, ≤), the downward-closed subsets of S, (O(S), ∪, ∩), form a distributive lattice with inclusion (⊆) as the partial order. The irreducibles of this lattice are all downward-closed sets of the form ↓s for s ∈ S.

Example 3.
Consider the lattice L = {0, a, b, c, d, e, f, 1} with the order depicted in Figure 1. The irreducible elements are a, b, e, f, i.e., exactly those elements that have a unique direct predecessor. On the right we depict the dual representation of the lattice in terms of downward-closed sets of irreducibles, ordered by inclusion. This example suggests an embedding of a distributive lattice L into a Boolean algebra, obtained by taking the powerset of irreducibles.

Proposition 1 (Embedding). A finite distributive lattice L embeds into the Boolean algebra B = P(J(L)) via the mapping η : L → B given by η(ℓ) = {ℓ′ ∈ J(L) | ℓ′ ⊑ ℓ}.

We will simply assume that L ⊆ B. Since an embedding is a lattice homomorphism, supremum and infimum coincide in L and B, and we write ⊔, ⊓ for both versions. Negation and residuum may however differ and we distinguish them via a subscript, writing ¬_L, ¬_B and →_L, →_B. Given such an embedding, we can approximate elements of a Boolean algebra in the embedded lattice.

Definition 3.
Let a complete distributive lattice L that embeds into a Boolean algebra B be given. Then the approximation of ℓ ∈ B is given by ⌊ℓ⌋_L = ⊔{ℓ′ ∈ L | ℓ′ ⊑ ℓ}.

If the lattice is clear from the context, we will in the sequel drop the subscript L and simply write ⌊ℓ⌋. For instance, in the previous example, the set of irreducibles {a, e, f}, which is not downward-closed, is approximated by ⌊{a, e, f}⌋ = {a}.

Lemma 1.
Let L be a complete distributive lattice that embeds into a Boolean algebra B. For ℓ, m ∈ B, we have ⌊ℓ ⊓ m⌋ = ⌊ℓ⌋ ⊓ ⌊m⌋ and, furthermore, ℓ ⊑ m implies ⌊ℓ⌋ ⊑ ⌊m⌋. If ℓ, m ∈ L, then ⌊ℓ ⊔ ¬m⌋ = m →_L ℓ.

Note that in general it does not hold that ⌊ℓ ⊔ m⌋ = ⌊ℓ⌋ ⊔ ⌊m⌋ and ⌊ℓ ⊔ ¬m⌋ = ⌊m⌋ →_L ⌊ℓ⌋ for arbitrary ℓ, m ∈ B. To witness why these equations fail to hold, take ℓ = {a, e} and m = {b, f} in the previous example as a counterexample.

[Fig. 2. Adaptive routing protocol with the alphabet A = {receive, check, u, e}: states ready, received, safe, unsafe; transitions labelled receive, check, u (guard b) and e (guard a), as described in Example 4.]

III. CONDITIONAL TRANSITION SYSTEMS
In this section we introduce conditional transition systems together with a notion of behavioural equivalence based on bisimulation. In [1], such transition systems were already investigated in a coalgebraic setting, where the set of conditions was trivially ordered. In the sequel, we will always use CTS for the variant with upgrades defined as follows:
Definition 4. A conditional transition system (CTS) over an alphabet A and a finite ordered set of conditions (Φ, ≤) is a triple (X, A, f), where X is a set of states and f : X × A → (Φ → P(X)) is a function mapping every ordered pair in X × A to a monotone function of type (Φ, ≤) → (P(X), ⊇). As usual, we write x −a,ϕ→ y whenever y ∈ f(x, a)(ϕ).

Intuitively, a CTS evolves as follows. Before the system starts acting, it is assumed that a condition ϕ ∈ Φ is chosen arbitrarily, which may represent a selection of a valid product of the system. Now all the transitions that have a condition greater than or equal to ϕ are activated, while the remaining transitions are inactive. Henceforth, the system behaves like a standard transition system until, at any point in the computation, the condition is changed to a smaller one (say, ϕ′), signifying a selection of a valid, upgraded product. This, in turn, has a propelling effect in the sense that now (de)activation of transitions depends on the new condition ϕ′ rather than on the old condition ϕ. Note that due to the monotonicity restriction we have that x −a,ϕ→ y and ϕ′ ≤ ϕ imply x −a,ϕ′→ y. That is, active transitions remain active during an upgrade, but new transitions may become active. In Appendix C, we weaken this requirement by discussing a mechanism for deactivating transitions via priorities on the alphabet.

Example 4.
Consider an example (simplified from [11]) of an adaptive routing protocol modelled as a CTS in Figure 2. The system has two products: the basic system, denoted b, with no encryption feature, and the advanced system, denoted a, with an encryption feature. The ordering on the products is a < b. Transitions that are present due to monotonicity are omitted.

Initially, the system is in state 'ready' and is waiting to receive a message. Once a message is received, there is a check whether the system's environment is safe or unsafe, leading to non-deterministic branching. If the encryption feature is present, then the system can send an encrypted message (e) from the unsafe state only; otherwise, the system sends an unencrypted message (u) regardless of the state being 'safe' or 'unsafe'. Note that such a behaviour description can easily be encoded by a transition function, e.g., f(received, check)(x) = {safe, unsafe} and f(received, a′)(x) = ∅ (for every condition x ∈ {a, b} and action a′ ∈ A \ {check}) specifies the transitions that can be fired from the received state to the (un)safe states.

Next, we turn our attention towards (strong) bisimulation relations for CTSs, which consider the ordering of conditions in their transfer properties.
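The transition function of Example 4 can be sketched executably. The encoding below (guards given as the largest activating condition, state and action names taken from Figure 2) is an illustration, not the paper's implementation:

```python
# leq[p] = set of conditions below or equal to p in the product order a < b.
leq = {"b": {"a", "b"}, "a": {"a"}}

# Transitions of the routing protocol, each with its largest activating guard.
trans = {
    ("ready", "receive"):  [("received", "b")],
    ("received", "check"): [("safe", "b"), ("unsafe", "b")],
    ("safe", "u"):         [("ready", "b")],
    ("unsafe", "u"):       [("ready", "b")],
    ("unsafe", "e"):       [("ready", "a")],
}

def f(x, act, phi):
    """f(x, act)(phi): successors active once condition phi is chosen.
    A transition guarded by g is active for phi iff phi <= g (monotonicity)."""
    return {y for (y, g) in trans.get((x, act), []) if phi in leq[g]}

print(f("unsafe", "e", "b"))  # set(): no encryption in the basic product
print(f("unsafe", "e", "a"))  # {'ready'}: enabled after the upgrade
```

The comprehension in `f` makes the monotonicity requirement automatic: upgrading from b to a can only add successors.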
Definition 5.
Let (X, A, f), (Y, A, g) be two CTSs over the same set of conditions (Φ, ≤). For a condition ϕ ∈ Φ, we define f_ϕ(x, a) = f(x, a)(ϕ) to denote the traditional (A-)labelled transition system induced by a CTS (X, A, f). Two states x ∈ X, y ∈ Y are conditionally bisimilar under a condition ϕ ∈ Φ, denoted x ∼_ϕ y, if there is a family of relations R_ϕ′ ⊆ X × Y (for every ϕ′ ≤ ϕ) such that
(i) each relation R_ϕ′ is a traditional bisimulation relation between f_ϕ′ and g_ϕ′,
(ii) whenever ϕ′ ≤ ϕ′′, we have R_ϕ′ ⊇ R_ϕ′′, and
(iii) R_ϕ relates x and y, i.e., (x, y) ∈ R_ϕ.

Example 5.
Consider the CTS illustrated in Figure 2 where the condition b of the transition 'received −check,b→ unsafe' is replaced by a. Let ready₁ and ready₂ denote the initial states of the system before and after the above modification, respectively. Then we find ready₁ ∼_a ready₂; however, ready₁ ≁_b ready₂. To see why the latter fails to hold, let R_b be the bisimulation relation in the traditional sense between the states ready₁, ready₂ under condition b. Then one finds that the states unsafe₁, safe₂ are bisimilar in the traditional sense, i.e., (unsafe₁, safe₂) ∈ R_b. However, the two states cannot be related by any traditional bisimulation relation under condition a, thus violating condition (ii) of Definition 5. Indeed, the two systems behave differently. In the first, it is possible to perform the actions receive, check (arriving in state unsafe), do an upgrade, and send an encrypted message (e), which is not feasible in the second system because the check transition forces the system to be in the safe state before doing the upgrade. However, without upgrades, the above systems would be bisimilar for both products.

We end this section by adapting the classical bisimulation game to conditional transition systems, thus incorporating our intuitive explanation of upgrades into the notion of bisimilarity.
Definition 6 (Bisimulation Game). Given two CTSs (X, A, f) and (Y, A, g) over a poset (Φ, ≤), a state x ∈ X, a state y ∈ Y, and a condition ϕ ∈ Φ, the bisimulation game is a round-based two-player game that uses both CTSs as game boards. Let (x, y, ϕ) be a game instance indicating that x, y are marked and the current condition is ϕ. The game progresses to the next game instance as follows:
• Player 1 is the first one to move. Player 1 can decide to make an upgrade, i.e., replace the condition ϕ by a smaller one (say ϕ′ ≤ ϕ, for some ϕ′ ∈ Φ).
• Player 1 can choose the marked state x ∈ X (or y ∈ Y) and performs a transition x −a,ϕ′→ x′ (or y −a,ϕ′→ y′).
• Player 2 then has to simulate the last step, i.e., if Player 1 made a step x −a,ϕ′→ x′, Player 2 is required to make a step y −a,ϕ′→ y′, and vice versa.
• In turn, the new game instance is (x′, y′, ϕ′).
Player 1 wins if Player 2 cannot simulate the last step performed by Player 1. Player 2 wins if the game never terminates or Player 1 cannot make another step.

So bisimulation is characterised as follows: Player 2 has a winning strategy for a game instance (x, y, ϕ) if and only if x ∼_ϕ y. The proof and the computation of the winning strategies for both players are given in Appendix A-B.

IV. LATTICE TRANSITION SYSTEMS
Recall from Section II that there is a duality between partial orders and distributive lattices. In fact, as we will show below, this result can be lifted to the level of transition systems as follows: a conditional transition system over a poset is equivalent to a transition system whose transitions are labelled by the downward-closed subsets of the poset.
Definition 7. A lattice transition system (LaTS) over a finite distributive lattice L and an alphabet A is a triple (X, A, α) with a set of states X and a transition function α : X × A × X → L. A LaTS (X, A, α) is finite if the sets X, A are finite.
Note that, superficially, lattice transition systems resemble weighted automata [14]. However, while in weighted automata the lattice annotations are seen as weights that are accumulated, in CTSs they play the role of guards that control which transitions can be taken. Furthermore, the notions of behavioural equivalence are quite different.

Given a CTS (X, A, f) over (Φ, ≤), we can easily construct a LaTS over O(Φ) by defining α(x, a, x′) = {ϕ ∈ Φ | x′ ∈ f(x, a)(ϕ)} for x, x′ ∈ X, a ∈ A. Due to monotonicity, α(x, a, x′) is always downward-closed. Similarly, a LaTS can be converted into a CTS by using the Birkhoff duality and by taking the irreducibles as conditions.

Theorem 2.
The set of all CTSs over a set of conditions Φ is isomorphic to the set of all LaTSs over the lattice whose elements are the downward-closed subsets of Φ.

So every LaTS over a finite distributive lattice gives rise to a CTS in our sense (cf. Definition 4) and, since finite Boolean algebras are finite distributive lattices, conditional transition systems in the sense of [1] are CTSs in our sense as well. We chose the definition of a CTS using posets instead of the dual view using lattices because this view yields a natural description which models transitions in terms of conditions (product versions), though when computing with CTSs we often choose the lattice view. By adopting this view, conditional bisimulations can be computed symbolically and hence more efficiently (cf. Section VI-B).
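The CTS-to-LaTS construction α(x, a, x′) = {ϕ ∈ Φ | x′ ∈ f(x, a)(ϕ)} can be sketched directly; the two-condition CTS below and all names in it are invented for illustration:

```python
Phi = ["a", "b"]  # conditions, ordered a < b

def f(x, act, phi):
    """Toy CTS: under b only s -go-> t; the upgrade to a adds s -go-> u."""
    if (x, act) == ("s", "go"):
        return {"t"} | ({"u"} if phi == "a" else set())
    return set()

def to_lats(states, acts):
    """Label each transition with the set of conditions activating it;
    monotonicity of f guarantees the label is downward-closed."""
    alpha = {}
    for x in states:
        for act in acts:
            for phi in Phi:
                for y in f(x, act, phi):
                    alpha.setdefault((x, act, y), set()).add(phi)
    return alpha

alpha = to_lats(["s", "t", "u"], ["go"])
print(alpha[("s", "go", "t")])  # {'a', 'b'}: active under every condition
print(alpha[("s", "go", "u")])  # {'a'}: active only after the upgrade
```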
Definition 8.
Let (X, A, α) and (Y, A, β) be any two LaTSs over a lattice L. A conditional relation R, i.e., a function of type R : X × Y → L, is a lattice bisimulation for α, β if and only if the following transfer properties are satisfied:
(i) For all x, x′ ∈ X, y ∈ Y, a ∈ A, ℓ ∈ J(L): whenever x −a,ℓ→ x′ and ℓ ⊑ R(x, y), there exists y′ ∈ Y such that y −a,ℓ→ y′ and ℓ ⊑ R(x′, y′).
(ii) Symmetric to (i) with the roles of x and y interchanged.
In the above, we write x −a,ℓ→ x′ whenever ℓ ⊑ α(x, a, x′). For ϕ ∈ Φ, a transition x −a,ϕ→ x′ exists in the CTS if and only if there is a transition x −a,↓ϕ→ x′ in the corresponding LaTS; hence they are denoted by the same symbol.

Theorem 3.
Let (X, A, f) and (Y, A, g) be any two CTSs over Φ. Two states x ∈ X, y ∈ Y are conditionally bisimilar under a condition ϕ if and only if there is a lattice bisimulation R between the corresponding LaTSs such that ϕ ∈ R(x, y).

Incidentally, the order in L gives rise to a natural order on lattice bisimulations. For any two lattice bisimulations R₁, R₂ : X × Y → L, we write R₁ ⊑ R₂ if and only if R₁(x, y) ⊑ R₂(x, y) for all x ∈ X, y ∈ Y. As a result, taking the element-wise supremum of a family of lattice bisimulations is again a lattice bisimulation. Therefore, the greatest lattice bisimulation for a LaTS always exists, just as in the traditional case.

Lemma 2.
Let R_i : X × Y → L, i ∈ I, be lattice bisimulations for a pair of LaTSs (X, A, α) and (Y, A, β). Then ⊔{R_i | i ∈ I} is a lattice bisimulation.

V. COMPUTATION OF LATTICE BISIMULATION
The goal of this section is to present an algorithm that computes the greatest lattice bisimulation between a given pair of LaTSs. In particular, we first characterise lattice bisimulation as a post-fixpoint of an operator F on the set of all conditional relations. Then we show that this operator F is monotone with respect to the ordering relation ⊑, thereby ensuring that the greatest bisimulation always exists by the well-known Knaster-Tarski fixpoint theorem. Moreover, for finite lattices and finite sets of states, the usual fixpoint iteration starting with the trivial conditional relation (i.e., the constant 1-matrix over L) can be used to compute the greatest lattice bisimulation. Lastly, we give a translation of F in terms of matrices, using a form of matrix multiplication found in the literature on residuated lattices [4] and database design [18].

A. A Fixpoint Approach
Throughout this section, we let α : X × A × X → L, β : Y × A × Y → L denote any two LaTSs, L denote a finite distributive lattice, and B denote the Boolean algebra that this lattice embeds into.

Definition 9.
Recall the residuum operator → on a lattice and define three operators F₁, F₂, F : (X × Y → L) → (X × Y → L) in the following way (for R : X × Y → L, x ∈ X, y ∈ Y):

F₁(R)(x, y) = ⊓_{a ∈ A, x′ ∈ X} ( α(x, a, x′) → ⊔_{y′ ∈ Y} (β(y, a, y′) ⊓ R(x′, y′)) ),

F₂(R)(x, y) = ⊓_{a ∈ A, y′ ∈ Y} ( β(y, a, y′) → ⊔_{x′ ∈ X} (α(x, a, x′) ⊓ R(x′, y′)) ),

F(R)(x, y) = F₁(R)(x, y) ⊓ F₂(R)(x, y).

Note that the above definition is provided for a distributive lattice; viewing it in the classical two-valued Boolean algebra results in the well-known transfer properties of a bisimulation.
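The operators F₁, F₂ and the greatest-fixpoint iteration they induce can be sketched concretely. The following is a minimal, assumption-laden illustration: the lattice is O(Φ) for the two-point chain a < b, the two single-action LaTSs are invented, and the residuum is computed via the approximation of Definition 3 (by Lemma 1, ℓ → m = ⌊¬ℓ ⊔ m⌋ for lattice elements).

```python
from itertools import product

Phi = frozenset({"a", "b"})
below = {"a": set(), "b": {"a"}}  # strict predecessors in (Phi, <=)

def down_approx(S):
    """Largest downward-closed subset of S (the approximation of Definition 3)."""
    S = set(S)
    while True:
        keep = {x for x in S if below[x] <= S}
        if keep == S:
            return frozenset(S)
        S = keep

def residuum(l, m):
    """Heyting residuum l -> m in O(Phi), computed as the approximation
    of the Boolean residuum (Phi \\ l) ∪ m (cf. Lemma 1)."""
    return down_approx((Phi - l) | m)

TOP, BOT = Phi, frozenset()
UP = frozenset({"a"})  # the downward-closed set {a}

# alpha: x0 -> x1 always; x1 loops only after an upgrade to a.
# beta:  y0 -> y1 and the y1-loop are always active.
X, Y = ["x0", "x1"], ["y0", "y1"]
alpha = {("x0", "x1"): TOP, ("x1", "x1"): UP}
beta = {("y0", "y1"): TOP, ("y1", "y1"): TOP}

def F(R):
    """One application of the operator F of Definition 9 (single action)."""
    R2 = {}
    for x, y in product(X, Y):
        v = TOP
        for x2 in X:  # F1: every x-move must be answered by some y-move
            hit = BOT
            for y2 in Y:
                hit = hit | (beta.get((y, y2), BOT) & R[(x2, y2)])
            v = v & residuum(alpha.get((x, x2), BOT), hit)
        for y2 in Y:  # F2: symmetric
            hit = BOT
            for x2 in X:
                hit = hit | (alpha.get((x, x2), BOT) & R[(x2, y2)])
            v = v & residuum(beta.get((y, y2), BOT), hit)
        R2[(x, y)] = v
    return R2

R = {p: TOP for p in product(X, Y)}  # the constant-1 matrix
while (R2 := F(R)) != R:
    R = R2
print(R)  # every pair is bisimilar exactly under the upgraded condition a
```

Under b, x1 deadlocks while y1 can still loop, so no pair is related under b; under a, both systems loop forever, and the iteration indeed stabilises with every entry equal to {a}.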
Theorem 4.
A conditional relation R is a lattice bisimulation if and only if R ⊑ F(R).

Next, it is easy to see that F is a monotone operator with respect to the ordering ⊑ on L, since the infimum and supremum are both monotonic and, moreover, the residuum operation is monotonic in the second component. As a result, we can use the following fixpoint iteration to compute the greatest bisimulation when working with finite lattices and finite sets of states.

Algorithm 1.
Let (X, A, α) and (Y, A, β) be two finite LaTSs. Fix R₀ as R₀(x, y) = 1 for all x ∈ X, y ∈ Y. Then compute R_{i+1} = F(R_i) for all i ∈ ℕ until R_i ⊑ R_{i+1}. Lastly, return R_i as the greatest bisimulation.

Supposing α = β, it is not hard to see that the fixpoint iteration must stabilise after at most |X| steps, since each R_i induces equivalence relations for all conditions ϕ and refinements regarding ϕ are immediately propagated to every ϕ′ ≥ ϕ. An equivalence relation can be refined at most |X| times, limiting the number of iterations.

B. Lattice Bisimilarity is Finer than Boolean Bisimilarity
We now show the close relation between the notions of bisimilarity for a LaTS defined over a finite distributive lattice L and over a Boolean algebra B. As usual, let (X, A, α) and (Y, A, β) be any two LaTSs, with the restriction that the lattice L embeds into the Boolean algebra B. Moreover, let F_L and F_B be the monotone operators as defined in Definition 9 over the lattice L and the Boolean algebra B, respectively. We say that R is an L-bisimulation (resp. B-bisimulation) whenever R ⊑ F_L(R) (resp. R ⊑ F_B(R)).

Proposition 2.
(i) If R : X × Y → L, then ⌊F_B(R)⌋ = F_L(R).
(ii) Every L-bisimulation is also a B-bisimulation.
(iii) A B-bisimulation R : X × Y → B is an L-bisimulation whenever all the entries of R are in L.

However, even though the two notions of bisimilarity are closely related, they are not identical, i.e., it is not true that whenever a state x is bisimilar to a state y in B it is also bisimilar in L (see Example 4, where we encounter a B-bisimulation which is not an L-bisimulation).

C. Matrix Multiplication
An alternative way to represent a LaTS (X, A, α) is to view the transition function α as a family of matrices α_a : X × X → L (one for each action a ∈ A) with α_a(x, x′) = α(x, a, x′) for every x, x′ ∈ X. We use standard matrix multiplication (where ⊔ is used for addition and ⊓ for multiplication), as well as a special form of matrix multiplication [4], [18].

Definition 10 (⊗-multiplication). Given an X × Y-matrix U : X × Y → L and a Y × Z-matrix V : Y × Z → L, we define the ⊗-multiplication of U and V as follows:

U ⊗ V : X × Z → L, (U ⊗ V)(x, z) = ⊓_{y ∈ Y} ( U(x, y) →_L V(y, z) ).

Theorem 5.
Let R : X × Y → L be a conditional relation between a pair of LaTSs (X, A, α) and (Y, A, β). Then

F(R) = ⊓_{a ∈ A} ( (α_a ⊗ (R · β_a^T)) ⊓ (β_a ⊗ (α_a · R)^T)^T ),

where A^T denotes the transpose of a matrix A.

We end this section by making an observation on LaTSs over a Boolean algebra. In a Boolean algebra, it is well known that the residuum operator can be replaced by the negation and join operators. Thus, in this case, using only the standard matrix multiplication and (componentwise) negation we get U ⊗ V = ¬(U · (¬V)), which further simplifies F(R) to:

F(R) = ⊓_{a ∈ A} ( ¬(α_a · ¬(R · β_a^T)) ⊓ ¬(¬(α_a · R) · β_a^T) ).

This reduction is especially relevant to software product lines with no upgrade features.

VI. APPLICATION AND IMPLEMENTATION
A. Featured Transition Systems
A Software Product Line (SPL) is commonly described as "a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets [artifacts] in a prescribed way" [10]. The idea of designing a set of software systems that share common functionalities in a collective way is becoming prominent in the field of software engineering (cf. [21]). In this section we show that a featured transition system (FTS) [12] – a well-known formal model that is expressive enough to specify an SPL – is a special instance of a CTS.
Definition 11. A featured transition system (FTS) over a finite set of features N is a tuple F = (X, A, T, γ), where X is a finite set of states, A is a finite set of actions and T ⊆ X × A × X is the set of transitions. Finally, γ : T → B(N) assigns a Boolean expression over N to each transition.

FTSs are often accompanied by a so-called feature diagram [7], [9], [11], a Boolean expression d ∈ B(N) that specifies admissible feature combinations. Given a subset of features C ⊆ N (called a configuration or product) such that C ⊨ d and an FTS F = (X, A, T, γ), a state x ∈ X can perform an a-transition to a state y ∈ X in the configuration C whenever (x, a, y) ∈ T and C ⊨ γ(x, a, y).

It is easy to see that an FTS is a CTS, where the conditions are the subsets of N satisfying d with the discrete order. Moreover, an FTS is a special case of a LaTS due to Theorem 2 and O(⟦d⟧, =) = P(⟦d⟧). Given an FTS F = (X, A, T, γ) and a feature diagram d, the corresponding LaTS is (X, A, α) with α(x, a, y) = ⟦γ(x, a, y) ∧ d⟧ if (x, a, y) ∈ T, and α(x, a, y) = ∅ if (x, a, y) ∉ T.

Furthermore, we can extend the notion of FTSs by fixing a subset of upgrade features U ⊆ N that induces the following ordering on configurations C, C′ ∈ ⟦d⟧:

C ≤ C′ ⟺ ∀f ∈ U (f ∈ C′ ⇒ f ∈ C) ∧ ∀f ∈ (N \ U) (f ∈ C′ ⟺ f ∈ C).

Intuitively, the configuration C can be obtained from C′ by "switching on" one or several upgrade features f ∈ U. Notice that it is this upgrade ordering on configurations which gives rise to the partially ordered set of conditions in the definition of a CTS. Hence, in the following we will consider the lattice O(⟦d⟧, ≤), i.e., the set of all downward-closed subsets of ⟦d⟧.

B. BDD-Based Representation
In this section, we discuss our implementation of lattice bisimulation using a special form of binary decision diagrams (BDDs) called reduced and ordered binary decision diagrams (ROBDDs). Our implementation can handle adaptive SPLs that allow upgrade features, using finite distributive lattices. Note that non-adaptive SPLs based on Boolean algebras are a special case. BDD-based implementations of FTSs without upgrades have already been mentioned in [8], [12].

A binary decision diagram (BDD) is a rooted, directed, and acyclic graph which serves as a representation of a Boolean function. Every BDD has two distinguished terminal nodes 1 and 0, representing the logical constants true and false. The inner nodes are labelled by the atomic propositions of a Boolean expression b ∈ B(N) represented by the BDD, such that on each path from the root to the terminal nodes, every variable of the Boolean formula occurs at most once. Each inner node has exactly two distinguished outgoing edges, called high and low, representing the case that the atomic proposition of the inner node has been set to true or false. Given a BDD for a Boolean expression b ∈ B(N) and a configuration C ⊆ N (representing an evaluation of the atomic propositions), we can check whether C ⊨ b by following the path from the root node to a terminal node: at a node labelled f ∈ N we go to the high-successor if f ∈ C and to the low-successor if f ∉ C. If we arrive at the terminal node labelled 1, we have established that C ⊨ b; otherwise C ⊭ b.

We use a special class of BDDs called ROBDDs (see [2] for more details) in which the order of the variables occurring in the BDD is fixed and redundancy is avoided: if both child nodes of a parent node are identical, the parent node is dropped from the BDD, and isomorphic parts of the BDD are merged.
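The evaluation walk just described can be sketched with a toy node encoding; the tuple format ("node", f, low, high) with Boolean terminals and the feature names are invented, not the paper's implementation:

```python
def bdd_eval(b, C):
    """Follow 'high' if the node's variable is in C, 'low' otherwise,
    until a terminal (True/False) is reached."""
    while not isinstance(b, bool):
        _, f, low, high = b
        b = high if f in C else low
    return b

# A BDD for f1 ∧ f2 (hypothetical feature names):
b = ("node", "f1", False, ("node", "f2", False, True))
print(bdd_eval(b, {"f1", "f2"}))  # True
print(bdd_eval(b, {"f1"}))        # False
```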
Furthermore, there arepolynomial-time implementations for the basic operations –negation, conjunction, and disjunction. These are however sen-sitive to the ordering of atomic propositions and an exponentialblowup cannot be ruled out, but often it can be avoided. f f f f f f Fig. 3. BDD for b . Consider a Boolean expression b with J b K = {∅ , { f , f } , { f , f } , { f , f , f , f }} and theordering on the atomic propositionsas f , f , f , f . Figure 3 showsthe corresponding ROBDDrepresentation for b , where theinner nodes, terminal nodes, and high(low) edges are depicted as circles,rectangles, and solid (dashed) lines, respectively.Formally, an ROBDD b over a set of features N is an expres-sion in one of the following forms: , or , or ( f, b , b ) . Here, , denote the two terminal nodes and the triple ( f, b , b ) denotes an inner node with variable f ∈ N and b , b as the low - and high -successors, respectively. If b = ( f, b , b ) , wewrite root ( b ) = f , high ( b ) = b , and low ( b ) = b .Note that the elements of the Boolean algebra P ( P ( N )) correspond exactly to ROBDDs over N . We now discuss howROBDDs can be used to specify and manipulate elements ofthe lattice O ( J d K , ≤ ) . In particular, computing the infimum(conjunction) and the supremum (disjunction) in the lattice O ( J d K , ≤ ) is standard, since this lattice can be embedded into P ( P ( N )) and the infimum and supremum operations coincidein both structures. Thus, it remains to characterize the latticeelements and the residuum operation.We say that an ROBDD b is downward-closed w.r.t. ≤ (orsimply, downward-closed) if the set of configurations J b K isdownward-closed w.r.t. ≤ . The following lemma characteriseswhen an ROBDD b is downward-closed. It follows from thefact that F ∈ P ( P ( N )) is downward-closed if and only if forall C ∈ F, f ∈ U we have C ∪ { f } ∈ F . Lemma 3.
An ROBDD is downward-closed if and only if, for each node labelled with an upgrade feature, the low-successor implies the high-successor.
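The underlying fact about configuration sets can be stated directly in code (a small Python sketch under our own set-of-frozensets encoding; it checks downward-closedness semantically, not on the ROBDD structure of Lemma 3):

```python
def is_downward_closed(F, U):
    """F ⊆ P(N), given as a set of frozensets, is downward-closed
    w.r.t. the upgrade order iff adding any upgrade feature to an
    accepted configuration yields an accepted configuration."""
    return all(C | {f} in F for C in F for f in U)

U = {"f"}
# {∅, {f}} is downward-closed: upgrading ∅ with f stays inside the set
assert is_downward_closed({frozenset(), frozenset({"f"})}, U)
# {∅} alone is not: ∅ ∪ {f} = {f} is missing
assert not is_downward_closed({frozenset()}, U)
```

The check is quadratic in |F|·|U|, which is only feasible on explicit configuration sets; the point of the ROBDD characterisation is to avoid this enumeration.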
Next, we compute the residuum in O(⟦d⟧, ≤) by using the residuum operation of the Boolean algebra P(P(N)). For this, we first describe how to approximate an element of the Boolean algebra (represented as an ROBDD) in the lattice O(P(N), ≤).

In Algorithm 1 below, for each non-terminal node that carries a label in U, we replace the low-successor by the conjunction of the low- and the high-successor. Since this might result in a BDD that is not reduced, we apply the build procedure appropriately, which simply transforms a given ordered BDD into an ROBDD. The result ⌊⌊b⌋⌋ of the algorithm coincides with the approximation ⌊b⌋ of the ROBDD b seen as an element of the Boolean algebra P(P(N)) (Definition 3).

Algorithm 1: Approximation ⌊⌊b⌋⌋ of an ROBDD b in the lattice O(P(N), ≤)
Input:
An ROBDD b over a set of features N and a set of upgrade features U ⊆ N.
Output:
An ROBDD ⌊⌊b⌋⌋, which is the best approximation of b in the lattice.

procedure ⌊⌊b⌋⌋
  if b is a leaf then
    return b
  else if root(b) ∈ U then
    return build(root(b), ⌊⌊high(b)⌋⌋ ∧ ⌊⌊low(b)⌋⌋, ⌊⌊high(b)⌋⌋)
  else
    return build(root(b), ⌊⌊low(b)⌋⌋, ⌊⌊high(b)⌋⌋)
  end if
end procedure

Fig. 4. Components for α and β, where f is viewed as a Boolean expression indicating the presence of feature f.

Lemma 4.
For an ROBDD b, ⌊⌊b⌋⌋ is downward-closed. Furthermore, ⌊⌊b⌋⌋ ⊨ b and there is no other downward-closed ROBDD b′ such that ⌊⌊b⌋⌋ ⊨ b′ ⊨ b. Hence ⌊⌊b⌋⌋ = ⌊b⌋.

For each node in the BDD we compute at most one conjunction, which takes quadratic time. Hence the entire runtime of the approximation procedure is at most cubic. Finally, we discuss how to compute the residuum in O(⟦d⟧, ≤).

Proposition 3.
Let b₁, b₂ be two ROBDDs which represent elements of O(⟦d⟧, ≤), i.e., b₁, b₂ are both downward-closed and b₁ ⊨ d, b₂ ⊨ d. (i) ⌊¬b₁ ∨ b₂ ∨ ¬d⌋ ∧ d is the residuum b₁ → b₂ in the lattice O(⟦d⟧, ≤). (ii) If d is downward-closed, then this simplifies to b₁ → b₂ = ⌊¬b₁ ∨ b₂⌋ ∧ d. Here, ¬ is the negation in the Boolean algebra P(P(N)).

C. Implementation and Runtime Results

We have implemented an algorithm that computes the lattice bisimulation relation based on matrix multiplication (see Theorem 5) in a generic way. Specifically, this implementation is independent of how the irreducible elements are encoded, ensuring that no implementation details of operations such as matrix multiplication can interfere with the runtime results. For our experiments we instantiated it in two possible ways: with bit vectors representing feature combinations and with ROBDDs as outlined above. Our results show a significant advantage when we use BDDs to compute lattice bisimilarity. The implementation is written in C. Let F be a set of features. Our example contains, for each feature f ∈ F, one disconnected component in both LaTSs, as depicted in Figure 4: the component for α on the left, the component for β on the right. The only difference between the two is the guard of a single transition.

The quotient of the times taken without BDDs and with BDDs grows exponentially, by a roughly constant factor for each additional feature (see the table in Appendix B); due to fluctuations, an exact rate cannot be given. By the eighteenth iteration (i.e., 18 features and as many copies of the basic component), the implementation using BDDs needed 17 seconds, whereas the version without BDDs took more than 96 hours. The nineteenth iteration exceeded the memory for the implementation without BDDs, but terminated within 22 seconds with BDDs.

VII. CONCLUSION, RELATED WORK, AND FUTURE WORK
In this paper, we endowed CTSs with an order on conditions to model systems whose behaviour can be upgraded by replacing the current condition with a smaller one. Corresponding verification techniques based on behavioural equivalences can be important for SPLs, where an upgrade to a more advanced version of the same software should occur without unexpected behaviour. To this end, we proposed an algorithm, based on matrix multiplication, that allows us to compute the greatest bisimulation of two given CTSs. Interestingly, the duality between lattices and downward-closed sets of posets, as well as the embedding into a Boolean algebra, proved fruitful when developing the algorithm and proving its correctness.

There are two ways in which one can extend CTSs as a specification language. First, in some cases it makes sense to specify that an advanced version offers improved transitions with respect to a basic version. For instance, in our running example, allowing the router to send unencrypted messages in an unsafe environment is superfluous because the advanced version always has the encryption feature. Such a situation can be modelled in a CTS by adding a precedence relation over the set of actions, leading to the deactivation of transitions, which is worked out in Appendix C. The second question is how to incorporate downgrades: one solution could be to work with a pre-order on conditions instead of a partial order. This simply means that two conditions ϕ ≠ ψ with ϕ ≤ ψ and ψ ≤ ϕ can be merged, since they can be exchanged arbitrarily. Naturally, one could study more sophisticated notions of upgrade and downgrade in the context of adaptivity.

As for related work on adaptive SPLs, the literature can be grouped into either empirical or formal approaches; given the nature of our work, below we concentrate only on the formal ones [6], [11], [15], [23].

Cordy et al.
[11] model an adaptive SPL using an FTS which encodes not only a product's transitions, but also how some of the features may change via the execution of a transition. In contrast, we encode adaptivity by requiring a partial order on the products of an SPL, and capture its effect on behaviour evolution by the monotonicity requirement on the transition function. Moreover, instead of studying the model checking problem as in [11], our focus was on bisimilarity between adaptive SPLs.

In [6], [15], [20], alternative ways to model adaptive SPLs are presented, using the synchronous parallel composition of two separate computational models. Intuitively, one models the static aspect of an SPL, while the other focuses on adaptivity by specifying the dynamic (de)selection of features. For instance, Dubslaff et al. [15] used two separate Markov decision processes (MDPs) to model an adaptive SPL. They modelled the core behaviour in an MDP called the feature module, while the dynamic (de)activation of features is modelled separately in an MDP called the feature controller. In retrospect, our work shows that for monotonic upgrades it is possible to compactly represent an adaptive SPL in one computational model (CTSs in our case) rather than as a parallel composition of two.

In [23], a process calculus QFLan, motivated by concurrent constraint programming, was developed. Thanks to an in-built notion of a store, various aspects of an adaptive SPL, such as (un)installing a feature and replacing a feature by another feature, can be modelled at run-time by operational rules.
Although QFLan has constructs to specify quantitative constraints in the spirit of [15], its aim is to obtain statistical evidence by performing simulations.

Behavioural equivalences such as (bi)simulation relations have already been studied in the literature on traditional SPLs. In [12], the authors proposed a definition of a simulation relation between any two FTSs (without upgrades) to combat the state explosion problem by establishing a simulation relation between a system and its refined version. In contrast, the authors of [3] used simulation relations to measure the discrepancy in behaviour caused by feature interaction, i.e., whether a feature that is correctly designed in isolation still works correctly when combined with other features.

(Bi)simulation relations on lattice Kripke structures were also studied in [19], but in a very different context (model checking rather than the analysis of adaptive SPLs). Disregarding the differences between transition systems and Kripke structures (i.e., forgetting the role of atomic propositions), the definition of bisimulation in [19] is quite similar to our Definition 9 (another similar formula occurs in [12]). However, in [19] the stronger assumption of finite distributive de Morgan algebras is used, the results are quite different, and symbolic representations via BDDs are not taken into account. Moreover, representing lattice elements and computing the residuum over them using BDDs is novel in comparison with [12], [19].

Lastly, Fitting [16] studied bisimulation relations in the setting of unlabelled transition systems and gave an elegant characterisation of bisimulation when transition systems and the relations over states are viewed as matrices. By restricting ourselves to LaTSs over Boolean algebras and fixing our alphabet to be a singleton set, we can establish the following correspondence between Fitting's formulation of bisimulation and lattice bisimulation (see Appendix A-A for the proof).

Theorem 6.
Let (X, α) be a LaTS over an atomic Boolean algebra B. Then a conditional relation R : X × X → B is a lattice bisimulation for α if and only if R · α ⊑ α · R and Rᵀ · α ⊑ α · Rᵀ. Here we interpret α as a matrix of type X × X → B by dropping the occurrence of action labels.

In hindsight, we are treating general distributive lattices that allow us to conveniently model and reason about upgrades.
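For the two-valued Boolean algebra, the matrix condition of Theorem 6 can be tested with ordinary Boolean matrices (a small sanity check in Python under our own encoding, not the paper's generic implementation; · is Boolean matrix multiplication and ⊑ is pointwise implication):

```python
def matmul(A, B):
    """Boolean matrix product: (A·B)[i][j] = ⋁_k A[i][k] ∧ B[k][j]."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def leq(A, B):
    """Pointwise order: A ⊑ B iff every entry of A implies that of B."""
    return all(b or not a for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_bisimulation(R, alpha):
    """Fitting-style condition: R·α ⊑ α·R and Rᵀ·α ⊑ α·Rᵀ."""
    Rt = transpose(R)
    return (leq(matmul(R, alpha), matmul(alpha, R)) and
            leq(matmul(Rt, alpha), matmul(alpha, Rt)))

# Unlabelled system with states {0, 1} and a single transition 0 → 1
alpha = [[False, True], [False, False]]
identity = [[True, False], [False, True]]
full = [[True, True], [True, True]]
assert is_bisimulation(identity, alpha)  # the identity relation always works
assert not is_bisimulation(full, alpha)  # 0 and 1 are not bisimilar
```

The full relation fails because state 1 is deadlocked while state 0 has an outgoing transition, which the matrix inequality detects in one multiplication.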
Current and future work:
In the future we plan to obtain runtime results for systems of varying sizes. In particular, we are interested in real-world applications in the field of SPLs, together with other applications, such as modelling transition systems with access rights or deterioration. On the more theoretical side, we have worked out the coalgebraic concepts for CTSs [5] and compared the matrix multiplication algorithm to the final chain algorithm presented in [1], when applied to CTSs.
Acknowledgements:
We thank Filippo Bonchi and Mathias Hülsbusch for interesting discussions on earlier drafts.
REFERENCES

[1] J. Adámek, F. Bonchi, M. Hülsbusch, B. König, S. Milius, and A. Silva. A coalgebraic perspective on minimization and determinization. In Proc. of FOSSACS '12, pages 58–73. Springer, 2012. LNCS/ARCoSS 7213.
[2] H. R. Andersen. An introduction to binary decision diagrams. Course Notes, 1997.
[3] J. M. Atlee, U. Fahrenberg, and A. Legay. Measuring behaviour interactions between product-line features. In Proc. of Formalise '15, pages 20–25, Piscataway, NJ, USA, 2015. IEEE Press.
[4] R. Belohlavek and J. Konecny. Row and column spaces of matrices over residuated lattices. Fundam. Inf., 115(4):279–295, December 2012.
[5] H. Beohar, B. König, S. Küpper, and A. Silva. A coalgebraic treatment of conditional transition systems with upgrades. arXiv:1612.05002, submitted to LMCS.
[6] P. Chrszon, C. Dubslaff, S. Klüppelholz, and C. Baier. Family-based modeling and analysis for probabilistic systems – featuring ProFeat. In Proc. of FASE '16, pages 287–304. Springer, 2016. LNCS 9633.
[7] A. Classen, M. Cordy, P.-Y. Schobbens, P. Heymans, A. Legay, and J.-F. Raskin. Featured transition systems: Foundations for verifying variability-intensive systems and their application to LTL model checking. IEEE Trans. Softw. Eng., 39(8):1069–1089, August 2013.
[8] A. Classen, P. Heymans, P.-Y. Schobbens, and A. Legay. Symbolic model checking of software product lines. In Proc. of ICSE '11. ACM, 2011.
[9] A. Classen, P. Heymans, P.-Y. Schobbens, A. Legay, and J.-F. Raskin. Model checking lots of systems: efficient verification of temporal properties in software product lines. In Proc. of ICSE '10. ACM, 2010.
[10] P. Clements and L. M. Northrop. Software Product Lines: Practices and Patterns. Addison-Wesley, Boston, USA, 2001.
[11] M. Cordy, A. Classen, P. Heymans, A. Legay, and P.-Y. Schobbens. Model checking adaptive software with featured transition systems. In Assurances for Self-Adaptive Systems, pages 1–29. Springer, 2013.
[12] M. Cordy, A. Classen, G. Perrouin, P.-Y. Schobbens, P. Heymans, and A. Legay. Simulation-based abstractions for software product-line model checking. In Proc. of ICSE '12, pages 672–682. IEEE, 2012.
[13] B. A. Davey and H. A. Priestley. Introduction to Lattices and Order. Cambridge University Press, 2002.
[14] M. Droste, W. Kuich, and H. Vogler, editors. Handbook of Weighted Automata. Springer, 2009.
[15] C. Dubslaff, S. Klüppelholz, and C. Baier. Probabilistic model checking for energy analysis in software product lines. In Proc. of MODULARITY '14, pages 169–180. ACM, 2014.
[16] M. Fitting. Bisimulations and boolean vectors. In Advances in Modal Logic, volume 4, pages 97–126. World Scientific Publishing, 2002.
[17] A. Gruler, M. Leucker, and K. Scheidemann. Modeling and model checking software product lines. In Proc. of FMOODS '08, pages 113–131. Springer, 2008. LNCS 5051.
[18] L. J. Kohout and W. Bandler. Relational-product architectures for information processing. Inf. Sci., 37(1-3):25–37, December 1985.
[19] O. Kupferman and Y. Lustig. Latticed simulation relations and games. Int. Journal of Found. of Computer Science, 21(02):167–189, 2010.
[20] M. Lochau, J. Bürdek, S. Hölzle, and A. Schürr. Specification and automated validation of staged reconfiguration processes for dynamic software product lines. Software & Sys. Modeling, 16(1):125–152, 2017.
[21] A. Metzger and K. Pohl. Software product line engineering and variability management: Achievements and challenges. In Proc. of FOSE '14, pages 70–84, New York, NY, USA, 2014. ACM.
[22] T. K. Nguyen, J. Sun, Y. Liu, J. S. Dong, and Y. Liu. Improved BDD-based discrete analysis of timed systems. In Proc. of FM '12, volume 7436 of LNCS, pages 326–340. Springer, 2012.
[23] M. H. ter Beek, A. Legay, A. Lluch Lafuente, and A. Vandin. Statistical analysis of probabilistic models of software product lines with quantitative constraints. In Proc. of SPLC '15, pages 11–15. ACM, 2015.
[24] M. H. ter Beek, A. Fantechi, S. Gnesi, and F. Mazzanti. Modelling and analysing variability in product families: Model checking of modal transition systems with variability constraints. JLAMP, 85(2):287–315, 2016.

APPENDIX A
PROOFS
Here we give proofs for all lemmas and propositions for which we have omitted the proofs in the article.
A. Proofs concerning Lattices and Lattice Transition Systems
Proof of Lemma 1
Proof:
Let ℓ, m ∈ B. Monotonicity of the approximation is immediate from the definition.

We next show ⌊ℓ ⊓ m⌋ ⊒ ⌊ℓ⌋ ⊓ ⌊m⌋: by definition we have ⌊ℓ⌋ ⊑ ℓ and ⌊m⌋ ⊑ m, hence ⌊ℓ⌋ ⊓ ⌊m⌋ ⊑ ℓ ⊓ m. Since ⌊ℓ ⊓ m⌋ is the best approximation of ℓ ⊓ m and ⌊ℓ⌋ ⊓ ⌊m⌋ is one approximation, the inequality follows.

In order to prove ⌊ℓ ⊓ m⌋ ⊑ ⌊ℓ⌋ ⊓ ⌊m⌋, observe that ⌊ℓ⌋ ⊒ ⌊ℓ ⊓ m⌋ and ⌊m⌋ ⊒ ⌊ℓ ⊓ m⌋ by monotonicity of the approximation. Hence ⌊ℓ ⊓ m⌋ is a lower bound of ⌊ℓ⌋, ⌊m⌋, which implies ⌊ℓ⌋ ⊓ ⌊m⌋ ⊒ ⌊ℓ ⊓ m⌋.

Now let ℓ, m ∈ L. Recall the definitions ⌊ℓ ⊔ ¬m⌋ = ⨆{x ∈ L | x ⊑ ℓ ⊔ ¬m} and m →_L ℓ = ⨆{x ∈ L | m ⊓ x ⊑ ℓ}. We will prove that both sets are equal.

Assume x ∈ L with x ⊑ ℓ ⊔ ¬m; then m ⊓ x ⊑ m ⊓ (ℓ ⊔ ¬m) = (m ⊓ ℓ) ⊔ (m ⊓ ¬m) = (m ⊓ ℓ) ⊔ 0 = m ⊓ ℓ ⊑ ℓ. For the other direction assume m ⊓ x ⊑ ℓ; then ℓ ⊔ ¬m ⊒ (m ⊓ x) ⊔ ¬m = (m ⊔ ¬m) ⊓ (x ⊔ ¬m) = 1 ⊓ (x ⊔ ¬m) = x ⊔ ¬m ⊒ x.

Proof of Theorem 2
Proof:
Given a set X, a partially ordered set (Φ, ≤), and L = O(Φ), we define an isomorphism between the sets (Φ →mon P(X))^(X×A) and O(Φ)^(X×A×X). Consider the functions η : (Φ →mon P(X))^(X×A) → O(Φ)^(X×A×X), f ↦ η(f), and η′ : O(Φ)^(X×A×X) → (Φ →mon P(X))^(X×A), α ↦ η′(α), defined as:

η(f)(x, a, x′) = {ϕ ∈ Φ | x′ ∈ f_ϕ(x, a)},
η′(α)(x, a)(ϕ) = {x′ | ϕ ∈ α(x, a, x′)}.

Downward-closed
Let ϕ ∈ η(f)(x, a, x′) and ϕ′ ≤ ϕ. By using these facts in the definition of f_ϕ′ we find x′ ∈ f_ϕ′(x, a), i.e., ϕ′ ∈ η(f)(x, a, x′).

Anti-monotonicity
Let ϕ ≤ ϕ′ and x′ ∈ η′(α)(x, a)(ϕ′). Then by the above construction we find ϕ′ ∈ α(x, a, x′), and by downward-closedness of α(x, a, x′) we get ϕ ∈ α(x, a, x′), i.e., x′ ∈ η′(α)(x, a)(ϕ).

Now it suffices to show that η, η′ are inverses of each other, because by the uniqueness of inverses we then have η′ = η⁻¹. We only give the proof of η′ ∘ η = id; the proof of the other case (η ∘ η′ = id) is similar. The former follows from the following observation: x′ ∈ f(x, a)(ϕ) ⇔ x′ ∈ f_ϕ(x, a) ⇔ ϕ ∈ η(f)(x, a, x′) ⇔ x′ ∈ η′(η(f))(x, a)(ϕ).

Proof of Theorem 3
Proof:
Let x ∈ X, y ∈ Y be any two states in CTSs (X, A, f), (Y, A, g) over the conditions Φ (LaTSs (X, A, α), (Y, A, β) over the lattice O(Φ, ≤), respectively).

(⇐) Let ϕ ∈ Φ be a condition and let R be a lattice bisimulation relation such that ϕ ∈ R(x, y). Then we can construct a family of relations R_ϕ′ (for ϕ′ ≤ ϕ) as follows: x R_ϕ′ y ⇔ ϕ′ ∈ R(x, y). For all other ϕ′ we set R_ϕ′ = ∅. The downward-closure of R(x, y) ensures that R_ϕ′′ ⊆ R_ϕ′ (for ϕ′, ϕ′′ ≤ ϕ) whenever ϕ′ ≤ ϕ′′.

Thus, it remains to show that every relation R_ϕ′ is a bisimulation. Let x R_ϕ′ y and x′ ∈ f_ϕ′(x, a). Then x −(a,↓ϕ′)→ x′. Since ↓ϕ′ is an irreducible in the lattice, ↓ϕ′ ⊆ R(x, y) and R is a lattice bisimulation, we find y −(a,↓ϕ′)→ y′ and ↓ϕ′ ⊆ R(x′, y′), which implies ϕ′ ∈ R(x′, y′). That is, y′ ∈ g_ϕ′(y, a) and x′ R_ϕ′ y′. Likewise, the remaining symmetric condition of bisimulation can be proved.

(⇒) Let ∼_ϕ be a conditional bisimulation between the CTSs (X, A, f), (Y, A, g), for some ϕ ∈ Φ. Then construct a conditional relation R(x, y) = {ϕ | x ∼_ϕ y}. Clearly, the set R(x, y) is a downward-closed subset of Φ due to Definition 5(ii), i.e., an element of the lattice O(Φ). Next, we show that R is a lattice bisimulation.

Let x −(a,↓ϕ′)→ x′ and ↓ϕ′ ⊆ R(x, y). This implies x′ ∈ f_ϕ′(x, a) and ϕ′ ∈ R(x, y), hence x ∼_ϕ′ y. So using the transfer property of traditional bisimulation, we obtain y′ ∈ g_ϕ′(y, a) and x′ ∼_ϕ′ y′. That is, y −(a,↓ϕ′)→ y′ and ϕ′ ∈ R(x′, y′), which implies ↓ϕ′ ⊆ R(x′, y′). Likewise the symmetric condition of lattice bisimulation can be proved.

Proof of Lemma 2
Proof:
Let x, x′ ∈ X, a ∈ A, y ∈ Y and ℓ ∈ J(L) such that ℓ ⊑ ⨆_{i∈I} R_i(x, y) and x −(a,ℓ)→ x′. Then there is an index i ∈ I such that ℓ ⊑ R_i(x, y), since ℓ is an irreducible. Thus, there is a y′ such that y −(a,ℓ)→ y′ and ℓ ⊑ R_i(x′, y′) ⊑ ⨆_{i∈I} R_i(x′, y′). Likewise, the symmetric condition, when a transition emanates from y, can be proved.

Proof of Theorem 4
Proof: (⇐) Let R : X × Y → L be a conditional relation over a pair of LaTSs (X, A, α), (Y, A, β) such that R ⊑ F(R). Next, we show that R is a lattice bisimulation. For this purpose, let ℓ ∈ J(L), a ∈ A. Furthermore, let x −(a,ℓ)→ x′ (which implies ℓ ⊑ α(x, a, x′)) and ℓ ⊑ R(x, y). From R(x, y) ⊑ F(R)(x, y) we infer ℓ ⊑ F(R)(x, y). This means that ℓ ⊑ α(x, a, x′) → (⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′))). Since m ⊓ (m → ℓ′) ⊑ ℓ′ holds for all m, ℓ′, we can take the infimum with α(x, a, x′) on both sides and obtain ℓ ⊑ ℓ ⊓ α(x, a, x′) ⊑ ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) (the first inequality holds since ℓ ⊑ α(x, a, x′)). Since ℓ is irreducible, there exists a y′ such that ℓ ⊑ β(y, a, y′), i.e., y −(a,ℓ)→ y′, and ℓ ⊑ R(x′, y′). Likewise, the remaining condition, when a transition emanates from y, can be proved.

(⇒) Let R : X × Y → L be a lattice bisimulation on (X, A, α), (Y, A, β). Then we need to show that R ⊑ F(R), i.e., R ⊑ F₁(R) and R ⊑ F₂(R). We will only give the proof of the former inequality; the proof of the latter is analogous. To show R ⊑ F₁(R), it is sufficient to prove ℓ ⊑ R(x, y) ⇒ ℓ ⊑ F₁(R)(x, y) for all x ∈ X, y ∈ Y and all irreducibles ℓ. So let ℓ ⊑ R(x, y) for some x, y. Next, simplify F₁(R) as follows:

F₁(R)(x, y)
= ⨅_{a,x′} ( α(x, a, x′) → ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) )
= ⨅_{a,x′} ⌊ ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) ⊔ ¬α(x, a, x′) ⌋    (Lemma 1)
= ⨅_{a,x′} ⨆ { m ∈ L | m ⊑ ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) ⊔ ¬α(x, a, x′) }.

Thus, it is sufficient to show that ℓ ⊑ ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) ⊔ ¬α(x, a, x′) for any a ∈ A, x′ ∈ X. We do this by distinguishing the following cases: either ℓ ⊑ ¬α(x, a, x′) or ℓ ⊑ α(x, a, x′). If the former holds (which corresponds to the case where there is no a-labelled transition under ℓ), then the result holds trivially.
So assume ℓ ⊑ α(x, a, x′). Recall from above that ℓ ⊑ R(x, y) and that R is a lattice bisimulation. Thus, there is a y′ ∈ Y such that ℓ ⊑ β(y, a, y′) and ℓ ⊑ R(x′, y′); hence ℓ ⊑ ⨆_{y′∈Y} (β(y, a, y′) ⊓ R(x′, y′)) ⊔ ¬α(x, a, x′).

Proof of Proposition 2
Proof:
(i) This follows directly from Lemma 1, which allows us to move the approximations to the inside, towards the implications, and to approximate the implication in B via the implication in L.
(ii) If R is a bisimulation in L, then F_L(R) ⊒ R. Since, by definition, ⌊Q⌋ ⊑ Q for all conditional relations Q, and we have shown in (i) that F_L(R) = ⌊F_B(R)⌋, we can conclude F_B(R) ⊒ ⌊F_B(R)⌋ = F_L(R) ⊒ R. Thus, R is a B-bisimulation.
(iii) Clearly R ⊑ F_B(R) because R is a B-bisimulation. Since R has exclusively entries from L, this gives R ⊑ ⌊F_B(R)⌋, and finally (i) yields that ⌊F_B(R)⌋ = F_L(R); thus, R is an L-bisimulation.

Proof of Theorem 6
Proof: (⇐) Let R : X × X → B be a conditional relation satisfying R · α ⊑ α · R and Rᵀ · α ⊑ α · Rᵀ. We need to show that R is a lattice bisimulation. Let x −ℓ→ y such that ℓ ∈ J(B) and ℓ ⊑ R(x, x′). Then we find ℓ ⊑ α(x, y). That is, ℓ ⊑ R(x, x′) ⊓ α(x, y) = Rᵀ(x′, x) ⊓ α(x, y) ⊑ (Rᵀ · α)(x′, y) ⊑ (α · Rᵀ)(x′, y). By expanding the last term from above, we find that ℓ ⊑ α(x′, y′) ⊓ Rᵀ(y′, y) for some y′. Thus ℓ ⊑ α(x′, y′) (which implies x′ −ℓ→ y′) and ℓ ⊑ R(y, y′). Similarly, the remaining condition, when the transition emanates from x′, can be verified using R · α ⊑ α · R.

(⇒) Let R : X × X → B be a lattice bisimulation. We only prove R · α ⊑ α · R; the proof of Rᵀ · α ⊑ α · Rᵀ is similar. Note that, for any x, y′ ∈ X, the element (R · α)(x, y′) can be decomposed into a set of atoms, since B is an atomic Boolean algebra. Let (R · α)(x, y′) = ⨆_{i∈I} ℓ_i for some index set I such that the ℓ_i are atoms and hence irreducibles in B. Furthermore, expanding the above equality we get: for every i ∈ I there is a state y ∈ X such that ℓ_i ⊑ R(x, y) ⊓ α(y, y′), since the ℓ_i are irreducibles. That is, for every i ∈ I we have some state y such that ℓ_i ⊑ R(x, y) and ℓ_i ⊑ α(y, y′). Now using the transfer property of R we find some state x′ such that ℓ_i ⊑ α(x, x′) and ℓ_i ⊑ R(x′, y′). Thus, for every i ∈ I we find that ℓ_i ⊑ (α · R)(x, y′); hence, since (α · R)(x, y′) is an upper bound of all ℓ_i, (R · α)(x, y′) ⊑ (α · R)(x, y′).

B. Strategies for the Bisimulation Game
In the main text we claimed that there exists a winning strategy for Player 2 in the conditional bisimulation game if and only if the start states are conditionally bisimilar. In this section we describe the strategy and prove that it is correct.
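To illustrate the refinement sequence R₀ ⊇ R₁ ⊇ … and the separation levels M used in Player 1's strategy below, here is a sketch for the special case of plain labelled transition systems without conditions (our own simplification; transition maps send a state to a list of (action, successor) pairs):

```python
def bisim_levels(tx, ty, X, Y):
    """Refine R_0 = X×Y until stable; M[(x,y)] is the last index i with
    (x,y) ∈ R_i, or None if the pair survives forever (bisimilar)."""
    R = {(x, y) for x in X for y in Y}
    M = {}
    i = 0
    while True:
        def ok(x, y):
            # transfer property in both directions w.r.t. the current R
            fwd = all(any(a == b and (x2, y2) in R for b, y2 in ty.get(y, []))
                      for a, x2 in tx.get(x, []))
            bwd = all(any(a == b and (x2, y2) in R for a, x2 in tx.get(x, []))
                      for b, y2 in ty.get(y, []))
            return fwd and bwd
        nxt = {(x, y) for (x, y) in R if ok(x, y)}
        for p in R - nxt:
            M[p] = i  # pair separated at step i: Player 1 can win from here
        if nxt == R:
            return {p: M.get(p) for p in {(x, y) for x in X for y in Y}}
        R = nxt
        i += 1

# x-system: 0 -a-> 1 (deadlocked); y-system: 0 -a-> 1, 0 -a-> 2, 2 -a-> 2
tx = {0: [("a", 1)]}
ty = {0: [("a", 1), ("a", 2)], 2: [("a", 2)]}
M = bisim_levels(tx, ty, {0, 1}, {0, 1, 2})
assert M[(1, 1)] is None  # both deadlocked, hence bisimilar
assert M[(0, 1)] == 0     # separated immediately (only one has a step)
assert M[(0, 0)] == 1     # y can reach 2, which x cannot mimic
```

Player 1's strategy, as in the proof of Lemma 6, is to pick a transition whose every possible answer by Player 2 lands on a pair with a strictly smaller separation level.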
Lemma 5.
Given two CTSs (X, A, f), (Y, A, g) and an instance (x, y, ϕ) of a bisimulation game, whenever x ∼_ϕ y, Player 2 has a winning strategy for (x, y, ϕ).

Proof: The strategy of Player 2 can be directly derived from the family of CTS bisimulation relations {R_ϕ′ | ϕ′ ∈ Φ} where (x, y) ∈ R_ϕ. The strategy works inductively. Assume, at any given point in the game, that the currently investigated condition is ϕ and (x, y) ∈ R_ϕ, where x and y are the currently marked states in X respectively Y. Then Player 1 upgrades to ϕ′ ≤ ϕ. Due to the reverse-inclusion condition on CTS bisimulations, we have R_ϕ′ ⊇ R_ϕ, therefore (x, y) ∈ R_ϕ′. Then, when Player 1 makes a step x −(a,ϕ′)→ x′ in f, there must exist a transition y −(a,ϕ′)→ y′ in g such that (x′, y′) ∈ R_ϕ′, due to R_ϕ′ being a (traditional) bisimulation. Analogously, if Player 1 chooses a transition y −(a,ϕ′)→ y′ in g, there exists a transition x −(a,ϕ′)→ x′ in f for Player 2 such that (x′, y′) ∈ R_ϕ′. Hence, Player 2 will be able to react and establish the inductive condition again. In the beginning, the condition holds by definition. Thus, Player 2 has a winning strategy.

We will now prove the converse by explicitly constructing a winning strategy for Player 1 whenever two states are not in a bisimulation relation.

Lemma 6.
Given two CTSs
A, B and an instance (x, y, ϕ) of a bisimulation game, whenever x ≁_ϕ y, Player 1 has a winning strategy for (x, y, ϕ).

Proof: We consider the LaTSs which correspond to the CTSs A, B and compute the fixpoint by using the matrix multiplication algorithm, obtaining a sequence R₀, R₁, …, R_n = R_{n+1} = … of lattice-valued relations R_i : X × Y → O(Φ, ≤). Note that instead of using exactly the matrix multiplication method, we can also use the characterisation of Definition 8: whenever there exists a transition x −(a,ϕ)→ x′ for which there is no matching transition y −(a,ϕ)→ y′ with ϕ ∈ R_{i−1}(x′, y′), the condition ϕ and all larger conditions ϕ′ ≥ ϕ have to be removed from R_{i−1}(x, y) in the construction of R_i(x, y).

We now define M_ϕ′(x, y) = max{i ∈ ℕ | ϕ′ ∈ R_i(x, y)}, where max ℕ = ∞. An entry M_ϕ′(x, y) = ∞ signifies that x ∼_ϕ′ y, whereas any other entry i < ∞ means that x, y were separated under condition ϕ′ at step i and hence x ≁_ϕ′ y.

Now assume we are in a game situation with game instance (x, y, ϕ) where Player 1 has to make a step. We will show that if M_ϕ(x, y) = i < ∞, Player 1 can choose an upgrade ϕ̂ ≤ ϕ, an action a ∈ A and a step x −(a,ϕ̂)→ x′ (or y −(a,ϕ̂)→ y′) such that, independently of the choice of the corresponding state y′, respectively x′, which Player 2 makes, M_ϕ̂(x′, y′) < i.

For each ϕ′ ≤ ϕ compute

ω(ϕ′) = min{ min_{a,x′} { max_{y′} { M_ϕ′(x′, y′) | y −(a,ϕ′)→ y′ } | x −(a,ϕ′)→ x′ },
             min_{a,y′} { max_{x′} { M_ϕ′(x′, y′) | x −(a,ϕ′)→ x′ } | y −(a,ϕ′)→ y′ } }.

The formula can be interpreted as follows: the outer min corresponds to the choice of making a step in transition system A or B. The inner min corresponds to choosing the step that yields the best, i.e. lowest, guaranteed separation value, and the max corresponds to the choice of Player 2 that yields the best, i.e.
greatest, separation value for him.

Now choose a minimal condition ϕ̂ such that ω(ϕ̂) is minimal among all ω(ϕ′) with ϕ′ ≤ ϕ. Player 1 now makes an upgrade from ϕ to ϕ̂ and chooses a transition x −(a,ϕ̂)→ x′ or y −(a,ϕ̂)→ y′ such that the minimum in ω(ϕ̂) is reached. This means that Player 2 can only choose a corresponding successor state y′, respectively x′, such that M_ϕ̂(x′, y′) ≤ ω(ϕ̂).

Now it remains to be shown that ω(ϕ̂) < i, via contradiction: assume that ω(ϕ̂) ≥ i. Since ω(ϕ̂) is minimal among all ϕ′ ≤ ϕ, we obtain ω(ϕ′) ≥ i for all ϕ′ ≤ ϕ. This implies that for each step x −(a,ϕ′)→ x′ there exists an answering step y −(a,ϕ′)→ y′ such that M_ϕ′(x′, y′) ≥ i (analogously for every step of y). The condition M_ϕ′(x′, y′) ≥ i is equivalent to ϕ′ ∈ R_i(x′, y′), and hence we can infer that ϕ′ ∈ R_{i+1}(x, y). This also holds for ϕ′ = ϕ, which is a contradiction to M_ϕ(x, y) = i.

In order to conclude, take two states x, y and a condition ϕ such that x ≁_ϕ y. Then M_ϕ(x, y) = i < ∞ and the above strategy allows Player 1 to force Player 2 into a game instance (x′, y′, ϕ̂) where M_ϕ̂(x′, y′) < M_ϕ(x, y). Whenever M_ϕ(x, y) = 1, Player 1 wins immediately, because then x allows a transition that y cannot mimic, or vice versa, and Player 1 simply takes this transition. Therefore we have found a winning strategy for Player 1.

C. Proofs concerning ROBDDs
Proof of Lemma 3
Proof: (⇐) Assume that low(n) ⊨ high(n) for all nodes n of b labelled with an upgrade feature. Let C′ ∈ ⟦b⟧ and C ≤ C′. Without loss of generality we can assume that C = C′ ∪ {f} for some f ∈ U (the rest follows from transitivity). For the configuration C′ there exists a path in b that leads to 1. We distinguish the following two cases:
• There is no f-labelled node on the path. Then the path for C also leads to 1 and we have C ∈ ⟦b⟧.
• If there is an f-labelled node n on the path, then C′ takes the low-successor, C the high-successor of this node. Since low(n) ⊨ high(n) we obtain ⟦low(n)⟧ ⊆ ⟦high(n)⟧. Hence the remaining path for C, which contains the same features as the path for C′, will also reach 1.

(⇒) Assume by contradiction that ⟦b⟧ is downward-closed, but there exists a node n with low(n) ⊭ high(n) and f = root(n) ∈ U. Hence there must be a path from the low-successor that reaches 1, but that does not reach 1 from the high-successor. Prefix this with the path that reaches n from the root of b. In this way we obtain two configurations C = C′ ∪ {f}, i.e., C ≤ C′, where C′ ∈ ⟦b⟧ but C ∉ ⟦b⟧. This is a contradiction to the fact that ⟦b⟧ is downward-closed.

Proof of Lemma 4
Proof:
• We show that ⌊⌊b⌋⌋, as obtained by Algorithm 1, is downward-closed. This can be seen via induction over the number of different features occurring in the BDD b. If b only consists of a leaf node, then ⌊⌊b⌋⌋ is certainly downward-closed. Otherwise, we know from the induction hypothesis that ⌊⌊high(b)⌋⌋, ⌊⌊low(b)⌋⌋ are downward-closed. If root(b) ∉ U, then ⌊⌊b⌋⌋ is downward-closed due to Lemma 3. If however root(b) ∈ U, then ⌊⌊high(b)⌋⌋ ∧ ⌊⌊low(b)⌋⌋ is downward-closed (since downward-closed sets are closed under intersection). Furthermore ⌊⌊high(b)⌋⌋ ∧ ⌊⌊low(b)⌋⌋ ⊨ ⌊⌊high(b)⌋⌋, i.e., the new low-successor implies the high-successor. That means that the condition of Lemma 3 is satisfied at the root and elsewhere in the BDD, and hence the resulting BDD ⌊⌊b⌋⌋ is downward-closed.
• First, from the construction, where the low-successor is always replaced by a stronger low-successor, it is easy to see that ⌊⌊b⌋⌋ ⊨ b. We now show that there is no other downward-closed ROBDD b′ such that ⌊⌊b⌋⌋ ⊨ b′ ⊨ b. Assume to the contrary that there exists such a downward-closed BDD b′. Hence there exists a configuration C ⊆ N such that C ⊭ ⌊⌊b⌋⌋, C ⊨ b′, C ⊨ b. Choose C maximal w.r.t. inclusion.

Now we show that there exists a feature f ∈ U such that f ∉ C and C′ = C ∪ {f} ⊭ b. If this is the case, then C′ ≤ C and C′ ⊭ b′ (since b′ ⊨ b), which is a contradiction to the fact that b′ is downward-closed (as C ⊨ b′).

Consider the sequence b = b₀, …, b_m = ⌊⌊b⌋⌋ of BDDs that is constructed by the approximation algorithm (Algorithm 1), where the BDD structure is modified bottom-up. We have ⌊⌊b⌋⌋ = b_m ⊨ b_{m−1} ⊨ … ⊨ b₀ = b, since in each newly constructed BDD, for some node n with root(n) ∈ U, low(n) is replaced by high(n) ∧ low(n). Since C ⊨ b and C ⊭ ⌊⌊b⌋⌋, there must be an index k such that C ⊨ b_k and C ⊭ b_{k+1}.
Let n be the node that is modified in step k, where root(n) = f ∈ U. We must have f ∉ C, since the changes concern only the low-successor, and if f ∈ C, the corresponding path would take the high-successor and nothing would change concerning acceptance of C from b_k to b_{k+1}. Now assume that C′ = C ∪ {f} ⊨ b. This would be a contradiction to the maximality of C, and hence C ∪ {f} ⊭ b, as required.

Proof of Proposition 3
Proof:
(i) For this proof, we work with the set-based interpretation, which allows for four views: one on the Boolean algebra B = P(P(N)), one on the lattice L = O(P(N), ≤), one on the Boolean algebra B′ = P(⟦d⟧) and one on the lattice L′ = (O(⟦d⟧), ≤′), where ≤′ = ≤|_{⟦d⟧×⟦d⟧}. We will mostly argue in the Boolean algebra B. When talking about downward-closed sets, we will usually indicate with respect to which order. Similarly, the approximation relative to ≤ is written ⌊_⌋, whereas the approximation relative to ≤′ is written ⌊_⌋′.
We can compute:
  b_1 →_{L′} b_2 ≡ ⌊¬_{B′} b_1 ∨ b_2⌋′ ≡ ⌊(¬_B b_1 ∧ d) ∨ b_2⌋′
To conclude the proof, we will now show that ⌊b⌋′ ≡ ⌊b ∨ ¬d⌋ ∧ d for any b ∈ B′. We prove this via mutual implication.
• We show ⌊b ∨ ¬d⌋ ∧ d |= ⌊b⌋′:
  ⌊b ∨ ¬d⌋ ∧ d |= (b ∨ ¬d) ∧ d ≡ (b ∧ d) ∨ (¬d ∧ d) ≡ b ∧ d |= b
Since ⌊b ∨ ¬d⌋ ∧ d implies d, it certainly is in B′. We now show that it is downward-closed wrt. ≤′: we use an auxiliary relation ≤″, which is the smallest partial order on B that contains ≤′, i.e., ≤′ extended to B. We have ≤″ ⊆ ≤. Since ⌊b ∨ ¬d⌋ is an approximation, it is downward-closed wrt. ≤ and hence downward-closed wrt. ≤″. Moreover, d is downward-closed relative to ≤″ (obvious by definition). Since the intersection of two downward-closed sets is again downward-closed, ⌊b ∨ ¬d⌋ ∧ d is downward-closed relative to ≤″, and since, finally, downward-closure relative to ≤″ is the same as downward-closure relative to ≤′ provided we discuss an element from B′, we can conclude that ⌊b ∨ ¬d⌋ ∧ d belongs to L′.
From ⌊b ∨ ¬d⌋ ∧ d ∈ L′ and ⌊b ∨ ¬d⌋ ∧ d |= b it follows that ⌊b ∨ ¬d⌋ ∧ d |= ⌊b⌋′ by definition of the approximation.
• We show ⌊b⌋′ |= ⌊b ∨ ¬d⌋ ∧ d:
Let any C ∈ P(N) be given such that C ∈ ⟦⌊b⌋′⟧. We show that in this case C ∈ ⟦⌊b ∨ ¬d⌋ ∧ d⟧, which proves ⌊b⌋′ |= ⌊b ∨ ¬d⌋ ∧ d. Let ↓C be the downward-closure of C wrt.
≤. Since ⌊b⌋′ must be downward-closed relative to ≤′, it holds that ↓C ∩ ⟦d⟧ ⊆ ⟦⌊b⌋′⟧. Disjunction with ¬d on both sides yields ↓C ⊆ ⟦⌊b⌋′ ∨ ¬d⟧ ⊆ ⟦b ∨ ¬d⟧, since c |= c ∨ ¬d ≡ (c ∧ d) ∨ ¬d. The set ↓C is downward-closed wrt. ≤, so it is contained in the approximation relative to ≤ of this set, i.e., ↓C ⊆ ⟦⌊b ∨ ¬d⌋⟧. Thus, in particular, C ∈ ⟦⌊b ∨ ¬d⌋⟧. Since C ∈ ⟦⌊b⌋′⟧, it follows that C ∈ ⟦d⟧; therefore we can conclude C ∈ ⟦⌊b ∨ ¬d⌋ ∧ d⟧.
Hence
  ⌊(¬_B b_1 ∧ d) ∨ b_2⌋′ ≡ ⌊(¬_B b_1 ∧ d) ∨ b_2 ∨ ¬d⌋ ∧ d ≡ ⌊¬_B b_1 ∨ b_2 ∨ ¬d⌋ ∧ d
(ii) Since d is downward-closed wrt. ≤, d = ⌊d⌋; therefore, using Lemma 1, we obtain
  ⌊¬b_1 ∨ b_2 ∨ ¬d⌋ ∧ d ≡ ⌊¬b_1 ∨ b_2 ∨ ¬d⌋ ∧ ⌊d⌋ ≡ ⌊(¬b_1 ∨ b_2 ∨ ¬d) ∧ d⌋ ≡ ⌊((¬b_1 ∨ b_2) ∧ d) ∨ (¬d ∧ d)⌋ ≡ ⌊(¬b_1 ∨ b_2) ∧ d⌋ ≡ ⌊¬b_1 ∨ b_2⌋ ∧ ⌊d⌋ ≡ ⌊¬b_1 ∨ b_2⌋ ∧ d.

APPENDIX B
RUN-TIME RESULTS
In this section we present the detailed run-time results for our BDD-based implementation versus the non-BDD-based implementation for a sequence of CTSs.
Fig. 5 shows the run-time results (in milliseconds) for the computation of the largest bisimulation for our implementation on the family of CTSs described in Section VI-C. Despite some fluctuations, the quotient of the time taken when not using BDDs and when using BDDs grows roughly exponentially with the number of features.

features | time(BDD) | time(without BDD) | time(without BDD)/time(BDD)
       1 |        42 |                13 |     0.3
       2 |        64 |                32 |     0.5
       3 |       143 |                90 |     0.6
       4 |       311 |               312 |     1.0
       5 |       552 |              1128 |     2.0
       6 |      1140 |              3242 |     2.8
       7 |      1894 |              8792 |     4.6
       8 |      1513 |             13256 |     8.8
       9 |      1872 |             39784 |      21
      10 |      3208 |            168178 |      52
      11 |      5501 |            513356 |      93
      12 |      7535 |           1383752 |     184
      13 |      5637 |           3329418 |     591
      14 |      6955 |           8208349 |    1180
      15 |     11719 |          23700878 |    2022
      16 |     15601 |          57959962 |    3715
      17 |     18226 |         150677674 |    8267
      18 |     17001 |         347281057 |   20427
      19 |     22145 |     out of memory |       —

Fig. 5. Run-time results (in milliseconds)

APPENDIX C
DEACTIVATING TRANSITIONS
We will now work on an extension that allows transitions to be deactivated when upgrading.
We have introduced conditional transition systems (CTSs) as a modelling technique that can be used for modelling software product lines (SPLs). CTSs are a strictly stronger model than (standard) FTSs, allowing for upgrades. Products derived from a software product line may be upgraded to advanced versions, activating additional transitions in the system. A change in the transition function can only be realised in one direction: by adding transitions which were previously not available, while all previously active transitions remain active.
However, this choice may not be optimal in all cases, because sometimes an advanced version of a system may offer improved transitions over the base product. For instance, a free version of a system may display a commercial when choosing a certain transition, whereas a premium model may forego the commercial and offer the base functionality right away.
A practical motivation may be derived from our Example 4. In this transition system one may want to be able to model that in the unsafe state, the advanced version can only send an encrypted message, since we assume that the user is always interested in a secure communication, ensured either by a safe channel or by encryption. However, it is not an option to simply drop the unencrypted transition from the unsafe state with respect to the base version, because then, whenever the system encounters an unsafe state in the base version, the system will remain in a deadlock unless the user decides to perform an upgrade. We will solve such a situation as follows: we will add priorities that allow us to deactivate the unencrypted transition in the presence of an encrypted transition.
In order to allow for deactivation of transitions when upgrading, we propose a slight variation of the definition of CTS/LaTS and the corresponding bisimulation relation.
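The priority mechanism just described can be sketched in a few lines. The following is a hypothetical minimal model in Python (not the paper's implementation), assuming transitions are encoded as (state, action, condition, successor) tuples, `leq` is the order on conditions, and `prec(a, a2)` means that a2 takes precedence over a:

```python
# Sketch: transition deactivation via action precedence (hypothetical
# model, not the paper's implementation). Transitions are
# (state, action, condition, successor) tuples.

def active(transitions, leq, phi):
    """Transitions activated under condition phi: those whose condition
    is greater than or equal to phi in the order on conditions."""
    return [t for t in transitions if leq(phi, t[2])]

def step(transitions, leq, prec, phi, x):
    """Successors of x under phi: an a-transition is deactivated when an
    active transition with a higher-priority action leaves the same state."""
    act = [t for t in active(transitions, leq, phi) if t[0] == x]
    return [(a, x2) for (_, a, _, x2) in act
            if not any(prec(a, a2) for (_, a2, _, _) in act)]

# Example in the spirit of the unsafe-state discussion: the upgrade
# condition 'prem' is below 'base', and 'enc' takes precedence over 'unenc'.
leq = lambda p, q: p == q or (p == 'prem' and q == 'base')
prec = lambda a, a2: (a, a2) == ('unenc', 'enc')
ts = [('unsafe', 'unenc', 'base', 'sent'),
      ('unsafe', 'enc', 'prem', 'sent')]

print(step(ts, leq, prec, 'base', 'unsafe'))   # [('unenc', 'sent')]
print(step(ts, leq, prec, 'prem', 'unsafe'))   # [('enc', 'sent')]
```

Under the base condition only the unencrypted transition is active; after an upgrade the encrypted transition becomes active and deactivates the unencrypted one, matching the informal description above.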
A conditional transition system with action precedence is a triple (X, (A, <_A), f), where (X, A, f) is a CTS and <_A is a strict order on A.
Intuitively, a CTS with action precedence evolves in a very similar way to a standard CTS. Before the system starts acting, it is assumed that all the conditions are fixed and a condition ϕ ∈ Φ is chosen arbitrarily, which represents a selection of a valid product of the system (product line). Now all the transitions that have a condition greater than or equal to ϕ are activated, while the remaining transitions are inactive. This is unchanged from standard CTSs; however, if from a state x there exist two transitions x −a,ϕ→ x′ and x −a′,ϕ→ x″ where a′ > a, i.e. a′ takes precedence over a, then additionally x −a,ϕ→ x′ remains inactive. Henceforth, the system behaves like a standard transition system, until at any point in the computation the condition is changed to a smaller one (say, ϕ′), signifying a selection of a valid, upgraded product. Now the (de)activation of transitions depends on the new condition ϕ′, rather than on the old condition ϕ. As before, active transitions remain active during an upgrade, unless new active transitions appear that exit the same state and are labelled with an action of higher priority.
In the sequel we will just write CTS for CTS with action precedence, since for the remainder of this section we will solely investigate this variation of CTS. This changed interpretation of the behaviour of a CTS of course also has an effect on the bisimulation.

Definition 12.
Let (X, (A, <_A), f) and (Y, (A, <_A), g) be two CTSs over the same set of conditions (Φ, ≤). For a condition ϕ ∈ Φ, we write ¯f_ϕ for the labelled transition system induced by the CTS (X, (A, <_A), f) with action precedence, where
  ¯f_ϕ(x, a) = { x′ | x −a,ϕ→ x′ ∧ ¬(∃ a′ ∈ A, x″ ∈ X: a′ > a ∧ x −a′,ϕ→ x″) }.
Two states x ∈ X, y ∈ Y are conditionally bisimilar (wrt. action precedence) under a condition ϕ ∈ Φ, denoted x ∼^p_ϕ y, if there is a family of relations R_{ϕ′} (for every ϕ′ ≤ ϕ) such that
(i) each relation R_{ϕ′} is a traditional bisimulation relation between ¯f_{ϕ′} and ¯g_{ϕ′},
(ii) whenever ϕ′ ≤ ϕ″, we have R_{ϕ′} ⊇ R_{ϕ″}, and
(iii) R_ϕ relates x and y, i.e. (x, y) ∈ R_ϕ.
The definition of bisimilarity is analogous to traditional CTSs but refers to the new transition system given by ¯f, which contains only the maximal transitions.
Lattice transition systems (LaTSs) can be extended in the same way, by adding an order on the set of actions and leaving the remaining definition unchanged. Disregarding deactivation, there still is a duality between CTSs and LaTSs. Now, in order to characterise bisimulation using a fixpoint operator, we can modify the operators F_1, F_2 and F to obtain G_1, G_2 and G, respecting the deactivation of transitions, as follows.

Definition 13.
Let (X, (A, <_A), α) and (Y, (A, <_A), β) be LaTSs (with ordered actions). Recall the residuum operator (→) on a lattice and define three operators G_1, G_2, G: (X × Y → L) → (X × Y → L) in the following way:

  G_1(R)(x, y) = ⨅_{a ∈ A, x′ ∈ X} ( α(x, a, x′) → ( ⨆_{y′ ∈ Y} (β(y, a, y′) ⊓ R(x′, y′)) ⊔ ⨆_{a′ > a, x″ ∈ X} α(x, a′, x″) ) ),

  G_2(R)(x, y) = ⨅_{a ∈ A, y′ ∈ Y} ( β(y, a, y′) → ( ⨆_{x′ ∈ X} (α(x, a, x′) ⊓ R(x′, y′)) ⊔ ⨆_{a′ > a, y″ ∈ Y} β(y, a′, y″) ) ),

  G(R)(x, y) = G_1(R)(x, y) ⊓ G_2(R)(x, y).

Now we need to show that we can characterise the new notion of bisimulation via the post-fixpoints of this operator G. For the corresponding proof we will make use of the following observation:

Lemma 7.
Let L = O(Φ), for a finite partially ordered set (Φ, ≤), be a lattice that embeds into B = P(Φ). Take ϕ ∈ Φ. Then, in order to show that ϕ ∈ (l_1 → l_2) for any given l_1, l_2 ∈ L, it suffices to show that for all ϕ′ ≤ ϕ, ϕ′ ∉ l_1 or ϕ′ ∈ l_2.
Proof: We have already shown that l_1 →_L l_2 = ⌊l_1 →_B l_2⌋ = ⌊¬l_1 ⊔ l_2⌋. Now, if every ϕ′ ≤ ϕ is not in l_1, i.e. is in ¬l_1, or is in l_2, then all ϕ′ ≤ ϕ are in ¬l_1 ⊔ l_2. Therefore ↓ϕ ⊆ ¬l_1 ⊔ l_2, and thus ϕ ∈ l_1 → l_2.

Theorem 7.
Let (X, (A, <_A), f), (Y, (A, <_A), g) be two CTSs over (Φ, ≤) and let (X, (A, <_A), α), (Y, (A, <_A), β) over O(Φ) be the corresponding LaTSs. For any two states x ∈ X, y ∈ Y it holds that x ∼^p_ϕ y if and only if there exists a post-fixpoint R: X × Y → L of G (i.e. R ⊑ G(R)) such that ϕ ∈ R(x, y).
Proof:
• Assume R is a post-fixpoint of G, i.e. R ⊑ G(R), let x ∈ X and y ∈ Y be given arbitrarily and ϕ ∈ R(x, y). We define for each ϕ′ ≤ ϕ a relation R_{ϕ′} according to
  (x′, y′) ∈ R_{ϕ′} ⇔ ϕ′ ∈ R(x′, y′).
Since each set R(x′, y′) is downward-closed for all x′ ∈ X, y′ ∈ Y, it holds that R_{ϕ_1} ⊆ R_{ϕ_2} whenever ϕ_1 ≥ ϕ_2. Moreover, since we assume ϕ ∈ R(x, y), (x, y) ∈ R_{ϕ′} must hold for all ϕ′ ≤ ϕ.
So we only need to show that all R_{ϕ′} are traditional bisimulations for ¯f_{ϕ′}. For this purpose let x′, y′, ϕ′ be given such that (x′, y′) ∈ R_{ϕ′}. Moreover, let a ∈ A and x″ ∈ X be given such that x″ ∈ ¯f_{ϕ′}(x′, a) — if no such a and x″ exist, then the first bisimulation condition is trivially true. For G_1, it must be true that ϕ′ ∈ G_1(R)(x′, y′). Thus,
  ϕ′ ∈ ( α(x′, a, x″) → ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ) ).
Since we also know that ϕ′ ∈ α(x′, a, x″), because x″ ∈ ¯f_{ϕ′}(x′, a), it must be true that
  ϕ′ ∈ ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ).
This is true because ψ ∈ l_1 → l_2 ⇔ ψ ∈ ⌊¬l_1 ∨ l_2⌋ ⇒ ψ ∈ ¬l_1 ∨ l_2 (Lemma 7) and, if ψ ∈ l_1, hence ψ ∉ ¬l_1, it follows that ψ ∈ l_2.
Per definition of ¯f_{ϕ′}, there exist no a′ > a and x‴ ∈ X with x′ −a′,ϕ′→ x‴. Therefore, ϕ′ ∉ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴). It follows that
  ϕ′ ∈ ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)).
Then there must exist at least one y″ ∈ Y such that ϕ′ ∈ β(y′, a, y″) ⊓ R(x″, y″).
It follows that ϕ′ ∈ R(x″, y″), i.e. (x″, y″) ∈ R_{ϕ′}. We will now show that y″ ∈ ¯g_{ϕ′}(y′, a) holds as well. Assume, to the contrary, that y″ ∉ ¯g_{ϕ′}(y′, a); then, due to ϕ′ ∈ β(y′, a, y″), there must exist an a′ > a and a y‴ ∈ Y such that ϕ′ ∈ β(y′, a′, y‴). W.l.o.g. choose a′ maximal. Since we required (x′, y′) ∈ R_{ϕ′}, it has to hold that ϕ′ ∈ G_2(R)(x′, y′). So in particular,
  ϕ′ ∈ ( β(y′, a′, y‴) → ( ⨆_{x‴ ∈ X} (α(x′, a′, x‴) ⊓ R(x‴, y‴)) ⊔ ⨆_{a″ > a′, y⁗ ∈ Y} β(y′, a″, y⁗) ) ).
Since we chose a′ maximal, we know that ϕ′ ∉ ⨆_{a″ > a′, y⁗ ∈ Y} β(y′, a″, y⁗). Moreover, since a′ > a and x″ ∈ ¯f_{ϕ′}(x′, a), there exists no x‴ such that ϕ′ ∈ α(x′, a′, x‴). Thus, ϕ′ is not in the right side of the residuum, yet it is in the left side of the residuum; therefore, it is not in the residuum. Thus, we can conclude ϕ′ ∉ G_2(R)(x′, y′), which is a contradiction.
Thus, the first bisimulation condition is true. The second condition can be proven analogously, reversing the roles of G_1 and G_2 to find the answer step in ¯f_{ϕ′}.
• Now assume, the other way around, that a family R_ϕ of bisimulations from ¯f_ϕ to ¯g_ϕ exists such that for all pairs of conditions ϕ_1, ϕ_2 ∈ Φ, ϕ_1 ≤ ϕ_2 implies R_{ϕ_1} ⊇ R_{ϕ_2}. Moreover, let ϕ, x ∈ X and y ∈ Y be given such that (x, y) ∈ R_ϕ. We define R: X × Y → L according to
  R(x, y) = { ϕ′ | (x, y) ∈ R_{ϕ′} }.
Due to the anti-monotonicity of the family R_{ϕ′}, all entries in R are indeed lattice elements from O(Φ, ≤). Moreover, by definition, ϕ ∈ R(x, y). So it only remains to be shown that R is a post-fixpoint.
For this purpose, let x′ ∈ X, y′ ∈ Y and ϕ′ ∈ Φ be given, such that ϕ′ ∈ R(x′, y′).
(If no such x′, y′, ϕ′ exist, then R is the zero matrix, where all entries are ∅, and R ⊑ G(R) holds trivially.) We will now show that ϕ′ ∈ G_1(R)(x′, y′); the fact that ϕ′ ∈ G_2(R)(x′, y′) can be shown analogously. We need to show that
  ϕ′ ∈ ( α(x′, a, x″) → ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ) )
for all x″ ∈ X and a ∈ A.
We recall that l_1 →_L l_2 = ⌊l_1 →_B l_2⌋ = ⌊¬l_1 ∨ l_2⌋ (Lemma 1) and show that whenever ϕ′ ∈ α(x′, a, x″), it holds that ϕ′ ∈ ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ). We distinguish according to whether a is maximal such that ϕ′ ∈ α(x′, a, x″):
– There is no a′ > a such that ϕ′ ∈ α(x′, a′, x‴) for any x‴ ∈ X: Then there must exist a y″ ∈ Y such that ϕ′ ∈ β(y′, a, y″) and (x″, y″) ∈ R_{ϕ′}, i.e. ϕ′ ∈ R(x″, y″), because R_{ϕ′} is a bisimulation and for all ϕ″ ≤ ϕ′ we have R_{ϕ′} ⊆ R_{ϕ″}.
– There is an a′ > a such that ϕ′ ∈ α(x′, a′, x‴) for some x‴ ∈ X: Then ϕ′ ∈ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴).
So we have shown for all ϕ′ ∈ R(x′, y′) that ϕ′ ∈ α(x′, a, x″) implies ϕ′ ∈ ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ), i.e. we have
  ϕ′ ∈ ¬α(x′, a, x″) ⊔ ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) )
in the Boolean algebra. Since R(x′, y′) is a lattice element and therefore downward-closed, we can apply Lemma 7 and conclude that
  ϕ′ ∈ ( α(x′, a, x″) → ( ⨆_{y″ ∈ Y} (β(y′, a, y″) ⊓ R(x″, y″)) ⊔ ⨆_{a′ > a, x‴ ∈ X} α(x′, a′, x‴) ) )
in the lattice, concluding the proof.
Hence we can compute the bisimulation via a fixpoint iteration, as with LaTSs without an ordering on the action labels.
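The fixpoint iteration just mentioned can be sketched concretely. The following hypothetical Python model (not the paper's BDD-based implementation) represents lattice elements as frozensets of conditions (downward-closed subsets of Φ) and implements the residuum via the membership criterion of Lemma 7; the names `g1`, `bisimilarity` and the two-condition example are illustrative assumptions:

```python
# Sketch: greatest-fixpoint computation of conditional bisimilarity with
# action precedence (cf. Definition 13 and Theorem 7). Lattice elements
# are downward-closed subsets of a finite poset of conditions.

def residuum(phis, leq, l1, l2):
    """l1 -> l2: by Lemma 7, phi lies in the residuum iff every
    phi' <= phi lies outside l1 or inside l2."""
    return frozenset(p for p in phis
                     if all(q not in l1 or q in l2
                            for q in phis if leq(q, p)))

def g1(phis, leq, prec, A, X, Y, alpha, beta, R):
    """One application of the operator G_1."""
    def entry(x, y):
        val = frozenset(phis)  # top element of the lattice
        for a in A:
            for x2 in X:
                rhs = frozenset()  # supremum on the right of the residuum
                for y2 in Y:
                    rhs |= beta(y, a, y2) & R[(x2, y2)]
                for a2 in A:
                    if prec(a, a2):  # a2 takes precedence over a
                        for x3 in X:
                            rhs |= alpha(x, a2, x3)
                val &= residuum(phis, leq, alpha(x, a, x2), rhs)
        return val
    return {(x, y): entry(x, y) for x in X for y in Y}

def bisimilarity(phis, leq, prec, A, X, Y, alpha, beta):
    """Iterate R -> G_1(R) meet G_2(R) from the top until stable;
    G_2 is G_1 with the roles of the two systems swapped."""
    R = {(x, y): frozenset(phis) for x in X for y in Y}
    while True:
        r1 = g1(phis, leq, prec, A, X, Y, alpha, beta, R)
        r2 = g1(phis, leq, prec, A, Y, X, beta, alpha,
                {(y, x): R[(x, y)] for x in X for y in Y})
        new = {(x, y): r1[(x, y)] & r2[(y, x)] for x in X for y in Y}
        if new == R:
            return R
        R = new

# Hypothetical two-condition example: 'prem' <= 'base', 'enc' > 'unenc'.
# The first system can upgrade to encryption, the second cannot.
PHIS = frozenset({'base', 'prem'})
leq = lambda p, q: p == q or (p == 'prem' and q == 'base')
prec = lambda a, a2: (a, a2) == ('unenc', 'enc')
alpha = lambda x, a, x2: {('x0', 'unenc', 'x1'): PHIS,
                          ('x0', 'enc', 'x1'): frozenset({'prem'})
                          }.get((x, a, x2), frozenset())
beta = lambda y, a, y2: ({('y0', 'unenc', 'y1'): PHIS}
                         .get((y, a, y2), frozenset()))

R = bisimilarity(PHIS, leq, prec, ['unenc', 'enc'],
                 ['x0', 'x1'], ['y0', 'y1'], alpha, beta)
print(sorted(R[('x1', 'y1')]), sorted(R[('x0', 'y0')]))
# -> ['base', 'prem'] []
```

In the example, x0 and y0 are conditionally bisimilar under no condition: after an upgrade the encrypted transition deactivates the unencrypted one in the first system, which the second system cannot match, and downward-closure then also rules out the base condition.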
Due to the additional supremum in the fixpoint operator, the matrix notation cannot be used anymore. However, since the additional supremum term can be precomputed for each state x ∈ X (or y ∈ Y) and action a ∈ A, the performance of the algorithm should not be affected in a significant way.
Note that, different from the Boolean case, l_1 → (l_2 ⊔ l_3) and (l_1 → l_2) ⊔ l_3 do not coincide in general, which is relevant for the definition of G_1. In fact, moving the supremum ⨆_{a′ > a, x″ ∈ X} α(x, a′, x″)