An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata
Extended Version
Salomon Sickert
Technische Universität München, Germany · [email protected]
Javier Esparza
Technische Universität München, Germany · [email protected]
Abstract
In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form ⋀_{i=1}^n (GF φᵢ ∨ FG ψᵢ), where φᵢ and ψᵢ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present a direct and purely syntactic normalisation procedure for LTL yielding a normal form, comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up. As an application, we derive a simple algorithm to translate LTL into deterministic Rabin automata. The algorithm normalises the formula, translates it into a special very weak alternating automaton, and applies a simple determinisation procedure, valid only for these special automata.

CCS Concepts: • Theory of computation → Modal and temporal logics; Automata over infinite objects.

Keywords:
Linear Temporal Logic, Normal Form, Weak Alternating Automata, Deterministic Automata
1 Introduction

In seminal work carried out in the middle 80s, Lichtenstein, Pnueli, and Zuck investigated Past Linear Temporal Logic (Past LTL), a temporal logic with future and past operators. They proved the classical result stating that every formula
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
LICS '20, July 8–11, 2020, Saarbrücken, Germany
© 2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-7104-9/20/07. https://doi.org/10.1145/3373718.3394743

is equivalent to another one of the form

  ⋀_{i=1}^n (GF φᵢ ∨ FG ψᵢ)    (1)

where φᵢ and ψᵢ only contain past operators [8, 24]. Shortly after, Manna and Pnueli introduced the safety-progress hierarchy, containing six classes of properties (Figure 1a), and presented a logical characterisation of each class in terms of syntactic fragments of Past LTL [11, 12]. The class of reactivity properties, placed at the top of the hierarchy, contains all Past LTL properties, and its syntactic characterisation, given by (1), is the class of reactivity formulas.

In the early 90s, LTL (which only has future operators, but is known to be as expressive as Past LTL) became the logic of choice for most model-checking applications. At that time Chang, Manna, and Pnueli showed that the classes of the safety-progress hierarchy also admit syntactic characterisations in terms of LTL fragments [3]. In particular, they proved that every LTL formula is equivalent to another one in which every path of the syntax tree alternates at most once between the "least-fixed-point" operators U and M and the "greatest-fixed-point" operators W and R. In the notation introduced in [2], which mimics the definition of the Σᵢ, Πᵢ, and ∆ᵢ classes of the arithmetical and polynomial hierarchies, they proved that every LTL formula is equivalent to a ∆₂-formula.

While these normal forms have had large conceptual impact in model checking, automatic synthesis, and deductive verification (see e.g. [17] for a recent survey), the normalisation procedures proving that they are indeed normal forms have had none. In particular, contrary to the case of propositional or first-order logic, they have not been implemented in tools. The reason is that they are not direct, have high complexity, and their correctness proofs are involved. Let us elaborate on this.
In [24], Zuck gives a detailed description of the normalisation procedure of [8]. First, Zuck translates the initial Past LTL formula into a counter-free semi-automaton, then applies the Krohn-Rhodes decomposition and other results to translate the automaton into a star-free regular expression, and finally translates this expression into a reactivity formula with a non-elementary blow-up. In [11,
12] the procedure is not even presented; the reader is referred to [24] and/or to previous results. The normalisation procedure of [3] for LTL calls the translation procedure of [8, 24] for Past LTL as a subroutine, and so it is not any simpler. Finally, while Maler and Pnueli present in [10] an improved translation of star-free regular languages to Past LTL, their work still leads to a triple exponential normalisation procedure for Past LTL. Further, it is not clear to us if this translation can also be used to obtain ∆₂-formulas.

In this paper we present a novel normalisation procedure that translates any LTL formula into an equivalent ∆₂-formula. Our procedure is:

• Direct. It does not require any detour through automata or regular expressions.
• Syntax-guided. It consists of a few syntactic rewrite rules, not unlike the rules for putting a boolean formula in conjunctive or disjunctive normal form, that can be described in less than a page.
• Single exponential. The length of the ∆₂-formula is at most exponential in the length of the original formula, a dramatic improvement on the previous non-elementary and triple exponential bounds.

The correctness proof of the procedure consists of a few lemmas, all of them with routine proofs by structural induction. It is presented in Sections 4 to 6, modulo the omission of some straightforward induction cases. To make this paper self-contained, the proofs of three lemmas taken from [5, 21] are reproduced in Appendix A. We have mechanised the complete correctness proof in Isabelle/HOL [15], building upon previous work [1, 19, 20]. The formalised proof consists of roughly 1000 lines, from which one can extract a formally verified normalisation procedure consisting of ca. 200 lines of Standard ML code, excluding standard definitions added by the code-generation.
Both the formalisation and instructions for extracting code are located in [22].

In the second part of the paper (Sections 7 and 8) we use the new normalisation procedure to derive a simple translation of LTL into deterministic Rabin automata (DRW). First, we show that every formula of ∆₂ can be translated into a very weak alternating Büchi automaton (A1W) in which every path has at most one alternation between accepting and non-accepting states. Further, we provide a simple determinisation procedure for these automata, based on a breakpoint construction. The LTL-to-DRW translation normalises the formula, transforms it into an A1W with at most one alternation, and determinises this intermediate automaton. Due to space constraints we do not provide an overview of LTL-to-DRW translations and refer the reader to [21, Ch. …].

(Footnote: including papers by Burgess, McNaughton and Papert, Choueka, Thomas, and Gabbay, Pnueli, Shelah, and Stavi.)
(Footnote: further, [3] only contains a short sketch of the translation of reactivity formulas into ∆₂-formulas.)

2 Preliminaries

Let Σ be a finite alphabet. A word w over Σ is an infinite sequence of letters a₀a₁a₂... with aᵢ ∈ Σ for all i ≥
0, and a language is a set of words. A finite word is a finite sequence of letters. As usual, the set of all words (finite words) is denoted Σ^ω (Σ*). We let w[i] (starting at i =
0) denote the i-th letter of a word w. The finite infix w[i] w[i+1] ... w[j−1] is abbreviated w_{ij}, and the infinite suffix w[i] w[i+1] ... is abbreviated w_i. We denote the infinite repetition of a finite word σ₀...σₙ by (σ₀...σₙ)^ω = σ₀...σₙ σ₀...σₙ σ₀... .

Definition 1 (LTL syntax). LTL formulas over a set Ap of atomic propositions are constructed by the following syntax:

  φ ::= tt | ff | a | ¬a | φ ∧ φ | φ ∨ φ | X φ | φ U φ | φ W φ | φ R φ | φ M φ

where a ∈ Ap is an atomic proposition and X, U, W, R, and M are the next, (strong) until, weak until, (weak) release, and strong release operators, respectively.

The inclusion of both the strong and weak until operators as well as the negation normal form are essential to our approach. The operators R and M, however, are added to ensure that every formula of length n in the standard syntax, with negation but only the until operator, is equivalent to a formula of length O(n) in our syntax. They could be removed, if we accept an exponential blow-up incurred by expressing R with W. The semantics is defined as usual:

Definition 2 (LTL semantics). Let w be a word over the alphabet 2^Ap and let φ be a formula. The satisfaction relation w ⊨ φ is inductively defined as follows:

  w ⊨ tt
  w ⊭ ff
  w ⊨ a      iff a ∈ w[0]
  w ⊨ ¬a     iff a ∉ w[0]
  w ⊨ φ ∧ ψ  iff w ⊨ φ and w ⊨ ψ
  w ⊨ φ ∨ ψ  iff w ⊨ φ or w ⊨ ψ
  w ⊨ X φ    iff w₁ ⊨ φ
  w ⊨ φ U ψ  iff ∃k. w_k ⊨ ψ and ∀j < k. w_j ⊨ φ
  w ⊨ φ M ψ  iff ∃k. w_k ⊨ φ and ∀j ≤ k. w_j ⊨ ψ
  w ⊨ φ R ψ  iff ∀k. w_k ⊨ ψ or w ⊨ φ M ψ
  w ⊨ φ W ψ  iff ∀k. w_k ⊨ φ or w ⊨ φ U ψ

We let L(φ) ≔ { w ∈ (2^Ap)^ω : w ⊨ φ } denote the language of φ. We overload the definition of ⊨ and write φ ⊨ ψ as a shorthand for L(φ) ⊆ L(ψ).

We use the standard abbreviations F φ ≔ tt U φ (eventually) and G φ ≔ ff R φ (always). Finally, we introduce the notion of equivalence of formulas, and equivalence within a language.
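As a concrete illustration, Definitions 1 and 2 can be prototyped on ultimately periodic words w = uv^ω, where satisfaction is decidable because the suffixes w_i take at most |u| + |v| distinct values. The tuple encoding of formulas and all function names below are our own illustrative choices, not notation from the paper:

```python
from functools import lru_cache

# Formulas in negation normal form, following Definition 1, encoded as nested
# tuples: 'tt', 'ff', ('ap', a), ('not', a) for negated atoms, ('and', l, r),
# ('or', l, r), ('X', f), and ('U'|'W'|'R'|'M', l, r).
# Words are pairs (prefix, loop) of tuples of frozensets, loop non-empty,
# denoting the ultimately periodic word prefix . loop^omega.

def F(f): return ('U', 'tt', f)   # F f := tt U f
def G(f): return ('R', 'ff', f)   # G f := ff R f

def suffix(word, i):
    """Canonical representation of the suffix w_i of w = prefix . loop^omega."""
    prefix, loop = word
    if i < len(prefix):
        return (prefix[i:], loop)
    r = (i - len(prefix)) % len(loop)
    return ((), loop[r:] + loop[:r])

@lru_cache(maxsize=None)
def sat(word, f):
    """Decide w |= f (Definition 2). Suffixes repeat with period |loop| after
    the prefix, so ranging k over len(prefix) + len(loop) positions suffices
    for the temporal operators."""
    prefix, loop = word
    bound = len(prefix) + len(loop)
    op = f if isinstance(f, str) else f[0]
    if op == 'tt':  return True
    if op == 'ff':  return False
    if op == 'ap':  return f[1] in (prefix + loop)[0]
    if op == 'not': return f[1] not in (prefix + loop)[0]
    if op == 'and': return sat(word, f[1]) and sat(word, f[2])
    if op == 'or':  return sat(word, f[1]) or sat(word, f[2])
    if op == 'X':   return sat(suffix(word, 1), f[1])
    l, r = f[1], f[2]
    if op == 'U':   # exists k: w_k |= r and for all j < k: w_j |= l
        return any(sat(suffix(word, k), r) and
                   all(sat(suffix(word, j), l) for j in range(k))
                   for k in range(bound))
    if op == 'M':   # exists k: w_k |= l and for all j <= k: w_j |= r
        return any(sat(suffix(word, k), l) and
                   all(sat(suffix(word, j), r) for j in range(k + 1))
                   for k in range(bound))
    if op == 'R':   # for all k: w_k |= r, or w |= l M r
        return all(sat(suffix(word, k), r) for k in range(bound)) \
            or sat(word, ('M', l, r))
    if op == 'W':   # for all k: w_k |= l, or w |= l U r
        return all(sat(suffix(word, k), l) for k in range(bound)) \
            or sat(word, ('U', l, r))
    raise ValueError(f)
```

For example, the word {c}{b}^ω is encoded as `((frozenset({'c'}),), (frozenset({'b'}),))`, and `sat` confirms that it satisfies b U c but not G a.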
Definition 3.
Two formulas φ and ψ are equivalent, denoted φ ≡ ψ, if L(φ) = L(ψ). Given a language L ⊆ (2^Ap)^ω, two formulas φ and ψ are equivalent within L, denoted φ ≡_L ψ, if L(φ) ∩ L = L(ψ) ∩ L.

We recall the hierarchy of temporal properties studied by Manna and Pnueli [11] following the formulation of Černá and Pelánek [2]. In the ensuing sections we describe structures that have a direct correspondence to this hierarchy, and in this sense the hierarchy provides a map to navigate the results of this paper.
Definition 4 ([2, 11]). Let P ⊆ Σ^ω be a property over Σ.

• P is a safety property if there exists a language of finite words L ⊆ Σ* such that for every w ∈ P all finite prefixes of w belong to L.
• P is a guarantee property if there exists a language of finite words L ⊆ Σ* such that for every w ∈ P there exists a finite prefix of w which belongs to L.
• P is an obligation property if it can be expressed as a positive boolean combination of safety and guarantee properties.
• P is a recurrence property if there exists a language of finite words L ⊆ Σ* such that for every w ∈ P infinitely many prefixes of w belong to L.
• P is a persistence property if there exists a language of finite words L ⊆ Σ* such that for every w ∈ P all but finitely many prefixes of w belong to L.
• P is a reactivity property if P can be expressed as a positive boolean combination of recurrence and persistence properties.

The inclusions between these classes are shown in Figure 1a. Chang, Manna, and Pnueli give in [3] a syntactic characterisation of the classes of the safety-progress hierarchy in terms of fragments of LTL. The following is a corollary of the proof of [3, Thm. 8]:
Definition 5 (Adapted from [2]). We define the following classes of LTL formulas:

• The class Σ₀ = Π₀ = ∆₀ is the least set containing all atomic propositions and their negations, and is closed under the application of conjunction and disjunction.
• The class Σᵢ₊₁ is the least set containing Πᵢ and is closed under the application of conjunction, disjunction, and the X, U, and M operators.
• The class Πᵢ₊₁ is the least set containing Σᵢ and is closed under the application of conjunction, disjunction, and the X, R, and W operators.
• The class ∆ᵢ₊₁ is the least set containing Σᵢ₊₁ and Πᵢ₊₁ and is closed under the application of conjunction and disjunction.
Figure 1. Both hierarchies, side-by-side, indicating the correspondence of Theorem 6: (a) the safety-progress hierarchy [11]; (b) the syntactic-future hierarchy.
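Since the classes of Definition 5 are purely syntactic, the least i with ψ ∈ Σᵢ and the least j with ψ ∈ Πⱼ can be computed by a single recursion over the syntax tree. The sketch below, with our own hypothetical tuple encoding of formulas (e.g. ('U', l, r) for l U r, ('ap', a) for atoms), is illustrative only:

```python
def levels(f):
    """Return (s, p): the least i with f in Sigma_i and the least j with
    f in Pi_j, following Definition 5."""
    op = f if isinstance(f, str) else f[0]
    if op in ('tt', 'ff', 'ap', 'not'):      # literals are in Sigma_0 = Pi_0
        return (0, 0)
    if op in ('and', 'or'):                  # both closures allow and/or
        (s1, p1), (s2, p2) = levels(f[1]), levels(f[2])
        return (max(s1, s2), max(p1, p2))
    if op == 'X':                            # X is in both closures
        s, p = levels(f[1])
        return (max(1, s), max(1, p))
    if op in ('U', 'M'):                     # Sigma-closure only;
        (s1, _), (s2, _) = levels(f[1]), levels(f[2])
        s = max(1, s1, s2)
        return (s, s + 1)                    # enters Pi via Sigma_i within Pi_{i+1}
    if op in ('W', 'R'):                     # Pi-closure only; dual
        (_, p1), (_, p2) = levels(f[1]), levels(f[2])
        p = max(1, p1, p2)
        return (p + 1, p)
    raise ValueError(f)
```

With F f encoded as ('U', 'tt', f) and G f as ('R', 'ff', f), the function reports GF a as Π₂ and FG a as Σ₂, matching the correspondence of Theorem 6 (recurrence and persistence, respectively).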
Theorem 6 (Adapted from [2]). A property that is specifiable in LTL is a guarantee (safety, obligation, persistence, recurrence, reactivity, respectively) property if and only if it is specifiable by a formula from the class Σ₁ (Π₁, ∆₁, Σ₂, Π₂, ∆₂, respectively).

Fix an LTL formula φ over a set of atomic propositions Ap. Our new normal form is based on two notions:

• A partition of the universe U ≔ (2^Ap)^ω of all words into equivalence classes of words that, loosely speaking, exhibit the same "limit-behaviour" with respect to φ.
• The notion of stable word with respect to φ.

A partition of U. Let µ(φ) and ν(φ) be the sets containing the subformulas of φ of the form ψ₁ op ψ₂ for op ∈ {U, M} and op ∈ {W, R}, respectively. Given a word w, define:

  GF^φ_w ≔ { ψ : ψ ∈ µ(φ) ∧ w ⊨ GF ψ }
  FG^φ_w ≔ { ψ : ψ ∈ ν(φ) ∧ w ⊨ FG ψ }

(To simplify the notation, when φ is clear from the context we simply write GF_w and FG_w.) Two words w, v have the same limit-behaviour w.r.t. φ if GF_w = GF_v and FG_w = FG_v. Having the same limit-behaviour is an equivalence relation, which induces the partition P = { P_{M,N} ⊆ U : M ⊆ µ(φ), N ⊆ ν(φ) } given by:

  P_{M,N} ≔ { w ∈ U : M = GF_w ∧ N = FG_w }    (2)

Example 7.
Let φ = G a ∨ b U c. We have µ(φ) = {b U c} and ν(φ) = {G a}. The partition P has four equivalence classes:

• P_{∅,∅} contains all words such that b U c holds only finitely often and G a fails infinitely often (which in this case implies that G a never holds), e.g. {b}^ω or {c}{b}^ω.
• P_{∅,{Ga}} contains all words such that b U c holds finitely often and G a fails finitely often, e.g. {a}^ω or {c}{a}^ω.
• P_{{bUc},∅} contains all words such that b U c holds infinitely often and G a fails infinitely often, e.g. ({a}{c})^ω or {a}{c}^ω.

Figure 2.
Partition of (2^{a,b,c})^ω according to φ = G a ∨ b U c.

• P_{{bUc},{Ga}} contains all words such that b U c holds infinitely often and G a fails finitely often, e.g. {b}{a,c}^ω or ({a,c}{a})^ω.

The partition is graphically shown in Figure 2. The equivalence classes are shown in blue, red, yellow, and green (ignore the inner part in darker colour for the moment).

Stable words.
A word w is stable with respect to φ if every formula of µ(φ) holds either never or infinitely often along w (i.e., either none or infinitely many of its suffixes satisfy the formula), and every formula of ν(φ) fails either never or infinitely often along w. In particular, for a stable word no formula of µ(φ) can hold a finite, nonzero number of times before it fails forever, and no formula of ν(φ) can fail a finite, nonzero number of times before it holds forever. It follows immediately from this definition that not every word is stable, but every word eventually stabilises, meaning that all but finitely many of its suffixes are stable. Let S_φ denote the set of stable words with respect to φ. Defining

  F^φ_w ≔ { ψ : ψ ∈ µ(φ) ∧ w ⊨ F ψ }
  G^φ_w ≔ { ψ : ψ ∈ ν(φ) ∧ w ⊨ G ψ }

we easily obtain:

  S_φ = { w ∈ U : F^φ_w = GF^φ_w ∧ G^φ_w = FG^φ_w }    (3)

Example 8.
Let φ = G a ∨ b U c. The words {c}ⁿ{a}^ω for n ≥ 1 are not stable w.r.t. φ, because b U c holds exactly n times along the word. However, the suffix {a}^ω is stable. Figure 2 represents the stable words of each element P_{M,N} of the partition in darker colour, and gives examples of stable words for each class.

The starting point of this paper is the observation that some results of [5, 21] allow us to easily derive a normal form for LTL, albeit only when LTL is interpreted on stable words. More precisely, in Section 5 we show that for every M ⊆ µ(φ) and N ⊆ ν(φ) there exist formulas φ[M]_Π₁ ∈ Π₁ and φ[N]_Σ₁ ∈ Σ₁ such that:

  φ ≡_{S_φ} ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} ( φ[M]_Π₁ ∧ ⋀_{ψ∈M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ∈N} FG(ψ[M]_Π₁) )    (4)

Further, φ[M]_Π₁ and φ[N]_Σ₁ are obtained from φ, M, and N by means of a simple, linear-time syntactic substitution procedure. Observe that the right-hand side is a formula of ∆₂, and that we write ≡_{S_φ}, i.e., the equivalence is only valid within the universe of stable words. In this paper we lift this restriction. In Section 6 we define a formula φ[M]_Σ₂ ∈ Σ₂ by means of another linear-time, syntactic substitution procedure, such that:

  φ ≡ ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} ( φ[M]_Σ₂ ∧ ⋀_{ψ∈M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ∈N} FG(ψ[M]_Π₁) )    (5)

Example 9.
For φ = F(a ∧ G(b ∨ F c)) ∈ Σ₂, the still-to-be-defined normal form (4) will yield:

  φ ≡_{S_φ} (GF a ∧ FG b) ∨ (GF a ∧ GF c)

Indeed, since φ ∈ µ(φ), every stable word satisfying φ must satisfy it infinitely often, and so equivalence for stable words holds, although the formulas are not equivalent. For Equation (5) we will obtain:

  φ ≡ F(a ∧ ((b ∨ F c) U G b)) ∨ (F a ∧ GF c)

Observe that the right-hand side belongs to ∆₂.

5 The Formulas φ[M]_Π₁ and φ[N]_Σ₁

We recall the definitions of the formulas φ[M]_Π₁ and φ[N]_Σ₁, introduced in [5, 21] with a slightly different notation.

The formula φ[M]_Π₁. Define P_M ≔ ⋃_{N ⊆ ν(φ)} P_{M,N}. Observe that P_M is the language of the words w such that M = GF_w. The formula φ[M]_Π₁ is defined with the goal of satisfying the following identity:

  φ ≡_{S_φ ∩ P_M} φ[M]_Π₁    (6)

Intuitively, the identity states that within the universe of the stable words of P_M, the formula φ can be replaced by the simpler formula φ[M]_Π₁.

All insights required to define φ[M]_Π₁ are illustrated by the following examples, where we assume that w ∈ S_φ ∩ P_M:

• φ = F a ∧ G b and M = {F a}. Since M = GF_w, we have F a ∈ GF_w, which implies w ⊨ GF a. So w ⊨ F a ∧ G b iff w ⊨ G b, and so we can set φ[M]_Π₁ ≔ tt ∧ G b, i.e., we can define φ[M]_Π₁ as the result of substituting tt for F a in φ. The yet-to-be-defined substitution in fact replaces the abbreviation F a = tt U a by tt W a ≡ tt.
• φ = F a ∧ G b and M = ∅. Since M = F^φ_w, we have F a ∉ F^φ_w, and so w ⊭ F a. In other words, w ⊨ F a ∧ G b iff w ⊨ ff, and so we can set φ[M]_Π₁ ≔ ff ∧ G b.
• φ = G(b U c) and M = {b U c}. Since M = GF_w, we have b U c ∈ GF_w, and so w ⊨ GF(b U c). This does not imply w_i ⊨ b U c for all suffixes of w, but it implies that c will hold infinitely often in the future.
So w ⊨ G(b U c) iff w ⊨ G(b W c), and so we can define φ[M]_Π₁ ≔ G(b W c).

Definition 10 ([5, 21]). Let M ⊆ µ(φ) be a set of formulas. The formula φ[M]_Π₁ is inductively defined as follows:

  (φ U ψ)[M]_Π₁ ≔ φ[M]_Π₁ W ψ[M]_Π₁   if φ U ψ ∈ M, and ff otherwise
  (φ M ψ)[M]_Π₁ ≔ φ[M]_Π₁ R ψ[M]_Π₁   if φ M ψ ∈ M, and ff otherwise

All other cases are defined homomorphically, e.g., a[M]_Π₁ ≔ a for every a ∈ Ap, (X φ)[M]_Π₁ ≔ X(φ[M]_Π₁), and (φ W ψ)[M]_Π₁ ≔ (φ[M]_Π₁) W (ψ[M]_Π₁).

The following lemma, proved in [5, 21], shows that φ[M]_Π₁ indeed satisfies Equation (6). Since the notation of [5, 21] is slightly different, we include proofs with the new notation for the cited results in Appendix A for convenience.

Lemma 11 ([5, 21]). Let w be a word, and let M ⊆ µ(φ) be a set of formulas.
1. If F^φ_w ⊆ M and w ⊨ φ, then w ⊨ φ[M]_Π₁.
2. If M ⊆ GF^φ_w and w ⊨ φ[M]_Π₁, then w ⊨ φ.
3. φ ≡_{S_φ ∩ P_M} φ[M]_Π₁

Observe that the first two statements do not assume that w is stable. This is an aspect we will later make use of for the definition of the normalisation procedure.

The formula φ[N]_Σ₁. Let P_N ≔ ⋃_{M ⊆ µ(φ)} P_{M,N}. The formula φ[N]_Σ₁ is designed to satisfy

  φ ≡_{S_φ ∩ P_N} φ[N]_Σ₁    (7)

and its definition is completely dual to that of φ[M]_Π₁.

Definition 12 ([5, 21]). Let N ⊆ ν(φ) be a set of formulas. The formula φ[N]_Σ₁ is inductively defined as follows:

  (φ R ψ)[N]_Σ₁ ≔ tt   if φ R ψ ∈ N, and φ[N]_Σ₁ M ψ[N]_Σ₁ otherwise
  (φ W ψ)[N]_Σ₁ ≔ tt   if φ W ψ ∈ N, and φ[N]_Σ₁ U ψ[N]_Σ₁ otherwise

All other cases are defined homomorphically.

The dual of Lemma 11 also holds:
Lemma 13 ([5, 21]). Let w be a word, and let N ⊆ ν(φ) be a set of formulas.
1. If FG^φ_w ⊆ N and w ⊨ φ, then w ⊨ φ[N]_Σ₁.
2. If N ⊆ G^φ_w and w ⊨ φ[N]_Σ₁, then w ⊨ φ.
3. φ ≡_{S_φ ∩ P_N} φ[N]_Σ₁
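Both substitutions are single-pass recursions over the syntax tree and can be transcribed directly. The following sketch of Definitions 10 and 12 uses our own illustrative tuple encoding of formulas (('U', l, r) for l U r, ('ap', a) for atoms, the strings 'tt'/'ff' for the constants):

```python
def pi_sub(f, M):
    """f[M]_Pi1 (Definition 10): U/M-subformulas in M weaken to W/R,
    all other U/M-subformulas collapse to ff; homomorphic elsewhere."""
    op = f if isinstance(f, str) else f[0]
    if op in ('tt', 'ff', 'ap', 'not'):
        return f
    if op == 'X':
        return ('X', pi_sub(f[1], M))
    if op == 'U':
        return ('W', pi_sub(f[1], M), pi_sub(f[2], M)) if f in M else 'ff'
    if op == 'M':
        return ('R', pi_sub(f[1], M), pi_sub(f[2], M)) if f in M else 'ff'
    return (op, pi_sub(f[1], M), pi_sub(f[2], M))      # and, or, W, R

def sigma_sub(f, N):
    """f[N]_Sigma1 (Definition 12): W/R-subformulas in N collapse to tt,
    all other W/R-subformulas strengthen to U/M; homomorphic elsewhere."""
    op = f if isinstance(f, str) else f[0]
    if op in ('tt', 'ff', 'ap', 'not'):
        return f
    if op == 'X':
        return ('X', sigma_sub(f[1], N))
    if op == 'R':
        return 'tt' if f in N else ('M', sigma_sub(f[1], N), sigma_sub(f[2], N))
    if op == 'W':
        return 'tt' if f in N else ('U', sigma_sub(f[1], N), sigma_sub(f[2], N))
    return (op, sigma_sub(f[1], N), sigma_sub(f[2], N))  # and, or, U, M
```

On the examples preceding Definition 10 (with F a encoded as tt U a and G b as ff R b), `pi_sub` reproduces tt W a ∧ G b for M = {F a} and ff ∧ G b for M = ∅, and `pi_sub` applied to G(b U c) with M = {b U c} yields G(b W c).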
We use the followingresult from [5, 21] to characterise the stable words of a par-tition P M , N that satisfy φ : Lemma 14 ([5, 21]) . Let w be a word, and let M ⊆ µ ( φ ) and N ⊆ ν ( φ ) . Then define: Φ ( M , N ) ≔ Û ψ ∈ M GF ( ψ [ N ] Σ ) ∧ Û ψ ∈ N FG ( ψ [ M ] Π ) We have:1. If M = GF w and N = F G w , then w | = Φ ( M , N ) .2. If w | = Φ ( M , N ) , then M ⊆ GF w and N ⊆ F G w . Equipped with this lemma, let us show that a stable wordof P M , N satisfies φ iff it satisfies φ [ M ] Π ∧ Φ ( M , N ) . Let w be a stable word of P M , N . If w satisfies φ , then it satisfies φ [ M ] Π by Lemma 11.3 and Φ ( M , N ) by Lemma 14.1 (recallthat, since w ∈ P M , N , we have M = GF w and N = GF w byEquation (2)). For the other direction, assume that w satisfies φ [ M ] Π ∧ Φ ( M , N ) . Then we have M ⊆ GF w by Lemma 14.2and so w satisfies φ by Lemma 11.2. (This direction does noteven require stability.)Since every word belongs to some element of the parti-tion, we obtain a normal form for stable words: Proposition 15. φ ≡ S φ Ü M ⊆ µ ( φ ) N ⊆ ν ( φ ) © « φ [ M ] Π ∧ Û ψ ∈ M GF ( ψ [ N ] Σ ) ∧ Û ψ ∈ N FG ( ψ [ M ] Π ) ª®¬ Proof.
Define Φ(M, N) as in Lemma 14 and let w ∈ S_φ be a stable word. We show that w satisfies φ iff it satisfies φ[M]_Π₁ and Φ(M, N) for some M ⊆ µ(φ) and N ⊆ ν(φ).

Assume w ⊨ φ. Let M ≔ GF_w and N ≔ FG_w. By Lemma 14.1, w ⊨ Φ(M, N) holds. Since w is stable, we have F^φ_w = GF^φ_w = M (see Equation (3)). By Lemma 11.1 we have w ⊨ φ[M]_Π₁, and we are done.

Assume w ⊨ (φ[M]_Π₁ ∧ Φ(M, N)) for some M ⊆ µ(φ) and N ⊆ ν(φ). Using the second part of Lemma 14 we get M ⊆ GF_w. Applying Lemma 11.2 we get w ⊨ φ. □

Example 16.
Let φ = F(a ∧ G(b ∨ F c)). We have µ(φ) = {φ, F c} and ν(φ) = {G(b ∨ F c)}. So there are four possible choices for M, and two for N. It follows that the right-hand side of Proposition 15 has eight disjuncts. However, all disjuncts with φ ∉ M are equivalent to ff because then φ[M]_Π₁ = ff, and the same holds for all disjuncts with φ ∈ M and N = ∅ because φ[∅]_Σ₁ ≡ ff.

The two remaining disjuncts are M = {φ}, N = {G(b ∨ F c)}, and M = {φ, F c}, N = {G(b ∨ F c)}. For both we have φ[M]_Π₁ ≡ tt. Further, for the first disjunct we have

  GF(φ[N]_Σ₁) ∧ FG((G(b ∨ F c))[M]_Π₁) ≡ GF a ∧ FG b

and for the second we get

  GF(φ[N]_Σ₁) ∧ GF((F c)[N]_Σ₁) ∧ FG((G(b ∨ F c))[M]_Π₁) ≡ GF a ∧ GF c ∧ FG(G tt) ≡ GF a ∧ GF c.

Together we obtain F(a ∧ G(b ∨ F c)) ≡_{S_φ} GF a ∧ (FG b ∨ GF c).

Proposition 15 has little interest in itself because of the restriction to stable words. However, it serves as the starting point for our search for an unrestricted normal form, valid for all words. Observe that Lemma 14 does not depend on w being stable. In contrast, Lemma 11.1 refers to F_w, and we crucially depend on stability to replace it by GF_w. Consequently, we only need to find a replacement for the first conjunct, and can leave the rest of the structure, i.e. the enumeration of all possible combinations of M ⊆ µ(φ) and Φ(M, N), unchanged. More precisely, we search for a mapping φ⟨·⟩ that assigns to every M ⊆ µ(φ) a formula φ⟨M⟩ ∈ Σ₂ such that:

  φ ≡ ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} ( φ⟨M⟩ ∧ ⋀_{ψ∈M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ∈N} FG(ψ[M]_Π₁) )    (8)

The following lemma gives sufficient conditions for φ⟨M⟩.

Lemma 17.
For every M ⊆ µ(φ), let φ⟨M⟩ be a formula satisfying:

(a) For every M′ ⊆ µ(φ): M ⊆ M′ ⟹ φ⟨M⟩ ⊨ φ⟨M′⟩
(b) For every word w: w ⊨ φ ⟺ w ⊨ φ⟨GF^φ_w⟩

Then Equation (8) holds.

Proof.
Assume that (a) and (b) hold, and let w be a word. We show that w satisfies φ iff it satisfies the right-hand side of (8).

(⇒) Assume w satisfies φ. By (b) we have w ⊨ φ⟨GF_w⟩. We claim that the disjunct of the right-hand side of Equation (8) with M ≔ GF_w and N ≔ FG_w holds. Indeed, w ⊨ φ⟨M⟩ trivially holds, and the rest follows from Lemma 14.1.

(⇐) Assume w satisfies the right-hand side of Equation (8). Then there exist M ⊆ µ(φ) and N ⊆ ν(φ) such that w ⊨ φ⟨M⟩ holds, w ⊨ GF(ψ[N]_Σ₁) holds for every ψ ∈ M, and w ⊨ FG(ψ[M]_Π₁) holds for every ψ ∈ N. Lemma 14.2 yields M ⊆ GF_w, and (a) yields w ⊨ φ⟨GF_w⟩. Applying (b) we get w ⊨ φ. □

Note that Lemma 17 can also be dualised: we could search for a mapping φ⟨·⟩ that assigns to every N ⊆ ν(φ) a formula φ⟨N⟩ ∈ Π₂ such that Equation (8) holds.

Unfortunately we cannot simply take φ⟨M⟩ ≔ φ[M]_Π₁ or φ⟨N⟩ ≔ φ[N]_Σ₁: both choices satisfy condition (a) of Lemma 17, as proven by Lemma 18 (which is needed again for the proof of Theorem 23), but fail to satisfy condition (b), as shown by Example 19.
Lemma 18. φ[·]_Π₁ and φ[·]_Σ₁ have the following properties: for every M, M′ ⊆ µ(φ) and N, N′ ⊆ ν(φ):

  M ⊆ M′ ⟹ φ[M]_Π₁ ⊨ φ[M′]_Π₁
  N ⊆ N′ ⟹ φ[N]_Σ₁ ⊨ φ[N′]_Σ₁

Proof. By induction on φ. We show only two cases, since all other cases are either trivial or analogous.

Case φ = ψ₁ U ψ₂. Assume w ⊨ φ[M]_Π₁ holds. Due to the definition of φ[M]_Π₁ we have φ ∈ M and thus also φ ∈ M′. Thus we have w ⊨ (ψ₁[M]_Π₁) W (ψ₂[M]_Π₁), and applying the induction hypothesis we get w ⊨ (ψ₁[M′]_Π₁) W (ψ₂[M′]_Π₁). Hence w ⊨ φ[M′]_Π₁.

Case φ = ψ₁ W ψ₂. Assume w ⊨ φ[N]_Σ₁ holds. If φ ∈ N′ then w ⊨ φ[N′]_Σ₁ trivially holds. If φ ∉ N′ then also φ ∉ N, and we get w ⊨ (ψ₁[N]_Σ₁) U (ψ₂[N]_Σ₁). Using the induction hypothesis we get w ⊨ (ψ₁[N′]_Σ₁) U (ψ₂[N′]_Σ₁), and we are done. □
Example 19. Let us first exhibit a formula φ and a word w such that w ⊨ φ, but w ⊭ φ[GF^φ_w]_Π₁. For this take φ = F a and w = {a}{}^ω. Thus w ⊨ φ and GF_w = ∅. However, (F a)[∅]_Π₁ = ff and hence w ⊭ (F a)[GF^φ_w]_Π₁.

We now move to the second case. Let us exhibit φ and w such that w ⊭ φ and w ⊨ φ[FG^φ_w]_Σ₁. Dually, let φ = G a and w = {}{a}^ω. Then w ⊭ φ, but FG_w = {G a} and (G a)[{G a}]_Σ₁ = tt, and hence w ⊨ (G a)[FG^φ_w]_Σ₁.

The key to finding a mapping φ⟨·⟩ satisfying both conditions of Lemma 17 is the technical result below, for which we offer the following intuition. The following equivalence is a valid law of LTL:

  G φ ≡ φ U G φ    (9)

In order to prove that a word w satisfies the right-hand side we can take an arbitrary index i ≥
0, prove that w_j ⊨ φ holds for every j < i, and then prove that w_i ⊨ G φ. Since we are free to choose i, we can pick it such that w_i is a stable word, which allows us to apply the machinery of Section 5 and obtain:

Lemma 20.
For every word w:

  w ⊨ G φ ⟺ w ⊨ φ U G(φ[GF^φ_w]_Π₁)

Proof.
We prove both directions separately.

(⇒) Assume w ⊨ G φ holds. Let w_i be a stable suffix of w. By the definition of stability we have F^φ_{w_i} = F^φ_{w_j} = GF^φ_w for every j ≥ i. By Lemma 11.1, we have w_j ⊨ φ ⟹ w_j ⊨ φ[GF^φ_w]_Π₁ for every j ≥ i, and so in particular w_i ⊨ G(φ[GF^φ_w]_Π₁). We proceed as follows:

  w ⊨ G φ ⟹ w_i ⊨ G(φ[GF^φ_w]_Π₁) ∧ ∀k < i. w_k ⊨ φ
          ⟹ w ⊨ φ U G(φ[GF^φ_w]_Π₁)

(⇐) This is an immediate consequence of Lemma 11.2. □

With the help of the standard LTL equivalences

  φ W ψ ≡ φ U (ψ ∨ G φ)    (10)
  φ R ψ ≡ (φ ∨ G ψ) M ψ    (11)

Lemma 20 can be extended to a more powerful proposition.

Proposition 21.
For all formulas φ, ψ, and for every word w:

  w ⊨ φ W ψ ⟺ w ⊨ φ U (ψ ∨ G(φ[GF^φ_w]_Π₁))
  w ⊨ φ R ψ ⟺ w ⊨ (φ ∨ G(ψ[GF^ψ_w]_Π₁)) M ψ

Proof.
We only prove the first statement; the proof of the second is dual.

(⇒) Assume w ⊨ φ W ψ. We split this branch of the proof further, by a case distinction on whether w ⊨ G φ holds.

If w ⊨ G φ holds, then by Lemma 20 we have w ⊨ φ U G(φ[GF^φ_w]_Π₁), and so w ⊨ φ U (ψ ∨ G(φ[GF^φ_w]_Π₁)) holds.

Assume now that w ⊭ G φ. Then we simply derive:

  w ⊨ φ W ψ ⟺ w ⊨ φ U ψ    (w ⊭ G φ)
           ⟹ w ⊨ φ U (ψ ∨ G(φ[GF^φ_w]_Π₁))

(⇐) By Lemma 11.2 we have (w_j ⊨ φ[GF^φ_w]_Π₁ ⟹ w_j ⊨ φ) for all j ≥
0. Thus w_j ⊨ (G φ)[GF^φ_w]_Π₁ ⟹ w_j ⊨ G φ for all j ≥ 0, and we derive:

  w ⊨ φ U (ψ ∨ G(φ[GF^φ_w]_Π₁)) ⟹ w ⊨ φ U (ψ ∨ G φ)    (Lemma 11.2)
                                ⟺ w ⊨ φ W ψ    (Equation (10))  □

Proposition 21 gives us all we need to define a formula φ[M]_Σ₂ satisfying Equation (8).

Definition 22.
Let φ be a formula and let M ⊆ µ(φ). The formula φ[M]_Σ₂ is inductively defined as follows for R and W:

  (φ R ψ)[M]_Σ₂ ≔ (φ[M]_Σ₂ ∨ G(ψ[M]_Π₁)) M ψ[M]_Σ₂
  (φ W ψ)[M]_Σ₂ ≔ φ[M]_Σ₂ U (ψ[M]_Σ₂ ∨ G(φ[M]_Π₁))

and homomorphically for all other cases.

A straightforward induction on φ shows that φ[M]_Σ₂ ∈ Σ₂, justifying our notation. We prove that φ[M]_Σ₂ satisfies (8) by checking that it satisfies the conditions of Lemma 17.

Theorem 23.
Let φ be a formula. Then:

  φ ≡ ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} ( φ[M]_Σ₂ ∧ ⋀_{ψ∈M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ∈N} FG(ψ[M]_Π₁) )

Proof.
We show that conditions (a) and (b) of Lemma 17 hold.

(a) The proof is an easy induction on φ, applying Lemma 18 where necessary.

(b) We prove that

  ∀w. w ⊨ φ ⟺ w ⊨ φ[GF^φ_w]_Σ₂    (12)

holds by structural induction on φ. We make use of the identity

  ψ[M]_Σ₂ = ψ[M ∩ µ(ψ)]_Σ₂    (13)

which follows immediately from the fact that formulas in M \ µ(ψ) are not subformulas of ψ.

The base of the induction is φ ∈ {tt, ff, a, ¬a}. In all these cases we have φ = φ[GF_w]_Σ₂ by definition, and so (12) holds vacuously. All other cases in which φ[M]_Σ₂ is defined homomorphically are handled in the same way. We consider only one of them:

Case φ = ψ₁ U ψ₂. By assumption, the induction hypothesis (12) holds for ψ₁ and ψ₂, giving:

  ∀u. (u ⊨ ψ₁ ⟺ u ⊨ ψ₁[GF^{ψ₁}_u]_Σ₂)    (14)
  ∀v. (v ⊨ ψ₂ ⟺ v ⊨ ψ₂[GF^{ψ₂}_v]_Σ₂)    (15)

In order to use these two equivalences for the induction step, we need to replace GF^{ψ₁}_u and GF^{ψ₂}_v by GF^φ_w in the context of ·[·]_Σ₂. For this we instantiate u ≔ w_i and v ≔ w_j for arbitrary i, j ≥ 0. Then u and v are suffixes of w, and thus GF^φ_u = GF^φ_v = GF^φ_w. Notice further that, by intersection with µ(·), we have GF^{ψ₁}_u = GF^φ_w ∩ µ(ψ₁) and GF^{ψ₂}_v = GF^φ_w ∩ µ(ψ₂). From (13) we obtain:

  ∀i. (w_i ⊨ ψ₁ ⟺ w_i ⊨ ψ₁[GF^φ_w]_Σ₂)    (16)
  ∀j. (w_j ⊨ ψ₂ ⟺ w_j ⊨ ψ₂[GF^φ_w]_Σ₂)    (17)

Applying (16) and (17) we get:

  w ⊨ ψ₁ U ψ₂ ⟺ ∃k. w_k ⊨ ψ₂ ∧ (∀ℓ < k. w_ℓ ⊨ ψ₁)
              ⟺ ∃k. w_k ⊨ ψ₂[GF^φ_w]_Σ₂ ∧ (∀ℓ < k. w_ℓ ⊨ ψ₁[GF^φ_w]_Σ₂)
              ⟺ w ⊨ (ψ₁ U ψ₂)[GF^φ_w]_Σ₂

which concludes the proof of this case.

The remaining cases are φ = ψ₁ R ψ₂ and φ = ψ₁ W ψ₂. Again, we only consider one of them, the other one being analogous.

Case φ = ψ₁ W ψ₂. The argumentation is only slightly more complicated than that of the ψ₁ U ψ₂ case. By induction hypothesis, (16) and (17) hold.
With the help of Lemma 20 we derive:

w ⊨ ψ₁ W ψ₂
⟺ w ⊨ ψ₁ U (ψ₂ ∨ G(ψ₁[GF^{ψ₁}_w]_Π₁))   (Proposition 21)
⟺ w ⊨ ψ₁ U (ψ₂ ∨ G(ψ₁[GF^φ_w]_Π₁))   (ψ[M]_Π₁ = ψ[M ∩ µ(ψ)]_Π₁)
⟺ w ⊨ ψ₁[GF^φ_w]_Σ₂ U (ψ₂[GF^φ_w]_Σ₂ ∨ G(ψ₁[GF^φ_w]_Π₁))   ((16) and (17))
⟺ w ⊨ (ψ₁ W ψ₂)[GF^φ_w]_Σ₂   □

LICS '20, July 8–11, 2020, Saarbrücken, Germany. Salomon Sickert and Javier Esparza.
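The syntactic rewrite ·[M]_Σ₂ used above can be sketched in a few lines of Python over formulas encoded as nested tuples (an encoding assumed here for illustration). The companion rewrite ·[M]_Π₁ is defined earlier in the paper, so it is passed in as a parameter; the identity stub below is adequate only for fixpoint-free operands.

```python
# Formulas as nested tuples: ('a',) is an atom, and ('U', f, g), ('W', f, g),
# ('R', f, g), ('M', f, g), ('G', f), ('or', f, g), ... are operators.

def sigma(f, M, pi):
    """Sketch of the rewrite f[M]_Sigma2; `pi` plays the role of the
    companion rewrite ...[M]_Pi1 (defined earlier in the paper)."""
    op = f[0]
    if op == 'R':
        g, h = f[1], f[2]
        # (g R h)[M]_Sigma2 = (g[M]_Sigma2 v G(h[M]_Pi1)) M h[M]_Sigma2
        return ('M', ('or', sigma(g, M, pi), ('G', pi(h, M))), sigma(h, M, pi))
    if op == 'W':
        g, h = f[1], f[2]
        # (g W h)[M]_Sigma2 = g[M]_Sigma2 U (h[M]_Sigma2 v G(g[M]_Pi1))
        return ('U', sigma(g, M, pi), ('or', sigma(h, M, pi), ('G', pi(g, M))))
    # all other operators: homomorphically
    return (op,) + tuple(sigma(x, M, pi) for x in f[1:])

identity_pi = lambda f, M: f   # placeholder Pi1-rewrite, fine for atoms only
print(sigma(('W', ('a',), ('b',)), frozenset(), identity_pi))
# ('U', ('a',), ('or', ('b',), ('G', ('a',))))
```

For (a W b) and M = ∅ this yields a U (b ∨ G a), matching the W-case displayed above.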
Example 24.
Let φ = F(a ∧ G(b ∨ F c)). We have µ(φ) = {φ, F c} and ν(φ) = {G(b ∨ F c)}, and so the right-hand side of Theorem 23 has eight disjuncts. However, contrary to Example 16, we have φ[M]_Σ₂ ≠ ff for every M ⊆ {φ, F c}. Let Φ(M, N) be the disjunct for given sets M, N. We consider two cases:

Case M ≔ ∅, N ≔ ∅. In this case Φ(∅, ∅) = φ[∅]_Σ₂, because the conjunctions over M and N are vacuous. We have:

Φ(∅, ∅) = φ[∅]_Σ₂
= F(a ∧ (G(b ∨ F c))[∅]_Σ₂)
= F(a ∧ ((b ∨ F c) W ff)[∅]_Σ₂)
= F(a ∧ ((b ∨ F c)[∅]_Σ₂ U (ff ∨ G((b ∨ F c)[∅]_Π₁))))
= F(a ∧ ((b ∨ F c) U G b))

Case M ≔ {F c}, N ≔ {G(b ∨ F c)}. We get:

φ[M]_Σ₂ = F(a ∧ ((b ∨ F c)[M]_Σ₂ U (ff ∨ G((b ∨ F c)[M]_Π₁))))
= F(a ∧ ((b ∨ F c) U (ff ∨ G tt)))
= F a

Further, we have FG((G(b ∨ F c))[M]_Π₁) = FG(G tt) = tt and GF((F c)[N]_Σ₁) = GF(F c) = GF c. So in this case we obtain Φ({F c}, {G(b ∨ F c)}) = F a ∧ GF c.

Repeating this process for all possible sets M, N and bringing the resulting formula into disjunctive normal form we finally get

φ ≡ F(a ∧ ((b ∨ F c) U G b)) ∨ (F a ∧ GF c)

We show that the normalisation procedure has at most single exponential blow-up in the length of the formula, improving on the previously known non-elementary bound.
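Mechanically, the right-hand side of Theorem 23 is just an enumeration of subset pairs (M, N). The sketch below stubs out the three rewrites with identities (a deliberate simplification) and only shows the combinatorial structure, e.g. the 2² · 2¹ = 8 disjuncts of Example 24.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = sorted(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def normal_form(phi, mu, nu, sigma2, sigma1, pi1):
    """Assemble the Theorem 23 disjunction; the three rewrite functions are
    passed as black boxes (stubbed below for illustration)."""
    disjuncts = []
    for M in map(frozenset, powerset(mu)):
        for N in map(frozenset, powerset(nu)):
            conjuncts = [sigma2(phi, M)]                          # phi[M]_Sigma2
            conjuncts += [('GF', sigma1(psi, N)) for psi in sorted(M)]
            conjuncts += [('FG', pi1(psi, M)) for psi in sorted(N)]
            disjuncts.append(tuple(conjuncts))
    return disjuncts

identity = lambda f, S: f
d = normal_form('phi', {'phi', 'Fc'}, {'G(b|Fc)'}, identity, identity, identity)
print(len(d))  # 8
```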
Proposition 25.
Let φ be a formula with length n. Then there exists an equivalent formula φ_∆ in ∆₂ of length 2^(O(n)).

Proof. Let ψ be an arbitrary formula. We let |ψ| denote the length of ψ and start by giving bounds on ψ[M]_Π₁, ψ[N]_Σ₁, and ψ[M]_Σ₂. For this let M ⊆ µ(ψ) and N ⊆ ν(ψ) be sets of formulas. We obtain by induction on the structure of ψ that |ψ[M]_Π₁|, |ψ[N]_Σ₁|, and |ψ[M]_Σ₂| are all linear in |ψ|.

Consider now the right-hand side of Theorem 23 as the postulated φ_∆. Using these bounds we calculate the maximal size of a disjunct: it consists of φ[M]_Σ₂ plus at most n conjuncts GF(ψ[N]_Σ₁) and at most n conjuncts FG(ψ[M]_Π₁), each of length O(n), and is therefore of length O(n²). There exist at most 2^(2n) disjuncts, one per pair (M, N), and thus the formula is of length at most 2^(2n) · O(n²) = 2^(O(n)). □

We obtained Theorem 23 by relying on the LTL equivalences (10) and (11) for W and R. Using dual LTL equivalences for U and M, namely

φ U ψ ≡ (φ ∧ F ψ) W ψ   and   φ M ψ ≡ φ R (ψ ∧ F φ),

we can also obtain a dual normalisation procedure:

Definition 26.
Let φ be a formula and let N ⊆ ν(φ) be a set of formulas. The formula φ[N]_Π₂ is inductively defined as follows for U and M:

(φ U ψ)[N]_Π₂ = (φ[N]_Π₂ ∧ F(ψ[N]_Σ₁)) W ψ[N]_Π₂
(φ M ψ)[N]_Π₂ = φ[N]_Π₂ R (ψ[N]_Π₂ ∧ F(φ[N]_Σ₁))

and homomorphically for all other cases.

Theorem 27.
Let φ be a formula. Then:

φ ≡ ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} ( φ[N]_Π₂ ∧ ⋀_{ψ ∈ M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ ∈ N} FG(ψ[M]_Π₁) )

We apply our ∆₂-normalisation procedure to derive a new translation from LTL to DRW via weak alternating automata (AWW). While the previously existing normalisation procedures could also be used to translate LTL into DRW, the resulting DRW could have non-elementary size in the length of the formula, making them impractical. We show that, thanks to the single exponential blow-up of the new procedure, the new translation has double exponential blow-up, which is asymptotically optimal.

It is well known [14, 23] that an LTL formula φ of length n can be translated into an AWW with O(n) states. We show that, if φ is in normal form, i.e., a disjunction as in Theorem 23, then the AWW can be chosen so that every path through the automaton switches at most once between accepting and non-accepting states. We then prove that determinising AWWs satisfying this additional property is much simpler than the general case.

The section is structured as follows: Section 7.1 introduces basic definitions, Section 7.2 shows how to translate a ∆₂-formula into AWWs with at most one switch, and Section 7.3 presents the determinisation procedure for this subclass of AWWs.

7.1 Basic definitions

Let X be a finite set. The set of positive Boolean formulas over X, denoted B+(X), is the closure of X ∪ {tt, ff} under disjunction and conjunction. A set S ⊆ X is a model of θ ∈ B+(X) if the truth assignment that assigns true to the elements of S and false to the elements of X \ S satisfies θ. Observe that if S is a model of θ and S ⊆ S′, then S′ is also a model. A model S is minimal if no proper subset of S is a model. The set of minimal models is denoted M_θ. Two formulas are equivalent, denoted θ ≡ θ′, if their sets of minimal models are equal, i.e., M_θ = M_θ′.

Alternating automata.
An alternating Büchi word automaton over an alphabet Σ is a tuple A = ⟨Σ, Q, θ, δ, α⟩, where Q is a finite set of states, θ ∈ B+(Q) is an initial formula, δ : Q × Σ → B+(Q) is the transition function, and α ⊆ Q is the acceptance condition. A run of A on the word w is a directed acyclic graph G = (V, E) satisfying the following properties:

• V ⊆ Q × ℕ, and E ⊆ ⋃_{l ≥ 0} ((Q × {l}) × (Q × {l+1})).
• There exists a minimal model S of θ such that (q, 0) ∈ V iff q ∈ S.
• For every (q, l) ∈ V, either δ(q, w[l]) ≡ ff or the set {q′ : ((q, l), (q′, l+1)) ∈ E} is a minimal model of δ(q, w[l]).
• For every (q, l) ∈ V \ (Q × {0}) there exists q′ ∈ Q such that ((q′, l−1), (q, l)) ∈ E.

Runs can be finite or infinite. A run G is accepting if
(a) δ(q, w[l]) ≢ ff for every (q, l) ∈ V, and
(b) every infinite path of G visits α-nodes (that is, nodes (q, l) such that q ∈ α) infinitely often.

In particular, every finite run satisfying (a) is accepting. A accepts a word w iff it has an accepting run G on w. The language L(A) recognised by A is the set of words accepted by A. Two automata are equivalent if they recognise the same language.

Alternating co-Büchi automata are defined analogously, changing condition (b) to the co-Büchi condition (every infinite path of G visits α-nodes only finitely often). Finally, in alternating Rabin automata α is a set of Rabin pairs (F, I) with F, I ⊆ Q, and (b) is replaced by the Rabin condition (there exists a Rabin pair (F, I) ∈ α such that every infinite path visits states of F only finitely often and states of I infinitely often). An automaton is deterministic if for every state q ∈ Q and every letter a ∈ Σ there exists q′ ∈ Q such that δ(q, a) = q′, and non-deterministic if for every q ∈ Q and every a ∈ Σ there exists Q′ ⊆ Q such that δ(q, a) = ⋁_{q′ ∈ Q′} q′.

The following definitions are useful for reasoning about runs. A set U ⊆ Q is called a level. If U ⊆ α, then U is an α-level. A level U′ is a successor of U w.r.t. a ∈ Σ, also called an a-successor, if for every q ∈ U there is a minimal model S_q of δ(q, a) such that U′ = ⋃_{q ∈ U} S_q. The k-th level of a run G = (V, E) is the set {q : (q, k) ∈ V}. Observe that a level can be empty, and empty levels are α-levels. Further, by definition a level has no successors w.r.t. a iff it contains a state q such that δ(q, a) ≡ ff. In particular, every level of an accepting run has at least one successor.

Weak and very weak automata.
Let A = ⟨Σ, Q, θ, δ, α⟩ be an alternating (co-)Büchi automaton. We write q → q′ if there is a ∈ Σ such that q′ belongs to some minimal model of δ(q, a).

δ(q₀, σ) = q₀ ∨ q₁ if a ∈ σ, and q₀ otherwise.
δ(q₁, σ) = q₁ if b ∈ σ, and q₁ ∧ q₂ otherwise.
δ(q₂, σ) = tt if c ∈ σ, and q₂ otherwise.

Figure 3. A1W for φ = F(a ∧ XG(b ∨ XF c)) with Σ = 2^{a,b,c}, θ = q₀, and α = {q₁}.

A is weak if there is a partition Q₁, ..., Q_m of Q such that
• for every q, q′ ∈ Q, if q → q′ then there are i ≤ j such that q ∈ Q_i and q′ ∈ Q_j, and
• for every 1 ≤ i ≤ m: Q_i ⊆ α or Q_i ∩ α = ∅.

A is very weak or linear if it is weak and every class Q_i of the partition is a singleton (|Q_i| = 1). A has height n if every path q → q′ → q′′ ··· of A alternates at most n − 1 times between α and Q \ α. For example, the automaton in Figure 3 has height 3. We let AWW[n] (A1W[n]) denote the sets of all (very) weak alternating automata with height at most n. Further, we let AWW[n, A] (resp. AWW[n, R]) denote the set of automata of AWW[n] whose initial formula satisfies θ ∈ B+(α) (resp. θ ∈ B+(Q \ α)). For example, the automaton of Figure 3 belongs to A1W[3, R].

7.2 From ∆₂ to A1W[2]

In the standard translation [23] of LTL to A1W, the states of the A1W for a formula φ are subformulas of φ, or negations thereof. We show that, at the price of a slightly more complicated translation, the resulting A1W for a ∆_i-formula belongs to A1W[i]. Thus, by using Theorem 23, every LTL formula can be translated to an A1W[2]. The idea of the construction is to use subformulas as states, ensuring that
1. the transition relation can only lead from a formula to another formula at the same level or a lower level in the syntactic-future hierarchy (Figure 1b), and
2. accepting states are Π_i subformulas.
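As a concrete illustration of minimal models and successor levels, the brute-force sketch below transcribes the three transitions of Figure 3 and computes the a-successors of the initial level. The tuple encoding of positive Boolean formulas and the exponential enumeration are assumptions of this sketch, fine for small state sets.

```python
from itertools import chain, combinations, product

# Positive Boolean formulas as nested tuples:
# ('var', q), ('and', f, g), ('or', f, g), ('tt',), ('ff',).

def holds(f, S):
    op = f[0]
    if op == 'tt': return True
    if op == 'ff': return False
    if op == 'var': return f[1] in S
    if op == 'and': return holds(f[1], S) and holds(f[2], S)
    if op == 'or': return holds(f[1], S) or holds(f[2], S)

def minimal_models(f, Q):
    """M_f: models S of f over Q such that no proper subset is a model."""
    subsets = chain.from_iterable(combinations(sorted(Q), r)
                                  for r in range(len(Q) + 1))
    models = [frozenset(S) for S in subsets if holds(f, set(S))]
    return {S for S in models if not any(T < S for T in models)}

def successors(U, sigma, delta, Q):
    """All sigma-successor levels of level U: pick one minimal model S_q of
    delta(q, sigma) per q in U and take the union of the picks."""
    choices = [list(minimal_models(delta(q, sigma), Q)) for q in U]
    return {frozenset(chain.from_iterable(pick)) for pick in product(*choices)}

# Transitions of the Figure 3 automaton; sigma is a set of atomic propositions.
def delta(q, sigma):
    if q == 'q0':
        return ('or', ('var', 'q0'), ('var', 'q1')) if 'a' in sigma else ('var', 'q0')
    if q == 'q1':
        return ('var', 'q1') if 'b' in sigma else ('and', ('var', 'q1'), ('var', 'q2'))
    return ('tt',) if 'c' in sigma else ('var', 'q2')

Q = {'q0', 'q1', 'q2'}
print(successors({'q0'}, {'a'}, delta, Q))  # the two levels {q0} and {q1}
```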
This immediately leads to "at most one alternation". However, there is a little technical problem: the level of a formula is not always well defined, because some formulas do not belong to one single lowest level of the hierarchy. For example, X a belongs to both Π₁ and Σ₁. So we need a mechanism to disambiguate these states. Formally we proceed as follows.

A formula is proper if it is neither a Boolean constant (tt, ff) nor a conjunction or disjunction. A state in our modified translation is an expression of the form ⟨ψ⟩_Γ, where ψ is a proper formula, and Γ is a smallest class of the syntactic-future hierarchy, excluding the zeroth level (Definition 5), that contains ψ. Hence Γ ranges over the classes Σ₁, Π₁, Σ₂, Π₂, and so on. Observe that for some formulas there is more than one smallest class. For example, since X a ∈ Σ₁ ∩ Π₁, both Σ₁ and Π₁ are smallest classes containing X a, and so both ⟨X a⟩_Σ₁ and ⟨X a⟩_Π₁ are states. For other formulas the class is unique. For example, the only state for a W b is ⟨a W b⟩_Π₁.

We assign to every formula ψ of LTL and every class Γ a Boolean combination of states, denoted [ψ]_{≤Γ}, as follows:
• [tt]_{≤Γ} = tt and [ff]_{≤Γ} = ff.
• [ψ₁ ∨ ψ₂]_{≤Γ} = [ψ₁]_{≤Γ} ∨ [ψ₂]_{≤Γ}
• [ψ₁ ∧ ψ₂]_{≤Γ} = [ψ₁]_{≤Γ} ∧ [ψ₂]_{≤Γ}
• If ψ is a proper formula, then [ψ]_{≤Γ} = ⋁_{Γ′ ≤ Γ} ⟨ψ⟩_{Γ′}, where Γ′ ≤ Γ means that Γ′ = Γ or Γ′ lies below Γ.

For example, we obtain [X a]_{≤Σ₂} = ⟨X a⟩_Σ₁ ∨ ⟨X a⟩_Π₁ and [X a]_{≤Σ₁} = ⟨X a⟩_Σ₁. Moreover, [F a]_{≤Π₁} = ff, since there is no Γ′ ≤ Π₁ such that F a ∈ Γ′.

Let φ ∈ ∆_i for some i ≥
0, and let sf(φ) be the set of proper subformulas of φ. The automaton A_φ = ⟨2^Ap, Q, θ, δ, α⟩ is defined as follows:
• Q = {⟨ψ⟩_Γ : ψ ∈ sf(φ), Γ ≤ ∆_i}.
• θ = [φ]_{≤∆_i}.
• α = {⟨ψ⟩_{Π_j} ∈ Q : j ≥ 1}.
• δ is the restriction to Q × Σ of the function δ : B+(Q) × Σ → B+(Q) (notice that we overload δ) defined inductively as follows:

δ(⟨a⟩_Γ, σ) = tt if a ∈ σ, and ff otherwise
δ(⟨¬a⟩_Γ, σ) = tt if a ∉ σ, and ff otherwise
δ(⟨X ψ⟩_Γ, σ) = [ψ]_{≤Γ}
δ(⟨ψ₁ U ψ₂⟩_Γ, σ) = δ([ψ₂ ∨ (ψ₁ ∧ X(ψ₁ U ψ₂))]_{≤Γ}, σ)
δ(⟨ψ₁ W ψ₂⟩_Γ, σ) = δ([ψ₂ ∨ (ψ₁ ∧ X(ψ₁ W ψ₂))]_{≤Γ}, σ)
δ(⟨ψ₁ R ψ₂⟩_Γ, σ) = δ([ψ₂ ∧ (ψ₁ ∨ X(ψ₁ R ψ₂))]_{≤Γ}, σ)
δ(⟨ψ₁ M ψ₂⟩_Γ, σ) = δ([ψ₂ ∧ (ψ₁ ∨ X(ψ₁ M ψ₂))]_{≤Γ}, σ)

All other cases (tt, ff, ∧, and ∨) are defined homomorphically. Observe that the Γ-bound for the U, W, R, and M cases suffices, since every Γ is closed under conjunction, disjunction and application of X.

An example of this construction is displayed in Figure 3. The states are labelled q₀ = ⟨φ⟩_Σ₂, q₁ = ⟨G(b ∨ XF c)⟩_Π₁, and q₂ = ⟨F c⟩_Σ₁.

Lemma 28.
Let φ be a formula of ∆_i. The automaton A_φ belongs to A1W[i], has at most 2|sf(φ)| states, and recognises L(φ).

Proof. Let us first show that A_φ belongs to A1W[i]. It follows immediately from the definition of A_φ that for every two states ⟨ψ⟩_Γ, ⟨ψ′⟩_Γ′ of A_φ, if ⟨ψ⟩_Γ → ⟨ψ′⟩_Γ′ then Γ′ ≤ Γ. So in every path there are at most i − 1 alternations between Σ and Π classes. Since the states of α are those annotated with Π classes, there are also at most i − 1 alternations between α and non-α states in a path.

To show that A_φ has at most 2|sf(φ)| states, observe that for every formula ψ there are at most two smallest classes of the syntactic-future hierarchy containing ψ. So A_φ has at most two states for each formula of sf(φ).

To prove that A_φ recognises L(φ) one shows by induction on ψ that A_φ recognises L(ψ) from every Boolean combination of states [ψ]_{≤Γ} such that ψ ∈ Γ. The proof is completely analogous to the one appearing in [23]. □

7.3 Determinisation of AWW[2]

We present a determinisation procedure for AWW[2, R] and AWW[2, A] inspired by the break-point construction from [13]. We only describe the construction for AWW[2, R], as the one for AWW[2, A] is dual. The following lemma states the key property of AWW[2, R]:

Lemma 29.
Let A be an AWW[2, R]. A accepts a word w if and only if there exists a run G = (V, E) of A on w such that
• δ(q, w[l]) ≢ ff for every (q, l) ∈ V, and
• there is a threshold k ≥ 0 such that for every l ≥ k and for every node (q, l) ∈ V the state q is accepting.

Proof. Assume that A accepts w. Let G = (V, E) be an accepting run of A on w. Since A is an AWW[2, R], every path has by definition at most one alternation between accepting and rejecting states, and all states occurring in the initial formula are rejecting. Hence if a node (q, l) ∈ V is accepting, i.e. q ∈ α, then all its descendants are accepting. Let V_r ⊆ V be the set of rejecting nodes of V, i.e., the nodes (q, l) ∈ V such that q ∉ α. Since the descendants of accepting nodes are accepting, the subgraph G_r = (V_r, E ∩ (V_r × V_r)) is acyclic and connected. If V_r is infinite, then by König's lemma G_r has an infinite path of non-accepting nodes, contradicting that G is an accepting run. So G_r is finite, and we can choose the threshold k as the largest level of a node of V_r, plus one.

Assume such a run G = (V, E) exists. Condition (a) of an accepting run holds by hypothesis. For condition (b), just observe that, since the descendants of accepting nodes are accepting, and every infinite path of G contains a node of the form (q, k), where k is the threshold level, every infinite path visits accepting nodes infinitely often. □

However, Lemma 29 does not hold for AWW[3, R]:

Example 30.
Let A be the automaton shown in Figure 3 and let w = {a}({b}{c})^ω. Observe that A accepts w. We prove by contradiction that no run of A on w satisfies the properties described in Lemma 29. Assume such a run exists. By the definition of δ, the run must be infinite. Further, by assumption there exists a threshold k such that all successor levels of the run are exactly {q₁}. But there exists k′ > k such that w[k′] = {c}. Since δ(q₁, {c}) = q₁ ∧ q₂, the (k′+1)-th level of the run contains q₂. Contradiction.

Given an automaton A from AWW[2, R], we construct a deterministic co-Büchi automaton D such that L(A) = L(D). A state of the DCW D is a pair (Levels, Promising), where Levels ⊆ Q and Promising ⊆ α ∩ Levels. It follows that D has at most 3^n states. Intuitively, after reading a finite word w_k = a₁ ... a_k the automaton D is in the state (Levels_k, Promising_k), where Levels_k contains the k-th levels of every run of A on all words with w_k as prefix, and Promising_k ⊆ Levels_k contains the α-levels of Levels_k that can still "generate" an accepting run. For this, when D reads a_{i+1}, it moves from (Levels_i, Promising_i) to (Levels_{i+1}, Promising_{i+1}), where Levels_{i+1} contains the successors w.r.t. a_{i+1} of Levels_i, and Promising_{i+1} is defined as follows:
• If Promising_i ≠ ∅, then Promising_{i+1} contains the successors w.r.t. a_{i+1} of Promising_i.
• If Promising_i = ∅, then Promising_{i+1} contains the α-levels of Levels_{i+1}.
Finally, the co-Büchi condition contains the states (Levels, Promising) such that Promising = ∅.

Intuitively, during its run on a word w, the automaton D tracks the promising levels, removing those without successors, because they can no longer produce an accepting run. If the Promising set becomes empty infinitely often, then every run of A on w contains a level without successors, and so A does not accept w.
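The Levels/Promising update just described can be sketched directly on positive Boolean formulas (encoded as nested tuples, an assumption of this sketch). The two-state automaton at the end is a hypothetical example, not one from the paper.

```python
from itertools import chain, combinations

def holds(f, S):
    op = f[0]
    if op == 'tt': return True
    if op == 'ff': return False
    if op == 'var': return f[1] in S
    if op == 'and': return holds(f[1], S) and holds(f[2], S)
    if op == 'or': return holds(f[1], S) or holds(f[2], S)

def satisfiable(f, Q):
    """Is f satisfied by some S over Q, i.e. is f not equivalent to ff?"""
    return any(holds(f, set(S)) for S in
               chain.from_iterable(combinations(sorted(Q), r)
                                   for r in range(len(Q) + 1)))

def lift(delta, f, a):
    """Lift delta : Q x Sigma -> B+(Q) to positive Boolean formulas."""
    op = f[0]
    if op in ('tt', 'ff'): return f
    if op == 'var': return delta(f[1], a)
    return (op, lift(delta, f[1], a), lift(delta, f[2], a))

def subst_ff(f, S):
    """f[ff/S]: substitute ff for every state of S in f."""
    op = f[0]
    if op in ('tt', 'ff'): return f
    if op == 'var': return ('ff',) if f[1] in S else f
    return (op, subst_ff(f[1], S), subst_ff(f[2], S))

def delta_prime(state, a, delta, Q, alpha):
    q, p = state
    if satisfiable(p, Q):                    # Promising component non-empty
        return (lift(delta, q, a), lift(delta, p, a))
    dq = lift(delta, q, a)
    return (dq, subst_ff(dq, Q - alpha))     # break-point: restart Promising

# Hypothetical two-state automaton: q0 rejecting, q1 accepting (alpha).
Q, alpha = {'q0', 'q1'}, {'q1'}
def delta(q, a):
    if q == 'q0':
        return ('var', 'q1') if a == 'x' else ('var', 'q0')
    return ('var', 'q1')

state = (('var', 'q0'), ('ff',))
for a in 'yxy':
    state = delta_prime(state, a, delta, Q, alpha)
print(state)  # (('var', 'q1'), ('var', 'q1'))
```

Once the letter 'x' has been read, the Promising component stays non-empty forever, so the co-Büchi automaton accepts.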
If after some number of steps, say k, the Promising set never becomes empty again, then A has a run on w in which, from the k-th level on, every level is an α-level and has at least one successor, and so this run is accepting.

For the formal definition of D it is convenient to identify subsets of 2^Q and 2^α with formulas of B+(Q), B+(α) (i.e., we identify a formula and its set of models). Further, we lift δ : Q × Σ → B+(Q) to δ : B+(Q) × Σ → B+(Q) in the canonical way. Finally, given φ ∈ B+(Q) and S ⊆ Q, we let φ[ff/S] denote the result of substituting ff for every state of S in φ. With these notations, the deterministic co-Büchi automaton D equivalent to A can be described in four lines: D = ⟨Σ, Q′, q′₀, δ′, α′⟩, where Q′ = B+(Q) × B+(α), q′₀ = (θ, ff), α′ = {(θ, ff) : θ ∈ B+(Q)}, and

δ′((q, p), a) = (δ(q, a), δ(p, a)) if p ≢ ff
δ′((q, p), a) = (δ(q, a), δ(q, a)[ff/Q \ α]) otherwise

Lemma 31.
For every A ∈ AWW[2, R] with n states, the deterministic co-Büchi automaton D defined above satisfies L(A) = L(D) and has at most 3^n states. Dually, for every A′ ∈ AWW[2, A] with n′ states, there exists a deterministic Büchi automaton D′ that has at most 3^(n′) states and satisfies L(A′) = L(D′).

Proof. Assume w is accepted by A. Let G = (V, E) be an accepting run of A on w. By Lemma 29 there exists an index k such that all levels of G after the k-th one are contained in α and have at least one successor. Therefore, the run (Levels₀, Promising₀), (Levels₁, Promising₁), ... of D on w satisfies Promising_i ≠ ∅ for almost all i, and so D accepts.

Assume w is accepted by D. Let (Levels₀, Promising₀), (Levels₁, Promising₁), ... be the run of D on w. By definition, there is a k ≥ 0 such that Promising_i ≠ ∅ for every i ≥ k. Choose levels U₀, U₁, ..., U_k such that
• U_k ∈ Promising_k, and
• for every 1 ≤ i ≤ k, U_{i−1} is a predecessor of U_i (this is always possible by the definition of δ′).
Further, for every i ≥ k choose U_{i+1} as a successor of U_i. Now, let G = (V, E) be the graph given by
• for every l ≥ 0: (q, l) ∈ V iff q ∈ U_l; and
• ((q, l), (q′, l+1)) ∈ E iff q ∈ U_l and q′ ∈ S_q, where S_q is the minimal model of δ(q, w[l]) used in the definition of successor level.
It follows immediately from the definitions that G is an accepting run of A. The second part is proven by complementing A′, applying the just described construction, and replacing the co-Büchi condition by a Büchi condition. □

This result leads to a determinisation procedure for AWW[2].

Lemma 32.
For every A = ⟨Σ, Q, θ, δ, α⟩ ∈ AWW[2] with n = |Q| states and m = |M_θ| minimal models of θ there exists an equivalent deterministic Rabin automaton D with 3^(O(nm)) states and with m Rabin pairs.

Proof. Let A = ⟨Σ, Q, θ, δ, α⟩. Given Q′ ⊆ Q, let A_{Q′} be the AWW[2] obtained from A by substituting ⋀_{q ∈ Q′} q for the initial formula θ. We claim that for each minimal model S ∈ M_θ we can construct a deterministic Rabin automaton (DRW) D_S with at most 2 · 3^(2n) states and a single Rabin pair, recognising the same language as A_S. Let us first see how to construct D, assuming the claim holds. By the claim we have L(A) = ⋃_{S ∈ M_θ} L(A_S). So we define D as the union of all the automata D_S. Recall that given two DRWs with n₁, n₂ states and p₁, p₂ Rabin pairs we can construct a DRW for the union of their languages with n₁ × n₂ states and p₁ + p₂ pairs. Since θ has m minimal models, D has at most m Rabin pairs and (2 · 3^(2n))^m = 3^(O(nm)) states.

It remains to prove the claim. Partition S into S ∩ α and S \ α. We have A_{S∩α} ∈ AWW[2, A] and A_{S\α} ∈ AWW[2, R]. By Lemma 31 there exists a deterministic Büchi automaton D_{S∩α} and a deterministic co-Büchi automaton D_{S\α} equivalent to A_{S∩α} and A_{S\α}, respectively, both with at most 3^n states. Intersecting these two automata yields a deterministic Rabin automaton with at most 2 · 3^(2n) states and a single Rabin pair, and we are done. □

We combine the normalisation procedure and the translation of LTL to A1W of the previous section to obtain for every formula of LTL an equivalent DRW of double exponential size. Given a formula φ we have φ ≡ ⋁_{M ⊆ µ(φ), N ⊆ ν(φ)} φ_{M,N}, where

φ_{M,N} = φ[M]_Σ₂ ∧ ⋀_{ψ ∈ M} GF(ψ[N]_Σ₁) ∧ ⋀_{ψ ∈ N} FG(ψ[M]_Π₁)

Using the results of Section 7.2, we translate each formula φ_{M,N} to an A1W[2], and then, applying the determinisation algorithm of Section 7.3, to a DRW.
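The DRW union recalled in the proof of Lemma 32 (states multiply, Rabin pairs add up, with each pair lifted componentwise) can be sketched as a product construction. The dictionary representation of automata below is a hypothetical encoding for illustration.

```python
from itertools import product

def drw_union(A, B):
    """Union of two DRWs, each a dict with keys 'states', 'init',
    'delta' ((state, letter) -> state) and 'pairs' (list of Rabin pairs
    (F, I)). A word is accepted iff one projection of the product run
    satisfies one of the lifted pairs, i.e. iff A or B accepts."""
    states = set(product(A['states'], B['states']))
    delta = lambda s, a: (A['delta'](s[0], a), B['delta'](s[1], a))
    lift_a = lambda X: {(p, q) for p in X for q in B['states']}
    lift_b = lambda X: {(p, q) for q in X for p in A['states']}
    pairs = ([(lift_a(F), lift_a(I)) for (F, I) in A['pairs']] +
             [(lift_b(F), lift_b(I)) for (F, I) in B['pairs']])
    return {'states': states, 'init': (A['init'], B['init']),
            'delta': delta, 'pairs': pairs}

# Two hypothetical one-pair DRWs: the union has n1*n2 states and 2 pairs.
A = {'states': {0, 1}, 'init': 0, 'delta': lambda s, a: 1 - s,
     'pairs': [({0}, {1})]}
B = {'states': {0, 1, 2}, 'init': 0, 'delta': lambda s, a: (s + 1) % 3,
     'pairs': [({0}, {2})]}
C = drw_union(A, B)
print(len(C['states']), len(C['pairs']))  # 6 2
```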
Finally, using the well-known union operation for DRWs, we obtain a DRW for φ. In order to bound the number of states of the final DRW, we first need to determine the number of states of the A1W for each φ_{M,N}.

Lemma 33.
Let φ be a formula. For every M ⊆ µ(φ) and N ⊆ ν(φ), there exists an A1W[2] with O(|sf(φ)|) states that recognises L(φ_{M,N}).

Proof. By Lemma 28, some A1W[2] with O(|sf(φ_{M,N})|) states recognises L(φ_{M,N}). So it suffices to show that |sf(φ_{M,N})| ∈ O(|sf(φ)|), which follows from these claims, proved in Appendix B:
1. |⋃ {sf(ψ[M]_Π₁) : ψ ∈ sf(φ)}| ∈ O(|sf(φ)|);
2. |⋃ {sf(ψ[N]_Σ₁) : ψ ∈ sf(φ)}| ∈ O(|sf(φ)|);
3. |sf(φ[M]_Σ₂)| ∈ O(|sf(φ)|). □

Proposition 34.
Let φ be a formula with n proper subformulas. There exists a deterministic Rabin automaton recognising L(φ) with 2^(2^(O(n))) states and 2^(O(n)) Rabin pairs.

Proof. By Lemma 33 the set sf(φ_{M,N}) has at most O(n) elements for every M, N. Further, due to Lemma 28 the automaton A_{φ_{M,N}} belongs to A1W[2] and has at most O(n) states. Applying the construction of Lemma 32 we obtain a DRW with 2^(2^(O(n))) states. Using the union operation for DRWs over all 2^(2n) pairs (M, N), we obtain a DRW for φ with (2^(2^(O(n))))^(2^(2n)) = 2^(2^(O(n))) states and at most 2^(O(n)) Rabin pairs. □

Remark 35.
The construction of Lemma 31 is close to Miyano and Hayashi's translation of alternating automata to non-deterministic automata [13], and to Schneider's translation of Σ₂ formulas to deterministic co-Büchi automata [18, p. 219], all based on the break-point idea.

We now determinise AWW[1]. A deterministic automaton is terminal-accepting if all states are rejecting except a single accepting sink with a self-loop, and terminal-rejecting if all states are accepting except a single rejecting sink with a self-loop. It is easy to see that terminal-accepting and terminal-rejecting deterministic automata are closed under union and intersection. When applied to AWW[1, A], the construction of Lemma 31 yields automata whose states have a trivial Promising set (either the empty set or the complete level). Further, the successor of an α-level is also an α-level. From these observations we easily get:

Corollary 36.
Let A be an automaton with n states.
• If A ∈ AWW[1, R] (resp. A ∈ AWW[1, A]), then there exists a deterministic terminal-accepting (resp. terminal-rejecting) automaton recognising L(A) with at most 3^n states.
• If A ∈ AWW[2], then there exists a deterministic weak automaton recognising L(A) with at most 3^(O(n · |M_θ|)) states.

We expect the LTL-to-DRW translation of this paper to produce automata similar in size (number of states, Rabin pairs) to the translations presented in [5, 21], which have been implemented using Owl [7] and have been extensively tested. Indeed, the "Master Theorem" of [5, 21] characterises the words satisfying a formula φ as those for which there exist sets M, N of subformulas satisfying three conditions, and so it has the same rough structure as our normal form. Further, for each disjunct of our normal form the automata constructions used in [5, 21] and the ones used in this paper are similar. Finally, in preliminary experiments we have compared the LTL-to-DRW translations from [21] and a prototype implementation, without optimisations, of the normalisation procedure of this paper. As benchmark sets we used the "Dwyer" patterns [4], pre-processed as described in [21, Ch. 8], and the "Parametrised" formula set from [21, Ch. 8]. We observed that on the first set for 60% of the formulas the number of states of the resulting DRWs was equal, for 17% the number of states obtained using the construction of this paper was smaller, and for 23% the number of states was larger. On the second set the ratios were: 76% equal, 21% smaller, and 3% larger. For both sets combined we observed that in 85% of all 164 cases the difference in number of states was less than or equal to three.

ω-regular = AWW_G[3]
DBW ∪ DCW = AWW_G[2]
safety ∪ co-safety = AWW_G[1]
∆₂ = A1W_PS[2, A] ∩ A1W_PS[2, R]   Π₂ = A1W_PS[2, A]   Σ₂ = A1W_PS[2, R]
∆₁ = A1W_PS[1, A] ∩ A1W_PS[1, R]   Π₁ = A1W_PS[1, A]   Σ₁ = A1W_PS[1, R]

Figure 4.
Expressive power of AWWs after Gurumurthy et al. [6], and of A1Ws after Pelánek and Strejcek [16].

We concluded that the main advantage of our translation is not its performance, but its modularity (it splits the procedure into a normalisation phase and a simplified translation phase) and its suitability for symbolic automata constructions. We leave a detailed experimental comparison and a possible integration in Owl [7] (which in particular requires examining different options for formula and automata simplification, as well as an extensive comparison to existing translations) for future work.
The expressive power of weak and very weak alternating automata has been studied by Gurumurthy et al. in [6] and by Pelánek and Strejcek in [16], respectively. Both papers identify the number of alternations between accepting and non-accepting states as an important parameter, and define a hierarchy of automata classes based on it. Let AWW_G[k] denote the class of AWW with at most k − 1 alternations defined in [6]. Similarly, let A1W_PS[k, A] and A1W_PS[k, R] denote the classes of A1W with at most k − 1 alternations and accepting or non-accepting initial state, respectively, defined in [16]. (In [16] the classes have different names.) Finally, define A1W_PS[k] = A1W_PS[k, A] ∪ A1W_PS[k, R]. Figure 4 shows the results of [6] and [16]. We abuse language, and, for example, write Π₁ = A1W_PS[1, A] to denote that the class of languages satisfying formulas in Π₁ and the class of languages recognized by automata in A1W_PS[1, A] coincide.

Unfortunately, the results of [6] and [16] do not "match". Due to slight differences in the definitions of height, e.g. the treatment of δ(·) = ff and δ(·) = tt, the restriction to very weak automata of AWW_G[k] does not match any class A1W_PS[k′] (that is, AWW_G[k] ∩ A1W ≠ A1W_PS[k′]) and, vice versa, extending A1W_PS[k] does not yield any AWW_G[k′]. We show that our new definition of height unifies the

ω-regular = AWW[3]
DCW = AWW[2, R]   DBW = AWW[2, A]   DWW = AWW[2]
co-safety = AWW[1, R]   safety = AWW[1, A]
∆₂ = A1W[2]   Π₂ = A1W[2, A]   Σ₂ = A1W[2, R]
∆₁ = A1W[1]   Π₁ = A1W[1, A]   Σ₁ = A1W[1, R]
Expressive power of AWWs and A1Wstwo hierarchies, yielding the pleasant result shown in Fig-ure 5. The result follows from Lemmas 28, 31 and 32, Corol-lary 36, and from constructions appearing in [6, 9, 16]. Aproof sketch is located in Appendix B.
Proposition 37.
AWW[3] = ω-regular, AWW[2, A] = DBW, AWW[2, R] = DCW, AWW[2] = DWW, AWW[1, A] = safety, AWW[1, R] = co-safety, A1W[2, R] = Σ₂, A1W[2, A] = Π₂, A1W[2] = ∆₂, A1W[1, R] = Σ₁, A1W[1, A] = Π₁, A1W[1] = ∆₁.

Moreover, our single exponential normalisation procedure for LTL transfers to a single exponential normalisation procedure for A1W:
Lemma 38.
Let A be an A1W with n states over an alphabet with m letters. There exists A′ ∈ A1W[2] with 2^(O(nm)) states such that L(A) = L(A′).

Proof. The translation from A1W to LTL used in Proposition 37 (an adaptation of [9]) yields a formula χ_A with at most O(nm) proper subformulas. Applying our normalisation procedure to χ_A yields an equivalent formula in ∆₂ with at most 2^(O(nm)) proper subformulas (Lemma 33). Applying Lemma 28 we obtain the postulated automaton A′. □

We have presented a purely syntactic normalisation procedure for LTL that transforms a given formula into an equivalent formula in ∆₂, i.e., a formula with at most one alternation between least- and greatest-fixpoint operators. The procedure has single exponential blow-up, improving on the prohibitive non-elementary cost of previous constructions. The much better complexity of the new procedure (recall that normalisation procedures for CNF and DNF are also exponential) makes it attractive for implementation and use in tools. We have presented a first promising application, namely a novel translation from LTL to DRW with double exponential blow-up. Finally, we have shown that the normalisation procedure for LTL can be transferred to a normalisation procedure for very weak alternating automata.
Currently we do not know if our normalisation procedure is asymptotically optimal. We conjecture that this is the case. For the translation of AWW to AWW[3] we also have no further insight beyond the straightforward double exponential upper bound.

Acknowledgments
The authors want to thank Orna Kupferman for the suggestion to examine the expressive power of weak alternating automata, and the anonymous reviewers for their helpful comments and remarks. This work is partly funded by the German Research Foundation (DFG) project "Verified Model Checkers" (317422601) and partly funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement PaVeS (No. 787367).
References

[1] Julian Brunner, Benedikt Seidl, and Salomon Sickert. 2019. A Verified and Compositional Translation of LTL to Deterministic Rabin Automata. In 10th International Conference on Interactive Theorem Proving, ITP 2019 (LIPIcs), John Harrison, John O’Leary, and Andrew Tolmach (Eds.), Vol. 141. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 11:1–11:19. https://doi.org/10.4230/LIPIcs.ITP.2019.11

[2] Ivana Černá and Radek Pelánek. 2003. Relating Hierarchy of Temporal Properties to Model Checking. In Mathematical Foundations of Computer Science 2003, 28th International Symposium, MFCS 2003, Bratislava, Slovakia, August 25-29, 2003, Proceedings (Lecture Notes in Computer Science), Branislav Rovan and Peter Vojtáš (Eds.), Vol. 2747. Springer, 318–327. https://doi.org/10.1007/978-3-540-45138-9_26

[3] Edward Y. Chang, Zohar Manna, and Amir Pnueli. 1992. Characterization of Temporal Property Classes. In Automata, Languages and Programming, 19th International Colloquium, ICALP92, Vienna, Austria, July 13-17, 1992, Proceedings (Lecture Notes in Computer Science), Werner Kuich (Ed.), Vol. 623. Springer, 474–486. https://doi.org/10.1007/3-540-55719-9_97

[4] Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. 1998. Property specification patterns for finite-state verification. In FMSP. 7–15. https://doi.org/10.1145/298595.298598

[5] Javier Esparza, Jan Křetínský, and Salomon Sickert. 2018. One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, July 09-12, 2018, Anuj Dawar and Erich Grädel (Eds.). ACM, 384–393. https://doi.org/10.1145/3209108.3209161

[6] Sankar Gurumurthy, Orna Kupferman, Fabio Somenzi, and Moshe Y. Vardi. 2003. On Complementing Nondeterministic Büchi Automata. In Correct Hardware Design and Verification Methods, 12th IFIP WG 10.5 Advanced Research Working Conference, CHARME 2003, L’Aquila, Italy, October 21-24, 2003, Proceedings (Lecture Notes in Computer Science), Daniel Geist and Enrico Tronci (Eds.), Vol. 2860. Springer, 96–110. https://doi.org/10.1007/978-3-540-39724-3_10

[7] Jan Křetínský, Tobias Meggendorfer, and Salomon Sickert. 2018. Owl: A Library for ω-Words, Automata, and LTL. In Automated Technology for Verification and Analysis - 16th International Symposium, ATVA 2018, Los Angeles, CA, USA, October 7-10, 2018, Proceedings (Lecture Notes in Computer Science), Shuvendu K. Lahiri and Chao Wang (Eds.), Vol. 11138. Springer, 543–550. https://doi.org/10.1007/978-3-030-01090-4_34

[8] Orna Lichtenstein, Amir Pnueli, and Lenore D. Zuck. 1985. The Glory of the Past. In Logics of Programs, Conference, Brooklyn College, New York, NY, USA, June 17-19, 1985, Proceedings (Lecture Notes in Computer Science), Rohit Parikh (Ed.), Vol. 193. Springer, 196–218. https://doi.org/10.1007/3-540-15648-8_16

[9] Christof Löding and Wolfgang Thomas. 2000. Alternating Automata and Logics over Infinite Words. In Theoretical Computer Science, Exploring New Frontiers of Theoretical Informatics, International Conference IFIP TCS 2000, Sendai, Japan, August 17-19, 2000, Proceedings (Lecture Notes in Computer Science), Jan van Leeuwen, Osamu Watanabe, Masami Hagiya, Peter D. Mosses, and Takayasu Ito (Eds.), Vol. 1872. Springer, 521–535. https://doi.org/10.1007/3-540-44929-9_36

[10] Oded Maler and Amir Pnueli. 1990. Tight Bounds on the Complexity of Cascaded Decomposition of Automata. In Proceedings of the 31st Annual Symposium on Foundations of Computer Science, FOCS 1990. IEEE Computer Society, 672–682. https://doi.org/10.1109/FSCS.1990.89589

[11] Zohar Manna and Amir Pnueli. 1990. A Hierarchy of Temporal Properties. In Proceedings of the Ninth Annual ACM Symposium on Principles of Distributed Computing, Quebec City, Quebec, Canada, August 22-24, 1990, Cynthia Dwork (Ed.). ACM, 377–410. https://doi.org/10.1145/93385.93442

[12] Zohar Manna and Amir Pnueli. 1992. The Temporal Logic of Reactive and Concurrent Systems: Specification. Springer.

[13] Satoru Miyano and Takeshi Hayashi. 1984. Alternating Finite Automata on ω-Words. Theor. Comput. Sci. 32 (1984), 321–330. https://doi.org/10.1016/0304-3975(84)90049-5

[14] David E. Muller, Ahmed Saoudi, and Paul E. Schupp. 1988. Weak Alternating Automata Give a Simple Explanation of Why Most Temporal and Dynamic Logics are Decidable in Exponential Time. In Proceedings of the Third Annual Symposium on Logic in Computer Science (LICS ’88), Edinburgh, Scotland, UK, July 5-8, 1988. IEEE Computer Society, 422–427. https://doi.org/10.1109/LICS.1988.5139

[15] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. 2002. Isabelle/HOL - A Proof Assistant for Higher-Order Logic. Lecture Notes in Computer Science, Vol. 2283. Springer. https://doi.org/10.1007/3-540-45949-9

[16] Radek Pelánek and Jan Strejček. 2005. Deeper Connections Between LTL and Alternating Automata. In Implementation and Application of Automata, 10th International Conference, CIAA 2005, Sophia Antipolis, France, June 27-29, 2005, Revised Selected Papers (Lecture Notes in Computer Science), Jacques Farré, Igor Litovsky, and Sylvain Schmitz (Eds.), Vol. 3845. Springer, 238–249. https://doi.org/10.1007/11605157_20

[17] Nir Piterman and Amir Pnueli. 2018. Temporal Logic and Fair Discrete Systems. In Handbook of Model Checking, Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem (Eds.). Springer, 27–73. https://doi.org/10.1007/978-3-319-10575-8_2

[18] Klaus Schneider. 2004. Verification of Reactive Systems - Formal Methods and Algorithms. Springer. https://doi.org/10.1007/978-3-662-10778-2

[19] Benedikt Seidl and Salomon Sickert. 2019. A Compositional and Unified Translation of LTL into ω-Automata. Archive of Formal Proofs.

[20] Salomon Sickert. 2016. Linear Temporal Logic. Archive of Formal Proofs.

[21] Salomon Sickert. 2019. A Unified Translation of Linear Temporal Logic to ω-Automata. Ph.D. Dissertation. Technical University of Munich, Germany. http://nbn-resolving.de/urn:nbn:de:bvb:91-diss-20190801-1484932-1-4

[22] Salomon Sickert. 2020. An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation. Archive of Formal Proofs.

[23] Moshe Y. Vardi. 1994. Nontraditional Applications of Automata Theory. In Theoretical Aspects of Computer Software, International Conference TACS ’94, Sendai, Japan, April 19-22, 1994, Proceedings (Lecture Notes in Computer Science), Masami Hagiya and John C. Mitchell (Eds.), Vol. 789. Springer, 575–597. https://doi.org/10.1007/3-540-57887-0_116

[24] Lenore D. Zuck. 1986. Past Temporal Logic. Ph.D. Dissertation. The Weizmann Institute of Science, Israel.
A Proofs for the Lemmas from [5, 21]
Since we had to change the notation of [5, 21], we include, for convenience, the proofs in the new notation.
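The substitutions ·[M]_Π and ·[N]_Σ manipulated throughout the following proofs can be made concrete on ultimately periodic words w = u·v^ω. The sketch below uses our own tuple encoding of formulas and our reading of the Π-substitution as it appears in the proofs (ψ₁ U ψ₂ in M becomes ψ₁ W ψ₂, a U-formula outside M becomes ff); the full definitions are in the body of the paper, not in this appendix.

```python
# Evaluate LTL on a lasso word u·v^omega and apply the Pi-substitution.
# Formulas: ('ap', a), ('tt',), ('ff',), ('not'|'X', p), ('and'|'or', p, q),
# ('U'|'W', p, q). A word is given as (prefix, loop), both lists of letter
# sets; lasso position i >= len(prefix) stands for all word positions
# congruent to it modulo the loop length.

def sat(phi, prefix, loop):
    """Set of lasso positions where phi holds."""
    n = len(prefix) + len(loop)
    word = prefix + loop
    succ = lambda i: i + 1 if i < n - 1 else len(prefix)
    op = phi[0]
    if op == 'ap':
        return {i for i in range(n) if phi[1] in word[i]}
    if op == 'tt':
        return set(range(n))
    if op == 'ff':
        return set()
    if op == 'not':
        return set(range(n)) - sat(phi[1], prefix, loop)
    if op == 'or':
        return sat(phi[1], prefix, loop) | sat(phi[2], prefix, loop)
    if op == 'and':
        return sat(phi[1], prefix, loop) & sat(phi[2], prefix, loop)
    if op == 'X':
        s = sat(phi[1], prefix, loop)
        return {i for i in range(n) if succ(i) in s}
    s1, s2 = sat(phi[1], prefix, loop), sat(phi[2], prefix, loop)
    if op == 'U':   # least fixpoint of Z = s2 ∪ (s1 ∩ XZ)
        s = set(s2)
        while True:
            grow = {i for i in range(n) if i in s1 and succ(i) in s} - s
            if not grow:
                return s
            s |= grow
    if op == 'W':   # greatest fixpoint of Z = s2 ∪ (s1 ∩ XZ)
        s = set(range(n))
        while True:
            shrink = {i for i in s if i not in s2
                      and not (i in s1 and succ(i) in s)}
            if not shrink:
                return s
            s -= shrink
    raise ValueError(op)

def subst_pi(phi, M):
    """phi[M]_Pi: U-formulas in M become W, other U-formulas become ff."""
    op = phi[0]
    if op == 'U':
        if phi in M:
            return ('W', subst_pi(phi[1], M), subst_pi(phi[2], M))
        return ('ff',)
    if op in ('ap', 'tt', 'ff'):
        return phi
    return (op,) + tuple(subst_pi(arg, M) for arg in phi[1:])
```

For φ = a U b, w = {a}·{b}^ω and M = {φ} one can observe Lemma 11.1 concretely: w satisfies both φ and φ[M]_Π = a W b. On w = a^ω the weakened formula a W b holds while φ does not, which is exactly why part 2 of the lemma needs the assumption M ⊆ GF^φ_w.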
Lemma 11 ([5, 21]). Let w be a word, and let M ⊆ μ(φ) be a set of formulas.
1. If F^φ_w ⊆ M and w ⊨ φ, then w ⊨ φ[M]_Π.
2. If M ⊆ GF^φ_w and w ⊨ φ[M]_Π, then w ⊨ φ.
3. If F^φ_w ⊆ M ⊆ GF^φ_w, then w ⊨ φ if and only if w ⊨ φ[M]_Π.

Proof. All parts are proved by a straightforward structural induction on φ. Here we only present two cases of the induction for (1) and (2); (3) then follows from (1) and (2).

(1) Assume F^φ_w ⊆ M. Then F^φ_{w_i} ⊆ M for all i ≥ 0. We prove the following stronger statement via structural induction on φ. We consider one representative of the “interesting” cases and one of the “straightforward” cases:

∀i. ( (w_i ⊨ φ) ⟹ (w_i ⊨ φ[M]_Π) )

Case φ = ψ₁ U ψ₂. Let i ≥ 0 with w_i ⊨ ψ₁ U ψ₂. Then ψ₁ U ψ₂ ∈ F^φ_{w_i} and so φ ∈ M. We prove w_i ⊨ (ψ₁ U ψ₂)[M]_Π:

   w_i ⊨ ψ₁ U ψ₂
⟹ w_i ⊨ ψ₁ W ψ₂
⟺ ∀j. w_{i+j} ⊨ ψ₁ ∨ ∃k ≤ j. w_{i+k} ⊨ ψ₂
⟹ ∀j. w_{i+j} ⊨ ψ₁[M]_Π ∨ ∃k ≤ j. w_{i+k} ⊨ ψ₂[M]_Π   (I.H.)
⟹ w_i ⊨ (ψ₁[M]_Π) W (ψ₂[M]_Π)
⟺ w_i ⊨ (ψ₁ U ψ₂)[M]_Π

Case φ = ψ₁ ∨ ψ₂. Let i ≥ 0 with w_i ⊨ ψ₁ ∨ ψ₂:

   w_i ⊨ ψ₁ ∨ ψ₂
⟺ w_i ⊨ ψ₁ ∨ w_i ⊨ ψ₂
⟹ w_i ⊨ ψ₁[M]_Π ∨ w_i ⊨ ψ₂[M]_Π   (I.H.)
⟺ w_i ⊨ (ψ₁ ∨ ψ₂)[M]_Π

(2) Assume M ⊆ GF^φ_w. Then M ⊆ GF^φ_{w_i} for all i ≥ 0. We prove the following stronger statement via structural induction on φ:

∀i. ( (w_i ⊨ φ[M]_Π) ⟹ (w_i ⊨ φ) )

Case φ = ψ₁ U ψ₂. If φ ∉ M, then by definition φ[M]_Π = ff. So w_i ⊭ φ[M]_Π for all i and thus the implication (w_i ⊨ φ[M]_Π) ⟹ (w_i ⊨ φ) holds for every i ≥ 0. Assume now φ ∈ M. Since M ⊆ GF^φ_w we have w_i ⊨ GFφ and so in particular w_i ⊨ Fψ₂. To prove the implication assume w_i ⊨ (ψ₁ U ψ₂)[M]_Π for an arbitrary fixed i. We show w_i ⊨ ψ₁ U ψ₂:

   w_i ⊨ (ψ₁ U ψ₂)[M]_Π
⟺ w_i ⊨ (ψ₁[M]_Π) W (ψ₂[M]_Π)
⟺ ∀j. w_{i+j} ⊨ ψ₁[M]_Π ∨ ∃k ≤ j. w_{i+k} ⊨ ψ₂[M]_Π
⟹ ∀j. w_{i+j} ⊨ ψ₁ ∨ ∃k ≤ j. w_{i+k} ⊨ ψ₂   (I.H.)
⟺ w_i ⊨ ψ₁ W ψ₂
⟺ w_i ⊨ ψ₁ U ψ₂   (since w_i ⊨ Fψ₂)

Case φ = ψ₁ ∨ ψ₂. Let i ≥ 0 with w_i ⊨ (ψ₁ ∨ ψ₂)[M]_Π. We have:

   w_i ⊨ (ψ₁ ∨ ψ₂)[M]_Π
⟺ w_i ⊨ ψ₁[M]_Π ∨ w_i ⊨ ψ₂[M]_Π
⟹ w_i ⊨ ψ₁ ∨ w_i ⊨ ψ₂   (I.H.)
⟺ w_i ⊨ ψ₁ ∨ ψ₂   □

Lemma 13 ([5, 21]). Let w be a word, and let N ⊆ ν(φ) be a set of formulas.
1. If FG^φ_w ⊆ N and w ⊨ φ, then w ⊨ φ[N]_Σ.
2. If N ⊆ G^φ_w and w ⊨ φ[N]_Σ, then w ⊨ φ.
3. If FG^φ_w ⊆ N ⊆ G^φ_w, then w ⊨ φ if and only if w ⊨ φ[N]_Σ.

Proof.
All parts are proved by a straightforward structural induction on φ. Here we only present two cases of the induction for (1) and (2); (3) then follows from (1) and (2).

(1) Assume FG^φ_w ⊆ N. Then FG^φ_{w_i} ⊆ N for all i. We prove the following stronger statement via structural induction on φ:

∀i. ( (w_i ⊨ φ) ⟹ (w_i ⊨ φ[N]_Σ) )

Case φ = ψ₁ W ψ₂. Let i ≥ 0 with w_i ⊨ φ. If φ ∈ N then φ[N]_Σ = tt and so w_i ⊨ φ[N]_Σ trivially holds. Assume now φ ∉ N. Since FG^φ_{w_i} ⊆ N we have w_i ⊭ FGφ and so in particular w_i ⊭ Gψ₁. We prove w_i ⊨ (ψ₁ W ψ₂)[N]_Σ:

   w_i ⊨ ψ₁ W ψ₂
⟺ w_i ⊨ ψ₁ U ψ₂   (since w_i ⊭ Gψ₁)
⟺ ∃j. w_{i+j} ⊨ ψ₂ ∧ ∀k < j. w_{i+k} ⊨ ψ₁
⟹ ∃j. w_{i+j} ⊨ ψ₂[N]_Σ ∧ ∀k < j. w_{i+k} ⊨ ψ₁[N]_Σ   (I.H.)
⟺ w_i ⊨ (ψ₁[N]_Σ) U (ψ₂[N]_Σ)
⟺ w_i ⊨ (ψ₁ W ψ₂)[N]_Σ

Case φ = ψ₁ ∨ ψ₂. Let i ≥ 0 with w_i ⊨ ψ₁ ∨ ψ₂. We have:

   w_i ⊨ ψ₁ ∨ ψ₂
⟺ w_i ⊨ ψ₁ ∨ w_i ⊨ ψ₂
⟹ w_i ⊨ ψ₁[N]_Σ ∨ w_i ⊨ ψ₂[N]_Σ   (I.H.)
⟺ w_i ⊨ (ψ₁ ∨ ψ₂)[N]_Σ

(2) Assume N ⊆ G^φ_w. Then N ⊆ G^φ_{w_i} for all i. We prove the following stronger statement via structural induction on φ:

∀i. ( (w_i ⊨ φ[N]_Σ) ⟹ (w_i ⊨ φ) )

Case φ = ψ₁ W ψ₂. If φ ∈ N, then since N ⊆ G^φ_w we have w ⊨ Gφ and so w_i ⊨ φ. Assume now that φ ∉ N and w_i ⊨ (ψ₁ W ψ₂)[N]_Σ for an arbitrary fixed i. We prove w_i ⊨ ψ₁ W ψ₂:

   w_i ⊨ (ψ₁ W ψ₂)[N]_Σ
⟺ w_i ⊨ (ψ₁[N]_Σ) U (ψ₂[N]_Σ)
⟺ ∃j. w_{i+j} ⊨ ψ₂[N]_Σ ∧ ∀k < j. w_{i+k} ⊨ ψ₁[N]_Σ
⟹ ∃j. w_{i+j} ⊨ ψ₂ ∧ ∀k < j. w_{i+k} ⊨ ψ₁   (I.H.)
⟺ w_i ⊨ ψ₁ U ψ₂
⟹ w_i ⊨ ψ₁ W ψ₂

Case φ = ψ₁ ∨ ψ₂. We derive in a straightforward manner for an arbitrary and fixed i:

   w_i ⊨ (ψ₁ ∨ ψ₂)[N]_Σ
⟺ w_i ⊨ ψ₁[N]_Σ ∨ w_i ⊨ ψ₂[N]_Σ
⟹ w_i ⊨ ψ₁ ∨ w_i ⊨ ψ₂   (I.H.)
⟺ w_i ⊨ ψ₁ ∨ ψ₂   □

Lemma 14 ([5, 21]). Let w be a word, and let M ⊆ μ(φ) and N ⊆ ν(φ). Then define:

Φ(M, N) ≔ ⋀_{ψ ∈ M} GF(ψ[N]_Σ) ∧ ⋀_{ψ ∈ N} FG(ψ[M]_Π)

We have:
1. If M = GF^φ_w and N = FG^φ_w, then w ⊨ Φ(M, N).
2. If w ⊨ Φ(M, N), then M ⊆ GF^φ_w and N ⊆ FG^φ_w.

Proof. Let us first focus on part (1) and then move to part (2).

(1) Let ψ ∈ GF^φ_w. We have w ⊨ GFψ, and so w_i ⊨ ψ for infinitely many i ≥ 0. Since FG^φ_{w_i} = FG^φ_w for every i ≥ 0, Lemma 13.1 can be applied to w_i, FG^φ_{w_i}, and ψ. This yields w_i ⊨ ψ[FG^φ_w]_Σ for infinitely many i ≥ 0, so w ⊨ GF(ψ[FG^φ_w]_Σ).

Let ψ ∈ FG^φ_w. Since w ⊨ FGψ, there is an index j such that w_{j+k} ⊨ ψ for every k ≥ 0. The index j can be chosen so that it also satisfies GF^φ_w = F^φ_{w_{j+k}} = GF^φ_{w_{j+k}} for every k ≥ 0. So Lemma 11.1 can be applied to F^φ_{w_{j+k}}, w_{j+k}, and ψ. This yields w_{j+k} ⊨ ψ[GF^φ_w]_Π for every k ≥ 0, so w ⊨ FG(ψ[GF^φ_w]_Π).

(2) Let M ⊆ μ(φ) and N ⊆ ν(φ). Observe that M ∩ N = ∅. Let n ≔ |M ∪ N|. Let ψ₁, ..., ψ_n be an enumeration of M ∪ N compatible with the subformula order, i.e., if ψ_i is a subformula of ψ_j, then i ≤ j. Let (M₀, N₀), (M₁, N₁), ..., (M_n, N_n) be the unique sequence of pairs satisfying:
• (M₀, N₀) = (∅, ∅) and (M_n, N_n) = (M, N).
• For every 0 < i ≤ n, if ψ_i ∈ M then M_i \ M_{i−1} = {ψ_i} and N_i = N_{i−1}, and if ψ_i ∈ N, then M_i = M_{i−1} and N_i \ N_{i−1} = {ψ_i}.

We prove M_i ⊆ GF^φ_w and N_i ⊆ FG^φ_w for every 0 ≤ i ≤ n by induction on i. For i = 0 we have M₀ = ∅ = N₀. For i > 0 we distinguish two cases:

• ψ_i ∈ N, i.e., M_i = M_{i−1} and N_i \ N_{i−1} = {ψ_i}. By induction hypothesis and M_i = M_{i−1} we have M_i ⊆ GF^φ_w and N_{i−1} ⊆ FG^φ_w. We prove ψ_i ∈ FG^φ_w, i.e., w ⊨ FGψ_i, in three steps.
– Claim 1: ψ_i[M]_Π = ψ_i[M_i]_Π. By the definition of ·[·]_Π, ψ_i[M]_Π is completely determined by the μ-subformulas of ψ_i that belong to M. By the definition of the sequence (M₀, N₀), ..., (M_n, N_n), a μ-subformula of ψ_i belongs to M if and only if it belongs to M_i, and we are done.
– Claim 2: M_i ⊆ GF^φ_{w_k} for every k ≥ 0. Follows immediately from M_i ⊆ GF^φ_w.
– Proof of w ⊨ FGψ_i. By the assumption of (2) we have w ⊨ FG(ψ_i[M]_Π), and so, by Claim 1, w ⊨ FG(ψ_i[M_i]_Π). So there exists an index j such that w_{j+k} ⊨ ψ_i[M_i]_Π for every k ≥ 0. By Claim 2 we further have M_i ⊆ GF^φ_{w_{j+k}} for every j, k ≥ 0. So we can apply Lemma 11.2 to M_i, w_{j+k}, and ψ_i, which yields w_{j+k} ⊨ ψ_i for every k ≥ 0. So w ⊨ FGψ_i.

• ψ_i ∈ M, i.e., M_i \ M_{i−1} = {ψ_i} and N_i = N_{i−1}. By induction hypothesis we have in this case M_{i−1} ⊆ GF^φ_w and N_i ⊆ FG^φ_w. We prove ψ_i ∈ GF^φ_w, i.e., w ⊨ GFψ_i, in three steps.
– Claim 1: ψ_i[N]_Σ = ψ_i[N_i]_Σ. The claim is proved as in the previous case.
– Claim 2: There is a j ≥ 0 such that N_i ⊆ G^φ_{w_k} for every k ≥ j. Follows immediately from N_i ⊆ FG^φ_w.
– Proof of w ⊨ GFψ_i. By the assumption of (2) we have w ⊨ GF(ψ_i[N]_Σ). Let j be the index of Claim 2. By Claim 1 we have w ⊨ GF(ψ_i[N_i]_Σ), and so there exist infinitely many k ≥ j such that w_k ⊨ ψ_i[N_i]_Σ. By Claim 2 we further have N_i ⊆ G^φ_{w_k}. So we can apply Lemma 13.2 to N_i, w_k, and ψ_i, which yields w_k ⊨ ψ_i for infinitely many k ≥ j. So w ⊨ GFψ_i. □

B Omitted Proofs
Lemma 33. Let φ be a formula. For every M ⊆ μ(φ) and N ⊆ ν(φ), there exists an A1W[2] with O(|sf(φ)|) states that recognises L(φ_{M,N}).

Proof. By Lemma 28, some A1W[2] with O(|sf(φ_{M,N})|) states recognises L(φ_{M,N}). So it suffices to show that |sf(φ_{M,N})| ∈ O(|sf(φ)|). This follows from the following claims:

1. |⋃{ sf(ψ[M]_Π) : ψ ∈ sf(φ) }| ≤ |sf(φ)|
2. |⋃{ sf(ψ[N]_Σ) : ψ ∈ sf(φ) }| ≤ |sf(φ)|
3. |sf(φ[M]_Σ)| ≤ 3|sf(φ)|

Let ψ₁, ..., ψ_n be an enumeration of sf(φ) compatible with the subformula order, i.e., if ψ_i is a subformula of ψ_j, then i ≤ j. Let X₀ = ∅, and X_i = X_{i−1} ∪ {ψ_i} for every 1 ≤ i ≤ n. To prove (1)-(3) we show that for every 0 ≤ i ≤ n:

(1’) |⋃{ sf(ψ[M]_Π) : ψ ∈ X_i }| ≤ i
(2’) |⋃{ sf(ψ[N]_Σ) : ψ ∈ X_i }| ≤ i
(3’) |⋃{ sf(ψ[M]_Σ) ∪ sf(ψ[M]_Π) : ψ ∈ X_i }| ≤ 3i

Since X_n = sf(φ), (1) and (2) follow immediately from (1’) and (2’), while (3) follows from (3’) and the inclusion

sf(φ[M]_Σ) ⊆ ⋃{ sf(ψ[M]_Σ) ∪ sf(ψ[M]_Π) : ψ ∈ sf(φ) }

which follows easily from the definitions. We only prove (1’) and (3’), since (2’) is analogous to (1’). For i = 0 both statements hold trivially. For i > 0 we show:

|⋃{ sf(ψ[M]_Π) : ψ ∈ X_i }| ≤ |⋃{ sf(ψ[M]_Π) : ψ ∈ X_{i−1} }| + 1   (∗)

|⋃{ sf(ψ[M]_Σ) ∪ sf(ψ[M]_Π) : ψ ∈ X_i }| ≤ |⋃{ sf(ψ[M]_Σ) ∪ sf(ψ[M]_Π) : ψ ∈ X_{i−1} }| + 3   (∗∗)

We prove (∗) and (∗∗) by a case distinction on ψ_i. We only show one case as an example, since all other cases are either straightforward or analogous.

Case ψ_i = ψ′_i W ψ′′_i. Observe that the subformula ordering ensures sf(ψ′_i) ⊆ X_{i−1} and sf(ψ′′_i) ⊆ X_{i−1}. Thus the only new proper subformulas we derive are the ones that are directly derived from ψ′_i W ψ′′_i. Inserting the definitions for sf, ·[·]_Π, and ·[·]_Σ we obtain the following two set inclusions from which the bound on the cardinality follows:

sf(ψ_i[M]_Π) ⊆ ⋃{ sf(ψ[M]_Π) : ψ ∈ X_{i−1} } ∪ { (ψ′_i[M]_Π) W (ψ′′_i[M]_Π) }

sf(ψ_i[M]_Σ) ⊆ ⋃{ sf(ψ[M]_Σ) ∪ sf(ψ[M]_Π) : ψ ∈ X_{i−1} } ∪ { (ψ′_i[M]_Σ) U (ψ′′_i[M]_Σ ∨ G(ψ′_i[M]_Π)), G(ψ′_i[M]_Π) }  □

Proposition 37.
AWW[2] = ω-regular, AWW[2,A] = DBW, AWW[2,R] = DCW, AWW[1] = DWW, AWW[1,A] = safety, AWW[1,R] = co-safety, A1W[1,R] = Σ₁, A1W[1,A] = Π₁, A1W[1] = ∆₁, A1W[2,R] = Σ₂, A1W[2,A] = Π₂, A1W[2] = ∆₂.

Proof. Let us sketch the proof.

(AWW): The ⊆-inclusion follows immediately from Lemmas 31 and 32 and Corollary 36. The ⊇-inclusion is a slight adaptation of similar proofs in [6]. In order to translate a DCW into an AWW[2,R] we duplicate the set of states into two sets of marked and unmarked states. We remove from the marked states all rejecting states, and add transitions that allow unmarked states to nondeterministically choose to move to another unmarked state, or to its marked copy. Finally, we define all unmarked states to be rejecting and all marked states to be accepting. The proof of AWW[2,A] ⊇ DBW is dual. The inclusion AWW[2] ⊇ ω-regular follows from the previous two results, because every DRW is equivalent to a Boolean combination of DBWs and DCWs, which we can express in our initial formula θ₀. The proofs for the remaining inclusions are analogous.

(A1W): The ⊇-inclusion for ∆_i is proven in Lemma 28. For a formula φ that belongs to Σ_i (Π_i) we also rely on Lemma 28, but add a new initial state ⟨φ⟩_{Σ_i} (⟨φ⟩_{Π_i}) that is marked as rejecting (accepting) such that the automaton belongs to A1W[i,R] (A1W[i,A]). For the ⊆-inclusion, let A = ⟨Σ, Q, θ₀, δ, α⟩ be a very weak alternating automaton with Σ = 2^Ap. We use the translation from A1W to LTL presented in [9, Thm. 6], with minimal modifications, to define a formula χ_A such that L(χ_A) = L(A). Then, we show that when A belongs to one of the classes in the hierarchy, χ_A belongs to the corresponding class of formulas. For the proof of correctness of the translation we refer the reader to [9].

For the definition of χ_A, we assign to every θ ∈ B⁺(Q) an LTL formula χ(θ) such that L(χ(θ)) = L(A_θ), where A_θ denotes A with θ as initial formula, and set χ_A := χ(θ₀). Similarly, for the definition of χ(θ), we first assign a formula χ(q) to every state q, and then define χ(θ) as the result of substituting χ(q) for q in θ, for every state q. It remains to define χ(q). Using that A is very weak, we proceed inductively, i.e., we assume that χ(q′) has already been defined for all q′ such that q → q′ and q ≠ q′.

For every q ∈ Q and σ ∈ Σ, let θ_{q,σ} and θ′_{q,σ} be formulas such that δ(q, σ) ≡ (q ∧ θ_{q,σ}) ∨ θ′_{q,σ} (it is easy to see that they exist). Define

χ(q) = φ_q U φ′_q   if q ∉ α
χ(q) = φ_q W φ′_q   if q ∈ α

with:

φ_q  = ⋁_{σ ∈ Σ} ( ψ_σ ∧ X χ(θ_{q,σ}) )
φ′_q = ⋁_{σ ∈ Σ} ( ψ_σ ∧ X χ(θ′_{q,σ}) )
ψ_σ  = ⋀_{a ∈ σ} a ∧ ⋀_{a ∉ σ} ¬a

Since this translation assigns to each rejecting state a U-formula and to each accepting state a W-formula, the syntax tree of χ_A has an alternation between U and W exactly when there is an alternation between accepting and non-accepting states. This yields all the desired inclusions in Σ₁, Π₁, ..., ∆₂.
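The duplication step of the DCW → AWW[2,R] construction sketched above can be written out operationally. The encoding below is our own illustration: a DCW is given by a transition map and a set of rejecting states, and the resulting transition function maps a state and letter to a set of disjunctive alternatives; acceptance checking for the alternating automaton is omitted.

```python
# Sketch of the DCW -> AWW[2,R] duplication. A DCW run is accepting iff it
# visits rejecting states only finitely often; the construction lets an
# unmarked (rejecting) copy of the automaton guess the point after which no
# rejecting state occurs and jump into a marked (accepting) copy from which
# all rejecting states have been removed.

def dcw_to_aww(states, delta, rejecting, alphabet):
    unmarked = {('u', q) for q in states}
    marked = {('m', q) for q in states if q not in rejecting}
    trans = {}
    for q in states:
        for a in alphabet:
            q2 = delta[(q, a)]
            # Unmarked states: stay unmarked, or jump to the marked copy
            # of the successor if that copy exists.
            alts = {('u', q2)}
            if ('m', q2) in marked:
                alts.add(('m', q2))
            trans[(('u', q), a)] = alts
            # Marked states: must stay marked; a rejecting successor is a
            # dead end (empty set of alternatives).
            if ('m', q) in marked:
                trans[(('m', q), a)] = {('m', q2)} if ('m', q2) in marked else set()
    accepting = marked   # unmarked states rejecting, marked states accepting
    return unmarked | marked, trans, accepting
```

Every chain of the resulting automaton moves from the rejecting unmarked block into the accepting marked block at most once, which is the single alternation (starting with rejecting states) that places it in AWW[2,R] under the indexing used in Proposition 37.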